Three new segments of EOS R cameras coming in 2025

You probably mean the scaled-to-frame view.
In the DPR comparison tool, there are three buttons at the top right: 'full', 'comp' and 'print'. The 'comp' and 'print' views are normalised; 'full' shows one image pixel per screen pixel (1:1).
Normally you'll want to use 'comp' for overall image quality comparison. 'full' shows noise at the pixel level; it gives an impression of what you might get if you crop.
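As an aside, here is a minimal sketch (NumPy, shot noise only, made-up pixel counts) of why the normalised views tend to equalise noise while the 1:1 view shows a per-pixel difference:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_frame(side, total_photons):
    """Uniform grey frame: the same total light spread over side*side pixels,
    so smaller pixels each collect fewer photons (more shot noise per pixel)."""
    per_pixel = total_photons / (side * side)
    return rng.poisson(per_pixel, size=(side, side)).astype(float)

def downscale(img, out_side):
    """Block-average down to a common output size (a stand-in for 'comp'/'print')."""
    f = img.shape[0] // out_side
    img = img[:out_side * f, :out_side * f]
    return img.reshape(out_side, f, out_side, f).mean(axis=(1, 3))

total_light = 4e9                                   # photons over the whole (equal-sized) sensor
big_pixels = simulated_frame(1000, total_light)     # lower-resolution sensor
small_pixels = simulated_frame(2000, total_light)   # higher-resolution sensor

for name, frame in (("1 MP", big_pixels), ("4 MP", small_pixels)):
    snr_full = frame.mean() / frame.std()           # 'full' (1:1) view
    comp = downscale(frame, 500)
    snr_comp = comp.mean() / comp.std()             # normalised ('comp'/'print') view
    print(f"{name}: per-pixel SNR {snr_full:.0f}, normalised SNR {snr_comp:.0f}")
```

The per-pixel SNRs differ (the smaller pixels are noisier at 1:1), but once both frames are averaged down to the same output size the SNRs come out essentially the same.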
The R3 and R5 have almost identical noise levels despite the different pixel sizes: https://www.dpreview.com/reviews/im...4&x=-0.08065569276718167&y=0.6584857246639197


The R5II has slightly poorer performance because they sacrificed quality for speed. That's not because the pixels are smaller; it's because the sensor has a fast readout, which causes more noise in individual pixels compared to the R5.
Already did, and I'm reporting my findings here.

The easiest way to see this is to shoot Cinema RAW and view it in C-Log2 instead of the display OETF; you get to see more of the shadows and highlights.
You don't really compare still shooting performance by shooting video, and vice versa.
 
Download and compare the RAW files from DPR before drawing any conclusions based on their “comparison” tool. I have found differences of a stop or more in exposure levels between cameras in their testing.
The issue with the DPR tool is that it relies on in-camera metering that may be mapping a different (and generally unknown) level in the raw data to middle grey in the image. We don't know how many stops there are between middle grey and saturation in each particular image sample in the DPR tool.
This may cause differences in the visible noise even when the sensors have roughly the same dynamic range.

When we compare cameras of the same generation from the same manufacturer, we can assume that the middle grey mapping will be more or less consistent, so comparing the R cameras makes a bit more sense than comparing R and Z cameras.
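A tiny sketch of why the metering assumption matters: if two bodies put middle grey at different raw levels, the apparent headroom (and hence the rendered noise at a given scene level) differs even with identical sensor DR. The numbers below are purely hypothetical, not measurements of any specific camera:

```python
import math

# Hypothetical 14-bit cameras with middle grey metered to different raw levels.
saturation = 2 ** 14 - 1
for name, grey_level in (("camera A", 1600), ("camera B", 2400)):
    headroom = math.log2(saturation / grey_level)   # stops between middle grey and raw clipping
    print(f"{name}: middle grey at {grey_level} -> {headroom:.2f} stops of headroom")
```

Roughly half a stop of difference in rendering can appear purely from where the meter puts middle grey, before the sensors themselves differ at all.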
 
In the DPR comparison tool, there are three buttons at the top right: 'full', 'comp' and 'print'. The 'comp' and 'print' views are normalised; 'full' shows one image pixel per screen pixel (1:1).

I have a more elaborate methodology for this; I don't need or depend on these sites.

The R5II has slightly poorer performance because they sacrificed quality for speed. That's not because the pixels are smaller; it's because the sensor has a fast readout, which causes more noise in individual pixels compared to the R5.

No, it's mainly because the pixels are smaller, which affects many things.

The same principles apply with slower-readout cameras.

You don't really compare still shooting performance by shooting video, and vice versa.

You are misinterpreting what I wrote.

I am talking about principles in digital imaging related to pixel size, which manifest as image properties.
It doesn't matter whether you sample a single bitmap or a bunch of them, i.e. shoot photos, video, or RAW motion capture. RAW motion capture is technically not video: video is baked-in, display-referred imagery, whereas with RAW you capture the full data range and can define the output colour gamut and OETF after capture.

The extra noise in the R5C is obvious full screen on a laptop, with the R3's 6K and the R5C's 8K full-sensor output downscaled to the same frame size.
 
The issue with the DPR tool is that it relies on in-camera metering that may be mapping a different (and generally unknown) level in the raw data to middle grey in the image. We don't know how many stops there are between middle grey and saturation in each particular image sample in the DPR tool.

Yes, these are not proper DR tests and don't tell you the full story.

Mid grey is a known value for log coding, and for C-Log2 it lands at about 40%.

The R3 clips about 6 1/3 stops over mid grey, and the R5C a bit sooner. That is very basic information, as it tells you nothing about the tonal performance within that DR, which determines the actual usable DR, where the R3 is also superior, in both highlights and shadows.

Because of larger pixels.
 
No, it's mainly because the pixels are smaller, which affects many things.
Given the same generation of sensor tech (well capacity, quantum efficiency, read noise etc.) and the same sensor area, smaller pixels result in higher-quality images.

The maximum information per image is the total amount of light received during the exposure. A lot of that gets lost during capture. Having more pixels allows you to capture more spatial and colour information.
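A minimal sketch of the 'total light' argument, assuming shot noise only and ignoring read noise (hypothetical photon counts):

```python
import numpy as np

# One large pixel vs four quarter-sized pixels binned together, same total light.
rng = np.random.default_rng(1)
samples = 100_000
photons_large = 4000

large  = rng.poisson(photons_large, samples)
binned = rng.poisson(photons_large / 4, (samples, 4)).sum(axis=1)

print(f"large pixel SNR: {large.mean() / large.std():.1f}")
print(f"binned 2x2 SNR:  {binned.mean() / binned.std():.1f}")
# Both land near sqrt(4000) ~ 63: same light, same shot-noise SNR per area,
# but the four small pixels also retain finer spatial/colour sampling before binning.
```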
I am talking about principles in digital imaging related to pixel size, which manifest as image properties.
It doesn't matter whether you sample a single bitmap or a bunch of them, i.e. shoot photos, video, or RAW motion capture. RAW motion capture is technically not video: video is baked-in, display-referred imagery, whereas with RAW you capture the full data range and can define the output colour gamut and OETF after capture.
When capturing video, the sensor is in a different mode. It's 10 or 12 bits instead of 14, to start with. The readout speed is also different. Then there's an unknown amount of processing and lossy compression (again, even if it's raw video). Whatever you measure in the video files only tells you about the video performance of the camera.

Nobody draws conclusions about the still shooting mode from video files, even if they're raw. What you're describing is probably the first case of such an unconventional approach. But not only is it unconventional, it's technically flawed.

Yes, these are not proper DR tests and don't tell you the full story.
The DPR comparison tool is not a DR test, and it doesn't even claim to be one.
The R3 clips about 6 1/3 stops over mid grey, and the R5C a bit sooner. That is very basic information, as it tells you nothing about the tonal performance within that DR, which determines the actual usable DR, where the R3 is also superior, in both highlights and shadows.
How did you get 6.3 stops? Canon cameras clip about 2.5 stops above middle grey. 18% of max saturation is ~2.47 stops below saturation; Canon cameras generally follow that, and you can verify it with RawDigger.
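Where the ~2.47 figure comes from: it is just the distance of an 18% level from 100% saturation, expressed in stops.

```python
import math
print(math.log2(1 / 0.18))   # ~2.47 stops from an 18% level up to 100% saturation
```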

That's way below the 6.3 stops you're claiming; that number is very wrong for either stills or video.
 
Given the same generation of sensor tech (well capacity, quantum efficiency, read noise etc.) and the same sensor area, smaller pixels result in higher-quality images.

Belief out of touch with reality.

The maximum information per image is the total amount of light received during the exposure.
A lot of that gets lost during capture. Having more pixels allows you to capture more spatial and colour information.

You are missing some concepts, oversimplifying with what you've got, and reaching inaccurate conclusions.

When capturing video, the sensor is in a different mode. It's 10 or 12 bits instead of 14, to start with. The readout speed is also different. Then there's an unknown amount of processing and lossy compression (again, even if it's raw video). Whatever you measure in the video files only tells you about the video performance of the camera.

Overthinking cannot compensate for the absence of knowledge.

Compression and 12/14 bits have nothing to do with this. Compression affects both cameras, and 14-bit quantization will yield a superior result on both cameras, with the same principles and the same differences between the sensors manifesting at the higher bit depth.

Second attempt: RAW clips are not video.

Also, what "unknown processing" of RAW happens inside the camera, and where did you get this from?

Nobody draws conclusions about the still shooting mode from video files, even if they're raw. What you're describing is probably the first case of such an unconventional approach. But not only is it unconventional, it's technically flawed.

See above.
And read my previous post again to differentiate the topic of discussion.

How did you get 6.3 stops?

Expose for mid grey with a scene-referred, full-range coding OETF and count the stops until it clips.

Canon cameras clip about 2.5 stops above middle grey.

No, they don't.

Even a pocket camera or today's tiny-sensor phone doesn't.

That's way below the 6.3 stops you're claiming; that number is very wrong for either stills or video.

Option 2: You don't understand how this works.

To reach an understanding, you can start with transfer functions and how they affect DR figures.
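To make the transfer-function point concrete, here is a rough sketch with a made-up log curve (this is not Canon's published C-Log2 formula): the number of 'stops over mid grey' you count before clipping depends entirely on which curve you are looking through.

```python
import numpy as np

# A made-up scene-referred log OETF (NOT Canon's C-Log2): places 18% mid grey at
# code value 0.40 and allocates `headroom_stops` of highlights before reaching code 1.0.
def toy_log_oetf(linear, headroom_stops=6.33, mid_code=0.40):
    stops_over_mid = np.log2(np.maximum(linear, 1e-6) / 0.18)
    return np.clip(mid_code + (1 - mid_code) * stops_over_mid / headroom_stops, 0, 1)

# A simple display-referred curve (gamma 1/2.2) that clips at scene white = 1.0.
def toy_display_oetf(linear):
    return np.clip(linear, 0, 1) ** (1 / 2.2)

for stops in range(0, 7):
    linear = 0.18 * 2 ** stops
    print(f"{stops} stops over mid grey: "
          f"log code {toy_log_oetf(linear):.2f}, display code {toy_display_oetf(linear):.2f}")
# The toy log curve only reaches code 1.0 around 6.3 stops over mid grey, while the
# display curve is already pinned at 1.0 from ~2.5 stops up; which 'DR figure' you
# quote depends on the transfer function and on where clipping is measured (raw vs encoded).
```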
 
In terms of the motion blur, the star movement can be avoided by taking shorter exposures, stacking, or using a star tracker. In fact, the star movement is directional; it's not like random camera shake. A higher-res sensor will still give a sharper image even with the same angular star movement.
A higher-resolution sensor would require a sturdier/more stable tripod and a better-aligned/more reliable sky tracker though, right? Small movements/errors would be more visible at the pixel level.
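For a rough feel of the tolerances involved, a back-of-the-envelope sketch using the standard arcseconds-per-pixel formula (the pixel pitches and the 50mm focal length are hypothetical; 15 arcsec/s is the apparent star motion near the celestial equator):

```python
import math

SIDEREAL_RATE = 15.04   # arcsec of apparent star motion per second near the celestial equator

def pixel_scale_arcsec(pixel_pitch_um, focal_length_mm):
    """Angular size of one pixel: 206.265 * pitch (um) / focal length (mm)."""
    return 206.265 * pixel_pitch_um / focal_length_mm

# Hypothetical full-frame pixel pitches and a 50mm lens, purely for illustration.
for name, pitch_um in (("lower-res body (~6.0 um)", 6.0), ("higher-res body (~4.4 um)", 4.4)):
    scale = pixel_scale_arcsec(pitch_um, focal_length_mm=50)
    seconds_per_pixel = scale / SIDEREAL_RATE   # untracked time before a star crosses one pixel
    print(f"{name}: {scale:.1f}\"/px, star crosses a pixel in ~{seconds_per_pixel:.1f}s at 50mm")
```

The smaller pixels subtend a smaller angle, so the same tracking or tripod error spans more pixels; whether that matters in the final image depends on how much you enlarge it.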
 
I must have missed that rumour. And wouldn't it be called the R1s, according to Canon's naming conventions (such as they are)?
We don’t have that much precedent, honestly, since the two ‘s’ models we’ve had were a little different (the 1Ds having a larger sensor than the 1D, and the 5Ds being a higher-res sensor in the same format).

You could argue that the most analogous arrangement would be for the R1 to be the high-speed camera, the “R1S” to be medium format, and the “R1X” to combine the best of both. But obviously that’s an unlikely situation too.

Maybe they just think X sounds cooler and that it would help with the influencers.

All that said, I don’t think there was much substance to the R1x rumor.
 
I must have missed that rumour. And wouldn't it be called the R1s, according to Canon's naming conventions (such as they are)?
The R1X (or R1S) rumor (80MP, started in July 2024, right after the R1 announcement) is a recycled rumor from the days when the R1 was supposed to be a high-MP camera. It was recycled by “reputable” sources like the Ordinary Film Maker.
So you are not missing anything serious ;).
 
Belief out of touch with reality.


The reality is that in modern sensors, the pixel size doesn't have a strong correlation with the dynamic range. You will struggle to find any correlation at all.
Compression and 12/14 bits have nothing to do with this. Compression affects both cameras, and 14-bit quantization will yield a superior result on both cameras, with the same principles and the same differences between the sensors manifesting at the higher bit depth.
Before comparing the cameras, you need to do individual measurements of the dynamic range. And you have to measure it on still raw files, not video.

Second attempt: RAW clips are not video.
Most importantly, they're not still raws.
Also, what "unknown processing" of RAW happens inside the camera, and where did you get this from?
Do you know the details of the video processing chain in Canon cameras? You think there's no processing just because it's called 'raw'?
Expose for mid grey with a scene-referred, full-range coding OETF and count the stops until it clips.
You measure clipping in raw, before applying a gamma.

Shoot a uniform white wall so that the camera meters it at 0 EV. The histogram will look like a single spike in the middle. Then increase the exposure in 1/3-stop steps and check in RawDigger when the green channel starts clipping.

You'll be surprised that there are far fewer than 6.3 stops between mid grey and clipping.
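If you'd rather script that check than eyeball it in RawDigger, here is a rough sketch using the rawpy library (whether CR3 files open depends on your LibRaw build; the file name and the clipping margin are assumptions):

```python
import numpy as np
import rawpy

# Count how many green-channel photosites are at (or near) the camera's white level.
with rawpy.imread("wall_plus_3_stops.CR3") as raw:         # hypothetical file name
    bayer = raw.raw_image_visible.astype(np.int64)
    colors = raw.raw_colors_visible                         # 0=R, 1=G, 2=B, 3=G2 for Bayer sensors
    white = raw.white_level
    green = bayer[(colors == 1) | (colors == 3)]
    clipped = np.count_nonzero(green >= white - 16)         # small margin below nominal white level
    print(f"white level {white}, green sites clipped: {100 * clipped / green.size:.2f}%")
```

Run it on each bracketed exposure and note at which step the clipped fraction jumps from near zero.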
 


The reality is that in modern sensors, the pixel size doesn't have a strong correlation with the dynamic range. You will struggle to find any correlation at all.
*Image dynamic range* vs *per-pixel dynamic range* is, I think, what you are talking about.

Per-pixel dynamic range suffers with pixel density, but as long as the sensors are of the same generation and employ the same technology (i.e. backside illumination), you aren't going to see much difference in the overall sensor dynamic range response.

Also, more pixels give you a greater ability to do computational recovery on the data, as a captured image is simply data.

Side note.. this thread is still active? what the heck lol
 
*Image dynamic range* vs *per-pixel dynamic range* is, I think, what you are talking about.
Yes, in Bill Claff's terms, it's 'photographic dynamic range' vs 'engineering dynamic range'.

Per-pixel dynamic range suffers with pixel density, but as long as the sensors are of the same generation and employ the same technology (i.e. backside illumination), you aren't going to see much difference in the overall sensor dynamic range response.
Technically, there are caveats because of read noise and dual gain that may give a theoretical advantage or disadvantage to bigger or smaller pixels at different ISOs, but as we can see from the photonstophotos measurements, in practice the photographic dynamic range doesn't depend much on the pixel size.

Individual pixels, if they're smaller, will always be receiving less light and therefore will be noisier.
Also, more pixels give you a greater ability to do computational recovery on the data, as a captured image is simply data.
More pixels = more data for denoising and sharpening, less aliasing and better spatial colour resolution, smoother lens corrections, more room for enlargement or cropping, etc.
Side note.. this thread is still active? what the heck lol
Notifications on replies encourage participation; it's a good feature from the forum's perspective.
 