Are two cameras going to replace the Canon EOS R5? [CR]

Since some are talking about RAW vs cRAW, it makes me wonder: has there ever been an accurate comparison of dynamic range graphs for RAW vs cRAW on Canon sensors (the R5, etc.)?

That is the only way we can accurately know what we are losing in dynamic range if we want to save 50% of storage space with cRAW. Then we can decide for ourselves which choice is best for each of us. I have been using cRAW up to now to save memory, but I bracket the exposure levels of my shots to avoid blown-out highlights, and I often find that I prefer the appreciably underexposed images for that reason. That means I have to pull much more detail out of the shadows. But since cRAW reportedly discards detail in the shadows, I now wonder if I should switch back to full RAW in the future.

What might be a better option for me is to shoot in RAW only and keep the original RAW files for the images I might (now or in the future) want to edit in post to print and frame. For the remaining images that are good enough to keep (but not frame), I could use a post tool (if one ever exists) to convert the RAW to cRAW, so that my storage goes back down to 50%. With that in mind, maybe I would be better off using an existing tool to convert those remaining images to JPEG or HEIF instead, getting their size far below 50%. In that case, I would avoid using cRAW completely.

This makes me wish that Canon offered a losslessly compressed RAW storage option so that I could always use it first. I'd appreciate any thoughts on this.
 
[...] What might be a better option for me is to shoot in RAW only and keep the original RAW files for the images I might (now or in the future) want to edit in post to print and frame. For the remaining images that are good enough to keep (but not frame), I could use a post tool to convert the RAW to cRAW, so that my storage goes back down to 50%. [...]
Such a tool doesn't currently exist and likely won't exist as a first-party option. What I did for a bunch of old CR2 files: have Lightroom convert them to lossy compressed DNGs.

Doing that on a CR3 from an M50:
Bash:
Mac-Studio:temp koen$ du -hs ../../2019/09/*IMG_5458.{CR3,dng}
 29M    ../../2019/09/20190927 1155 Canon EOS M50 - Canon EF-M 22mm f2 STM - IMG_5458.CR3
7,1M    ../../2019/09/IMG_5458.dng

Better compression than cRAW, but it has been demosaiced and then lossily (is that a word?) compressed. Good enough for me, for those pictures.
 
Such a tool doesn't currently exist and likely won't exist as a first-party option. What I did for a bunch of old CR2 files: have Lightroom convert them to lossy compressed DNGs. [...]
Thanks, koenkooi.

Unfortunately, I don't use Lightroom, but I appreciate the info. Maybe there are other options out there besides that one.

As a (now retired) game programmer, I got heavily into image compression algorithms, both lossless and lossy. For fast lossless compression, one choice is to count how many most-significant "0" bits each pixel element has (along with its neighbors) and output codes that reduce the data stored for them. You can't throw away any of the least-significant bits, or else it's no longer lossless. It's a very simple algorithm that reduces storage somewhat, especially in darker areas, of course.
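To illustrate, here's a toy version of that idea (my own sketch, not anything Canon actually ships): each 14-bit sample is stored as a short length prefix followed by only its significant bits, so dark pixels cost fewer bits.
Python:
def encoded_bits(sample, prefix_bits=4):
    """Bits needed to store one 14-bit sample as a length prefix plus
    only its significant bits. Dark pixels have many leading zeros,
    so they cost less; nothing is discarded, so it stays lossless."""
    return prefix_bits + sample.bit_length()

shadows = [3, 12, 40, 7]           # deep-shadow samples: few significant bits
highlights = [16383, 15000, 9001]  # near-clipping samples: almost full width
total = sum(encoded_bits(s) for s in shadows + highlights)
print(total, "bits vs", 14 * 7, "bits uncompressed")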

Another simple but better choice is to convert left-right pairs of pixel values into their average and difference. The difference values are often very small, especially in areas like the sky. You can repeat that on up-down pairs of the averages (or of the differences), producing an average and difference of those. Now, for example, the average of the averages covers 4 pixels at 1/4 the original size. Repeat this a few more times and you're mostly outputting difference data, which is usually small. This works particularly well on bright, smooth, or out-of-focus areas (frequently found in most images). And as the pixel count increases (e.g. with a 200MP sensor), the compression becomes more efficient percentage-wise, since the smaller pixels are usually closer in value to each other.
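A minimal sketch of that pair trick (my illustration, using the reversible integer form often called the S-transform, since a plain integer average would silently drop a bit):
Python:
def forward_pair(a, b):
    """Split a pixel pair into (average, difference), losslessly.
    Integer averaging floors away a bit, but the exact difference
    carries it, so the pair is perfectly reconstructible."""
    diff = a - b
    avg = b + (diff >> 1)              # == floor((a + b) / 2)
    return avg, diff

def inverse_pair(avg, diff):
    b = avg - (diff >> 1)
    return b + diff, b                 # recovers (a, b) exactly

row = [512, 514, 513, 511, 9000, 9004]  # a smooth region: tiny differences
pairs = [forward_pair(row[i], row[i + 1]) for i in range(0, len(row), 2)]
print(pairs)  # the small diff values are what compresses well
assert all(inverse_pair(*p) == (row[2 * i], row[2 * i + 1])
           for i, p in enumerate(pairs))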

You can also convert the RGB values into luminance and two color components, and then compress those values with similar routines to the above.
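One well-known reversible version of that color split is the YCoCg-R transform (shown here purely as an example; I'm not suggesting this is what Canon does):
Python:
def rgb_to_ycocg_r(r, g, b):
    """Reversible RGB -> luma (Y) plus two chroma channels (Co, Cg).
    Chroma values cluster near zero in natural images, so they
    compress well with the routines above."""
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    return tmp + (cg >> 1), co, cg

def ycocg_r_to_rgb(y, co, cg):
    tmp = y - (cg >> 1)
    g = cg + tmp
    b = tmp - (co >> 1)
    return b + co, g, b

rgb = (100, 150, 50)
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(*rgb)) == rgb  # exact round trip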

These types of simple concepts lend themselves to parallel processing, which can handle large areas of image data at the same time (which is why graphics processors are so fast: they specialize in exactly that). There are lots of other routines that compress better but take more time due to their complexity; when you have a massive image buffer that must be compressed and stored quickly, they can easily take too long to empty the buffers as needed.
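As a hint of why this parallelizes so well, here's the same average/difference step applied to every pixel pair of an image at once with NumPy (a stand-in for what a GPU would do across thousands of cores):
Python:
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 16384, size=(2048, 2048), dtype=np.int32)

left, right = img[:, 0::2], img[:, 1::2]   # all left-right pairs at once
diff = left - right
avg = right + (diff >> 1)                  # one pass, no per-pixel loop
print(avg.shape, diff.shape)               # each is half the original width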

I don't know if Canon is doing any of this in their RAW output, but you'd think they would be advertising it proudly if they were. I am really mystified why they don't offer an appreciably compressed lossless RAW, particularly when their main competitors do.
 
If I were to design cRAW, I would be dropping the lowest bits in highlights. They contain nothing but shot noise anyway, which can as well be added back in post.
I don't know if it's true that highlights have only noise in their lowest bits. But I would agree that the lowest bits are not visually noticeable in the highlights, as opposed to the dark areas, where they are closer to being the most significant bits of the image. So your idea is perfectly appropriate for lossy (e.g. cRAW) compression.

Of course, lossless compression is supposed to be defined by zero loss, not "unnoticeable loss." Once you are willing to just drop bits, you are entering lossy compression, and all sorts of new routines become appropriate.
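A toy sketch of that kind of signal-dependent bit-dropping (the threshold and bit count here are made up, and this is definitely not the actual cRAW scheme, which Canon hasn't published):
Python:
def drop_highlight_bits(sample, threshold=4096, bits=3):
    """Lossy step: zero the lowest bits, but only for bright samples.
    Shadows keep full precision; in highlights the dropped bits sit
    well below the shot-noise floor, so the loss is hard to see."""
    if sample < threshold:
        return sample
    return sample & ~((1 << bits) - 1)

print(drop_highlight_bits(100))    # 100: shadows are untouched
print(drop_highlight_bits(12345))  # 12344: low bits cleared in highlights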
 
If I were to design cRAW, I would be dropping the lowest bits in highlights. They contain nothing but shot noise anyway, which can as well be added back in post.
I don't understand that. Shot noise is observed when there are few photons, and the highlights have the highest number of photons in the whole image. What have I missed?
 
I don't understand that. Shot noise is observed when there are few photons, and the highlights have the highest number of photons in the whole image. What have I missed?
Absolute vs. relative.

Shot noise magnitude is about the square root of the signal magnitude, so approximately half of the significant bits of the signal are shot noise. When there are few photons, there are fewer signal bits to start with, so the shot noise is easier to observe, even though its absolute magnitude (in photon counts) is lower.
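A quick numeric illustration of that, assuming a simple Poisson model where noise ≈ √signal:
Python:
import math

# Noise magnitude ~ sqrt(photon count), so the noise occupies roughly
# the bottom half of a sample's significant bits at any brightness.
for photons in (64, 4096, 262144):
    noise = math.isqrt(photons)
    print(f"{photons:>7} photons (~{int(math.log2(photons)):2d} bits): "
          f"shot noise ~ {noise:4d} (~{int(math.log2(noise))} bits)")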
 
I would take an R5sr-style camera: FF, 80MP, 16-bit sensor tuned heavily for photographic dynamic range and color fidelity. Get rid of the flip screen (no need for one with no video modes) and replace it with a much higher-resolution, bigger screen.
 
I would take an R5sr-style camera: FF, 80MP, 16-bit sensor tuned heavily for photographic dynamic range and color fidelity. Get rid of the flip screen (no need for one with no video modes) and replace it with a much higher-resolution, bigger screen.
I never shoot video, but I use the flip screen often. Why would anyone think the flip screen is just for video?
 
I don’t understand this one either. A flip screen is really useful when shooting in portrait orientation at anything less than head height.
There are situations where the current mechanism forces it to be off-axis, which is, for me, very distracting when trying to frame up a shot. But overall the current flip screen is very much a net positive for me.
 
I don’t understand this one either. A flip screen is really useful when shooting in portrait orientation at anything less than head height.
Actually super useful for photojournalism: holding the camera above the heads in a crowd to get a shot of the person at the center. I've also used it when people want a large group photo; holding the camera up high helps you get the faces in the back.
 
Since the topic of the R5 flip-out panel has come up, one thing I wish they allowed is this:
When the flip-out panel is open and showing a display, if I choose to look into the EVF (with the panel still out), I'd like the camera to sense that my eye is there and turn on the EVF. That would require a sensor to detect whether an eye is present (which I assume isn't on the R5), so it wouldn't be possible with a firmware update. But if a new R5 version comes out with that eye-detection ability, I hope they offer this behavior by default. (I wouldn't normally use the EVF with the flip screen out, but sometimes I have it flipped out with the R5 on a tripod and briefly want the EVF's much better view of what's going on.)
 
There is a sensor for that on the R5. The little ‘window’ just below the viewfinder.
OK, thanks, neuroanatomist. I guess it had to have one so that the EVF didn't waste energy by staying on the whole time the camera was on with the flippy screen stowed shut.

But is there any way to tell the R5 you want it to turn on the EVF when your eye is there, even when the flippy screen is flipped out?
 