Is the Canon EOS R7 the next camera to be announced? [CR2]

Why?

The DOF of a shot on a full-frame camera at 160mm f/16 will be the same as the DOF of a shot on a crop camera at 100mm f/10.
I really don't want to get into this, but since you asked: you are correct in your example. But the reason you are correct has nothing to do with the sensor.

It has to do with the distance to subject and the focal length of the lens. Using your example: if you shoot the same subject at 160mm f/16 standing in the exact same spot using a crop sensor camera and a full frame camera, and then examine the same areas in both the crop and full frame images, they will show identical depth of field. In fact, you could take the crop image and overlay it onto the full frame image in Photoshop, and they would be identical in depth of field in the cropped portion of the full frame image.

Depth of field is not sensor dependent. It only appears to be so, because you must change either your shooting position or, in the example you are using, the focal length of the lens, in order to get the same cropping in the final image.

Depth of field resides solely within the lens and distance to subject. Since depth of field is sensor independent, I don't like to use the term equivalence for depth of field, because people think it has something to do with the sensor, which it does not.
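As a numeric illustration of the example above (a sketch, not anyone's exact figures: thin-lens DOF approximation, assumed 0.030 mm full-frame circle of confusion divided by the 1.6x crop factor for APS-C, subject assumed at 10 m), 160mm f/16 on full frame and 100mm f/10 on crop do come out to the same depth of field:

```python
# Thin-lens depth-of-field approximation (subject distance well inside
# the hyperfocal distance): total DOF ~ 2*N*c*s^2 / f^2, everything in mm.
# Assumed values, for illustration only: CoC c = 0.030 mm on full frame,
# divided by the 1.6x crop factor for APS-C; subject at 10 m.

def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field, in mm."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

s = 10_000                                     # 10 m, in mm
ff_dof   = dof_mm(160, 16, s, 0.030)           # full frame: 160mm f/16
crop_dof = dof_mm(100, 10, s, 0.030 / 1.6)     # 1.6x crop: 100mm f/10

print(round(ff_dof), round(crop_dof))          # 3750 3750 -- identical
```

Both the 1.6x focal-length ratio and the 1.6x f-number ratio cancel against the 1.6x CoC ratio, which is why the two shots match.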
Why is it that this diffraction calculator shows a 32 MP 1.6x APS-C camera being diffraction limited at f/8 but not at f/5.6 (with only full stop increments available), and this other diffraction calculator with 1/3-stop increments shows a 32 MP 1.6x APS-C camera being diffraction limited at f/6.3 but not at f/5.6?

Based on those, it's much closer to 5 than to 15.

Because, as I stated above, it's common for people to both calculate and interpret diffraction limits incorrectly.

By the way, I've confirmed my own math with carefully-collected actual images.
Somewhere buried deep in this thread is a person chiming in hoping the purported R7 will be housed in an R6 body. As someone who came into the 5 series bodies from a 7D, which was a natural progression and ergonomic fit, I would agree that this would make sense for many of us and make a great 2 body approach to handling and settings.

If I only had a nickel for every time I heard someone say they shot with both a 5D 2 and a 7D.

My R7 hopes and dreams would be:

24-32 MP backlit sensor
2 SD card slots
R6 dials and button layout
faster fps than R6
same sealing
same or improved AF, improved with trickle down from R3 for servo cases

$200-300 more than R6 in USD...am I delusional?
You are not delusional. I think this is a highly likely scenario.

And I very much appreciate your efforts to get this thread back on track. The off-topic discussions simply repeat points that have been endlessly discussed and are of no more interest and insight today than they were in the literally hundreds of other threads that have been hijacked by such discussions.
I really don't want to get into this, but since you asked: you are correct in your example. But the reason you are correct has nothing to do with the sensor.
It has to do with the sensor size.
It has to do with the distance to subject and the focal length of the lens. Using your example: if you shoot the same subject at 160mm f/16 standing in the exact same spot using a crop sensor camera and a full frame camera, and then examine the same areas in both the crop and full frame images, they will show identical depth of field.
Because cropping in post and cropping by reducing sensor size are the same thing.
In fact, you could take the crop image and overlay it onto the full frame image in Photoshop and they would be identical in depth of field in the cropped portion of the full frame image.
Right, but irrelevant.
Depth of field is not sensor dependent.
No, it's actually enlargement dependent.
It only appears to be so,
Which is all that matters.
because you must change either your shooting position or, in the example you are using, the focal length of the lens, in order to get the same cropping in the final image.

Depth of field resides solely within the lens and distance to subject.
False.
Since depth of field is sensor independent, I don't like to use the term equivalence for depth of field, because people think it has something to do with the sensor, which it does not.
It does - changing sensor size changes enlargement ratio and that changes DOF.

DOF is *entirely perceptual*. There is no "actual DOF" as the plane of focus is a plane - infinitely thin.

Viewing the same print on the same wall from a closer distance changes DOF. When you crop, either in post or by using a smaller sensor and viewing the final image at the same final size, that changes DOF.
The point is that CoC varies directly with sensor size, so a smaller sensor will have a shallower DoF, all else being equal (not equivalent, equal). With a smaller sensor, when you use a wider focal length or increase subject distance to match framing, that results in a deeper DoF (i.e., the 'equivalent f-stop' is narrower).
That's one (correct) way to look at it, yes.
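That point can be put into numbers with a short sketch (assumed values only: thin-lens DOF approximation, 0.030 mm full-frame CoC scaled by the 1.6x crop factor). Same lens, same aperture, same distance; only the CoC differs, and the smaller sensor comes out shallower:

```python
# Thin-lens DOF approximation: total DOF ~ 2*N*c*s^2 / f^2, all in mm.
# Same lens (160mm), same aperture (f/16), same subject distance (10 m);
# only the assumed CoC changes: 0.030 mm on full frame, 0.030/1.6 mm on
# a 1.6x crop (both images viewed at the same final size).

def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

ff_dof   = dof_mm(160, 16, 10_000, 0.030)        # ~3750 mm
crop_dof = dof_mm(160, 16, 10_000, 0.030 / 1.6)  # ~2344 mm, shallower
print(round(ff_dof), round(crop_dof))
```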
Perhaps you don't mean it this way, but this response is insulting and condescending. It is not that my poor little pea brain is unable to understand the concepts. It is that I understand it and simply disagree with the way some use equivalence interchangeably to describe both exposure equivalence and apparent depth of field. I have seen the endless confusion and downright baloney that gets posted on here when people use equivalence to refer to depth of field so I decline to use it in that way.
Step back and chill out. You take offence too easily. There is no "perhaps" about it. You are correct that my post was NOT meant in the very odd way that you interpreted it. I told neuro that I agreed with his reply to you. Separately, my comments about how best to explain the concept to others were a generalisation, not aimed at you or anyone else in particular.

I'm an expert in some fields but not in others, and the same applies to most people, including you. I just think that sometimes the experts talk in terms that people can't easily digest. It's nothing to do with intelligence, so your "little pea brain" sarcasm is entirely misplaced. I can follow a discussion about microbiology, palaeontology, entomology or astrophysics with ease, but I'd have difficulty following a discussion about cryptocurrency!
Somewhere buried deep in this thread is a person chiming in hoping the purported R7 will be housed in an R6 body. As someone who came into the 5 series bodies from a 7D, which was a natural progression and ergonomic fit, I would agree that this would make sense for many of us and make a great 2 body approach to handling and settings.

If I only had a nickel for every time I heard someone say they shot with both a 5D 2 and a 7D.

My R7 hopes and dreams would be:

24-32 MP backlit sensor
2 SD card slots
R6 dials and button layout
faster fps than R6
same sealing
same or improved AF, improved with trickle down from R3 for servo cases

$200-300 more than R6 in USD...am I delusional?
Good luck, our desired specifications are not dissimilar, and the price seems about right. Canon have launched some amazing gear recently (as have Nikon and Sony), but unfortunately I think we'll both have to settle for a compromise. Or it could be something well below what we hope for - perhaps a cheap entry model to compete with recent Sony and Nikon APS-C products...
I really don't want to get into this...
Good call, you shouldn't have.

Depth of field resides solely within the lens and distance to subject. Since depth of field is sensor independent, I don't like to use the term equivalence for depth of field, because people think it has something to do with the sensor, which it does not.
This is false. Thank you for confirming my suspicion that you prefer to avoid discussing the concept of equivalence and would rather restrict such discussion only to equivalent focal length/FoV because you really don't understand the concept of equivalence.
Of course no one gets exactly what they want. So I'd be interested in knowing if a more likely specification list would still entice you or not.

I am guessing that Canon is not going to reinvent the wheel, but instead base an R7 on existing specifications in other cameras. That's why I asked about a crop sensor R6 with a sensor resolution similar to the 90D and priced in the neighborhood of the R6 give or take a few hundred dollars.

From your list I think dual CFExpress slots and redesigning the flip screen are definitely non-starters. I doubt that the autofocus will see any improvement over the R6/R5 although it is possible that they will add next generation refinements, as Canon has frequently done that with subsequent camera bodies. I would expect that the EVF would be closer to the R6 than the R5.
Yes I think my twin CFE-B slots and tilt 'n flip screen are non-starters, my wish list was just that - a wish list. I could live without either of those features.

What I'm looking for is a camera with roughly equal performance to an R5, but with the extra APS-C "reach". 24MP might be enough to entice me, as an APS-C crop from my R5 is only 17MP, but obviously the better the specification, the more likely I am to buy, given that I'm willing to go as high as £3000. If the camera is too far removed from my "ideal", I'd skip it, as the R5 and 5D Mk IV between them cover most of my needs and wants already. I'd want a "better" sensor than the one in the 90D, as I shoot a lot in the ISO 800-3200 range, and also in conditions where I need enough DR to be able to pull detail out of deep shadows and washed out highlights.
It is that I understand it and simply disagree with the way some use equivalence interchangeably to describe both exposure equivalence and apparent depth of field. I have seen the endless confusion and downright baloney that gets posted on here when people use equivalence to refer to depth of field so I decline to use it in that way.
Sorry, but no – you do not understand it, as your posts have made abundantly clear.

The resource that PBD often cites is here: http://www.josephjamesphotography.com/equivalence/ which is an accurate and fairly readable treatment of the subject.

If you prefer a more scholarly treatment from a peer-reviewed publication, try this: https://doi.org/10.1117/1.OE.57.11.110801

Sadly, I doubt you'll bother reading and trying to understand either of them and instead will just continue with what is the real baloney here, complaining about rabbit holes and pedantry in discussions of a concept that exceeds your comprehension, and trying to limit the discussion to the only part of the concept you grasp: equivalence is only about focal length...mic drop.
It's wrong.

I'm not sure what they did wrong, but it's usually one of two methods of getting the wrong answer.

1) Use MTF50 as the cutoff
2) Assume a monochrome sensor with no AA filter

Both are wrong. I've seen people apply *both*, which seems to be closest to the case here.

Using MTF0 as the cutoff (some will use MTF5 for extinction, some MTF9=Airy disk) and assuming a Bayer sensor, it's about f/17 for the 3.2 micron pixels on the 90D. If you prefer MTF9 it's more like f/14.

Obviously, in the case you mentioned above where you are light-limited, resolution will decrease because of a simple lack of photons leading to either low MTF or reduction from noise filtering. But in good light, f/5.2 is nowhere close to correct.
According to the Digital Picture site, the DLA for the 5DS and 5DSR is f/6.7. According to your calculations, the DLA for the 5DSR measured by MTF50 would be f/22. It doesn't have an AA filter, but at worst, using your factor of 1.5x for the Nyquist, it would be f/15, and comparing the measured resolutions of the 5DSR vs the 5DS gives f/18. I am an experimental scientist, so I look for evidence. A few years back I plotted the values of MTF50 on the 5DSR with increasing f-number, using data from the ePhotozine and Photozone (OpticalLimits) sites (I looked at the sharpest wide-aperture lenses). You can see below that the MTF50 drops off linearly above about f/5, which is what you would expect for a DLA of about f/6.7. A DLA of f/18 would drop off at a much higher value. What has my simple science got wrong?


[Attached charts: 5DSR MTF50 vs f-number, from Photozone (OpticalLimits) and ePhotozine data]
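For reference, the extinction-style figures quoted earlier in this exchange (about f/17 at MTF0 and f/14 at MTF9 for 3.2 µm pixels) can be reproduced with a short sketch under the stated assumptions: λ ≈ 0.55 µm, a 1.5x Bayer factor on Nyquist, and the diffraction MTF of a circular aperture. This illustrates the calculation, not which threshold is the "right" one:

```python
import math

# Diffraction MTF of a circular aperture, evaluated at the Bayer-adjusted
# Nyquist frequency of a 3.2 um pixel (90D-class). All lengths in mm.
# Assumptions: lambda = 0.55 um, Bayer resolution factor 1.5.

LAMBDA = 0.00055  # mm

def diffraction_mtf(nu, n):
    """Diffraction MTF at spatial frequency nu (cycles/mm), f-number n."""
    s = nu * LAMBDA * n           # nu / cutoff, where cutoff = 1/(lambda*N)
    if s >= 1:
        return 0.0
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

def aperture_for_mtf(pitch_mm, target, bayer=1.5):
    """f-number at which MTF at the Bayer-adjusted Nyquist falls to target."""
    nyq = 1 / (2 * pitch_mm) / bayer
    lo, hi = 1.0, 64.0            # MTF falls monotonically as N rises
    for _ in range(60):           # simple bisection
        mid = (lo + hi) / 2
        if diffraction_mtf(nyq, mid) > target:
            lo = mid
        else:
            hi = mid
    return lo

pitch = 0.0032                                  # 3.2 um pixels
print(round(aperture_for_mtf(pitch, 0.0), 1))   # MTF0 cutoff: ~f/17
print(round(aperture_for_mtf(pitch, 0.09), 1))  # MTF9: ~f/14
```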
Sorry, but no – you do not understand it, as your posts have made abundantly clear.

The resource that PBD often cites is here: http://www.josephjamesphotography.com/equivalence/ which is an accurate and fairly readable treatment of the subject.

If you prefer a more scholarly treatment from a peer-reviewed publication, try this: https://doi.org/10.1117/1.OE.57.11.110801

Sadly, I doubt you'll bother reading and trying to understand either of them and instead will just continue with what is the real baloney here, complaining about rabbit holes and pedantry in discussions of a concept that exceeds your comprehension, and trying to limit the discussion to the only part of the concept you grasp: equivalence is only about focal length...mic drop.
The only thing that frustrates me about these forums is the way every thread on APS-C invariably turns into this same issue. It always involves the same protagonists. It can be fun sometimes, but it really is a turn-off for many.
Because, as I stated above, it's common for people to both calculate and interpret diffraction limits incorrectly.

By the way, I've confirmed my own math with carefully-collected actual images.
Have you? The available information linked below and in my previous post indicates that you're wrong.

32 MP APS-C sensor, f/5.6 vs f/8. DLA ≈5.2. Why is the f/8 image softer?

20 MP FF sensor, f/5.6 vs f/8. DLA ≈10.6. Why is the f/8 image just as sharp?

Same 20 MP FF sensor, f/11 vs f/16. DLA ≈10.6. Why is the f/16 image softer?

Whether or not it's common for people to both calculate and interpret diffraction limits incorrectly, it seems like that's exactly what you are doing.
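The three DLA values cited above are consistent with a simple pixel-pitch rule. A sketch, under the assumption (mine, for illustration) that DLA is the aperture at which the Airy-disk radius 1.22λN equals the pixel pitch, with λ ≈ 0.51 µm, reproduces them to within rounding:

```python
# Sketch of the DLA values quoted above. Assumption: DLA is the aperture
# at which the Airy-disk radius (1.22 * lambda * N) equals the pixel
# pitch, with lambda ~ 0.51 um. Pitch = sensor width / horizontal pixels.

def pixel_pitch_um(sensor_width_mm, pixels_wide):
    return sensor_width_mm * 1000 / pixels_wide

def dla(pitch_um, lam_um=0.51):
    return pitch_um / (1.22 * lam_um)

apsc_32 = pixel_pitch_um(22.3, 6960)   # 32 MP APS-C (90D-class)
ff_20   = pixel_pitch_um(36.0, 5472)   # 20 MP full frame
apsc_20 = pixel_pitch_um(22.3, 5472)   # 20 MP APS-C (7D II class)

for pitch in (apsc_32, ff_20, apsc_20):
    print(round(dla(pitch), 1))        # close to the quoted 5.2, 10.6, 6.6
```

Note the rule only marks where diffraction starts to matter at the pixel level; it says nothing about how quickly sharpness falls off past it.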
The only thing that frustrates me about these forums is the way every thread on APS-C invariably turns into this same issue. It always involves the same protagonists. It can be fun sometimes, but it really is a turn-off for many.
It's unfortunate that you're frustrated when people try to correct false information and to improve others' understanding of the technical principles that underlie photography, here on this gear- and tech-oriented forum.
Regarding DLA, honestly, the real 'problem' with misunderstandings of DLA is that some people incorrectly assume that stopping down past the DLA, whatever that value is, has dire consequences. If you need more DoF, you need more DoF.

It's good to understand that depending on your sensor and the selected aperture, you may be giving up some sharpness to get more DoF. Knowing what the DLA is can help you make better choices on the spectrum of compromise between DoF and sharpness. But most people don't view an actual picture at 1:1 so pixel-level sharpness is usually less important than getting what you want within the DoF.

Conversely, it's good to understand that if you're shooting a painting on a wall you should probably not stop down to f/22.
Yes. In my own testing, extinction resolution started dropping linearly between f/11 and f/16, on a 7D Mark II.

I think the testing above shows MTF dropping all the way, which it does, but that's not the same as extinction resolution.
The values that TDP reports for DLA are, as Bryan puts it, "The aperture where diffraction begins to visibly negatively affect image sharpness at the pixel level." The ISO 12233 chart images align with that definition. Here's the CIC calculator for your 7DII – note the last line: "Overall Range of Onset."

[Screenshot: CIC diffraction calculator results for the 7D Mark II]

TDP reports a DLA of f/6.6 for the 7DII. That's the 'squishiness' you mention.

We are all talking about the point at which diffraction begins to affect an image. You seem to be talking about the point at which diffraction maximally affects an image, i.e. beyond which a narrower aperture has no additional softening effect.

IMO, the former is far more relevant to photography. The latter is not meaningless, but there's a reason we commonly say 'stop down the lens' (from wide open) and not 'stop up the lens' (from fully closed).
Good luck, our desired specifications are not dissimilar, and the price seems about right. Canon have launched some amazing gear recently (as have Nikon and Sony), but unfortunately I think we'll both have to settle for a compromise. Or it could be something well below what we hope for - perhaps a cheap entry model to compete with recent Sony and Nikon APS-C products...
Every camera body model is a compromise, whether you're speaking from a personal viewpoint or for all shooters in general. If you stay in the craft long enough, you accept that fact and don't find yourself in 70-page arguments on DPR. Peace!
The only thing that frustrates me about these forums is the way every thread on APS-C invariably turns into this same issue. It always involves the same protagonists. It can be fun sometimes, but it really is a turn-off for many.
Point taken. I will try very hard not to take part in these discussions in the future. Every point has been made hundreds of times before and no one ever convinces anyone else.

This would be a much better place if everyone would model their behavior after @sanj. A talented and successful professional who is always respectful and humble. I can't guarantee I will succeed, but I will resolve to try.
The bottom line is that claiming an EVF is better because it shows you what the camera sees is false.

Personally, I didn’t like the image quality displayed in the EVF on my EOS R, especially coming from the excellent OVF of the 1D X. The image quality displayed in the EVF on the R3 is definitely better, but I still prefer that of a good OVF.

However, I like the convenience of the EVF: being able to see a lot of relevant information overlaid, or none, at the touch of a button; being able to literally see in the dark to compose a shot that will be taken at high ISO.
I need to get my hands on an R3 for a bit just to see what the newest EVF is like. I normally get to play with lots of stuff thanks to friends, but nobody I know IRL has grabbed that one yet. I've said before that EVFs are better in low light until they aren't, until they fail completely on things that you can still see with an OVF and a fast lens. An example would be composing the Milky Way through a 24mm f/1.4. But perhaps the R3 is to the point that you can do this? Something I saw in a review video made me think the A7S III was there.

I'm definitely in the OVF camp, and probably will be until EVFs provide the same IQ as current high-DPI/Retina screens. That phrasing implies resolution, though the bigger problems seem to be color, contrast, DR, and potential lag. (Again, I have not played with an R3, so I cannot say how much it has improved on any of those, other than I imagine lag is gone even under continuous shooting?) That said, if one of the R5's features crossed the line, for me personally, from "nice" to "need" or even "really, really want", the EVF wouldn't stop me. And as you point out, EVFs can be very convenient. Having shutter, aperture, and ISO on three dials and setting exposure manually via EVF/Live View is both fast and intuitive.