PhotonsToPhotos has measured the Canon EOS R5 Mark II, and the results are good

I own the EF 135mm f/2.0 and bought the RF 135mm f/1.8. In my personal opinion, it is worth it. I mainly do flash-based portraits and video, and the RF version is worth it. I like the 85mm f/1.2 for its shorter working distance, but to my eyes the look is better on the 135mm. Obviously, it is a very personal preference. If you like the 135mm focal length, go for it.
I am occasionally tempted by both the RF 135/1.8 and the RF 85/1.2. I previously owned the EF 135/2 and the EF 85/1.2L II, which I replaced with the EF 85/1.4 IS. The 'problem' is that I ended up using the 70-200/2.8 for most of those use cases. Of the two, I'd use the 135/1.8 more than the 85/1.2, based on my historical data.

I've resisted the temptation so far, and I'm hoping for a 70-150/2 to pair with the excellent 28-70/2.
 
What??? Not investing in the newest highly depreciating camera body but instead choosing to invest in long-term, non-depreciating glass???? Sounds like crazy talk!!! :ROFLMAO:

On a serious note... I would always choose classy glass over pricey cameras any day. The great lenses generally stay with you; the cameras come and go.
My MP-E 65mm was first used on my 20D, and I still use that lens!
 
Computational photography is an oxymoron if you think about it.

What I expect from a camera upgrade, in terms of image quality, is for the camera to capture more information about the scene - capture the actual information, not compute it. Top-end mobile phones already do computational photography cheaper and faster; I don't need a dedicated camera doing a lot of computational photography for me.

The R5 II seems to have slightly lower DR than the R5, although it has fewer hot pixels popping up in long exposures. Also, its baked-in noise reduction is milder than that of the R5, so the actual physical DR difference before noise reduction may be less than 0.4 stops.

But overall, the R5 II's sensor is basically slightly worse than the R5's - so those who don't care about the AI features, AF improvements, the pre-burst feature and video improvements may skip the R5 II and get an R5, or stay with the R5 they already have.

In other words, the R5 looks like the better value-for-money camera if your primary focus is landscapes, architecture, etc.
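To put numbers on what a 0.4-stop gap means: engineering dynamic range is just the ratio of full-well capacity to read noise, expressed in stops. A quick sketch - the sensor figures below are made up purely to illustrate the scale, they are not R5/R5 II measurements:

```python
import math

def dr_stops(full_well_e: float, read_noise_e: float) -> float:
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor figures, chosen only to show the scale of a ~0.4-stop gap:
dr_a = dr_stops(full_well_e=50_000, read_noise_e=3.0)  # ~14.0 stops
dr_b = dr_stops(full_well_e=50_000, read_noise_e=4.0)  # ~13.6 stops
print(f"{dr_a:.1f} vs {dr_b:.1f} stops, difference {dr_a - dr_b:.2f}")
```

A third of an electron more read noise is already enough to move the headline figure by this much, which is why in-camera noise reduction muddies the comparison.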
If you've been following the discussions over the past... ever... the trend has definitely been toward image-taking advances coming from software, not hardware. At least not in the same sense as the old hardware wars - the megapixel wars (still alive on the forums), BSI vs FSI (functionally moot with the current generations of both). For a while now, the physics of sensors and lenses has been such that gains there are slight and mostly incremental, as we see here. Stacked sensors come with some trade-offs, also seen here (an often inconsequential drop in DR for a massive speed increase). Global shutter has significant trade-offs as well (noise and DR given up for sensor speed). This is what allows us to look back at something like a 5D4 and say 'not much has improved'. There have been improvements, just not where you are looking for them.

There are real, physical limits to capturing data on a sensor - well depth defines a hardware limit on dynamic range at the top end, and sensor noise at the low end, but the number of photons arriving during an exposure, and how much data is really captured there, is a limit as well. So while we love to push our shadows 6 stops in post and analyze the sensor's performance to death, the reality is the gains to be had on the hardware side might not be as much of a difference maker as you think. Perhaps a future breakthrough will change that, but it hasn't happened yet for any of the major manufacturers.
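The photon-count limit mentioned above is easy to illustrate: photon arrival is Poisson-distributed, so the best possible (shot-noise-limited) SNR is sqrt(N), and no sensor hardware can beat it. A toy calculation, with arbitrary photon counts:

```python
import math

def shot_noise_snr_db(photons: float) -> float:
    """Shot-noise-limited SNR in dB: signal N, noise sqrt(N), so SNR = sqrt(N)."""
    return 20 * math.log10(math.sqrt(photons))

# One extra stop of light buys ~3 dB of SNR, regardless of the sensor:
for n in (1_000, 2_000, 4_000):
    print(n, "photons ->", round(shot_noise_snr_db(n), 1), "dB")
```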

The breakthroughs have been in software solutions - what people broadly (perhaps too broadly) refer to as computational photography. If you went back to the 5D4 release and said that two cameras later the body would be smart enough to know what the ball is, predict where it's going, and focus on it for you, or recognise and track a specific face... you might have received some curious looks. The software available for noise reduction in post, making ISO 52k and beyond usable, was also not there at that time. Remember pushing things at ISO 3200?
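As a toy illustration of the 'predict where it's going' idea - this is the crudest possible constant-velocity extrapolation, nothing like Canon's trained subject-recognition models:

```python
# Toy constant-velocity predictor: extrapolate the subject's next position
# from its last two observed positions.
def predict_next(positions: list[tuple[float, float]]) -> tuple[float, float]:
    """Extrapolate the next (x, y) from the last two observed positions."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

track = [(100.0, 50.0), (110.0, 48.0), (120.0, 46.0)]
print(predict_next(track))  # -> (130.0, 44.0)
```

Real tracking AF adds recognition (what is a ball, what is an eye) on top of prediction, which is where the trained models come in.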

I find the 'fear' of 'computational photography' an amusing discourse online. I myself am not a big editor of my photos. I do adjust exposure, color, contrast, etc. in DxO (a Lightroom alternative), for sure. But many of those online who decry in-camera noise reduction or other computational effects are also the ones who replace skies, manipulate facial features, add light sources that didn't exist in the original photo, clone various features in or out, and otherwise 'compute' an image that mostly did not exist in the first place.

There is currently no photography without computation (unless you still shoot film, and don't scan any of it). And the software side is where the innovation has been greatest in the last couple of generations of cameras.

Brian
 
What is worrying is that you are right: the major upgrades will be on the computational side, but in order to get them we will have to buy new hardware that is only superficially upgraded.
We've always had to buy new hardware to get new features - partly because the companies need to sell stuff to stay in business, and partly because the software features are hardware-supported (DIGIC advances, DIGIC accelerators, etc.). And eventually the pendulum will swing back and more major hardware advances will come.
 
Haha... as an R5 Mk I owner, it did make me feel good.

As someone who has upgraded with every generation of a "5 series" since the 5DIII, it also takes away some of the rationale for upgrading to this generation.

5DIII to 5DIV to R5...solid improvements with each generation:
View attachment 219062

Add in the R5 II...
View attachment 219063

While I was not expecting much of an improvement, if any... and I agree with others' assessment that the R5-to-R5 II difference doesn't mean much... an increase in DR is certainly not a reason to upgrade from the R5 to the R5 II. If anything... it might be a reason to stay with the R5.

Edit... I do want to say, increased DR is really not needed. I rarely had an issue with the 5DIII, almost never with the 5DIV, and not at all with the R5. So an improvement was not needed, IMO. The R5 and R5 II are industry-leading for FF sensors at this level.
I had an issue with the 5DIII: shadows.
Otherwise, a great and reliable camera with, at least for me, enough MP. Yet, after buying the 5D IV, I stopped using it, lifting the shadows (more DR) being the reason.
And eye AF is THE reason for me to buy the R5 II.
 
If it's about less noise and sharper pictures, I don't see why I should reject AI. Call it manipulation if you want, but editing is nothing else.
Where to draw the line is a very personal decision, but photography is always a kind of interpretation of OUR reality.
Besides, even in film times, objective photography didn't exist. Velvia users know what I mean...
 
I agree, and one can argue that ‘digital in-camera’ ND filters and ND grad filters, like Olympus (sorry, OM System) has, are not very different from putting a ‘real’ filter in front of the lens.
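For what it's worth, my understanding is that those in-camera 'live ND' modes work by averaging a burst of short frames, which approximates one longer exposure with the motion blur to match. A rough sketch of that idea, not OM System's actual implementation:

```python
import random

def live_nd(frames: list[list[float]]) -> list[float]:
    """Average a stack of frames pixel by pixel."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy short "exposures" of the same 4-pixel scene:
scene = [0.2, 0.5, 0.8, 0.4]
frames = [[p + random.gauss(0, 0.02) for p in scene] for _ in range(3)]
print([round(p, 2) for p in live_nd(frames)])  # close to the scene, with less noise
```

Averaging N frames also reduces random noise by roughly sqrt(N), which is a bonus over a physical ND filter.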
 
I have a feeling I won't need a new camera for a long time. Sad feelings.
The 5D Mark IV vs R5 Mark II chart is especially frustrating.

Yes, I know... fast and faster, for video. But for the high-dynamic-range / low-noise crowd, or for high MPx, the race has been dead for years now.
The 5D Mark IV is still great.
The fact that there is basically no performance hit for using the electronic shutter anymore, plus access to RF glass, IBIS, 8-years-newer mirrorless focusing logic, and the multi-function shoe, are all potential reasons to upgrade from a 5D Mark IV... low-ISO DR, battery life, and the cost of upgrading lenses and peripherals are basically the only remaining reasons not to.
 
While that may be a valid opinion for some, I don't think it is the view of most. I don't think it should be baked in, but having these features as options would greatly improve the usability of these cameras. Why would you not want to give people more options?
Personally, I would prefer a way to interface a Google Pixel or iPhone to my camera and use the phone's brain with my camera's guts. Arsenal 2 Pro is kind of the right idea, but ideally I wouldn't need a third gadget to sit in between - or if I did, it would be much more seamless and require no additional software.
 
Shame about the baked-in noise reduction. Honestly, I had hoped the R5 Mk II would show improvement over the R5. All those patents, but nothing meaningful in any of the new cameras.
The Canon sensors show just as much detail and sharpness as the Z8, A1, etc.
What is the issue here? All camera makers process (“cook”) the raw data…Otherwise you would not see an image.
 
...winner winner chicken dinner.

At least from where I sit, your post here wins the internet today.
 
What is worrying is that you are right: the major upgrades will be on the computational side, but in order to get them we will have to buy new hardware that is only superficially upgraded.


That's not completely true. When these things start to get both computationally heavy and need to be lightning fast, there will be substantial hardware upgrades needed - but they will come in the form of silicon.
 
The Canon sensors show just as much detail and sharpness as the Z8, A1, etc.
What is the issue here? All camera makers process (“cook”) the raw data…Otherwise you would not see an image.
I am not arguing for or against "cooking", or putting Canon down versus the rest; I am just trying to be factually correct.

It is not necessary for the camera maker to process the RAW data for you to see an image. RAW files are called RAW precisely because they are unprocessed or minimally processed. You see an image either because the camera processes the RAW data into a JPEG (or other file), or because you download the unprocessed or minimally processed data and process it yourself on your computer. I do my own "cooking" at the RAW conversion phase.
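To make "cooking at the RAW conversion phase" concrete, here is a deliberately minimal sketch of steps any converter applies: black-level subtraction, normalization, white-balance gain, and display gamma. The default values are placeholders, not any camera's real calibration, and real pipelines (demosaicing, color matrices, tone curves) are far more involved:

```python
def develop(raw: list[int], black: int = 512, white: int = 16383,
            wb_gain: float = 1.0, gamma: float = 2.2) -> list[float]:
    """Minimal raw 'development': black level, normalize, WB gain, gamma."""
    out = []
    for v in raw:
        lin = max(v - black, 0) / (white - black)  # subtract black level, scale to 0..1
        lin = min(lin * wb_gain, 1.0)              # apply a white-balance gain, clip
        out.append(lin ** (1.0 / gamma))           # gamma-encode for display
    return out

print([round(p, 3) for p in develop([512, 4096, 16383])])
```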

The Z8 and A1 sensors measure sharper than Canon's because they lack AA filters, whereas Canon's have them. I actually prefer the Canon R5 sensor: I have never had moiré problems with the R5, whereas I have with Nikon, and the Canon is sharp enough (the R5 sensor is superb, as I am sure the R5 II's is).
https://www.optyczne.pl/496.4-Test_aparatu-Nikon_Z8_Rozdzielczość.html
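The AA-filter trade-off is just sampling theory: detail finer than the sensor's Nyquist frequency (half the sampling rate) folds back as a false low-frequency pattern, which is what moiré is. A toy illustration with arbitrary numbers:

```python
def aliased_freq(f: float, fs: float) -> float:
    """Apparent frequency after sampling a sinusoid of frequency f at rate fs."""
    return abs(f - fs * round(f / fs))

fs = 100.0                       # hypothetical sampling rate (line pairs per mm, say)
print(aliased_freq(30.0, fs))    # below Nyquist (50): reproduced faithfully as 30.0
print(aliased_freq(90.0, fs))    # above Nyquist: folds down to a false 10.0
```

An AA filter blurs away detail above Nyquist before sampling, trading a little measured sharpness for freedom from these false patterns.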

 
I agree, and one can argue that ‘digital in-camera’ ND filters and ND grad filters, like Olympus (sorry, OM System) has, are not very different from putting a ‘real’ filter in front of the lens.
And even holy Daguerre manipulated reality.
Same with optical corrections: though I admit I subjectively prefer optical corrections, I know the result is all that matters, and the same result can be achieved with intelligent software. A few years from now, this kind of discussion will be obsolete.
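For example, most software distortion correction is built on a simple radial polynomial model. The coefficients below are invented for illustration; real lens profiles are calibrated per lens:

```python
# Sketch of the radial distortion model behind most software lens correction
# (k1, k2 here are made-up coefficients, not a real lens profile).
def undistort(x: float, y: float, k1: float = -0.05, k2: float = 0.01):
    """Rescale a normalized image coordinate by a radial polynomial."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return (x * scale, y * scale)

print(undistort(0.0, 0.0))   # the image center is unchanged: (0.0, 0.0)
print(undistort(0.5, 0.5))   # corners are pulled in or pushed out by the polynomial
```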
 
That's not completely true. When these things start to get both computationally heavy and need to be lightning fast, there will be substantial hardware upgrades needed - but they will come in the form of silicon.
Some processor upgrades will no doubt be necessary, but we can bet our bottom dollar that some improvements that could be implemented will be deliberately held back for the sake of planned obsolescence.
 
Meta analyses suggest that >85% of data published in scientific journals cannot be reproduced. That certainly tracks with my experience in trying to replicate data from academic labs. Some of the problem is innocent, e.g. publishing data on a cell line unaware that your stock got contaminated and outgrown by another cell line (which is why we regularly test all our lines for identity/purity), or behavioral data on animals (I have personal experience with neurobehavioral studies where animals were ordered from the same vendor at the same time, shipped to labs in different parts of the country then housed and tested under conditions as identical as they could be made, and behavioral measures were still subtly different). But some is intentional, because it's publish or perish.
Surely your words here don't intend to suggest that >85% of data published in scientific journals cannot be reproduced.

Or do they mean exactly that?

C'mon man!
 

Take a look at real pictures and show me the difference. I have both the R5 and the A1, and I am not seeing differences in detail.
And yes, the Canon has an AA filter, but since it often focuses better than the A1 (especially in video), and its colors often look better, it very often looks superior to the A1 in real photos (not charts).
I also had the Z8, and the AF was a mess.

Back on topic: all cameras have "cooked" RAW files, and indeed the MTF reflects the presence of the AA filter - thus yielding less moiré.
 
Surely your words here don't intend to suggest that >85% of data published in scientific journals cannot be reproduced.

Or do they mean exactly that?

C'mon man!
I am afraid it's pretty well true in biomedical research; @neuroanatomist is right, reproducibility there is very poor. However, in your subject, chemistry, and in the physical sciences, engineering, etc., which are based on quantitative measurements, reproducibility is very high. It's a bit like discussions of the basics of photography: reliable measurements vs hand-waving and believing one's eyes are better.
 