PhotonsToPhotos does the Canon EOS R5 Mark II and it’s good

It's actually an impressive feat to make a sensor read out roughly 10x faster and not lose image quality.

16.5 ms to 6.3 ms is about 2.6x, and going from 12-bit to 14-bit is another 4x, so roughly 10x combined.
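As a rough sketch of the arithmetic behind that ~10x figure (assuming, as the post implies, that each extra ADC bit roughly doubles conversion time, so 14-bit vs 12-bit costs 4x):

```python
# Back-of-the-envelope check: readout-time improvement multiplied by the
# extra ADC work of 14-bit vs 12-bit conversion.
readout_speedup = 16.5 / 6.3        # 16.5 ms down to 6.3 ms: about 2.62x faster
bit_depth_factor = 2 ** (14 - 12)   # assume each extra bit doubles ramp-ADC time: 4x
effective_factor = readout_speedup * bit_depth_factor
print(round(effective_factor, 1))   # ≈ 10.5
```

The ADC scaling model is an assumption for illustration, not a Canon specification.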

Eye Control is fine if it works for you (it doesn't for everyone) and you actually need it, but it isn't something I would use. The much better AF matters for two or three particular sports, which I don't shoot; the reduced heat matters for video modes I don't use; and the e-shutter has worked fine for me for the last four years, as I use it all the time for BIF and DIF.

Coming from DSLRs, the R5 II is an incredible upgrade for you, but so is the R5. The improvement of the R5 II over the R5 ranges from significant to negligible depending on your needs. If you need it and have the cash, get the R5 II; if you don't need the new features, either stick with the R5 or take advantage of its falling price. As @neuroanatomist keeps pointing out, Canon's target for the R5 II is primarily DSLR users.
I would rather have seen the optical joystick gizmo.
 
Meta-analyses suggest that >85% of data published in scientific journals cannot be reproduced. That certainly tracks with my experience trying to replicate data from academic labs. Some of the problem is innocent, e.g. publishing data on a cell line unaware that your stock got contaminated and outgrown by another line (which is why we regularly test all our lines for identity and purity), or behavioral data on animals (I have personal experience with neurobehavioral studies where animals were ordered from the same vendor at the same time, shipped to labs in different parts of the country, then housed and tested under conditions as identical as they could be made, and the behavioral measures were still subtly different). But some of it is intentional, because it's publish or perish.
This has just come into my inbox from Nature:

Your papers are training AI models

Artificial-intelligence developers are buying access to valuable data sets that contain research papers — raising uncomfortable questions about copyright. Anything that is available to read online — whether in an open-access repository or not — is “pretty likely” to have been fed into an LLM already, says AI researcher Lucy Lu Wang. “And if a paper has already been used as training data in a model, there’s no way to remove [it].”

So those large language models for biomedical research will have an 85% poo base!
 
I am afraid it's pretty much true in biomedical research; @neuroanatomist is right, reproducibility there is very poor. However, in your subject, chemistry, and in the physical sciences and engineering, which are based on quantitative measurements, reproducibility is very high. It's a bit like discussions of the basics of photography: reliable measurements versus hand-waving and believing your eyes are better.
Even in the software field, reproducibility is surprisingly hard to do. There’s always someone getting cute and using the date or the hostname as an internal identifier, ensuring no one can get the exact same output for the same inputs.

It's like timezones: the concept is simple, the reality is a lot more stubborn.
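A minimal sketch of that failure mode, with hypothetical build functions (nothing here is from a real tool): embedding the clock or hostname makes the output depend on the environment, while pinning those values makes it a pure function of its inputs.

```python
import hashlib
import socket
import time

def build_nonreproducible(payload: bytes) -> bytes:
    # Embedding wall-clock time and the hostname means two builds of the
    # same inputs can produce different bytes.
    header = f"built {time.time()} on {socket.gethostname()}\n".encode()
    return header + payload

def build_reproducible(payload: bytes, source_epoch: int = 0) -> bytes:
    # Pin every environment-dependent value (here, a fixed epoch) and the
    # output becomes deterministic.
    header = f"built {source_epoch}\n".encode()
    return header + payload

h1 = hashlib.sha256(build_reproducible(b"same inputs")).hexdigest()
h2 = hashlib.sha256(build_reproducible(b"same inputs")).hexdigest()
assert h1 == h2  # identical inputs, identical output
```

This is the same idea behind conventions like a fixed build epoch in reproducible-build tooling.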
 
Even in the software field, reproducibility is surprisingly hard to do. There’s always someone getting cute and using the date or the hostname as an internal identifier, ensuring no one can get the exact same output for the same inputs.

It's like timezones: the concept is simple, the reality is a lot more stubborn.
Scientific publishing is trying to get around the problem of reproducibility by insisting on complete transparency and deposition of data. I am afraid, though, that some scientists are just as bad as some posters here at never admitting mistakes and doubling down on them.
 
Shame about the baked-in noise reduction. Honestly, I had hoped that the R5 Mk2 would show improvement over the R5. All those patents, but nothing meaningful in any of the new cameras.
New user here, but long time lurker.

I think the assumption that noise reduction is present should be revisited. From what I gather from the old threads on DPReview, all we can definitively say is that there is some sort of signal processing going on, but whether that is noise reduction or something else is harder to say. Others who are smarter than me have noted that they cannot find evidence of actual noise reduction in the photos, e.g. the discussion here regarding the R6 / R6II.

The determination of whether a sensor should be labeled as having noise reduction seems to be based on the energy spectra charts. I did ask Bill about the Sony cameras, since they show a similar (if slightly less pronounced) pattern to the R5II but are not labeled as having NR. He did note that there isn't really a hard cutoff. And although not posted there, personally I think the R5II ES spectra look even less pronounced than those of the various Sony cameras, yet the former is labeled as having NR and the latter are not (R5II ES vs Sony A1, for example).

In any case, it certainly does seem there is signal processing going on (and many of the R cameras prior to the R5II have a much more pronounced spectral pattern to that effect). But I don't think we can conclusively say it's NR, nor can we really say what impact it's having on photos, dynamic range, etc.
 
Scientific publishing is trying to get around the problem of reproducibility by insisting on complete transparency and deposition of data. I am afraid, though, that some scientists are just as bad as some posters here at never admitting mistakes and doubling down on them.
Sad but true. There are some whoppers out there in terms of 'productivity':
 
New user here, but long time lurker.

I think the assumption that noise reduction is present should be revisited. From what I gather from the old threads on DPReview, all we can definitively say is that there is some sort of signal processing going on, but whether that is noise reduction or something else is harder to say. Others who are smarter than me have noted that they cannot find evidence of actual noise reduction in the photos, e.g. the discussion here regarding the R6 / R6II.

The determination of whether a sensor should be labeled as having noise reduction seems to be based on the energy spectra charts. I did ask Bill about the Sony cameras, since they show a similar (if slightly less pronounced) pattern to the R5II but are not labeled as having NR. He did note that there isn't really a hard cutoff. And although not posted there, personally I think the R5II ES spectra look even less pronounced than those of the various Sony cameras, yet the former is labeled as having NR and the latter are not (R5II ES vs Sony A1, for example).

In any case, it certainly does seem there is signal processing going on (and many of the R cameras prior to the R5II have a much more pronounced spectral pattern to that effect). But I don't think we can conclusively say it's NR, nor can we really say what impact it's having on photos, dynamic range, etc.
This is quite interesting. It shows how silly it is to complain about baked-in NR without seeing any issue with the sharpness of the actual photos.
 
I’m going to bookmark this article so I can refer to it when my wife’s R5II comes in and she starts lording it over me.

“No, look, see? It says right here the differences are minute….”
 
The breakthroughs have been in software solutions: what people broadly (perhaps too broadly) refer to as computational photography.
There are real breakthroughs in things like AF assist that happen pre-capture.
But computational photography can't create scene-referred information; you only get that from sensor improvements, and that is the part related to 'photography'.

Fake detail from AI upscaling or AI noise reduction does not improve sensor performance.
I find the 'fear' of 'computational photography' an amusing discourse online. I myself am not a big editor of my photos.
Fear?
I'm perfectly fine with processing, upscaling, noise reduction etc., and I do edit my photos myself.

But again, I don't like the idea of substituting AI-generated fake detail for genuine advancements in sensor technology. Note that despite very heavy computational photography in phones, phone sensors have actually been getting better. There's a lot of room for improvement in full-frame sensors too; maybe they just don't spend enough money on R&D.
 
This has just come into my inbox from Nature:

Your papers are training AI models

Artificial-intelligence developers are buying access to valuable data sets that contain research papers — raising uncomfortable questions about copyright. Anything that is available to read online — whether in an open-access repository or not — is “pretty likely” to have been fed into an LLM already, says AI researcher Lucy Lu Wang. “And if a paper has already been used as training data in a model, there’s no way to remove [it].”

So those large language models for biomedical research will have an 85% poo base!
I do not have the data, but I would assume that (experimental) physics papers, and probably also chemistry, would be mostly reproducible. By 'scientific', I assume it includes the social and behavioural sciences, which do seem to suffer from the 'replication crisis'. However, that's the 'beauty' of the scientific approach: sooner or later, results need to be verified and tested by others before they are accepted. The fact that these '85%' get found out eventually is a good thing. The problem is when AI indiscriminately (and irresponsibly) uses these publications to generate responses to prompts; as users, we would have difficulty verifying them on our own.
 
I am afraid it's pretty much true in biomedical research; @neuroanatomist is right, reproducibility there is very poor. However, in your subject, chemistry, and in the physical sciences and engineering, which are based on quantitative measurements, reproducibility is very high.
I do not have the data, but I would assume that (experimental) physics papers, and probably also chemistry, would be mostly reproducible.
At least according to the Nature news article that I linked above, the highest rate of non-reproducibility is among chemists, followed by biologists then physicists/engineers.

 
...

But overall the R5II is basically slightly worse than the R5, so those who don't care about the AI and AF improvements, the pre-burst feature, and the video improvements may skip the R5II and get an R5, or stay with their R5 if they already have one.

In other words, the R5 looks like the better value-for-money camera if your primary focus is landscapes, architecture, etc.
Which is a good thing, as I think most photographers will understand.
 
DxOMark puts the actual base ISO of the R5 at 54. That means at a setting of ISO 100, the camera is actually pushing the exposure by nearly a full stop. Most likely the 'baked-in' NR is intended to counteract the additional noise from that one-stop push.
I guess that makes sense in that all ISO values for the R5II have baked-in noise reduction, but only the ISOs below 800 had noise reduction on the R5.

I wonder what the reasoning was behind the decision on the R5II to bake in noise reduction rather than make it a base ISO 50 or 64 camera. Did they just not want the dual gain to kick in a full stop lower?
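For what it's worth, the size of that push can be worked out directly from the quoted DxOMark figure (a rough check, taking the measured base ISO of 54 at face value):

```python
import math

# How big a push is a measured base ISO of 54 when the camera is set to ISO 100?
# Each stop is a doubling, so the push in stops is log2 of the ratio.
push_stops = math.log2(100 / 54)
print(round(push_stops, 2))  # ≈ 0.89 stops, i.e. just under one full stop
```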
 
I guess that makes sense in that all ISO values for the R5II have baked-in noise reduction, but only the ISOs below 800 had noise reduction on the R5.
'Measured ISO' figures on DxOMark are definitely not the reason Canon implemented the noise reduction. It's some obscure metric from DxO that doesn't have a practical use.

Also, the ISO standard doesn't specify noise levels per ISO value, so Canon wouldn't even think of using noise reduction to match a non-existent noise-level requirement per ISO setting.
I wonder what the reasoning was behind the decision on the R5II to bake in noise reduction rather than make it a base ISO 50 or 64 camera. Did they just not want the dual gain to kick in a full stop lower?
Canon wouldn't tell us, and those who analysed it (including Bill Claff) found the noise reduction in the R5II to be very mild, lower than that in the R5. In any case, it's not related to the ISO at which the second gain kicks in.
 
I have to say, in normal daylight the R6 Mark 2 images absolutely throw me for a loop. Something just looks off about them: they look sharp and not sharp at the same time. Ironically this can work amazingly well in the opposite scenario, night-time shots with signage and lights. Images can look ridiculously clean in the right circumstances.

I have one image from a set taken in the heart of Tokyo last September, and I'm still stupefied by how amazingly clean and sharp it is, but the daytime images make me want to yell at clouds with their lack of critical sharpness compared to 5D3 files. When I got back to the States I did some more tests with some ducks on a pond, and the results showed that same bizarre sharpness and softness at the same time; it drives me nuts.

I'm not trying to start a debate; I have tens of thousands of edited pictures here to compare against, so I don't need to go to the Internet for advice. Just putting my own experience out there. I just wasn't happy with the R6 Mark 2 IQ. I really need to get around to selling this thing; it's just sitting here.

I do a lot of portraits, and it made me appreciate how much work the 5D3 has been able to do all these years. That critical sharpness around the eyes makes a huge difference... to me.

Good luck everyone else. I already have a R5 so I'm good to go.
Would you mind posting some photos? I have an R5 and I have been considering moving over to the R6II, mainly because I just don't feel the need for such heavy files. I'm no longer doing pro shoots full time, when I used to want the extra pixels, and I always loved the 5DIV pixel count and the resulting balance of the files. I'd appreciate it if you shared some more thoughts about this.
 
At least according to the Nature news article that I linked above, the highest rate of non-reproducibility is among chemists, followed by biologists then physicists/engineers.

Look at the numbers involved: only 106 chemists, for example. Nature claims to have 63,000 subscribers in 230 countries, and those subscribers include many universities, companies, libraries, etc., so the readership could be 10 or 100 times more. So, that article ends with a big disclaimer:

The survey — which was e-mailed to Nature readers and advertised on affiliated websites and social-media outlets as being 'about reproducibility' — probably selected for respondents who are more receptive to and aware of concerns about reproducibility.

In other words, it is completely flawed statistics: not based on unbiased sampling, written by a house journalist for effect, and not subject to peer review. A bit like surveying the perceived quality of Canon cameras based on replies from a Sony site.
 