I endorse his response as being accurate and truthful! In some ways that's what it's aimed at: to make that as good as it can be. No one here is "down on 64-bit"; rather, we're giving you real information.

The effective change from messing with it is likely to be mid-tone contrast. In some ways it's best to just accept it: it's become the standard, and the value has a lot to do with the appearance of the final image, as a camera can't match our eye's view in all circumstances. All that might help or might not. The story goes that MS were not happy with the images they were getting out of "super colour", as they called it, and HP came up with the processing that was eventually used. Both of those decisions were based on the final appearance of an image taken digitally or scanned from negative. Initially CRT displays had a gamma of about 2.5 due to the way they work. Some might say that some of the above has nothing to do with gamma, but I'm trying to explain why it exists.

HDR is needed to take care of that, if needed, as the camera can't cover the range we can. That's why cameras show shadows in our shots that we couldn't see. At one end there are too many steps, and compared with our eyes nowhere near enough at the other end. The bit counts from the camera have to be processed to give an acceptable tonal gradation: effectively 8 stops of equal brightness steps, and not that many in practice. Gamma is all about providing a tonal range and gradation that is suitable for our eyes in an 8-bit colour space that can only have 256 distinct values. As we would be unlikely to see a 1-bit signal on a monitor, it's worse than that really. So taking a 10-bit camera, as far as we are concerned equal brightness changes would go through light levels of 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 - and so on for higher bit-count cameras.
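That mismatch between linear code values and photographic stops is easy to see numerically. The sketch below (an illustration written for this note, not taken from the thread) counts how many of a 10-bit sensor's linear levels fall inside each factor-of-two stop:

```python
# Count how many linear code values fall into each photographic stop.
# Each stop spans a factor-of-2 range of intensity: [1,2), [2,4), ... [512,1024].
# Illustrative sketch for a linear (gamma = 1) sensor; codes_per_stop is a
# hypothetical helper name, not something from the discussion.

def codes_per_stop(bits=10):
    top = 2 ** bits              # 1024 distinct levels for a 10-bit sensor
    stops = []
    low = 1
    while low < top:
        high = min(low * 2, top)
        stops.append(high - low)  # number of linear codes inside this stop
        low = high
    return stops

print(codes_per_stop(10))
# -> [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

The brightest stop soaks up half of all the codes while the darkest stop gets a single code: far too many steps at one end and nowhere near enough at the other, which is exactly the imbalance a gamma curve redistributes.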
We see equal brightness changes when the light intensity changes roughly by a factor of 2. At high brightness levels the camera is more sensitive, so its brightness steps are irrelevant as far as our eyes are concerned because we won't see them. Our eyes are far more sensitive to low-light-level changes than a camera is, so that aspect needs to be corrected. I thought that link I posted earlier would clear the main aspect up.

The rec2020_log ICC mentioned two posts before the link has a TRC with a linear part at the start (the darks) with a slope of 64, i.e. 64x denser sampling there, making an 8-bit file match the density of 14-bit linear (gamma = 1, slope = 1). Play with the DNG + pp3 linked at Please add LogC export (GitHub) to see for yourself: save the cat with the above pp3 using various output gamma settings and colour profiles, then open in RT and brighten the shadows.

Things get worse with JPEGs, where the integer transform from RGB to Y'CbCr (JPEG - Wikipedia) inserts one more quantization step, and then the compression increases posterization even more. With low-bit-depth files (8-bit) the posterization is obvious with gamma = 1, and visible even with sRGB gamma (12x denser sampling in the darks). With clean synthetic pictures (no dither due to noise) posterization is easier to provoke. With 16-bit TIFF photos (i.e. somewhat noisy darks, which is a kind of dithering) we can hardly see any difference in the darks' posterization even with gamma = 1 - maybe a bit more with Adobe's PS/LR, which convert 16-bit TIFFs to 15-bit. There is no difference in colour/contrast etc., but there is a difference in posterization in the darks (and not only the darks).
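The "denser sampling in the darks" point can be sketched numerically: quantize the darkest slice of the linear range to 8 bits through three transfer curves and count how many distinct codes each one produces there. The slope-64 curve below is a simplified stand-in for a log profile's linear segment, not the actual rec2020_log ICC, and the helper names are my own:

```python
# Count the distinct 8-bit codes available for the darkest part of the range
# (linear 0 .. 1/64 of full scale) under three transfer curves.
# Simplified sketch - not the real rec2020_log ICC profile.

def srgb_encode(x):
    # Standard sRGB transfer function (linear light -> encoded value);
    # the linear toe has slope 12.92, hence ~12x denser sampling near black.
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def codes_in_darks(encode, dark_limit=1 / 64, levels=256):
    # Distinct 8-bit codes produced by linear values in [0, dark_limit],
    # sampled finely enough that no reachable code is skipped.
    n = 100_000
    codes = {round(encode(i / n * dark_limit) * (levels - 1)) for i in range(n + 1)}
    return len(codes)

gamma1 = codes_in_darks(lambda x: x)                 # identity: gamma = 1
srgb   = codes_in_darks(srgb_encode)                 # sRGB curve
log64  = codes_in_darks(lambda x: min(64 * x, 1.0))  # linear segment, slope 64

print(gamma1, srgb, log64)  # gamma=1 leaves only a handful of codes for the darks
```

With gamma = 1 the bottom 1/64 of the range collapses to about five codes (obvious posterization), sRGB's toe stretches it to a few dozen, and a slope-64 segment spreads it across all 256, which is why the 8-bit log file behaves like 14-bit linear in the shadows.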