trygvewastvedt Posted May 18, 2014
I've been looking around for info on simulating the human eye (instead of a camera lens) in Maxwell but haven't found much. Of course it is to some extent subjective. Right now I'm trying to figure out appropriate settings for Simulens. Does anyone know the real meaning of the numbers in the sliders for scattering and diffraction? Or does the appropriate number change based on the light? Any other thoughts on human eye simulation would be welcome. Thanks.
Chris MacDonald Posted June 23, 2014
I don't know anything about Maxwell, really. But I am fairly sure the human eye is equivalent to a focal length of ~22mm and an f-stop of f/8. A quick Google search turned up this and a fair few others: http://petapixel.com/2012/06/11/whats-the-f-number-of-the-human-eye/
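For what it's worth, the relation behind these numbers is just f-number = focal length / pupil diameter. A minimal sketch, assuming the ~22mm equivalent focal length from the linked article and typical pupil diameters (these specific diameters are rough illustrative figures, not from the article):

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """f-number N = focal length / entrance pupil (aperture) diameter."""
    return focal_length_mm / pupil_diameter_mm

# With a ~22mm equivalent focal length, a constricted bright-light pupil
# of ~2.75mm gives roughly f/8, while a dark-adapted pupil of ~7mm gives
# roughly f/3.2 -- consistent with the range quoted in the article.
bright = f_number(22.0, 2.75)
dark = f_number(22.0, 7.0)
```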
trygvewastvedt (Author) Posted June 28, 2014
Chris, Thanks for the reply! I remember reading something similar but not as succinct. According to your link, the f-stop ranges between 3.2 and 8.3 depending on light levels. Still looking for something concrete on scattering and diffraction though. Anyone?
RyderSK Posted June 28, 2014
I always wondered about this. To me the biggest question seemed to be how to align the dynamic range of the eye to that of a camera, and that, in turn, to 3D. Based on Wikipedia, it seems the human eye can achieve a dynamic range of 90 dB (I would never expect the same unit to be used for optics, but that's what's written there), which in the supplied table equates to roughly 6.5 stops; that seems less than some top CMOS sensors in current cameras. Now, how to naturally tonemap the linear result generated in CGI to mimic human sight? Honestly, I don't know. I don't even know how to correctly convert linear footage to a camera response so it matches 1:1 what cameras output. Some renderers allow you to load LUTs/color profiles of these response curves directly, but those just seem to apply like a filter on top instead of remapping the linear curve to the identical curve of the camera. The same LUT is also used for color grading real camera footage (from a digital CMOS), which is not linear, so the result is obviously not the same. I would wager that only pro CGI researchers like Paul Debevec and his peers can clearly articulate an answer to this.
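On the "how do you tonemap a linear render" question: there's no single correct curve, but as an illustration of what any such mapping does, here's a minimal global Reinhard-style operator followed by an sRGB encode. This is just one common global tonemapper used as a sketch, not a model of human vision or of any real camera's response:

```python
def reinhard(L, white=4.0):
    """Extended global Reinhard tonemap: compresses linear luminance L
    into [0, 1], with L == white mapping exactly to 1.0."""
    return min(1.0, L * (1.0 + L / (white * white)) / (1.0 + L))

def srgb_encode(c):
    """Encode a linear [0, 1] value with the sRGB transfer function."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Linear scene luminances (e.g. 18% grey, a highlight at 4x white point).
linear_values = [0.01, 0.18, 1.0, 4.0]
display = [srgb_encode(reinhard(L)) for L in linear_values]
```

The point of the sketch is only that the linear-to-display step is a deliberate, lossy compression; matching a specific camera's response 1:1 would mean replacing `reinhard` with that camera's measured curve.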
Ernest Burden III Posted June 29, 2014
There are only so many parallels we can draw between human vision, optical cameras and virtual ones. The human eye does not work like a CMOS; it does not have linear response across its surface (retina), and there is also no separating the output stream from the eye itself from the processing the brain does with it. We do not see with our eyes, they are just sensors. We see with our minds. I would like to know about the spectral response of the retina, also. Remember film? Film came in a multitude of flavors that each had chemical tonemapping via their unique spectral characteristics. I do think it is useful to look at this concept of the equivalent focal length and aperture of human vision when considering virtual imaging, but it only goes so far. Like our eyes, it's all in how it's processed and interpreted. "Photoreal" isn't.
Ismael Posted July 7, 2014
http://forums.cgarchitect.com/34116-photography-versus-rendering.html
http://www.billingpreis.mpg.de/15572/mantiuk.pdf
http://forums.cgarchitect.com/21183-vray-physically-correct.html
http://en.wikipedia.org/wiki/Tone_mapping
trygvewastvedt (Author) Posted July 19, 2014
Thanks for all the great info! This is all very helpful. I think Chris's last point is key: the main problem, as far as field of view and distortion are concerned, is that you're trying to represent a 360-degree environment wrapping around your eyes in a flat image that typically occupies only a small portion of that environment. So, I need a 360 panorama viewed inside Oculus Rift goggles? Sounds good to me! @Chris: yes, that formula would be great if you can find it. Any Maxwell users out there have thoughts about accurate Simulens settings for the human eye?
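To put a number on how small that portion is: the horizontal angle of view of a flat render follows from simple geometry. A quick sketch, assuming the ~22mm "eye equivalent" focal length from earlier in the thread on a full-frame 36mm sensor width (the sensor width is my assumption, not something stated above):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view: 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A ~22mm lens on a full-frame sensor covers roughly 78-79 degrees
# horizontally -- well under a quarter of the 360-degree environment.
fov = horizontal_fov_deg(22.0)
```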
eagle_ear Posted July 24, 2014
by the look of dis here, the replicants are coming, and soon.
Ismael Posted July 31, 2014
Thanks Chris.