Lenses vs the human eye

Started by TheBadger, March 13, 2015, 04:25:55 PM


TheBadger

Hey,

This may not be the right way to ask, but:
In 3D, and specifically TG, what aspect ratio and lens setting would create the most human-like view of any scene?

So what I mean is: if you are standing in a field just looking to the horizon, then regardless of what else is in the scene, what frame (aspect ratio) is most like how you see the real world? And also, what lens looks most like how you see the things in the landscape; 35mm, 55mm, and so on?

I am pretty sure there is a lot of research on this, and I may have heard the answer I am looking for sometime in the past, but I can't recall. And I am not sure how to search Google for the answers because I can't think of the right terms to get to the meat of it.

Additionally, there is the question of how the created image should be presented so that (given a certain distance from the viewer) it keeps the proper scale. So after the image is created, if it were printed, how large should the print be, and how far away would you stand, so that if staring at the horizon, the image would appear just as if you were standing in front of the scene in reality? Hope that makes sense to you.

Now then, in terms of digital viewing, likewise, what are the settings most relevant to human perception when the image is viewed in VR?

Keep in mind the reality of human peripheral vision, but ignore the effect of DOF, since at the right presentation size/scale the eye would do this on its own, so there is no need for digital DOF.

Does all that make sense then?

Thank you for some answers or leads.
It has been eaten.

bigben

For fov, it's going to depend on the viewing distance and size of the image. It's not so much a question about which one is correct as making sure that the render matches the viewing conditions. Projection type on the other hand is a different matter. It could be argued that fisheye projection is more natural. At smaller fov the distortion is not as noticeable and you don't get the radial stretching near the edges that you get with rectilinear images.

For VR, the viewing side of things is taken out of the equation as the image/scene is essentially being re-rendered for the viewer.

Upon Infinity

The human eye, in photographic terms, would be like a 50mm lens on a 35mm camera with a 36x24mm frame (the Terragen default). And it runs at approximately 40 frames per second. DOF, due to the automatic changes in pupil diameter, is variable.
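For reference, you can compute the rectilinear angle of view that a given focal length and frame size produce, using the same trigonometry that applies to prints and windows. A minimal Python sketch (the function name is mine, not from this thread):

```python
import math

def lens_fov_deg(frame_dim_mm, focal_length_mm):
    """Rectilinear angle of view (degrees) covered by one frame
    dimension at a given focal length."""
    return math.degrees(2 * math.atan(frame_dim_mm / (2 * focal_length_mm)))

# A 50mm lens on a 36x24mm frame:
print(lens_fov_deg(36, 50))  # ~39.6 degrees horizontally
print(lens_fov_deg(24, 50))  # ~27.0 degrees vertically
```

So the "normal" 50mm view covers only about 40x27 degrees, far less than the full human visual field; the 50mm equivalence is about perspective rendition, not coverage.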

oysteroid

#3
Badger,

I am afraid that there is no real straightforward answer here. Part of what you are looking for might be here:

http://www.cambridgeincolour.com/tutorials/cameras-vs-human-eye.htm

But there are several other things to consider. You wouldn't necessarily want to try to match the limits of human vision in a picture because with such a wide angle of view, you'd have lots of distortion. That doesn't look natural in most cases when projected onto a flat surface. We don't see distortion like that in our visual field because, at least phenomenologically, we are "in the scene". It wraps around us.

And it isn't as simple as just considering the angle of view of our eyes when looking at a fixed point straight ahead. When looking at a real scene, we don't take it all in with one glance or by fixing our eyes in one direction. The eyes move in saccades, darting around the scene, taking lots of samples and assembling an impression in the mind based on the information integrated from lots of such samples. And the head moves too. So when we explore what might be near the corners in our projected picture, where there would be lots of distortion, we don't see any distortion in the real world. That area, when seen with eyes, is always in the center of our visual field and the camera (our eyes) is pointed straight at it. Also, our retinas are round, not flat like the film or sensor plane in a camera or the image plane in a rendering engine. So that also plays a role in the lack of distortion.

Everything we pay attention to is always in or very near the center of the visual field with our eyes pointed straight at it. When we stand facing a large print, our eyes move to center the area of interest on the fovea, but when we look at the corner of the picture, there is a mismatch between the projection plane of the image and the angle of the retina.

Really, the only way to solve this problem is to render a portion of a spherical projection of the scene and print it on a big concave surface such that your head can be in the center of the sphere and you can look around at it from there. But that is problematic for a number of obvious reasons.

I think that if you want the viewing experience to feel as natural as possible, maybe a better way to look at it is to simply decide first how large your print will be and what the best viewing distance is, and then select an angle-of-view for your TG camera that would match that. This way, the picture has a projection similar to what you'd see if you were looking through a window of that size. You need to do a little trigonometry:

angle of view for one picture dimension = 2 * atan((picture dimension / 2) / viewing distance)

So a 30x18inch window seen from 30 inches would involve an angle of view of about 53x33 degrees.

If you want to do this calculation, make sure your calculator or software is set to degrees rather than radians. And 'atan' might look more like 'tan-1'. With the Windows calculator, when in scientific mode, to get 'tan-1', you need to hit 'inv' first. That'll turn the trig functions into their inverse versions.
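The same calculation is easy to script if you'd rather not fight a calculator's degree/radian modes. A minimal Python sketch (the function name is mine):

```python
import math

def angle_of_view(dimension, viewing_distance):
    """Angle of view (degrees) subtended by one picture dimension
    viewed straight-on from the given distance (same units)."""
    return math.degrees(2 * math.atan((dimension / 2) / viewing_distance))

# A 30x18 inch print viewed from 30 inches:
print(angle_of_view(30, 30))  # ~53.1 degrees horizontally
print(angle_of_view(18, 30))  # ~33.4 degrees vertically
```

Python's `math.atan` works in radians, so `math.degrees` does the conversion that 'inv'/'tan-1' plus degree mode does on a calculator.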

As for aspect ratio, if I remember right, the typical cinema aspect ratios were selected with an idea similar to yours. They were trying to mimic the human visual field as much as possible. But there is just no simple answer there, because our vision falls off gradually toward the periphery. Where you draw the line is rather arbitrary.

And in your peripheral vision, notice that if you look straight ahead, you can't see as far above as you can below. I can see almost straight down and almost straight to the sides, but probably only about 45-50 degrees up from straight ahead. This makes sense from an evolutionary standpoint. Seeing the ground is more important to us than seeing the sky. So how would you deal with that if you are trying to mimic the visual field?

oysteroid

#4
I forgot to address the VR part. That part is simple. In VR, you can look all around, just like in a real scene. So you would just render a spherical view with the spherical camera. How much of this would be seen without moving, or the angle of view on the screen for each eye, would depend on the hardware you are using.
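For what it's worth, a spherical render is typically stored as an equirectangular image, and the VR viewer re-projects it per eye by mapping image coordinates back to view directions. A minimal sketch of that mapping, assuming u spans the full 360 degrees of yaw and v spans 180 degrees of pitch:

```python
import math

def equirect_to_direction(u, v):
    """Map equirectangular image coordinates (u, v in [0, 1]) to a
    unit view direction; (0.5, 0.5) is straight ahead (+z)."""
    yaw = (u - 0.5) * 2 * math.pi    # -180..+180 degrees
    pitch = (0.5 - v) * math.pi      # +90 (up) .. -90 (down) degrees
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```

The headset samples only the directions that fall within its per-eye FOV at any moment, which is why the render itself doesn't need a "correct" FOV.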

Quote: "The Oculus Rift has a horizontal field of view (HFoV) of approximately 90 degrees..."

PabloMack

#5
Quote from: TheBadger on March 13, 2015, 04:25:55 PM
In 3D and specifically TG, what is the aspect ratio and lens setting that would create the most human like view of any scene?

I know you aren't going to like this answer, but the truth is that you can't really set a camera to have a human view. A camera sensor has pretty much uniform resolution and sensitivity across the whole picture. The best vision in the human eye is at the fovea (near the center of the picture), where the resolution is at its maximum, and it falls off from there toward the periphery. If I stretch my arms out to each side and wriggle my fingers, I can judge that my field of view is actually about 180°, but the resolution is very poor at these "edges". So the edge of view is not distinct but a gradient. Where you select the effective cut-off (edge of the picture) is somewhat arbitrary and up to you to decide. Even this is not totally accurate, because there is a blind spot with no resolution a few degrees medially from the fovea in each eye, where the optic nerve enters the eye to attach to the light sensors in the retina. But the redundancy in stereoscopic vision makes these somewhat unnoticeable.

I pretty much agree with what Oysteroid wrote.

Matt

Quote from: bigben on March 13, 2015, 09:07:26 PM
For fov, it's going to depend on the viewing distance and size of the image. It's not so much a question about which one is correct as making sure that the render matches the viewing conditions.

Exactly right.

Quote
Projection type on the other hand is a different matter. It could be argued that fisheye projection is more natural. At smaller fov the distortion is not as noticeable and you don't get the radial stretching near the edges that you get with rectilinear images.

Following from the logic that leads to FOV depending on viewing distance and display geometry, the same is true of the projection type. If your display is a flat rectangle and you view it straight on from a central position, then your CG camera should also be rectilinear. If you have a curved display, on the other hand, then fisheye might be more appropriate, but the display would have to be curved around both X and Y axes for it to be absolutely correct. BTW, this is why these new curved TVs are silly: most material is rendered for flat displays.

Quote
For VR, the viewing side of things is taken out of the equation as the image/scene is essentially being re-rendered for the viewer.

Yup.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Matt

Quote from: Matt on March 15, 2015, 01:33:14 AM
If you have a curved display, on the other hand, then fisheye might be more appropriate, but the display would have to be curved around both X and Y axes for it to be absolutely correct.

... but if the viewer is seeing the display from an odd angle, or with incorrect FOV, fisheye might be more appropriate because it removes one aspect of the distortion.

Matt

TheBadger

VERY BIG THANKS for the in depth talk on this topic. I am processing it.
