FOVO: a new 3D rendering technique

Started by WAS, May 30, 2020, 07:16:17 PM


WAS


Kadri

I just read it. Interesting. We will hear more in the future about it probably.

WAS

Quote from: Kadri on May 31, 2020, 11:05:59 PM
I just read it. Interesting. We will hear more in the future about it probably.

Hopefully. I couldn't find any other information on it like how it's done.

Kadri


Looks like they are stretching and bending the geometry in a certain way to get the look.
Makes me wonder how much of it is feeling-based and how much is physically based.

WAS

I wondered that too. They show landscapes and interiors, and there is certainly a difference in depth perception between them. I wonder if it requires manual settings to tune it to different spaces.

WAS

I will admit the first-person perspective with a visible body looks "right". This has been attempted many times in FPS games, but it always looks cardboard-flat, and the depth from chest to feet looks skewed, even in modern VR games I've tried.

Kadri

Yes. I don't know about its physical correctness, but it feels and looks right.

Tangled-Universe

This is really interesting, I like it.

To me it looks like it's basically a kind of lens shader.
With a lens shader supporting depth of field, for instance, occluded elements can still appear visible, though as out-of-focus, blurred elements.
This is known as the circle of confusion, and a depth-of-field lens shader basically encodes how the light paths/rays are bent.

Similarly, you can code lens shaders to behave like a fisheye or a spherical camera (which we have in TG).
Each calculates the light paths to the sensor plane differently, and this is what FOVO rendering appears to be doing as well.
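
As a rough illustration of what a lens shader does, here is a minimal Python sketch of two ray mappings, linear pinhole versus equidistant fisheye. The names and structure are made up for illustration; this is not TG's or FOVO's actual code:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def pinhole_ray(px, py, width, height, fov_deg):
    # Linear perspective: all rays pass through one point and the
    # image plane is flat, so straight lines stay straight.
    half = math.tan(math.radians(fov_deg) / 2.0)
    aspect = width / height
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    return normalize((x, y, -1.0))

def fisheye_ray(px, py, width, height, fov_deg):
    # Equidistant fisheye: the angle from the optical axis grows
    # linearly with distance from the image centre, bending rays
    # instead of projecting onto a flat plane.
    nx = 2.0 * (px + 0.5) / width - 1.0
    ny = 1.0 - 2.0 * (py + 0.5) / height
    r = math.hypot(nx, ny)
    if r > 1.0:
        return None  # outside the image circle
    theta = r * math.radians(fov_deg) / 2.0
    phi = math.atan2(ny, nx)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            -math.cos(theta))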

WAS

I think it's a little more complicated than a basic lens shader. A lens shader takes scene rays as they are, from what I understand, which is why, regardless of FOV, the overlay of elements within the scene stays the same, just skewed/stretched. The examples from this project's 10 years of R&D actually show scene perspectives of elements in 3D that differ from linear perspective. They even mention bending and curving rays rather than skewing/stretching scene images, which may be why it's more taxing on game performance.

Also, isn't a 360 camera just 360 degrees of linear perspective warped onto a flat surface? Image processing?
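
To make the distinction concrete, here is a minimal Python sketch of the two ideas; scene.trace and ray_for_pixel are hypothetical stand-ins, not a real renderer API. A pure 2D warp can only move pixels that were already rendered, while per-ray rendering lets a different lens mapping change what is actually hit:

def warp_image(src, remap):
    # Pure post-processing: every output pixel is looked up from an
    # already-rendered image, so occlusion can never change.
    h, w = len(src), len(src[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = remap(x, y)  # any 2D mapping, linear or not
            sx = min(w - 1, max(0, int(round(sx))))
            sy = min(h - 1, max(0, int(round(sy))))
            out[y][x] = src[sy][sx]
    return out

def render_with_lens(scene, ray_for_pixel, w, h):
    # Per-ray rendering: each pixel traces its own ray into the 3D
    # scene, so a different ray mapping changes what is actually
    # visible, not just where it lands on screen.
    return [[scene.trace(ray_for_pixel(x, y)) for x in range(w)]
            for y in range(h)]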

Tangled-Universe

I see what you mean, but I'm having difficulty explaining myself, I think.

The geometric aspect of FOVO, which I think you refer to as the "overlay of elements in the scene" and I refer to as "occluded elements", is something a lens shader can do, I think.
Hence I mentioned depth of field, where "scene rays" originating from scene elements that are occluded from a camera ray can still reach the sensor because of the 'circle of confusion' phenomenon.
I think you rightly mention that 360-degree and perhaps other lens shaders are warps on flat surfaces, a 2D matrix conversion if you will.
However, when you consider that depth of field is also warping, then it certainly is warping in 3D (the circle of confusion, but also the simple fact that it requires a focal distance setting), and depth of field is a lens shader effect.
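
A minimal sketch of that depth-of-field idea, assuming a standard thin-lens camera model (Python; this is a textbook technique, not FOVO's published method). Rays are launched from random points on the aperture disc and all converge at the focus plane, so anything off that plane smears into a circle of confusion on the sensor:

import math, random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def thin_lens_ray(pixel_dir, aperture_radius, focus_dist):
    # Where the pixel's centre ray crosses the plane of focus
    # (camera looks down -z).
    t = focus_dist / -pixel_dir[2]
    focus_point = tuple(t * c for c in pixel_dir)
    # Random point on the lens aperture (uniform disc sample).
    r = aperture_radius * math.sqrt(random.random())
    a = 2.0 * math.pi * random.random()
    origin = (r * math.cos(a), r * math.sin(a), 0.0)
    # All aperture samples converge at focus_point; points off the
    # focus plane land on different sensor positions per sample,
    # which is exactly the circle of confusion.
    direction = normalize(tuple(f - o for f, o in zip(focus_point, origin)))
    return origin, direction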

You might still be right, but this is why I think it could basically be a lens shader; a 3D lens shader, then, which I did not state explicitly enough, I guess.
The FPS penalty when playing games may be attributed to the same reason depth-of-field calculation is more expensive: you do not perform a 2D linear matrix conversion from scene space to screen space, but a 3D non-linear conversion from scene space to screen space.
Here's an example of non-linear 2D conversions:
http://paulbourke.net/miscellaneous/lens/
Looking at this I can imagine how you could extend these principles by one extra dimension and how that would change the occlusion of elements.
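
As one concrete example of such a non-linear 2D conversion, in the spirit of the warps on that page, here is a small Python sketch of a radial polynomial distortion (the parameter k is mine, for illustration):

def radial_remap(x, y, w, h, k):
    # Simple radial distortion in flat 2D screen space:
    # k > 0 pushes points outward (pincushion-style), k < 0 pulls
    # them inward (barrel-style), k = 0 is the identity mapping.
    cx, cy = w / 2.0, h / 2.0
    nx, ny = (x - cx) / cx, (y - cy) / cy  # normalise to [-1, 1]
    r2 = nx * nx + ny * ny
    scale = 1.0 + k * r2  # polynomial distortion model
    return cx + nx * scale * cx, cy + ny * scale * cy

Extending this by one dimension would mean applying a function like this to the ray directions themselves (as in the fisheye sketch earlier) rather than to already-rendered pixels, and then the occlusion of elements really can change.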

I'd love to have such a camera/lens/whatever it is(!) in TG, because it looks like it could really contribute to a heightened sense of scale in large-scale TG environments!

WAS

Quote from: Tangled-Universe on June 06, 2020, 09:08:49 AM
I see what you mean, but I'm having difficulty explaining myself, I think. [...]


I agree; this seems exciting both for 3D rendering and for realism in video games. Like I mentioned, first-person perspective with a body, without it being a weird cardboard skew from shoulders to feet.

I do wonder how it actually works, because I notice that if you change your FOV in pretty much any game, or even in TG, the elements in 3D space are always in the same place, overlaying the same elements; they never change, they just start polar-shifting. That doesn't mean it isn't a lens shader, I guess; I just haven't seen any FOV change that actually alters which occluded elements are seen rather than warping them (think fisheye). Then again, this method appears to use two sources, like human eyes, which would allow the depth I'm referring to, like you showed in that example.

Also, yeah, the distant landscape example they show looks really nice. It gives that sense of scale and distance.

Tangled-Universe

I wonder how it works too, and I'm curious whether they will ever release a white paper; probably not at this stage.

The elements stay the same because it's linear, and you could indeed see it as polar shifting or something along those lines; it's what I referred to as a matrix conversion.
If you check their morphing examples (car interior) you can see that their method is non-linear. It's a certain blend of barrel distortion at the centre and pincushion at the periphery of the frustum.
Without meaning to sound like a broken record, I notice you don't really go into my explanations and thought process about how I think it works... so here I go again, with depth of field :P With depth of field you still have a FOV with the limitations you mention, but along the Z-depth all elements in 3D can be shifted in 2D screen space because of the circle of confusion phenomenon.
Imagine... well, I have actually done it myself... taking a photograph of an animal at a zoo behind a fence, with the fence 3 ft away and the animal 20-30 ft away. With a large aperture and shallow depth of field you can completely remove the fence and get a sharp image of the animal. The fence is still there, but its rays are 'bent' to focus way behind your sensor plane, which renders the fence invisible. This effect can be so strong that an out-of-focus scene element is projected upside down on the sensor plane. That is a really drastic way of shifting rays from 3D onto the 2D sensor plane.
In effect this changes the occluded elements with regard to what you *see*, though in 3D scene space nothing has changed, and neither has it in those interior scenes with FOVO rendering.
If it is a lens shader, then it samples the environment in a non-linear fashion, perhaps on similar principles to how a renderer samples depth of field. I'm only using depth of field as an example of why a lens shader could do perspective shifting like this FOVO thing.
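
For what it's worth, the fence example can be put into numbers with the standard thin-lens circle-of-confusion formula. A quick Python sketch; the lens and distances below are assumptions for illustration, not from the actual photo:

def coc_diameter(f, N, s_focus, s_obj):
    # Circle-of-confusion diameter on the sensor (thin-lens model).
    # f: focal length, N: f-number, s_focus: focused distance,
    # s_obj: distance of the defocused object. All in metres.
    aperture = f / N
    return aperture * (f / (s_focus - f)) * abs(s_obj - s_focus) / s_obj

# Assumed numbers: 200 mm lens at f/2.8, animal in focus at ~25 ft
# (7.6 m), fence at ~3 ft (0.9 m).
c = coc_diameter(f=0.200, N=2.8, s_focus=7.6, s_obj=0.9)
print(round(c * 1000, 1), "mm")  # ~14.4 mm: a blur disc spanning
                                 # roughly 40% of a 36 mm-wide
                                 # sensor, so the fence smears away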

Anyway... it seems they will release a plugin at some point, so we can load in our renders and apply the effect in post. It will not have the same properties as doing it natively (similar to how you can't do *proper* depth of field in post), but it will certainly enhance the result, I think!

WAS

Oh, OK. I get what you're saying, and yes, that's true. But what I'm trying to emphasize is that they seem to be using two viewports, emulating [most] human vision. So there is a cross-section of focus between the L/R viewports. Each is probably linear, being processed into a bilinear sample. Each "eye" has depth of field, seeing objects from the L/R perspectives. I think this is where the impact they mentioned (on games) is coming from.

For example, I remember a fun game we did in preschool. We set objects in front of us, then quickly closed and opened one eye at a time and watched the perspective change; my soda can popped to the left and to the right because of where my eyes sit. With both eyes together, a whole new image is formulated, a sample of both, creating 3D depth and perspective.
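
The size of that jump is easy to estimate (a quick Python sketch; the eye spacing and the distance to the can are assumed numbers, not measurements):

import math

def eye_shift_deg(ipd_m, dist_m):
    # Apparent angular jump of an object when alternating eyes:
    # two viewpoints ipd_m apart see it along directions that
    # differ by the vergence angle computed below.
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / dist_m))

# Assumed: ~63 mm interpupillary distance, soda can ~0.5 m away.
print(round(eye_shift_deg(0.063, 0.5), 1), "deg")  # ~7.2 degree jump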

Tangled-Universe

Can you point me to the split L/R viewport examples? Before I start saying this doesn't seem to be about stereo rendering, I'd like to be sure I understand what you mean.