Terragen camera - ray tracing screen shape

Started by jknow, June 01, 2010, 09:18:13 AM

Previous topic - Next topic

jknow

Hi folks. I've been thinking about the basics of ray tracing in Terragen 2. Rays are cast from a camera position through the 'pixels' of a virtual screen. I'm pretty sure that's the basic idea.

This virtual screen - in Terragen is it a flat screen, meaning that the angles subtended by pixels far from the centre become slim? Is it a cylindrical screen, meaning that horizontal perspective is true but vertical distortion occurs? Or is it a spherical screen, much in the same way that you can imagine painting a landscape by sitting inside a glass sphere and painting onto the glass?

All the textbook examples of ray tracing seem to use a flat screen in their illustrations, but surely this produces distortion at the edges? I'd like to know what metaphor TG2 uses for its ray tracing.

I hope this makes sense, sorry if I haven't explained the question very clearly. Thanks a lot

jknow

Hope you don't mind if I give this a bump.

What shape is the virtual camera in TG2? Is it planar, cylindrical, or spherical? Thanks

Goms

If I get your meaning right, then the answer is: the camera is perspective, just like a normal camera. Therefore you get some distortion at the edges, determined by your field of view (FOV). You can adjust these settings in the camera node. You can also set the camera to orthographic.
Best regards
Quote from: FrankB
you're never going to finish this image ;-)

jknow

Thanks Goms. So that means the camera can't achieve 180 degrees horizontal, right?

So we're talking a flat image plane, which means distortion increases at the edges as you try to squeeze more and more of the landscape onto the flat image plane.

The alternative of course is to use a cylindrical projection (up to 360 degrees, but not good for looking up or down at high angles), or of course a spherical projection, which would model the spherical field of view that exists in the real world.

It looks very much like TG2 is constrained to the flat image plane, so it can't render panoramas. Or rather, you'd have to stitch them together, and thus experience even more distortion! Thanks for your reply. If anyone thinks I'm wrong please feel free to correct me
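To put some numbers on the flat-plane problem, here's a quick Python sketch (just the geometry, not anything Terragen actually does): the screen's half-width is tan(fov/2), which blows up as the fov approaches 180 degrees, and the angle covered by each pixel shrinks towards the edges.

```python
import math

def pixel_angles(fov_deg, n_pixels):
    """Angle subtended by each pixel column for a flat image plane.

    The plane sits at distance 1 from the eye, so its half-width is
    tan(fov/2). Pixel boundaries are evenly spaced on the plane, which
    means their angular spacing shrinks towards the edges.
    """
    half_w = math.tan(math.radians(fov_deg) / 2)
    xs = [-half_w + 2 * half_w * i / n_pixels for i in range(n_pixels + 1)]
    angles = [math.atan(x) for x in xs]
    return [angles[i + 1] - angles[i] for i in range(n_pixels)]

spans = pixel_angles(90, 10)  # 90 degree horizontal fov, 10 columns
print(spans[5] > spans[0])    # True: a centre pixel covers a wider
                              # angle than an edge pixel
# and since tan(fov/2) -> infinity as fov -> 180, a single flat plane
# can never quite reach 180 degrees
```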

Oshyan

Correct, you have to render sections and stitch to get panoramas currently. We do plan to allow for direct panorama rendering in the future (presumably through a spherical camera or similar).

- Oshyan

jknow

Thanks for clarifying.

These two images describe better what I was trying to ask, in case anyone else is interested or didn't understand what I was on about. I'm not sure how to post them inline; the IMG button on the editor doesn't seem to do anything other than insert img tags.

It would be great to see TG2 support cameras other than flat planes

nikita

#6
This is something I've been wondering about too. Personally, if I were to program a (new) raytracer, I'd use a constant angle between two neighboring pixels instead of a fixed distance on a plane.

ps: You got the projected area in your sphere wrong. Its edges shouldn't be aligned with the meridians (it'd fail with shots near the poles) ;)
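For what it's worth, the constant-angle approach is easy to sketch (Python, my own naming): step the yaw and pitch angles uniformly instead of stepping across a plane, and you get a spherical projection that happily covers any field of view up to a full 360 degrees.

```python
import math

def constant_angle_rays(h_fov_deg, v_fov_deg, width, height):
    """Ray directions with a constant angle between neighbouring pixels.

    Yaw and pitch are stepped uniformly, so there is no flat screen
    and no tan() blow-up: h_fov_deg can go all the way to 360.
    """
    rays = []
    for j in range(height):
        pitch = math.radians(v_fov_deg) * (0.5 - (j + 0.5) / height)
        for i in range(width):
            yaw = math.radians(h_fov_deg) * ((i + 0.5) / width - 0.5)
            rays.append((
                math.cos(pitch) * math.sin(yaw),  # x (right)
                math.sin(pitch),                  # y (up)
                math.cos(pitch) * math.cos(yaw),  # z (forward)
            ))
    return rays

# a full 360 x 180 degree panorama is no problem for this camera
rays = constant_angle_rays(360, 180, 8, 4)
```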

jknow

#7
That's a great point Nikita. Luckily the diagram's not mine; it's from the book Mastering Digital Panoramic Photography. You can download the first chapter from the publisher's site: http://www.rockynook.com/books/27.html

If you were standing in the sphere and took a photo (using a low distortion lens) what shape would the rectangular frame of the photograph project onto the inside of the sphere? It would look like a rectangle if your eye was in line with the camera sensor, but on the sphere it would be a 3D shape and I'm not smart enough to picture it in my head.

Thinking about perspective is hard! I wish someone would explain it once and for all!

The other thing I don't understand is why you need a 'virtual screen' when ray tracing. Why can't you just start with the observer in 3D space, and cast a bunch of rays a fixed angle apart either horizontally or vertically? The angle along with the number of rays would define the field of view. Why do we need this 'virtual screen' metaphor to get in the way? I guess it's got something to do with the monitor and its pixels, but I haven't really figured it out I'm sorry to say.

nikita

I think that metaphor exists for historic reasons and as an analogy to photography.
[see attachment]
The first line describes a pinhole camera model. A ray travels from the object through some kind of aperture, e.g. the pupil of your eye, and finally hits the film, sensor, or retina.
The second line has an additional "virtual" screen in it. You can think of it as a copy of the real screen, flipped around the aperture. It sits at the same distance from the aperture as the original screen and thus carries the same image (but upright). Basically, the first line says: the ray goes from the object through the aperture, onto the sensor. The second line says: the ray goes from the object through the (virtual) sensor, then through the aperture.
It is a purely theoretical construct, but it leads to the third line, where the black shape symbolizes the camera model raytracing usually uses: the aperture becomes the source/origin of all the rays cast. They then travel out through some pixel of the image into the world.
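To make that third model concrete, here is what a minimal raytracing camera looks like in code (Python, names invented for illustration): every ray starts at the aperture (the camera origin), and the virtual screen only determines its direction.

```python
import math

def primary_ray(i, j, width, height, fov_deg):
    """Direction of the ray through pixel (i, j); camera looks down +z.

    The virtual screen sits at distance 1 in front of the camera
    origin; fov_deg is the horizontal field of view.
    """
    aspect = height / width
    half_w = math.tan(math.radians(fov_deg) / 2)
    # centre of pixel (i, j) mapped onto the screen plane at z = 1
    x = ((i + 0.5) / width * 2 - 1) * half_w
    y = (1 - (j + 0.5) / height * 2) * half_w * aspect
    length = math.sqrt(x * x + y * y + 1)
    return (x / length, y / length, 1 / length)  # normalised direction

d = primary_ray(1, 1, 3, 3, 90)  # centre pixel of a 3x3 image
# points straight along the view axis: (0.0, 0.0, 1.0)
```

Note that because every ray starts at the same origin, geometry between the camera and the screen still gets hit: the screen only parameterises directions, it doesn't clip anything.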

Quote from: jknow on June 23, 2010, 07:26:26 AMIf you were standing in the sphere and took a photo (using a low distortion lens) what shape would the rectangular frame of the photograph project onto the inside of the sphere? It would look like a rectangle if your eye was in line with the camera sensor, but on the sphere it would be a 3D shape and I'm not smart enough to picture it in my head.
Just think of the field of view as a 4-sided pyramid that has its apex at the camera. The walls are where the 4 edges of the frame travel through space. Now imagine what the intersection of the pyramid and the sphere looks like. Think about it as a segment of a sphere. Do you own a cathode-ray TV or PC monitor? The glass of the screen looks a bit like a segment of a sphere: the edges are bent outward, and the middle of the screen even more so.
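You can even check the bulge numerically: project points along the top edge of the frame onto a unit sphere, and the middle of the edge lands at a higher latitude than the corners, i.e. the projected edge bows outward instead of following a line of latitude. A quick Python check, assuming a screen at distance 1 (my own construction):

```python
import math

def latitude_on_sphere(x, y):
    """Latitude (degrees) where the ray through (x, y, 1) hits the unit sphere."""
    return math.degrees(math.asin(y / math.sqrt(x * x + y * y + 1)))

# top edge of a 60 x 40 degree frame: y is fixed, x sweeps across
top = math.tan(math.radians(40) / 2)     # top of the screen
half_w = math.tan(math.radians(60) / 2)  # right edge of the screen

middle = latitude_on_sphere(0.0, top)
corner = latitude_on_sphere(half_w, top)
print(middle > corner)  # True: the middle of the edge sits higher
                        # than its corners, so the edge bulges
```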


jknow

Nice one! Except... the back of the eye, the retina, isn't flat. The eye is spherical, so the retina must be vaguely spherical too. What impact does this have on how we see the world, and to what extent does a curved retina further highlight the inadequacy of the camera metaphor for perspective and CG?

nikita

A curved sensor represents exactly the constant-angle approach, I think.

I've also been thinking about why nobody actually does this, and I guess the reason is that, with a curved virtual screen/sensor, you're not really solving the projection problem: you still have to project the curved image onto a plane (your monitor) in the end.

Henry Blewer

The infinite point for the camera (the pinhole) makes the calculation of the ray path simpler. It's been a long time since I have worked with the math, so I can't get too techy.
http://flickr.com/photos/njeneb/
Forget Tuesday; It's just Monday spelled with a T

jknow

Hi. I still don't understand the approaches to computer graphics rendering illustrated above. Would someone be able to explain it to me?

In nikita's excellent diagram, I understand the top two approaches. The pinhole camera analogy is often used in computer graphics textbooks, as illustrated below.


They either place a virtual screen between the observer's eye (the centre of perspective) and the object (b), or they model the back of a camera (a). I think b is more common.

This analogy works fine when the observer's eye is 'outside' the model space, eg it's great for discrete 3d models like houses or bunnies.

But I don't understand option three on nikita's diagram. It's clear where the eye is, and the field of view is shown by the angle of the 'wedge'. But where the heck is the screen? Where are the pixels? The screen can't be a certain distance from the observer, otherwise we'd lose the foreground terrain. I'm thinking here that the observer is now 'inside' the model, not standing on the outside looking in. I just don't understand where the rays that trace the virtual screen are, because I don't see a virtual screen. Does this confuse anyone else, or is it just me? Thanks a million if you can help

nikita

That third thing is the same as the second option, only visualized in a different way.

jknow

But if there's no screen, how do you work out the ray vector directions? You can't just divide the field of view by the number of pixels because that would emulate a cylindrical or spherical projection. A linear projection assumes that as the angle from eyepoint to screen edge increases, the angle subtended by each pixel decreases, because each pixel is successively more oblique to the eyepoint.

Perhaps you just need to put the screen behind the eye, as the film sits behind the aperture in a real camera. But then the question is: how far back do you place the imaginary film plane? I suppose it doesn't matter, because the ratio between pixel spacing and angle per pixel remains the same.

Wow. That makes no sense. But you can see my conceptual difficulty hopefully?

The key problem is working out the ray directions, and to do that you need a screen somewhere. With terrain there's no point putting the screen in front of the camera because then the foreground between screen and camera technically doesn't exist. So I guess you can put it behind, and remember to invert the image back to normal (left to right, and up to down).

Wow my head hurts.
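For what it's worth, the guess above checks out: put the screen at distance d, scale it by the same d, and the normalised ray directions come out identical, so the screen distance really doesn't matter. A quick Python sanity check (2D for simplicity, my own construction, not TG2 internals):

```python
import math

def ray_through_pixel(i, width, fov_deg, screen_dist):
    """Direction (x, z) of the ray through pixel column i.

    The screen sits at screen_dist in front of the eye and is scaled
    so it always spans the same field of view.
    """
    half_w = screen_dist * math.tan(math.radians(fov_deg) / 2)
    x = ((i + 0.5) / width * 2 - 1) * half_w
    length = math.sqrt(x * x + screen_dist * screen_dist)
    return (x / length, screen_dist / length)

near = ray_through_pixel(3, 16, 90, 1.0)
far = ray_through_pixel(3, 16, 90, 5.0)
print(all(abs(a - b) < 1e-12 for a, b in zip(near, far)))  # True: the
# screen distance cancels out of the normalised direction
```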