Theory thread

Started by TheBadger, January 27, 2013, 03:11:52 AM


TheBadger

Hello,

I would like to know (as much is possible or reasonable) at what distance (from camera to surface) real displacement is necessary to convince the eye of a 3d texture on an object.  ::)

Ok, let me ask another way.

On a terrain, from a certain height, it is possible to create the illusion of grass, simply by adding a number of colors to said surface. We all know from experience that if you move your camera to the surface this trick no longer works.

But I want to ask about this at a more micro level relative to the grass example.

How close can you get your camera to a porous brick surface before you need to add displacement/bump to continue to trick the eye? Or fabric, or stone, or whatever?

Has anyone ever run into a study or book or essay or anything that discusses this type of 3D production theory?
Because if we were talking about something to do with editing or photography, or most any other discipline, I guarantee that there is writing on the subject.

I'm not asking for no reason, I would really like to have access to this kind of information. I believe having enough information like this could save a lot of work time. It sure does in every other area of the arts.

thanks for trying to remember if you have ever run into a source I can get access to.
It has been eaten.

TheBadger

Here is an interesting article on making your renders look like they were shot with a real camera.
I don't accept all of the writer's premises or conclusions, but the information seems good, and the example images help to illustrate the issues.
http://www.blenderguru.com/the-1-reason-your-render-looks-fake/


mhaze

Sadly photographic images are increasingly being perceived as more real to us than what we see in the real world.

TheBadger

In response to my own OP question... I was able to find this article http://www.blenderguru.com/videos/the-secrets-of-realistic-texturing/

It seems to have a lot of information related to my question, and should be of use to others. It's a Blender article, but the information is not Blender-specific; it can be applied generally.

Dune

Thanks for this thread, Michael, really interesting. I can't tell you much (nothing actually), but I'm interested what comes up here.

FrankB

Sure, you can replace small-scale displacement with just color, creating the illusion of something. It's just a question of at what distance something, let's say a bush, is merely a pixel or less in the final render. So your parameters are object size, distance from camera, and render resolution.
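Frank's three parameters can be put into a quick back-of-the-envelope formula. This is just a sketch of a pinhole-camera projection (the function name, FOV, and example numbers are illustrative, not anything Terragen exposes):

```python
import math

def projected_size_px(object_size_m, distance_m, hfov_deg, render_width_px):
    """Approximate width in pixels that an object covers in the final render."""
    # Width of the visible field at the object's distance (pinhole camera model).
    field_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return object_size_m / field_width_m * render_width_px

# A 1 m bush, 5 km from the camera, 60 degree horizontal FOV, 1920 px wide render:
print(projected_size_px(1.0, 5000.0, 60.0, 1920))  # ~0.33 px -> color alone is enough
```

Once the result drops below about one pixel, displacement can't add anything the render will actually show, which is exactly the crossover point being discussed here.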

If you have seen the close-up shot of the tropical environment in my planets thread... there I used displacement to mimic trees, and that goes on perpetually up to the horizon. I tried to blend the displacement out at a certain distance, about 5 km away from the camera, to save render time. From the point where the displacement faded out, color took over, and it still looked right.
However, Terragen's level-of-detail calculations are so good that there were barely measurable speed gains, if any at all.
So in the end, I just removed the displacement fader.
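The fade-out described above amounts to a linear ramp on the displacement strength by camera distance. A minimal sketch, assuming a simple linear blend (the fade_start/fade_end values are made up for illustration, not what Terragen uses internally):

```python
def displacement_weight(distance_m, fade_start_m=4000.0, fade_end_m=5000.0):
    """Blend factor for displacement: 1 near the camera, ramping to 0 beyond fade_end."""
    if distance_m <= fade_start_m:
        return 1.0
    if distance_m >= fade_end_m:
        return 0.0
    # Linear falloff across the fade band.
    return 1.0 - (distance_m - fade_start_m) / (fade_end_m - fade_start_m)

print(displacement_weight(1000.0))  # 1.0 (full displacement)
print(displacement_weight(4500.0))  # 0.5 (halfway through the fade band)
print(displacement_weight(6000.0))  # 0.0 (color only)
```

Multiplying the displacement amplitude by this weight gives the hand-tuned version of what the renderer's automatic LoD is already doing.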

One more thing: if I had rendered much larger, I would have had to let the displacement fade out even farther away.

Frank

bla bla 2

like this?  :)

TheBadger

Thanks, guys, for considering the question. I was sure at least one of you had come across a paper, or book, or something, that contained research proof. Maybe a grad student's thesis will turn up at some point; that would be fun  ::) Nonetheless, I do hope to find some kind of info to shove into my ear, like a pencil.

Hey FrankB!
I like how you described your thoughts on the topic.
These words really put it into perspective:

Quote: "One more thing: If I had rendered much larger, I would have had to let the displacement fade out farther away even."

So like everything else, it's entirely scene dependent... But at a certain macro (camera) level, it has to be the same for everyone?
I would be inclined to just accept that I was frustrated with something and wishfully thinking that some magic answer was out there someplace. But a lot of what you wrote, and the way you wrote it, makes it sound like something quantifiable.

It's a little strange, because like I said, all the other areas of the arts have tons of writing and info like I was asking for. It's got to be out there someplace.

On topic: PBS did run a documentary on the golden mean and fractals, The Thumb Print of God, or something like that. That was a really broad and grand view of the ideas. But if this software is a practical application of those big ideas, then shouldn't there be research and writing on the details (like the OP question)?
They even did research showing that most people prefer a golden rectangle to any other four-sided shape when confronted with many similar forms.

Oshyan

I don't think there is any writing on this because the details of this will vary by the techniques used (e.g. real geometry, displacement, bump mapping, etc.), and the render engine used. In theory, or rather, ideally speaking, your render engine's Level of Detail optimizations should automatically handle the reduction of detail in the background and optimize the balance between necessary render time and detail. Essentially, you're aiming for detail that is roughly pixel-sized to be fully represented, and anything smaller than that gets ignored or at least represented less accurately (and with less time consumed to render). So yes, it depends on render resolution.

But again, the ideal is for the render engine to handle this. That's not to say it is perfectly implemented in TG or any other render engine, but note Frank's comment above: trying to control displacement by distance to reduce render time did not actually work better than TG's built-in (and, more importantly, automatic) LoD systems. So this is a case where things are working right. They don't always work ideally, and sometimes workarounds are needed, but those workarounds are almost always based on the same idea: control level of detail by distance from the camera, i.e. by size in screen space. The actual requirements will tend to vary scene to scene. There may be some basic rules of thumb you could come up with, but they will likely only hold true for scenes similar to the one you derived them from, so you could end up with a whole bunch of different guidelines for different types of scenes. Ultimately it may be better to develop an intuitive understanding of which elements increase render time, and how to control their appearance at distance from the camera.
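The "roughly pixel-sized" threshold described above can be sketched as a crude LoD test: project the feature onto the screen and keep displacement only if it spans at least about a pixel. This is an illustrative pinhole-camera approximation, not how any particular engine implements LoD:

```python
import math

def needs_displacement(feature_size_m, distance_m, hfov_deg=60.0, width_px=1920):
    """Crude LoD test: keep displacement only if the feature spans >= ~1 pixel."""
    # Width of the visible field at the feature's distance (pinhole camera model).
    field_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    feature_px = feature_size_m / field_width_m * width_px
    return feature_px >= 1.0

# 2 cm brick pores: worth displacing at 5 m, pure color territory at 500 m.
print(needs_displacement(0.02, 5.0))    # True
print(needs_displacement(0.02, 500.0))  # False
```

Note that the threshold moves with `width_px`, which matches Frank's point that a larger render would need the displacement to persist farther from the camera.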

- Oshyan

TheBadger

You are right.

I believe it is possible to do, but it's unlikely anyone would do the testing necessary to compile the data to draw ranges from. It sure would be nice to have at least a few "always" and "never" statements to trust in, though.

So after searching quite a lot for something like what I was asking about, I give up. I think FrankB's statements are as close as I will get. That's alright; of course, we have all done fine without that kind of information so far.

Thanks guys.