Compute terrain VS Compute normal

Started by dhavalmistry, April 02, 2007, 11:48:15 pm

Previous topic - Next topic

Harvey Birdman

April 03, 2007, 02:44:14 pm #15 Last Edit: April 03, 2007, 04:11:44 pm by Harvey Birdman
Neither one 'includes lighting'. The two serve different, complementary purposes which together determine how the light interacts with a bit of terrain. Position has an effect because (in a typical lighting model) the light intensity is reduced the further away a point is from the light source. The normal (which may be thought of as describing the orientation, as opposed to the position, of a given bit of terrain) determines how much of the light available at a given position actually falls on that bit of terrain. If the terrain is edge-on to the light, none will fall on it; if it's perpendicular to the light, the maximum available will fall on it.
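To make that concrete, here's a toy Python sketch of a simple point-light model. Everything in it (the function name, the inverse-square falloff, the numbers) is illustrative, not Terragen's actual lighting code; it just shows how position enters through the distance term and the normal through the cosine term:

```python
import math

def lambert_intensity(light_pos, light_power, point, normal):
    """Light reaching a surface point under a simple point-light model.

    Position matters through the inverse-square falloff; the normal
    matters through the cosine (dot-product) term.
    """
    # Vector from the surface point toward the light.
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(c * c for c in to_light))
    to_light = [c / dist for c in to_light]

    # Cosine term: 1.0 when the surface faces the light head-on,
    # 0.0 when it is edge-on (clamped so back-facing also gives 0).
    cos_term = max(0.0, sum(n * l for n, l in zip(normal, to_light)))

    # Inverse-square distance falloff.
    return light_power * cos_term / (dist * dist)

# A point 2 units below a light, with the surface facing straight up:
print(lambert_intensity((0, 2, 0), 8.0, (0, 0, 0), (0, 1, 0)))  # → 2.0
# The same point, but with the surface edge-on to the light:
print(lambert_intensity((0, 2, 0), 8.0, (0, 0, 0), (1, 0, 0)))  # → 0.0
```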

I wouldn't be surprised if 'Compute Terrain' calls 'Compute Normal' internally, as part of defining the surface at a given point.

All this being true, you can see why using 'Compute Normal' probably isn't appropriate for placing objects. Under certain conditions you might end up with something that looked acceptable, but it would be pure luck.

(I'm not a great teacher, really. I always have trouble coming up with alternative analogies and end up repeating myself.)


great help guys, thanks a lot

As for the object-placing part, the fault was somewhere else. I'd messed around with a cloud shader and ended up with objects on the ground, in the air, etc., which might be cool for snow or raindrops, I don't know.

Got the placing sorted out as well, so everything's fine :)
perfection is not when there's nothing more to add, it's reached when nothing more can be left out


Thanks, Harvey.  This has helped.  I'm still learning about this subject...and I think I have time to let this sink in and gradually learn about it.  These two items sound so similar, even in definition - therein lies my focal learning point.
So this is Disney World.  Can we live here?

Harvey Birdman

You think THAT sounds confusing, just wait till we get to normalizing the normals.

I'd like to shoot the SOB that came up with that one.


April 03, 2007, 03:25:39 pm #19 Last Edit: April 03, 2007, 03:32:07 pm by njen
calico, if you would like to find out more, I suggest you start looking into RenderMan, specifically the documents and tutorials relating to RSL (the RenderMan shading language). It might seem overwhelming at first, but with dedication you can eventually piece together what it all means. I had a hard time originally understanding these terms, coming from an art background, but I persevered until I was able to write my own RenderMan shaders using these various mathematical concepts (and I still don't consider myself much of a coder).

'Advanced RenderMan: Creating CGI for Motion Pictures' and 'Essential RenderMan fast' are the two books to start (and almost end) with, as far as books go.

Here are two great websites to start you off with:

Harvey Birdman

A more general introductory 3D graphics text might also be useful. '3D Computer Graphics' by Alan Watt is pretty good - it's the first book I read on the subject.


I am still confused... can someone please explain it to me in plain English? (English is my second language and it's not very good.)
"His blood-terragen level is 99.99%...he is definitely drunk on Terragen!"


April 04, 2007, 10:07:36 am #22 Last Edit: June 16, 2009, 07:39:47 pm by Matt
Basically Compute Terrain does two things:

1) Calculate the surface normal according to a chosen "patch size".
2) Update the texture coordinates (shader coordinates) to match the current shape of the surface. They are a slightly smoother version of the terrain, according to the chosen "patch size", for reasons I will explain later.

Compute Normal calculates the surface normal in the same way as Compute Terrain, but does not update the texture coordinates.

1) Surface normal

The "normal" of a surface is the vector that is perpendicular to the surface. As someone said, you can visualise a set of normals as a whole bunch of arrows pointing out of the surface. On a flat plain they would all point upwards, but on the side of a cliff the surface normal points sideways out of the cliff.
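To put the "arrows pointing out of the surface" picture in code: here's a minimal, hypothetical Python sketch that computes the unit normal of a single triangle of surface via the cross product of two of its edges (illustrative only, not how TG2 does it internally):

```python
def triangle_normal(a, b, c):
    """Unit normal of triangle (a, b, c): the vector perpendicular to
    the surface, found as the cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A flat, horizontal triangle: the normal points straight up,
# like the arrows on a flat plain.
print(triangle_normal((0, 0, 0), (0, 0, 1), (1, 0, 0)))  # → [0.0, 1.0, 0.0]
```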

Traditionally in 3D graphics, surface normals would be used for lighting calculations. But with the advent of displacement shaders we also need some direction in which to displace the surface, and usually this is along the surface normal (although not always).

Displacement can happen in any direction in TG2, but most shaders default to displacing along the surface normal. This is usually what we want, so that rocks can jut out of the sides of steep cliffs instead of distorting the side of the cliff up and down. If there is no Compute Terrain or Compute Normal, the surface normal is simply the local "up" vector, which is the vector that points away from the centre of the planet. (For other objects the situation is different, but Compute Terrain and Compute Normal are only recommended for use on a planet.)
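A hypothetical sketch of the difference in Python; `displace` is just a made-up helper that moves a point along a given unit vector:

```python
def displace(point, direction, amount):
    """Move a surface point by `amount` along `direction` (a unit vector)."""
    return tuple(p + amount * d for p, d in zip(point, direction))

# On flat ground the normal and the local "up" vector coincide,
# so displacement simply raises the point:
print(displace((5.0, 0.0, 5.0), (0.0, 1.0, 0.0), 2.0))   # → (5.0, 2.0, 5.0)
# On a vertical cliff face the normal points sideways, so a rock
# displaced along it juts out of the cliff instead of sliding up it:
print(displace((5.0, 10.0, 5.0), (1.0, 0.0, 0.0), 2.0))  # → (7.0, 10.0, 5.0)
```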

"Patch size" affects how rapidly the surface normal changes with respect to position on the surface. It is important to understand this if you have very large displacements occurring after the surface normal has been calculated. Imagine, for example, that you have a displacement shader that creates 100 metre spikes and you apply it after the Compute Terrain or Compute Normal. If the underlying terrain is fairly rough and the Compute Terrain's "patch size" is only 1 metre, the surface normal will change direction very rapidly. The spikes will overlap in some places and spread out far too much in others. In this case the solution would be to increase the patch size, perhaps to 100 metres, so that the surface normal changes more gradually.
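Here's a rough stand-in for that behaviour in Python, approximating the normal of a heightfield by sampling it one "patch" to either side of the point. None of this is Terragen code; it only illustrates how a small patch size makes the normal follow every ripple, while a large one smooths the ripples out:

```python
import math

def patch_normal(height, x, z, patch_size):
    """Approximate the surface normal of heightfield `height(x, z)` by
    sampling it one patch either side of (x, z): a rough stand-in for
    how a "patch size" controls how quickly the normal can change."""
    h = patch_size
    dx = (height(x + h, z) - height(x - h, z)) / (2 * h)  # slope in x
    dz = (height(x, z + h) - height(x, z - h)) / (2 * h)  # slope in z
    n = (-dx, 1.0, -dz)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Rough terrain: 1-metre ripples on top of a very gentle slope.
rough = lambda x, z: 0.5 * math.sin(x * 6.28) + 0.01 * x

# A small patch sees every ripple, so the normal swings wildly
# (its y-component dips well below 1)...
small = [patch_normal(rough, x / 10, 0.0, 0.1)[1] for x in range(5)]
# ...while a large patch averages the ripples out and follows only
# the underlying slope, so the normal stays near straight up.
large = [patch_normal(rough, x / 10, 0.0, 100.0)[1] for x in range(5)]
print(min(small), min(large))
```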

The surface normal is also used for other purposes such as slope constraints in the Surface Layer and in the Distribution Shader. If you use the Surface Layer to apply displacement or use the Distribution Shader to control displacement, then the surface normal is important if you use slope constraints.

"Final normal":

However, if you use slope constraints to control only the colour or other non-displacement attributes of a surface, usually you don't need to worry about the surface normal. A normal is automatically calculated from the micropolygons after all displacement has been applied, and this is called the "final normal". This gives you the highest level of detail. In some shaders you have the option of changing which type of normal you want to use when applying effects based on slope or similar, and this is sometimes useful if you want the constraints to be applied identically to both displacement and colour.

You don't need to worry about surface normals for lighting calculations in Terragen 2. All shaders in TG2 currently use the "final normal" for lighting calculations.

2) Texture coordinates

There are many shaders in TG2 which apply both displacement and colour, and it's important that the colour and displacement align as closely as possible. All colour and non-displacement shading effects happen after all the displacement from all shaders has occurred. This can lead to problems with shaders that are part way through the displacement chain, because they operate on a surface which does not know its final position after the remaining displacements have been calculated. That is OK if they only perform displacement, but if they apply colour that needs to align with the displacement there will be a mismatch.

To solve this problem, most shaders in TG2 use texture coordinates and feed those into their noise functions for calculating both colour and displacement. Initially the texture coordinates at any point on the terrain are simply the 3D coordinates of that part of the terrain prior to any displacement, or in other words the flat part of the planet that terrain was displaced from. Unfortunately this makes it impossible to texture the sides of cliffs without stretching the texture (and also causes other problems because so many shaders need to use the texture coordinates), so at some stage in the shader chain we need to copy the texture coordinates from the displaced terrain coordinates. Compute Terrain does this for us as well as computing the normal. You can also use "Tex coords from XYZ" (in Shaders -> Other) to copy these coordinates without recomputing the normal.
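A toy illustration of the stretching problem (the "texture" here is just a made-up stripe function, not anything from TG2):

```python
def stripes(coord):
    """A toy 'texture': alternating bands along the y (height) axis."""
    return int(coord[1] * 4) % 2  # 0 or 1

# A vertical cliff face: many surface points displaced upward from
# (almost) the same undisplaced spot on the flat planet.
cliff_points = [(0.0, y / 10, 0.0) for y in range(10)]
undisplaced = (0.0, 0.0, 0.0)  # where they all came from

# Texturing with the original (pre-displacement) coordinates gives
# every point on the cliff the same colour -- a stretched texture:
print({stripes(undisplaced) for p in cliff_points})  # → {0}
# Copying the displaced positions into the texture coordinates
# (roughly what Compute Terrain does) restores the variation:
print({stripes(p) for p in cliff_points})            # → {0, 1}
```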

It is important that the texture coordinates are updated before any colour or non-displacement shading occurs, otherwise the colour and non-displacement will not match the displacement correctly.

"Smoothed texture coordinates"

The Compute Terrain node performs both of the above functions (surface normal and texture coordinates). However, when it calculates the texture coordinates it actually generates a slightly modified version of the coordinates which are later used for special effects in the Surface Layer, and these are the "smoothed texture coordinates".

The smoothed texture coordinates serve as a smoother version of the terrain that can be utilised by the "Smooth effect" in the Surface Layer shader. These smoothed texture coordinates are also essential to the correct functioning of the "Intersect Underlying" feature in the Surface Layer. This is broken in the current public release, but has been fixed and improved for the next update.

The scale over which the smoothing effect operates is controlled by the "patch size" in the Compute Terrain, and this therefore affects the results of the "Intersect Underlying" feature. I intend to provide some documentation on this upcoming feature when we release the update.

Motivation for the Compute Terrain node

We decided that it would be useful to combine both surface normal and texture coordinates into a single node so that it is easier to separate the concepts of "terrain" and "surface shaders" and delimit them with a single node. Any large scale displacements that you apply should happen before the surface normal is computed, and it also helps if you avoid doing any colouration prior to this. So we wrapped everything up into a single Compute Terrain node. Any shaders that apply colour should come after the Compute Terrain so that displacement and colour can be aligned properly and so that slope and altitude constraints work properly. It also ensures that "smoothed texture coordinates" are available to any Surface Layer that needs to use them for its smoothing effect or Intersect Underlying effect.

Compute Terrain also serves another purpose: it indicates to the user interface the point at which to separate shaders into the separate Terrain node list and Shaders node list. This helps to encourage the use of large scale displacements (terrains) before the Compute Terrain and all other shaders after it, via the popup menus. This is one aspect of the interface that I hope we can improve upon in future, as it's still very clumsy and not at all clear why it works this way.

When to use additional Compute Normal or Compute Terrain nodes

When you perform large scale displacements prior to the Compute Terrain, sometimes you need to limit them according to slope or altitude. If you need them to know about altitude, use a "Tex Coords From XYZ" node because this is very fast to compute. If your displacements need to know about slope, for example if you use a Distribution Shader to affect displacement, use a Compute Normal. However, beware that Compute Normal and Compute Terrain slow down computation of their inputs, and the slow-down is compounded each time they are used. The slowdown only applies to the input shader network, so if they are high in the shader chain the slowdown can be minimised. It multiplies the time needed to compute the input displacement by a factor of approximately 3.
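A crude way to see where that slowdown comes from: computing a normal means evaluating the input shader network at neighbouring sample points as well as at the point itself. In this made-up Python sketch, a finite-difference normal costs four extra terrain evaluations per point (the code is illustrative; TG2's actual factor is roughly 3):

```python
calls = 0

def terrain_height(x, z):
    """Stand-in for the input shader network; counts every evaluation."""
    global calls
    calls += 1
    return 0.1 * x + 0.05 * z  # a simple sloped terrain

def normal_via_extra_samples(x, z, patch=1.0):
    """A normal needs the terrain at neighbouring points too, which is
    why a Compute Normal multiplies the cost of its input network."""
    dx = (terrain_height(x + patch, z) - terrain_height(x - patch, z)) / (2 * patch)
    dz = (terrain_height(x, z + patch) - terrain_height(x, z - patch)) / (2 * patch)
    return (-dx, 1.0, -dz)

terrain_height(0.0, 0.0)            # displacement alone: 1 evaluation
normal_via_extra_samples(0.0, 0.0)  # the normal adds 4 more
print(calls)  # → 5
```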

Apologies for the mini thesis, but I hope it sheds some light. If someone who is not so close to the implementation can offer a summary of the important information, that would be appreciated.

Just because milk is white doesn't mean that clouds are made of milk.


Apologize?  No way!  Thanks!!! This is going in my collected notes.
So this is Disney World.  Can we live here?

Harvey Birdman

Really! That's a keeper.


One question regarding performance, if I may. Suppose I have several populations of plants, and have connected the output of the (default) ComputeTerrain node to the TerrainShader input on these populations. Is this going to cause the ComputeTerrain to re-evaluate for each connection, or will it just evaluate once and supply the common result to the various populations?


Thanks for clearing that up, Matt.  The basics I was pretty much aware of, but the rest was pretty interesting :). - A great Terragen resource with models, contests, galleries, and forums.


Thanks for taking the time to explain this Matt. Lots of Food for Thought here!  ;D


April 06, 2007, 09:26:48 am #27 Last Edit: April 06, 2007, 09:39:04 am by Matt
Quote from: Harvey Birdman on April 04, 2007, 11:12:40 am
One question regarding performance, if I may. Suppose I have several populations of plants, and have connected the output of the (default) ComputeTerrain node to the TerrainShader input on these populations. Is this going to cause the ComputeTerrain to re-evaluate for each connection, or will it just evaluate once and supply the common result to the various populations?

Unfortunately it will re-evaluate this for every populator you have it connected to. However, unless you have multiple objects in exactly the same locations, this would be necessary anyway, so in most typical situations it wouldn't be possible to optimise this without sacrificing accuracy.

EDIT: This is probably a good place to say that you don't have to use the Compute Terrain node as the input terrain for your populations. Any shader network can be used, and it's often better to use the very last shader that plugs into the planet. Misunderstanding this is probably one of the more common causes of populations not sitting correctly on the surface. However, whichever shader you use, you should make sure that the surface normal has been computed either by that shader or further up the chain, otherwise slope constraints won't work on your populations.

Just because milk is white doesn't mean that clouds are made of milk.

Harvey Birdman


Argh!!!... of course... (sounds of little lightbulbs going on)
back to the Cave !