Yes, this type of documentation is still missing, and for the time being the gap is filled by experienced users, if they feel up to it, which luckily tends to be the case most of the time.
There's indeed logic and a "modus operandi" to the network, and it's best demonstrated by video.
There are superbasic videos already at my site:
www.cgscenery.com
I got distracted lately and haven't updated my site for a while, but I do intend to go through the basics in a video tutorial.
Quote from: SteveR on July 01, 2013, 05:14:21 PM
For example, if I texture a rock, I cannot seem to map that texture to my terrain without killing the geometry (it textures a flat plane lovely).
Either it only seems like the geometry is messed up, because the texture changes the surface's appearance, or you are applying displacement along with the texture.
Mapping a texture with an image shader does not affect geometry by default.
Quote
Why does my base terrain link to the 'input' of a 'base colours' which in turn links into the 'input' of a surface shader. Is this just a way of linking the terrain to a surface layer? Basic stuff I guess, I appreciate I am not quite getting it correctly, but I really just need an overview and a logic of the information flow through the nodes so I understand the correct order of things.
Yes you're correct.
It's easiest to imagine that the network "starts" at the terrain tab and "ends" at the planet shader.
You create and/or displace a terrain, then pass it through a "compute terrain" node to update the coordinates of your geometry, so that subsequent layers will match the geometry.
Then you indeed start applying surface shading with a "base colours" node, followed by a "surface layer" (in your case).
You can imagine this construction of nodes and their connections as a pipeline: you put something in at the top, and at the end it gives you the final result.
So it's a flow you're directing.
Logically, if you break a connection, you break the pipeline, and you will only get the result from the nodes below the place where you broke the connection.
This is because all the nodes perform additive operations, which aren't necessarily dependent on their input.
For example: if you disconnect your terrain from the compute terrain node, you will still have a flat planet, but with the shading added by the surface shaders.
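To make that additive idea concrete, here's a tiny Python sketch of the concept (purely hypothetical, not real Terragen code or its actual data structures): each node adds its contribution to whatever state it receives, so removing an upstream node just means its contribution is missing, while the downstream nodes still do their work.

```python
# Hypothetical model of an additive shader pipeline (NOT real Terragen code).
# Each node adds its contribution to the state it receives; a missing node
# simply means the pipeline starts from the default (flat, unshaded) state.

def terrain_node(state):
    state["displacement"] += 100.0   # raise the ground
    return state

def surface_layer_node(state):
    state["colour"] = "rocky grey"   # add shading on top
    return state

def run_pipeline(nodes):
    state = {"displacement": 0.0, "colour": "default"}  # flat, unshaded planet
    for node in nodes:
        state = node(state)
    return state

# Full pipeline: displaced terrain plus shading.
print(run_pipeline([terrain_node, surface_layer_node]))

# "Break the connection" above the surface layer: the planet stays flat,
# but the surface shading is still applied.
print(run_pipeline([surface_layer_node]))
```

In the second call the terrain node is disconnected, yet you still get the colour from the surface layer, which is exactly the flat-but-shaded planet described above.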
The input ports of the nodes act as nulls and are only meant to hook a node up into the network so that the network isn't broken.
They only accept the input and pass it on to the shader itself. So they don't manipulate it (hence "null").
The output at the centre contains the updated result, defined by that shader's parameters (which can in turn take other inputs as an option for manipulating those parameters).
These parameters can be things like displacement or colour.
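Here's a short sketch of that "null input port" idea in Python (again a hypothetical illustration, not Terragen's actual implementation; the class and parameter names are made up): the input port hands the upstream data through untouched, and the shader's output combines that data with its own parameters.

```python
# Hypothetical sketch of a "null" input port (NOT real Terragen code).
# The input port does nothing to the data; the shader itself combines the
# untouched upstream result with its own parameters to produce the output.

class SurfaceShader:
    def __init__(self, colour):
        self.colour = colour          # a shader parameter (e.g. colour)

    def input_port(self, upstream):
        return upstream               # null: pass the input through unchanged

    def output(self, upstream):
        state = dict(self.input_port(upstream))  # untouched upstream result
        state["colour"] = self.colour            # apply this shader's parameter
        return state

shader = SurfaceShader("mossy green")
result = shader.output({"displacement": 42.0, "colour": "default"})
print(result)  # displacement passes through unchanged; colour is updated
```

The displacement coming in from upstream is untouched, which is the point: the port is just plumbing, and only the shader's own parameters change the result.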
At the end, your terrain + its computed/updated coordinates + the texturing/shading of the terrain are fed into the planet.
The renderer creates the visible geometry within your camera (and a reduced version outside of the camera).
For each face of the geometry the software marches the node network to see what kind of detail and colour need to be added to that face of geometry.
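A rough Python model of that per-face marching (a hypothetical sketch of the concept only; function names and the state layout are invented, and the real renderer is nothing this simple):

```python
# Hypothetical sketch of per-face network evaluation (NOT how Terragen is
# actually implemented): for every visible face, "march" the node network
# and accumulate the detail and colour for that face.

def evaluate_network(nodes, face):
    result = {"face": face, "detail": 0.0, "colour": "default"}
    for node in nodes:                 # walk the network for this one face
        result = node(result)
    return result

def add_detail(state):
    state["detail"] += 1.5             # e.g. displacement detail
    return state

def add_colour(state):
    state["colour"] = "sandstone"      # e.g. surface shading
    return state

visible_faces = ["face_0", "face_1", "face_2"]   # geometry inside the camera
shaded = [evaluate_network([add_detail, add_colour], f) for f in visible_faces]
for s in shaded:
    print(s)
```

Each face gets its own pass through the network, which is why the network reads like a recipe that gets applied everywhere on the visible geometry.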
That's roughly how it works, but Matt (the developer) knows this best by far, logically.
I hope that makes it a tad bit clearer!?
Cheers,
Martin
(In practice the software marches the network from the planet upwards, Matt once told me, but I find that concept hard to grasp in terms of the way you generally set up your terrain, compute it and shade it. I find it hard to see how that would work the other way around, but it probably explains why the workflow is additive if you consider the flow to go top to bottom. Anyway, it doesn't matter in the end, because it works! So just forget this!)