distribution shader

Started by René, August 05, 2019, 03:59:42 AM

Dune

I have to contradict you, unless I'm missing something (I often miss something obvious  :P ). I see no difference between using a Negate Scalar and just an inverted Color Adjust. I used to use Green to Scalar, but I guess in this case it's the same as Y to Scalar.

What I don't understand though, and maybe you can explain: if you clamp the green/Y to isolate the tops and then negate, I would expect the positive values to turn negative and give you the underside. But that obviously doesn't work.

Dune

I just realized the obvious  :P ; the last color adjust clamps to positive values, and there are none left after negating the clamped color!

Hetzen

Yeah that's right. The output of Get Normal ... has a range -1 to 1. When extracting the Y component, 1 faces up, -1 faces down. Clamping at 0 removes all the negative values. Negating makes negative values positive and positive values negative. So in essence you're flipping the Get Normal range, which lets you isolate a mask of 0 to 1.
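To make the order of operations concrete, here is a minimal sketch in plain Python (not Terragen's node API; the clamp_low function and the sample values are stand-ins for a colour adjust and the Y component of Get Normal):

```python
# Minimal sketch of the clamp/negate order, assuming y is the
# Y component of Get Normal in the range -1 (down) to 1 (up).

def clamp_low(x, lo=0.0):
    """Clamp from below, like a colour adjust that discards values under lo."""
    return max(x, lo)

samples = [-1.0, -0.5, 0.0, 0.5, 1.0]

# Upward-facing mask: just clamp at 0.
up_mask = [clamp_low(y) for y in samples]             # [0.0, 0.0, 0.0, 0.5, 1.0]

# Downward-facing mask: negate FIRST (down-facing becomes
# positive), then clamp.
down_mask = [clamp_low(-y) for y in samples]          # [1.0, 0.5, 0.0, 0.0, 0.0]

# Clamping first and negating after leaves only values <= 0, so a
# final adjustment that keeps positive values returns all zeros.
broken = [clamp_low(-clamp_low(y)) for y in samples]  # [0.0, 0.0, 0.0, 0.0, 0.0]

print(up_mask, down_mask, broken)
```

The last line is exactly the trap described above: once everything has been clamped and then negated, there are no positive values left for the final adjustment to keep.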

The accuracy of Get Normal in Texture depends on the patch size resolution set in the Compute Terrain, which determines how finely it samples the surfaces you're looking at.

Using a Get Normal will use the final normal in your network, which allows you to not use a Compute Terrain, BUT it has the sometimes undesired effect of computing over all displacements further upstream, including Fake Stones, which can really mess up the mask.
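As a rough illustration of that last point, here is a hedged sketch in plain Python (a 1D heightfield stands in for the terrain; base, stones, final and up_mask are made-up names for this example, not Terragen functions). A slope mask computed from the smooth base terrain stays clean, while the same mask computed from the final surface, bumps included, picks up their facets:

```python
import math

# Toy 1D "terrain": a smooth base hill plus small high-frequency
# bumps standing in for Fake Stones. Purely illustrative.

def base(x):
    return 10.0 * math.sin(x * 0.1)    # broad, smooth terrain

def stones(x):
    return 0.5 * math.sin(x * 8.0)     # small, high-frequency bumps

def final(x):
    return base(x) + stones(x)         # surface a render-time Get Normal sees

def up_mask(height_fn, x, eps=0.01):
    """Y component of the unit normal of height_fn at x, clamped at 0."""
    slope = (height_fn(x + eps) - height_fn(x - eps)) / (2 * eps)
    ny = 1.0 / math.sqrt(1.0 + slope * slope)   # normal = (-slope, 1), normalised
    return max(ny, 0.0)

for x in [0.0, 5.0, 10.0, 15.0]:
    clean = up_mask(base, x)   # akin to sampling before the stones are added
    noisy = up_mask(final, x)  # akin to a bare Get Normal on the final surface
    print(f"x={x:5.1f}  base-only mask={clean:.3f}  final-surface mask={noisy:.3f}")
```

The base-only mask varies slowly with the hill, while the final-surface mask jumps around wherever the bumps change slope — the same way Fake Stones can mess up a mask driven by a bare Get Normal.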

WAS

Quote from: Hetzen on August 07, 2019, 04:48:06 AM
Yeah that's right. The output of Get Normal ... has a range -1 to 1. When extracting the Y component, 1 faces up, -1 faces down. Clamping at 0 removes all the negative values. Negating makes negative values positive and positive values negative. So in essence you're flipping the Get Normal range, which lets you isolate a mask of 0 to 1.

The accuracy of Get Normal in Texture depends on the patch size resolution set in the Compute Terrain, which determines how finely it samples the surfaces you're looking at.

Using a Get Normal will use the final normal in your network, which allows you to not use a Compute Terrain, BUT it has the sometimes undesired effect of computing over all displacements further upstream, including Fake Stones, which can really mess up the mask.

Thanks for elaborating on this. Makes a lot more sense. This will surely come in handy. I've actually been brainstorming around this approach. 
 
I hope one day Get Normal (in all scenarios) will switch to its input node when one is plugged in (why have the input otherwise?) rather than defaulting to the planet's Compute Terrain. Then you could just hang a Compute Normal or Compute Terrain off your chain and feed it into these nodes (even Contours or PFs for distort-by-normal). Maybe one day.

Hetzen

The reason is that at some point you need to work out what your displacements are doing. Compute Terrain is there as a snapshot of the surface at that point in your work. It sets up the texture and normal mapping at that stage, so that further displacements and colouring can occur. If you want one without the other, you use a Compute Normal or Texture Coordinates from XYZ; Compute Terrain encapsulates both processes.

Get Position and Get Normal have their uses, but remember they look at the final surface at render time, which, when used for displacements, often causes unwanted 'spikes'.

TG has always had the philosophy that you set up your base terrain, then give it a general mapping via the Compute Terrain so you can colour and displace at a lower scale. There's nothing stopping you from using several Compute Terrains/Normals in your scene; they just add more to the render time. There's no way of avoiding that.

WAS

Quote from: Hetzen on August 07, 2019, 05:38:22 AM
The reason is that at some point you need to work out what your displacements are doing. Compute Terrain is there as a snapshot of the surface at that point in your work. It sets up the texture and normal mapping at that stage, so that further displacements and colouring can occur. If you want one without the other, you use a Compute Normal or Texture Coordinates from XYZ; Compute Terrain encapsulates both processes.

Get Position and Get Normal have their uses, but remember they look at the final surface at render time, which, when used for displacements, often causes unwanted 'spikes'.

TG has always had the philosophy that you set up your base terrain, then give it a general mapping via the Compute Terrain so you can colour and displace at a lower scale. There's nothing stopping you from using several Compute Terrains/Normals in your scene; they just add more to the render time. There's no way of avoiding that.

I understand that, but when working within an internal node network, away from a planet (or on another one), you are forced to drive some of these shaders from the one main Compute Terrain. For example, the Contour shader will preview a new Compute Terrain plugged into it and show you a cool new setup based on a separate terrain and patch size, but when you apply it, it's working off the planet's main Compute Terrain and contouring the main terrain, not the terrain it's fed and can obviously read.

It's probably not that way for other shaders, but having that functionality when a shader is specifically fed its own input would be awesome.

The philosophy is sound for that [main] planet, but not elsewhere, it seems.

René

Thank you all for thinking along. I'm currently working on an assignment; I'll come back to this tomorrow.