Rectangular Noise

Started by Hetzen, March 09, 2010, 11:28:51 AM


Matt

#135
Hi Badger,

Procedural (in the usual Terragen sense) just means that we calculate the value of each point instead of reading the value from a file or user input. Vector displacement - as opposed to scalar displacement - means that we control the direction of the displacement, not just the magnitude. You can do procedural vector displacement just by plugging 3 functions into a Build Vector node and plugging that into a Vector Displacement shader. All this does is provide a function for each of the 3 components of the displacement: X, Y and Z, which together make the vector by which the surface is displaced. But you can think of it as being 3 different displacement shaders in X, Y and Z, combined into one vector displacement.
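Outside Terragen, the three-scalars-into-a-Build-Vector idea can be sketched in plain code. This is only an illustrative sketch: `fx`, `fy` and `fz` below are hypothetical stand-ins for the three function nodes (e.g. Power Fractal outputs), not real Terragen calls.

```python
import math

# Three stand-in scalar "function nodes" (hypothetical; in Terragen these
# would be e.g. Power Fractal outputs feeding a Build Vector node).
def fx(p): return math.sin(p[1] * 0.1)
def fy(p): return math.cos(p[0] * 0.1)
def fz(p): return math.sin(p[0] * 0.05 + p[2] * 0.05)

def vector_displace(p):
    """Build a vector from the three scalars and add it to the point,
    as a Vector Displacement shader would."""
    d = (fx(p), fy(p), fz(p))   # the Build Vector step
    return (p[0] + d[0], p[1] + d[1], p[2] + d[2])
```

Each component is just an independent scalar displacement in X, Y or Z; the Build Vector step combines them into one displacement vector.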

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Oshyan

Edit: Matt explained this better and more briefly while I was typing, but I'll post this anyway for the fun (and because Matt confirmed I was right, haha!).

There's nothing contrary between vectors and procedurals in this context (or otherwise). They are, if anything, simply not directly related or dependent. You can generate vectors without a "procedural" function, and you can have a procedural function that does not output (or is not interpreted as) vectors (in fact most in TG are this way, I believe). I think what you need to get a good handle on is the difference between vector displacement and regular displacement that I described above (and which is nicely illustrated at the link I provided to Pixar's docs). Then make sure you also understand how the procedural noise functions in Terragen (and, I think, most 3D apps) work, and the kind of data they normally create. Once you have that understanding it should make more sense why this doesn't "just work". That's not to say it's impossible, but it certainly doesn't work "out of the box". Terragen is set up to do basic, normal-oriented displacement (or single-axis if desired).

Now here's a thought for you. I have no idea if this will actually work, so it might be a bad idea to tell you; it might just confuse you more. But I'm counting on other people to try it, or at least correct me if I'm talking crazy. ;) Anyway, the thought is: take a look at the Vector Displacement Maps that people were able to use and *how they are applied in Terragen*. Then, instead of building your vectors out of an exported map from Mudbox or ZBrush, use Power Fractals as input to a Build Vector which goes into the Vector Displacement Shader. I don't see why this wouldn't work, at least in as much as it will *create displacement*. It will probably look terrible though, because the vectors will be all over the place, since there'd be no relationship between the outputs of 3 different Power Fractals. ;) You could try using 3 copies of the same noise and see what that does; the result ought to be more coherent in some sense at least. But hopefully you get the idea that noise values are not inherently a good thing to derive vector coordinates from for vector displacement.

OK, my curiosity was piqued and now *I'm* playing with this, haha. But please note I have never even done vector displacement from Mudbox or anything, so I really have no idea what I'm doing. So far I have a "result" but no idea what it means, so I won't show it yet. However I did come across something totally unrelated but novel-looking and I couldn't help sharing it with the question: "Why don't people play with clouds-as-terrain/objects more often?". Just look at this little garden of weird cloud shapes! :D

- Oshyan

Oshyan

Ok, this is doing something. I have to get to bed for now, but I will pick this up again tomorrow, unless somebody beats me to it (it's super easy to do; the question is how controllable/directable it is). But I'm surprised I've not seen someone doing this already... I kind of think someone probably has and I just didn't know it. :D

- Oshyan

Dune

#138
Thanks for your comprehensive posts, Matt and Oshyan. I was thinking along the same lines, but can't stop experimenting. As a matter of fact I've experimented before with procedural vdisp setups, like you mention, Oshyan, but you can't help getting weird unnatural crossovers where the 'membrane' overlaps, so I dropped that. My latest experiment was with a repeated and procedurally multiplied/changed vdisp map, while using several angles for displacement and several angles determining what should displace in what direction. But the main problem is indeed that a displacement is relative to a greyscale (PF or vdisp) and its origin in space; it won't fill the space, or fold the membrane into the end of the '3D space's whiteness'.
If the surface is already vertical or horizontal you can get blocky stuff, but not when you start out of smooth rounded slopes; TG should then displace that with a biased number or something, based on Y, X and Z in some blocky mathematical relation.
This morning I was thinking about using angle- and slope-based tilt and shear, fed by blocky PFs... so that'll be next. And make a better vdisp map in Mudbox (I'll try it image based, Jochen, thanks).

Tilt and shear doesn't work.

mhaze

"I'd like to try to implement some soft-edged versions of some useful functions that should allow us to create more useable "rectangular noise" procedures and other rocky cliff type effects"

The thing I'd most like is a soft edged voronoi 3d cell scalar.  So that I could build softer 3d displacement within voronoi cracks.

I've not had time to play much, but I've been following this closely. I have had some minor good results, but no better than some of the ones here. I did achieve another goal of mine, "flaky rock", and then, as is the way, crashed TG and lost the file. I'll soon have time to play, so we'll see.

Thanks, Matt, for the explanations; they have given me some ideas.

Matt

Quote from: mhaze on October 21, 2014, 03:14:19 AM
The thing I'd most like is a soft edged voronoi 3d cell scalar.  So that I could build softer 3d displacement within voronoi cracks.

Yep, that's the one I want to try first :)

Matt

mhaze

Great news ;D Meanwhile I've been trying to adapt mogn's bevel quilt cubic noise; sadly, though, I don't really understand it, nor do I have the time to play at the moment.

Tangled-Universe

Thanks a bunch, Matt and Oshyan,

It's good to read that there is experience with this, successful or not, and that options for the future are in mind.
Personally I wish a solution or partial improvement could be found soon to aid me in my job for Unity, but I understand very well that these things are not possible on such short notice.
But perhaps the following is nice for you to read and think about...

Yesterday I had a chat about this with Jon West (Hetzen) and we knew "vectorizing" the workflow is key.
What we need to take into account is the vector of the terrain, I suppose?
Using a "displacement shader to vector" node converts the terrain displacement to a vector.

From there we think we need to "normalize" the vectors from the rectangular noise values with the vector of the terrain.
Normally (pun), normalizing is a simple division of one over the other, but that gives really messed up results.

Yet I feel we have to work with these 2 together, the vector from the rectangular noise and the vector of the terrain you want to map it on.
After the normalization, or whatever you need to make it work, you feed the resulting final vector back into a vector displacement shader and you will have the terrain + rectangular noise.

How to tell the renderer how to "correct" or normalize is beyond me and even Jon couldn't wrap his brain around it.

Nonetheless I think it's nice to show this file showing the principle I'm talking about.


Matt,

Quote from: Matt on October 20, 2014, 09:03:28 PM
...

These days when I need to create a rocky cliff-like surface, I tend to hack away at a few different ideas and combine them together until it works well enough for the job at hand. It doesn't look any better than many of the networks that I've seen posted on this forum, and the setups are usually messy by the end of a VFX production where you never get a chance to really clean things up and understand how and why they're working. Sometimes I'll try to simplify these setups afterwards, and they fail to work in a satisfying way on other scenes that I try to apply them to. But I'll keep trying.

...

Haha, that's oh so damn recognizable, Matt. I can make an NWDA product of all my rock setups or whatever I've made, but they all seem to work only in very specific situations :)

mhaze

This is close, but note that any Z-aligned faces have no displacement.

j meyer

+1 more thank you Matt and Oshyan from me too.

Seems to be as complex as I was afraid it was, sigh.
Still hoping that some genius might find a simple solution one day.

Heehee, the Simple Solution Shader. ;)

mhaze


Dune

Now rotate it in Z or X direction and apply only to Z or X faces.

mhaze

#147
One thing at a time; I've just spent the last hour trying to restrict it only to steep slopes, with absolutely no luck whatsoever.

UPDATE: managed to mask it with the disp to vector shader.

jmgibson


Gosh, thanks Matt.  What a compliment.  I think I still have the VFL code for that function.  I think we called it "Bouldering" at the time.  I can track it down and send it to you when I have a sec.  If I recall correctly I also implemented it as a SOP in Houdini, and it looks AWESOME animating.  I will see if I can find it, but here is the general solution:

Loop over the 3x3x3 cell area centered on the current jittered cell center C, finding the 26 neighboring cell centers Ax = A0...A25.  Define each cell's boundary surface to be the surface, Sx, half-way between C and Ax, with the normal Nx = normalize(Ax - C).  Project the point P being processed along P - C onto Sx.  Call this ray intersection Ix.  Remember Ix and dx = distance(Ix, P).  The closest boulder face after the loop is the one with the smallest positive dx.
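That loop can be sketched as plain code. This is only a sketch of the general idea, assuming hash-jittered cell centers and the 26 neighbours of a 3x3x3 neighbourhood; all names (`cell_center`, `nearest_boulder_face`) and the hashing constants are invented for illustration, not taken from the original VFL code.

```python
import math
import random

def cell_center(i, j, k):
    """Jittered center of integer cell (i, j, k); deterministic per cell."""
    rnd = random.Random((i * 73856093) ^ (j * 19349663) ^ (k * 83492791))
    return (i + rnd.random(), j + rnd.random(), k + rnd.random())

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def nearest_boulder_face(p):
    """Project p along the ray C -> p onto the nearest cell-boundary plane."""
    ci = tuple(int(math.floor(x)) for x in p)
    c = cell_center(*ci)                     # this cell's jittered center C
    u = norm(sub(p, c))                      # ray direction through P
    best_t, best_ix = None, p
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                if di == dj == dk == 0:
                    continue
                a = cell_center(ci[0] + di, ci[1] + dj, ci[2] + dk)
                n = norm(sub(a, c))          # wall normal Nx
                m = tuple((x + y) / 2 for x, y in zip(c, a))  # wall midpoint
                denom = dot(u, n)
                if abs(denom) < 1e-9:
                    continue                 # ray parallel to this wall
                t = dot(sub(m, p), n) / denom
                if t > 0 and (best_t is None or t < best_t):
                    best_t = t               # smallest positive distance dx
                    best_ix = add(p, tuple(t * x for x in u))
    return best_ix
```

Snapping every surface point to its `nearest_boulder_face` is what flattens the terrain onto the Voronoi cell walls, giving the faceted boulder look.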

Quote from: Matt on October 20, 2014, 09:03:28 PM
This is an interesting problem that I keep coming back to. I'm often finding that I need to create rocky cliffs, and it's really difficult to do well.

Simply displacing along the normal has problems. I've seen some procedural examples in this thread that look pretty good when you apply them on a smooth surface, or even a fairly steep surface that points in the same general direction. But there's always the problem of what to do near the top of the cliff where it merges into the flat top of the cliff, and what to do when the cliff face changes direction.

One of the smartest people I know is a guy by the name of Johnny Gibson. In 2001, I was working at Digital Domain. We were working on a film called The Time Machine. Johnny was simulating the erosion of a canyon, a bit like the Grand Canyon, over thousands of years, depicted like a time-lapse sequence. Starting with an elevation map of the Grand Canyon with all its tributaries and fractal valleys, I suggested that we could wind back the clock by adjusting the white point and black point on the elevation map and adjusting the curve, and so on, and then running those adjustments forward through time to gradually widen the canyon at the same time as deepening it. I don't remember if we stuck with that approach, but that was our general starting point for the shape of the canyon eroding over time. That was the easy part.

We needed to add procedural detail to the canyon, of course, and that detail had to change over time. It couldn't be static. Johnny wanted to make it as realistic as possible. He wanted to simulate volumetric rock with hard bits and soft bits that would influence the shape of the surface as it eroded. He was working in Houdini, which at that time was probably the package most suited to this kind of R&D with confidence that it could all be turned into production quality renders when all's said and done. As I recall, one of his ideas was to use Voronoi noise and to project/displace the surface towards the nearest Voronoi cell boundary. As the surface gradually lowered, there would be frames where the surface would instantly pop to a different cell wall. The idea was that this would create a really rocky, craggy looking surface that would collapse in discrete chunks over time as the soft material around the rocks eroded. It sounded like a great idea to me. Unfortunately the client didn't like the frenetic appearance of it due to the extreme time-lapse - even though that might be realistic as far as we were concerned - and we were never able to take it to its full conclusion and make it look really good. But the idea has stuck with me ever since, and I'm reminded of it every time I want to create a surface that should be defined volumetrically to get the best results.

The canyon we ended up with in that film didn't look great - and is pretty poor by today's standards - but I think that's because they were forced to change direction very late in the game and only had a few short weeks to come up with something completely different. Terrain was a difficult thing to make photorealistic in those days, so that kind of U-turn wasn't good. I would have liked to see Johnny's volumetric terrain given the chance to see the light of day. It could have been a much more awesome piece of cinema.

Incidentally, a strange phenomenon would occur when we tried to volumetrically texture (colour) the rock as it was eroding. As the canyon widens, it looks as though the texture is somehow sliding across the surface, and it looks strange even when you understand why it's happening. It's not the kind of thing you want to happen in a movie scene where there's already a lot of crazy stuff happening that most people won't be able to comprehend. So we had to do a lot of cheats, generate UV maps, and slide textures using keyframes. Horrible stuff you would never imagine needing to do, just to make it look OK in the end. Sometimes doing it 100% correctly results in something you really don't want to look at. While the frenetic popping of disappearing boulders had this effect on the client, the sliding volumetric textures made us realise that we weren't immune to such things.

In 2009 I was back at DD and was asked to develop a RenderMan shader for the earth opening up as Santa Monica Airport was being torn apart by whatever ridiculous thing was supposed to be happening in the movie "2012". I really wanted to produce a nice volumetric appearance to the sides of the chasm so I set about using Johnny's idea to do so. I needed to implement a version of Voronoi that returns more information than you get from the textbook/web examples. So I did that. After solving a few other problems along the way, and making some compromises to work around other problems that I couldn't figure out, I got something fairly decent. But it doesn't fully achieve the goal of a volumetric voronoi displacement. The things I wanted to do produced discontinuities that I didn't have time to work out a way to prevent. It was good enough for the specific geometry it would be applied to for the movie, but I ran out of time to really solve the general problem, and once they were happy with the results it was time for me to move on.

I wanted to bring that knowledge back into Terragen, and either write a shader or share a combination of function nodes to do this. I still want to do this. But it's difficult to devote weeks and weeks to a difficult problem that you don't even know for sure will succeed in the end, when users are screaming for more immediate problems to be solved and there are only 2 people to solve them. I have had some minor successes in this area and I keep coming back to this research every now and again. I think that some day I'll be able to show you a working technique or perhaps a shader to accomplish this.

By the way, this whole problem is solved by using an isosurface renderer. Maybe in the future we can give you isosurfaces as an alternative method of building and rendering terrains. While it could be done in a fairly simplistic way and released as part of the product, many of Terragen's existing capabilities would be missing from it. It would essentially be like a separate kind of entity within the scene. I don't know if isosurfaces will render more efficiently than micropolygons. It might take years before it would become a mature, reliable rendering solution in Terragen. So it's no small undertaking. Furthermore, it adds complexity to the application as a whole. But it's something I'm interested in trying out some day.

So the volumetric procedural surface problem is one thing. It's not that vector displacement can't be done, it's that it's difficult to obtain the correct vector that produces the volumetric target that you want. Another idea is to iteratively displace the surface towards the volumetric description, until you approximately converge on it, but that's slow. I tried it once, with a simpler volumetric description, and the speed didn't encourage me to continue that line of research. Isosurface rendering inherently uses an iterative approach, so that can also be slow, but I think the costs are higher when you combine microvertex displacement with an iterative solve. I don't see any evidence that the two approaches are going to hybridize in an efficient way, but I could be wrong.
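The iterative idea above can be sketched in a few lines. This is a toy stand-in, not Terragen code: a signed-distance sphere plays the role of the volumetric description, and the point is repeatedly pushed along the radial direction until it converges on the surface.

```python
import math

def density(p):
    """Stand-in volumetric description: signed distance to a unit sphere
    (negative inside, positive outside)."""
    return math.sqrt(sum(x * x for x in p)) - 1.0

def converge(p, step=0.5, iters=32):
    """Iteratively displace p toward the zero level of the density."""
    for _ in range(iters):
        d = density(p)
        if abs(d) < 1e-6:
            break                             # converged onto the surface
        r = math.sqrt(sum(x * x for x in p)) or 1.0
        # move against the signed distance, damped by `step`
        p = tuple(x - step * d * x / r for x in p)
    return p
```

The cost Matt mentions is visible even here: every displaced point needs many evaluations of the volumetric function instead of one.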

These days when I need to create a rocky cliff-like surface, I tend to hack away at a few different ideas and combine them together until it works well enough for the job at hand. It doesn't look any better than many of the networks that I've seen posted on this forum, and the setups are usually messy by the end of a VFX production where you never get a chance to really clean things up and understand how and why they're working. Sometimes I'll try to simplify these setups afterwards, and they fail to work in a satisfying way on other scenes that I try to apply them to. But I'll keep trying.

The other problem that was raised in this forum thread is that of extremely stretched vertical surfaces. The way I like to solve this is to start with a surface that is inclined, but not completely vertical, and then displace it outwards so that it becomes vertical. In some situations this can be done quite easily, for example with a Twist and Shear shader. If you have a cliff that faces in the same general direction all along its length, this is fairly straightforward. You could use a Simple Shape Shader for the initial displacement, and then a Twist and Shear to shear it into a vertical wall. It has some unintuitive behaviour though. The entire surface at the top of the cliff is now offset horizontally from where it would have been otherwise. This might cause problems for texturing or applying other shaders. I've also been working on some ideas for built-in shaders that make this a simpler thing to do with only one node (e.g. a Cliff Shader), and I also want to give it some easy ways to add shaders to different parts of the cliff. It suffers from the same problems though. And unfortunately it might give awkward results if the cliff is not just a single face. A mesa which has surfaces facing in all directions requires that the top of the mesa is stretched outwards from some central point. This sets a limit on how small you can make that mesa, so that you're not stretching the entire surface from a single point, which is impossible for the renderer to handle. Anyway, one of my goals for Terragen 4 is to provide nodes that make this stuff easier, even with these caveats.
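The incline-then-shear trick, and its horizontal-offset caveat, can be illustrated with a toy 1D profile: a 45-degree ramp that a shear pushes into a vertical wall. All names here are invented for illustration; this is not how the Twist and Shear shader is implemented.

```python
def ramp_height(x):
    """An inclined (not vertical) cliff: height rises linearly over a run."""
    run = 10.0
    return max(0.0, min(x, run))   # slope 1 between x=0 and x=10, flat after

def sheared_x(x, shear=1.0):
    """Shear the surface horizontally in proportion to height. With
    shear=1.0 the 45-degree face collapses onto a vertical wall at x'=0;
    note the flat top ends up offset by shear * height."""
    return x - shear * ramp_height(x)
```

Every point on the face (x from 0 to 10) lands at x' = 0, a perfectly vertical wall, while the clifftop that was at x = 10 now starts at x' = 0: exactly the horizontal offset of the top surface described above.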

I don't know if it will be possible to produce a "retopologize" node for displaced surfaces, but that's something I've started to think about. The way the renderer subdivides surfaces might make this difficult - I don't know yet.

A future goal of Terragen (and we're thinking about prioritising this for Terragen 4) is to be able to render imported (or otherwise modeled) geometry with the same fidelity as the built-in displaceable primitives. This way you could model your rock face with polygons and then not have any vertical displacements to worry about.

Another aspect to this whole subject is "discontinuity". When displacing a surface, you don't want a displacement that suddenly jumps from one value to another. Terragen will keep on subdividing at this discontinuity until it reaches an internal limit, for performance reasons. The discontinuity is never resolved because the function simply does not define any in-between points that the renderer could ever discover. We know how this problem pertains to generating vertical cliffs. But it applies more generally than that. Some of the most convincing rocky surfaces are produced by functions that unfortunately have these discontinuities, so you can't get close to them. I'd like to try to implement some soft-edged versions of some useful functions that should allow us to create more useable "rectangular noise" procedures and other rocky cliff type effects.
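"Soft-edged" here essentially means replacing a hard jump with a continuous ramp the subdivision can converge on, e.g. a smoothstep. A generic sketch (not a Terragen node; the edge widths are arbitrary):

```python
def hard_step(x, edge=0.0):
    """Discontinuous: subdivision can refine forever and never find
    in-between values across the jump."""
    return 1.0 if x >= edge else 0.0

def smoothstep(x, e0=-0.1, e1=0.1):
    """Soft-edged version: a continuous Hermite ramp over [e0, e1] gives
    the renderer in-between values to converge on."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)
```

The same substitution applies to cell/Voronoi walls: blend between the two nearest walls over a small band instead of snapping instantly from one to the other.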

I think about these problems and I'll be slowly chipping away (aha) at them from various angles when I get a chance.

Matt

Hetzen

Quote from: Tangled-Universe on October 21, 2014, 07:13:25 AM
Using a "displacement shader to vector" node converts the terrain displacement to a vector.

From there we think we need to "normalize" the vectors from the rectangular noise values with the vector of the terrain.
Normally (pun), normalizing is a simple division of one over the other, but that gives really messed up results.

Sorry Martin I didn't explain very well last night. What I was explaining is that you can get an RGB vector map of your terrain, that will work in an image viewer, by dividing the output of a "displacement to vector" node by your scenes maximum displacement (which in the default scene is 2000). This scales back the colour/vector information into a float (0..1) value that image viewers tend to use, rather than green values of 2000 plus, etc.