Rectangular Noise

Started by Hetzen, March 09, 2010, 11:28:51 AM

Previous topic - Next topic

j meyer

That's one reason Chris suggested working with an exported part of TG geometry
in the vdisp thread, btw.


"...A big "however/but" is that what I said still applies to PF's not being able to create the overhanging features I referred to in my previous post..."

What about the shrooms and hoodoo stuff generated with PFs then?

Tangled-Universe

#121
Quote from: j meyer on October 18, 2014, 12:08:08 PM
That's one reason chris suggested to work with an exported part of TG geometry
in the vdisp thread btw.

Yes, true, but that's not really the discussion here, is it?

The discussion is how....well....just read it again ok ;) ;D

Quote from: j meyer on October 18, 2014, 12:08:08 PM
"...A big "however/but" is that what I said still applies to PF's not being able to create the overhanging features I referred to in my previous post..."

What about the shrooms and hoodoo stuff generated with PFs then?

Well, see the bold section in my quote... I didn't want to repeat my entire explanation, so you'll have to scroll up to understand what I really meant.

The shrooms/hoodoos are made with more than one node.
So just to clarify once again: if you have a 3D noise and a part of it shaped like a shroom/hoodoo would intersect your terrain, then TG does NOT displace that part of the terrain into the shroom/hoodoo shape.
Instead, it samples the presence of the noise AT the surface and only "sees" the circular base section of the shroom/hoodoo. The result will be a circular blob.
A true volumetric interpretation of the noise would extend the surface into that shroom/hoodoo.
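
As a toy illustration of that sampling problem (plain Python, not Terragen code - the "shroom" density function here is entirely made up): sampling the density only AT the flat surface sees nothing but the stem's circular base, while the wide cap is only visible higher up.

```python
# Hypothetical "shroom" density: a thin stem (radius 0.3) topped by a
# wide cap (radius 1.0 centred at height 1.2). 1.0 = rock, 0.0 = air.
def density(x, y, z):
    in_stem = (x*x + z*z <= 0.3**2) and (0.0 <= y <= 1.2)
    in_cap = (x*x + (y - 1.2)**2 + z*z <= 1.0**2)
    return 1.0 if (in_stem or in_cap) else 0.0

# Sampling AT the surface (y = 0) only "sees" the stem's circular base:
surface_hits = [x for x in [i * 0.05 for i in range(-40, 41)]
                if density(x, 0.0, 0.0) > 0.5]
print(min(surface_hits), max(surface_hits))   # roughly -0.3 .. 0.3

# A volumetric interpretation would also see the cap higher up:
cap_hits = [x for x in [i * 0.05 for i in range(-40, 41)]
            if density(x, 1.2, 0.0) > 0.5]
print(min(cap_hits), max(cap_hits))           # roughly -1.0 .. 1.0
```

Displacing the surface from the first sample set gives exactly the "circular blob" described above; the cap never makes it into the displacement.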

Ok, last time I have tried to make my point ;) ;D

j meyer

Thank you sir, thank you, thank you... and thank you again.


Dune

Doing some tests again with vdisp. I made a 32 bit file with PS, using 'render clouds' in the RGB channels, and hardening them, then tiling, then used this to vdisp a terrain. No real rectangles, but interesting patterns can be made..

Tangled-Universe

#124
Quote from: j meyer on October 19, 2014, 10:41:02 AM
Thank you sir,thankyou ,thankyou.....and thank you again.

Haha, well, I can see how my post may have come across as derogatory, but it surely wasn't meant like that!

I've been struggling with this for weeks and it's difficult stuff to explain what's really missing or not to my/our liking.
Also, or maybe especially, the lack of attention for this subject by PS has made me a bit grumpy.
8 years of TG2 and we're still struggling, and nothing changes significantly in the way we can build scenes.

So sorry Jochen if I sounded like a dick, I didn't mean to and your sarcasm was deserved :)

Tangled-Universe

If you use a surface layer as parent for your noise function then that noise function maps a lot nicer on your terrain if you enable smoothing in that surface layer parent.

Be aware...

It will smooth out any displacements you added after that compute terrain, so be aware of that.
Also, if you use multiple compute nodes then it will use the info from the very first one only (top compute node in network).
For mapping on overhanging/lateral features you need additional compute nodes, but the smoothing function in the surface layer won't allow you to work with any other compute node than the first/top one.
What will happen is that your overhanging/lateral features will (almost) disappear.
Ideally we should be able to select which compute node to use for smoothing.

TheBadger

^^To tell you the truth, I would not mind a more step by step break down of what you wrote in your last post. I am really nearing the point of giving up. And not just on this one issue, but completely. To say I am frustrated would be a massive understatement.

It has been eaten.

Dune

#127
I wonder if the smoothing also works/helps if you use a vdisp setup, instead of regular PF noise. Will try. I will also try to make a vdisp map of really hard blocks, but it's hard to model real hard blocks in mudbox. At least I haven't figured out how to do that. It would be really cool if you could replace the RGB channels of a 32bit vdisp map (why should it be 32 bit anyway, why not 16bit?) with a procedural noise making blocky outcrops. That is my line of thinking atm.

And another one, after a simple rotation.

j meyer

T-U - OK Martin, no problem.

Nice experiments Ulco.
Have you tried image based displacement as a base for
the modeling in mudbox already?

Please let me explain my view and why I think vdisp in TG might be a solution. I'll try to keep it short.
Is the noise as such 3D or not? I say it is.
Apply a single PF as a density shader to a cloud shader and what you get is a visual representation of the structure of the noise. Not the displacement or so, just the pure noise structure.
[attach=1]
If you examine these structure(s) closely you will see that it's true (virtual) 3D and not 2D projections on every axis or something like that. You wouldn't get those arches and caverns with a 3-axes 2D projection. (If you don't believe that, try it; it's possible to try that in TG, too.)
Then we have our planet, which - from my point of view - is a procedural sphere, or in other words a hollow ball with a sort of membrane as surface. It's cutting through the noise.
[attach=2]
This membrane is then displaced. And this is where the problems are.
The usual displacement we see in TG is, as far as I understand it, not vector displacement, but simple displacement that can be compared to how a heightfield is displaced - not exactly, but in a way at least. (We have a conversion node, displacement to vector; this also suggests a difference.)
Think of vacuum forming with a rather stiff material. You won't get many undercuts and such that way. A more flexible material is needed - in our case vdisp - to get more detail, namely undercuts/overhangs and cavities and so on. No holes, though. Due to the closed-surface nature of our membrane we also won't get holes in TG, but cavities and overhangs are definitely possible with vdisp.
If you think along those lines and examine the images you can get an idea where some of the distortions come from. (Try the cubic noise on clouds also to see that.)
My conclusion is that we need a way to convert a noise into vector displacement; the nodes needed are there already. Or better, how to specify a special shape, where it's desired to be located, and things like that. Seems to be quite complex, but maybe it turns out to be rather easy, who knows.

It would be nice to have a working example file to learn from and
personally I hope Matt will give us a hint at least.

All of the above might be wrong of course; at least it's incomplete.
And please excuse my layman's understanding.

Tangled-Universe

This is exactly what I meant and the beef I'm having with TG during my current job, but it's also on my personal wish-list of things to be able to do.
Nice explanation and demonstration!

The fact that you can convert displacement to a vector doesn't necessarily suggest things are otherwise than you're thinking, though.
A vector is just a thing which defines the direction and magnitude a certain point on the surface is pushed to.
The surface itself is displaced into 3D coordinate space, and that's where you need vectors to define pre- and post-displacement positions for a given point on the surface.
So even in the situation you described of how TG is working (which I agree with), the presence of a vector operator doesn't suggest something truly 3D is going on.

The question indeed is: how do we get vdisp-like behaviour for our procedural noises???

The fact that vdisp can do this means that TG is implicitly capable of performing such displacements, and that the key is in how the noises are being sampled - which really seems to be NOT volumetric?

Matt???

Matt

As you said, vector displacement maps can be imported into Terragen, and that works. So vector displacement can be done. There is absolutely nothing to prevent you from doing a procedural vector displacement, either. This is what the Redirect Shader is for, or more recently the Vector Displacement shader. The renderer is capable of doing so, and the nodes are there. What you're asking for is a specific procedural noise that not only produces a vector (of which there are many existing examples and methods of doing so) but also produced a vector that when added to the current position produces a surface that you like. This is a harder problem ;)
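
To make the "many existing examples and methods" concrete, here is a hedged sketch (plain Python, not actual TG nodes) of producing a vector procedurally from three decorrelated scalar noise lookups - the kind of three-channel value one might feed a Redirect Shader, one channel per axis. The noise function and offsets here are made up purely for illustration.

```python
import math

# A cheap, deterministic stand-in for a scalar noise function
# (NOT Terragen's actual noise; just an illustration).
def noise(x, y, z):
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 0.5 + 0.5

# Build a displacement *vector* from three decorrelated scalar lookups,
# offsetting the input so each channel samples a different part of the
# noise field - similar in spirit to driving each axis of a redirect
# setup with its own noise shader. Offsets are arbitrary.
def vector_noise(x, y, z):
    return (noise(x, y, z),
            noise(x + 17.0, y + 31.0, z + 47.0),
            noise(x + 5.0, y + 13.0, z + 29.0))

def displace(p, amplitude=0.5):
    v = vector_noise(*p)
    # Centre each channel around 0 so displacement goes both ways.
    return tuple(pi + amplitude * (vi - 0.5) for pi, vi in zip(p, v))

print(displace((1.0, 2.0, 3.0)))
```

As Matt says, getting *a* vector this way is easy; getting one whose sum with the current position traces out a surface you actually like is the hard part.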

I have some more to write on the subject, and will do so soon.
Just because milk is white doesn't mean that clouds are made of milk.

Oshyan

#131
I may put my foot in my mouth as I lack a lot of knowledge in this area, but the way things in TG work now seems normal and no different from any other displacement-based system. Vector displacement is a different thing, which is why it is more capable, but as Matt says, simple noises do not (currently? normally?) produce vector values that can be used in this way. Let me try to explain *how I understand it* (which may be wrong).

In traditional displacement (and simple displacement maps), the map is generally a single channel, "grayscale" one might even say, and the lightness value defines the "amount" of displacement. Since it is only defining a single value - lightness, which is interpreted as "strength of effect", just as in a mask - there is no way to get overhangs on a single pass of displacement, just like you cannot have overhangs on heightfields. You can have lateral displacement (applying displacement that is lateral to another surface, relatively speaking), which creates a kind of "overhang", at least relative to the planet surface for example, but from the point of view of the surface the displacement is being applied to, there is no non-planar movement happening.

Vector displacement is a solution because it encodes a vector for each point in the source map, 3 coordinates rather than 1, sometimes interpreted (or viewed) as R, G, B color values. This allows for the precise placement of the resulting displaced points/vertices, rather than just encoding an "amount of displacement", and thus overhangs or really any continuous-surface 3D shape are possible.
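
A minimal 2D sketch of that difference (the numbers are hypothetical, purely to illustrate): scalar displacement leaves the x-coordinates alone, so the profile can never fold over itself, while per-point vectors can push points sideways into an overhang.

```python
# Scalar ("grayscale") displacement moves each point only along the
# surface normal; on a flat ground line the normal is straight up, so
# x never changes and the result stays a single-valued height profile.
def scalar_displace(points, height):
    return [(x, height(x)) for x, _ in points]

# Vector displacement stores a full (dx, dy) per point, so points can
# also move sideways - enough to fold the profile into an overhang.
def vector_displace(points, vectors):
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(points, vectors)]

flat = [(x * 0.25, 0.0) for x in range(5)]          # x = 0.0 .. 1.0

bump = scalar_displace(flat, lambda x: 1.0 - abs(x - 0.5))
# x-coordinates are untouched: still one height per x, no overhang.

# Hypothetical vectors that push the flanking points past the middle:
vecs = [(0.0, 0.0), (0.3, 0.5), (0.0, 1.0), (-0.3, 0.5), (0.0, 0.0)]
folded = vector_displace(flat, vecs)
xs = [x for x, _ in folded]
print(sorted(xs) == xs)   # False: the profile folds back -> overhang
```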

There's a lot of good info and examples on this here: http://renderman.pixar.com/view/vector-displacements

Now, simple noise functions generally create basic lightness values in 3D space so sampling that along a surface and then using it for displacement will only ever create non-overhanging results, at least for single-order displacement. Subsequent displacement can then operate on that already-displaced surface and create additional changes which are still planar, when considering the *current* displacement, but when taken as a whole do create overhangs, etc., i.e. more complex surfaces and shapes. The trick, then, is to come up with a function that outputs vectors instead of single values when sampled, and does so in a coherent way that creates the kind of shapes you want (as Matt said).

So what TG is doing with displacement is quite normal and "correct", you just want a higher or more complex form of displacement (vector displacement). If you think of a 3D noise function that is creating RGB values instead of simple lightness, this is perhaps easy to imagine. Or perhaps you simply use the density of the currently sampled point in the (existing) noise function and define a threshold for the surface, which I guess is kind of like how isosurfaces work. The question is how to create or derive these values in TG and how to then interpret them for a coherent result. Simply sampling RGB values of our theoretical "vector noise" instead of V(alue)/B(rightness) will give you the right *kinds* of values, but will they create coherent output? Remember that displacement is the difference between the flat surface you start with and the position each point should end up at, so Brightness is easy to deal with, you can ensure no overlaps, and the worst problem you run into is roughness I guess. But how do you translate the position of a flat sphere to equal the shape you want to sample from your noise function? Maybe it "just works" (like regular displacement) once you have the necessary source data, I'm really not sure. If it's that easy I'm surprised it hasn't been done (here) yet though. :D

As I mentioned, I think this is where isosurfaces can be interesting as they can essentially create arbitrary volumetric surfaces out of simple 3-dimensional input data (e.g. any current noise function). This is not something that simple displacement can do, but yes *vector* displacement can, so let's figure out how to get vector values out of the noise functions and apply them! :)

Hopefully that is helpful and adds to the discussion rather than just confusing things more. Like I said, I'm no expert at this stuff!

- Oshyan

Matt

#132
This is an interesting problem that I keep coming back to. I'm often finding that I need to create rocky cliffs, and it's really difficult to do well.

Simply displacing along the normal has problems. I've seen some procedural examples in this thread that look pretty good when you apply them on a smooth surface, or even a fairly steep surface that points in the same general direction. But there's always the problem of what to do near the top of the cliff where it merges into the flat top of the cliff, and what to do when the cliff face changes direction.

One of the smartest people I know is a guy by the name of Johnny Gibson. In 2001, I was working at Digital Domain. We were working on a film called The Time Machine. Johnny was simulating the erosion of a canyon, a bit like the Grand Canyon, over thousands of years, depicted like a time-lapse sequence. Starting with an elevation map of the Grand Canyon with all its tributaries and fractal valleys, I suggested that we could wind back the clock by adjusting the white point and black point on the elevation map and adjusting the curve, and so on, and then running those adjustments forward through time to gradually widen the canyon at the same time as deepening it. I don't remember if we stuck with that approach, but that was our general starting point for the shape of the canyon eroding over time. That was the easy part.

We needed to add procedural detail to the canyon, of course, and that detail had to change over time. It couldn't be static. Johnny wanted to make it as realistic as possible. He wanted to simulate volumetric rock with hard bits and soft bits that would influence the shape of the surface as it eroded. He was working in Houdini, which at that time was probably the package most suited to this kind of R&D with confidence that it could all be turned into production quality renders when all's said and done. As I recall, one of his ideas was to use Voronoi noise and to project/displace the surface towards the nearest Voronoi cell boundary. As the surface gradually lowered, there would be frames where the surface would instantly pop to a different cell wall. The idea was that this would create a really rocky, craggy looking surface that would collapse in discrete chunks over time as the soft material around the rocks eroded. It sounded like a great idea to me. Unfortunately the client didn't like the frenetic appearance of it due to the extreme time-lapse - even though that might be realistic as far as we were concerned - and we were never able to take it to its full conclusion and make it look really good. But the idea has stuck with me ever since, and I'm reminded of it every time I want to create a surface that should be defined volumetrically to get the best results.

The canyon we ended up with in that film didn't look great - and is pretty poor by today's standards - but I think that's because they were forced to change direction very late in the game and only had a few short weeks to come up with something completely different. Terrain was a difficult thing to make photorealistic in those days, so that kind of U-turn wasn't good. I would have liked to see Johnny's volumetric terrain given the chance to see the light of day. It could have been a much more awesome piece of cinema.

Incidentally, a strange phenomenon would occur when we tried to volumetrically texture (colour) the rock as it was eroding. As the canyon widens, it looks as though the texture is somehow sliding across the surface, and it looks strange even when you understand why it's happening. It's not the kind of thing you want to happen in a movie scene where there's already a lot of crazy stuff happening that most people won't be able to comprehend. So we had to do a lot of cheats, generate UV maps, and slide textures using keyframes. Horrible stuff you would never imagine needing to do, just to make it look OK in the end. Sometimes doing it 100% correctly results in something you really don't want to look at. While the frenetic popping of disappearing boulders had this effect on the client, the sliding volumetric textures made us realise that we weren't immune to such things.

In 2009 I was back at DD and was asked to develop a RenderMan shader for the earth opening up as Santa Monica Airport was being torn apart by whatever ridiculous thing was supposed to be happening in the movie "2012". I really wanted to produce a nice volumetric appearance to the sides of the chasm so I set about using Johnny's idea to do so. I needed to implement a version of Voronoi that returns more information than you get from the textbook/web examples. So I did that. After solving a few other problems along the way, and making some compromises to work around other problems that I couldn't figure out, I got something fairly decent. But it doesn't fully achieve the goal of a volumetric voronoi displacement. The things I wanted to do produced discontinuities that I didn't have time to work out a way to prevent. It was good enough for the specific geometry it would be applied to for the movie, but I ran out of time to really solve the general problem, and once they were happy with the results it was time for me to move on.

I wanted to bring that knowledge back into Terragen, and either write a shader or share a combination of function nodes to do this. I still want to do this. But it's difficult to devote weeks and weeks to a difficult problem that you don't even know for sure will succeed in the end, when users are screaming for more immediate problems to be solved and there are only 2 people to solve them. I have had some minor successes in this area and I keep coming back to this research every now and again. I think that some day I'll be able to show you a working technique or perhaps a shader to accomplish this.

By the way, this whole problem is solved by using an isosurface renderer. Maybe in the future we can give you isosurfaces as an alternative method of building and rendering terrains. While it could be done in a fairly simplistic way and released as part of the product, many of Terragen's existing capabilities would be missing from it. It would essentially be like a separate kind of entity within the scene. I don't know if isosurfaces will render more efficiently than micropolygons. It might take years before it would become a mature, reliable rendering solution in Terragen. So it's no small undertaking. Furthermore, it adds complexity to the application as a whole. But it's something I'm interested in trying out some day.

So the volumetric procedural surface problem is one thing. It's not that vector displacement can't be done, it's that it's difficult to obtain the correct vector that produces the volumetric target that you want. Another idea is to iteratively displace the surface towards the volumetric description, until you approximately converge on it, but that's slow. I tried it once, with a simpler volumetric description, and the speed didn't encourage me to continue that line of research. Isosurface rendering inherently uses an iterative approach, so that can also be slow, but I think the costs are higher when you combine microvertex displacement with an iterative solve. I don't see any evidence that the two approaches are going to hybridize in an efficient way, but I could be wrong.
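
The iterative approach described above can be sketched as root-finding along a ray: step until the volumetric function changes sign, then bisect. This toy uses a unit sphere as a stand-in for the real volumetric description; it also shows why the approach is slow, since every microvertex needs many function evaluations.

```python
# Signed "density" whose zero level set is the target surface
# (here simply a unit sphere centred at the origin).
def f(p):
    return (p[0]**2 + p[1]**2 + p[2]**2) ** 0.5 - 1.0

def march_to_surface(p, direction, step=0.1, iters=40):
    x, y, z = p
    dx, dy, dz = direction
    prev = (x, y, z)
    for _ in range(iters):
        cur = (x + dx*step, y + dy*step, z + dz*step)
        if f(prev) * f(cur) <= 0.0:          # crossed the zero level set
            a, b = prev, cur                 # bisect for a tighter hit
            for _ in range(20):
                m = tuple((ai + bi) / 2 for ai, bi in zip(a, b))
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            return b
        prev = cur
        x, y, z = cur
    return prev  # no crossing found within range

hit = march_to_surface((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(hit)   # approximately (1.0, 0.0, 0.0)
```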

These days when I need to create a rocky cliff-like surface, I tend to hack away at a few different ideas and combine them together until it works well enough for the job at hand. It doesn't look any better than many of the networks that I've seen posted on this forum, and the setups are usually messy by the end of a VFX production where you never get a chance to really clean things up and understand how and why they're working. Sometimes I'll try to simplify these setups afterwards, and they fail to work in a satisfying way on other scenes that I try to apply them to. But I'll keep trying.

The other problem that was raised in this forum thread is that of extremely stretched vertical surfaces. The way I like to solve this is to start with a surface that is inclined, but not completely vertical, and then displace it outwards so that it becomes vertical. In some situations this can be done quite easily, for example with a Twist and Shear shader. If you have a cliff that faces in the same general direction all along its length, this is fairly straightforward. You could use a Simple Shape Shader for the initial displacement, and then a Twist and Shear to shear it into a vertical wall. It has some unintuitive behaviour though. The entire surface at the top of the cliff is now offset horizontally from where it would have been otherwise. This might cause problems for texturing or applying other shaders. I've also been working on some ideas for built-in shaders that make this a simpler thing to do with only one node (e.g. a Cliff Shader), and I also want to give it some easy ways to add shaders to different parts of the cliff. It suffers from the same problems though. And unfortunately it might give awkward results if the cliff is not just a single face. A mesa which has surfaces facing in all directions requires that the top of the mesa is stretched outwards from some central point. This sets a limit on how small you can make that mesa so that you're not stretching the entire surface from a single point, which is impossible for the renderer to handle. Anyway, one of my goals for Terragen 4 is to provide nodes that make this stuff easier, even with these caveats.

I don't know if it will be possible to produce a "retopologize" node for displaced surfaces, but that's something I've started to think about. The way the renderer subdivides surfaces might make this difficult - I don't know yet.

A future goal of Terragen (and we're thinking about prioritising this for Terragen 4) is to be able to render imported (or otherwise modeled) geometry with the same fidelity as the built-in displaceable primitives. This way you could model your rock face with polygons and then not have any vertical displacements to worry about.

Another aspect to this whole subject is "discontinuity". When displacing a surface, you don't want a displacement that suddenly jumps from one value to another. Terragen will keep on subdividing this discontinuity until it reaches an internal limit, for performance reasons. The discontinuity is never resolved because the function simply does not define any in-between points that the renderer could ever discover. We know how this problem pertains to generating vertical cliffs. But it applies more generally than that. Some of the most convincing rocky surfaces are produced by functions that unfortunately have these discontinuities, so you can't get close to them. I'd like to try to implement some soft-edged versions of some useful functions that should allow us to create more useable "rectangular noise" procedures and other rocky cliff type effects.
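
The soft-edged idea can be illustrated with a toy example (plain Python, not Terragen's actual shader code): replace a hard threshold with a smoothstep ramp over a small width, so subdivision always finds in-between values.

```python
# A hard step has a jump the renderer can never resolve by subdividing:
def hard_edge(x, edge=0.5):
    return 0.0 if x < edge else 1.0

# A soft-edged version ramps over a small width w, so there are always
# in-between values for new micropolygons to land on:
def soft_edge(x, edge=0.5, w=0.1):
    t = min(max((x - (edge - w/2)) / w, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)   # smoothstep

print(hard_edge(0.4999), hard_edge(0.5001))  # 0.0 1.0 - a jump
print(soft_edge(0.5))                        # ~0.5, halfway up the ramp
```

Shrinking w trades sharper "rectangular" edges against the subdivision problem described above.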

I think about these problems and I'll be slowly chipping away (aha) at them from various angles when I get chance.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Matt

How many times did I write "problem" in that last post?!

Matt
Just because milk is white doesn't mean that clouds are made of milk.

TheBadger

Quote
There is absolutely nothing to prevent you from doing a procedural vector displacement

Ok, but in my limited experience these two things (vector and procedural) are contrary. That is, my experience with a vector is like a heightfield, where the vector is limited to one place and form for the most part, like a DEM.

But a procedural is unlimited by place or form, since it can go on forever in every direction, and its form can be altered without limit.

So, if the way forward as things are now is a procedural vector, how is that done?

Again I am thinking of vectors in terms of the mudbox thread. So if I should be visualizing this in another way I need someone to describe it to me.

And it would not hurt if someone could post some files in a very simple way that show precisely the whole thing
(procedural vector displacement) in the context of rectangular noises.

If everyone gets this but me, well oh well. But somehow I doubt that I'm the only one with a headache right now.

It has been eaten.