PBR shader setup in TG (was re: Speedtree 8)

Started by Matt, February 27, 2017, 09:38:58 PM


Matt

Quote from: zaxxon on February 25, 2017, 10:57:34 AM
It would be great if Planetside can implement some of the PBR pipeline into TG so we could use some of these assets to their full extent

That shouldn't be difficult, even if we don't have a shader with "PBR" in the name. Which PBR workflow do you prefer, and can you point us to some assets for testing?

Matt
Just because milk is white doesn't mean that clouds are made of milk.

zaxxon

Sorry for the lag in getting back to you, Matt. Hope this doesn't constitute a hijacking, Cypher. :)

I realize that Physically Based Rendering (PBR) is a hot and sometimes loosely used term these days. While not a technical guy per se, I do try to follow the current trends in rendering. I know that the TG renderer already uses physically based rendering as in specular and roughness (and probably a whole lot more outside of my understanding). What I really want to accomplish is to use the increasing number of available scanned assets to their optimal quality in TG. The Quixel Megascans libraries are tremendous assets and relatively inexpensive. I now own virtually all of their 3D scan content and have used some of it in my last few TG renders. But I know I'm not getting the most out of them. As with many things in TG, the shortfall may be entirely down to my lack of understanding of how to properly use the controls.

Using the Megascans assets as an example, though, I'm left with a number of maps that don't easily plug into TG's texture map workflow. Megascans' two PBR workflow options are the same as the Allegorithmic options of either Specular/Roughness or Metal/Gloss. The Specular/Roughness maps from Megascans are: albedo, bump, displacement, roughness, fuzz, normal, normal bump, specular. The Metal/Gloss maps are: albedo, bump, displacement, cavity, fuzz, gloss, normal, normal bump.

I've attached a couple of images to illustrate my current efforts using a Megascans asset. First is a comparison of my TG render alongside the sample image of the same asset from the Quixel website. It's a lovely asset and I think it looks pretty decent in the TG render, but to my eye the sample is clearly better. Discounting the different lighting and lens settings, the detail areas are worth comparing. However, when I compare my test renders in TG, I'm hard pressed to see much of a difference between no maps applied and all the maps I could apply (given my understanding of applying texture maps in TG).

A simple (simplistic?) response to your offer is: if possible, create a texture loading template for the various map types for both the Specular/Roughness and Metal/Gloss PBR workflows.

I suspect that the two normal maps play a big part in the final image quality. I don't know if those can be supported in the current TG render engine. Once again I only have a limited understanding of the technicalities involved.

If the current TG renderer can indeed accomplish that, I would truly appreciate a tutorial showing how; if not, I'd love to see the capability added to the 'Roadmap'. Again, thanks for replying to one of my offhand comments.  :)


cyphyr

I'm a little hazy about this as well but I think you can plug the normal maps through displacement vector nodes and into the input of the default shader ...
www.richardfraservfx.com
https://www.facebook.com/RichardFraserVFX/
/|\

Ryzen 9 5950X OC@4Ghz, 64Gb (TG4 benchmark 4:13)

Matt

The bump map and/or the displacement map should improve the render if they're applied properly. If they don't change the appearance much, then perhaps the multiplier needs to be higher.

I don't know if they expect you to use both the displacement and the bump; I wish more renderers (and assets) would treat bump and displacement as one thing and automatically optimise the balance between them according to the renderer's implementation. But since they don't, it might be best to add the bump map and displacement map together and feed the result into the displacement input.
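Just to illustrate the arithmetic outside of Terragen, here's a rough Python sketch (the file names and amplitudes are invented for the example, and in TG itself you'd combine the maps with shaders rather than code):

```python
# Minimal sketch (not Terragen code) of the idea: scale the bump and
# displacement maps by the real-world amplitude each one represents, then add
# them so a single height map can drive one displacement input.
import numpy as np
from PIL import Image

def load_height(path):
    """Load a grayscale map as floats in the 0..1 range."""
    return np.asarray(Image.open(path).convert("F")) / 255.0

displacement = load_height("rock_displacement.png")  # large-scale shape
bump         = load_height("rock_bump.png")          # fine surface detail

# Example amplitudes in metres: 15 cm of large-scale shape plus 8 mm of fine detail.
combined = displacement * 0.15 + bump * 0.008

print(combined.min(), combined.max())  # the combined height range, in metres
```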

The normal maps shouldn't be used as bump/displacement maps because they encode a different thing. It would be best to use either the displacement map or the bump map, or possibly both added together. I'm not sure if these particular assets expect you to use normal maps as well as displacement/bump, or if they provide both so you can use whichever your renderer supports, but it's most likely the latter. In the case of Terragen, we don't support normal maps yet, so you should ignore them.

Usually you should ignore the cavity maps; they're intended for game engines, which use them to approximate micro-shadows and reflection occlusion because they don't have the computation budget to calculate those properly. However, you might find the cavity map useful for interesting texturing tricks.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Dune

I used the cavity map with a color adjust to darken the cavities a bit more. And if you add a tiny procedural grain, masked (or not) by the displacement/bump map, you can get just a little more micro-bump. I think that would pull the first example closer to the Quixel quality.
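Roughly what I mean, sketched in Python outside of TG (the file names and the strength value are just examples):

```python
# Illustrative only: use the cavity map to pull the albedo down in crevices.
import numpy as np
from PIL import Image

albedo = np.asarray(Image.open("rock_albedo.png").convert("RGB"), dtype=np.float32) / 255.0
cavity = np.asarray(Image.open("rock_cavity.png").convert("F")) / 255.0

strength = 0.5  # how much the cavities darken the colour (0 = off, 1 = full)
shade = 1.0 - strength * (1.0 - cavity)       # white areas untouched, dark areas darkened
darkened = albedo * shade[..., np.newaxis]    # apply per pixel to all three channels

Image.fromarray(np.uint8(np.clip(darkened, 0, 1) * 255)).save("rock_albedo_cavity.png")
```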

cyphyr

Quote from: Matt on March 03, 2017, 08:07:02 PM
... I'm not sure if these particular assets expect you to use normal maps as well as displacement/bump, or if they provide both so you can use whichever your renderer supports, but it's most likely the latter. In the case of Terragen, we don't support normal maps yet, so you should ignore them.
...

Looking at the difference between the bump map and the displacement map, it looks like the displacement map is on a much larger scale, probably intended to be used with the lower LOD models with subdivision surfaces.
The high-res Megascans models don't need the displacement map but benefit from the bump map (as displacement in Terragen) ... I think ...
Actually I have no idea why you'd need a normal map with a high-res model in TG, but there may be uses ...

By the way, I really like the Megascans rock models; I'm almost certainly going to grab some and see if I can modify the maps.
www.richardfraservfx.com
https://www.facebook.com/RichardFraserVFX/
/|\

Ryzen 9 5950X OC@4Ghz, 64Gb (TG4 benchmark 4:13)

zaxxon

Thanks Matt. Between your suggestions and Dune's I'll keep at this. I actually did add the bump and displacement into the displacement input using a merge shader, without much noticeable effect. I will push the multipliers some more, though my test ranges were already more than double my usual levels.

It was my prior understanding that TG didn't use normal maps, and you've definitely confirmed that. My past experience with normal maps was with ZBrush and Mudbox, creating more detail on a low-poly object with a normal map from a higher-poly version. After digging into this a bit more (and there is a ton of info online), there appear to be varying opinions and a lot of variation in usage. There are also a number of workflows that use these maps in combination. The most recommended was displacement + normal bump, though displacement + bump (using different images) was noted as well. I've copied a few snippets from a CGTalk forum discussion (http://forums.cgsociety.org/archive/index.php?t-858824.html) on the differences and uses of bump, normal, and displacement maps:

"Normal maps and bump maps are (nearly) the same thing. They both define the per-pixel normal of the rendered surface. The difference is that a normal map defines it explicitly with a float3 value encoded into the RGB values of the map. A bump map implicitly defines the normal as the vector result of deltas in the scalar height from one pixel to the next on each axis."

"Normal maps change the surface normal of the polygon they're applied to, hence the name. That means lighting on a normal mapped surface will behave as if the surface has parts facing in different directions even if it's just a single flat polygon."

"The normal map contributes the vector directing the push, and the elevation map contributes the scalar representing the distance to displace by."

Well, all of that is interesting but not applicable to TG at present, though Matt did say "we don't support normal maps YET" (my emphasis  ;)).

Cyphyr: I think the Megascans libraries are terrific, and huge, with more coming. Based on their point-based subscription model, the example asset I used cost US$3.00. Not bad.

j meyer

The different maps that come with the models are, as Matt indicated, for the different rendering solutions the various apps use. Game engines, modeling/animation apps and sculpting software mostly use different approaches, and thus you have to find the best ones for the app you want to use.
Richard is right about the displacement maps; they are indeed for the lower LODs and not for the high-poly version.
Normal maps can have more than one purpose, again depending on your target app. There are examples where they are used to control lighting and other stuff, very confusing and way beyond my scope.
What Ulco suggests is a very sound way to go for use in TG, at least from my point of view and level of experience so far. Let me add that it has generally proved best to use the high-poly version.

KyL

I have been playing with their assets as well and started building a library based on them. As has already been said, the displacement map is only meant to be used if you are using a lower LOD than the high-definition mesh. It is probably extracted from the difference between the first LOD and the high-res mesh. If you use the LOD1 mesh, for example, and apply the displacement map, you will get a result very close to the high-definition mesh.
From my tests, doing so reduces memory usage (which makes sense, as the high-definition mesh can be quite heavy) but increases render times. And the result with a lower LOD + displacement is never going to look as good as the high-resolution mesh, mostly because the displacement is not raytraced by Terragen at the moment.

The option I like the most is using the high-res assets and using only the bump map for the extra details. As this is meant to simulate very small details, I adjust the scale based on the size I would expect the details to be in real life.

[attach=1]

So for example, on those rocks the displacement multiplier is only set to 0.008, which means the bump map is there to simulate details with a magnitude of 8 mm. Of course, the bigger the asset, the bigger the multiplier, as the details represented in the bump map become less and less refined. A bump map on a stone might represent details of a few millimeters, but the same map on a huge log might represent details several centimeters deep.
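In other words, a back-of-the-envelope helper (assuming the bump map is normalised to 0..1, so the multiplier is simply the depth of the deepest detail in metres):

```python
# Convert the expected real-world detail depth (mm) to a displacement
# multiplier in metres, assuming a 0..1 normalised bump map.
def displacement_multiplier(detail_depth_mm):
    return detail_depth_mm / 1000.0

print(displacement_multiplier(8))    # 0.008 -> fine detail on a small rock
print(displacement_multiplier(50))   # 0.05  -> coarser detail on a much larger asset
```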


Be careful with the scale of their assets, as most of the ones I bought were not at the right scale. Also, their color maps display a very yellowish tint most of the time; I am not sure whether they shot with the proper white balance or whether they graded the textures at all in the end.

To sum it up regarding the PBR workflow, I think you just have to be careful, as it sounds more like a marketing thing than a real standard :)

zaxxon

Hey KyL, that's very helpful. While the theory of all this is interesting, and how these maps are used in other applications is as well, the bottom line for me is how to get the best out of these assets in TG. I hope you'll share some of your setups; your examples look quite nice.  :)

KyL

That's quite simple, really. I use a default shader and several image files to plug the color, displacement (bump), specular and roughness maps into their respective "function" slots.

Leave the color map as "convert to linear" in the color tab of the image file, but change all the others to "data is linear". As the specular, roughness and bump maps are meant to be pure functions describing a physical attribute, they should not be corrected the same way the color map is.
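To illustrate why the settings differ, here is a rough sketch (using a simple 2.2 gamma approximation rather than the exact sRGB curve; the pixel values are examples):

```python
# Colour maps are stored gamma-encoded and need converting to linear before
# rendering, while roughness/bump/specular maps already hold linear data and
# must be used as-is.
def srgb_to_linear_approx(value):
    """Approximate sRGB decoding with a 2.2 gamma curve."""
    return value ** 2.2

albedo_pixel    = 0.5   # gamma-encoded colour value from the albedo map
roughness_pixel = 0.5   # linear data value from the roughness map

print(srgb_to_linear_approx(albedo_pixel))  # ~0.218 -> what the renderer should see
print(roughness_pixel)                      # 0.5    -> no conversion applied
```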
Adjust your bump strength so you can start to catch the fine details. I usually do that by unplugging the color map and changing the diffuse color to a dark grey, so I can read the details more easily. Change your roughness to the maximum the slider allows, 0.8. This means that anything white in the roughness map will give a very broad specular, and anything dark will give a very tight one.
As this will be controlled by the map, the slider becomes a simple multiplier affecting how "shiny" you want the object to look.
And that's pretty much it! Now you can use the slider to control the look of your object and play with the diffuse and specular balance.

This is where you have to be careful if you want to stay physically correct. If you put the diffuse strength to 1, you will end up with a very blown-out looking object. You have to check the values of your maps and see if they make sense. For example, in many rock maps the highest values reach 1 in the very bright areas. This means that some parts of your object will reflect 100% of the light. That never happens in real life; even the brightest things in nature rarely go above 70%, and a rock would probably reflect something like 25-35% of the light reaching it. So that's why I like to use the diffuse color as a multiplier to drive the albedo.   https://en.wikipedia.org/wiki/Albedo
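A quick numeric illustration of that (the numbers are examples, not measured values):

```python
# If the brightest pixel in the albedo map is 1.0, a diffuse multiplier around
# 0.3 keeps the effective reflectance in a rock-like range.
albedo_map_max     = 1.0   # brightest value in the scanned colour map
diffuse_multiplier = 0.3   # diffuse colour used as a multiplier

effective_reflectance = albedo_map_max * diffuse_multiplier
print(effective_reflectance)  # 0.3 -> roughly the 25-35% you'd expect from rock
```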

We should probably start a new thread to talk about that as this is getting quite far from the original speedtree topic  ::)

zaxxon

That's great, KyL, thanks. I also think that this thread has wandered a bit; perhaps Oshyan can move some of this, or when one of us posts more examples it can be the start of a new thread.

cyphyr

OK, so I have subscribed. It looks like I can cancel at any time, and there are a lot of models/surfaces you can pick up for your basic 50 points.
They offer two free surface creation/management tools, Megascans Bridge and Megascans Studio. Does anyone have experience with these tools, and any comments?
Particularly the asset management tool. I like to run my own assets and don't want this taking over in any way. Can I add my own assets, and will it still work if I cancel my sub?
Cheers
www.richardfraservfx.com
https://www.facebook.com/RichardFraserVFX/
/|\

Ryzen 9 5950X OC@4Ghz, 64Gb (TG4 benchmark 4:13)

luvsmuzik

Kyl
Thank you very much for this very comprehensive explanation of your mapping procedures!
I have used the node network in Blender and have been trying to use bump maps for displacement in Terragen, using them as the displacement image rather than adding them in the function node as an image map. WOW, what a difference! (Function usually means a blue node setup or more complicated functions to me... so.) I have also seen the balloon distortion from using the wrong multiplier; ha, so funny and scary.

With the addition of the material examples by Hannes, fleetwood, AP and others to the Terragen Materials Library, I have been experimenting with many new (to me) methods as well.

Shown: successful displacement on a Terragen rock using a black and white image map and a color image texture, and tree displacement on bark (image map and PF color function). Both of these objects were previously textured in Blender with a Cycles node setup and are now very closely matched in TG4!

Great link to Wiki about albedo, now we know!

zaxxon

After all the helpful comments, especially from KyL  :), here's what I've arrived at. The comparison image looks close enough in detail to the original for my purposes. The basic setup is fairly simple and should allow for enough 'tweakage' to handle most of the Quixel 3D scans. There may be other and better settings for displaying these assets in TG, but to my eye these work at a more than acceptable level. If you haven't checked out the Megascans library, it's well worth the time.