Planetside Software Forums

General => Terragen Discussion => Topic started by: WAS on July 04, 2019, 03:44:09 AM

Title: PBR Displacement Advice
Post by: WAS on July 04, 2019, 03:44:09 AM
I just wanted to share some advice for those of us making PBR materials, or downloading them. I've found that PBR materials (especially free ones) that are approximated often have a displacement map that doesn't have appropriate detail for its texture.

When making your material displacement you need to think somewhat like Terragen does about displacement. We'd start with our base, smoother geometry, then our geometry's basic definition, and then finally the fine, low-level surface detail. How you build each displacement map really depends on the base texture itself.
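
Here's a minimal sketch of that layered mix, in Python with NumPy and Pillow (the file names and the weights are illustrative assumptions, not the exact values from my material):

import numpy as np
from PIL import Image

def load_gray(path):
    # Load an image as float32 grayscale in 0..1.
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

base   = load_gray("disp_base.png")    # large, smooth shapes
shape  = load_gray("disp_shape.png")   # medium geometry definition
detail = load_gray("disp_detail.png")  # fine, low-level surface detail

# Weight each layer by how much height it should contribute, then
# renormalize to 0..1 so a standard shader displacement amplitude
# (e.g. Terragen's default 0.1) still behaves predictably.
combined = 1.0 * base + 0.35 * shape + 0.1 * detail
combined = (combined - combined.min()) / (combined.max() - combined.min())

Image.fromarray((combined * 255).astype(np.uint8)).save("disp_combined.png")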

Below is a PBR material I found a long time ago. I always liked the texture, but I figured that at only 1600x1600 it was just too small for use in Terragen. It doesn't look right at all, and the shapes are so smooth. I thought it was because of the texture it was based on, because of the shadows and highlights, like Ulco mentioned before.

[attachimg=1]

However, below I have generated 4 new displacement maps and combined them according to intensity: the base geometry (large shapes), a secondary base geometry (medium shapes), then geometry definition (defining the shape detail of the geometry), and then finally the surface-detail displacement map.

[attachimg=2]

Even at 1600x1600 the results look pretty darn good in Terragen (I mean seriously), though mind you I optimized for Terragen, including lowering the final displacement intensity to fit the default shader's 0.1 displacement.

Here is a comparison of the two displacement maps. The second is the one I newly generated.

[attachimg=3]

I hope this helps some people when making their new approximated PBR materials for TG or otherwise. Feel free to ask any questions; it's late. I remembered I told Bobby I'd make a post about this.
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 04, 2019, 02:14:09 PM
How did you generate the new maps?

- Oshyan
Title: Re: PBR Displacement Advice
Post by: RichTwo on July 04, 2019, 05:16:19 PM
PBR?    I don't even know what that refers to - Pabst Blue Ribbon?  Thus spoke the Terragen Village Idiot...
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 04, 2019, 05:21:26 PM
No, no Rich, Professional Bull Riders, jeez! ;)

- Oshyan
Title: Re: PBR Displacement Advice
Post by: archonforest on July 04, 2019, 05:44:30 PM
LOL!!!
Great job, WAS.
Title: Re: PBR Displacement Advice
Post by: Matt on July 04, 2019, 06:24:02 PM
Thank you Rich for giving me an excuse to say what I think Physically Based Rendering is all about. In my opinion there are two separate things which both call themselves PBR.

https://en.wikipedia.org/wiki/Physically_based_rendering

Historically, "physically based rendering" was about rendering techniques. Path tracing is an example of a physically based rendering technique.

More recently, game engines made an important shift to using the same kinds of shaders and textures that had previously only been possible in non-real-time photorealistic renderers. Some people who spearheaded this in the real-time world were calling this Physically Based Shading (PBS) because they recognized that they were not using true physically based rendering (although some game engines are converging on that goal now). Most people now call this PBR even if it only applies to the shaders/textures.

Bringing the game engine world into using the same kinds of shaders and textures as other renderers has made it much easier to share assets between real-time and non-real-time engines. The whole computer graphics world has benefited from this and there has been an explosion in the number of available assets and the tools to make them and work with them.

This has all fallen under the banner of "PBR".

And that's where we are today, with the term PBR applying to certain rendering methods (e.g. path tracing) but also some loosely-defined conventions about what PBR shaders and textures are. But there are some themes that are present in most PBR assets. Most importantly, they should react realistically to many different lighting conditions. When it comes to textures, you will usually see texture maps for roughness (or gloss), albedo (or base color), and either a reflection, specular or metalness map. Different engines/shaders prefer different combinations of these maps. You'll also see displacement maps, normal maps and bump maps. But these things were around long before we called them PBR, and they really shouldn't be called that, in my opinion.
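
For illustration, a typical metalness-workflow texture set might be organized like this (the file names are common conventions, not any formal standard):

# Illustration only: one common PBR texture-set convention.
pbr_material = {
    "albedo":       "rock_albedo.png",        # base color, no baked lighting
    "roughness":    "rock_roughness.png",     # 0 = glossy, 1 = rough
    "metalness":    "rock_metalness.png",     # 0 = dielectric, 1 = metal
    "normal":       "rock_normal.png",        # tangent-space fine detail
    "displacement": "rock_displacement.png",  # actual height information
}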

It's ironic, but things like AO maps are actually the *opposite* of PBR, yet they have become associated with the term because they are necessary for game engines to make some assets look good, so they are bundled with assets.
Title: Re: PBR Displacement Advice
Post by: WAS on July 04, 2019, 06:26:38 PM
(https://thumbs.gfycat.com/CloudyRectangularBison-max-1mb.gif)

Quote from: Oshyan on July 04, 2019, 02:14:09 PM
How did you generate the new maps?

- Oshyan

Honestly I had planned to use Materialize and PS, but I forgot I bought CrazyBump and should probably get my money's worth, as I remember I made a big stink about needing to purchase it a while back.  :-X

Thanks for the comments. I also learned a bit about delighting, and about repairing a delit image (most processes create artifacts, and the absolute levels 1 (white) and 0 (black) aren't handled by the delighting in any of the programs I've tried).
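
A rough sketch of that repair step in Python (the input name delit.png and the 5x5 neighborhood are arbitrary assumptions):

import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

img = np.asarray(Image.open("delit.png").convert("RGB"), dtype=np.float32) / 255.0

# Flag pixels the delighting pass blew out to absolute black or white.
bad = np.all(img <= 0.0, axis=-1) | np.all(img >= 1.0, axis=-1)

# Replace only the flagged pixels with a local per-channel median,
# a crude stand-in for Photoshop's heal/blur.
healed = img.copy()
healed[bad] = median_filter(img, size=(5, 5, 1))[bad]

Image.fromarray((healed * 255).astype(np.uint8)).save("delit_repaired.png")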

So I just went and grabbed a "bad" texture, one that you probably wouldn't want to use, and then got to work. Unfortunately I cannot share it due to the website's policy for their "free" textures. In fact I can't even link to the texture (no joke), but I can link to the main website (yeah, really, by their own technicality): https://www.highqualitytextures.com

But it goes to show what can be accomplished with 1k-2k images, not even upscaled besides being transformed to a 1:1 ratio for seamless texturing. This uses 3 displacement maps mixed together: your large geometry (the lifts), the lifts' definition (medium shapes), and then the surface detail of the stone itself (fine, sharp detail).

The texture itself is truly 1k, but has been made 2k by conforming to a 1:1 aspect ratio. Additionally, I of course performed some quality magic. :P

Quote from: Matt on July 04, 2019, 06:24:02 PM
Thank you Rich for giving me an excuse to say what I think Physically Based Rendering is all about. In my opinion there are two separate things which both call themselves PBR.

https://en.wikipedia.org/wiki/Physically_based_rendering

Historically, "physically based rendering" was about rendering techniques. Path tracing is an example of a physically based rendering technique.

More recently, game engines made an important shift to using the same kinds of shaders and textures that had previously only been possible in non-real-time photorealistic renderers. Some people who spearheaded this in the real-time world were calling this Physically Based Shading (PBS) because they recognized that they were not using true physically based rendering (although some game engines are converging on that goal now). Most people now call this PBR even if it only applies to the shaders/textures.

Bringing the game engine world into using the same kinds of shaders and textures as other renderers has made it much easier to share assets between real-time and non-real-time engines. The whole computer graphics world has benefited from this and there has been an explosion in the number of available assets and the tools to make them and work with them.

This has all fallen under the banner of "PBR".

And that's where we are today, with the term PBR applying to certain rendering methods (e.g. path tracing) but also some loosely-defined conventions about what PBR shaders and textures are. But there are some themes that are present in most PBR assets. Most importantly, they should react realistically to many different lighting conditions. When it comes to textures, you will usually see texture maps for roughness (or gloss), albedo (or base color), and either a reflection, specular or metalness map. Different engines/shaders prefer different combinations of these maps. You'll also see displacement maps, normal maps and bump maps. But these things were around long before we called them PBR, and they really shouldn't be called that, in my opinion.

It's ironic, but things like AO maps are actually the *opposite* of PBR, yet they have become associated with the term because they are necessary for game engines to make some assets look good, so they are bundled with assets.

I believe the full term is technically "Physically Based Renderer Materials". Historically, the sections of websites referred to them as "PBR Materials", and the materials themselves were designed off of real-world values and textures. It seems in the last few years "Material" has been dropped altogether instead of PBRM being used or something. Then we have "Photogrammetry PBR", which I think would be more appropriately called PBM or TOFBM: respectively, Photogrammetry Based Material and Time of Flight Based Material.

I am actually thinking of picking up a Huawei P30 phone because it has a 5 meter ToF sensor that works with several AR/photogrammetry apps. My LG G8 has a ToF sensor, but it's much smaller, has maybe a quarter-meter full-resolution distance, and has no API hook.
Title: Re: PBR Displacement Advice
Post by: Matt on July 04, 2019, 06:39:41 PM
Quote from: WASasquatch on July 04, 2019, 06:26:38 PM
I believe the full term is technically "Physically Based Renderer Materials". Historically, the sections of websites referred to them as "PBR Materials", and the materials themselves were designed off of real-world values and textures. It seems in the last few years "Material" has been dropped altogether instead of PBRM being used or something. Then we have "Photogrammetry PBR", which I think would be more appropriately called PBM or TOFBM: respectively, Photogrammetry Based Material and Time of Flight Based Material.

Yeah, there's nothing quite like a catchy TLA for everyone to jump onto!
Title: Re: PBR Displacement Advice
Post by: WAS on July 04, 2019, 06:46:33 PM
Quote from: Matt on July 04, 2019, 06:39:41 PM
Quote from: WASasquatch on July 04, 2019, 06:26:38 PM
I believe the full term is technically "Physically Based Renderer Materials". Historically, the sections of websites referred to them as "PBR Materials", and the materials themselves were designed off of real-world values and textures. It seems in the last few years "Material" has been dropped altogether instead of PBRM being used or something. Then we have "Photogrammetry PBR", which I think would be more appropriately called PBM or TOFBM: respectively, Photogrammetry Based Material and Time of Flight Based Material.

Yeah, there's nothing quite like a catchy TLA for everyone to jump onto!

Seriously. I still end up calling them just "textures" out loud, which confuses my friends when I'm working with all the other maps. Lol

I just wanted to fiddle around with some method to create nice displacement from approximation, as after reading some documentation on Google Scholar I found most people aren't even approaching it right -- and it's why programs like CrazyBump have "Displacement Mixers", though I'll be honest, I use Photoshop since layer blend modes such as "Overlay" come in handy.
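
For reference, the standard "Overlay" math for two grayscale maps in 0..1 (a is the base, b the blend layer) is easy to reproduce outside Photoshop:

import numpy as np

def overlay(a, b):
    # Darkens where the base is dark, brightens where it is light.
    return np.where(a < 0.5, 2.0 * a * b, 1.0 - 2.0 * (1.0 - a) * (1.0 - b))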

Most of the good "PBR" materials you see are generated in Substance Designer these days, which is why they are so crisp and clean. I find it annoying when I see "Physical" involved, as it's all fractal-based generation.
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 04, 2019, 06:58:06 PM
I'm still confused about your workflow though. If that's the source of the textures you used, all the ones I can see are just photos of a real rock or something. So you'd have to derive the bump/displacement maps, correct? If that's the case then the issue may really lie in how the software generates maps and what it expects apps to do with them. As you've found, even somewhat lower-resolution textures can give a decent result; it's partly a matter of the amount of detail and contrast encoded in the bump/displacement map.

In the case of downloaded textures that do include bump/displacement already, sometimes they will include both a Displacement Map *and* a Bump or Normal Map, the intention being that the larger-scale displacement is separated from the finer detail in the Bump/Normal Map. This is not how you want to do it in Terragen, and so those kinds of setups can be problematic when trying to bring that straight in. As I *think* you have found, you want to find a way to combine those multiple surface shape (rather than color or reflectivity) textures into one.
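
A minimal sketch of that combination, assuming grayscale maps (the 0.15 detail weight is purely illustrative):

import numpy as np
from PIL import Image

disp = np.asarray(Image.open("displacement.png").convert("L"), dtype=np.float32) / 255.0
bump = np.asarray(Image.open("bump.png").convert("L"), dtype=np.float32) / 255.0

# Center the bump around zero so it adds relief without shifting the
# overall height, then clip back into a displayable 0..1 range.
merged = np.clip(disp + 0.15 * (bump - 0.5), 0.0, 1.0)
Image.fromarray((merged * 255).astype(np.uint8)).save("merged_height.png")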

- Oshyan
Title: Re: PBR Displacement Advice
Post by: WAS on July 04, 2019, 08:02:34 PM
Quote from: Oshyan on July 04, 2019, 06:58:06 PM
I'm still confused about your workflow though. If that's the source of the textures you used, all the ones I can see are just photos of a real rock or something. So you'd have to derive the bump/displacement maps, correct? If that's the case then the issue may really lie in how the software generates maps and what it expects apps to do with them. As you've found, even somewhat lower-resolution textures can give a decent result; it's partly a matter of the amount of detail and contrast encoded in the bump/displacement map.

In the case of downloaded textures that do include bump/displacement already, sometimes they will include both a Displacement Map *and* a Bump or Normal Map, the intention being that the larger-scale displacement is separated from the finer detail in the Bump/Normal Map. This is not how you want to do it in Terragen, and so those kinds of setups can be problematic when trying to bring that straight in. As I *think* you have found, you want to find a way to combine those multiple surface shape (rather than color or reflectivity) textures into one.

- Oshyan

Oh no, the first image is an old free PBR material I got; I'm actually not sure where it's from, but it's clearly approximated (from a bitmap/PNG/etc.). The second image is my attempt at using any old image that has bad lighting.

And yes, you are correct, normal maps are used for fine-level displacement; however, that isn't a feature of Terragen for some reason. This is about optimizing for Terragen, which is perfectly capable of reading low-level displacement in a file, it just needs to be mixed in accordingly.

This is how new PBR materials are made in programs like Substance Designer: all your layers and masked fractals are baked into the displacement map. The normal map is pretty much redundant legacy support and really just "amplifies" the surface detail already there (game engines heavily rely on parallax now).

I honestly don't know what you're talking about; we all know contrast and detail make your displacement, and all these applications, including PS, are capable of any level of detail from diffuse. It's about how you mix different levels of displacement to create an appropriate vanilla map to use at a standard displacement value or thereabouts. That noted, with a lot of free materials that are approximated (they usually note this, like on CC0 Textures), there isn't much time spent here, or they're not aware of this workflow. So you often have one displacement map that was exported based on one level of detail/contrast.
Title: Re: PBR Displacement Advice
Post by: Matt on July 04, 2019, 10:50:57 PM
Aside from why most displacement maps suck, I think Oshyan was just asking how you made the extra detail. I'm curious too.
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 05, 2019, 12:23:12 AM
Yes. A basic summary of my thought/point is this: the original displacement map shown above seems to simply be low detail and/or low contrast. The image resolution does not appear to be the main problem or limitation. Thus it makes sense to generate more data in some way, sure. BUT if you start with a high-detail displacement map in the first place, it doesn't seem like there needs to be any special workflow. So I'm just trying to understand what the intention of this advice is here, particularly the "think like Terragen" bit. This just seems like low-detail displacement maps, nothing specific to do with Terragen *as far as I can see*. But maybe I'm missing something.

- Oshyan
Title: Re: PBR Displacement Advice
Post by: Dune on July 05, 2019, 01:40:11 AM
I think he added some detail based on the level of whiteness of the original 'soft' image in PS. Maybe a random noise, or the same texture shifted, tiled and reduced, or something.
If so, you can do the same in TG; just add some noise based on the color values of a low-quality image. Or a 'low quality' noise, for that matter (like wind wave ripples based on the upper whites of a color-driven wave displacement, resulting in smooth windless wave valleys, for instance).
Title: Re: PBR Displacement Advice
Post by: WAS on July 05, 2019, 02:27:31 AM
Quote from: Dune on July 05, 2019, 01:40:11 AM
I think he added some detail based on the level of whiteness of the original 'soft' image in PS. Maybe a random noise, or the same texture shifted, tiled and reduced, or something.
If so, you can do the same in TG; just add some noise based on the color values of a low-quality image. Or a 'low quality' noise, for that matter (like wind wave ripples based on the upper whites of a color-driven wave displacement, resulting in smooth windless wave valleys, for instance).

Nope. All approximated from diffuse. What I do in PS is adjust the maximum whiteness, since the CrazyBump build I'm currently using only has overall intensity; you cannot adjust levels by channel (white/black). Other than that, I just repair the diffuse after de-shadowing/de-lighting (blur/heal hard single pixels that are white/black).
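
The adjustment itself is just a Levels-style black/white point remap, trivial to apply outside the generator (the thresholds here are illustrative):

import numpy as np

def levels(x, black=0.0, white=0.9):
    # Remap so `black` maps to 0 and `white` maps to 1, clipping outside.
    return np.clip((x - black) / (white - black), 0.0, 1.0)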

Quote from: Oshyan on July 05, 2019, 12:23:12 AM
Yes. A basic summary of my thought/point is this: the original displacement map shown above seems to simply be low detail and/or low contrast. The image resolution does not appear to be the main problem or limitation. Thus it makes sense to generate more data in some way, sure. BUT if you start with a high-detail displacement map in the first place, it doesn't seem like there needs to be any special workflow. So I'm just trying to understand what the intention of this advice is here, particularly the "think like Terragen" bit. This just seems like low-detail displacement maps, nothing specific to do with Terragen *as far as I can see*. But maybe I'm missing something.

- Oshyan

I'm not sure what programs you use, but of the 3 free/freeware ones I've tried, they only ever achieve one level of detail intensity based on the original image. So a highly detailed displacement map alone would be nothing but jagged peaks in Terragen. This is why these same programs have built-in displacement map mixers, as I mentioned before, to do what I am explaining here that seems to somehow be lost in translation. Yes, you can get more detail into one displacement map, but that's by shifting texture influence and detail, which tightens slopes the more you go, so of course, why not mix different levels for more realism closer to what the texture is based on?

To try to illustrate what I am doing, here are 3 displacement maps, before mixing or Photoshop leveling, on a random image that I would mix together to form a map for Terragen...

Image: https://pixnio.com/textures-and-patterns/rock-stone-texture/rock-texture-land

[attach=1] [attach=2] [attach=3]

The first map may be duplicated, leveled multiple times, and mixed depending on the lift definition you want...

[attach=4] [attach=5]

And finally mix them into something...

[attach=6]

(mind you, I didn't delight this one, so there are holes from the shadows)

[attach=7]
Title: Re: PBR Displacement Advice
Post by: KirillK on July 05, 2019, 05:40:11 AM
Quote from: WASasquatch on July 04, 2019, 06:26:38 PM
respectively, Photogrammetry Based Material and Time of Flight Based Material

I am actually thinking of picking up a Huawei P30 phone because it has a 5 meter ToF sensor that works with several AR/photogrammetry apps. My LG G8 has a ToF sensor, but it's much smaller, has maybe a quarter-meter full-resolution distance, and has no API hook.


I wonder if anyone could really get any good material from ToF photogrammetry. I always thought it was a pretty low-precision thing. Is it really ready for material capture? What software works with that?

I do use Reality Capture and Agisoft Metashape, and they are both able to produce super hi-res meshes: hundreds of millions of polygons, down to the smallest, tiniest surface details, like asphalt grains or the veins in fallen leaves on the ground, for example. With a good enough camera and a properly captured series, at least. It takes hours to calculate, though, and baking it into a displacement map is a whole separate problem.

So I wonder, is ToF really useful yet?
Title: Re: PBR Displacement Advice
Post by: WAS on July 05, 2019, 05:55:10 AM
Quote from: KirillK on July 05, 2019, 05:40:11 AM
Quote from: WASasquatch on July 04, 2019, 06:26:38 PM
respectively, Photogrammetry Based Material and Time of Flight Based Material

I am actually thinking of picking up a Huawei P30 phone because it has a 5 meter ToF sensor that works with several AR/photogrammetry apps. My LG G8 has a ToF sensor, but it's much smaller, has maybe a quarter-meter full-resolution distance, and has no API hook.


I wonder if anyone could really get any good material from ToF photogrammetry. I always thought it was a pretty low-precision thing. Is it really ready for material capture? What software works with that?

I do use Reality Capture and Agisoft Metashape, and they are both able to produce super hi-res meshes: hundreds of millions of polygons, down to the smallest, tiniest surface details, like asphalt grains or the veins in fallen leaves on the ground, for example. With a good enough camera and a properly captured series, at least. It takes hours to calculate, though, and baking it into a displacement map is a whole separate problem.

So I wonder, is ToF really useful yet?

I don't understand. ToF is already replacing most antiquated 3D scanning technology because it operates much faster; most modern LIDAR is ToF. As for resolution, it's used to accurately map human faces for facial recognition, down to freckles and facial pock marks. Additionally, my LG can recognize the veins in my palm and its print. The only mention I see about resolution is from 2011.

The Nokia 9 uses a ToF sensor for its DoF and encodes its depth map into its JPEGs (https://imgur.com/AqvW2lY) at 16-bit depth (16x16 pixel shading).

This may be where some of the more commercial options are getting their resolution boosts from: https://ieeexplore.ieee.org/document/7335463 This was from 2015, well before my ToF was developed, as well as others in phones today.

Raw demonstration footage from LUCID (developed 2016-2017) shows pretty tightly packed point data that can be approximated easily: (https://d1d1c1tnh6i0t6.cloudfront.net/wp-content/uploads/2018/10/LUCID-Helios-Sony-DepthSense-IMX556PLR.gif)

I think that, with how fast it can process compared to traditional laser scanning, and just like laser scanning approximates, ToF scanning will be able to process more quickly, getting you to a final product faster; since computation of the data is PC/Mac-reliant, it can happen live as you collect data.
Title: Re: PBR Displacement Advice
Post by: KirillK on July 05, 2019, 07:14:06 AM
Quote from: WASasquatch on July 05, 2019, 05:55:10 AM
As for resolution, it's used to accurately map human faces for facial recognition, down to freckles and facial pock marks. Additionally, my LG can recognize the veins in my palm and its print. The only mention I see about resolution is from 2011.

Could you show what you are able to do with ToF material-wise, please? A render, maybe? With an actual reconstructed surface.

That "hand" gif clearly shows low precision, with all that random bumpiness even with a very dense point cloud. Could it really make recognizable veins and pocks?

As for LIDAR scanners, they mostly use point-by-point laser beam scanning with a rotating mirror, at least the ones I saw, and if I am not wrong, ToF is just one shot.



I mean, could ToF cameras do something like this: https://www.dropbox.com/s/xcsehxleves6ob7/dirt.jpg?dl=0

I am not trying to question the prospects of ToF technology, just trying to figure out if it's useful right now.
Title: Re: PBR Displacement Advice
Post by: WAS on July 05, 2019, 02:03:15 PM
Quote from: KirillK on July 05, 2019, 07:14:06 AM
Quote from: WASasquatch on July 05, 2019, 05:55:10 AM
As for resolution, it's used to accurately map human faces for facial recognition, down to freckles and facial pock marks. Additionally, my LG can recognize the veins in my palm and its print. The only mention I see about resolution is from 2011.


Could you show what you are able to do with ToF material-wise, please? A render, maybe? With an actual reconstructed surface.

That "hand" gif clearly shows low precision, with all that random bumpiness even with a very dense point cloud. Could it really make recognizable veins and pocks?

As for LIDAR scanners, they mostly use point-by-point laser beam scanning with a rotating mirror, at least the ones I saw, and if I am not wrong, ToF is just one shot.

Honestly, not sure how you can't recognize that one frame from a ToF has more point data than a linear laser scan, to be used as point data...

I mean, could ToF cameras do something like this: https://www.dropbox.com/s/xcsehxleves6ob7/dirt.jpg?dl=0

I am not trying to question the prospects of ToF technology, just trying to figure out if it's useful right now.

I don't think you understand how point data is collected and used for resolution... A live video feed from a ToF sensor isn't 180 passes a second fed into an algorithm rebuilding a material. It's also in a fixed position, not scanning; the hand is only slightly moving. But in reality there's far more data in one frame of the video than in a linear laser scan's beam. Then multiply that by 180 shots a second.

As already stated, I am looking into getting a Huawei with a ToF sensor that GAPI can hook into; the LG ToF is proprietary. If you want to see the results, browse Sketchfab for the many Huawei P30 Pro photogrammetry materials where ARCore and 3DScan use the ToF. Most photogrammetry on that website is from phones rather than scanned by a company (rarer).

And I'm not sure why you are arguing this, as it's no secret the resolution is strong enough to create biorecognition prints down to skin imperfections and veins below the surface, all in less than a second or two for completion of the task to gather direct field data.

Also, all scanning methods outside a professional, expensive 3D scanner require editing to fix mesh issues. Even with standard photogrammetry techniques you'll need to gather all this data manually, which is why ToF sensors are starting to be integrated into software like ARCore, as it aids this process exponentially by providing depth sensing outside of photo processing and stitching to approximate depth.

Title: Re: PBR Displacement Advice
Post by: KirillK on July 05, 2019, 06:44:40 PM
Quote from: WASasquatch on July 05, 2019, 02:03:15 PM

I don't think you understand how point data is collected and used for resolution... A live video feed from a ToF sensor isn't 180 passes a second fed into an algorithm rebuilding a material. It's also in a fixed position, not scanning; the hand is only slightly moving. But in reality there's far more data in one frame of the video than in a linear laser scan's beam. Then multiply that by 180 shots a second.

As already stated, I am looking into getting a Huawei with a ToF sensor that GAPI can hook into; the LG ToF is proprietary. If you want to see the results, browse Sketchfab for the many Huawei P30 Pro photogrammetry materials where ARCore and 3DScan use the ToF. Most photogrammetry on that website is from phones rather than scanned by a company (rarer).

And I'm not sure why you are arguing this, as it's no secret the resolution is strong enough to create biorecognition prints down to skin imperfections and veins below the surface, all in less than a second or two for completion of the task to gather direct field data.

Also, all scanning methods outside a professional, expensive 3D scanner require editing to fix mesh issues. Even with standard photogrammetry techniques you'll need to gather all this data manually, which is why ToF sensors are starting to be integrated into software like ARCore, as it aids this process exponentially by providing depth sensing outside of photo processing and stitching to approximate depth.



I can admit I don't understand, as with much of what you are saying too, sorry. And I am not arguing, I just want to see a surface reconstructed with a ToF camera. I couldn't find anything from Huawei on Sketchfab or anywhere.

However, I was able to find Nokia 9 depth maps here: https://onedrive.live.com/?authkey=%21AOmSLg2vqdKCpqc&id=6FF84EEABE79B8A6%215226&cid=6FF84EEABE79B8A6
Someone posted them, and what I see is not even close to regular photogrammetry. They're merely enough for a DOF effect, and that's all.

You are saying there is an algorithm rebuilding a material, good enough for veins and skin imperfections? Could you post a link please, a picture, something? Or is ToF photogrammetry not ready yet, and it's only a theoretical possibility?

Sorry, I don't understand what data you mean with standard photogrammetry techniques. Usually it's just a photo series.
Here is a concrete block of 34 million triangles in Reality Capture. Not much manual mesh editing is required if you shoot it the right way.
Title: Re: PBR Displacement Advice
Post by: WAS on July 05, 2019, 07:39:52 PM
Quote from: KirillK on July 05, 2019, 06:44:40 PM
Quote from: WASasquatch on July 05, 2019, 02:03:15 PM

I don't think you understand how point data is collected and used for resolution... A live video feed from a ToF sensor isn't 180 passes a second fed into an algorithm rebuilding a material. It's also in a fixed position, not scanning; the hand is only slightly moving. But in reality there's far more data in one frame of the video than in a linear laser scan's beam. Then multiply that by 180 shots a second.

As already stated, I am looking into getting a Huawei with a ToF sensor that GAPI can hook into; the LG ToF is proprietary. If you want to see the results, browse Sketchfab for the many Huawei P30 Pro photogrammetry materials where ARCore and 3DScan use the ToF. Most photogrammetry on that website is from phones rather than scanned by a company (rarer).

And I'm not sure why you are arguing this, as it's no secret the resolution is strong enough to create biorecognition prints down to skin imperfections and veins below the surface, all in less than a second or two for completion of the task to gather direct field data.

Also, all scanning methods outside a professional, expensive 3D scanner require editing to fix mesh issues. Even with standard photogrammetry techniques you'll need to gather all this data manually, which is why ToF sensors are starting to be integrated into software like ARCore, as it aids this process exponentially by providing depth sensing outside of photo processing and stitching to approximate depth.


I can admit I don't understand, as with much of what you are saying too, sorry. And I am not arguing, I just want to see a surface reconstructed with a ToF camera. I couldn't find anything from Huawei on Sketchfab or anywhere.

However, I was able to find Nokia 9 depth maps here: https://onedrive.live.com/?authkey=%21AOmSLg2vqdKCpqc&id=6FF84EEABE79B8A6%215226&cid=6FF84EEABE79B8A6
Someone posted them, and what I see is not even close to regular photogrammetry. They're merely enough for a DOF effect, and that's all.

You are saying there is an algorithm rebuilding a material, good enough for veins and skin imperfections? Could you post a link please, a picture, something? Or is ToF photogrammetry not ready yet, and it's only a theoretical possibility?

Sorry, I don't understand what data you mean with standard photogrammetry techniques. Usually it's just a photo series.
Here is a concrete block of 34 million triangles in Reality Capture. Not much manual mesh editing is required if you shoot it the right way.

Granted, the field is just starting to open up to the average consumer, and even then barely. The API to hook into rear-facing ToF sensors just came out this year, but it's been a hot topic, and it isn't hard to find lots of discussion on it, even comparisons between all the formats from years ago; I'll try to find a link to that. But here is some other stuff. As for Sketchfab, you won't really know unless they tell you it was taken on a supporting phone. And yes, as already noted, the Nokia 9 is used for DOF.

https://www.androidauthority.com/lg-g8-thinq-vein-recognition-956358/amp/
https://www.businessinsider.com/lg-g8-smartphone-unlocks-with-hand-id-vein-palm-recognition-2019-2
What's shown here is proprietary as well: https://www.laserfocusworld.com/detectors-imaging/article/16555309/facial-recognition-3d-tof-camera-technology-improves-facial-recognition-accuracy-and-security
Here is an old comparison between scanning types I was talking about, the first being ToF: https://www.researchgate.net/figure/Experimental-results-for-small-objects-at-a-distance-under-outdoor-illumination-a-Top_fig8_316026814
And https://www.researchgate.net/figure/Comparison-of-time-of-flight-ToF-and-photometric-stereo-methods-a-shows-the-target_fig1_323592356

That's all I'm really going to post on the subject matter. I've done enough digging myself to know it's a pioneering field, especially with mixing the fields like in the third link above.

And I'm not sure what keywords you are using when researching, but there is a whole lot on ToF and environment/object scanning.

And I'm not here to prove why I'd like to get involved with something new and get a sensor to fiddle with the API and ARCore with full features.
Title: Re: PBR Displacement Advice
Post by: KirillK on July 06, 2019, 06:24:03 AM
Thanks a lot WASasquatch, I just got the wrong impression that ToF photogrammetry is already kind of available, specifically with the new phone generation.

The general question of whether ToF sensors would be useful for 3D material reconstruction has been crossing my mind for more than a decade already. But each time I try to find any new achievements regarding this, it's always just something smooth-shaped and not detailed enough.

Even this LG vein recognition seems to be not exactly shape/surface reconstruction, but rather a picture of "infrared absorption", as they say. So not actually time of flight, but rather just using its infrared source to receive back what the palm skin is absorbing. After all, a palm surface has no prominent veins at all.

I hope it's probably just a field that hasn't found much focus yet, I mean tiny surface detail shaping and reconstruction. Maybe it's rather that the software and processing necessary are not for phones or mass-market interests outside of the area, not ToF technology limitations.

But I am looking forward to this too. It's something promising indeed.
Title: Re: PBR Displacement Advice
Post by: WAS on July 06, 2019, 12:48:11 PM
Quote from: KirillK on July 06, 2019, 06:24:03 AM
Thanks a lot WASasquatch, I just got the wrong impression that ToF photogrammetry is already kind of available, specifically with the new phone generation.

The general question of whether ToF sensors would be useful for 3D material reconstruction has been crossing my mind for more than a decade already. But each time I try to find any new achievements regarding this, it's always just something smooth-shaped and not detailed enough.

Even this LG vein recognition seems to be not exactly shape/surface reconstruction, but rather a picture of "infrared absorption", as they say. So not actually time of flight, but rather just using its infrared source to receive back what the palm skin is absorbing. After all, a palm surface has no prominent veins at all.

I hope it's probably just a field that hasn't found much focus yet, I mean tiny surface detail shaping and reconstruction. Maybe it's rather that the software and processing necessary are not for phones or mass-market interests outside of the area, not ToF technology limitations.

But I am looking forward to this too. It's something promising indeed.

Did you read the articles? Your idea of resolution is not part of the final image; even in the facial recognition software, what it reads in a second is far more accurate and detailed than what you will get in minutes of scanning all angles of someone's face. Fed into algorithms (just like with laser scanning), we can generate highly accurate depth maps. Surface detail isn't even recorded with consumer laser scanning, so I'm not sure what your gripe is there.

And yes, as of THIS year, ToF sensors are starting to be seen in phones, as mentioned several times; the API to even use those sensors through Android was also just released. Apple has none yet.

All the reasons why it is good for depth sensing (and why it is a depth sensor) and becoming a hot tech field are also pretty easy to see. Inherently you can gather 100x the data in a second compared to a scanner... A second. Not minutes of scanning every possible surface angle so the laser isn't confused by any refraction or angles. A ToF sensor has much better diffusion and refraction recognition than directly bounced lasers.
Title: Re: PBR Displacement Advice
Post by: KirillK on July 06, 2019, 04:49:25 PM
I mostly compare it not with what point-by-point LIDARs do, but rather with regular photogrammetry done from the parallax in a series of photos, without any laser at all. Photo series from new 50 Mpix cameras can produce unbelievably detailed surfaces with almost zero noise errors. I use a 19 Mpix Foveon matrix camera and it makes super crispy geometry, down to tiny pores and small cracks. Sometimes 300-500 million triangles per square meter.
The disadvantages: it takes lots of time, RAM, and CPU/GPU power, needs a monster of a PC, and works fine with static subjects only, obviously.

I read the articles, but not a single one demonstrates even what a typical LIDAR scanner could do with enough time for processing at a close 2-3 meter distance. Leicas, for example. (With static subjects too, obviously.)

Anything I see is just a more or less low-res human face, probably enough for face recognition, but IMO not enough for quality material displacement maps.

So I wonder if it's something ToF cameras couldn't do, or whether such a low-res shape is a result of real-time capturing with not enough scan iterations, processing time, or something?

And why are ToFs such small resolution, 400x300 pixels or something? Could we expect them in quality pro cameras, not cell phones only?
Title: Re: PBR Displacement Advice
Post by: WAS on July 06, 2019, 05:11:37 PM
See, this is where you're missing the point. Both collect point data... one does it astronomically faster... You said it yourself.

Quote from: KirillK
I read the articles, but not a single one demonstrates even what a typical LIDAR scanner could do with enough time for processing at a close 2-3 meter distance. Leicas, for example. (With static subjects too, obviously.)

(https://img.laserfocusworld.com/files/base/ebm/lfw/image/2018/07/content_dam_lfw_print_articles_2018_06_1806lfw_wn_5.png?auto=format&h=640&w=640)

Whereas this can be achieved in a mere second, just for facial recognition, from one fixed position... The camera collects a point cloud of the face in real time and the CPU (of a flipping phone, for crying out loud) calculates it in real time into a 3D depth map of a person's face, and then compares. I don't understand how you don't see the potential here, really don't. Lol These cameras can be used for longer than a second and moved around, using the spatial awareness of phones' gyroscopes too. When the emission distance is more than a hobbyist's consumer field, it will be picked up and rigged into handheld and tethered scanners, trust me. Lol

I don't think you're looking deep enough at what it offers. When you're up close and personal with objects you can achieve a lot, and the ToF sensor range is only increasing, phones alone achieving max resolution at 5 m (from like 1 m just a year or two ago). That's pretty far for consumer scanning...

Most consumer 3D scanners using lasers have a quantum efficiency of less than 25% at 0.7 mm scale (which is why most models are putty-like). The facial recognition demonstration I linked has a quantum efficiency of 50% at 0.13 mm.

No offense, but I don't think you're comprehending the data between the formats. Here is a document from Thor, a popular consumer 3D scanner, and it's of substantially lower quality... http://thor3dscanner.com/what-is-%E2%80%9Cresolution%E2%80%9D-in-a-3d-scanner-and-why-is-it-important

I think you're glossing through and taking image examples as your proof of its limits, and not what it's demonstrating.

And your image example above is extremely biased, because it is also baked with approximation-based normal mapping for its roughing, and also incorporates its texture for the illusion of detail in the final product.
Title: Re: PBR Displacement Advice
Post by: KirillK on July 06, 2019, 05:37:20 PM
My goal is material displacement/normal map capturing. Real-time speed is good indeed, but only for living things, and I'm mostly interested in static subjects.
I have had experience only with the super-expensive Leica LIDAR that a company I work for uses. And even it couldn't make anything close to Reality Capture photogrammetry, except that LIDAR makes a more accurate macro shape, while traditional photogrammetry, while doing super cool tiny details, sometimes makes macro errors: something flat may be slightly bent, and so on.

I guess for material capture it's rather important not how far away the sensor can see, but rather how many depth gradations it can record. I bet the further it sees, the more stepped it might be. The Nokia 9 makes 1200 depth steps/layers, as I read from one of your links. That's not that much, actually.
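
(Illustrative arithmetic, assuming those 1200 steps span a 5 m range uniformly: 5 / 1200 ≈ 0.004 m, so roughly 4 mm per depth step, far coarser than the sub-millimeter relief a good displacement map needs.)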

Here is that block in ZBrush, and it's optimized to 19 million triangles. Done with an old, not very hi-res Nikon camera; the surface could actually be much more detailed. No normal map; it's actually the source to bake a normal map from.

Again, I am not trying to question the virtues of the ToF approach. Something real-time, working at the same time you do the shots, is super cool indeed.
I am just trying to figure out if the approach can give a comparable level of surface accuracy and geometry crispness, enough to bake normal maps from. Even if not, I bet it could still be useful, with micro surface details added by means of CrazyBump or something.
I just want to see something to compare, not as ugly as the Nokia 9 examples that I found.
Title: Re: PBR Displacement Advice
Post by: WAS on July 07, 2019, 01:45:39 PM
Quote from: KirillK on July 06, 2019, 05:37:20 PM
My goal is material displacement/normal map capturing. Real-time speed is good indeed, but only for living things, and I'm mostly interested in static subjects.
I have had experience only with the super-expensive Leica LIDAR that a company I work for uses. And even it couldn't make anything close to Reality Capture photogrammetry, except that LIDAR makes a more accurate macro shape, while traditional photogrammetry, while doing super cool tiny details, sometimes makes macro errors: something flat may be slightly bent, and so on.

I guess for material capture it's rather important not how far away the sensor can see, but rather how many depth gradations it can record. I bet the further it sees, the more stepped it might be. The Nokia 9 makes 1200 depth steps/layers, as I read from one of your links. That's not that much, actually.

Here is that block in ZBrush, and it's optimized to 19 million triangles. Done with an old, not very hi-res Nikon camera; the surface could actually be much more detailed. No normal map; it's actually the source to bake a normal map from.

Again, I am not trying to question the virtues of the ToF approach. Something real-time, working at the same time you do the shots, is super cool indeed.
I am just trying to figure out if the approach can give a comparable level of surface accuracy and geometry crispness, enough to bake normal maps from. Even if not, I bet it could still be useful, with micro surface details added by means of CrazyBump or something.
I just want to see something to compare, not as ugly as the Nokia 9 examples that I found.

You're really stuck on the Nokia 9, which has already been noted, in its own article, to be used for DOF recognition. All the ToF is doing on the Nokia 9 is double-checking and correcting the depth approximated from the camera array, so there aren't any errors from image-based approximation, as the ToF can tell it "No, that's actually in the background/foreground".

And as I've explained, this is used right now as an AID to photogrammetry. Even on the Huawei it's used as an aid for ARCore's spatial recognition and depth sensing, making sure what is applied through imaging is accurate (much like the Nokia 9). The API was just released, as I mentioned several times (it's getting old now; at this point it's just ignorance); heck, we haven't even seen ANY official ToF-based AR applications released yet that were targeted for mid-2019.

You need to cool your jets, and either appreciate a new field or move on. Lol I know the potential here has been gone over in various places. How you can't see that an accurate face map done in a second, as opposed to 2-5 minutes of scanning all angles and re-scanning errors, is outstanding and opening up a whole new avenue is beyond me. At this point it must just be arrogance/ignorance.

I'm not here to prove anything, again, or why I want to tinker with the Google ToF API and look forward to applications to calculate point data outside of proprietary tech like facial recognition, or even just play with AR. Really not. And I don't want to have an argument about it because you're obsessed with antiquated techniques that are broken down in an article I shared, which explains why mixing these technologies is the future.

And how you are still caught up on detail is still beyond me, for something improving day and night, with almost a 10x gain in resolution in a year... If the ToF in its raw, basic form on the LG can distinguish veins below the surface, and recognize a face down to moles and freckles, it already has a lot of detail. For example, my fiancee cannot unlock her phone with foundation on, as she covers her beauty marks that the software is specifically using as a unique identifier.

In general, from looking at the export of the man's face as a depth map alone, I can tell it's reading an unprecedented amount of surface detail in a second, without scanning. I'm sorry you can't. Even with a blurred and highly compressed JPEG. And to note, a depth map is not a displacement map; depth maps are intentionally smooth.

When the field opens up, I'd love to show you all it can do (even though it's been doing it for a while, such as in the article I gave you on object scanning and its use alone, plus with other mediums for accuracy from years ago, and pretty good for such low resolution).

Also, it still seems that object is using normal mapping from the distortion of detail by angle. And I'm going to assume it's not scanned with a 15k scanner, and thus that surface detail is likely approximated from images. Almost all consumer scanners for hobbyists use a lot of approximations, even in mesh building, but the detail is all from images. The resolution of most scanners out today is LESS than the ToFs we have covered (again, the Thor has a quantum efficiency of less than 25% at 0.7 mm scale); that's HALF the resolution of the ToF I compared to. And that's an expensive scanner. The meshes were pretty putty-like without approximating detail from images.
Title: Re: PBR Displacement Advice
Post by: WAS on July 07, 2019, 02:03:34 PM
Since you're still caught up on mesh examples, here is a mesh example from last year, from Sony's ToF. Mind you, again, this is a mesh created from only a second or two of emission, not scanning from all angles over the course of minutes or more.

https://www.unifore.net/product-highlights/sony-released-3d-bsi-tof-image-sensor-imx456ql.html

Again, remember, this is only a mesh based on depth; depth maps do not incorporate surface detail, as that would interfere with meshing (just like 3D scanners).

Surface detail isn't really a concern, as, as I've mentioned, for most of us this is done via approximation, not actual lasers scanning every bit of surface detail. That's out of most consumers' reach and usually actually CT scanning, not laser scanning. The 3D scan of Nefertiti's head, for example, was done with a hand-held CT scanner, not a 3D scanner, to achieve the actual skull and skin detail needed for reconstruction.
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 07, 2019, 02:17:34 PM
Guys, this discussion has moved out of the realm of productive and friendly discourse. WAS, Kirill appears to simply not see what you're seeing, but to call him arrogant or ignorant is needlessly inflammatory. The fact that you have yet to provide a link to a specific, high-detail model created by ToF (no, the Sony example doesn't cut it) would seem to indicate that it is still not ready for "prime time". Maybe you're right and it will prove itself very soon, but wait until it does and then you can demonstrate clearly. In the meantime, accept that you have a difference of perspective and move on.

- Oshyan
Title: Re: PBR Displacement Advice
Post by: WAS on July 07, 2019, 02:26:06 PM
Quote from: Oshyan on July 07, 2019, 02:17:34 PM
Guys, this discussion has moved out of the realm of productive and friendly discourse. WAS, Kirill appears to simply not see what you're seeing, but to call him arrogant or ignorant is needlessly inflammatory. The fact that you have yet to provide a link to a specific, high-detail model created by ToF (no, the Sony example doesn't cut it) would seem to indicate that it is still not ready for "prime time". Maybe you're right and it will prove itself very soon, but wait until it does and then you can demonstrate clearly. In the meantime, accept that you have a difference of perspective and move on.

- Oshyan

IMO it's due. Going around like a broken record simply asserts that definition.

And it's funny how many times I've noted this is a brand-new field I want to get involved with, not a standardized field, or even one that's STARTED.

And yes, it does cut it, Oshyan. What is your excuse to refute that? It is a highly accurate 3D mesh created from a second of exposure. Please don't play arrogant as well. These are the steps any of these fields have taken in their infancy, and this shows unprecedented speed and accuracy over 3D scanning. That's just fucking inherent, Oshyan, both in science and in testing application. It's not even something to argue over. It's there, and proves itself. The fact you aren't familiar with what it takes to create a mesh or 3D model doesn't refute any of this.
Title: Re: PBR Displacement Advice
Post by: WAS on July 07, 2019, 02:31:40 PM
There's even a very basic misconception about the level of detail 3D scanning can accomplish without approximation, or without a 10-20 thousand dollar CT scanner (which is technically not 3D scanning and another field altogether, like ToF).

I've already posted an article on the Thor model, which was a high-quality scanner (they have a new model), and it wasn't even capable of the level of resolution of a modern ToF, by leaps and bounds: 25% quantum efficiency at 0.7 mm scale for the Thor, compared to a quantum efficiency of 50% at 0.13 mm scale. That speaks for itself. Just because it's not widely public for people to do what they want with the APIs doesn't mean it's not possible, or that it isn't why I'm ecstatic (like many). It's cost-effective for higher detail and performance.

The putty vase model scanned is likely even larger than the person scanned for the facial recognition, the vase itself probably being larger than the man's head, and we can immediately see the quality differences between the meshes... the facial recognition mesh is of far superior quality.
Title: Re: PBR Displacement Advice
Post by: Oshyan on July 07, 2019, 02:49:56 PM
WAS, I've asked you to drop the topic and you've responded by calling me arrogant and swearing at me. That's not acceptable behavior here. We value your contributions and want you to be able to continue to be a positive member of this community. Please take a step back and calm down.

I'm locking this topic now.

- Oshyan