Quote from: KirillK on July 06, 2019, 05:37:20 PM
My goal is material displacement / normal map capture. Real-time speed is indeed good, but mostly for living things, and I'm mostly interested in static subjects.
My only experience is with the super-expensive Leica LIDAR the company I work for uses. Even that couldn't produce anything close to RealityCapture photogrammetry, except that the LIDAR captures a more accurate macro shape, while traditional photogrammetry, for all its superb tiny details, sometimes makes macro errors: something flat may come out slightly bent, and so on.
I guess for material capture what matters is not how far away the sensor can see, but rather how many depth gradations it can record. I bet the farther it sees, the more stepped the result might be. The Nokia 9 makes 1,200 depth steps/layers, as I read in one of your links. That's not that much, actually.
Here is that block in ZBrush, optimized down to 19 million polygons. It was done with an old, not very high-res Nikon camera; the surface could actually be much more detailed. No normal map; it's actually the source to bake a normal map from.
Again, I am not trying to question the virtues of the ToF approach. Something that works in real time, while you take the shots, is super cool indeed.
I am just trying to figure out whether the approach could give a comparable level of surface accuracy and geometric crispness: enough to bake normal maps from. Even if not, I bet it could still be useful, with micro surface detail added by means of CrazyBump or something similar.
I just want to see something to compare that isn't as ugly as the Nokia 9 examples I found.
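On the 1,200 depth steps/layers figure quoted above, here's a quick back-of-the-envelope sketch of what that actually buys. This is a minimal illustration only, assuming a uniform quantizer and a 0.2-1.2 m working range (my guess for a phone-scale ToF, not a published spec):

```python
# Rough depth quantization estimate: a sensor that encodes its working
# range into a fixed number of steps can't record surface gradations
# finer than one step. Range and uniformity here are assumptions, not specs.

def depth_step_size(near_m: float, far_m: float, levels: int) -> float:
    """Smallest depth difference a uniform quantizer can distinguish."""
    return (far_m - near_m) / levels

step = depth_step_size(near_m=0.2, far_m=1.2, levels=1200)
print(f"~{step * 1000:.2f} mm per depth step")  # ~0.83 mm over a 1 m span
```

Under those assumptions, each step is a bit under a millimeter at arm's length: coarse next to photogrammetry's micro detail, but fine for macro shape, which is exactly the split being argued here.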
You're really stuck on the Nokia 9, which has already been noted, and which, per its own article... is used for DoF recognition. All the ToF is doing on the Nokia 9 is double-checking and correcting the depth approximated from the camera array, so there aren't any errors from image-based approximation: the ToF can tell it "No, that's actually in the background/foreground."
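To make that "double-checking" concrete, here's a minimal sketch of the idea; the actual fusion in shipping phones is proprietary, and the 5 cm disagreement threshold is my assumption for illustration:

```python
import numpy as np

# Keep the detailed image-based (camera-array) depth, but fall back to the
# coarser ToF reading wherever the two disagree badly, i.e. where the
# image-based estimate put a pixel in the wrong depth plane (fg/bg mix-up).

def correct_with_tof(image_depth: np.ndarray,
                     tof_depth: np.ndarray,
                     max_disagreement_m: float = 0.05) -> np.ndarray:
    """Both inputs are (H, W) depth maps in meters; returns corrected depth."""
    outliers = np.abs(image_depth - tof_depth) > max_disagreement_m
    corrected = image_depth.copy()
    corrected[outliers] = tof_depth[outliers]  # trust ToF on the mix-ups
    return corrected
```

The point is that the ToF doesn't replace the image-based depth; it arbitrates it, which is the "aid" role described below.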
And as I've explained, this is used right now as an AID to photogrammetry. Even on the Huawei it's used as an aid for ARCore's spatial recognition and depth sensing, making sure what is derived through imaging is accurate (much like the Nokia 9). The API was just released, as I've mentioned several times (this is getting old now; at this point it's just ignorance); heck, we haven't even seen ANY official ToF-based AR applications released yet that were targeted for mid-2019.
You need to cool your jets: either appreciate a new field or move on. Lol, I know the potential here has been gone over in various places. How you can't see that an accurate face map done in a second, as opposed to 2-5 minutes of scanning all angles and re-scanning errors, is outstanding and opens up a whole new avenue is beyond me. At this point it must just be arrogance/ignorance.
I'm not here to prove anything, again, or to justify why I want to tinker with the Google ToF API, why I look forward to applications that calculate point data outside of proprietary tech like facial recognition, or why I want to just play with AR. Really not. And I'm not here to have an argument about it because you're obsessed with antiquated techniques; the article I shared breaks down those techniques and why mixing these technologies is the future.
And how you are still caught up on detail is beyond me, for something improving day and night, with almost a 10x gain in resolution in a year... If the ToF in its raw, basic form on the LG can distinguish veins below the surface and recognize a face down to moles and freckles, it already captures a lot of detail. For example, my fiancée cannot unlock her phone with foundation on, because it covers the beauty marks the software is specifically using as a unique identifier.
In general, just from looking at the export of the man's face as a depth map, I can tell it's reading an unprecedented amount of surface detail in a second, without scanning; I'm sorry you can't, even from a blurred and highly compressed JPEG. And to note: a depth map is not a displacement map; depth maps are intentionally smooth. When the field opens up, I'd love to show you all it can do (even though it's been doing it for a while, as in the article I gave you on object scanning, both on its own and paired with other mediums for accuracy, from years ago, and pretty good for such low resolution).
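And since the depth-vs-displacement distinction keeps coming up: a depth map can still feed the normal-map workflow by treating it as a height field and differentiating it, which is roughly the CrazyBump-style trick mentioned in the quote. A minimal sketch; the strength factor is an assumed, tweak-to-taste parameter:

```python
import numpy as np

# Treat a depth map as a height field and take screen-space gradients to
# build a tangent-space normal map. The result is only as crisp as the
# depth feeding it.

def normals_from_depth(depth: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """depth is an (H, W) float array; returns an (H, W, 3) map in [0, 1]."""
    dzdx = np.gradient(depth, axis=1) * strength  # horizontal slope
    dzdy = np.gradient(depth, axis=0) * strength  # vertical slope
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # normalize to unit length
    return n * 0.5 + 0.5  # pack [-1, 1] into [0, 1] for an RGB texture
```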
Also, it still seems that object is using normal mapping derived from the distortion of detail by angle, and I'm going to assume it wasn't scanned with a 15k scanner, so that surface detail is likely approximated from images. Almost all consumer scanners for hobbyists use a lot of approximations, even in mesh building, but the detail is all from images. The resolution of most scanners out today is LESS than that of the ToFs we have covered (again, the Thor has a quantum efficiency of less than 25% at 0.7mm scale); that's HALF the resolution of the ToF I compared to. And that's an expensive scanner. The meshes were pretty putty-like without detail approximated from images.