Photogrammetry

Started by Kadri, April 01, 2021, 09:06:20 AM

Previous topic - Next topic

WAS

Wow that's pretty darn cheap, especially considering the prices on turntables meant for heavy stuff. Whoa.

Kadri

Quote from: Kadri on April 03, 2021, 01:31:11 PM
I terminated my 3DCoat test as the progress bar was barely moving. I waited for half a day.
At least it didn't crash earlier this time.
I used a 1.1 GB OBJ file for this. The files I used some years ago were smaller, around 500 MB or so.
...

About 3DCoat: I should be clearer. I was using retopology, not just opening the file. That could make a difference.

David

Quote from: WAS 03/04/2021, 18:23:18

    'I still see plenty of shadows in his raw, and this is what tells programs to create geometry based on lighting depth.'

My mistake, WAS, for posting an OpenGL preview rather than an actual original RAW image. So here, side by side, is one of the images used to create the rockface on the right. All 300 images were just like this one: totally flat, almost albedo-like. Notice that the overhangs and crevices are all well lit, so the 3D renderer is able to project its own shadows without having to fight any ambient shadows in the scene.


Rockface.jpg

WAS

Oh, that makes much more sense. It matches what I saw with Kadri's tests: the lack of surface depth means the programs only really capture boundary geometry where the surface actually curves away from the camera and protrudes against the background. The subtler surface geometry just isn't present because it's not really seen until it rotates away from the camera. And, as I was saying, I do notice geometry being mistaken from texture alone (without the geometry of that area). In TG this is bad because it's hard to conform textures to geometry, and textures are often flat relative to the object (similar to the rock in this thread with a random PF), which completely hides the surface geometry.

aknight0

I did a lot of work with Terragen and photogrammetry a few years back.  It sort of works.  What we were doing was importing objects, rendering images of them, and then trying to reconstruct the object with photogrammetry.  Then you can use the original object as an easy baseline to compare against.  We were testing photogrammetry software and image planning, so we were more interested in relative accuracy, but we weren't ever able to create any models that looked as good as the originals.  

The structure-from-motion algorithm that most photogrammetry software uses works by matching points in the images.  If it can identify the same point on the surface in several images from different angles, it can calculate the 3D position.  To make that easier, we'd often apply a high-frequency power fractal over the surface to give more easily identifiable points to match.  That helped with getting better geometry in the end result, but it won't help if texture is your end goal.  
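The "same point seen from several angles gives a 3D position" step is triangulation. As a rough illustration only (not how any particular photogrammetry package is implemented internally), here is a minimal linear (DLT) triangulation sketch in Python with NumPy; the camera matrices and the point are hypothetical values chosen for the demo:

```python
import numpy as np

# Two hypothetical 3x4 camera projection matrices P = K [R | t].
# Camera 1 at the origin; camera 2 offset one unit along X, same orientation.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# One surface point (homogeneous coordinates) seen by both cameras.
X_true = np.array([0.3, -0.2, 5.0, 1.0])

def project(P, X):
    """Project a homogeneous 3D point to 2D pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation: each view contributes two linear
    constraints on X; stack them and take the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # de-homogenize

X_est = triangulate(P1, x1, P2, x2)
print(np.allclose(X_est[:3], X_true[:3], atol=1e-6))  # True for noise-free projections
```

Real SfM first has to find those matched points automatically and solve for the unknown cameras at the same time, which is exactly why the high-frequency power fractal helps: it gives the matcher distinctive detail to lock onto.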

The hard part of photogrammetry is usually calculating the camera positions and surface geometry; most software computes a texture map as a final step.  Since Terragen can export a mesh, and the camera positions and parameters are known, it might be worth checking whether any of the software will let you import the mesh and camera positions, and jump straight to the last step of computing the texture map.
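That last step boils down to projecting each known mesh vertex into the photos and sampling the colour there. A toy sketch of the idea, with a hypothetical camera matrix and a synthetic "photo" standing in for real data (real tools also do occlusion testing and blend colours across several views):

```python
import numpy as np

# Synthetic 240x320 "photo" with one known coloured pixel.
img = np.zeros((240, 320, 3), dtype=np.uint8)
img[100, 150] = (200, 180, 160)

# Hypothetical camera: identity pose, simple pinhole intrinsics.
K = np.array([[400.0, 0.0, 160.0],
              [0.0, 400.0, 120.0],
              [0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

def vertex_colour(P, img, v):
    """Project a 3D vertex through a known camera matrix and sample
    the photo at the resulting pixel (no occlusion handling here)."""
    x = P @ np.append(v, 1.0)
    u, vpix = x[:2] / x[2]
    row, col = int(round(vpix)), int(round(u))
    if 0 <= row < img.shape[0] and 0 <= col < img.shape[1]:
        return tuple(img[row, col])
    return None  # vertex falls outside this photo

# A vertex chosen so it projects to pixel (row=100, col=150):
# u = 400*(-0.025)/1 + 160 = 150,  v = 400*(-0.05)/1 + 120 = 100
v = np.array([-0.025, -0.05, 1.0])
print(vertex_colour(P, img, v))  # (200, 180, 160)
```

With the Terragen-exported mesh and the render cameras already known, everything that photogrammetry software normally has to estimate is given, so only this projection-and-sampling pass (plus visibility checks) remains.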

aknight0


Kadri

Quote from: aknight0 on April 05, 2021, 05:20:32 PM
I did a lot of work with Terragen and photogrammetry a few years back.  It sort of works.  What we were doing was importing objects, rendering images of them, and then trying to reconstruct the object with photogrammetry.  Then you can use the original object as an easy baseline to compare against.  We were testing photogrammetry software and image planning, so we were more interested in relative accuracy, but we weren't ever able to create any models that looked as good as the originals. 

The structure-from-motion algorithm that most photogrammetry software uses works by matching points in the images.  If it can identify the same point on the surface in several images from different angles, it can calculate the 3D position.  To make that easier, we'd often apply a high-frequency power fractal over the surface to give more easily identifiable points to match.  That helped with getting better geometry in the end result, but it won't help if texture is your end goal. 

The hard part of photogrammetry is usually calculating the camera positions and surface geometry; most software computes a texture map as a final step.  Since Terragen can export a mesh, and the camera positions and parameters are known, it might be worth checking whether any of the software will let you import the mesh and camera positions, and jump straight to the last step of computing the texture map.

Thank you.

Your next post is something good to know about when needed.