Photogrammetry

Started by Kadri, April 01, 2021, 09:06:20 AM


Kadri

Turntable is easy if you make the camera move in another software and import it as a chan or FBX file.
You can reuse nearly the same camera move every time, changing only the scale.

I turned atmo off in these tests.
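A circular camera move like this can be generated with a short script instead of hand-animating it. Here's a minimal sketch; the file name, frame count, and column layout are my assumptions, and .chan column order varies between applications, so check your importer before relying on it:

```python
import math

def write_orbit_chan(path, frames=30, radius=10.0, height=2.0):
    """Write a circular camera orbit around the origin as a .chan file.

    Assumed column layout (one keyframe per line):
        frame tx ty tz rx ry rz
    Angles are in degrees. Verify against your target app's importer.
    """
    with open(path, "w") as f:
        for i in range(frames):
            angle = 2.0 * math.pi * i / frames
            tx = radius * math.sin(angle)
            tz = radius * math.cos(angle)
            ty = height
            # Heading so the camera faces the origin (convention-dependent),
            # plus a slight downward pitch toward the turntable centre.
            ry = math.degrees(angle) + 180.0
            rx = -math.degrees(math.atan2(height, radius))
            f.write(f"{i + 1} {tx:.6f} {ty:.6f} {tz:.6f} "
                    f"{rx:.6f} {ry:.6f} 0.000000\n")

write_orbit_chan("orbit.chan", frames=30)
```

Thirty frames at 12-degree steps is an arbitrary choice here; photogrammetry generally prefers more overlap, so you'd raise `frames` as needed.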

WAS

Not sure how that's easier than rotating a plane or object. You can also keep the focal lighting in the same place, so the resulting textures are all blended with, for example, head-on frontal lighting, reducing baked shadows in the final texture. The benefit of a 3D world is that our object can simply be rotated; we don't have to move around it.

Kadri

Quote from: WAS on April 02, 2021, 02:11:05 PM
Not sure how that's easier than rotating a plane or object. You can also keep the focal lighting in the same place, so the resulting textures are all blended with, for example, head-on frontal lighting, reducing baked shadows in the final texture. The benefit of a 3D world is that our object can simply be rotated; we don't have to move around it.
If you can move and rotate all nodes without any problem, and the changing light angles and shadows aren't a problem (this could be made easy by just using luminosity without lights), then yes. But I'm not sure that's easier in all scenes than just moving the camera.

WAS

#18
I'm wondering if luminosity would work. It makes objects appear 2D, and there's no depth from lighting for the software to read when building a 3D object, just boundaries against the background. You may end up with a roundish object (or whatever the boundaries against the BG suggest) with no surface geometry beyond what it reads from the textures alone.

And you should be able to rotate most things if the setup follows the correct logic, as I found out with my planets. Set up your scene in a janky way and you'll have problems.

But as for "easy"? Doing camera rigs in another program is pretty much the opposite. Even in most real-world scanning, like in the video, he rotates the turntable and keeps his camera on an arm in front for the lighting.

Kadri

#19
Quote from: WAS on April 02, 2021, 03:51:51 PM...
But as for "easy"? Doing camera rigs in another program is pretty much the opposite. Even in most real-world scanning, like in the video, he rotates the turntable and keeps his camera on an arm in front for the lighting.

That is the real world; you do whatever is better and easier.
Which one is? Turning the small object, or rotating the camera around it?
The part I'm not sure about is the use of lighting and shadows.
If that method is better, then it should be preferred in Terragen too.

If you can easily rotate the ground in Terragen, that is better; if not, you have to move the camera.
Using another program for the camera move is harder, yes. But you could do it once in Terragen, export the scene camera (chan or FBX), and then reuse it over and over as often as you want.
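Reusing an exported move at a different scene scale can be as simple as multiplying the translation columns while leaving the rotations alone. A hedged sketch, assuming a common `frame tx ty tz rx ry rz` column layout (function and file names are mine):

```python
def scale_chan(src, dst, factor):
    """Rescale the camera translations in a .chan file.

    Assumes the column layout: frame tx ty tz rx ry rz.
    Rotations and frame numbers are copied unchanged, so the
    same orbit can be reused on an object of a different size.
    """
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            parts = line.split()
            if len(parts) < 7:
                continue  # skip blank or malformed lines
            tx, ty, tz = (float(v) * factor for v in parts[1:4])
            fout.write(f"{parts[0]} {tx:.6f} {ty:.6f} {tz:.6f} "
                       f"{parts[4]} {parts[5]} {parts[6]}\n")

# Tiny demo: one keyframe, orbit scaled up 10x.
with open("orbit_small.chan", "w") as f:
    f.write("1 0.0 2.0 10.0 -11.3 180.0 0.0\n")
scale_chan("orbit_small.chan", "orbit_big.chan", 10.0)
```

As noted in the reply below about altitude constraints and slope limits, scaling the camera path alone won't fix shaders that depend on absolute world units.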

WAS

Yeah, in some circumstances you'll probably have to use a camera on a rail or at set positions, for example if your texturing and displacement rely on shaders that cut off the flow from transform shaders or object rotation.

And it's true you could use it over and over, that is, if your scene stays at the same scale. A lot of shaders could be using altitude keys, slope limits, etc., and a change of scale could break those, so you'd need new cameras.

And clearly I'm talking about translating real-world methods to 3D, where you can easily cut things out because you're in control of the world.

WAS

So I'm trying out this software in the morning. I've been hearing good things, and I found some info for Blender.

https://alicevision.org/

David

If I understand you correctly, you are trying to find a way to eliminate shadows on your exterior source photos?


If this helps: I shoot rock faces either in shadow or in overcast lighting. But crucially, in both cases I also use a powerful Elinchrom Quadra ring flash with a high-quality lens. Ring flashes are ideal for illuminating shadows in crevices and under overhangs because their light is parallel to the lens axis. The attached renders were created from 300 x 103MB images and processed in Reality Capture. RC delivers cleaner models in less time than Metashape, though with more images Metashape can produce excellent models too. This model comprises approx. 1,200,000 polys. It was GPU rendered in Lightwave and Octane, and each render took only a few minutes. Unfortunately, with these programs I can't get the skies or the range of water effects that Terragen offers.


Attachments: Rockface detail.jpg, Lightwave screenshot.jpg

Kadri

#23
Thanks for the help.
We're actually talking about the best way to get Terragen landscapes out (for UV texturing, for example), run them through photogrammetry, and then get them back into Terragen or other software.

That object looks nice. 300 photos is quite a lot. What do you think would work best for getting something out of Terragen,
since render times would be a problem for that many photos/renders?
I mean, what do you think would be acceptable as a minimum in terms of resolution and number of renders?
And do we need shadows in the calculation to get better models, or are they unnecessary, or even problematic?
From your post it looks like shadows aren't needed at all for good model reconstruction.
That would make this very easy to do (lighting-wise) in Terragen, and in any other software for that matter.

David

Sorry, there's still something I'm not understanding here. Are you wanting to export a landscape mesh out of Terragen? If so I think that is possible with Terragen 4. Or are you hoping to use Terragen renders as images for photogrammetry? If so that sounds like a lengthy business.

Regarding shadows, no, you don't need them for successful photogrammetry so long as there is good colour texture information that isn't a single flat overall colour.

Kadri

Quote from: David on April 03, 2021, 10:06:06 AM
Sorry, there's still something I'm not understanding here. Are you wanting to export a landscape mesh out of Terragen? If so I think that is possible with Terragen 4. Or are you hoping to use Terragen renders as images for photogrammetry? If so that sounds like a lengthy business.

Regarding shadows, no, you don't need them for successful photogrammetry so long as there is good colour texture information that isn't a single flat overall colour.

Yes, you can export a mesh from Terragen.
The problem is getting the texturing out the same way.
There are different methods for this, of course, like orthographic rendering, front projection, etc.

This thread is kind of an extension of one or two other threads about this.
I don't care so much about getting the texturing exactly as it is in Terragen (it would still be great, of course).
But Jordan (WAS) wants an exact, detailed texture, just as it is in Terragen.

That is the real reason for this thread, actually. I wondered whether this could be a temporary solution.

If we could export the mesh with its textures directly from Terragen as-is, this thread would mostly just be for fun.
I think Matt will most probably add this in a future update. But we don't know when, or indeed if it will happen at all.

Good to know that shadows aren't a problem.

Kadri

#26
Quote from: David on April 03, 2021, 10:06:06 AM
...are you hoping to use Terragen renders as images for photogrammetry? If so that sounds like a lengthy business.
...

If we need 300 renders, it could be as you said, since even with shadows and atmosphere disabled it could take quite a while to render.
With my Ryzen 9 3950X CPU it took more than an hour for 30 HD images. It depends on the scene too.
It would basically take about a day per object, more or less.

Hence why I wanted to know the minimums for doing this, resolution- and photo-count-wise.
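The estimate is simple arithmetic; a quick sketch (the roughly 2 minutes per frame is derived from the 30-frames-in-just-over-an-hour figure reported here, and the function name is mine):

```python
def render_budget(num_images, minutes_per_image):
    """Rough total render time, in hours, for a photogrammetry image set."""
    return num_images * minutes_per_image / 60.0

# ~2 min per HD frame, as reported for the 30-image test batch:
print(render_budget(30, 2.0))   # -> 1.0 hour
print(render_budget(300, 2.0))  # -> 10.0 hours for a full 300-image set
```

Ten render-hours for 300 HD frames is closer to half a working day than a full day, but higher resolutions or heavier scenes would push it up quickly.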

David

Thanks for the explanation. I can't offer much help as to how many renders you will need, except to say... as many as your patience allows! All the photogrammetry programs will be grateful for every image you can throw at them. Good luck.


WAS

I still see plenty of shadows in his raw images, and this is what tells the programs to create geometry based on lighting depth. It's very similar to creating heightmaps etc. from an image. If the program can't see geometry, it can only work off the colours of the textures. You do want light from the POV, though, like a ring light, while still having a correct lighting angle for depth. Same deal for bitmap approximations or scans of textures. Additionally, if you have something like distort-by-normal, it creates the illusion of weird shadows under luminosity because it's warping the textures by the surface normal.

You can go ahead and render a simple rock sphere on a black BG with luminosity for texture; it will appear 2D and flat. So all the programs could do is build geometry based on the border shapes of the object, plus some displacement from whatever they can see of the textures, which would just be the textures themselves, no surface depth.