Photoscan alternative?

Started by j meyer, August 24, 2014, 12:06:32 PM


bigben

Quote from: j meyer on August 26, 2014, 10:49:57 AM
Thanks for testing and spreading the info Ben, much appreciated.
I still don't understand the mega/gigapixel thing in regard to a model.
How big is a model with 0.5 gigapixels in MB or polygons?

It's not the size of the model, it's a limit on the number of images used to create the model, expressed as a total pixel count.  Too many images and it won't let you export a model in a reusable format, but too few images (or low-res images) and you lower the quality of the model. e.g. 8-megapixel iPhone images = 62 images max (500/8), which is probably OK for a model, but with a 21 MP camera you're down to 23, which is barely enough for a single row of images around an object.
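In case it helps anyone size up their own camera against that cap, here's a quick back-of-the-envelope sketch (plain Python, not part of any of these tools; the 500-megapixel figure is just the 0.5 gigapixel limit quoted above):

# Rough calculator for how many full-res photos fit under a 0.5 gigapixel export cap.
EXPORT_LIMIT_MP = 500  # 0.5 gigapixels, expressed in megapixels

def max_images(camera_megapixels: float) -> int:
    """Maximum number of full-resolution images that fit under the cap."""
    return int(EXPORT_LIMIT_MP // camera_megapixels)

for mp in (8, 12, 21, 36):
    print(f"{mp} MP camera: up to {max_images(mp)} images")
# 8 MP  -> 62 images (a workable set for a small object)
# 21 MP -> 23 images (barely a single row around an object)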

TheBadger

Not for no reason, but there are cameras that can operate in video mode where each frame is a high-res photo, so for example 24/30 photos per second. At least one of the RED cameras is good for this.

And on the iPhone (others too?) you can just pan your camera by hand and get a pano image.

So my question is: can't we, in any of these softs, just use video to get the capture? I mean, rather than moving around an object taking however many shots, is there a way to just use a video mode and walk around the subject a couple of times? And then just upload a single file to the soft?

Effectively using the camera as a scanner of a kind?
Seems like that's how it should be, so I imagine someone out there is working on it.

Actually, wouldn't that sort of describe mocap?

Dune

I suppose you can use reduced image sizes, as long as the basic shapes are recorded. You won't need 3k x 3k pictures; maybe 1k x 1k will do just as nicely.

bigben

It all depends on your end usage.  For making models I probably wouldn't go below 8 MP, but once you shoot the same object at 21 MP it's very hard to go back. In theory using video is possible, but compression artefacts will create noise in your data, and from a software perspective it will still be treated as a series of stills. There is a lot of processing involved in photogrammetry, so it's not the sort of thing that can be done on the fly at significant resolution.  The images also need to be sharp, so excessive camera movement can cause problems, and before you suggest increasing the ISO, that also introduces noise.

I've done a few reconstructions from short HD video clips pulled from YouTube and it does work to a reasonable extent.
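If anyone wants to try the video route themselves, pulling stills out of a clip is straightforward. A minimal sketch below, assuming Python with OpenCV installed (my choice of tool, not something built into any of the photogrammetry packages); the filename and frame interval are placeholders to tune for overlap versus image count:

# Pull every Nth frame out of a video clip so it can be fed to a
# photogrammetry package as a set of stills.
import os
import cv2

VIDEO = "walkaround.mp4"   # hypothetical clip of a slow walk around the subject
OUT_DIR = "frames"
EVERY_NTH = 15             # ~2 fps from 30 fps footage

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO)
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % EVERY_NTH == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:04d}.jpg"), frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} stills from {index} frames")

Remember the caveats above though: the sharper (and less compressed) each grabbed frame is, the less noise ends up in the reconstruction.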

Motion capture is essentially the same thing, and determining the camera positions in a video for matching perspective in renders is essentially stage 1 of reconstruction via photogrammetry.

Since Smart3DCapture is essentially the same software as 123D Catch, of the free options I'd just use 123D Catch, as it's less limited for exporting.

Speaking of HD video resolution... Here's the Melbourne CBD recreated from 300 screen grabs of Google Earth (couldn't get the OpenGL or DirectX capture tools to work).


Dune

300 screen grabs :o  I would say (but I'm no expert) that for this sort of resolution only 10-15 would suffice: half from circling at a lower altitude and half from a higher altitude.

bigben

Overkill for the sake of being sure; 60 probably would have sufficed. I "flew" 5 grids over the city at different camera orientations: vertically down, and oblique facing N, S, E & W. 10-15 would have been enough for a photo texture, but there wouldn't have been enough overlap for a 3D reconstruction. Image quality and inadequate overlap are the main problems that people have.
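To give a feel for how quickly the overlap requirement inflates the shot count, here's a rough grid planner in Python. The area and footprint numbers are made up, and the 60-80% overlap figure for reconstruction (versus 20-30% for a simple photo texture) is the usual photogrammetry rule of thumb rather than anything specific to these packages:

# Rough shot-count planner for a grid "flight" over a subject (made-up numbers).
import math

AREA_W, AREA_H = 2000.0, 1500.0    # hypothetical area to cover, in metres
FOOT_W, FOOT_H = 400.0, 300.0      # ground footprint of one frame at the chosen altitude

def shots_needed(overlap: float) -> int:
    """Shots for a simple grid where adjacent frames share the given overlap fraction."""
    cols = math.ceil(AREA_W / (FOOT_W * (1.0 - overlap))) + 1
    rows = math.ceil(AREA_H / (FOOT_H * (1.0 - overlap))) + 1
    return cols * rows

print("photo texture, 25% overlap:", shots_needed(0.25), "shots")
print("3D reconstruction, 70% overlap:", shots_needed(0.70), "shots")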

I ripped the images out of the first of the Photosynths on this page: http://www.xrez.com/case-studies/cultural-heritage/sunken-treasures-of-the-nile/ to see if I could do a 3D reconstruction from them, and it worked quite well except for a few small holes where there wasn't overlap between images. But these images were captured to provide a photo texture for applying to laser scans.

JimB

If you know anyone who has a Fuji Real3D camera, Photoscan makes use of the stereo JPEG format (I picked mine up secondhand; the 3D screen's really cool for when you're taking group selfies at the pub, and it fits in your pocket). You also get much better results if you take the time to mask your subject, and I do like the reconstructed cameras being included in the exported Collada file as a safety net for camera texture reprojection. Also, for a giggle, try rendering a TG scene as a walkaround and running it through Photoscan (a logical, progressive walkaround seems to be fairly important to Photoscan). The results are actually pretty good, and you can end up with a fairly nice UV-mapped model with baked lighting.

Dune

The latter is an interesting idea...

JimB

I just remembered doing a test on a terrain that was flat lit and without shadows, which also worked out not too badly because there's still detail in the surfaces. And make sure there are no reflections or specularity.