Tiger elephant.

Started by bigben, August 12, 2012, 05:37:35 PM


bigben

Been playing with Autodesk's 123D Catch recently for work.  Here's a quick test.
[attach=1]



bigben


masonspappy

#2
Hey Bigben,
If we ever meet in person I will buy you beer!!

That app is exactly what I've been looking for, and the timing is perfect. I had been trying unsuccessfully to get the David laser-scanning software to work - installed it on 2 PCs and a VM and it crashed every time - but quit trying last night. From everything I've tested so far, the 123D Catch results are everything I need and blend perfectly with the other elements from my pipeline (see attached image).
Thanks again!!
- Cam
In the attached image, Yoda is an OBJ sample pulled down from the 123D Catch website; the cannon was created by me in Blender.

bigben

Hi Cam

Glad you found it useful. We've had a structured light system installed at work, but from my early tests I think 123D Catch can outperform it in particular circumstances. Both approaches have their pros and cons. The photogrammetry approach is very similar to stitching panoramas in that knowing how the software works and "behaves" (AND misbehaves) makes a very big difference to how successful the result is out of the box. Certainly a useful tool for getting a quick textured model of something for a mock-up.

Kadri


I hadn't tried that software before because of its "cloud software" nature, but your post made me curious.
I made a very basic, quick-and-dirty animation test with an AT-AT model I have at home.
For a first try it came out better than I thought, Ben.
It's fun to use, at least.

Thanks Ben :)


TheBadger

Really interesting! Could be a great way to get background fill.
Does anyone know if the quality of the model is mostly based on the quality of the photo?

It's on PC and iPad, but not Mac? That's really strange. What's the deal?
It has been eaten.

bigben

Quote from: TheBadger on August 14, 2012, 02:15:32 PM
Really interesting! Could be a great way to get background fill.
Does anyone know if the quality of the model is mostly based on the quality of the photo?

It's on PC and iPad, but not Mac? That's really strange. What's the deal?


To a certain extent, yes, but there comes a point where feeding it higher resolution images just won't make much difference. There are many similarities between this and 2D stitching of panoramas in that there is a significant grey area between good and bad data. I'm running some tests to work out approaches to photographing these that will optimise the quality of the initial result. Tip 1: start with a view looking down on the object so that the top appears in every image.

The model here was made with an iPhone 4. I was in a hurry and didn't lock the exposure, and found I got a much better result after using PTGui to adjust the images to the same exposure.
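(PTGui does the exposure matching with its own tools; purely as an illustration of the general idea - not PTGui's actual algorithm - here's a minimal sketch that scales each shot so all frames share a common mean brightness. The images and values are made up for the example.)

```python
def equalize_exposure(images):
    """Scale each grayscale image (a flat list of 0-255 pixel values)
    so every frame ends up with the same mean brightness as the set
    overall. A crude stand-in for matching exposure across shots
    before feeding them to photogrammetry software."""
    means = [sum(img) / len(img) for img in images]
    target = sum(means) / len(means)          # common brightness to aim for
    out = []
    for img, m in zip(images, means):
        gain = target / m if m else 1.0       # per-frame multiplier
        out.append([min(255.0, p * gain) for p in img])
    return out

# Two "shots" of the same scene, the second underexposed
bright = [120, 130, 140, 150]
dark = [60, 65, 70, 75]
fixed = equalize_exposure([bright, dark])
```

After equalizing, both frames have the same mean brightness, which is the property that helps feature matching across shots.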

TheBadger

Thanks Ben. Please keep us posted on your findings.

Quote: Tip 1: start with a view looking down on the object so that the top appears in every image.

Every image meaning every angle of the top?
It has been eaten.

bigben

#8
Quote from: TheBadger on August 17, 2012, 04:40:51 AM
Thanks Ben. Please keep us posted on your findings.

Quote: Tip 1: start with a view looking down on the object so that the top appears in every image.

Every image meaning every angle of the top?

To amend tip #1:
Start with a front-on view, as this sets the default orientation of the object.
Then move the camera up so that you can see the top of the model, and orbit the object keeping the top visible in each shot until you get back to the front again. Then move down and do another lap of the object.
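(The two-lap orbit can be sketched as camera positions on two rings at different elevations - a hypothetical illustration of the geometry, not anything 123D Catch itself requires. Shot counts and angles here are made up.)

```python
import math

def orbit_positions(n_shots, elevation_deg, radius=1.0):
    """Camera positions for one lap around an object at the origin:
    evenly spaced azimuths on a ring at the given elevation angle,
    so consecutive views overlap all the way around."""
    e = math.radians(elevation_deg)
    positions = []
    for i in range(n_shots):
        az = 2 * math.pi * i / n_shots
        positions.append((
            radius * math.cos(e) * math.cos(az),  # x
            radius * math.cos(e) * math.sin(az),  # y
            radius * math.sin(e),                 # z (camera height)
        ))
    return positions

# e.g. roughly "47 images in 2 rows": an upper lap looking down
# (top visible in every frame) and a lower lap
upper = orbit_positions(24, 45)
lower = orbit_positions(23, 15)
```

The upper ring keeps the top of the object in every frame, which is what ties the two laps together when the software reconstructs the camera positions.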

I've been experimenting with backgrounds and a shooting strategy specifically for 123D Catch, but it should also work for Photosynth. Had a quick look at Arc3D, but it seems to have different requirements for its source images. My latest test used a cutting board (green mat with white grid lines) to sit the model on. This seems to have provided a better reference for the 3D reconstruction, and I got the model below without requiring any post-editing of the model :) (the pins were used in previous tests as reference points for manual stitching but weren't required this time; 47 images in 2 rows)

Oshyan

Impressive and very cool!

- Oshyan

bigben

Thanks Oshyan. We were somewhat amazed ourselves when we saw it appear on screen and I was very surprised to see that the only hole/crack in the object was the base.

There's another Autodesk freebie that I know many here will enjoy. It's unsupported, isn't promoted alongside their other 3D apps, and don't ask me about UV mapping (I don't know that much about 3D modelling), but it's both very fun and very useful.
http://www.meshmixer.com/.  The guys who set up our 3D scanner suggested we use Meshlab to fix our models before printing them but I find this much easier to use.  Watch the videos!

TheBadger

Thanks a lot Ben.

Just one question about your last image here. The Buddha model looks a bit "blocky". Is this a result of low render settings, or is that how the model looks? The image in the OP is too far away for me to tell.
It has been eaten.

bigben

#12
It's a bit blocky because that's the standard resolution the model is initially created at. After cropping, you can increase the mesh resolution. I'm just about to render a maximum-resolution version of the same object.

[edit] 'tis done.  You can have a look at the files here:
http://files.digitisation.unimelb.edu.au/playpen/buddha_20120822/

Render: detail 0.75, AA 6. Raw OBJ export from 123D Catch (27 MB)

TheBadger

Thanks again, Ben  8)

I put this at the top of my "to learn" list. And those other apps you mentioned are pretty interesting too. Great share, man. Thanks for taking the time!
It has been eaten.

Upon Infinity

Oh my god...does this software do what I think it does???