Mini Mali

Started by bigben, May 31, 2014, 02:08:20 AM


bigben

One of the good things about working at a uni is educational pricing on software.  Playing with Agisoft Photoscan at the moment. Better models than 123D Catch  :D  My 3D sculpting skills are still rudimentary, but I managed to smooth this model enough to look reasonable up close.

Dune

That looks really good. How many photos did you have to take?

bigben

This one was about 70, but it was an old image set and I only had JPEGs. The sculpture also has a few areas with very little detail, which caused a few problems and had to be smoothed out.  Still learning Photoscan, but it's a great program for the price. Results are much better than 123D Catch, and for some objects it's better/more efficient than our NextEngine laser scanner. The texturing is pretty good too.  Working my way up to a building, or at least enough of one to include in a scene... or a detailed tree trunk for a foreground object.

kaedorg

I looked at the images before reading the text.
So I thought you created the 3D object file and then made a real 3D render of it  :P
Then I read the text  ::)
But anyway, good use of 123D Catch

David

j meyer

How is the quality of the UV mapping and the textures compared
to 123D catch?

TheBadger

Please elaborate, BB. You know this topic is too interesting for just a blurb.  :)
It has been eaten.

bigben

Quote from: j meyer on May 31, 2014, 11:02:00 AM
How is the quality of the UV mapping and the textures compared
to 123D catch?

http://bigben.id.au/demo/mini-mali2.tif
1 of 4 (I added an alpha channel for TG)

Pretty good.  I had to clean up the model and import it back in so there are still a few odd bits from my average cleaning job. (Being able to export the mesh for cleanup and then import it back in for texturing is really useful)
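(As an aside, for anyone who wants to script that kind of cleanup rather than do it by hand: the "odd bits" are typically degenerate faces, and removing them boils down to dropping faces with repeated vertex indices or near-zero area while leaving the vertex array untouched, so indices and any UV mapping keyed to them stay valid on re-import. A rough NumPy-only sketch with made-up mesh data, not anything from Photoscan itself:)

```python
import numpy as np

def drop_degenerate_faces(vertices, faces, min_area=1e-12):
    """Drop faces with repeated vertex indices or near-zero area.
    The vertex array is left untouched, so vertex indices (and any
    UV mapping keyed to them) stay valid on re-import."""
    v = vertices[faces]                      # (n_faces, 3, 3) corner coords
    # |cross product| of two edge vectors = twice the triangle's area
    areas = 0.5 * np.linalg.norm(
        np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]), axis=1)
    distinct = ((faces[:, 0] != faces[:, 1]) &
                (faces[:, 1] != faces[:, 2]) &
                (faces[:, 0] != faces[:, 2]))
    return faces[distinct & (areas > min_area)]

# Tiny made-up mesh: one valid triangle and two degenerate faces
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2],   # valid
                  [0, 1, 1],   # repeated vertex index
                  [0, 1, 0]])  # repeated vertex index
cleaned = drop_degenerate_faces(verts, faces)
print(cleaned)   # only the valid face [0, 1, 2] survives
```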

I find this program much better than Catch as you have total control over texture size (size of image and number of images created) and can mask out sky/people/traffic. The only thing you can't do in the standard version is define matching points between images and set the scale of the model. 
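(For anyone who hasn't used masks before: a mask is just a binary image the same size as the photo, telling the matcher which pixels it's allowed to use. Conceptually, with throwaway values standing in for a real image:)

```python
import numpy as np

# Stand-in for a photo: a 3x4 array of pixel values
photo = np.arange(12, dtype=float).reshape(3, 4)

# Mask: True = use this pixel, False = ignore (sky, people, traffic)
mask = np.ones((3, 4), dtype=bool)
mask[0, :] = False            # mask out the top row (the "sky")

usable = photo[mask]          # pixels the matcher may use
print(usable.size)            # 8 of the 12 pixels remain
```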

I tried to do this model in Recap360 (the updated version of Catch) but half the uploads timed out and a full res file requires a subscription.  I gave up on the uploads after I had a dense point cloud in Photoscan.

http://www.youtube.com/watch?v=qNLzot1o_1c
https://vimeo.com/40602544

Quote from: TheBadger on May 31, 2014, 07:44:50 PM
Please elaborate, BB. You know this topic is too interesting for just a blurb.  :)
Moving beyond 123D Catch http://www.planetside.co.uk/forums/index.php/topic,14945.0.html. I didn't render that model too close because there were spikes I hadn't fixed.  Reprocessing other photos from around the same time.

[attach=1]
Full price of the standard version is a bit more than I'd like to pay without first testing saving/exporting, but the education price was irresistible. The workflow is relatively simple, although you need to learn a little about the process to get the most out of it... and of course a computer with grunt helps, especially for mesh creation, which is RAM-sensitive. That said, I've only got 8GB at home on a 4-year-old computer.

I'm also testing it out at work and it's giving our NextEngine laser scanner a run for its money in terms of resolution and processing time for a range of subjects. Texturing on the NextEngine sucks big time, but I can now create a low res model in Photoscan, align the laser scan to that and import it into Photoscan for texturing.

There's a functional demo version.
http://www.agisoft.ru/products/photoscan/standard/

and I use Meshmixer for mesh cleanup (free)
http://meshmixer.com/


kaedorg

Thanks for this tutorial

bigben

A concept test - a high-res tree trunk for a foreground object.  This one was created using a GoPro for the photos. NB: not really a suitable camera for this (the FOV is too wide), but it worked OK. I also should have shot it on a cloudy day and cleaned up the mesh a bit, but you get the idea.

Both models are 500,000 faces with four 4096x4096 textures.
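(For a sense of scale, those textures aren't small once decompressed in memory. At 8 bits per channel the arithmetic works out like this:)

```python
# Uncompressed size of four 4096x4096 textures, 8 bits per channel
n_textures, side = 4, 4096
rgb_bytes = n_textures * side * side * 3    # plain RGB
rgba_bytes = n_textures * side * side * 4   # with the alpha channel for TG

# 192 MiB for RGB, 256 MiB with alpha
print(rgb_bytes // 2**20, rgba_bytes // 2**20)
```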

j meyer

That's really promising and good info.
Another program that can help with cleaning up meshes
is MeshLab, also free, in case you don't already know it.

TheBadger

Hi

Another workflow I have seen that uses this thinking also adds retopo on the scan. In one example, the model gets taken into Mudbox (after it's all fixed up in MeshLab and such), then the scanned model is sculpted on and repainted, and finally they did retopo and exported various maps.

I have still never done a retopo on anything, though. So I have no real idea how much time that can add to a project.



j meyer

Michael - Don't know about Mudbox, but in ZBrush this can be done within
a few minutes (the retopo + mapping + texture projection).
The Ten24 guys do their stuff like that.
Even the cleanup can be done in ZB, btw.

TheBadger

^^
Ahh, well, that is new info for me.
I just never read a description of that part, or saw someone doing it in real time. I thought, like everything else, it must take ages ;D. Well, good! Then this finally sounds like a practical and complete workflow to me.

I toyed with 123D Catch on my phone. But using an iPhone to take serious photos feels unnatural to me. And I did not like the 123D Catch web browser plugin very much, so I never broke out my real camera for this.

I will take a look at the new service/soft that Ben linked.

Mud has retopo now, and there are also plugins. So I should probably just start with a cola can or something. I think I have had my fill of biting off more than I can chew... Finally.  ;)

bigben

MeshLab is indeed useful for a lot of things, particularly stitching multiple point clouds together and surface reconstruction of point clouds. Meshmixer is a bit more like ZBrush and Mudbox with its sculpting/smoothing tools (but it's free), and on the model repair side it's useful for getting visual indications of where the mesh errors are. ZBrush still does my head in a bit, but I may have to keep learning it, if only for tidying up the texture maps at the end; for TG, though, that's not a critical step.
You can also export the cameras from Photoscan. Not sure if ZBrush can use these, but I think Mudbox can.

The elephant was technically a difficult subject because it's very smooth with large areas of even colour, and I had to do a fair bit of editing of the mesh. The tree, on the other hand, was a lot easier and has had no editing of the mesh. Errors introduced by the use of an extremely wide-angle lens were removed from the point clouds before generating the final mesh.
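(That point-cloud editing is interactive in Photoscan, but the underlying idea is plain statistical outlier removal: drop points whose mean distance to their nearest neighbours is well above typical for the cloud. A brute-force NumPy sketch on synthetic data; real tools such as MeshLab use spatial indexes to make this fast on millions of points:)

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds mean + std_ratio * std over the whole cloud.
    Brute force O(n^2) - for illustration only."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)                           # column 0 is self-distance (0)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # mean distance to k neighbours
    cutoff = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= cutoff]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))            # a dense blob of points
cloud = np.vstack([cloud, [[50., 50, 50]]])  # plus one obvious stray point
kept = remove_outliers(cloud)
print(len(cloud), "->", len(kept))           # the stray point is removed
```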

I'm finding this meets a number of people's needs at the uni, and one of the things everyone mentions is "fancy visualisations" of the objects/scenes they want to capture, which TG is pretty good at. Yes, it looks like I can finally get paid to do this stuff at "work"  ;D

If you're already using 123D Catch to create objects, I'd say the standard version of Photoscan is a worthwhile investment. We're using Canon 5Ds, but that's because we have them. Any camera with a good lens that can save RAW images at 12-18MP will do. The Canon S110, for example, is used in drones to do stuff like this: http://m.youtube.com/watch?v=NuZUSe87miY (the software for that was Pix4D, but it's the same principle).