Scanning Environments

Started by rcallicotte, February 27, 2013, 07:37:54 am



Hey guys.

Since this thread is pretty broad in terms of subject info, I'll ask this here.

Have any of you come across any info on an affordable handheld 3D scanner? Anything?!
something like this
I'm a little confused about why a 3D printer is affordable but a handheld scanner is not. I feel like I would rather buy a scanner than a new camera if I could. It seems ideal compared to the other options we have been talking about in this thread and others.
Any thoughts, ideas?

I'm interested both in 3D image making like most of this forum, and also in 3D printing and milling. The scanner seems to be the missing piece in what is obtainable (milling can be ordered out, like rendering).


It has been eaten.

j meyer

Hey Michael, here is a link to a cheap solution.
Some guy showed a few pretty decent (for that price, etc.) examples
he had done with it on ZBCentral some years ago.


September 13, 2013, 03:16:45 pm #17 Last Edit: September 13, 2013, 03:18:28 pm by Kadri

Just guessing, Michael, but I think it is more about the software with scanners than the hardware part.
A printer has a 3D model ready; it only has to print the shape.
Printing is maybe not easy, but making the 3D model is the real hard part.
It is not only the hardware; the software is what makes the 3D object.
I don't know any numbers, but I will not be surprised if the price is more about the software side.
Then there are other factors, like that they are limited in sales, etc.

Not sure if this was posted here.
Not what you asked, but not too different either; kinda the same as what j meyer posted above:


Hey guys.
No, those tabletop systems are too small and toyish. I really want a handheld with enough quality to do industrial work (precise parts reverse engineering), like in the link in my last post. The problem is I cannot find a price for that, or anything similar, which leads me to believe it's in the thousands of dollars. (I want a handheld because I want to use it for lots of things.)

The software is easy to find; most of the hardware comes with software. The hardware is really the issue here. I'm not looking for a hobby toy on this.
If I can get all my ducks in a row, I could make some money with this stuff. But I need the cheapest (yet best) solution to start with. The less I spend, the less it will hurt if I fail  ;) But I guess that's the trick with everything.

A mill that can do what I need is really affordable; I just lack some info about some of the other stuff... I guess this 3D scanning stuff is still really new?
It has been eaten.


Quote from: TheBadger on September 14, 2013, 05:50:25 pm
.. I guess this 3D scanning stuff is still really new?

Guess that depends on your definition of 'new'. The first 3D scanning and 3D printing I saw was back in the mid-'90s at a local business that manufactures power equipment. But it's only recently (maybe 5 years, but I could be wrong on that) that it's become financially accessible to folks like us.


Your needs are different than it looked, Michael.
I was thinking more along the photogrammetry line.
This PDF is a little old, but may roughly give you some answers about some scanners and prices :o


September 15, 2013, 12:08:58 pm #21 Last Edit: September 15, 2013, 12:10:29 pm by PabloMack
Quote from: Kadri on September 13, 2013, 09:15:55 am
This just came to my mind:,16755.0.html

It was just a day or two earlier that I had watched a longer video of a flyover of the city of Adelaide. I couldn't help but wonder how close the accuracy is to the real city of Adelaide. Did someone actually go out and model every building, street, etc. and place them where they actually are in the model? 10 GB sounds about like what it would take to do this. The sheer size of the project is mind-boggling.


Thanks Kadri, will take a look.
It has been eaten.


The 3D model of Adelaide was generated from aerial photogrammetry. It is the same principle as 123D Catch or Agisoft, but at large scale. We didn't do any manual modeling.
We've just generated a 3D model of the Melbourne CBD from more than 20,000 photographs.

Once it is fully processed, I will use TG3 to add all the atmospheric, lighting, bump mapping effects to it.
I am passionate  about the art and science of digital aerial mapping, photogrammetry, geospatial information, VR and 3D modelling.


Can you say what software you use for these projects ?


What about the shiny surfaces, how do you handle those? That's an issue with some software. It sometimes looks as if there's no reflection on the windows but you can see through them, though it's hard to see at this scale and speed. I guess if you photograph with a polarizing filter you'll get that effect.
And how about cars moving between photographs; will they blur out of view?


September 17, 2013, 05:52:16 pm #26 Last Edit: September 17, 2013, 06:16:53 pm by PabloMack
Quote from: Dune on September 17, 2013, 03:43:39 am
What about the shiny surfaces, how do you handle those?

The movement is so fast in the animation that it doesn't hold still long enough to see such problems.

On another note, this is probably not the forum to ask this question, but the Agisoft application can produce an XML file that contains camera data. For each camera it lists the camera's 4x4 transformation matrix. I am not sure how this matrix might be useful, but I want to derive each camera's [X, Y, Z] coordinates and orientation within the point cloud/model space. Does anyone know if and how this information can be extracted? Buying the Pro version of PhotoScan is not an option since it costs almost 20x that of the Standard version. I have an ongoing thread on the Agisoft forum and I am getting some assistance. There are experts in there who give only some tidbits of information and expect me to have the understanding of a mathematician. I also have the first edition of a book on the subject, but it is long on derivation and short on application (like mathematicians):

My plan is to write a program that can ingest this file and compute the translation and rotation angles I need to align and scale the geometry to match the virtual space in my 3D packages.
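The first step of such a program would be pulling the 16 matrix values per camera out of the exported XML. Here is a minimal sketch in Python; note that the tag names, attributes, and layout in the sample below are an assumption based on the description in this thread, not the exact PhotoScan schema, so check them against a real export.

```python
# Minimal sketch: reading per-camera 4x4 transforms from a
# PhotoScan-style "Export Cameras..." XML file.
# The element/attribute names here are assumed, not verified.
import xml.etree.ElementTree as ET

SAMPLE = """<document><chunk><cameras>
  <camera id="0" label="IMG_0001">
    <transform>1 0 0 10  0 1 0 20  0 0 1 30  0 0 0 1</transform>
  </camera>
</cameras></chunk></document>"""

def read_camera_transforms(xml_text):
    """Return {camera label: list of 16 floats} for each aligned camera."""
    root = ET.fromstring(xml_text)
    transforms = {}
    for cam in root.iter("camera"):
        node = cam.find("transform")
        if node is not None:  # cameras that failed alignment may lack one
            transforms[cam.get("label")] = [float(v) for v in node.text.split()]
    return transforms

cams = read_camera_transforms(SAMPLE)
print(cams["IMG_0001"])
```

From here the translation and rotation can be worked out from each matrix, which is what the rest of the thread gets into.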


In this video we just display the 3D model with basic textures using a home-made viewer based on OpenSceneGraph. To deal with the reflective surfaces, we first need to identify those surfaces and create a reflection map as input. TG is a great tool for that.

Regarding cars, they can be a problem if they move slowly; we often end up with half a car in the 3D model that we need to remove. At normal speed, they are just not modeled.

Unfortunately I can't give details on the processing, but it is a mix of in-house, open-source, and commercial photogrammetry software. The big trick is in the acquisition technique of the raw data used to produce the model.

I am passionate  about the art and science of digital aerial mapping, photogrammetry, geospatial information, VR and 3D modelling.



September 19, 2013, 08:20:59 am #29 Last Edit: September 19, 2013, 12:07:10 pm by PabloMack
For those who are interested, I answered my own question late last night. The transformation matrices produced using Tools > "Export Cameras..." in PhotoScan transform points in each camera's local space into the point cloud/model's common space. So when you multiply one of these matrices by the camera's augmented origin, you get the camera's location in model common space, which is still local space relative to the scene you will ultimately place your model into. Pretty cool stuff. Now I have to explore the vast unknown of how to align the geometry with my scene's absolute coordinate system and motion tracking software. I am sure that what needs to be done depends on what is needed in a particular project, so there may be no general solution.
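That multiplication can be sketched in a few lines of plain Python. The matrix maps camera-local coordinates to model space, so the camera's own origin, written as the homogeneous point [0, 0, 0, 1], lands at the matrix's translation column. The row-major flat-list layout below is an assumption; check it against your own export.

```python
# Sketch: recover a camera's position in model space from its exported
# 4x4 camera-to-model transform (16 floats, assumed row-major).

def camera_position(transform16):
    rows = [transform16[i * 4:(i + 1) * 4] for i in range(4)]
    origin = (0.0, 0.0, 0.0, 1.0)  # the camera's augmented (homogeneous) origin
    world = [sum(r[c] * origin[c] for c in range(4)) for r in rows]
    w = world[3]  # homogeneous coordinate; 1.0 for a rigid transform
    return [world[0] / w, world[1] / w, world[2] / w]

# A rigid transform whose translation column is (10, 20, 30):
T = [1.0, 0.0, 0.0, 10.0,
     0.0, 1.0, 0.0, 20.0,
     0.0, 0.0, 1.0, 30.0,
     0.0, 0.0, 0.0, 1.0]
print(camera_position(T))  # [10.0, 20.0, 30.0]
```

For a rigid transform this is just reading off the fourth column, but going through the full multiplication also handles exports that carry a scale or a non-unit homogeneous coordinate.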

Aerometrex's application is one example of a model where UV mapping won't be adequate if you want to map the whole thing to one image. You'd have to use multiple images and multiple UV maps. Here's an application where something like PTEX shading might be more appropriate. However, the model is so huge that even PTEX might not work very well, because all of the images have to fit into a single file. I would presume the geometry and shading would need to be stored in a database, pieces of which can be queried (and locally cached) as the renderer needs them. I think this is what happens in geodetic visualization systems. This would be needed in a virtual fly-over along a path that only a virtual camera visits.

But if you want a fly-over from the point of view of a real video taken in an aircraft, from which the geometry might have been digitized in the first place, you can project the camera's image onto the geometry and you won't need to shade the surface. The problem I see is that shadows are burned into the image when you collect it with your cameras. When you place a virtual sun light source to simulate the lighting that was present when the photography took place, you will get doubly deep shadows, making them darker than they really should be.

If you are not placing any CG objects in the scene, then you will just want to illuminate the environment evenly and have no virtual shadows. But if you want shadows from virtual CG objects, then the shadows in your digitized environment will be too deep. Perhaps the way to solve this problem is to selectively make the object unable to receive shadows from itself, while still receiving shadows from all the virtual objects. I am pretty sure that LightWave can do this, but I don't know about TG.

Another problem altogether is the tainted imagery you get from video when your camera (a wannabe true video camcorder) has a CMOS rolling-shutter sensor. Don't get rid of your true camcorders with CCD sensors in them; DSLRs don't hold a candle to them when trying to collect geometry from footage with lots of motion in it. You want the images you collect your geometry from to be in as sharp a focus as possible, with no bokeh at all. You can add that effect in your virtual 3D renderer.

Sorry, just thinking to myself. You guys probably already know all this stuff.