360 degree panorama

Started by Doofus, December 23, 2006, 05:26:10 AM


Sethren

Nifty!   Thank You Kindly.     ;D

Doofus

AFAIK Autostitch only works on JPG files, which means you will need to save out varying exposures at step values, create a series of panoramas, and then join them together into one HDR image. Seems a shame since TG2 already outputs in OpenEXR format.
Sethren - If you are working in 3DS Max anyway, then you already have the tools required, as Max will open and save OpenEXR images, and the mental ray renderer that ships with it works in 32-bit floating point if you want it to.
Rub a little funk on it baby :D

Sethren

I don't have 3DS Max.    JPG only?    :-[      Varying exposures, which I assume means more renders are needed?

3DGuy

Not really. With Photoshop you can adjust the exposure and then save it to JPG. Only one render necessary. There are commercial products available which use Autostitch and can handle HDR images.

Doofus

And if it is Photoshop CS2 that you are using, I believe you can exposure-combine the resulting panoramas into a single HDR file.

The method would be:
1. Render the different views from Terragen and save them as OpenEXR images.
2. Open the OpenEXR stills in Photoshop and save different exposures of each frame as JPGs (a scripted sketch of this step is below).
3. Convert each set of exposures into a separate panorama using Autostitch.
4. Recombine the panoramas in Photoshop into one HDR panorama.
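
If you'd rather not do step 2 by hand in Photoshop, it can be scripted. A minimal Python sketch, assuming imageio with an EXR-capable plugin (e.g. freeimage) is installed - the file names are placeholders:

# Minimal sketch of step 2: turn one linear OpenEXR render into a bracket
# of LDR JPEGs for Autostitch. Assumes imageio with an EXR-capable plugin;
# "view0.exr" is a placeholder name.
import numpy as np
import imageio.v3 as iio

hdr = iio.imread("view0.exr")                 # linear float RGB, HxWx3

for stops in (-2, 0, 2):                      # exposure bracket in f-stops
    exposed = hdr * (2.0 ** stops)            # scale linear radiance
    ldr = np.clip(exposed, 0.0, 1.0) ** (1 / 2.2)   # clip and gamma-encode
    iio.imwrite(f"view0_{stops:+d}ev.jpg", (ldr * 255).astype(np.uint8))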

A fair bit of user input required when all we really need is a spherical camera for TG2 :)
Rub a little funk on it baby :D

bib

MeltingIce is absolutely right - rendering more than six images is unnecessary. Six correctly taken 90° FOV images can be stitched together - always seamless and free of distortion. The result may be converted to a sphere map, and that result will be seamless too (if the program works correctly). Essentially there is absolutely no difference between six 90° FOV images and the image from a spherical camera besides the angular resolution.

DeanoD

Quote from: bib on December 29, 2006, 08:03:55 AM
Six correctly taken 90° FOV images can be stitched together - always seamless and free of distortion.

I have found this not to be true. I have tried many times to create proper seamless spherical maps using many different programs. For open, blank areas of colour, like the sky, it's quite easy to see weird seams along the edges - especially in animations. The only properly seamless way is a true spherical camera similar to the one V-Ray has.
' Dont Worry.. i crash better than anybody i know '

Mel Gibson
Air America
Anything, Anywhere, Anytime.

helentr

Quote from: DeanoD on December 29, 2006, 08:12:20 AM
Quote from: bib on December 29, 2006, 08:03:55 AM
Six correctly taken 90° FOV images can be stitched together - always seamless and free of distortion.

I have found this not to be true. I have tried many times to create proper seamless spherical maps using many different programs. For open, blank areas of colour, like the sky, it's quite easy to see weird seams along the edges - especially in animations. The only properly seamless way is a true spherical camera similar to the one V-Ray has.
The seams (from the cross image to some other 3D projection) come from the interpolation in your stitching program. I had this problem when testing cube2cross, and it took me a long time to figure out why. The only solution was to stitch at a larger size and use no interpolation whatsoever in HDRShop or whatever program gives you your spherical or latitude/longitude projection. The seams will disappear.

Helen

bib

It is theoretically true. If both the program rendering the six images and the program converting them to a sphere map work correctly, the result is seamless.

Using a spherical camera means projecting the scene onto a sphere.
Taking six 90° FOV images means projecting the scene onto a cube.

Both methods capture exactly the same part of the scene - everything visible from the location of the camera in every direction.
The only difference is the angular resolution. For the six 90° FOV images the angular resolution increases from the center of each image towards the edges of the cube (at the middle of a face edge a pixel covers only about a third of the solid angle of a pixel at the face center), while the angular resolution of a sphere map is constant.

In order to get the sphere map, the only things to do are projecting the cube onto a sphere - that is really easy - and resampling the projected image at a constant angular resolution - which could introduce artefacts if not done properly.
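
For reference, this is roughly what that projection and resampling amount to. A rough Python sketch, not anyone's actual tool: it inverse-maps every latitude/longitude pixel back onto the cube and uses nearest-neighbour sampling, avoiding the interpolation seams Helen mentions. The face keys and orientations here are assumptions - a real cube layout may need per-face flips.

# Rough sketch of the cube-to-sphere-map conversion described above:
# for every output latitude/longitude pixel, find the view direction it
# represents, pick the cube face that direction hits, and sample that face.
# Face keys/orientations are assumptions; a real layout may need flips.
import numpy as np

def cube_to_latlong(faces, out_w, out_h):
    """faces: dict of six square (N, N, 3) float arrays,
    keyed '+x', '-x', '+y', '-y', '+z', '-z'."""
    n = next(iter(faces.values())).shape[0]
    out = np.zeros((out_h, out_w, 3), dtype=np.float32)
    for j in range(out_h):
        lat = (0.5 - (j + 0.5) / out_h) * np.pi            # +90° .. -90°
        for i in range(out_w):
            lon = ((i + 0.5) / out_w - 0.5) * 2.0 * np.pi  # -180° .. +180°
            d = np.array([np.cos(lat) * np.sin(lon),       # view direction
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
            a = int(np.argmax(np.abs(d)))                  # dominant axis
            key = ('+' if d[a] > 0 else '-') + 'xyz'[a]    # face that is hit
            # gnomonic projection onto that face: divide by the dominant
            # component, giving coordinates in [-1, 1] on the face plane
            u, v = (d[k] / abs(d[a]) for k in range(3) if k != a)
            # nearest-neighbour lookup: no interpolation, hence no seams
            px = min(int((u * 0.5 + 0.5) * n), n - 1)
            py = min(int((v * 0.5 + 0.5) * n), n - 1)
            out[j, i] = faces[key][py, px]
    return out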

--EDIT--

Helen replied while I was writing this post, saying almost the same thing. But I want to add that the conversion can't be performed without resampling or interpolation. You have to be careful about correct processing across image borders.

Doofus

bib - What program should I use to correctly convert six OpenEXR images into one seamless HDR panorama?
Rub a little funk on it baby :D

bib

A simple solution is to use POV-Ray. Create a scene file like this:

camera
{
  spherical
  location <0, 0, 0>
}

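// the four side views (w1-w4), rotated in 90° steps around the y axis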
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w4.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate <0,   0, 0>
}
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w3.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate <0,  90, 0>
}
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w2.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate <0, 180, 0>
}
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w1.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate <0, 270, 0>
}

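// the views straight up (w5) and straight down (w0)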
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w5.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate < 90, 90, 0>
}
box
{
  <0, 0, 0>, <1, 1, 1>
  texture { pigment { image_map { bmp "w0.bmp" } } finish { ambient 1.0 } }
  translate <-0.5, -0.5, -1.5> rotate <270, 90, 0>
}

The images w1, w2, w3 and w4 are the "normal" side views, w5 is the view up and w0 the view down. The images may need to be reordered or the rotations changed, depending on how they were generated.
The current POV-Ray beta has HDRI support - just change the file type to hdr.

Doofus

That is basically what I have done, but in 3DS Max: instead of using a spherical camera I have used Max's built-in Panorama Exporter, which essentially takes the six images and performs its own resampling to convert them into a spherical map. I have tried it with six images arranged in box formation with the camera in the centre, and I get the same result as the one I highlighted on the HDRShop website image before. The top and the bottom images are spread over such a large area that it becomes obvious where the joins are, especially, as DeanoD says, during animation.
The difference with what I am doing now is that I am working in increments of 45 degrees, so instead of four sides my box now has eight, meaning the top and bottom images do not have to make up over half of the final image between them. (See the attached image.)
Render-time wise it may not be as crazy as it sounds either.
If you render at double the size at 90 degrees than you do at 45 degrees, to account for the fact that each 90-degree image covers a much larger area of the final panorama, then the final difference in the amount of rendering is actually quite small:

6 x 1024 x 1024 = 6,291,456 pixels of rendering
26 x 512 x 512 = 6,815,744 pixels of rendering
Difference = 524,288 pixels, roughly a 724 x 724 image - basically meaning you actually only render about three quarters of an image more than with the 6-image method, but the result is way nicer I think.

Also I am only after skies, so...
5 x 1024 x 1024 = 5,242,880 px
17 x 512 x 512 = 4,456,448 px
Difference = 786,432 px less, roughly an 886 x 886 px image less rendering.
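
For anyone checking the arithmetic, a quick sketch (the breakdown of the 26 and 17 tile counts is my reading of the 45-degree scheme, not stated above):

# Quick check of the pixel budgets above. Assumed tile breakdown: 8 views
# around the horizon, 8 tilted up, 8 tilted down, plus one straight up and
# one straight down = 26; drop the downward ring and bottom view = 17.
six_full   = 6 * 1024 * 1024          # 6,291,456 px
box45_full = 26 * 512 * 512           # 6,815,744 px
print(box45_full - six_full)          # 524,288 px, roughly a 724x724 image

five_sky   = 5 * 1024 * 1024          # 5,242,880 px (no bottom face)
box45_sky  = 17 * 512 * 512           # 4,456,448 px
print(five_sky - box45_sky)           # 786,432 px saved, about 886x886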
Rub a little funk on it baby :D

bib

The image with the buildings seems to be made of photos (it's not rendered, is it?) and badly stitched - the street doesn't fit at all and is very blurry. I don't know why, but I think it's not because of the conversion to a sphere map.

In general it's somewhat a problem of resolution, distortion and distance to the objects. At the poles the spatial resolution is much higher than elsewhere, assuming all objects are the same distance from the camera. In the image with the buildings this is even worse, because the street is also much closer to the camera. So you need to generate a huge cube map to reduce the scaling near the poles, but you will never get rid of it exactly at the poles.

Using more images - 8 at 45° or more - approximates a cylindrical (or spherical, if you also do it up- and downwards) projection and reduces the difference in angular resolution. But you should get a similar result by simply rendering at a higher resolution.

And it may be a problem with the program you are using the sphere map with - this program should, for example, use a complete line from the image to color a pole.

If you don't care about floor and sky, consider using a cylindrical projection.

Can you post six 90° images you want to combine?

Doofus

I believe the buildings are photos. That image was taken from the HDRShop website as a bad example of what I mean, but the same problem is true for renders too. The reason for the blurriness, I think, is that the bottom and the top images of the six that make up the box have to cover a large proportion of the final image, and to do that they must either be rendered at really high resolution or stretched a long way. Like you say, you will never get rid of it entirely at the poles, but by using more divisions you can reduce the effect.
Using more subdivisions reduces the required resolution, and like I said before, it is actually worth doing more renders at a lower resolution and putting them together than doing fewer, larger renders - especially for me, as I just want skies.
Also, as I said when I began this thread, one of the main things I want is to limit the required amount of user interaction, since I want lots of frames of this - I eventually want an HDRI animated sky dome. This is not just a one-off image for me.
This way I can set Terragen to render out all the frames (once I get the animated version), then simply open up the 3DS Max file I have created (which will automatically load all the files, assuming the correct naming conventions have been stuck to), and click the render-to-panorama button. Voila - one HDRI panorama frame of animation. Rinse and repeat.
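
Outside Max, the same hands-off loop could be scripted. A minimal sketch, assuming six EXR faces per frame with hypothetical names like frame0001_+x.exr, an EXR-capable imageio backend, and the cube_to_latlong sketch from earlier in the thread:

# Minimal batch sketch; file naming and frame count are hypothetical.
import imageio.v3 as iio

FACES = ('+x', '-x', '+y', '-y', '+z', '-z')

for frame in range(1, 101):
    # load the six cube faces for this frame
    faces = {f: iio.imread(f"frame{frame:04d}_{f}.exr") for f in FACES}
    pano = cube_to_latlong(faces, out_w=2048, out_h=1024)
    iio.imwrite(f"pano{frame:04d}.exr", pano)   # one HDR panorama per frame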

"Using more images - 8 at 45° or more - approximates a cylindrical (or spherical, if you also do it up- and downwards) projection and reduces the difference in angular resolution. But you should get a similar result by simply rendering at a higher resolution."

I am doing it up and down too, to make it spherical, and it is more efficient to render more, smaller images than fewer, larger ones.
Rub a little funk on it baby :D