32-bit TIFF import: Does TG internally retain the full precision after import?

Started by epsilon, November 12, 2011, 08:56:42 am




I recently discovered the really great TG feature of importing heightfields as 32-bit TIFF files.
Now, since the original heightfields I want to import use the full 32-bit floating-point precision, I just want to make sure that TG retains that full precision during the import-from-TIFF process (i.e. that there is no intermediate conversion to a 16-bit representation, as in .ter import, for example).
So could one of the experts briefly confirm that my assumption is correct and TG2 does retain the full 32-bit precision after importing the data?

Thanks a lot !


32-bit floating point data should be fully preserved, whether loaded through Heightfield Load or through the Image Map Shader.

Just because milk is white doesn't mean that clouds are made of milk.


Hi Matt,

What do you think is the matter with displacement maps from Realflow? Those are .tiffs too, but when importing them into an Image Map Shader I get an error saying they are supposed to be in a kind of SGI standard format.
Normally I batch-process the .tiffs to SGI with Photoshop, but at ~40 MB per frame and hundreds of frames that's quite time-consuming.

Is it possible to extend the support for more/different tiff specifications?



That message comes up if the image is a 16-bit greyscale image (16 bits with only one channel). So far that particular combination of bit depth and channel count isn't hooked into our FreeImage integration. That's a limitation on our side, not in the FreeImage library, so we should fix it in the future.
Just because milk is white doesn't mean that clouds are made of milk.


There seems to be a misunderstanding. 32-bit colour depth does not imply a 32-bit floating-point format when storing pixel colour information in image files. If it did, each colour channel in a full-colour image would require its own 32-bit value, for a total of 3 × 32 = 96 bits of information per pixel; that would be called a 96-bit format. Including an alpha channel would raise it to 128 bits per pixel, a 128-bit format. So 32-bit (single-precision) floating point is not what is meant by a "32-bit format".

A "24-bit format" usually implies 8 bits per channel (red, green and blue), while going to 32 bits usually means an 8-bit alpha channel has been added, bringing the total to 32 bits per pixel. Each of these channels is only 8 bits wide, not 32, and the format is integer, not floating point. Similarly, 16 bits per colour channel requires 3 × 16 = 48 bits per pixel for colour images, and adding an alpha channel brings the per-pixel storage to 64 bits. Monochrome images using 16-bit greyscale values have only one channel (brightness), and so store just 16 bits per pixel in integer format.
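The arithmetic above can be sketched in a few lines of Python (an illustration of the naming convention only; the channel layouts listed are the common ones described above, not tied to any particular file's internals):

```python
# Bits per pixel = number of channels x bits per channel.
def bits_per_pixel(channels, bits_per_channel):
    return channels * bits_per_channel

# Common layouts and the format names they usually go by.
FORMATS = {
    "24-bit RGB":       (3, 8),   # R, G, B at 8 bits each
    "32-bit RGBA":      (4, 8),   # 24-bit RGB plus an 8-bit alpha channel
    "48-bit RGB":       (3, 16),  # R, G, B at 16 bits each
    "64-bit RGBA":      (4, 16),
    "16-bit greyscale": (1, 16),  # single brightness channel
    "96-bit float RGB": (3, 32),  # one 32-bit float per channel
}

for name, (ch, depth) in FORMATS.items():
    print(f"{name}: {ch} x {depth} = {bits_per_pixel(ch, depth)} bits/pixel")
```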

Light intensities are usually encoded to vary between 0 and 1. Because floating-point numbers are designed for values that range over many orders of magnitude, both negative and positive (rarely needed in image files), the exponent (8 bits) and sign (1 bit) of a 32-bit floating-point number would largely go to waste if used to store pixel values. The resolution of a 32-bit float restricted to values between 0 and 1 is effectively limited to 24 bits, since that is the size of the significand in IEEE-754 terminology. (Strictly, only 23 bits of the significand are stored; the high-order bit is implied to be 1 whenever the value is non-zero.)
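The 24-bit effective precision can be demonstrated by round-tripping values through a 32-bit representation with Python's standard struct module (a sketch; Python's own floats are 64-bit, so the packing step is what imposes single precision):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# The 24-bit significand (23 stored bits + 1 implicit bit) means the spacing
# between representable values near 1.0 is 2**-23.
assert to_float32(1.0 + 2**-23) != 1.0  # representable: survives the trip
assert to_float32(1.0 + 2**-25) == 1.0  # below the precision: rounds away
```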

As I understand it, image files used as maps for scaling geometric values use only the brightness of the image. If the file is a colour image, the brightness must be calculated from the colour channel information, so a 24-bit colour TIFF will probably yield only 8 bits of brightness depth per pixel. The hue and saturation that the same RGB data would yield (at 8 bits each) are thrown away, and an alpha channel (if present in a 32-bit format) would also be ignored. Using a 16-bit greyscale image for this purpose would double the brightness resolution and simplify the import in the software (the stored value is already a brightness value), and it would be much more efficient, since no channel information would be discarded. However, as Matt pointed out, this format is not supported in TG2.
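To illustrate the brightness calculation, here is a minimal sketch using the common Rec.601 luma weights (an assumption for illustration; whether TG2 uses exactly these weights is not stated anywhere in this thread):

```python
def luma_rec601(r, g, b):
    """Brightness from 8-bit RGB using the widely used Rec.601 weights.
    (Illustrative only; the exact weights an application uses may differ.)"""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# An 8-bit-per-channel colour image still yields at most 256 distinct
# brightness levels -- the hue and saturation information is discarded.
print(luma_rec601(255, 255, 255))  # -> 255 (white)
print(luma_rec601(255, 0, 0))      # -> 76  (pure red is fairly dark)
```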


While that's all true, perhaps the OP was referring to a 32-bit-per-channel floating-point greyscale TIFF. Also, we often store information outside the 0..1 range, so those extra bits aren't necessarily wasted.
Just because milk is white doesn't mean that clouds are made of milk.