There seems to be a misunderstanding. 32-bit color depth does not imply a 32-bit floating point format when storing pixel color information in image files. If that were true, each color channel in a full-color image would require its own 32-bit value, storing 3*32 = 96 bits of information per pixel; that would be called a 96-bit format. Including an alpha channel would require 128 bits per pixel and would be called a 128-bit format. 32-bit (single precision) floating point is not what is meant by a 32-bit format. A "24-bit format" usually means 8 bits per channel (red, green and blue), while going to 32 bits usually means an 8-bit alpha channel is added, bringing the total to 32 bits per pixel. Each of these channels is only 8 bits wide, not 32, and the format is integer, not floating point: four 8-bit channels total 32 bits per pixel. At 16 bits per color channel, a color image requires 3*16 = 48 bits per pixel, and adding an alpha channel brings the per-pixel storage to 64 bits. Monochrome images using 16-bit gray-scale values have only one channel (brightness), so they store just 16 bits per pixel in integer format.
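The per-pixel arithmetic above is just channels times bits per channel; a short sketch (Python used only for illustration, the function name is mine):

```python
# Bits per pixel is simply channels * bits_per_channel.
def bits_per_pixel(channels, bits_per_channel):
    return channels * bits_per_channel

print(bits_per_pixel(3, 8))    # RGB at 8 bits/channel: the "24-bit" format
print(bits_per_pixel(4, 8))    # RGB + alpha: the "32-bit" format
print(bits_per_pixel(3, 32))   # three 32-bit floats would be 96 bits
print(bits_per_pixel(4, 32))   # plus a 32-bit alpha: 128 bits
print(bits_per_pixel(1, 16))   # 16-bit gray-scale: one channel, 16 bits
```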
Light intensities are usually encoded to vary between 0 and 1. Floating point numbers are designed for values that vary over many orders of magnitude, both negative and positive, which is rarely needed in image files, so the exponent (8 bits) and sign (1 bit) of a 32-bit floating point number would largely go to waste if used to store pixel values. The effective resolution of a 32-bit float restricted to values between 0 and 1 is about 24 bits, since that is basically the size of the significand in IEEE-754 terminology. (Strictly, the stored significand is only 23 bits; the high-order bit is not actually stored and is implied to be 1 whenever the value is non-zero.)
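You can check that 24-bit limit directly: since a float32 significand carries 24 bits of precision (23 stored plus the implicit leading 1), integers round-trip through single precision exactly only up to 2**24. A small check using just the Python standard library:

```python
import struct

def roundtrip_f32(x):
    """Pack a number as IEEE-754 single precision and read it back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 2**24 survives exactly; one more does not -- 24 bits of precision.
assert roundtrip_f32(float(2**24)) == float(2**24)
assert roundtrip_f32(float(2**24 + 1)) != float(2**24 + 1)
```

The same 24-bit ceiling applies to values scaled into the 0-to-1 range, which is why the exponent and sign bits buy nothing for image data.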
As I understand it, image files used as maps for scaling geometric values use only the brightness of the image. If the file is a color image, the brightness must be calculated from the color channel information, so a 24-bit color TIFF will probably yield only 8 bits of brightness depth per pixel. The same RGB information would also yield hue and saturation (at 8 bits each), which are thrown away, and an alpha channel (if present in a 32-bit format) would also be ignored. Using a 16-bit gray-scale image for this purpose would double your brightness resolution and simplify the import process in the software, because each stored value already is a brightness value. It would also be much more efficient, as no channel information would be discarded. However, as Matt pointed out, this format is not supported in TG2.
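For the RGB-to-brightness step, one common convention is the Rec. 601 luma weighting; I don't know which formula TG2 or any particular importer actually uses (it might even be a plain average), so treat the weights below as an illustrative assumption:

```python
def luminance(r, g, b):
    # Rec. 601 luma weights -- one common convention, not necessarily
    # what the importing software uses.
    return 0.299 * r + 0.587 * g + 0.114 * b

# With 8-bit channels the result still spans only 0..255, i.e. about
# 8 bits of brightness depth, no matter how it is later rescaled.
print(luminance(255, 255, 255))   # full white
print(luminance(0, 0, 0))         # black
```

Whatever the exact weights, the point stands: three 8-bit channels collapse into a single brightness value with only 8 bits of useful depth.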