Badger, just to restate from an absolute baseline perspective: the concept of "baking" is to take something that is procedural or otherwise calculated dynamically at run/render time and turn it into something *static* and non-procedural. The result is generally more portable as well, i.e. exportable to other packages. But the important concept is that you take something dynamic and make it static, hence the "bake" terminology; you're "fixing" something (in the sense of "to make firm, stable, or stationary"). You lose the benefits of dynamic/procedural functions, but you gain portability, potential render time improvements, etc.
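To make the dynamic-vs-static distinction concrete, here's a minimal sketch in Python (not from any actual 3D package; the function names and the toy sine-based "noise" are made up purely for illustration). The procedural version is evaluated fresh at any point you ask for; the baked version is sampled once into a fixed table and from then on only looked up:

```python
import math

# Stand-in for a procedural function: evaluated fresh on every call,
# at arbitrary precision (layered sines here, not real noise).
def procedural_value(x):
    return 0.5 * math.sin(3.0 * x) + 0.25 * math.sin(7.0 * x)

# "Baking": sample the procedural function once at a fixed resolution
# and store the results in a static table.
def bake(fn, samples=256, domain=(0.0, 1.0)):
    lo, hi = domain
    step = (hi - lo) / (samples - 1)
    return [fn(lo + i * step) for i in range(samples)]

# After baking, evaluation is just a table look-up: fast and easily
# exported, but fixed at the resolution chosen at bake time.
def baked_value(table, x, domain=(0.0, 1.0)):
    lo, hi = domain
    t = (x - lo) / (hi - lo)
    i = min(int(round(t * (len(table) - 1))), len(table) - 1)
    return table[i]

table = bake(procedural_value)
print(procedural_value(0.37))    # computed dynamically
print(baked_value(table, 0.37))  # read from the static, baked table
```

Note how the baked table can no longer be tweaked by changing the function's parameters; that's exactly the dynamic flexibility you give up.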
Baking can be done for many types of 3D elements, from procedural noise functions, to spline curves, to rendering calculations like Global Illumination, and it can be done in several ways, too. Take the simple example of a sphere displaced by a procedural noise function: it could be called "baking" both to rasterize that noise function into an image map for displacement, *and* to calculate the final geometry at a given resolution and actually make the new object a "post-displaced" version, i.e. if you exported the geometry, it would have all the shapes and distortions that the displacement function created (and would most likely be very high-poly as a result).
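Here's a rough sketch of those two senses of "baking" displacement, again in illustrative Python with a toy noise function standing in for the real thing (none of these names come from Maya or any other app):

```python
import math

# Toy 2D "noise", standing in for a real procedural noise function.
def noise(u, v):
    return 0.5 + 0.5 * math.sin(10.0 * u) * math.cos(10.0 * v)

# Sense 1: rasterize the noise into an image map (here just a 2D list
# of values) that any package supporting image-based displacement
# could load as a displacement map.
def bake_displacement_map(res=64):
    return [[noise(u / (res - 1), v / (res - 1)) for u in range(res)]
            for v in range(res)]

# Sense 2: bake the displacement into the geometry itself by pushing
# each vertex along its normal; the exported mesh then permanently
# carries every shape the displacement function created.
def bake_displaced_geometry(vertices, normals, uvs, amplitude=0.1):
    displaced = []
    for (vx, vy, vz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        d = amplitude * noise(u, v)
        displaced.append((vx + d * nx, vy + d * ny, vz + d * nz))
    return displaced
```

In the second sense, the poly count of the baked mesh is whatever resolution you tessellated to before displacing, which is why the result tends to be very heavy.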
The reason I am trying to clarify is that "baking" is really nothing like "simply using opacity and/or blending modes in Photoshop to put multiple 2D layers into one image", so I suspected there was still some misunderstanding. Does that clear things up at all?
I realize there is still a lot missing in the explanation in terms of specific baking functions in various apps, and what can/should be baked, why, and when, but it's really a topic with somewhat broad scope, and also somewhat specific to each app. I hope, however, that once you understand the basic concept you can begin to intuit its value (or lack thereof) for your particular needs, and to identify the functions in the apps you use that equate to "baking", even if they're not called that. Off the top of my head, baking is basically useful for three major things in CG:

- Cross-application data interchange (e.g. ZBrush displacement map baking).
- Render speed improvements (e.g. bake a procedural texture to an image map so that the procedural function doesn't have to be calculated dynamically anymore; it's just a simple image look-up).
- Stabilizing procedural calculations (e.g. the GI cache in TG2, or any other situation where you can perform a procedural calculation once and share or interpolate it across multiple frames or multiple systems making calculations; there's a rough sketch of this below).

I may be missing some other uses, so others please jump in as needed.
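And here's a crude sketch of that last "calculate once, share across frames" idea (the sleep just simulates an expensive calculation like a GI solve; nothing here is actual TG2 code):

```python
import time

# Stand-in for an expensive per-frame procedural calculation
# (think of a GI solve; the sleep just simulates the cost).
def expensive_calculation(scene_state):
    time.sleep(0.1)
    return sum(scene_state) * 0.5

# Without baking: the same result is recomputed for every frame.
def render_dynamic(scene_state, frames):
    return [expensive_calculation(scene_state) for _ in range(frames)]

# With baking: calculate once, then share the cached result across
# all frames (valid as long as the relevant inputs don't change).
def render_baked(scene_state, frames):
    cache = expensive_calculation(scene_state)  # the "bake" step
    return [cache for _ in range(frames)]

scene = [1.0, 2.0, 3.0]
t0 = time.time(); render_dynamic(scene, 10); t1 = time.time()
render_baked(scene, 10); t2 = time.time()
print(f"dynamic: {t1 - t0:.2f}s, baked: {t2 - t1:.2f}s")
```

A real GI cache also interpolates between cached samples rather than reusing one result verbatim, but the economics are the same: pay the procedural cost once, then amortize it.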
So, to answer some of your original questions: provided your target application supports the same material functions as Maya (e.g. displacement), and your output formats (base geometry plus an image for displacement) are supported, then yes, in theory the target application (or any arbitrary app) should render the object similarly to Maya. In the case of TG, of course, you know that objects only render with displacement when Raytrace Objects is disabled, but other than that, yes, a model with a baked displacement map (displacement baked to an image) should render similarly in TG2.
- Oshyan