As Terragen is used more frequently to produce landscapes as backdrops for live-action productions, it will become increasingly important to be able to automatically adjust live studio lighting so that it closely matches the lighting in the surrounding CG. One can already export channel files for the light sources themselves, but those sources only indirectly contribute to the ambient light at the virtual location(s) where the live actors are supposed to stand in the virtual landscape.

A solution to this problem would be a virtual light meter: say, a sphere of specified diameter, placed into the TG environment wherever the virtual light is to be sampled. When exported as a channel file, it would contain both the color and the intensity of the light reaching the "meter". To limit the amount of data produced, rather than key-framing every frame, one option would be to write to the channel file only the frames where the light sources and the meter themselves are key-framed, at the "beginnings" and "ends" of their animation ramps. If the meter itself is moving, or something like passing clouds changes the lighting too frequently, the user might want the option to report values to the channel file for every frame. Even then it would be desirable to limit the data with an algorithm that does a best fit of a series of ramps to the sampled values, reducing the number of key frames written to the channel file. Alternatively, this best-fit analysis could be done in the software that imports the data into the lighting controller system; in that case one would probably want to report lighting data for every frame.
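For the best-fit pass, something like a Ramer-Douglas-Peucker style simplification would do the job: keep a key frame only where dropping it would change the reconstructed ramp by more than a chosen tolerance. Here is a minimal sketch, assuming a single scalar channel (color would run one pass per component, or use the worst deviation across the three); all the names are my own, not anything in TG:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Key { double frame; double value; };

// Vertical deviation of point p from the straight ramp between keys a and b.
// Frames are monotonic, so measuring along the value axis is sufficient.
static double deviation(const Key& p, const Key& a, const Key& b) {
    double t = (p.frame - a.frame) / (b.frame - a.frame);
    double interp = a.value + t * (b.value - a.value);
    return std::fabs(p.value - interp);
}

// Recursive reduction: if every interior sample lies within `tolerance`
// of the ramp lo..hi, drop them all; otherwise keep the worst offender
// and recurse on both halves.
static void reduce(const std::vector<Key>& in, std::size_t lo, std::size_t hi,
                   double tolerance, std::vector<Key>& out) {
    double worst = 0.0;
    std::size_t worstIdx = lo;
    for (std::size_t i = lo + 1; i < hi; ++i) {
        double d = deviation(in[i], in[lo], in[hi]);
        if (d > worst) { worst = d; worstIdx = i; }
    }
    if (worst > tolerance) {
        reduce(in, lo, worstIdx, tolerance, out);
        out.push_back(in[worstIdx]);
        reduce(in, worstIdx, hi, tolerance, out);
    }
}

// Returns the reduced key list; endpoints are always kept.
std::vector<Key> simplify(const std::vector<Key>& keys, double tolerance) {
    std::vector<Key> out;
    if (keys.empty()) return out;
    out.push_back(keys.front());
    if (keys.size() > 1) {
        reduce(keys, 0, keys.size() - 1, tolerance, out);
        out.push_back(keys.back());
    }
    return out;
}
```

A perfectly linear ramp collapses to its two endpoints, while a brief spike in the middle keeps only the handful of keys needed to reproduce it.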
This metering could be done with a camera, which is already part of Terragen. However, a camera actually receives more light than it "sees", because light also arrives from out-of-frame. The FoV angles can be adjusted to obtain the desired effect, and with the new 360-degree panoramic camera, lighting can be completely sampled, from every direction, at any point in the scene. In this light (pun intended), the lighting values reported to the channel file could be obtained by simply averaging the camera's image.
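One wrinkle with the panoramic case: a straight per-pixel average of an equirectangular image over-weights the poles, so each row should be weighted by the solid angle it covers, roughly the cosine of its latitude. A sketch of that weighted average (the `Color` struct and `meterAverage` function are hypothetical names of mine, not TG API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Color { double r, g, b; };

// Average an equirectangular panorama down to a single "meter" reading.
// Rows are weighted by cos(latitude) so that pixels near the poles, which
// cover less solid angle on the sphere, do not dominate the result.
Color meterAverage(const std::vector<Color>& pixels, int width, int height) {
    const double pi = 3.14159265358979323846;
    Color sum{0.0, 0.0, 0.0};
    double wsum = 0.0;
    for (int y = 0; y < height; ++y) {
        double lat = pi * ((y + 0.5) / height - 0.5);  // -pi/2 .. +pi/2
        double w = std::cos(lat);
        for (int x = 0; x < width; ++x) {
            const Color& p = pixels[std::size_t(y) * width + x];
            sum.r += w * p.r;
            sum.g += w * p.g;
            sum.b += w * p.b;
            wsum += w;
        }
    }
    return {sum.r / wsum, sum.g / wsum, sum.b / wsum};
}
```

For an ordinary perspective render the weighting could simply be dropped and every pixel counted equally.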
If TG already has this capability, I would like to know how it is done; I have searched this forum for "light meter" and it brings up nothing. Such a device could be built from a TG camera if there were a way to create a channel file from its image averaged down to a single pixel value. In the camera's "Export" tab, alongside the existing "Export chan file" and "Export FBX file" options, an "Export Light Metering" option could be added, reporting the overall lighting as seen by the camera. With the panoramic camera now available, the overall environmental lighting coming from every direction could be sampled at any point in a virtual TG scene just by placing the camera there. The real "gotcha" is that this information can't be obtained without rendering, and when the render is done to individual still frames, where can the data be stored pending export to a channel file?
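Pending a built-in option, the exported data itself could be as simple as one whitespace-separated text line per rendered frame, in the spirit of a .chan file. To be clear, the column layout below (frame, R, G, B) is purely my assumption for a home-grown tool, not any official TG format:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format one frame of metered light as a .chan-style line:
// "<frame> <R> <G> <B>", one line per frame. The column layout is an
// assumption of mine; TG's official .chan export carries camera
// transform channels, not light values.
std::string chanLine(int frame, double r, double g, double b) {
    char buf[128];
    std::snprintf(buf, sizeof(buf), "%d %.6f %.6f %.6f", frame, r, g, b);
    return std::string(buf);
}
```

The importing side of the lighting controller could then parse these lines with any ordinary text tools.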
My guess is that this will become a feature request to the TG development team. As I am a developer and a C++ programmer, I could consider using the SDK for this purpose myself. However, I do not have, and have no plans to purchase, the somewhat pricey Microsoft C++ compiler. The toolchain I use is the most recent release of the Open Watcom compiler, which does not yet support 64-bit targets. Purportedly that support is in the works, along with ARM7 as a target, but it will take some time to arrive, and there may be incompatibilities between the two toolchains.
Perhaps the best way to solve this problem is to just write a program that averages a series of BMP files. Now that I think about it, I don't believe it will be very difficult. The only problem is that I will have to render a sequence just to do the metering. But I can render only at the key frames, which will cut down on the amount of rendering to be done as well as limit the amount of data in the lighting sequence.
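As a sketch of what that averaging program might look like, here is a function that takes the raw bytes of one plain, uncompressed 24-bit BMP (the variant I am assuming the renderer writes; anything fancier is rejected) and averages every pixel down to a single RGB value in 0..1. The function and struct names are my own:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct RGB { double r, g, b; };

// Little-endian 32-bit read from a byte buffer.
static std::uint32_t readLE32(const std::vector<std::uint8_t>& d, std::size_t off) {
    return d[off] | (d[off + 1] << 8)
         | (std::uint32_t(d[off + 2]) << 16) | (std::uint32_t(d[off + 3]) << 24);
}

// Average all pixels of an uncompressed 24-bit BMP down to one RGB value
// in 0..1. Row order (bottom-up vs. top-down) does not matter for an
// average, so only the row padding needs handling. Returns false for
// anything other than a plain 24-bit BMP.
bool averageBmp(const std::vector<std::uint8_t>& bmp, RGB& out) {
    if (bmp.size() < 54 || bmp[0] != 'B' || bmp[1] != 'M') return false;
    std::uint32_t dataOffset = readLE32(bmp, 10);
    std::int32_t width  = std::int32_t(readLE32(bmp, 18));
    std::int32_t height = std::int32_t(readLE32(bmp, 22));
    int bpp = bmp[28] | (bmp[29] << 8);
    if (bpp != 24 || width <= 0 || height == 0) return false;
    if (height < 0) height = -height;  // negative height = top-down variant
    // Each pixel row is padded to a multiple of 4 bytes.
    std::size_t rowBytes = (std::size_t(width) * 3 + 3) & ~std::size_t(3);
    if (bmp.size() < dataOffset + rowBytes * std::size_t(height)) return false;
    double r = 0.0, g = 0.0, b = 0.0;
    for (std::int32_t y = 0; y < height; ++y) {
        std::size_t row = dataOffset + rowBytes * std::size_t(y);
        for (std::int32_t x = 0; x < width; ++x) {
            std::size_t p = row + std::size_t(x) * 3;  // BMP stores B, G, R
            b += bmp[p];
            g += bmp[p + 1];
            r += bmp[p + 2];
        }
    }
    double n = double(width) * double(height) * 255.0;
    out = {r / n, g / n, b / n};
    return true;
}
```

Run over a rendered sequence, one call per frame, the results could be written straight out as the lighting channel file described above.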