Quote from: Matt on October 04, 2014, 03:39:09 AMPlease forgive the dumb question, but if you are not interesting in the directional information in the lighting (i.e. you just want to capture a "frosted glass" version of the light coming from all directions), then what is the purpose of the "light stage" you are building? If you compress the information down to a single RGB triplet, then you can simply shoot your live action with static lighting and do the lighting modulation in post, using a simple multiply/gain on the footage. (I'm using the term "light stage" pretty loosely here)
It's not a dumb question. The only way I can see to accurately light a live stage (with real lights) from HDRI images captured with the spherical lens is to put the whole stage under a dome whose interior is basically a curved display panel facing inward. The virtual light sources coming from those portions of the dome occupied by the green screen are obviously not available for lighting the live stage. However, surfaces lit from those regions of the dome are always on the far side of the live-action camera, so they are mostly out of view. Still, building such a light stage would be challenging and expensive, to say the least.
As a much cheaper and easier alternative, I can see where portions of the surrounding spherical image could be analyzed to control discrete lighting panels that are strategically positioned to approximate the virtual environment. Deriving this information from a single large spherical image could be daunting. I can imagine some visual tools where you would load the image, carve it up, and assign each region to a zone of control. I will have to think about that some more. But, as I said in my other thread, I am having technical difficulties making use of the EXR libraries, and I will have to solve those problems before I can even do something as simple as loading and displaying an EXR image.
An easy way to tackle the (diffuse) discrete-lighting approach is to render a series of images, one per "zone of control". Each zone would be driven by a virtual discrete light meter that looks at a different part of the dome. The resolution used to represent each region of the dome lighting the scene can be tiny, since each one is boiled down to a single pixel, but you would have as many of them as you have zones of control in the attempt to adequately light the whole "stage". I can see satisfying the requirements of many virtual outdoor scenes just by mimicking skylight with overhead lights as one zone of control and mimicking direct sunlight with some directional flood lights as another. You do it all live and don't have to fix anything in post except overall 2D color correction and brightness, which are almost trivial.
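To make the "virtual light meter" idea concrete, here is a minimal sketch of collapsing an equirectangular environment map into one RGB triplet per zone of control. The function name, the zone-map representation, and the toy two-zone example are all my own assumptions, not anything from an actual pipeline; the sin(theta) row weighting anticipates the pole issue Matt raises below.

```python
import numpy as np

def zone_averages(env, zone_map, n_zones):
    """Collapse an equirectangular HDR environment map into one RGB
    triplet per lighting zone -- a bank of virtual light meters.

    env      -- float array (H, W, 3), linear HDR pixel values
    zone_map -- int array (H, W); each pixel holds the index of the
                zone-of-control (physical panel) it is assigned to
    n_zones  -- total number of zones
    """
    h, w, _ = env.shape
    # Weight each row by sin(theta): pixels near the poles cover
    # less solid angle and should count less in the average.
    theta = (np.arange(h) + 0.5) * np.pi / h   # polar angle per row
    weights = np.repeat(np.sin(theta)[:, None], w, axis=1)

    meters = np.zeros((n_zones, 3))
    for z in range(n_zones):
        mask = (zone_map == z)
        wsum = weights[mask].sum()
        if wsum > 0:
            meters[z] = (env[mask] * weights[mask][:, None]).sum(axis=0) / wsum
    return meters

# Toy example: a 4x8 "dome" split into a sky zone (top half)
# and a ground zone (bottom half).
env = np.zeros((4, 8, 3))
env[:2] = [0.4, 0.6, 1.0]   # bluish sky
env[2:] = [0.3, 0.2, 0.1]   # brownish ground
zone_map = np.zeros((4, 8), dtype=int)
zone_map[2:] = 1
print(zone_averages(env, zone_map, 2))
# Zone 0 averages to the sky color, zone 1 to the ground color.
```

Each row of the result would then drive the color/intensity of one physical panel or flood light; a real version would load the EXR into `env` instead of synthesizing it.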
As you know, real lighting has very complex 3D interactions, and it is very labor-intensive to accurately approximate reality in post. It is extremely difficult to get it right using 2D video-processing tools such as After Effects. Real light, on the other hand, produces accurate 3D differential lighting simply because it is real light, as long as it comes from the right general directions with the right color and intensity (reflected images are another matter). I think most post-production houses do the lighting effects in post because (1) they have little control over production, (2) there are no good, affordable programmable lights on the market, and (3) they have far more talented people, and larger budgets to pay them for post-production work, than I have. Often the production planners don't have something like an automatic programmable lighting process in their pipelines, so they throw the footage over the wall to the post people and let them deal with it in the ways they know best. Some things are far easier to do as part of production than to "fix in post". It is my belief that there's a whole lot you can do very easily using permanent programmable overhead lights to provide the "skylight" and one discrete programmable directional light to mimic direct sunlight, for instance.
I'll be working to solve the EXR library problems. Once that is done, I'll have the challenging task of using a single dome image to drive discrete lights. I know it can be done, but I have the feeling it won't be easy.
Quote from: Matt on October 04, 2014, 03:43:23 AMYou'd need to take care to use smaller weights on the pixels near the poles.
I guess this is because a rectangular image is wrapped around a sphere, so the pixels squeeze together (i.e., each pixel covers less solid angle) as they approach the poles.