Quote from: TheBadger on September 12, 2014, 01:08:39 AMCan we see a photo of your set-up, P? The studio I mean. What exactly are you doing? What are you trying to make?
Here is a snap of the studio:
[attach=1]
The basic idea is to use CG packages such as TG and LW to create virtual sets in which to place live actors. In the photo you can see a green screen about 9 ft (2.75 m) tall and 24 ft (7.3 m) wide. It is curved so that the left and right 8 ft sections are straight and the middle 8 ft section arcs along a 90° curve. What they call a cyclorama would be more versatile, but you can't make a temporary one by hanging muslin. Along the top of the photo you can see light coming from a rail that parallels the green screen. The rail carries four lines of LEDs spaced at roughly 1 in (2.5 cm) intervals, each line a different color. They are powered through PWM dimmers; at present this is just a box with a knob to control light intensity. In a room behind the left side of the green screen is the control center.
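Since the dimmers are PWM-based, here is a minimal sketch of the kind of mapping a controller would do from a knob/fader position to a duty-cycle value. The gamma value is an assumption (2.2 is a common perceptual correction, not a measured figure for these LEDs), and the function name is mine, not anything from the actual dimmer box:

```python
def brightness_to_duty(level, gamma=2.2, resolution=255):
    """Map a perceptual brightness level (0.0-1.0) to a PWM duty-cycle value.

    LED light output is roughly linear in duty cycle, but the eye's
    response is roughly logarithmic, so applying a gamma curve makes a
    knob or fader feel linear to the viewer.
    """
    if not 0.0 <= level <= 1.0:
        raise ValueError("level must be in [0, 1]")
    return round((level ** gamma) * resolution)
```

With an 8-bit dimmer (0-255), half-perceived brightness ends up well below half duty cycle, which is why a dimmer without this correction feels like all the action happens in the bottom quarter of the knob's travel.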
[attach=3]
There are two computers in there. One is a system from Newtek called a TriCaster. It does the real-time compositing of the live action with the CG elements in the shot. In our current films these CG elements are merely still "photos", but the camera positions still have to be created during a live shoot. After the director decides where to place the real camera, the CG guy (me) has to create a matching virtual CG camera so that we get the perspective right for that angle. This turned out to be more difficult than I expected, but still well worth the effort. The second computer in the control room generates the CG, using TG for outdoor virtual sets and LW for indoor virtual sets and CG characters. Where the outdoors is visible through windows, renders from LW and TG are composited together. Here is an example:
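Matching the virtual camera to the real one comes down to matching position, orientation, and field of view. The field-of-view part can at least be calculated rather than eyeballed. A small sketch of the standard pinhole-camera relation, assuming you know the lens focal length and the camera's sensor width (the function name and example numbers are mine, not anything from TG or LW):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Horizontal field of view of an (idealized, distortion-free) lens,
    from the pinhole-camera model: fov = 2 * atan(sensor_width / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Example: a 50 mm lens on a full-frame (36 mm wide) sensor
# gives roughly a 39.6 degree horizontal field of view.
fov = horizontal_fov_deg(50, 36)
```

Feeding the resulting angle into the virtual camera gets the perspective in the right ballpark; position and tilt still have to be matched against the live plate by eye.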
[attach=2]
This second computer is very useful for rendering backdrops during a shoot while matching the real camera position to the director's satisfaction. It will also serve as the MoCap machine in the future. Out of view to the right in the studio photo is a space where we plan to have an actor controlling a CG character that will "interact" with the live actor on the green screen stage. The TriCaster will composite everything together in real time for the director and actors to view as the action takes place. To accomplish this, the "second" video monitor output is piped into the TriCaster as one of the live cameras via a converter box.
To convince the audience that the live actors are actually in the CG scene, you have to match the real lights on the live actors to the lighting at the spot in the CG virtual set where they are supposed to be. Keep in mind that the lights might actually be changing: light from lightning changes very fast, while clouds passing overhead might change the lighting rather slowly. In all cases, though, you really want a programmable controller that can precisely repeat the same "lighting program" over and over again without deviation. Takes already have to be redone because of an actor's mistake or anything else that ruins a shot; we don't want to complicate this further by having someone turn a knob a little differently on every take to control a lighting sequence by hand. That is what the lighting controller is for. In a typical animated sequence there will be two guys in the control room: one controls the TriCaster and the other controls the MoCap machine (with simple setups I will do both). They listen for the director to say "action". If the CG backdrop is a video, the guy on the TriCaster presses "Start" and the guy on the MoCap machine starts the lighting sequence at the same time. Depending on the complexity of the shot, getting the timing exactly synchronized may require the director to say "3..2..1..Action".
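The repeatable "lighting program" could be as simple as a keyframed intensity curve that the controller plays back identically on every take. A minimal sketch of that idea in Python (the keyframe times and values below are invented examples for a cloud-dim followed by a lightning flash, not our actual cues):

```python
def sample_program(keyframes, t):
    """Linearly interpolate a lighting program at time t (seconds).

    keyframes: list of (time, intensity 0.0-1.0) pairs, sorted by time.
    Before the first keyframe and after the last one, the value is held.
    Because the curve is a pure function of t, every playback is identical,
    which is the whole point for repeatable takes.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]

# Invented example: slow dim as a "cloud" passes, then a fast flash.
program = [(0.0, 1.0), (4.0, 0.6), (4.1, 1.0), (4.2, 0.6)]
```

A real controller would sample this curve at the PWM update rate and push the result to each LED line; starting the clock on the director's "Action" is what keeps the lighting and the CG backdrop video in sync.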