Avoid Rolling Shutter Sensors for Motion Tracking and LED Lighting

Started by PabloMack, November 21, 2012, 04:27:26 PM

Previous topic - Next topic

PabloMack

This blog article spells out why you should say 'NO' to CMOS sensors and buy only camcorders with CCD sensors. CMOS sensors are a big strike against the new generation of DSLRs that want to be camcorders. Almost everyone in this forum is likely to want to composite CG with live video. If your real camera is moving, you are going to want to sync the virtual camera with the real one, and if you want to track any object in the frame you will also need a reliable time reference. To accomplish this you have to do a 'solve' on the live video using a motion tracking software package. The problem with CMOS sensors is their "rolling shutters": the rows of pixels are read out one after another from the top of the frame to the bottom, so every row carries a slightly different time stamp. This is the cause of the infamous "jello effect" when you play the video back. With CCD sensors, the whole frame is exposed at once and then read out.
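As a rough illustration of those staggered time stamps, here is a toy linear readout model (the 15 ms readout window and row count are made-up numbers, not any particular camera's spec):

```python
# A toy model of rolling-shutter readout: each row of the frame is
# captured at a slightly different time. The 15 ms readout window
# and 1080 rows are illustrative, not any real camera's spec.
def row_timestamps(frame_start, rows, line_time):
    """Capture time of each row; a global shutter would give
    every row the same timestamp."""
    return [frame_start + r * line_time for r in range(rows)]

# 1080 rows read out over ~15 ms: the bottom of the frame is
# captured ~15 ms after the top of the same frame.
times = row_timestamps(0.0, 1080, 15e-3 / 1080)
print(times[0], times[-1])
```

That per-row time offset is exactly what a tracking solve has to model (or the footage has to avoid) when the camera or subject moves.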

The article comes from the forum on the website by the maker of SynthEyes, a popular motion tracking software package.

http://ssontech.com/phpBB2/viewtopic.php?t=478

TheBadger

It has been eaten.

PabloMack

#2
I have come across another problem when using cameras with rolling-shutter CMOS sensors: they do not work well with LED lighting that uses PWM (Pulse Width Modulation) for dimming. There is no problem when the LEDs are fully on, and PWM never seems to be a problem when your camera has CCD sensors.

I have built a studio for shooting live action against a green screen, and I have put in (and am putting in more) LED lights for two reasons. First, LEDs require less power and therefore generate less heat. This is important for the comfort of the actors, director, and crew working on the set: we have to turn off the air conditioner during a shoot to eliminate the noise the fan and compressor generate, which makes low-heat lighting even more important. Second, LEDs are dimmable and come in different colors, and I am developing an LED lighting controller to simulate natural outdoor lighting, both real and CG.

As it turns out, dimming the LEDs is a problem while using a camera with a rolling shutter. I have done tests with both CMOS and CCD sensors: the CMOS footage shows unwanted artifacts in the lighting, while video taken with a camera that has CCD sensors shows no ill effects. That is because cameras with CCD sensors have what are called "global shutters": they sample the entire picture at once, so the pulsing of the light does not adversely affect the image. Here is an article someone posted on this topic. It was called "Pulse Width Modulation is not your friend", but it should have been called "Rolling Shutter is not your friend".

http://provideocoalition.com/aadams/story/pulse_width_modulation_is_not_your_friend/

This problem would go away if the dimming controller used voltage to control dimming instead of duty cycle. The reason PWM is used to dim LED lighting is that it is efficient and therefore generates little heat: the power transistors controlling the current through the LEDs are either fully on or fully off. It is when they are partly on that they must dissipate, as heat, the part of the supply voltage that is not dropped across the LEDs. There are other ways to control the current through the LEDs, but they are more complicated and more expensive.
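Here is a little simulation of that PWM/rolling-shutter interaction the way I would sketch it (every number in it, the 400 Hz PWM frequency, 1/2000 s exposure, and 15 ms readout, is illustrative, not measured from any real camera or dimmer):

```python
# Toy simulation of why PWM-dimmed LEDs band under a rolling
# shutter: each row integrates the light over its own staggered
# exposure window, so different rows catch different fractions
# of the PWM on-period. All numbers are illustrative.
def pwm_on(t, freq, duty):
    """1.0 while the LED is on in its PWM cycle at time t, else 0.0."""
    return 1.0 if (t * freq) % 1.0 < duty else 0.0

def row_brightness(row, line_time, exposure, freq, duty, steps=200):
    """Average light seen by one row of a rolling-shutter sensor."""
    t0 = row * line_time  # this row's exposure starts later than row 0's
    total = sum(pwm_on(t0 + exposure * i / steps, freq, duty)
                for i in range(steps))
    return total / steps

rows = [row_brightness(r, 15e-3 / 1080, 1 / 2000, 400, 0.5)
        for r in range(1080)]
# Rows range from fully dark to fully lit even though the LED is
# "dimmed to 50%": those are the bands in the footage.
print(min(rows), max(rows))
```

A global-shutter sensor exposes every row over the same window, so any PWM interaction shows up at worst as frame-to-frame flicker rather than as bands within a frame.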

TheBadger

Can we see a photo of your set-up, P? The studio I mean. What exactly are you doing? What are you trying to make?

PabloMack

Quote from: TheBadger on September 12, 2014, 01:08:39 AM
Can we see a photo of your set-up, P? The studio I mean. What exactly are you doing? What are you trying to make?

Here is a snap of the studio:
[attach=1]
The basic idea is to use CG packages such as TG and LW to create virtual sets in which to place live actors. In the photo you can see a green screen which is about 9 ft (2.75 m) tall and 24 ft (7.3 m) wide. It is curved so that the left and right 8 ft sections are straight and the middle 8 ft section arcs along a 90° curve. What they call a cyclorama would be more versatile, but you can't make a temporary one by hanging muslin. Along the top of the photo you can see light coming from a rail that parallels the green screen. It carries four lines of LEDs spaced at about 1 in (2.5 cm) intervals, each line a different color. They are powered through PWM dimmers; at present this is a box with a knob to control light intensity. In a room behind the left side of the green screen is the control center.
[attach=3]
There are two computers in there. One is a system from NewTek called a TriCaster. It does the real-time compositing of the live action with the CG elements in the shot. In our current films these are merely still "photos", but the camera positions have to be created during a live shoot. After the director decides where to place the real camera, the CG guy (me) has to create a matching virtual CG camera so that we get the perspective right for that angle. This turned out to be more difficult than I thought it would be, but still well worth the effort. The second computer in the control room is for generating CG using TG (outdoor virtual sets) and LW (indoor virtual sets and CG characters). In cases where the outdoors is visible through windows, renders from LW and TG are composited together. Here is an example:
[attach=2]
This second computer is very useful for rendering backdrops during a shoot while matching the real camera position to satisfy the director. It will also serve as the MoCap machine in the future. Out of view to the right in the photo of the studio is a space where we plan to have an actor controlling a CG character that will "interact" with the live actor on the green screen stage. The TriCaster will composite everything together in real time for the director and actors to view as the action takes place. To accomplish this, the second video monitor is piped into the TriCaster as one of the live cameras via a converter box.
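For anyone attempting the same camera match: the main number to carry from the real camera into TG or LW is the field of view, which follows from the lens focal length and sensor width. A back-of-envelope sketch (the sensor and lens values below are made up for illustration):

```python
import math

# Horizontal field of view from focal length and sensor width.
# The 24.9 mm sensor width and 35 mm lens are hypothetical values;
# plug in your own camera's numbers.
def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(horizontal_fov_deg(24.9, 35.0), 1))  # ~39.2 degrees
```

Matching the camera position and tilt still has to be done by eye (or by a tracking solve), but getting the FOV right first removes one big variable.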

To convince the audience that the live actors are actually in the CG scene, you have to match the real lights on the live actors with the lighting at the spot in the CG virtual set where they are supposed to be. Keep in mind that the lights might actually be changing: light from lightning changes very fast, while clouds passing overhead might change the lighting rather slowly. In all cases, though, you really want a programmable controller that can precisely repeat the same "lighting program" over and over again without deviation. With "takes" having to be redone because of an actor's mistake or anything else that ruins a shot, we don't want to complicate things further by having someone turn a knob a little differently on every take to control a lighting sequence manually. That is what the lighting controller is for. In a typical animated sequence you will have two guys in the control room: one controls the TriCaster and the other controls the MoCap machine (with simple setups I will do both). They listen for the director to say "action". If the CG backdrop is a video, the guy on the TriCaster presses "Start" and the guy on the MoCap machine starts the lighting sequence at the same time. Depending on the complexity of the shot, to get the timing exactly synchronized we will need the director to say "3..2..1..Action".
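Such a "lighting program" can be as simple as a table of keyframes replayed bit-for-bit on every take. A toy sketch of the idea (the keyframe format and the lightning-flash numbers are hypothetical, not the actual controller firmware):

```python
# A repeatable lighting program: (time_s, level) keyframes with
# linear interpolation, so every take replays the identical fade.
def level_at(t, keys):
    """Dim level (0..1) at time t, interpolated between keyframes."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]

# Hypothetical 2-second lightning hit: dark, instant flash, decay.
program = [(0.0, 0.1), (1.0, 0.1), (1.05, 1.0), (2.0, 0.1)]
print(level_at(1.05, program))  # 1.0 at the flash peak
```

Triggering the table from a single "Start" event is what makes take 7 light the actors exactly like take 1 did.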

Kadri


Pablo, how easy/good are the keying results?
Anything you want to say about the chroma subsampling (4:2:0 / 4:2:2?) of your cameras?
There are many threads about this. Curious what your specs are and how they look to you.

PabloMack

Quote from: Kadri on September 13, 2014, 11:15:51 PM
Pablo, how easy/good are the keying results?
Anything you want to say about the chroma subsampling (4:2:0 / 4:2:2?) of your cameras?
There are many threads about this. Curious what your specs are and how they look to you.

For this project, the "keying" I am talking about is not chroma-keying but light level/quality keying. That is, the real/physical lights (colors/levels) are set at key frames just like any other kind of action is keyed when animating a CG object, light, camera, etc. But the light falling on the actors who are inserted into the CG scene can't be set directly within Terragen; it must be sensed, because the lighting is very complex, as those at PS know. Instead, I plan to use virtual cameras as "light meters" to sense the light levels within the CG scene and use these to control the actual physical lights in my live studio.
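A crude sketch of that virtual-light-meter idea (the probe "render" here is just a nested list of RGB tuples with invented values; a real version would read whatever frames the CG package writes out):

```python
# Average a small probe render aimed at the actor's position in
# the CG scene; the mean RGB becomes the dim levels for the real
# LEDs. Probe data below is invented for illustration.
def meter(frame):
    """Mean (R, G, B), each 0..1, over a probe frame."""
    n = sum(len(row) for row in frame)
    sums = [0.0, 0.0, 0.0]
    for row in frame:
        for (r, g, b) in row:
            sums[0] += r
            sums[1] += g
            sums[2] += b
    return tuple(s / n for s in sums)

# 2x2 probe: warm key light on the left, shadow on the right.
probe = [[(0.8, 0.6, 0.4), (0.1, 0.1, 0.1)],
         [(0.8, 0.6, 0.4), (0.1, 0.1, 0.1)]]
print(meter(probe))  # roughly (0.45, 0.35, 0.25)
```

Several probes (key side, fill side, backlight) would drive the different LED lines on the rail independently.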

I'll let you know how this comes out when I get there. I am in the most complicated part of the firmware now. There's lots to do and software to write on the host side of the link before the system will be ready to be used for a test. I am hoping the lighting system will be usable some time in October.

But to answer what I perceive to be your question (which is not what I intended to address in this thread):

I did some research on what the sampling numbers mean, and few of the explanations out there are satisfactory. The numbers do not stand for R-G-B. Video is encoded as one luma channel (Y') and two chroma channels (Cb and Cr), and the notation describes how the chroma is sampled relative to the luma: 4:4:4 keeps the chroma at full resolution, 4:2:2 samples it at half the horizontal resolution, and 4:2:0 samples it at half resolution both horizontally and vertically. The zeros do not mean a channel goes unsampled. Since a chroma-keyer works on the chroma channels, the less they are subsampled, the cleaner the edges of the key.

In our group, I do the live chroma-keying, and I suspect the image I see coming live from the camera is taken before encoding, so the subsampling is irrelevant at that point. There is another guy who does the editing, and the subsampling of the recorded files matters there. Green screens are recommended because the full-resolution luma channel is weighted mostly toward green, and because the human eye (and hence camera sensor design) is most sensitive to green. Blue contributes the least to luminance, so it should generally be avoided as the key color when doing high definition where precision is needed (unless the video stream is uncompressed). But if your subjects absolutely have to be wearing green (say one is a live-action leprechaun), only then use another color such as red or blue.

My general rule is that if the live chroma-keyer has little problem pulling a good "mask", then the chroma-keyer in post should have little problem with the shots; the live keyer is a good test of what the final will look like. To make sure you have good separation, you should always use a uniform, well-lit chroma-key backdrop that really should have its own lighting, independent of the lighting for the subjects.
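For concreteness, here is a toy illustration of what 4:2:0 does to the chroma plane a keyer works on (plain block averaging with made-up numbers; real codecs filter and quantize as well):

```python
# 4:2:0 keeps luma at full resolution but stores one chroma sample
# per 2x2 block of pixels. Values below are toy 8-bit-style numbers.
def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane (even dims assumed)."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A sharp green-screen edge that falls mid-block gets smeared:
cb = [[100, 100, 100, 200],
      [100, 100, 100, 200]]
print(subsample_420(cb))  # [[100.0, 150.0]] -- the edge is averaged away
```

That smearing is why keys pulled from 4:2:0 footage tend to show soft or stair-stepped matte edges that 4:2:2 or 4:4:4 sources avoid.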

Kadri


Yes, from what I have read, if the green screen is evenly and well lit, keying is easier.

"However, that source material is frequently uprezed from a lower quality source and in many cases no one knows the better. Where higher bit rates are most definitely required is source material that is to be composited, green screens, etc. 4:2:2 color sampling is the only acceptable source material and many would prefer 4:4:4."
Said one guy in a forum, for example.

When the footage is 8-bit 4:2:0, color grading and keying can be more problematic, it seems.
Especially gradients, like in the sky, might result in banding etc...

Anyway there are many links related to that and some are very technical.
I just wanted to hear what your results are. Curious what results you will get :)

Edit: just to show others what I mean a little better:
http://www.dvinfo.net/forum/attachments/sony-xdcam-ex-cinealta/4915d1194262630-4-2-2-versus-4-2-0-chroma_examples-keyed.png

http://community.avid.com/cfs-filesystemfile.ashx/__key/CommunityServer.Components.PostAttachments/00.00.20.85.88/Chroma_2D00_Examples.jpg

https://vimeo.com/38076417

TheBadger

Some nice info here  :)

Pablo,
Are you leasing out services then, or do you have some specific projects/clients in mind already? And when you say "my group", do you mean a company or some sort of co-op/collaboration? Of course you don't have to say; just curious is all.

PabloMack

Quote from: TheBadger on September 15, 2014, 02:24:07 AM
Are you leasing out services then, or do you have some specific projects/clients in mind already? And when you say "my group", do you mean a company or some sort of co-op/collaboration? Of course you don't have to say; just curious is all.

My main line of work is embedded electronics (I'm a software and hardware developer), and I have been an independent contractor since 1990. My "group" is a collaboration. The indy film industry in Houston is fairly large, and including Austin would roughly double it. Counting actors, agents, makeup and prop people, grips, cameramen, editors, directors, producers, writers, script supervisors, etc., they certainly number in the thousands. I have been involved since about 2009. While there are many videographers, few can do much in the way of CG, so as I develop my capabilities and studio I will be able to offer more to this growing industry. You might want to look into what is going on in your area; many of these people need your local expertise. Here are a couple of things I pulled up on Google in your state:

http://madfilm.org/wud-film-mini-indie-film-festival-apr-24-27/
http://www.wifilmfest.org/

TheBadger

Thanks for the info on Texas, Pablo. My wife and I have talked about moving there a number of times for a variety of reasons.
I'm a UW grad, so I am aware of the industry here; I also worked here. I just don't like the freaking winters, and Madison is a bit irritating as a place to live. I liked living in AZ much, much better. Anyway, it's good to get a look at the industry, such as it is, in various places.

Really the only good thing about my location is that I'm very close to Chicago and Minneapolis, although there are a lot of corporate headquarters here and a few well-known game companies too. Still, I would move to Texas in a heartbeat under the right circumstances.

PabloMack

Quote from: TheBadger on September 15, 2014, 08:02:49 PM
I liked living in AZ much much better... I would move to Texas in a heartbeat under the right circumstances.

I spent 2.5 years in Flagstaff as a grad student at NAU, and my wife and I had a vacation there for 8 days just last month. We visited a guy who was in my class; he liked it so much he moved back. He's a professional birding tour guide and works for a company in Austin. I really love AZ, but I didn't like the state income tax. I did like that there was no auto inspection (I don't know about now) and that they don't observe daylight saving time. Texas has no income tax and relies on sales tax instead; that's the way to go, IMO. Austin used to be a small town, but it has grown fast. As far as indy film making goes, that's the place to be if you are a Texan. And, of course, the gun laws are like Georgia's: our faith in people is greater than our faith in government. Another great thing about Texas is that it's not hard to find work. My mother once told me that Houston never saw a depression during the '30s. But, of course, Arizona's landscapes are the best. ;)
[attach=1][attach=2][attach=3][attach=4]
But I can see one potential problem. You once told me that you prefer little dancing doggies wearing little dresses over such things as scorpions. Alfred Sherwood Romer, a world-renowned paleontologist, is famous for having written this about Texas: "Almost all the animals and plants bite or sting".