The future of film production

Started by René, August 09, 2018, 08:08:52 AM


René

The future of film production. From 2:10 to 6:45

https://www.youtube.com/watch?v=tmKFUffO5G4

digitalguru

Mmm - I work in VFX, so I'm obviously not against technology, but you can leave too much to the cutting room sometimes.

There's a possibility of this wresting control away from the director, with the suits at the movie studios locking out the principals to create their own edit.

Look at something like Die Hard - John McTiernan shot that all in camera (i.e. virtually no coverage), and look how that turned out - Spielberg often does the same.

Then look at the middle episodes of the Star Wars series, where Lucas had the tech to cherry-pick performances and comp them together - IMHO a pretty sterile way to approach a movie.

Cameras might eventually get to the point where you won't have to bother to expose or even focus them correctly. Perhaps in the future film crews could just stick a camera in the middle of the action and let the actors get on with it while they go to lunch :-)

René

That will undoubtedly happen. Every new technology is used and abused. Add to that the use of A.I. and you get a movie where hardly any people are needed anymore. However, I am not very worried about this; some 20 years ago it was predicted that, with the advent of new technology, anyone would be able to make a professional film, and in some cases that has turned out to be true. Nevertheless, good directors, actors and skilled crews are still needed to make a good movie.


digitalguru

Of course, that was a semi-serious rant :-)

You see a lot of things on Click that seem to vanish into the ether eventually. I'm sure most producers, when faced with the cost and the petabytes of data needed to realize this, might say, "Can't we just get a director to shoot it and an editor to edit it?"

Deep data in compositing is a good example - it's a great idea and works better than conventional matting in most cases, especially with fluids, volumetrics, etc. But a lot of studios will defer using it if they can, because it just eats up hard disk space and puts a big strain on the bandwidth of the pipeline.
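To give a rough idea of what "deep" means in practice: instead of a single colour and depth per pixel, a deep image stores a list of samples per pixel, each with its own depth and alpha, so two elements can be merged correctly even where they interleave (smoke drifting both in front of and behind a character). A minimal Python sketch of the idea - not any studio's actual pipeline code, and the sample layout here is just an assumption for illustration:

# Each deep pixel is a list of (depth, (r, g, b), alpha) samples with
# premultiplied colour. Real deep formats (e.g. OpenEXR deep) are more
# involved; this only shows why depth-sorted samples composite correctly
# where a single flat matte cannot.

def merge_deep_pixels(pixel_a, pixel_b):
    """Combine the samples of two deep pixels into one depth-sorted list."""
    return sorted(pixel_a + pixel_b, key=lambda s: s[0])

def flatten_deep_pixel(samples):
    """Composite depth-sorted samples front to back with the 'over' operator."""
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for depth, rgb, alpha in samples:
        weight = 1.0 - out_a              # visibility left for this sample
        out_rgb = [c + weight * s for c, s in zip(out_rgb, rgb)]
        out_a += weight * alpha
    return out_rgb, out_a

# A smoke sample either side of a solid character sample interleaves
# correctly - no hand-drawn edge matte needed:
character = [(2.0, (0.4, 0.2, 0.1), 1.0)]
smoke = [(1.5, (0.06, 0.06, 0.06), 0.3), (2.5, (0.02, 0.02, 0.02), 0.2)]
print(flatten_deep_pixel(merge_deep_pixels(character, smoke)))

The flip side is exactly the storage problem above - every pixel can carry dozens of samples instead of one.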

I can see it really working in a big multi-angle battle sequence though, so you wouldn't have to worry about camera operators getting in the shot.

We really are at the vanguard of this technology, and there are a lot of exciting things emerging - cameras that can output perfect Z-depth, for example - and when HDR matures we'll get closer to reproducing how we see on screen.

They say in the clip that after the servers have processed the point cloud it still needs "VFX to clean it up" - so no redundancies just yet then :-)


WAS

You reminded me of Pixar's deep shadow maps, which use similar techniques. Actually, the results are quite intriguing. How common are deep images becoming in film?

pokoy

Hmm, so the mesh result from the photogrammetry isn't there yet - there are artifacts on 'object' borders and details look washed out in some places - but I guess it'll only take a few years to look a lot better.
Of course they show this to advertise it and get enough money in to develop it further, but they don't mention that it's not possible to get a shot from every angle - there's too much occlusion when people/objects are too close together or their motion is too fast. So there will probably be a lot of exceptions where directors are forced to shoot conventionally.

Bandwidth - good point, and it needs a crazy amount of server space.

Thanks for posting this, very interesting nonetheless!

digitalguru

Quote: "You reminded me of Pixar's deep shadow maps, which use similar techniques"

That's how it started - it was basically a deep shadow map projected from the scene camera. In a recent project we had a budget for what we could render deep and what to render with standard mattes - so shots with fire, dust, volumetrics, etc. would go deep, but it would have used disk space like crazy to do it on all shots.
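For anyone curious, the original Pixar deep shadow map idea (Lokovic & Veach) comes down to each shadow-map pixel storing a transmittance-versus-depth function rather than a single depth value. A toy Python illustration - the piecewise-linear lookup and sample layout are my own simplification, not Pixar's code:

# Toy deep shadow map pixel: a list of (depth, transmittance) samples,
# decreasing in transmittance, instead of one depth value. Visibility at
# any depth is read back by interpolating the stored function, which is
# what lets hair, smoke and other volumetrics cast soft self-shadows.
import bisect

def transmittance_at(deep_pixel, depth):
    """Return how much light reaches `depth`, interpolating between samples."""
    depths = [d for d, _ in deep_pixel]
    i = bisect.bisect_right(depths, depth)
    if i == 0:
        return 1.0                      # in front of everything: fully lit
    if i == len(deep_pixel):
        return deep_pixel[-1][1]        # behind everything: darkest value
    (d0, t0), (d1, t1) = deep_pixel[i - 1], deep_pixel[i]
    w = (depth - d0) / (d1 - d0)
    return t0 + w * (t1 - t0)

# e.g. a column of smoke between depths 1.0 and 3.0:
smoke_pixel = [(1.0, 1.0), (2.0, 0.6), (3.0, 0.25)]
print(transmittance_at(smoke_pixel, 2.5))   # partially shadowed inside the smoke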

I worked on one shot at a studio where we were rendering a lot of iterations, and the systems team called asking me to clean up my shots because we'd used 2 terabytes of space in a week :-)

Quote: "Hmm, so the mesh result from the photogrammetry isn't there yet - there are artifacts on 'object' borders and details look washed out in some places"

I think that's just a rough demo - if you look at the point cloud behind it, it looks quite detailed. It looks like a rough-and-ready chroma key over the top.

PabloMack

The initial solar system composite needs a Z map to go along with the real image in order to determine which image has precedence. That's something that simple chroma-keying can't do. I'll have to think about how I could coax my Kinect and TriCaster into doing it. In the cowboy fight scene, there's no MoCap because there are no CG characters. Interesting stuff.
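For the Z-map part, the per-pixel logic is straightforward: whichever source is closer to camera wins, regardless of colour - which is exactly what a chroma key can't tell you. A rough NumPy sketch (the array names and the hard depth test are assumptions; a real composite would soften the transition and handle missing depth):

# Depth-based A/B composite: for every pixel, keep the source that is
# nearer the camera according to its Z map. Chroma keying decides from
# colour alone and has no idea which element is actually in front.
import numpy as np

def depth_composite(rgb_a, z_a, rgb_b, z_b):
    """Return an image where the closer of the two sources wins per pixel."""
    front_is_a = z_a < z_b                      # boolean mask, shape (H, W)
    return np.where(front_is_a[..., None], rgb_a, rgb_b)

# Tiny example: 2x2 images, source A closer in the left column only.
rgb_a = np.full((2, 2, 3), 1.0)                 # white live-action plate
rgb_b = np.full((2, 2, 3), 0.0)                 # black CG element
z_a = np.array([[1.0, 5.0], [1.0, 5.0]])
z_b = np.array([[3.0, 3.0], [3.0, 3.0]])
print(depth_composite(rgb_a, z_a, rgb_b, z_b))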