What are the limits of resolution in TG VS 35mm film?

Started by TheBadger, October 26, 2014, 02:14:06 am



please read http://pic.templetons.com/brad/photo/pixels.html

Some quotes from the link with my questions:

Quote: there are around 20 million "quality" pixels in a top-quality 35mm shot...
many can also argue that a shot of around 9 million pixels would look as good to the eye as a 35mm shot, except when blown up very large and looked at quite closely...

When thinking on a render as a photo, how does the above statement apply?

Quote: Film, as an analog medium, does not record just 256 grayscales or the corresponding 16 million colours. And film scanners, even doing just 8 bits per colour, get 24 bits of data for every single pixel. Today's digital cameras only get 8-12 bits of data for each pixel and they guess (interpolate) the other 16. So the colour accuracy for even a scanned film image is better than the modern digital camera. Good film scanners can also extract more than just levels from 0 to 255. They can often go to 12 bits (0 to 4095) to detect much more detail in shadows, and provide more contrast. As such a film scanner gets as much as 36 bits of information for each pixel, instead of 8.

Again, how does info like this apply to a TG render?

Quote: Negative film itself tends to be able to hold around 1000 to 1 contrast range. Quality slide film projects more levels, though over a slightly narrower exposure range. (To make this clearer, negative films capture a wider range but can't display it when printed. Slide films capture a more narrow range, just a bit better than digital, but can display it all when projected.) Generally one desires at least 12 bits per colour to represent it. Your eye, by widening and closing the iris, can sample an astounding (eye-popping!) 7 decimal orders of magnitude of range of contrast, which would need at least 24 bits.

So is the answer to all HDR? And if so, can we say rightly that a EXR render from TG is equal to a 35mm slide or negative?
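The quote's bit-depth figures check out with simple arithmetic. A quick sketch (the function name is mine, not from the article; the "at least 12 bits" figure for film presumably includes headroom beyond the bare minimum):

```python
import math

def bits_for_contrast(ratio):
    """Minimum integer bits needed to linearly encode a given contrast ratio."""
    return math.ceil(math.log2(ratio))

print(bits_for_contrast(1000))   # negative film's ~1000:1 range -> 10 bits
print(bits_for_contrast(10**7))  # the eye's ~7 orders of magnitude -> 24 bits
```

So the 24-bit figure for the eye's 7 orders of magnitude follows directly from log2(10^7) ≈ 23.3.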

Quote: So there is a lot of information in film. However, not all of it is usable information, which causes the debate about the equivalence in pixels. Film is made up of chemical grains or dye clouds. The more you blow up film, the more you start seeing noise caused by those grains, and eventually the very clumping of the grains themselves. Of course some are bothered by the grain more than others.

But this does not apply to us right? Because we can simply render bigger, right?

Quote: There is more information to be extracted even at this fine resolution, but the deeper you go, the more noise you also extract.

To make the image not look "grainy" and otherwise poor, you need to pull back. Subjective tests suggest this is to about 4000 DPI, or around 5600 pixels. For a 3:2 frame, that means around 20 million pixels.

So we render bigger at a good detail, no such problem for us, right?

Thanks in advance for info.
It has been eaten.


Speaking of the 35mm format, digital full frame SLRs exceeded what was possible with all but the slowest 35mm film some years ago. (And by slowest I'm thinking of Kodak Technical Pan, which has an ISO of 25, an extremely thin film base, and requires careful processing.) Even with a Canon 1Ds MkIII from seven years ago, I can print nearly 300dpi 12x18 prints that look as good as or better than medium format prints from past decades. No grain, gorgeous detail, wonderful tonal range. Now the Nikon D810 is offering 36MP of stunningly detailed resolution with the widest tonal range of any DSLR to date. With this, 20x30s at native resolution are possible - no uprezzing.

I have a 4000dpi film scanner here in the studio and I haven't used it for years. Even ISO 100 films simply fall apart into film grain at typical scanning resolutions. And for best results and color when scanning, you really should be scanning slides, which gives you that narrower dynamic range - but usually better color, as you aren't dealing with negative color masks.

As for dynamic range - the amount of detail possible - in particular in Nikon cameras (or rather, cameras using particular Sony sensors) - these days is quite incredible. Often to get HDR results, you tripod mount a camera and take a bracketed sequence of images at varying shutter speeds. Then you combine those images in special software to get an image that has much more tonal range than a single frame would. These newer sensors are allowing much of that to be done in a single frame because they are so good at preserving highlights and detail in the shadows.
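For anyone unfamiliar with the bracketing workflow described above, here is a minimal sketch of the merge step. Real tools (Debevec-style HDR recovery) also estimate the camera's response curve; this toy version assumes the frames are already linear, and all names are mine:

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Naive HDR merge: average per-frame radiance estimates, weighting
    mid-tones most heavily and down-weighting clipped pixels.
    `images` are float arrays in [0, 1], assumed linear (no gamma)."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # "hat" weight: 1 at mid-gray, 0 at clip
        acc += w * (img / t)                # radiance estimate from this frame
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Two frames of the same scene at different shutter speeds should agree on the recovered radiance; the weighting just decides how much each frame is trusted per pixel.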

Comparing that to 3D render ... Terragen has the advantage of offering HDR output right from the get go. No bracketing needed. And no grain due to increasing sensitivities or noise in the shadows ... but rather noise from incorrect render settings. :)

Assuming proper render settings, Terragen should give you better pixel quality for each pixel - you have no glass to aberrate the path of the light, no camera sensors to add their own thermal noise to the signal, no moire from AA filters or from the sensel grid interfering with details in the scene, etc. As a result, you should actually be able to achieve similar quality at lower output resolutions as you don't have to fight all of those things.

On the flip side ... as an artist, you now have to create or simulate all of the detail, randomness, dust, dirt, grunge, etc. that real life provides endlessly! (Including things like chromatic aberration, lens vignetting, etc. if you're going for photographic results.) Additionally, when you start rendering at those higher resolutions, it can become apparent that the detail is finite, forcing you to add more and increase your render times. Ulco had to deal with that a bit in his museum wall renderings, from which I was lucky enough to be able to render a couple of frames. Inspecting it at pixel level, the detail in certain things fell apart a bit. But, in viewing the entire frame, it looked magnificent. It's quite the same as viewing a photograph so closely that you start to see noise, lens aberration or other artifacts.

Still, it's nice to start with a file that is of very high quality and then degrade it to make it look more photographic. It's very difficult to start with a degraded image and try to make it look better! :)


The tricky thing with TG is that you need to keep the HDR nature of the renderer in the back of your mind when designing your scene.

Too often in the past I tried to get details in shadows/highlights at the default camera exposure, while in fact it is much more realistic to have dark shadows and a slightly over-exposed sky.
If you then render to a low dynamic range output, like .BMP, then like a real photographer you would have to choose to expose either for the shadows or for the clouds.
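That shadows-or-clouds choice is easy to make concrete. A hypothetical sketch of what an LDR export effectively does with HDR scene values (names and the example values are mine):

```python
import numpy as np

def expose(hdr, stops):
    """Apply an exposure offset (in stops) and clip to [0, 1],
    as a low dynamic range output like .BMP effectively does."""
    return np.clip(hdr * (2.0 ** stops), 0.0, 1.0)

scene = np.array([0.01, 0.5, 8.0])  # deep shadow, midtone, bright sky
print(expose(scene, +3))  # expose for shadows: midtone and sky clip to 1.0
print(expose(scene, -3))  # expose for the sky: shadows crushed toward 0
```

Whichever offset you pick, one end of the range is lost - exactly the photographer's dilemma described above.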


I wonder why photographic realism is such a 'high aim'. Photography has its limitations (vignetting, light refraction, dark shadows and an overexposed sky!), so a render that delivers better quality (that is, more like what you really see with your eyes) should (SHOULD) be a better standard. Something to think about...


I think it's simply because it's what people are "used to" and "expect". For the past 150 years we've been photographing the world through imperfect lenses and the imagery (with all of its imperfections) has kind of been embedded into our consciousness. So, if people don't see those imperfections in an image, something doesn't look right. They might not be able to put their finger on what ... but something.

Computer graphics have often been considered too clean, or too perfect. I think one of the reasons is because we're used to such photographic imperfection. This means CGI can become easier to spot due to its lack of photographic imperfections. (Not to mention the imperfections of unskilled application or failed simulations of real world phenomena).

On the flip side, there are photographers who strive for what they consider the photographic holy grail: a distortion/vignette/moire/color/artifact free image with massive resolution and bit depth.

So, Terragen artists have at their finger tips what many photographers want ... but the challenge they face is that they are responsible for creating the whole world as well!

I think it is a challenge that will be there for a long time to come, simply because I don't see imperfect lens based photography going anywhere any time soon. (At least I hope not! It's how I put food on the table. :) )

If it's truly an issue, one possible answer would be a set of post processing routines that allow you to take your pristine image and grunge it up a bit. Ideally, this would be something which would use depth passes to simulate optical aberrations based on distance from camera, as happens with real lenses, and perhaps even use real world lens values to simulate vignetting, distortions, etc. I would think things like this already exist in some form. Rendered footage often has to be matched with photographic plates and it seems they wouldn't want the CGI sticking out.

For your work, Ulco, I don't see that there would be an advantage. You aren't trying to make "photographs". I doubt book publishers or museums would see a lower quality file and think - "Exactly what we wanted!" :)


Thank you for the info.

For me, I am interested in non-traditional printing; through these processes a render literally becomes a real photograph. How? Well, you make a negative of the render in any number of ways (depending on the results you want) and you send the render through one of a number of non-traditional printing processes. There are a lot of these methods in photography, almost all of which apply to printing a render (some are more complex than others).
I have even looked at ways to use renders in intaglio. Not as hard as you may think. Although all of these things are very expensive.

My idea was that rendering is superior to a real camera in terms of image capture (of sorts). And that's part of what I am trying to be sure of here.

Also, can anyone say for certain what the pixel resolution of 35mm film is? I keep getting different info. I had heard it was 2K, but I recently heard it is 4K. And Google returns a million results, so I may not be asking my question the right way... Anyway, the more I learn, the less I know.

And while we are at it, what are medium format, large format, and also IMAX, just out of curiosity... So: a render of this size is equal to __ format.

I have to reread everything you guys already wrote when I have more time to search out more info on this.
It has been eaten.


The Digital Resolution of Film

So how many pixels does it take to describe all the detail we can get from film?

Fuji Velvia 50 is rated to resolve 160 lines per millimeter. This is the finest level of detail it can resolve, at which point its MTF just about hits zero.

Each line will require one light and one dark pixel, or two pixels. Thus it will take about 320 pixels per millimeter to represent what's on Velvia 50.

320 pixels x 320 pixels is 0.1MP per square millimeter.

35mm film is 24 x 36mm, or 864 square millimeters.

35 mm film is scanned for release on DVD at 1080 or 2000 lines as of 2005.
The actual resolution of 35 mm camera original negatives is the subject of much debate. Measured resolutions of negative film have ranged from 25-200 lp/mm, which equates to a range of 325 lines for 2-perf to (theoretically) over 2300 lines for 4-perf shot on T-Max 100. According to a Senior Vice President of IMAX, Kodak states that 35mm film has the equivalent of 6K resolution.
A 15-perforation 70mm IMAX film negative captures at an estimated 18K, i.e. the equivalent of 18,000 horizontal pixels.

To scan most of the detail on a 35mm photo, you'll need about 864 x 0.1, or 87 Megapixels.

But wait: each film pixel represents true R, G and B data, not the softer Bayer interpolated data from digital camera sensors. A single-chip 87 MP digital camera still couldn't see details as fine as a piece of 35mm film.

Since the lie factor from digital cameras is about two, you'd need a digital camera of about 87 x 2 = 175 MP to see every last detail that makes it onto film.

That's just 35mm film. Pros don't shoot 35mm; they usually shoot 2-1/4" or 4x5".

At the same rates, 2-1/4" (56mm square) would be 313 MP, and 4x5" (95x120mm) would be 95 x 120 = 11,400 square millimeters = 1,140 MP, with no Bayer Interpolation. A digital camera with Bayer Interpolation would need to be rated at better than 2 gigapixels to see things that can be seen on a sheet of 4x5" film.
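The arithmetic above is easy to double-check. A small Python helper (names are mine; note the thread rounds 320 px/mm squared down to 0.1 MP/mm^2, which is why its totals come out slightly lower than the exact figures):

```python
def film_megapixels(width_mm, height_mm, lp_per_mm, bayer_factor=1.0):
    """Megapixels needed to record film detail: two pixels per line pair,
    optionally scaled by a Bayer-interpolation 'lie factor'."""
    px_per_mm = 2 * lp_per_mm  # one light + one dark pixel per line pair
    return width_mm * height_mm * px_per_mm ** 2 / 1e6 * bayer_factor

print(round(film_megapixels(36, 24, 160)))     # 35mm at Velvia's 160 lp/mm: 88
print(round(film_megapixels(36, 24, 160, 2)))  # with the x2 'lie factor': 177
print(round(film_megapixels(56, 56, 160)))     # 2-1/4" square: 321
print(round(film_megapixels(120, 95, 160)))    # 4x5" sheet: 1167
```

Exact values land a touch above the thread's 87 / 175 / 313 / 1140 MP, purely because of that rounding.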

As we've seen, film can store far more detail than any digital capture system.

The gotchas with any of these systems are that:

1.) It takes one heck of a lens to be able to resolve this well.

2.) It takes even more of a photographer to be able to get that much detail on the film, and

3.) If you want to scan the film and retain this detail, you need one heck of a scanner (320 pixels per mm = ~8,000 DPI).

This is why, every time higher-resolution film scanners came out back before amateurs could afford DSLRs, we saw more detail where we thought we wouldn't see any.

Consumer 35mm scanners hit 5,400 DPI (Minolta) before the amateurs went to DSLRs, and even at 5,400 DPI we still saw more detail in our scans than we did at 4,800 DPI.

Film never stopped amazing us as we scanned it higher, and this is why.

5,400 DPI is equal to 212 pixels per mm, or 0.045MP/mm^2. Thus a 35mm slide, scanned on that Minolta 5400 scanner, yielded 39MP images, without Bayer Interpolation. Open these in Photoshop, and 39 x 3 ≈ 117 MB files - again, sharper than the Bayer-interpolated images from digital cameras.
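The scanner numbers work out the same way; a quick sketch (the helper name is mine):

```python
def scan_megapixels(width_mm, height_mm, dpi):
    """Megapixels produced by scanning a frame at a given DPI."""
    px_per_mm = dpi / 25.4  # 25.4 mm per inch
    return width_mm * height_mm * px_per_mm ** 2 / 1e6

mp = scan_megapixels(36, 24, 5400)  # ~39 MP for a 35mm frame
mb = mp * 3                         # 8-bit RGB, 3 bytes per pixel: ~117 MB
print(round(mp), round(mb))
```

The same function with dpi=4800 gives about 31 MP, which is why each DPI bump kept revealing more detail.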

Resolution has nothing to do with getting the right pixels and making a good photo, but if all you want to do is count pixels, count on film.


Hi Chris.
I have to read again your post, there is a lot of info to take in. But thank you in advance!

Quote: but if all you want to do is count pixels, count on film.

Well, I want to understand things better in a context that is useful to me in a practical way. If I can understand better what a photo is, in the way you talked about it and in the way the rest of this thread takes aim at it, in a 3D context, I may be able to make use of it in how I make and print. Maybe.

One thing I have noticed about all of the "fakes" out there (renders posing as photos and video), is that when they are exposed as non photos/video, it is almost always on technical grounds.

One (of many) benefits to printing renders in a non-traditional method is that the printed render (depending on your method) will take on many details of a real photo (since some of the methods are themselves a way of taking a picture). Though most methods I know are for B/W, there are a few color printing methods (like instant printing and color transfers, for example, plus all of the ways of printing you can see examples of here http://www.vervegallery.com/?p=represented_artists)

Here is something people can play with: https://shop.the-impossible-project.com/shop/cameras/impossible
Though it is going to be dependent on the resolution of the screen, it should nonetheless be obvious how one could add real "imperfections" to a render.
This is the simplest, least costly method I know of.

Also, you wrote about printing resolution - I think it was 8000dpi? Elaboration on this subject would be welcome.
It has been eaten.


One more observation from me about 'photographic reality', and then I'll shut up: I think the photographic imperfections strived for in CGI help to make CGI more believable, because you would actually get the idea that it was photographed, and thus reality.


Quote from: Dune on October 28, 2014, 05:25:00 am
One more observation from me about 'photographic reality', and then I'll shut up: I think the photographic imperfections strived for in CGI help to make CGI more believable, because you would actually get the idea that it was photographed, and thus reality.

I think it's like 'mhall' tried to explain: the only way we get our information is through low dynamic range media, like paper, TV and movie screens. Those are recorded and processed in ways which have a distinct signature look.

You can render an HDR sequence and tone-map it to look very similar to what our eyes would perceive, yet the viewer wouldn't 'buy it' because it doesn't look as expected.


That's kind of (or partly) what I meant, indeed: we expect something like a photograph, and not what we really see.


The whole phenomenon of human perception is inherently individualistic; to quote Robert Anton Wilson: "we each live at the end of our own reality tunnel". What the photographic process brought about was imagery that could be 'shared' as being a common viewpoint of 'the real'. Prior to that, earlier media such as painting and sculpture ranged across a large spectrum of human perception, from pre-historic cave paintings to the meticulous work of such masters as David and Ingres. Each human society has portrayed the world around them in various ways which were 'real' to that culture, but to our contemporary culture seem 'not real' (think traditional Chinese landscapes).

Photography has created a baseline of agreement between individuals as to 'the real'. However, given the complexity of physical phenomena bombarding our senses, our brain and nervous systems impose all sorts of filters to buffer the data down to a manageable stream. And as we've all experienced in the course of human interaction, not all 'filters' function the same: ever taken a psychedelic? So is the photographic 'baseline' the true 'real'? Hardly, as the comments about lens distortions, vignetting, etc. already made here show. Should photo-real stand as the only way to express 'the real'? Of course not. But it serves as a common reference point to mediate disparate individual perception.

Sorry to ramble on, but this is a primary concern of any visual artist, especially those of us trying to portray the 'natural' world. I personally dislike having my work called 'photo-real'; that's what a camera does. But given that the photographic baseline now exists in global human culture, it does give the artist tremendous license to peel back some of the 'filters' and produce extraordinary works. Van Gogh's Starry Night and Dali's Madonna of Port Lligat come to mind as examples.
Terragen may incorporate many of the attributes of the camera, but it is a tool that exceeds the sum of its technological components and allows the artist to create imagery that doesn't exist in the physical world, yet is consistent with the photo-real baseline. Moreover, it is able to exceed that baseline through the agency of the artist's perception. The camera is limited to what's in front of it, no matter how skillfully the photographer's eye perceives.


All good points here.

If you think about the video side, it is interesting how people react to 48 FPS, for example, even though it is closer to what we call real.
When we think about the First and Second World War, how many of you can imagine it in color and without grain?
Photography and cinema in general make things kind of more stylized, together with the technical flaws they have.

I tend to dislike anything that tries to mimic photography, like DOF, chromatic aberration, motion blur, lens flares, grain, etc.
But there are still times you have to use them, unfortunately, because others expect those things.

Another aspect is 3D. Some don't like 3D movies, but I like them.
Except for some people who have trouble seeing them properly, I cannot understand how you could not want all of them in 3D.
There are technical problems, like that they are too dark, etc.
But in general, 2D photography and cinema are totally fake by not using the third dimension.
Using it as other than a gimmick is another problem, of course.


Quote from: Tangled-Universe on October 28, 2014, 09:18:33 am
I think it's like 'mhall' tried to explain: the only way we get our information is through low dynamic range media, like paper, TV and movie screens. Those are recorded and processed in ways which have a distinct signature look.

You can render an HDR sequence and tone-map it to look very similar to what our eyes would perceive, yet the viewer wouldn't 'buy it' because it doesn't look as expected.

Quote from: Dune on October 28, 2014, 11:55:38 am
That's kind of (or partly) what I meant, indeed: we expect something like a photograph, and not what we really see.

One technological limitation that both of you are facing is the dynamic range of the display medium. To me it's not meaningful to say that we wouldn't buy it even if it were perfectly tone-mapped to make it appear how our eyes would perceive it, because no such tone map is ever likely to exist. In most cases it's also impossible to depict "what we really see". The sky is often brighter than the brightest pixel on my monitor, for example. Our brains can tell the difference, and no tone map can fool us. And our monitors don't show all of the colours that our eyes perceive.

So when something is too bright, or too dark, or too colourful to be correctly represented by our monitors, we have to choose what compromises we'll make. From that moment onward, it can never look absolutely real. But it could look like a photograph, if that's the compromise we choose to make, and for many of us that's quite a nice compromise. Some might choose to apply modern "HDR photography" techniques, and maybe it'll look like an HDR photograph (I personally don't like this aesthetic at all, but it seems popular.) And that might also be a way to make the render seem acceptable as a depiction of a real place, because we've seen HDR photos of things we know are real.

But if we choose a different compromise (e.g. change the lighting or materials to something less physically realistic in order to squeeze more things into the visible range), we are inviting the viewer to question what he's seeing because it deviates from reality in some way. It might be aesthetically pleasing, but it becomes more difficult to convince people that it's real unless the particular viewer happens to be insensitive to the aspect of reality that you manipulated in order to achieve some other objective.
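To make the tone-mapping compromise concrete: a classic global operator like Reinhard's squeezes unbounded scene radiance into [0, 1), and the choices it bakes in are exactly the kind of compromise described above. A sketch, not what any particular renderer does:

```python
import numpy as np

def reinhard(hdr, exposure=1.0):
    """Global Reinhard tone map: x / (1 + x). Compresses any radiance
    into [0, 1), keeping relative shadow detail but never letting a
    bright sky pixel exceed the monitor's idea of white."""
    x = np.asarray(hdr, dtype=np.float64) * exposure
    return x / (1.0 + x)

print(reinhard([0.0, 1.0, 100.0]))  # 0 stays 0; 1.0 maps to 0.5; sky ~0.99
```

Every such curve trades something away - here, very bright values all pile up just below 1.0, which is precisely why the sky never looks as bright as the real one.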

Just because milk is white doesn't mean that clouds are made of milk.


October 29, 2014, 01:34:58 am #14 Last Edit: October 29, 2014, 01:40:21 am by TheBadger
Quote: The camera is limited to what's in front of it, no matter how skillfully the photographer's eye perceives.

Well in a way of speaking yes, but so is a virtual camera in TG. But really you are implying that photography is more limited than 3d rendering and even that a render is not a photograph. I dispute this last idea.

Then again, it depends on what your idea of a photograph is. To me there is practically no difference between a 3D render and a photo (except if you want to get into the nitty-gritty of camera tech). In the end, I am drawing with light, which is the literal definition of a photograph. Algorithms, chemistry - so what?

And materially, if I create a negative from a render (which is a rather simple process) and then I expose that negative to a light-sensitive medium, what I have in the end is in fact a photograph. It makes no difference at all if the place in the photograph exists or not. It does not even matter if the camera I used exists in the real world or is virtual.

Think of "Man Ray" for example. http://uploads0.wikiart.org/images/man-ray/bservatory-time-the-lovers-1936.jpg is this a photograph? Yes. But a photograph not simply because he used a "camera" to make it. No. That's all I mean.

http://www.vervegallery.com/?p=artist_gallery&a=MG&g=3&r=1&photographer=Misha%20Gordin Photos? Yes. But why? Simply because he did it in a darkroom rather than Photoshop? I can't digest that.

But that is just how I am coming at this. I think it depends on what kind of pictures you like in the first place. For me, "print making" and printing is the door I came through for all of this: photography, 3D, graphics - all of it but cinema, I guess.

The only thing I am stuck on is TG, which is still, after a few years, a very difficult way to take a picture. And a few technical details, like what we have been trying to nail down in this thread. Mostly TG, though ;)

But if you say, well, you don't need a perfect 35mm-equivalent render to do those things - well, I am feeling like you would be right. But I still want to! It's part of it to me. It matters. It's part of the story. The paper matters, the chemicals matter, and I think so does all of this thread.

Not trying to argue really, just, you gave me a way to write down my thoughts on a subject that I love :-*

Pretty much everything in this thread is awesome!

a 5K monitor now sounds like a pretty nice thing to have.  ;)
It has been eaten.