SimonButtes Closeup

Started by Henry Blewer, October 24, 2011, 11:42:24 AM

Previous topic - Next topic

FrankB

Quote from: Tangled-Universe on November 16, 2011, 12:30:20 PM


Of course this should never be the common way to do it; I never said that. Still, it seems some think I'm telling everybody they should do it this way(?).


easy my friend, yes you never said that :)

dandelO

Yes, and I do apologize if I sounded like I was in any way rubbishing you or insinuating that you would advise everyone to do things this way, Martin. That's really not what was intended at all, so please don't think it was meant this way.

In fact you did say in your first post here that you wouldn't recommend these settings to just anyone straight out of the box.
Again, I just thought Henry would be done much quicker for essentially not much drop in quality with a more standard approach, is all. :)

Tangled-Universe

No worries guys :) I'm not upset, though it may look like I am. Obviously I'm sensitive in situations where I feel I'm only partially being heard :)
(then it has nothing to do with the technical side of the discussion; that is just fine as it is to me)
Perhaps I should keep the more advanced stuff a bit below the radar, as I admittedly tend to confuse things or people with it, so I'm the one to blame first  :P

Henry Blewer

I can see a contrast difference in dandelO's renders; the higher settings look better.

I have noticed that the pixel noise threshold setting can reduce render time greatly when used with the other settings on this window. I render a crop at 32 samples and 1/16 first samples, then again at 16 samples, and compare them. For this last Simon Buttes render, 24 seemed to be good. A noise threshold of 0.15 or 0.2 works well.

I am not sure why, but I like to keep these numbers on an AA 8/16/24/32 setting. I think Jo or Matt suggested using AA 8, or adjusting the AA by 2, higher or lower. But with high AA and the reduced samples of 1/16, the AA 8/16/24/32 settings seem better for testing and final rendering.
http://flickr.com/photos/njeneb/
Forget Tuesday; It's just Monday spelled with a T

Kadri


Guys, could the higher settings be more useful in animations? I seem to remember that the settings were useful for this... no?

Henry Blewer

I have not tried to use these settings for an animation yet. Unchecking the GI options would speed things up.
http://flickr.com/photos/njeneb/
Forget Tuesday; It's just Monday spelled with a T

Matt

#36
Quote from: Tangled-Universe on November 16, 2011, 10:19:32 AM
This setting does not mean the minimum number of samples is 36 per pixel, but that 36 is the average number of samples.

It means the minimum samples per pixel, effectively. The only reason it says 'mean' is because sometimes the minimum samples per pixel is not a whole number, so some pixels overlap a larger number of minimum sample points while some overlap fewer. This only happens when your AA number is something other than 1, 2, 4, 8, 16 etc.
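Matt's point can be put in toy arithmetic. This is only a rough sketch of my own, not Terragen's actual code, assuming one adaptive level holds a quarter of the maximum samples (as in Matt's AA 5 example elsewhere in the thread):

```python
# Rough sketch, not Terragen internals: AA = n gives up to n*n samples
# per pixel, and an adaptive level holding a quarter of them gives
# n*n/4 as the nominal per-pixel minimum.
def min_samples(aa):
    """Nominal minimum samples per pixel for one quarter-density level."""
    return aa * aa / 4

print(min_samples(4))   # 4.0  -> whole, every pixel gets the same minimum
print(min_samples(5))   # 6.25 -> fractional, so the UI can only report a mean
```

When the result is fractional, no pixel can actually receive 6.25 sample points; some overlap more, some fewer, and the UI shows the average.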
Just because milk is white doesn't mean that clouds are made of milk.

dandelO

#37
Thanks, Matt.

And the 'mean' label only applies to adaptive sampling. I thought so.
Non-adaptive sampling guarantees the set number of samples described in the min/max fields will be taken, since they are the same number, while adaptive sampling can differ (hence 'mean') because not all the sample numbers are whole.

Tangled-Universe

Thanks Matt,

Being a scientist myself, the naming convention "minimum number (mean)" can only mean to me that there's variation in the minimum number of samples for that specific AA setting: a range/distribution of minimum samples across the image, with the average/mean of that range being the number shown in the UI as "minimum number (mean)".

However, it seems there's only slight variability in the minimum samples for each AA setting, and that minimum is essentially constant rather than a range/distribution. So every AA setting has its own unique deviation from the given number of minimum samples.

Picky of course, but it's an example of how the naming convention at first glance gives a different idea than the actual meaning/working of the thing, which may ultimately lead to confusion. Fed entirely by different backgrounds, of course.

So perhaps it's best to remove the "mean" part of the setting, since it seems not to be so relevant at all?

Cheers,
Martin

Hetzen

Quote from: Kadri on November 16, 2011, 08:01:49 PM

Guys, could the higher settings be more useful in animations? I seem to remember that the settings were useful for this... no?

That's right Kadri, I had an animation with a lot of shrubbery/trees in the populations. Some had quite thin geometry with billboard leaves, tightly packed together, which was giving me some flicker trouble when moving the camera.

The solution was to send around 64 initial rays at the pixel being rendered, to get the most accurate average of that area. And to keep down the use of the next layer of rays being triggered, we upped the threshold to around 0.2.

The theory was: a pixel at render time is calculated by throwing a ray at that pixel's area in your scene. The more rays you throw at that pixel area (which could be several metres of real estate in your scene), the more detailed an average result you'll get back. This meant that when the next frame rendered the same pixel area, there would be a significantly better chance of it being close to the previous frame's in colour and luminance. Less flicker.

We had a 720p scene rendering in 1.5 hrs a frame.
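Hetzen's theory can be sketched with a toy Monte Carlo experiment of my own (random numbers stand in for scene rays; none of this is Terragen code). Averaging more initial rays per pixel makes each frame's result steadier, which is exactly less frame-to-frame flicker:

```python
import random

def pixel_value(n_rays, rng):
    """One frame's pixel: the average of n_rays random 'scene' samples."""
    return sum(rng.random() for _ in range(n_rays)) / n_rays

def flicker(n_rays, frames=200, seed=1):
    """Standard deviation of the pixel across frames: a crude flicker measure."""
    rng = random.Random(seed)
    values = [pixel_value(n_rays, rng) for _ in range(frames)]
    mean = sum(values) / frames
    return (sum((v - mean) ** 2 for v in values) / frames) ** 0.5

# More initial rays -> steadier frame-to-frame average -> less flicker.
print(flicker(8), flicker(64))
```

In this toy model the spread shrinks roughly with the square root of the ray count, so 64 rays flickers noticeably less than 8.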


Matt

Quote from: Tangled-Universe on November 22, 2011, 02:31:34 AM
So perhaps it's best to remove the "mean" part of the setting, since it seems not to be so relevant at all?

Yes, that might be best.
Just because milk is white doesn't mean that clouds are made of milk.

Matt

#42
Quote from: dandelO on November 18, 2011, 05:56:23 PM
Thanks, Matt.

And the 'mean' label only applies to adaptive sampling. I thought so.
Non-adaptive sampling guarantees the set number of samples described in the min/max fields will be taken, since they are the same number, while adaptive sampling can differ (hence 'mean') because not all the sample numbers are whole.

It's not really because of the adaptive nature. It's due to it being impossible to have the same number of samples on each pixel if you divide the number of samples by 4, 16 or 64. For example, if I have an AA of 5, that means there are up to 25 samples per pixel. They are arranged in a grid of 5x5 samples per pixel. If I have one level of adaptability, the first level is a grid that contains only 1/4th of the maximum samples. If I divide 25 by 4, I don't get a whole number. Samples are interpolated across multiple pixels, and some pixels overlap more samples than others. In reality, the actual number of samples that contribute to a pixel is even more complicated than that because of the anti-aliasing filter being used. So I suppose really it's only a guide.
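Matt's AA 5 example can be mimicked with a toy grid count. This is a hypothetical sketch of mine (it ignores the anti-aliasing filter he mentions, and the sample-spacing model is an assumption, not Terragen's code):

```python
def coarse_counts(aa, pixels):
    """1D toy model: count quarter-density sample points landing in each pixel.

    The full grid has spacing 1/aa of a pixel; keeping every other
    sample along each axis gives spacing 2/aa, which doesn't line up
    with pixel boundaries when aa is odd.
    """
    spacing = 2.0 / aa
    total = pixels * aa // 2          # pixels chosen so this is exact
    counts = [0] * pixels
    for k in range(total):
        counts[int(k * spacing)] += 1
    return counts

rows = coarse_counts(5, 2)                  # [3, 2] samples per pixel per axis
grid = [r * c for r in rows for c in rows]  # 2D counts: [9, 6, 6, 4]
print(sum(grid) / len(grid))                # mean is 6.25 = 25/4, as Matt says
```

No single pixel overlaps 6.25 sample points; the counts vary between 4 and 9, and only their average comes out at 25/4, which is why the label says 'mean'.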
Just because milk is white doesn't mean that clouds are made of milk.