Problems with renders!

Started by Phylloxera, April 22, 2008, 01:13:00 PM

Previous topic - Next topic

Phylloxera

Strange: this render at a small size (260 x 195) with parameters detail 0.8, AA 3 and GI 1 enabled.
The render time is very long; after 20 minutes it still isn't finished, and we don't see the successive passes as usual?


Mohawk20

Are the models you use free?

If so, can you point me to their download location, and post the tgd here? I'd like to try some things before giving more advice.
Howgh!

Matt

#17
I am sorry that the scene you are working on is causing so many problems. Without knowing how much memory Terragen is using when it renders your scene (memory use + VM size), I do not know whether the problem is simply that Terragen has run out of address space (2Gb). There is little I can do to reduce the amount of memory used by populations, imported objects and textures. (OK, Terragen could handle textures more efficiently, but I think vegetation textures usually are not the main problem.)

With populations, the main thing that affects memory use is the number of instances. You should be able to render 2 or 3 million instances in total (perhaps more if your scene uses few resources), but if you have multiple populations then each population would need to be reduced.

If you have heavy objects in the scene (whether they are single objects or populated), they will use memory, leaving less available to the populators, so for a scene with multiple complex objects you're unlikely to be able to render 2 million instances. When choosing objects for your populations, remember that memory use works like this:

Memory needed to load object + memory needed to store positions/sizes/rotations of all the instances in the population.

Therefore, when deciding what objects should be made simpler, you don't need to choose the one that has the most instances. Choose the object that uses the most memory when it is loaded.
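Matt's formula can be turned into a back-of-the-envelope estimate. A minimal sketch in Python; the per-instance byte count is an assumption for illustration, not Terragen's actual internal figure:

```python
# Rough population memory model: the object is loaded once,
# then each instance only stores a small transform record.
# Byte counts here are illustrative assumptions, not Terragen internals.

BYTES_PER_INSTANCE = 40  # position (3 floats) + scale + rotation, roughly


def population_memory_mb(object_load_mb, instance_count,
                         bytes_per_instance=BYTES_PER_INSTANCE):
    """Estimate the total memory for one population, in megabytes."""
    instance_mb = instance_count * bytes_per_instance / (1024 * 1024)
    return object_load_mb + instance_mb


# A heavy 300 MB tree with only 10,000 instances...
heavy = population_memory_mb(300, 10_000)
# ...versus a light 5 MB grass clump with 2,000,000 instances.
light = population_memory_mb(5, 2_000_000)

print(f"heavy tree:  {heavy:.1f} MB")   # dominated by the object itself
print(f"light grass: {light:.1f} MB")   # dominated by instance records
```

Despite having 200 times fewer instances, the heavy tree population costs far more memory, which is exactly why the object with the largest load cost is the one worth simplifying.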

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Matt

#18
Quote from: Phylloxera on April 23, 2008, 12:08:39 PM
Strange: this render at a small size (260 x 195) with parameters detail 0.8, AA 3 and GI 1 enabled.
The render time is very long; after 20 minutes it still isn't finished, and we don't see the successive passes as usual?

What about "GI surface details"? That affects render time more than most other GI settings. I would not use it for very complex scenes.

The GI pre-pass probably finished very quickly. What is taking the time is the final pass, where GI surface details makes a very big difference to render time.

EDIT: In the next update this scene may render more quickly. GI lookup in the final pass was quite slow with complex vegetation and that has been improved, but I would still recommend NOT using GI surface details. GI works and gives reasonable results without it.
Just because milk is white doesn't mean that clouds are made of milk.

Matt

#19
Quote from: Phylloxera on April 23, 2008, 11:35:31 AM
I even reduced the water's reflectivity by 50% and raised the camera; no luck, no change!

The reflectivity of the water has no effect on render time, memory usage, or render stability. If reflectivity is at 50% it still makes the same calculations as at 100%. The only thing that changes is the final appearance of the water. It takes the reflection it has calculated and adds it to the water with only 50% weighting.

Only if reflectivity is at 0% would it disable reflection calculations.
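Matt's description can be sketched as toy shader logic (a simplified model, not Terragen's actual shading code): the expensive reflection trace runs whenever reflectivity is non-zero, and the weight is only applied afterwards.

```python
def shade_water(base_colour, reflectivity, trace_reflection):
    """Toy model of the reflectivity behaviour described above.

    trace_reflection is a (potentially expensive) callable; a real
    renderer would fire reflection rays here.
    """
    if reflectivity == 0.0:
        return base_colour  # only 0% skips the expensive calculation
    reflection = trace_reflection()  # same cost at 50% as at 100%
    return tuple(b + reflectivity * r
                 for b, r in zip(base_colour, reflection))


calls = []

def fake_trace():
    calls.append(1)          # count how often the "expensive" path runs
    return (0.2, 0.3, 0.4)

shade_water((0.1, 0.1, 0.2), 1.0, fake_trace)
shade_water((0.1, 0.1, 0.2), 0.5, fake_trace)  # still traces
shade_water((0.1, 0.1, 0.2), 0.0, fake_trace)  # trace skipped entirely
print(len(calls))  # -> 2
```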

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Phylloxera

Quote from: Mohawk20 on April 23, 2008, 01:26:12 PM
The models you use, are they free?

If so, can you point me to their download location, and post the tgd here? I'd like to try some things before giving more advice.
Sorry, I am using XFrog populations and a Lightning grass, so I can't give you a download link. Posting the .tgd here would therefore accomplish nothing!

Phylloxera

I'm doing a crop render of the bottom of the image: the full width, but only 0.20 of the height. It is not easy!
Here is the Task Manager data, with the render underway...

Matt

#22
Thanks for those screenshots. Those numbers are unfortunately high: nearly 1,800,000 Kb in total (about 1.7 Gb). This only leaves a few hundred Mb for rendering before it reaches Windows' 2 Gb limit per application. Additional RAM won't help - this is a limitation imposed by Windows.

When rendering a decent sized image, the ray-tracing engine used for shadows and reflections (and GI surface details if enabled) can easily use another 250 Mb. With the crop render you may be OK, but any larger renders would probably tip the renderer over the edge with the larger bucket/tile sizes which need memory for their anti-aliasing buffer.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

rcallicotte

Matt, excuse my ignorance if so, but isn't it possible for virtual memory usage to go way beyond Windows' 2 Gb limit?
So this is Disney World.  Can we live here?

Matt

#24
Virtual memory doesn't allow you to break the 2Gb barrier. Virtual memory is basically a way of allowing the OS and applications to use more memory than the amount of physical RAM installed in the machine, but Windows still only allows up to 2Gb of virtual memory per 32-bit application. Your combined applications and OS may use much more than that in total, but each individual application still only sees up to 2Gb.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Cyber-Angel

In the past it has been mentioned that TG2 uses a proprietary GI method, but there has been no mention of what it is based on. To hazard a guess from the render times reported (even of late), I would, like some others here in the past, theorize that it is some derivative of the Monte Carlo method, which, combined with render-time displacement, would go some way toward explaining the slow render times.
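For context, the Monte Carlo approach being guessed at here estimates lighting by averaging many random samples. A minimal, purely illustrative sketch in Python (this is generic cosine-weighted hemisphere sampling, not TG2's actual algorithm):

```python
import math
import random


def estimate_sky_irradiance(sample_sky, n_samples=256, rng=random):
    """Monte Carlo estimate of irradiance at an upward-facing point.

    Draws cosine-weighted random directions on the hemisphere and
    averages the sky radiance they see. Noise shrinks as 1/sqrt(n),
    which is why Monte Carlo renders are slow to converge.
    """
    total = 0.0
    for _ in range(n_samples):
        # Cosine-weighted direction on the hemisphere
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)
        phi = 2 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))
        total += sample_sky((x, y, z))
    # With cosine-weighted sampling the pi/cos factors cancel,
    # leaving a plain average (times pi for irradiance).
    return math.pi * total / n_samples


# Uniform sky of radiance 1.0 -> irradiance is pi, so this prints ~3.14159
print(estimate_sky_irradiance(lambda direction: 1.0))
```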

I thought that in recent years there had been a large body of pure research in academia into refining the Monte Carlo method to make it faster at render time?

All of the above is just my take on this.

Regards to you.

Cyber-Angel  ;D           

rcallicotte

Thanks, Matt.  I'm looking for a magic bullet.   ;D

Quote from: Matt on April 24, 2008, 12:02:45 PM
Virtual memory doesn't allow you to break the 2Gb barrier. Virtual memory is basically a way of allowing the OS and applications to use more memory than the amount of physical RAM installed in the machine, but Windows still only allows up to 2Gb of virtual memory per 32-bit application. Virtual memory can allow your combined applications and OS to use much more memory, but Windows still only allows each application to see up to 2Gb.

Matt

So this is Disney World.  Can we live here?

nikita

Quote from: Cyber-Angel on April 24, 2008, 12:17:35 PM
I thought that in recent years there had been a large body of pure research in academia into refining the Monte Carlo method to make it faster at render time?
When computer scientists optimize the time complexity of an algorithm, don't expect that kind of optimization to help you in the real world.  ;D

I recently saw a paper by someone from Norway who had tested two algorithms: one was asymptotically optimal for the problem, O(log n); the other was comparably worse, O(log²n).
It turned out that the "optimal" algorithm wasn't faster than the non-optimal one except for huge problem sizes (1.4e21).  ;D
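The effect described here is easy to reproduce: asymptotic order says nothing about constant factors. A small sketch (the constants below are invented for illustration, not taken from the paper):

```python
import math

# An "optimal" O(log n) algorithm with a large constant factor
# versus a "worse" O(log^2 n) algorithm with a small one.
# The constants are made up purely to show the crossover effect.

def cost_optimal(n):
    return 1000 * math.log2(n)


def cost_naive(n):
    return 2 * math.log2(n) ** 2


# The crossover is at 1000*log n == 2*(log n)^2, i.e. log2(n) == 500,
# so the "naive" algorithm wins until n is astronomically large.
for exp in (3, 6, 12, 200):
    n = 10 ** exp
    faster = "O(log n)" if cost_optimal(n) < cost_naive(n) else "O(log^2 n)"
    print(f"n = 1e{exp}: {faster} algorithm is cheaper")
```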


@calico Some operating systems use techniques like PAE to address more memory than they normally could. See http://en.wikipedia.org/wiki/Physical_Address_Extension

rcallicotte

Thanks, Nikita.  Interesting.  I tried using the "fix" to take advantage of the extra RAM by manipulating the boot.ini, but it hasn't worked.  Not sure why, but I'd like to try again.
So this is Disney World.  Can we live here?

nikita

The /3GB switch? Remember, it only works with XP Professional.
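For anyone else trying it: the switch goes at the end of the relevant boot entry in boot.ini (the disk/partition path below is just a typical example; yours will differ). Note that a 32-bit application only benefits if it was linked as large-address-aware; whether a given application was is up to its developers.

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```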