I'm rendering a fairly high-res sky environment, and it took quite a while to work out a voxel count for the cloud layer that doesn't exceed the system's RAM. The renderer has to finish all its pre-processing before you can see how much RAM is actually used; if it's too much, you tweak the settings and repeat until it fits.
Is there a way to roughly estimate the RAM usage from the cloud layer's voxel count and resolution?
There is no really good way to estimate for final renders since it depends on a number of factors. The *final* voxel count is what matters. But I believe you can get a reasonable *relative* (not absolute) estimate by adjusting the voxel count while in RTP mode and seeing how the memory use is affected. The RTP makes some simplifications, but you'll see the memory use rise pretty much in line with voxel count. This isn't the same as the memory that will be used in-render, but you may be able to work out a relationship between the two and thus use the RTP as a faster guide.
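To illustrate the idea of working out that relationship, here's a small sketch. All the numbers are made up for the example - you'd substitute your own RTP and render measurements. It fits a line through two RTP readings to get the RTP's bytes-per-voxel trend, then scales by a render-to-RTP ratio measured once:

```python
# Hypothetical measurements: RTP memory (MB) at two voxel counts,
# plus final-render memory measured once at the first count.
# These numbers are made up purely for illustration.
rtp = {100_000_000: 450.0, 200_000_000: 880.0}
render_at_first = 1800.0

# Slope of the RTP memory trend (MB per voxel) from the two readings.
counts = sorted(rtp)
slope_mb = (rtp[counts[1]] - rtp[counts[0]]) / (counts[1] - counts[0])

# Ratio between final-render and RTP memory at the calibration point.
ratio = render_at_first / rtp[counts[0]]

def predict_render_mb(voxel_count):
    """Extrapolate final-render memory (MB) from the RTP trend."""
    rtp_estimate = rtp[counts[0]] + slope_mb * (voxel_count - counts[0])
    return rtp_estimate * ratio

print(round(predict_render_mb(300_000_000)), "MB (rough guess)")
```

This assumes the render-to-RTP ratio stays roughly constant as voxel count changes, which is exactly the relationship you'd be trying to verify.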
Thank you. I had thought about estimating this myself, and of course other factors such as render resolution and additional channels will also have to be factored in.
Is there a number one could use? Like 4 bytes per cloud layer voxel or something like that?
With RTP I can see how memory usage increases, but I don't know anything about the factor it uses - 0.05, 0.25, 0.5? - which makes it impossible to extrapolate to final renders, which is what I'd actually like to estimate.
This information applies to Terragen 4.0 to 4.4, but might change in future versions.
Each voxel may take up between 12 and 24 bytes, depending on whether it's visible in the image and whether it affects the lighting of other parts of the scene. In practice you will often find that, for large cloud layers where only parts of them are visible on screen, the memory use will be closer to 12 bytes per voxel. If most of the cloud is visible in the image you can expect it to get closer to the 24 bytes per voxel limit.
There is an additional buffer for cloud GI which is much smaller than the above, but has the potential to push the RAM use above 24 bytes per voxel.
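Based on the figures above (12 to 24 bytes per voxel, with the GI buffer potentially pushing past the upper end), a quick back-of-the-envelope range can be computed like this - the function and its bounds are just my reading of the numbers quoted, not anything official:

```python
def estimate_cloud_ram_gib(voxel_count):
    """Rough RAM range for a cloud layer in GiB.

    Assumes 12 bytes/voxel when only part of the cloud is on screen,
    up to 24 bytes/voxel when most of it is visible (TG 4.0-4.4).
    The small cloud GI buffer can push actual use slightly higher.
    """
    low = voxel_count * 12 / 1024**3
    high = voxel_count * 24 / 1024**3
    return low, high

# Example: a 500-million-voxel cloud layer.
low, high = estimate_cloud_ram_gib(500_000_000)
print(f"roughly {low:.1f} to {high:.1f} GiB")  # roughly 5.6 to 11.2 GiB
```

So for large layers where only a portion is visible, budgeting toward the lower bound is usually reasonable; for a sky that fills the frame, plan for the upper bound plus some headroom for the GI buffer.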
In the 3D Preview the RAM use is much lower but also more variable (and likely to change between versions of TG), so it's probably not useful to try to use that to estimate the RAM use in a full render.
The clouds might need additional RAM to calculate GI during rendering (e.g. using more subdiv cache), but if you're rendering a typical scene that already uses the subdiv cache it probably won't make much difference.
I've done some tests to see if anything I've said here is wrong, and so far it seems to be about right, but there may be other factors which I've forgotten to account for.