Terragen doesn't use resources?

Started by Blonderator, July 03, 2008, 02:23:38 PM


Blonderator

I run Terragen on a Windows Vista computer I built. I have a problem, though: while Terragen supposedly uses 100% of both my processor cores, it seems like it really isn't. I have 4 GB of RAM and Terragen only uses 20% of it. I want to know if there's a way to force Terragen to use more resources; when I'm rendering I rarely need to do other things, so I'd rather it use so much RAM that my computer is nearly locked up than so little that I can still run Crysis with a render running in the background.

Here are my specs:

Intel E6850 overclocked to 3.4 GHz (dual core)
4GB DDR2 RAM
Windows Vista
Technology Preview 4


I'm running a render of a huge cloud scene, and it's using only 20% of my RAM.

With the render in the background, I'm running Crysis at 15 FPS and GRID at 56 FPS.

neuspadrin

RAM won't help much with rendering faster. Sure, a certain amount of RAM is nice to be able to pull in objects quickly, etc., but overall it's the processor that is the most used resource, not the RAM.

Also, I'm not sure on this, but I believe the OS is smart enough to scale Terragen down a little to allow other things to run at semi-decent speeds. I actually enjoy the fact that my computer isn't at 100% nonstop just for Terragen, and that I can still open other programs and browse the net while rendering. The more you use other programs, the slower the render might be, though.
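
Mostly the scheduler just hands the CPU to whichever threads are ready, but you can make the "scale the render down" behaviour explicit by lowering the render process's priority so foreground apps win the tie-breaks. A minimal sketch, assuming a Unix-like system (on Windows you'd use Task Manager's "Set Priority" or launch the render with `start /low`):

```python
import os

def deprioritize(increment=10):
    # Raise this process's "niceness" so interactive programs get
    # the CPU first. os.nice is Unix-only; Windows exposes the same
    # idea through priority classes instead.
    try:
        return os.nice(increment)  # returns the new niceness value
    except AttributeError:
        return None  # os.nice doesn't exist on this platform

print(deprioritize())
```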

Blonderator

I sure wish Planetside would implement graphics card acceleration - with an 8800, it would be like having 2 processors.

However, I'm sure they would have to do a complete re-code to get graphics acceleration (though I'm not really familiar with programming), which I doubt they are willing to do after all the work on multithreading.


Tangled-Universe

As far as I know and have been told, graphics card acceleration for render-time displacement rendering (like TG2's) would give little or no speed improvement.
Also, the ultra-dynamic graphics card market makes it very difficult to build a stable cross-platform basis.
At least, that's what I understood :)

Martin

PG

Well, in order to use a GPU to perform the kinds of calculations that TG does, you'd have to use CUDA for nVidia cards or CTM for ATI cards to effectively turn the GPU into a general-purpose processor. If ATI agreed to use CUDA, it would allow for standardization of the process, but until then it is very difficult.

rcallicotte

GPU implementation is too fragmented...standards change constantly.

Look at what happened to Gelato...