Will program move rendering to GPU?

Started by mmatthe8667, April 13, 2008, 09:42:35 PM


mmatthe8667

I was just wondering: will the new Terragen move rendering from the CPU to the GPU? I hear that it really reduces render time... Can anyone confirm or deny this?


old_blaggard

GPU rendering is far off in the future.  There are many reasons for this:
- GPUs require massive parallelization, which is usually quite difficult.
- GPU programming languages have only recently become mainstream and few people know how to utilize the new tools at their full potential.
- The GPU landscape is changing so incredibly quickly that any time spent learning to program and developing a program for a current set of GPUs would almost instantly be rendered obsolete by some update or architectural change.
http://www.terragen.org - A great Terragen resource with models, contests, galleries, and forums.

MeltingIce

Yes, it would greatly reduce render time, but programming for GPU rendering is just not worthwhile right now.  TG2 would basically have to be rewritten from scratch anyhow.

MeltingIce Network | Wii Number: 3881 9574 8304 0277

Cyber-Angel

Would you have to start from scratch again? I thought you had to account for parallelization when you program for multi-threading (I am not a programmer, so I don't know for sure). What is the difference between that and GPU use?
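
For anyone curious about the practical difference, here is a minimal illustrative sketch (not TG2 code; the "shading" is a placeholder and every name is invented). CPU multi-threading splits a frame across a handful of heavyweight OS threads, while the CUDA version launches one lightweight GPU thread per pixel:

    // cpu_vs_gpu.cu -- illustrative only
    #include <thread>
    #include <vector>

    // CPU multi-threading: each of a few OS threads takes a band of rows.
    void shade_rows(float* image, int width, int row_begin, int row_end) {
        for (int y = row_begin; y < row_end; ++y)
            for (int x = 0; x < width; ++x)
                image[y * width + x] = 0.5f;   // stand-in for real shading work
    }

    void render_cpu(float* image, int width, int height, int num_threads) {
        std::vector<std::thread> pool;
        int band = height / num_threads;       // assumes height divides evenly
        for (int t = 0; t < num_threads; ++t)
            pool.emplace_back(shade_rows, image, width, t * band, (t + 1) * band);
        for (auto& th : pool) th.join();
    }

    // GPU (CUDA): thousands of lightweight threads, one per pixel, all at once.
    __global__ void shade_pixels(float* image, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            image[y * width + x] = 0.5f;       // the same stand-in work
    }

The CPU version parallelizes over a few coarse chunks, which is all that multi-threading (and render farms) need. The GPU version only pays off when the work decomposes into many thousands of independent fine-grained pieces, which is why being GPU-ready is a much stronger requirement than being multi-threaded.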

If TG2 is not parallelization-aware, then how is it meant to be used on multi-node/multi-CPU render farms, which have thousands of CPUs with thousands of cores? There must be documentation out there written by people in the supercomputing/cloud-computing part of the ICT industry, such as IBM and Sun Microsystems, or even by leading GPU vendors such as NVIDIA, on the painless implementation of parallelization at the software level; information that must be available via their channel-partner or ecosystem programs, or through white papers published on the subject!
     
Regards to you.         

Cyber-Angel


rcallicotte

This would be my biggest concern.


Quote from: old_blaggard on April 13, 2008, 10:08:29 PM
- The GPU landscape is changing so incredibly quickly that any time spent learning to program and developing a program for a current set of GPUs would almost instantly be rendered obsolete by some update or architectural change.
So this is Disney World.  Can we live here?

Jeremy Ray

Quote from: calico on April 14, 2008, 08:42:48 AM
This would be my biggest concern.


Quote from: old_blaggard on April 13, 2008, 10:08:29 PM
- The GPU landscape is changing so incredibly quickly that any time spent learning to program and developing a program for a current set of GPUs would almost instantly be rendered obsolete by some update or architectural change.

Someone must be doing it or there would be no demand for GPUs.  How do game programmers stay ahead of the curve?

old_blaggard

Game programmers use existing technologies such as OpenGL and DirectX for their games.  These would be of no use to Terragen, though. 
http://www.terragen.org - A great Terragen resource with models, contests, galleries, and forums.

MeltingIce

As far as I know, Maya is the only 3D rendering program that supports hardware rendering.  However, it cannot do a full-quality render using the GPU; the output usually comes out pretty poor quality and is intended only for previewing purposes.

MeltingIce Network | Wii Number: 3881 9574 8304 0277

PG

There are ways you can use GPUs to do mathematical calculations that the CPU normally does, but GPUs are designed to read vertices. In order to render the kind of things that Terragen does, you'd have to create a library similar to DirectX. You wouldn't necessarily have to rewrite the entire program, though, as the graphics library could be kept external and linked in; that's how graphics and physics engines work for games.
The main problem in doing what TG2 does on a GPU is a technical one. Ray tracing is often just as slow on the GPU as it is on the CPU, and things like cloud samples wouldn't be calculated efficiently on the GPU either; you'd have to use CUDA to change how the shader core is used. I say the shader core because it wouldn't otherwise be doing anything, so it could render all the samples you'd want ;D
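
To make the point above concrete, here is a rough sketch of what a CUDA-offloaded sampling step could look like. Everything in it is hypothetical (the density function and all the names are invented); TG2's real cloud shading is far more involved:

    // sample_clouds.cu -- hypothetical sketch, not TG2 code
    #include <cuda_runtime.h>

    // Stand-in density function; a real cloud shader is far more complex.
    __device__ float cloud_density(float x, float y, float z) {
        return 0.5f + 0.5f * __sinf(x * 0.1f) * __cosf(z * 0.1f);
    }

    // One GPU thread per sample point: the massive parallelism a GPU needs.
    __global__ void sample_clouds(const float3* points, float* densities, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float3 p = points[i];
            densities[i] = cloud_density(p.x, p.y, p.z);
        }
    }

    // Host side: copy sample positions over, run the kernel, copy results back.
    void sample_clouds_gpu(const float3* host_points, float* host_densities, int n) {
        float3* d_points;
        float* d_densities;
        cudaMalloc(&d_points, n * sizeof(float3));
        cudaMalloc(&d_densities, n * sizeof(float));
        cudaMemcpy(d_points, host_points, n * sizeof(float3), cudaMemcpyHostToDevice);
        int threads = 256, blocks = (n + threads - 1) / threads;
        sample_clouds<<<blocks, threads>>>(d_points, d_densities, n);
        cudaMemcpy(host_densities, d_densities, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_points);
        cudaFree(d_densities);
    }

Whether something like this actually beats a well-threaded CPU loop depends on the copy overhead and how coherent the memory access is, which is exactly the caveat above.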
Figured out how to do clicky signatures

Jeremy Ray

Quote from: old_blaggard on April 14, 2008, 10:44:45 AM
Game programmers use existing technologies such as OpenGL and DirectX for their games.  These would be of no use to Terragen, though. 

So much for that idea . . .

Cyber-Angel

I read in an issue of 3D World Magazine some months ago that there is an emerging trend to migrate away from OpenGL-based previews and move to DirectX-based ones. The only other option, which is trickier to implement but offers the artist the choice, is to provide both OpenGL and DirectX as selectable options; a simple radio button with clear labels next to each, placed in the preferences, would suffice.

The question that therefore arises is: is DirectX cross-platform for Mac and Linux and other platforms, such as Sun and SGI, that may be supported in the future, or is it only found on Windows?

Regards to you.

Cyber-Angel           

pixelmonk

Quote from: Cyber-Angel on April 14, 2008, 10:19:09 PM
I read in an issue of 3D World Magazine some months ago that there is an emerging trend to migrate away from OpenGL-based previews and move to DirectX-based ones. The only other option, which is trickier to implement but offers the artist the choice, is to provide both OpenGL and DirectX as selectable options; a simple radio button with clear labels next to each, placed in the preferences, would suffice.

The question that therefore arises is: is DirectX cross-platform for Mac and Linux and other platforms, such as Sun and SGI, that may be supported in the future, or is it only found on Windows?

Regards to you.

Cyber-Angel           


Windows.

3DGuy

Quote from: MeltingIce on April 14, 2008, 11:45:23 AM
As far as I know, Maya is the only 3D rendering program that supports hardware rendering.  However, it cannot do a full-quality render using the GPU; the output usually comes out pretty poor quality and is intended only for previewing purposes.
There's an NVIDIA renderer called Gelato for 3ds Max that lets you render using your GPU.

neon22

Here is the info on CUDA:
http://www.nvidia.com/object/cuda_home.html

It's the most likely direction to go at this point (IMHO), as it allows the GPU to be programmed as though it were a CPU.
However, it's a C compiler, not C++, so in all likelihood the amount of effort needed to restructure the code to use the GPU makes it uneconomic at this stage. (And it's not like TG has huge numbers of developers.)

The only other approach would be to take a section of the rendering code that looked like it might be Cudafiable (not really a word) and have it called from the main TG2 code, as sketched below. But that's clearly a lot of work for an unknown performance gain. Maybe in a later TG version :-)
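
That "call it from the main code" idea would look roughly like the sketch below: a single hot function compiled separately by nvcc and exposed through a plain C interface, so the existing C++ application links against it without becoming a GPU program itself. All names here are hypothetical, not TG2's actual code:

    // gpu_heightfield.cu -- compiled by nvcc, linked into the main application
    #include <cuda_runtime.h>

    __global__ void displace_kernel(float* heights, int n, float scale) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) heights[i] *= scale;   // stand-in for a real displacement shader
    }

    // Plain C entry point, so the host code needs no CUDA headers at all.
    extern "C" void displace_heightfield_gpu(float* heights, int n, float scale) {
        float* d_heights;
        cudaMalloc(&d_heights, n * sizeof(float));
        cudaMemcpy(d_heights, heights, n * sizeof(float), cudaMemcpyHostToDevice);
        int threads = 256, blocks = (n + threads - 1) / threads;
        displace_kernel<<<blocks, threads>>>(d_heights, n, scale);
        cudaMemcpy(heights, d_heights, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_heights);
    }

    // Elsewhere, the existing renderer just declares and calls it:
    // extern "C" void displace_heightfield_gpu(float* heights, int n, float scale);

This keeps the GPU dependency isolated in one translation unit, which matches the idea of Cudafying a single section rather than restructuring the whole renderer.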

Oshyan

GPU rendering is not being done on any notable scale in a production renderer at this time, and there is a good reason for that: it's too new, and the performance in scenes of the complexity typical for realism is not actually that great. Recoding TG2 to take full advantage of the best GPUs available today might provide some performance advantage for the minority of people who have spent a great deal of money on their graphics hardware, but many, many more people have fairly powerful multi-core CPUs, and the multithreaded version of TG2 (currently in alpha) already takes good advantage of those configurations. Taking all the time necessary to create a GPU version just wouldn't make much sense for us right now. As GPU-based rendering matures, we will continue to evaluate its feasibility.

- Oshyan