Planetside Software Forums

General => Open Discussion => Topic started by: PorcupineFloyd on October 01, 2009, 07:25:47 AM

Title: Rendering via CUDA
Post by: PorcupineFloyd on October 01, 2009, 07:25:47 AM
http://www.mentalimages.com/products/iray

Looks like it's going to happen really soon. I wonder what performance advantages it will bring and whether it'll be possible to implement CUDA-based rendering in TG2.
Title: Re: Rendering via CUDA
Post by: PG on October 01, 2009, 08:33:30 AM
CUDA is/was already being used in a render farm program called BURP (Big and Ugly Rendering Project) on BOINC. They're a right pain about prioritising projects, but anyone could send them a project to be rendered and it would be done through distributed computing.
It's not very well integrated with particular programs though, unfortunately. This looks like it's actually a plugin, which'd be really cool. And yeah, it'd be amazing to run TG2 on your GPU, but typically GPU rendering or GPGPU computing (or crunching) is massively intensive and it can destroy them pretty quickly. I've had to buy 2 new GPUs from running BOINC and Folding@home.

I'm still hopeful about my distributed rendering idea for TG2 that I posted a while ago. People seemed pretty interested in it, and it would lessen the load on people's GPUs if they only had to do a small section of a scene. I've been working with CUDA for the last few months and am in contact with a guy at nVidia who's helping me with it, so if Planetside are interested in helping me develop an application for BOINC (not coding, obviously they're way too busy for that) then we could have this.


Oh and those images. OMFG. And they probably rendered in 10 minutes on a GPU.

Edit: For those not familiar with just how powerful GPUs are, here's a comparison of the evolution of nVidia chips versus Intel CPUs.

(http://img98.imageshack.us/img98/1288/nvidiacudaprogramminggu.th.jpg) (http://img98.imageshack.us/i/nvidiacudaprogramminggu.jpg/)
Title: Re: Rendering via CUDA
Post by: Tangled-Universe on October 01, 2009, 11:37:42 AM
I always appreciate graphs explaining stuff, but not when they lack legends for the axes :(
I mean: there's virtually no difference on the left half of the X-axis and from there on the difference becomes huge, but what are the conditions? Are those conditions only experimental and not representative of users at home, etc. etc.?
Title: Re: Rendering via CUDA
Post by: PG on October 01, 2009, 11:49:41 AM
Yeah, I just googled it :D The x axis is just time, 2003 to 2008. So from the NV30 to the GT200 nVidia chips, and the Intel Northwood up to the Harpertown.
Title: Re: Rendering via CUDA
Post by: Henry Blewer on October 01, 2009, 11:57:46 AM
I burned out my graphics card just a while ago. I asked too much of it. I do not think this is a good idea. :'(
Title: Re: Rendering via CUDA
Post by: Cyber-Angel on October 02, 2009, 02:05:08 AM
I'd rather use my GPU than the current situation we have with CPU rendering, which I'm sure is a good way to shorten the life of your CPU: those of us with GPUs capable of TG2 rendering, that is, and I speak for myself here. I think that right now TG2 uses what's called software rendering, where the rendering is handed off to the software (thus CPU intensive), vs. hardware rendering, where the software hands off the rendering to hardware at render time, which I believe is faster (hardware permitting).

I mean, from what I've read both Mental Ray and Maya have hardware rendering, albeit on certified hardware, so maybe that is what is going to have to happen in the future of Terragen, if hardware rendering is implemented somewhere down the road.

;D

Regards to you.

Cyber-Angel     
Title: Re: Rendering via CUDA
Post by: PG on October 02, 2009, 12:09:37 PM
Terragen should benefit massively from GPU rendering. Not just in terms of performance. CUDA allows you to arrange threads into blocks that use the same piece of shared memory and can collaborate on the kernel they're processing. This should mean that GI errors should be avoided and possibly missing polys in populations or displacements.
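For anyone unfamiliar with the terms, here is a minimal CUDA C sketch of what "blocks of threads sharing memory" means (a toy per-block sum; it has nothing to do with Terragen's renderer and doesn't by itself say anything about GI or missing polygons):

// Toy CUDA kernel: each block of 256 threads cooperates through shared memory
// to sum one 256-element chunk of the input array. Purely illustrative.
__global__ void blockSum(const float *in, float *blockResults, int n)
{
    __shared__ float cache[256];             // memory shared by all threads in this block

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    cache[tid] = (i < n) ? in[i] : 0.0f;     // each thread loads one element
    __syncthreads();                         // wait until the whole block has loaded

    // Tree reduction: the threads of the block collaborate on the shared data.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        blockResults[blockIdx.x] = cache[0]; // one partial sum per block
}

// Launch example (256 threads per block): blockSum<<<numBlocks, 256>>>(d_in, d_partial, n);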
Title: Re: Rendering via CUDA
Post by: jo on October 09, 2009, 12:55:00 AM
Hi PG,

Quote from: PG on October 02, 2009, 12:09:37 PM
Terragen should benefit massively from GPU rendering. Not just in terms of performance. CUDA allows you to arrange threads into blocks that use the same piece of shared memory and can collaborate on the kernel they're processing. This should mean that GI errors should be avoided and possibly missing polys in populations or displacements.

I'm sorry, but that's just really wrong in so many ways :-). I was going to try and explain it but can't think where to start. I mean that in the best possible way, I'm not trying to be offensive.

Regards,

Jo
Title: Re: Rendering via CUDA
Post by: jo on October 09, 2009, 01:29:49 AM
Hi,

Quote from: Cyber-Angel on October 02, 2009, 02:05:08 AM
I'd rather use my GPU than the current situation we have with CPU rendering, which I'm sure is a good way to shorten the life of your CPU: those of us with GPUs capable of TG2 rendering, that is, and I speak for myself here.

Are you worried the transistors will wear out ? ;-) If you're worried about overheating then you need a better CPU cooler I guess.

Quote
I think that right now TG2 uses what's called software rendering, where the rendering is handed off to the software (thus CPU intensive), vs. hardware rendering, where the software hands off the rendering to hardware at render time, which I believe is faster (hardware permitting).

The difference is in hardware. A GPU is essentially a very specialised vector processor (like the SSE unit on CPUs) with lots of cores, so it can do a lot of work in parallel. Their architecture is becoming more general, with better support for floating point numbers, but I think a lot of what is enabling GPGPU type stuff is actually software layers like CUDA sitting between the developer and the GPU, which make it easier to program with it.

You still have to write your application to suit what GPUs are good at, which is processing lots of similar data in parallel.

I think there is still some settling down needed before TG2 could be made to use the GPU for final rendering. TG2 uses double precision floating point numbers extensively. GPGPU stuff seems to only just be settling down to fully supporting single precision floating point and double precision is still a bit rudimentary. There also needs to be a clear leader in an API which looks like it will stick around and make it worth investing in. CUDA is still restricted to NVIDIA cards as far as I'm aware. I don't even own one (which is accidental rather than deliberate). I don't think CUDA is a good bet long term. I like the idea of OpenCL, which NVIDIA along with others do support, as a cross-platform, processor-agnostic API. It will be interesting to see how it works out. One of the good things about it is that it also supports vector units on CPUs, so if there were parts of TG2 which we could rewrite to work with OpenCL then that would also get us SSE unit support on the CPU, for example.
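As a toy illustration of the double precision point (my own example, not Planetside's code): a 32-bit float carries only about 7 significant decimal digits, so at planetary-scale coordinates it can no longer resolve small offsets, while a 64-bit double can.

#include <cstdio>

int main()
{
    float  rf = 6371000.0f;   // roughly Earth's radius in metres, single precision
    double rd = 6371000.0;    // the same value in double precision

    printf("float : %.4f\n", rf + 0.01f);  // the 1 cm offset is lost to rounding
    printf("double: %.4f\n", rd + 0.01);   // the offset survives
    return 0;
}

Presumably this is part of why a renderer that works at planetary scales leans so heavily on doubles.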

Regards,

Jo
Title: Re: Rendering via CUDA
Post by: PG on October 09, 2009, 04:43:40 AM
As far as I've been told by nVidia's rep, CUDA can be used with a driver API that allows you to execute a single function with different arguments on each thread. So whatever functions govern, well whatever, you execute that but change the input arguments for which bit needs to be rendered next. That's what he said anyway.
I'm still relatively new to CUDA, so I'll ask the rep if he thinks it's viable.
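If it helps, here is a rough sketch of what that execution model looks like (hypothetical names, nothing from TG2 or nVidia's rep): one kernel function is launched over many threads, and each thread uses its index to pick which piece of work, e.g. which image tile, it handles.

#include <cuda_runtime.h>

struct Tile { int x, y; float value; };          // hypothetical unit of work

__device__ void shadeTile(Tile *t)
{
    t->value = 0.5f * (float)(t->x + t->y);      // placeholder "rendering" work
}

// The same function runs on every thread; the thread index selects the argument.
__global__ void renderTiles(Tile *tiles, int numTiles)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numTiles)
        shadeTile(&tiles[i]);
}

int main()
{
    const int numTiles = 1024;
    Tile *d_tiles;
    cudaMalloc((void**)&d_tiles, numTiles * sizeof(Tile));
    cudaMemset(d_tiles, 0, numTiles * sizeof(Tile));
    renderTiles<<<(numTiles + 255) / 256, 256>>>(d_tiles, numTiles);
    cudaDeviceSynchronize();
    cudaFree(d_tiles);
    return 0;
}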
Title: Re: Rendering via CUDA
Post by: penang on November 02, 2009, 03:08:26 AM
Quote from: jo on October 09, 2009, 01:29:49 AM
I think there is still some settling down needed before TG2 could be made to use the GPU for final rendering. TG2 uses double precision floating point numbers extensively. GPGPU stuff seems to only just be settling down to fully supporting single precision floating point and double precision is still a bit rudimentary. There also needs to be a clear leader in an API which looks like it will stick around and make it worth investing in. CUDA is still restricted to NVIDIA cards as far as I'm aware.

To answer the double precision: ATI has a GPU that does double precision. http://ati.amd.com/products/streamprocessor/specs.html

Cards are already on the market, made by AMD itself. Very expensive for the moment, more than 900 dollars for a card with that GPU and 2GB of GDDR5.

To answer CUDA being restricted to Nvidia cards: http://www.maximumpc.com/article/news/cuda_running_a_radeon

But there is an alternative, OpenCL, which AMD backs, as well as Intel, Nvidia, IBM, Samsung and many more.

So back to the question: Will we be able to offload Terragen's rendering to GPU if the GPU can do double precision?
Title: Re: Rendering via CUDA
Post by: PG on November 02, 2009, 06:13:54 AM
Well, the GeForce GTX 2xx series can support double precision, with the 192-core achieving 30 FPU/s with 64-bit units. My 216-core can do 37. Plus they're about £100 :D
Still waiting for word from nVidia on the capability of CUDA for this kind of thing.
Title: Re: Rendering via CUDA
Post by: PorcupineFloyd on November 02, 2009, 06:47:26 AM
It's still a matter of coding / porting it so it'll work with CUDA or ATI's version. I really wonder what good OpenCL will bring.
Title: Re: Rendering via CUDA
Post by: PG on November 02, 2009, 04:06:35 PM
I don't know a massive amount about OpenCL, does anyone know how it uses threads for GPGPU? CUDA up to 2.3 runs one function on a given number of threads and performs the operations on differing values for each core, with 3.0 it will allow multiple functions to be run. As far as I've read up, OpenCL and Stream are different but I couldn't find anything specific.
Title: Re: Rendering via CUDA
Post by: penang on November 03, 2009, 01:15:28 AM
Since we don't have the code base of TG2, (almost) no one can know whether it can be ported to CUDA / OpenCL.

However, the availability of GPUs doing double precision FP means a lot.

Refer to this page --- http://en.wikipedia.org/wiki/AMD_FireStream#AMD_stream_processing_lineup

Take, for instance, the ATI FireStream 9270 (or HD 4870).

It has 800 stream cores clocked at 750 MHz, and it can run a maximum of 16,384 threads, all in parallel.

Compare this to the best Intel (desktop) CPU of today: the i7-960, with 4 cores, clocking at 3.46 GHz.

Do the math and you will come to the realization that, for real BANG for your buck, GPUs really outshine CPUs.

Even if we halve the performance of GPUs for performing double precision FP, and halve the performance of GPUs again, just for fun, the net result is still mind boggling ---- the HD 4870 is still capable of rendering pictures using double precision FP at 10X the speed of the i7-960!
Title: Re: Rendering via CUDA
Post by: PorcupineFloyd on November 03, 2009, 06:19:27 PM
Errrrr I've just stumbled upon this: http://flam4.sourceforge.net/

This is basically an Apophysis-compatible fractal flame renderer which utilizes CUDA to do the computation.

I've downloaded it, opened a single .flame file, hit "Render!" and was completely shocked. It rendered at pretty nice quality with 5 - 7 FPS. And then I've found that it can also render flames to disk. So I've typed in resolution of 3600 x 2400 and quality of 2500 and hit render. After like 2 - 5 minutes it was rendered and I had to pick up my jaw from the floor.
Last week I was rendering the same flame in Apophysis and it took like 3 or 4 hours to complete.

Now imagine that Terragen could benefit from it. In one way or another. Maybe just parts of rendering if the whole thing cannot be done but you know... from 4 hours to 4 minutes - and it's just GTX 260. Why bother about i7 then?
Title: Re: Rendering via CUDA
Post by: rcallicotte on November 04, 2009, 01:32:51 PM
Can't wait to try it.  Good find!  Thanks.
Title: Re: Rendering via CUDA
Post by: penang on November 06, 2009, 06:36:03 AM
Quote from: PorcupineFloyd on November 03, 2009, 06:19:27 PM
Errrrr I've just stumbled upon this: http://flam4.sourceforge.net/

This is basically an Apophysis-compatible fractal flame renderer which utilizes CUDA to do the computation.

I've downloaded it, opened a single .flame file, hit "Render!" and was completely shocked. It rendered at pretty nice quality with 5 - 7 FPS. And then I've found that it can also render flames to disk. So I've typed in resolution of 3600 x 2400 and quality of 2500 and hit render. After like 2 - 5 minutes it was rendered and I had to pick up my jaw from the floor.
Last week I was rendering the same flame in Apophysis and it took like 3 or 4 hours to complete.

Now imagine that Terragen could benefit from it. In one way or another. Maybe just parts of rendering if the whole thing cannot be done but you know... from 4 hours to 4 minutes - and it's just GTX 260. Why bother about i7 then?
Yes, I am a user of Flam4 as well, and yes, it *is* that fast !!

Think of the cost of GPUs and then think of the cost of an i7..... Given the example I outlined in my previous message (the HD 4870's performance is 10X that of the best i7 on the market), I can only come to one conclusion -----

If the rendering function of Terragen can be offloaded to the GPU (either Nvidia or ATI or both), then the performance of Terragen would jump by at least 10X, and, most importantly, the MARKET for Terragen would expand as well !!!!

Imagine people don't have to pay for 10 very expensive i7s and can still enjoy that type of rendering speed... think of how many more people who are trying out Terragen would gladly PAY to get it !!

Ultimately it's gonna be a win-win for both the users and the owners of Terragen !
Title: Re: Rendering via CUDA
Post by: PG on November 06, 2009, 06:53:30 AM
Obviously you can imagine that Planetside don't want to get into this yet, for very good reasons: they haven't perfected the program for CPU rendering yet, so if they started on integrated GPU rendering now they'd end up making two programs simultaneously. I'm still going to advocate my distributed computing idea here. While it's uneconomical and inefficient for Planetside to start working with CUDA or Stream or indeed OpenCL now, there are those of us in the community who already have experience with it.

I'm still waiting for David from nVidia to come back with ideas on how this would best work with CUDA, I don't know anyone at ATI unfortunately, but from what he got from the dev team last time we spoke they think it should be about a five month job with a team of 2 or 3 depending on the implementation. I mentioned a batching idea to them using a similar system to BigBen's BBAST program and they reckoned about 3 months for that. Creating a project for BOINC would be another month or so.
Title: Re: Rendering via CUDA
Post by: PorcupineFloyd on November 06, 2009, 08:28:16 AM
Or maybe Terragen could use some kind of external renderer, just like Maya and 3ds Max are able to do. I've seen some projects (like "furry ball") that are in fact external GPU renderers for those platforms. This way it would be easier for anybody to program an external implementation of a rendering engine for Terragen. Or perhaps Planetside could hire another coder just for that matter (for offloading some computation onto the GPU).
It could also be a matter of offloading some parts of the rendering or workflow onto the GPU. Be it the preview, GI, or simply the computation of populations.
Title: Re: Rendering via CUDA
Post by: Oshyan on November 06, 2009, 10:31:37 PM
While we would love to take advantage of this technology soon, as PG has said the reality is that it's not really mature yet. OpenCL is the best bet since it is not specific to any one company's GPU technology, but it is still not finalized, much less widely understood or deployed.

This is also an important and very telling quote "they think it should be about a five month job with a team of 2 or 3 depending on the implementation". Now if you estimate conservatively, which is always a good idea in software development, you would probably say 6 months with 3 people. We only have 2 developers *at all* currently, and even if we were able to add another one, that would mean stopping development on everything else to concentrate on this for 6 months. Is it worth it? Questionable. There are a lot of other features that could be added in that time.

The other thing to consider is that as time goes on and these systems and APIs mature, it will get easier and faster to develop for them. What may be a 6 month, 3 person job now, could become a 3 month 2 person job in a year. That seems like a much better use of time to me.

In the end we have to choose our development targets very carefully. We have limited resources and a lot of features to work on. The more successful we are (the more licenses we are able to sell), the more we can put back into financing faster and better development, and that's something we're committed to. So tell your friends to buy TG, or better yet buy it for them for Christmas. ;)

- Oshyan
Title: Re: Rendering via CUDA
Post by: penang on November 06, 2009, 11:29:22 PM
Let me share a little bit of my programming experience:

There is always an endless list to do for any worthwhile project --- to streamline this, to speed up that, to add this feature, to fix that bug.... and so on.

It's a chore to keep up with all these, I tell you, but as someone who makes a living doing programming, I've learned to keep my head up by drawing up a plan.

You see, bug fixing is important, but I need to be able to sort out those bug reports and determine which bug must be fixed NOW, which bug can wait.

Same as features.

There are features that would be nice to have, but wouldn't add much to the whole program. These I put in the "to do" section of the list.

But then, there are things that would add A LOT to the program. These I put in the "urgent" section of the list.

Speaking of Terragen, offloading the rendering part to GPU fits this description.

Nvidia doesn't have any GPU that can do double precision yet (maybe they will have one available by 2011) but ATI does, now.

ATI doesn't have CUDA, but it does have other tools available for programmers. Please refer to this page ( http://developer.amd.com/GPU/Pages/default.aspx )

There are SDKs available which can help programmers offload their computations onto ATI's GPUs.

Maybe the guys behind Terragen can take a look-see into what ATI has on offer, and maybe give it a spin.

My estimate of a 10-times speedup via an ATI 4870 GPU, versus the use of a 4-core i7 CPU, turns out to be very conservative.

A friend of mine who does video programming told me that by offloading double-precision calculations to a 4870 GPU he consistently gets over 40 times the speed of the best i7 on the market. He also told me that in some cases he has managed to tune the application to get almost a 60-times speed boost.

40 - 60 times !!! Can you imagine that?

But even if Terragen gets a 10 times speed boost, I would already be very happy !

And btw, I use the 4870 as the example since it's been on the market since July of last year and a lot of people are using it.

Do you know that the new 5870 (which is still in short supply right now) has TWICE the stream processors of the 4870? Instead of 800 stream processors (4870), the 5870 packs 1,600 stream processors !

Which means, by next year, people using the 5870 will get at least a 20-times speed boost (with a possibility of up to a 120-times speed boost !!!) if Terragen can offload its rendering to the GPU.

Which means, something that normally takes four hours to render can be completed in just two minutes.

How many of you would pick your jaws from the floor, if that happens?


:)

Quote from: Oshyan on November 06, 2009, 10:31:37 PM
While we would love to take advantage of this technology soon, as PG has said the reality is that it's not really mature yet. OpenCL is the best bet since it is not specific to any one company's GPU technology, but it is still not finalized, much less widely understood or deployed.

This is also an important and very telling quote "they think it should be about a five month job with a team of 2 or 3 depending on the implementation". Now if you estimate conservatively, which is always a good idea in software development, you would probably say 6 months with 3 people. We only have 2 developers *at all* currently, and even if we were able to add another one, that would mean stopping development on everything else to concentrate on this for 6 months. Is it worth it? Questionable. There are a lot of other features that could be added in that time.

The other thing to consider is that as time goes on and these systems and APIs mature, it will get easier and faster to develop for them. What may be a 6 month, 3 person job now, could become a 3 month 2 person job in a year. That seems like a much better use of time to me.

In the end we have to choose our development targets very carefully. We have limited resources and a lot of features to work on. The more successful we are (the more licenses we are able to sell), the more we can put back into financing faster and better development, and that's something we're committed to. So tell your friends to buy TG, or better yet buy it for them for Christmas. ;)

- Oshyan
Title: Re: Rendering via CUDA
Post by: Oshyan on November 07, 2009, 12:25:41 AM
We've been doing this for a long time, hopefully we've learned how to prioritize development tasks by now. ;)

I think your estimates of speedup are appealing, but really based on a lot of assumption. As yet no one has ported an existing production renderer to a GPU 1:1. There have been versions of CPU renderers converted to GPU renderers, with *similar* features, *similar* output, etc. (e.g. Vray RT and others), but as yet I haven't seen a successful CPU to GPU direct port where the features and output are identical, at least not for any major, production rendering system. There's a good reason for this - rendering systems are highly complex. GPUs are great at some of the tasks necessary to make a fast renderer, but others have to be adapted to work the way GPUs are most efficient.

There are also very important memory considerations and potential pitfalls, e.g. right now people run into memory issues with TG2 scenes on machines with 4+GB of RAM and 64 bit OSs. Now imagine trying to render on a graphics card with 1GB (the current max for any normal consumer-level card) of RAM. Of course there are various ways around this limitation, but it's just an example to show that it's not as simple as "just do it on the graphics card, it'll be faster".

The long and the short of it is that, while the potential speedups are exciting, a lot of research would need to be done just to see how feasible it would be and what kind of speedup might be possible in practice. Then there is the actual implementation time. We're definitely keeping an eye on these technologies and will take advantage of them if and when we can do so to greatest effect. For the time being we have a lot of headroom in multithreading efficiency, caching, new rendering methods for objects (coming in the next release), and other areas that will be far more widely supported and available sooner.

- Oshyan
Title: Re: Rendering via CUDA
Post by: Kadri on November 07, 2009, 04:55:27 AM
Oshyan, last night I was nearly posting some things here like you said (not that technical, certainly).
I have a programmer friend, and when programmers say 2 months, in the end it is most of the time 2 or 3 times that length  :D
I know Lightwave and follow the other 3D programs. This CUDA / OpenCL thing is in its infancy right now. I am sure it will be seen in all of them in time.
But every team has its own schedule, I am sure. Who doesn't want rendering 10 times faster in their program? It would be a killer feature.
Don't get me wrong, these are nice things to read here and everyone has the right to say something about this, guys.
In the end what you want are good things for TG2. :D

Anyway...

But I have a question, Oshyan. Maybe in the near term we cannot see this on the rendering front in TG2. But what about the 3D preview?
Is there a chance that we could see this there first? This would be a very good feature too. Everything doesn't have to be perfect to be useful...

Cheers.

Kadri.
Title: Re: Rendering via CUDA
Post by: PorcupineFloyd on November 07, 2009, 05:30:29 AM
It would be lovely to simply have a 3D preview which utilizes all cores, and populators that don't take hours to populate a bigger area of trees or grass. Maybe the populators could easily use GPUs? It shouldn't be that hard to implement (compared to the whole renderer).

And to what Oshyan said - I'm really happy that I've bought a TG2 license. There is something special about software firms which are very small and make their products really valuable. You don't think long when you have to decide whether to spend the money or not.
Title: Re: Rendering via CUDA
Post by: PG on November 07, 2009, 04:44:27 PM
Quote from: penang on November 06, 2009, 11:29:22 PM
Nvidia doesn't have any GPU that can do double precision yet (maybe they will have one available by 2011) but ATI does, now.

The GTX 300 series has double precision and CUDA 3.0 will fully utilise this too. They also account for about 70% of users in the GPU market.
Title: Re: Rendering via CUDA
Post by: Oshyan on November 07, 2009, 07:26:16 PM
The current 3D preview uses a modified version of the normal rendering engine, so if we were able to GPU-accelerate that, we ought to have GPU acceleration for the main renderer too. It's basically the same set of problems, and hence the same issues mentioned above. However I do think multithreading for the 3D preview (and populators) would help a lot. Currently TG2 scales well to 4 and in some cases 8 threads on appropriate hardware, and the 3D preview is single-threaded. So imagine it being 4 or almost 8 times faster. ;D

- Oshyan
Title: Re: Rendering via CUDA
Post by: PorcupineFloyd on November 07, 2009, 08:24:49 PM
Yes, it's kinda straightforward that both the main renderer and the preview use the same engine, but making it use more than one thread would really make it useful :) Right now, on more complex projects I'd rather set up a render node with a quality of 0.3 that works as a preview, instead of using the preview window, because of how slow it is.
Title: Re: Rendering via CUDA
Post by: Kadri on November 08, 2009, 02:40:31 AM
Thanks , Oshyan.
It seems the next 1-2 builds will be very effective :D

Kadri.
Title: Re: Rendering via CUDA
Post by: Oshyan on November 08, 2009, 04:36:09 PM
Just to clarify, the next build will not include a multithreaded preview. But it's something we do of course want to include in the future. But yes the new object rendering method coming in the next release will be a nice improvement. :)

- Oshyan
Title: Re: Rendering via CUDA
Post by: Kadri on November 08, 2009, 04:53:35 PM
Next 3-4 build ?

Just kidding :D

Oshyan, maybe you cannot say anything about this, but is any relaxed object handling
(Jo said something about TG2's OBJ handling being more standard and other 3D programs' OBJ generation being looser)
or more file formats in the works?

Sorry, it's off topic, but since you mentioned it I wanted to ask.

Kadri.
Title: Re: Rendering via CUDA
Post by: Oshyan on November 08, 2009, 04:55:00 PM
Object handling has not yet been updated, but we plan this as a likely part of finalizing the Animation Module since it helps with interoperability.

- Oshyan
Title: Re: Rendering via CUDA
Post by: Kadri on November 08, 2009, 04:59:34 PM
Thank you Oshyan.

Kadri.
Title: Re: Rendering via CUDA
Post by: penang on November 10, 2009, 03:59:54 AM
Quote from: PG on November 07, 2009, 04:44:27 PM
Quote from: penang on November 06, 2009, 11:29:22 PM
Nvidia doesn't have any GPU that can do double precision yet (maybe they will have one available by 2011) but ATI does, now.

The GTX 300 series has double precision and CUDA 3.0 will fully utilise this too. They also account for about 70% of users in the GPU market.
Yes, GTX 300 has double precision but it is still something that no one has been able to purchase, yet.

The complications with TSMC's 40nm process might delay the deployment of the GTX 300 even further, until the middle of next year.
Title: Re: Rendering via CUDA
Post by: matrix2003 on November 10, 2009, 04:00:27 PM
InstinctTech DogFighter CudaDemo:  4000 individual objects utilizing CUDA to navigate and avoid each other!

http://www.youtube.com/watch?v=Z-gpwCspxi8

Amazing!  - Bill .
Title: Re: Rendering via CUDA
Post by: penang on November 13, 2009, 11:44:00 PM
This is the link I got from my friend who does video programming

http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx

According to my friend, the SDK, math library, kernel analyzer and powertoys are very useful to harness the power of double-precision FP of ATI's GPU.
Title: Re: Rendering via CUDA
Post by: penang on November 17, 2009, 10:31:08 PM
Just came across this --- Nvidia demoed ray-tracing using CUDA.

Info available at http://developer.nvidia.com/object/nvision08-IRT.html
Title: Re: Rendering via CUDA
Post by: penang on November 18, 2009, 06:26:18 PM
OMG !!!

Do you know that the ATI HD5870 GPU can perform 544 GFLOPS double precision ?

That's 544 billion floating point operations per second !!

Reference Pages:

http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-5000/hd-5870/Pages/ati-radeon-hd-5870-specifications.aspx

http://icrontic.com/articles/the-secret-sauces-in-atis-new-radeon-hd-5000-gpus
Title: Re: Rendering via CUDA
Post by: PG on November 19, 2009, 04:18:02 PM
Yeah most GPU's are averaging that now, the GTX 3 series is looking to push up to 1 TFLOP. FERMI, the technology behind the 3 series is an update to TESLA which are nVidia's high performance GPGPUs. The Tesla 10 series already runs at 1TFLOP but doesn't have a video output and is geared entirely towards binary calculations whereas Geforce is designed for rendering triangles (in a very, very basic explanation). It won't be far off though. GPU's are typically designed in a 'trickle up' process where consumers get the newest technology and general purpose products for intensive computation like farms and supercomputers get stuff nearly a generation old. The TESLA is based on experiments from the 8 series.
Title: Re: Rendering via CUDA
Post by: penang on November 22, 2009, 06:58:51 AM
Quote from: PG on November 19, 2009, 04:18:02 PM
Yeah most GPU's are averaging that now, the GTX 3 series is looking to push up to 1 TFLOP. FERMI, the technology behind the 3 series is an update to TESLA which are nVidia's high performance GPGPUs. The Tesla 10 series already runs at 1TFLOP but doesn't have a video output and is geared entirely towards binary calculations whereas Geforce is designed for rendering triangles (in a very, very basic explanation). It won't be far off though. GPU's are typically designed in a 'trickle up' process where consumers get the newest technology and general purpose products for intensive computation like farms and supercomputers get stuff nearly a generation old. The TESLA is based on experiments from the 8 series.
The 1 TFLOP figure given by Nvidia for their GTX 3 chips (Fermi based) is for single-precision.

The one that ATI quotes (544 GFLOPS) is double precision.

As comparison, the best i7 (965) can only churn out 70 GFLOPS, with all its cores.

Price wise, one (1) i7 (965) chip is equal to at least twenty (20) 5870 from ATI (chip versus chip comparison, doesn't include the supporting peripherals such as RAM and such).

Performance wise, one i7 (965) chip can do 70 GFLOPS. Twenty (20) 5870 chips, on the other hand, can churn out over 10 TFLOPS.

That's a ratio of 1:140  !
Title: Re: Rendering via CUDA
Post by: Mandrake on November 30, 2009, 10:06:30 PM
"The power to price ratio offered by today's GPUs makes leveraging them in tasks not related to graphics a no-brainer. Intel recognizes this and plans to leverage it with the "Larrabee" CPU+GPU on the same silicon, a move that will see the interaction between the two brains become more efficient"

A combo? Sounds great!

http://blogs.zdnet.com/hardware/?p=6289&tag=nl.e550
Title: Re: Rendering via CUDA
Post by: Cyber-Angel on December 01, 2009, 08:25:21 AM
In twenty or perhaps fewer years (twenty is the number I've seen quoted most often) we will hit the law of diminishing returns as far as silicon technology can be taken, due to the physical laws governing how small you can make a silicon pathway before the electrons start running into one another. Twenty years or so will see the end of the silicon processor and the micro scale of transistor manufacture; for continued manufacture in the semiconductor sector there would have to be a transition to nanometer-scale transistors using graphene technology.

The combined CPU/GPU on the same substrate has been touted for a number of years but no one has yet done it, so it is interesting that Intel is developing such a chip: it will be interesting to see how such a chip handles the hand-offs between CPU and GPU tasks without causing pauses and other such delays; it will also be interesting to see how it handles basic tasks like handshaking, and lastly how it will manage the thermal loading?

Regards to you.

Cyber-Angel                     
Title: Re: Rendering via CUDA
Post by: penang on December 02, 2009, 07:18:29 AM
Quote from: Cyber-Angel on December 01, 2009, 08:25:21 AM
The combined CPU/GPU on the same substrate has been touted for a number of years but no one has yet done it, so it is interesting that Intel is developing such a chip: it will be interesting to see how such a chip handles the hand-offs between CPU and GPU tasks without causing pauses and other such delays; it will also be interesting to see how it handles basic tasks like handshaking, and lastly how it will manage the thermal loading?

Regards to you.

Cyber-Angel
I think Larrabee is a mistake.

Intel may be able to push Larrabee into the market, and there may be some acceptance of it, but I think it will be a mistake.

Intel's main job is to see to it that the CPU can scale well into the 64-256 core arena, along with upping the word size from the current 64 bits to 128 and even 256-bit CPUs.

Instead of doing all that, Intel diverts its resources into developing Larrabee, which only results in less (relatively speaking) attention to Intel's own core competency, the CPU.
Title: Re: Rendering via CUDA
Post by: Kadri on December 02, 2009, 03:05:12 PM
Maybe it is a mistake... but time will tell. But they have to take such routes or others, because the CPU alone isn't enough any longer.
To sell CPUs they have to make them more valuable.
As I wrote somewhere here, ordinary programs don't need (at least for now) so many cores (4... 8... 16 and so on).
So they are searching for other ways.
I think the 5 years ahead will bring interesting things to us. CPU/GPU wars and the merging of them.
It seems Nvidia will have the toughest time... But who knows  :)

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3683&p=1

Kadri.
Title: Re: Rendering via CUDA
Post by: penang on December 04, 2009, 12:55:09 AM
Once upon a time some experts were predicting that mainframes were enough and there was no use for any type of more advanced computer.

Much later, another group of experts were saying that 640K of memory is more than enough.

Now, there is yet another group of experts telling us "ordinary programs" don't need so many CPUs.

See the pattern here?

Back in the time of monochrome, who would ever have thought of a "GPU" having 1,600 stream processors?

The most advanced "video game" back then was "Pong". Graphical simulation programs were unheard of.

Not even the best experts at that time could imagine the "ordinary" applications we are using today --- like Terragen, like Maya, like Blender, like POV-Ray, for example.
Title: Re: Rendering via CUDA
Post by: Kadri on December 04, 2009, 01:23:30 AM
No...no no Penang  ;D

I am one of the last that would think like that.
From the beginning I was always the one who wanted the most out of my computers.
And every time it seemed not enough for me. It is difficult for me to say what I mean in English.
Even the fastest computer from maybe 1000 years in the future wouldn't be enough for me.

What I am trying to say is there has to be demand from the masses for Intel, AMD, Nvidia and so on.
For people like us in 3D there is no upper limit; even real-time rendering would not be enough for us.
We would be wanting to make rendering 2-4 (and so on) times faster than real time too.
But I think you know not every program can handle 4 cores or is 64-bit right now (see not far away from this forum).
These are problems for now. And I am sure in the 5 years from now we will see many other programs
that were not practical with the old CPUs little by little pushed into the mainstream.
Like voice recognition, more interactive GUIs, stereo (3D) monitors and games, and maybe things we don't know about now...

I hinted at this with the "(at least for now)".

And consoles are hurting us in this evolution, in my opinion.

I wish I could make myself clearer... Do we understand each other better now?  :)

Edit : Capitalism will make this happen. They have to sell new things to make money. I am sure they will create the demand for it.

Cheers.

Kadri.
Title: Re: Rendering via CUDA
Post by: Oshyan on December 04, 2009, 11:37:36 PM
Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan
Title: Re: Rendering via CUDA
Post by: Kadri on December 04, 2009, 11:53:47 PM
Maybe AnandTech was right with the last paragraphs about the future:
" In recent history AMD's architectural decisions have predicted, earlier than Intel,
where the microprocessor industry was headed.
The K8 embraced 64-bit computing, a move that Intel eventually echoed some years later.
Phenom was first to migrate to the 3 level cache hierarchy that we have today, with private L2 caches.
Nehalem mimicked and improved on that philosophy.
Bulldozer appears to be similarly ahead of its time, ready for world where heterogenous CPU/GPU computing is commonplace.
I wonder if we'll see a similar architecture from Intel in a few years. "
(the same link as in the previous  page: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3683&p=1 )

Edit: http://www.anandtech.com/weblog/showpost.aspx?i=659

Kadri.
Title: Re: Rendering via CUDA
Post by: penang on December 08, 2009, 05:36:26 AM
Quote from: Oshyan on December 04, 2009, 11:37:36 PM
Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan
Hmm... I thought Intel has just decided that Larrabee is dead.

Please correct me if I am wrong.

Title: Re: Rendering via CUDA
Post by: Kadri on December 08, 2009, 06:09:53 AM
Quote from: penang on December 08, 2009, 05:36:26 AM
Quote from: Oshyan on December 04, 2009, 11:37:36 PM
Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan
Hmm... I thought Intel has just decided that Larrabee is dead.

Please correct me if I am wrong.


http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3686
Title: Re: Rendering via CUDA
Post by: penang on December 16, 2009, 06:08:19 AM
Just in case anyone is interested ...

AMD did release a doc detailing the instructions for their GPU

It's a 392-page pdf file, available from http://developer.amd.com/gpu_assets/R700-Family_Instruction_Set_Architecture.pdf

Use it to make Terragen faster, pls !
Title: The power of GPUs
Post by: penang on December 17, 2009, 02:26:18 AM
These supercomputers are GPU-based, built by the tomography group at the University of Antwerp in Belgium.

As tomography goes, it's about reconstructing images from scan data: heavy number-crunching on pictures.

Same as Terragen. :D

So, to cut the story short, they built the first version some 18 months ago, with 8 GPUs, costing a little under 4,000 Euros.

Now, they have version 2, with 13 GPUs, costing around 6,000 Euros.

Both versions outperformed the CPU-based supercomputer with 512 cores, also at the University of Antwerp.

That's the power of GPU Terragen should tap into.

Pictures:
(http://www.dvhardware.net/news/astra_nvidia_supercomputer_2.jpg)

(http://www.dvhardware.net/news/2009/fastra_2/fastra_ii_desktop_supercomputer_risers.jpg)

(http://www.dvhardware.net/news/astra_nvidia_supercomputer.jpg)

(http://www.dvhardware.net/news/2009/fastra_2/fastra_ii_desktop_supercomputer.jpg)

Benchmark:
(http://www.dvhardware.net/news/2009/fastra_2/fastra_ii_reconstruction_benchmark.gif)

Power Consumption:
(http://www.dvhardware.net/news/2009/fastra_2/fastra_ii_reconstruction_power_consumption.gif)

Energy Efficiency:
(http://www.dvhardware.net/news/2009/fastra_2/fastra_ii_energy_efficiency.gif)

For more details, please click on the following link:

http://www.dvhardware.net/articles25_fastra_2_desktop_supercomputer.html
Title: Re: Rendering via CUDA
Post by: TheBlackHole on December 17, 2009, 11:10:43 AM
Well, I LIKE the fact that TG only renders with the CPU. Then I can run Celestia and it'll render every frame insanely fast compared to TG. Celestia can use the GPU so I can fly through wormholes and galaxies while TG's trying to render a scene.
Title: Re: Rendering via CUDA
Post by: PG on December 17, 2009, 11:33:27 AM
My guess is it would be an option rather than forced.
Title: Re: Rendering via CUDA
Post by: penang on December 17, 2009, 07:33:33 PM
Quote from: TheBlackHole on December 17, 2009, 11:10:43 AM
Well, I LIKE the fact that TG only renders with the CPU. Then I can run Celestia and it'll render every frame insanely fast compared to TG. Celestia can use the GPU so I can fly through wormholes and galaxies while TG's trying to render a scene.
LOL !

And I thought I was all alone in thinking that !

Yeah, while TG2 is slowly crunching on CPUs, Flam4 is warping along on GPUs.

I like that ! :D
Title: Re: Rendering via CUDA
Post by: TheBlackHole on December 17, 2009, 07:44:36 PM
How did this thread last 2 months?
Title: Re: Rendering via CUDA
Post by: penang on December 17, 2009, 08:30:53 PM
I have found more information regarding the GPUs from ATI.

They are programming manuals, available from

http://www.x.org/docs/AMD/R6xx_R7xx_3D.pdf

and

http://ati.amd.com/developer/open_gpu_documentation.html

and

http://www.x.org/docs/AMD/R6xx_3D_Registers.pdf

There is even a disassembler for ATI's BIOS, HD4000 and up. It's called AtomDis
http://cgit.freedesktop.org/~mhopf/AtomDis/


Title: Re: Rendering via CUDA
Post by: haldun on December 22, 2009, 09:09:19 AM
Quote from: TheBlackHole on December 17, 2009, 07:44:36 PM
How did this thread last 2 months?

Hi, I am new here and I specifically registered to this forum to answer this question in this fashion:

I personally would buy Terragen THE MOMENT IT DOES GPU RENDERING :D

grtz
Haldun
Title: Re: Rendering via CUDA
Post by: penang on December 23, 2009, 07:03:47 PM
http://code.google.com/p/gpuocelot/

The link above is Ocelot, a just-in-time compiler for CUDA, allowing the same programs to be run on NVIDIA GPUs or x86 CPUs.

Check it out !
Title: GPU Rendering
Post by: penang on December 29, 2009, 01:32:12 AM
Just came across an interesting review:

http://www.brightsideofnews.com/news/2009/12/23/machstudio-pro-can-a-gpu-replace-a-cpu.aspx

Not very long, only 3 pages.

However, it is about a rendering software package that comes with a GPU card and renders using the GPU rather than the CPU.

What interests me are the following sentences:

"The scenes we tested with rendered anywhere between 10 and 20 times faster than on our powerful quad-core/octa-thread processors."

"Do note that if we would really push the details into overdrive, we would get a single 1080p frame in 10 seconds on bundled FirePro V8750, while 3.33 GHz CPU would probably take 20-30 minutes per single frame."
Title: Re: Rendering via CUDA
Post by: PeanutMocha on February 14, 2010, 09:49:06 PM
DirectCompute is Microsoft's answer to OpenCL and is part of DirectX 10 and 11: http://www.nvidia.com/object/cuda_directcompute.html

Another option to consider when the TG team feels it's the right time to implement this.

Like many others that have kept this post alive for months, I'm very interested in this (eventual) feature!
Title: Re: Rendering via CUDA
Post by: Oshyan on February 15, 2010, 04:51:00 AM
We need something that will be cross-platform, which of course Microsoft's solution isn't. ;D OpenCL is though, and seems promising enough...

In any case, while I know many other renderers are adopting these features quickly, many are also based on standard techniques that are becoming fairly well known and "easily" optimized for massively parallel GPUs (e.g. raytracers). TG2 is a different sort of beast, so it's rather more difficult to do this, and it will be some time before we're able to tackle it, if we do. That being said, I agree the possibilities are very exciting.

- Oshyan
Title: Re: Rendering via CUDA
Post by: PG on February 15, 2010, 10:02:28 AM
Well, Macs and Linux can use nVidia cards. There's a rumour of a new Mac with an equivalent of the GTX 2xx series that can run CUDA programs, so that'll be good. Well, not the Mac part, but you know what I mean. I did eventually get an email back from the nVidia techs and they said that any kind of compute API (as they called it) would be best applied through an abstraction layer, which CUDA provides already, but it means you have to write everything in C#. Not Terragen obviously, but the plugin or whatever for GPU rendering would have to be done with C#; the only way to do it with C++ is to write the access driver in assembly :D I looked into that and it's feckin hard.
I don't really know the ins and outs of how TG2 computes and renders, so they only had my lame description to go on, but they said that a single GTX 295 or 2 Tesla 10-series cards could render an example image I told them about in under a minute with the right memory optimisations, while it took 40 minutes on my quad core. This is because CUDA can allow you to edit what's on the VRAM before it starts to use it; the cache it uses for computation is used so fluidly that you can just edit the one it's finished with and chuck it straight back onto the core it was just on. Plus you can get a small secondary cache for iterative instructions to increment/decrement values, etc. in the data while it's still in the primary cache just before the next cycle. I think that might be with the access driver though.

Half of that I don't understand so if it's of interest to anyone it is probably worth downloading the CUDA guides from the developer zone at nvidia's website.
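For what it's worth, the part about "editing what's on the VRAM before it starts to use it" sounds roughly like double buffering with CUDA streams, i.e. uploading the next chunk of data while the previous one is still being processed. A minimal sketch of that idea only (my interpretation, with a hypothetical kernel; nothing from Planetside or nVidia):

#include <cuda_runtime.h>

__global__ void process(float *data, int n)      // stand-in for real rendering work
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20, chunks = 8, chunk = n / chunks;
    float *h;
    cudaMallocHost((void**)&h, n * sizeof(float));   // pinned host memory for async copies
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d[2];
    cudaMalloc((void**)&d[0], chunk * sizeof(float));
    cudaMalloc((void**)&d[1], chunk * sizeof(float));
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    // Double buffering: while the GPU processes one buffer in one stream,
    // the next chunk can already be copied into the other buffer.
    for (int c = 0; c < chunks; ++c) {
        int b = c % 2;
        cudaMemcpyAsync(d[b], h + c * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, s[b]);
        process<<<(chunk + 255) / 256, 256, 0, s[b]>>>(d[b], chunk);
        cudaMemcpyAsync(h + c * chunk, d[b], chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, s[b]);
    }
    cudaDeviceSynchronize();

    cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
    cudaFree(d[0]); cudaFree(d[1]); cudaFreeHost(h);
    return 0;
}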
Title: Re: Rendering via CUDA
Post by: Kadri on February 15, 2010, 12:09:57 PM
I don't know what the plan is for TG2 (TG3?) in general. CUDA is nice maybe, but what about ATI users?
I would wait 1-2 years for OpenCL to mature. And the different nature of TG2's rendering
(I don't know how different it is to raytrace) would be another thing to consider.

So guys, I think we won't see this happen within 2 years, if at all :)

(Hint: we may hear some more things when putting it this way ;) )

Kadri.
Title: Re: Rendering via CUDA
Post by: Oshyan on February 15, 2010, 08:28:03 PM
Kadri, I think your perspective is very realistic and sensible. I'd like to see things happen sooner of course, but there are a lot of other things to be working on. And everyone should keep in mind that almost no one is using CUDA or other GPU acceleration techniques for *final rendering*. So while a faster 3D preview would be great (remember that we have not yet multithreaded it, so it will already get faster), if it will not affect final rendering then perhaps it is not worth the tremendous time investment. But rest assured we are considering it heavily, regardless.

- Oshyan
Title: Re: Rendering via CUDA
Post by: Kadri on February 15, 2010, 08:53:57 PM
Thanks , Oshyan  :)
I know from Lightwave how much a very fast preview (and final render at the same time) renderer (FPrime: http://www.worley.com/E/Products/fprime/fprime.html)
had an impact on the user base. Something along this line and/or a GPU renderer would certainly benefit TG2. But we see from your posts here that you know this really well.
I think many users here would prefer this kind of addition to other ones. So it is not easy to choose the things to add. But I hope it is nearer than I think  ;)

Cheers.

Kadri.


Title: Re: Rendering via CUDA
Post by: Tangled-Universe on August 11, 2010, 09:31:44 AM
Thought I could bump this topic up since there has been an interesting presentation on V-ray RT CPU/GPU rendering at Siggraph 2010:

http://forums.planetside.co.uk/index.php?topic=10517.0 (http://forums.planetside.co.uk/index.php?topic=10517.0)

So in the "V-ray RT CPU/GPU" presentation the major reasons are being explained why GPU-based rendering is not feasible with TG2's rendering engine.
Title: Re: Rendering via CUDA
Post by: KirillK on November 05, 2010, 09:46:40 AM
I am using the Octane GPU renderer for "final rendering". It's quite raw yet, but ready for static renders. I previously used Terragen for in-game backgrounds and some textures, but I've now started to do that in Octane + ZBrush, especially all the rock stuff. It's a mistake to think that a GPU renderer is not ready for anything "final".
Still using Terragen for the sky though. I think Planetside should take the GPU and CUDA opportunity more seriously and hurry up.


Title: Re: Rendering via CUDA
Post by: Oshyan on November 05, 2010, 09:20:29 PM
It's not that we're not taking it seriously or don't consider it appropriate for "final" renders. It's just that TG2 rendering technology is *necessarily* very different from many other renderers. The vast majority of renderers you see being put onto GPUs are fairly traditional raytracers (albeit highly optimized in some cases), which is inherently more easily converted for massively parallel rendering. TG2 uses raytracing for some scene elements but not for terrain. Raytracing is notoriously inefficient at rendering highly displaced geometry, which is critical for TG2's landscape rendering capabilities. As yet we're not aware of anyone else doing GPU-accelerated pixel-level displacement equivalent to what TG2 does. We'd love to be the first to accomplish this, but we're not inherently GPU experts, so while it's something we're researching, we're not in a position to make any promises about if or when we would be able to take advantage of it. As previously discussed in this thread, certainly a cross-platform solution that is stable and mature is a prerequisite for us given our product is cross-platform. We are watching OpenCL with particular interest for this reason.

- Oshyan
Title: Re: Rendering via CUDA
Post by: Cyber-Angel on November 06, 2010, 01:02:40 AM
Would it be possible, theoretically at least, for Planetside to find a strategic technology partner with expertise in GPU rendering? That way the technology necessary could be developed in a more expedient fashion!

I can understand the need to develop things in house as a matter of business and technological control over what gets developed for any software product; however, I also see nothing wrong with collaboration with others, such as when a software developer doesn't have a particular skill set in house. Ultimately this spirit of collaboration is what drives (or should drive) the industry forward, and not just the bottom line.

Yes, for a small developer, things that become adopted elsewhere take time to implement; but being a small development house also presents unrivaled and unique opportunities for innovation.

From looking around different fora on the Internet, the beast that is GPU-based rendering seems to be the expectation for any render engine on the market today, or at least the expectation is high that the next version of the software in question will have it. Furthermore, this expectation (bordering on zealotry in some postings I've read) tends to indicate that GPU-accelerated rendering is expected to be in the specifications of any renderer that wants, as they put it, "to be respected".

Given this, then, which sounds to me like the finest example of "Buzzword Bingo" I have come across for quite a while other than cloud-based rendering, I would urge continued efforts into technologies integrating GPU rendering in Terragen, but would also ask that consideration be given to seeking a technology partnership with others to develop it.

Regards to you.

Cyber-Angel    
Title: Re: Rendering via CUDA
Post by: Oshyan on November 06, 2010, 01:18:14 AM
We have been exploring partnerships. The question with partnerships is always what each party gains from the deal. Working out something that is agreeable to both parties takes time and is not easy, especially when you can't just pay someone for their tech (as a small company that's not something we can afford right now).

If you can think about it from our position for a moment I think you'll understand that we're working as much as we can to add GPU rendering to our products, within the bounds of our resources, the other in-demand features, and the limits of the technology. The simplest way to see where we're coming from is this: we make our living through this company, so there is no one who is more invested in the success of our products than us. If there is a feature that can help our product gain significantly greater success, then obviously we'll try to implement it.

We're well aware of the technology landscape and keep a very close eye not only on our immediate competitors, but also on the wider computer graphics field. Matt, Lead Developer and Owner of Planetside, has extensive experience in the film special effects industry and is very familiar with the demands of high-end computer graphics professionals and production workflows, not to mention the current state of the art in rendering technology. We test and compare with other products on a regular basis, read up on new relevant graphics industry research, and ultimately we're our own worst critics as far as the capabilities of our products.

We certainly understand how exciting GPU acceleration is and how much potential it seems to offer for almost any graphics program. We want very much to be able to take advantage of it and I hope you can just trust that we're doing what we can to make that happen. In the 5 pages of this thread nothing has been said that has changed our perspective or approach to it, though it's certainly important to see how much interest there is from our customers in this feature (a lot! ;D).

- Oshyan
Title: Re: Rendering via CUDA
Post by: Jack on November 06, 2010, 02:21:39 AM
GPU acceleration isn't meant for production quality rendering atm; it's merely used for a quick preview of your project. Nvidia has the monopoly by far. OpenCL is good, but as far as I know FurryBall and SmallLux are the only renderers I know of which use it, and FurryBall using OpenCL produces very grainy images compared to its CUDA counterpart. People go on about how great Octane is, but it's still nothing compared to Mental Ray (lol, owned by nVidia) in terms of quality.

Really, I think it's not necessary atm to implement GPU acceleration. I would rather have features such as OpenGL
previewing of objects, the ability to scatter objects on top of objects, and the list goes on.........
Title: Re: Rendering via CUDA
Post by: KirillK on November 06, 2010, 11:27:19 AM
An advantage of Octane over Mental Ray is the immediate feedback you get. That's especially important with any kind of complicated, natural-looking material. Yes, Octane has no caustics and no displacement, so for some things it's not good, but for many other things it allows you to reach an almost perfect result in just a few tweaks, while with Mental Ray you would spend hours just on test renders.

I understand that displacement is a key thing in Terragen, but perhaps it would be a great feature to have an option to bake all this displacement to normal (and other) maps and to geometry (like in ZBrush), with decreasing resolution in the distance.

When people talk about production quality they seem to mean the film industry, but there is also game development, which I think is probably the biggest part of the whole CG industry already. The game industry has specific needs which are mostly ignored by software developers. You already have huge virtual worlds there, huge scopes of work, and you can't spend weeks rendering each thing.

Environment software that would allow you to create, procedurally, by nodes, an environment scene with LOD levels and texture baking, and with FBX or Collada export features, would be an immediate success in the game industry. I have been waiting for such software for years already, splitting my expectations between Houdini and Terragen.
Title: Re: Rendering via CUDA
Post by: Jack on November 06, 2010, 02:59:35 PM
Mental Ray doesn't take hours to render, lol. If you know how to tweak the massive number of settings you can render damn quick, and if you use the scripted standalone version you render even faster.
Title: Re: Rendering via CUDA
Post by: KirillK on November 06, 2010, 03:57:28 PM
I meant not the render but rather shader creation and setup, especially when you need something natural and not as simple and sterile as the arch&design materials. I have nothing against Mental Ray though.
Title: Re: Rendering via CUDA
Post by: freelancah on November 06, 2010, 04:37:23 PM
The next version of V-ray will have OpenCL support. But what I noticed was that even they weren't able to make it work with procedural displacements...
Title: Re: Rendering via CUDA
Post by: Jack on November 06, 2010, 07:32:13 PM
Quote from: freelancah on November 06, 2010, 04:37:23 PM
The next version of V-ray will have OpenCL support. But what I noticed was that even they weren't able to make it work with procedural displacements...

It's because all the companies that are getting into GPU raytracing are using the same algorithms they use for CPU rendering. What is needed are algorithms designed just for GPU acceleration, as GPUs have hundreds of cores, unlike CPUs which have a max of 12 atm? Maybe I'm wrong; I know AMD is bringing out that Bulldozer thing that has 16 cores.
Title: Re: Rendering via CUDA
Post by: freelancah on November 06, 2010, 07:44:17 PM
Yeah, I believe it has something to do with the memory required per thread or something similar.
Title: Re: Rendering via CUDA
Post by: Jack on November 06, 2010, 07:47:25 PM
give it a year or two and it'll be all worked out i think ;D
Title: Re: Rendering via CUDA
Post by: freelancah on November 06, 2010, 08:58:02 PM
I think it might take a bit more than that, but I'm still hopeful that it will happen ;) I think it will require some changes in GPU architecture, and probably game engines need to go more towards realtime raytracing to make this happen, if I understood the problems correctly. Both of these things are memory dependent, and either one could kick the development in the right direction. Perhaps we will have some consumer-level cards that can do this sometime in the near future  :P
Title: Re: Rendering via CUDA
Post by: PG on November 09, 2010, 10:33:04 AM
Even game engines aren't written to really maximise GPU technologies. The idea of so-called machothreading, with this kind of incremental form of ordering memory, was only introduced to GPUs around 2006, so most game developers are still sending buffers in the old CPU style, which is unfortunately more or less limited by DirectX.

If Microsoft teams up with nVidia and AMD to allow developers much greater control over how the buffers are stored and transferred on the GPU, then I think that'll permeate through all areas of visual software programming as it always has; 99.999% of innovation in graphics has come through gaming, so until they do it I don't think anyone else will bother. Gamers are much more likely to make the sacrifice of new hardware, software upgrades, etc. than companies that use things like Terragen or Vue.

Sorry just having a little vent.
Title: Re: Rendering via CUDA
Post by: TheBlackHole on November 09, 2010, 11:04:46 AM
Is this some kind of seasonal thread? I remember seeing this thread about this time of year last year! ;D
Title: Re: Rendering via CUDA
Post by: Zairyn Arsyn on November 09, 2010, 11:42:27 AM
i think TBH might be onto something here... :D
this thread has been active during the cooler and colder months, at least where i'm living.
Quote from: TheBlackHole on November 09, 2010, 11:04:46 AM
Is this some kind of seasonal thread? I remember seeing this thread about this time of year last year! ;D
Title: Re: Rendering via CUDA
Post by: Henry Blewer on November 09, 2010, 06:51:38 PM
It has been going on for some time.
Title: Re: Rendering via CUDA
Post by: Cyber-Angel on December 04, 2010, 11:15:50 PM
This is one of those kinds of threads that show up once in a while, and usually go round the houses and draw the same conclusions; just as this one has, then they fade into the dark of night, from whence they came; all the while people are left wondering, ever wanting to know what is happening behind the scenes: just like the boy Oliver, forever hungry until one day he takes his tiny wooden bowl, rises from the seat whence he sat and meekly says "Please sir, can I have some more".

But there is no more, not for him, nor even his kindred kind; for there is no more for them whilst those assigned by the state to care, care not, whilst that wretch and his kind starve; and their betters, of a higher class than Oliver and his kin, though lowly in rank themselves compared to the true high born, they who'd walk the halls of kings and influence nations with their wealth, they who'd get others to bend their backs to the task, the nameless, forgotten many, whilst history crowns them glorious and they take those honours for themselves.

Regards to you.

Cyber-Angel ;D
Title: Re: Rendering via CUDA
Post by: dandelO on December 05, 2010, 10:57:30 AM
:D