Way to make T4 or T3.4.0 to use GPU?

Started by Keiyentai, July 27, 2016, 07:41:21 PM


Keiyentai

Simple question: is there a way to put the render on the GPU instead of all on the CPU and its cores? Since I updated to 3.4 and the 4 beta, my CPU jumps from roughly 50°C to 80-90°C, which makes me uneasy even on a small test render at 800x600 or lower res with no special features enabled. I have a Titan X 12GB GDDR5 Maxwell (not the just-released Pascal Titan X -_- ), and using that instead of maxing out my CPU and all 8 cores at 100% would be amazing. I really don't want to have to stop using Terragen because 3.4 and the 4 beta make my CPU go into nuclear mode. Any tips would be much appreciated.
Dev Machine
CPU: i7-950
OS: Win 10 64 Pro
RAM: 24GB DDR3
GPU: eVGA GeForce GTX Titan X 12GB GDDR5
HDD: 750GB Internal, 1TB External

yossam

From my understanding, the code is written to utilize the CPU only... no way to change that, I'm afraid.

Keiyentai

That's what I thought. I was hoping it might have been added or optimized a little more, since before 3.4/4B it normally didn't make your CPU go nuts unless you cranked up the special features. Maybe something for a wishlist, devs? It would be awesome if it could take advantage of CUDA cores like some 3D apps do.

Oshyan

Making the renderer able to utilize GPU resources essentially means a total rewrite of our render engine, which is a huge, huge task. So unfortunately it's not something we'll have working in the near future. We've made big gains with TG4 in CPU rendering, and we have some ideas for use of GPU in the future, for example it could be used for an even faster realtime preview at some point. But again converting the main render engine is a bigger challenge. We can certainly see that GPU rendering is an increasingly useful approach, so we are thinking of how we can tackle it and when...

- Oshyan

Keiyentai

I understand that rewriting the entire render pipeline would be insane. It was just something that popped into my head, since even mainstream video cards can now do pretty decently at 3D creation. If there were a way to add GPU assist of some sort without a full rewrite, that would be cool. How hard would it be to optimize for multi-core/multi-threaded CPUs? Like maybe have one core render this section, and so on, unless it already does that and I'm just not noticing.

Oshyan

We take full advantage of multi-core CPUs already. We use a system of rendering individual "buckets" that each get assigned to a single render core/thread. This approach scales fairly well up to around 32 cores/threads, and we continue to work on optimization for larger numbers of threads.

- Oshyan
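
The bucket scheme Oshyan describes can be illustrated with a minimal sketch: split the frame into fixed-size tiles and hand each one to a worker thread. This is only the general technique, not Terragen's actual code; the bucket size, worker count, and `render_bucket` stand-in are hypothetical, and a real engine would use native threads and do actual shading work per bucket.

```python
# Minimal sketch of bucket-based multithreaded rendering (the general
# technique described above; NOT Terragen's actual implementation).
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 800, 600
BUCKET = 64  # bucket edge length in pixels (hypothetical default)

def make_buckets(width, height, size):
    """Split the frame into (x, y, w, h) tiles, clipping at the right/bottom edges."""
    return [(x, y, min(size, width - x), min(size, height - y))
            for y in range(0, height, size)
            for x in range(0, width, size)]

def render_bucket(bucket):
    """Stand-in for the per-bucket render job; each call runs on one worker thread."""
    x, y, w, h = bucket
    # Real work (ray tracing, shading) would happen here; we just return
    # a black tile of the right shape.
    return (bucket, [[0.0] * w for _ in range(h)])

buckets = make_buckets(WIDTH, HEIGHT, BUCKET)
with ThreadPoolExecutor(max_workers=8) as pool:  # e.g. one worker per core/thread
    results = list(pool.map(render_bucket, buckets))
```

Because each bucket is independent, adding cores mostly just drains the bucket queue faster, which is why this design scales well until the number of workers approaches the number of remaining buckets.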