AMD Open Sources Professional GPU-Optimized Photorealistic Renderer

Started by Kadri, July 29, 2016, 07:31:49 PM


Matt

I should add that we don't know how well most people's GPUs will handle very large scenes, or how easy it will be to make sure that Terragen doesn't exceed those limits. The most I can really say is that we're going to try it out.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Kadri


It is obvious that there will be difficulties, otherwise it would already be a standard way to render. So that doesn't bother me, Matt.
The thing that makes me happy is that, roughly 7 years after this first came up, you have stated something more concrete: that you will try to incorporate it in the v4 life cycle. That is enough for me to know for now :)

Dune


Kadri

Quote from: Kadri on July 29, 2016, 08:39:10 PM
Quote from: TheBadger on July 29, 2016, 08:32:18 PM
Quote from: Kadri on July 29, 2016, 08:21:20 PM

Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.

Yeah, not going to hold my breath on a new render engine for TG.

Me too, but who knows :)
...


Yes, Ulco. That was one of the times I was really happy to be wrong  :D

mhall

AMD is also sponsoring developers to integrate this as a renderer for Blender ... they're going to pay a developer so that the Blender Foundation can incorporate it.

https://blenderartists.org/forum/showthread.php?403194-AMD-to-sponsor-two-Blender-projects&highlight=prorender

Looks like they would really like to see it get out there and into use. It would be awesome if they could somehow support integration into TG as well.

Just dreamin. :)

Oshyan

Very interesting. I'm curious to see how it compares to Cycles! (in Blender)

- Oshyan

ajcgi

Quote from: Matt on July 29, 2016, 10:53:54 PM
I should add that we don't know how well most people's GPUs will handle very large scenes, or how easy it will be to make sure that Terragen doesn't exceed those limits. The most I can really say is that we're going to try it out.

Matt

This is interesting news. I've used Redshift3D a bit, which utilises NVIDIA CUDA (so of course it doesn't run on AMD) but renders bewilderingly quickly.
One issue I've had there was hitting the memory limit: even on 4GB cards it involved a lot of caching before rendering, but after that it was suuuuuper fast. If you can get performance increases via graphics cards, most TG users will be very happy indeed.

TheBadger

@ kadri
We can only be wrong after the fact; until then I'll go with "we are right."
But yes, being wrong and getting something nice in return is a nice thing! ;D

@matt
You are always looking into great stuff! Good for you 8)
And it's good you didn't tell us too soon, otherwise the last year would have been all "are we there yet?" posts ;D
On the bright side, we've got all of next year to nag you about it, so good times for all.

Still would like to know if it's possible to daisy-chain GPUs. I saw it done on YouTube, demoed by the guy who directed Transformers, but that was not something available to the general public.

It is not unfathomable, in terms of cost, that a normal person could have 3-4 Titans to use for desktop production. Very curious, because this seems to be the future, so how can it work for John Q. Public now? Even as single GPUs become very powerful and affordable, that's all the more reason to have a bank of them.

Also, getting the GPUs out of the damn box would be great; the f-ing things get so hot I have melted two now >:( Getting them outside of the machine would keep the box much cooler on the whole. It's nice to have everything in one neat package, but how many PCs have been destroyed by heat? F-ing stupid to spend 6k on a doomed arrangement of parts.
It has been eaten.

ajcgi

Quote from: TheBadger on August 02, 2016, 01:42:40 AM
Still would like to know if it's possible to daisy-chain GPUs. I saw it done on YouTube, demoed by the guy who directed Transformers

Wouldn't trust Michael Bay with my gear. He can stay the hell away.

PabloMack

This is very interesting stuff to me. I've been following parallel processing and AMD's GPU progress for years. In 1988 I started designing what is turning out to be a parallel programming language. It is a shame that the software tools for parallel programming are still very primitive. There were some design decisions made by Dennis Ritchie, who developed the C programming language back in the 1970s, that make it a poor starting point for a parallel programming language. C++ is no better in that regard because it is basically a superset of C.

The two most popular "languages" for parallel programming on the PC/GPU platform are CUDA (from NVIDIA) and OpenCL (a standard promoted by AMD/ATI). I have looked at the OpenCL "Hello World" program and it is several pages long. By comparison, the C version can easily fit on one line (if you ignore the includes). My programming language is just as brief as the C version and, in fact, it simply calls the C library's printf function. But its power for parallel work is far better than anything else on the market today.

I'm waiting for the organization that provides the back end to my compiler to add support for the 64-bit environment before I can do much more development on it. I don't want to spend my life writing assemblers, linkers and library managers (or source-level debuggers, if I can help it). The parts of the tool chain that I have written are an editor, a parser and a code generator. If you have any interest in programming you can watch the tutorial I did on this subject. The voice you hear is my wife's :) https://www.youtube.com/watch?v=XBHc4SOL-Ms
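
To give a sense of the contrast PabloMack describes: the C version is essentially one printf("Hello, World!\n"); inside main(), while even a stripped-down OpenCL host program has to set up a platform, device, context, command queue, program, kernel and buffers before anything runs. The sketch below is a rough, minimal illustration of that boilerplate (it is not PabloMack's example and the kernel name copy_msg is made up for illustration); error checking is omitted, and the kernel simply copies the message through the GPU and back.

/* hello_cl.c - minimal OpenCL "Hello World", error checking omitted.
   Build (typically, on a system with an OpenCL SDK): gcc hello_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

/* Device kernel: each work-item copies one byte of the message. */
static const char *src =
    "__kernel void copy_msg(__global const char *in, __global char *out) {\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = in[i];\n"
    "}\n";

int main(void)
{
    const char msg[] = "Hello, World!";
    char result[sizeof(msg)] = {0};
    size_t len = sizeof(msg);

    /* Boilerplate: pick a platform and device, build a context and queue. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Compile the kernel source at runtime and create the kernel object. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "copy_msg", NULL);

    /* Create device buffers, run one work-item per byte, read the result back. */
    cl_mem in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                len, (void *)msg, NULL);
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, len, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(in), &in);
    clSetKernelArg(kernel, 1, sizeof(out), &out);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &len, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, out, CL_TRUE, 0, len, result, 0, NULL, NULL);
    printf("%s\n", result);

    /* Cleanup. */
    clReleaseMemObject(in);
    clReleaseMemObject(out);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}

Even trimmed down like this, it takes around a dozen API calls before the printf runs, which is the verbosity being pointed at; a real program would also check every return code and release objects on every error path.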