Saw this on the Lightwave forum.
Interesting:
http://www.amd.com/en-us/press-releases/Pages/amd-open-sources-2016jul25.aspx
The 3ds Max plugin is out. Others will follow later. Hmm... thinking of downloading a demo of Max.
3ds Max users, anyone ready to try it and comment on this?
http://developer.amd.com/tools-and-sdks/graphics-development/radeonpro/radeon-prorender-technology/
https://radeon-prorender.github.io/
Any idea what the user cost will be like on this?
I only have a handheld right now, so maybe I'm not seeing it. But I had been looking at videos of banks of GPUs for rendering a while back. I was very interested in it, and now this looks like the first public availability I have seen yet. Very much want to know more, but it's deeply frustrating trying to read up on it on this tiny ass screen.
"Any idea what the user cost will be like on this?"
If you mean the price, this is free. You can download it as a plugin right now if you follow the last link.
Here it is, Michael (for 3ds Max):
https://github.com/Radeon-ProRender/max/releases
It is a beta, of course.
I am downloading the free educational version of 3ds Max right now, just out of curiosity.
Hope that version isn't restricted for this kind of plugin.
What are you up to, by the way? Haven't seen you around much lately, Michael.
Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.
"Free" ha! I'll take it ;D
Funny too, cuz my next desktop will in fact be a PC, so maybe I'll at last be able to use some of the free software that helps fill in the gaps between major applications. Sure as hell would be nice to get Max for the free plugins alone! But alas, it will take a bit more time.
Thanks for asking, Kadri :) Been working a lot, trying to get ahead a bit while the world falls apart. I check in here every few days to see what's being discussed. But my desktop died, so not much can be done for fun. Ordered a refurbished Mac to try and save all the work I have on my broken one :'(
Quote from: Kadri on July 29, 2016, 08:21:20 PM
Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.
Yeah, not going to hold my breath on a new render engine for TG.
Quote from: TheBadger on July 29, 2016, 08:32:18 PM
Quote from: Kadri on July 29, 2016, 08:21:20 PM
Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.
Yeah, not going to hold my breath on a new render engine for TG.
Me too, but who knows :)
Sorry about the computer problem. Hope you get up and running again soon, Michael.
Thanks, me too. Feel a 3D bender coming on, but no way now to feed the need.
Quote from: Kadri on July 29, 2016, 08:21:20 PM
Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.
We're going to trial integration of FireRays into Terragen at some point during 4.x, I think. This would just be a replacement of the ray-geometry intersection part of the renderer, similar to how TG4 Beta currently uses Embree for this purpose.
FireRays is one of the components that goes into ProRender (or FireRender, depending on where you get your info!). Integrating ProRender would be a bigger job and we're not planning to do that.
Matt
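To give a rough picture of what "just a replacement of the ray-geometry intersection part" can mean, here is a minimal sketch in C++. None of it is Terragen's actual code or the real Embree/FireRays API; the RayBackend interface and the class names are invented purely for illustration. The idea is that shading, lighting, atmosphere and so on never call the intersection library directly, so moving from one tracing library to another only touches the backend class.

```cpp
// Hypothetical sketch: a renderer that isolates ray-geometry intersection
// behind a small interface, so the backend (e.g. an Embree-based one today,
// a FireRays-based one later) can be swapped without touching the rest of
// the renderer. All names here are invented for illustration.
#include <cstdio>
#include <memory>
#include <vector>

struct Ray { float origin[3]; float dir[3]; float tMax; };
struct Hit { bool found; float t; unsigned primId; };

// The only part that would change when switching intersection libraries.
class RayBackend {
public:
    virtual ~RayBackend() = default;
    virtual void build(const std::vector<float>& vertices,
                       const std::vector<unsigned>& indices) = 0;  // build acceleration structure
    virtual Hit  intersect(const Ray& ray) const = 0;              // closest hit query
};

// Placeholder CPU backend (stands in for an Embree-based implementation).
class CpuBackend : public RayBackend {
public:
    void build(const std::vector<float>&, const std::vector<unsigned>&) override {}
    Hit  intersect(const Ray&) const override { return {false, 0.0f, 0u}; }
};

// The rest of the renderer only ever sees the RayBackend interface.
void renderPixel(const RayBackend& tracer, const Ray& primary) {
    Hit hit = tracer.intersect(primary);
    if (hit.found) std::printf("hit primitive %u at t=%f\n", hit.primId, hit.t);
}

int main() {
    std::unique_ptr<RayBackend> tracer = std::make_unique<CpuBackend>();
    tracer->build({}, {});                          // geometry omitted in this sketch
    renderPixel(*tracer, {{0, 0, 0}, {0, 0, 1}, 1e30f});
}
```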
Wow! Very nice to hear, Matt. Thanks.
http://www.amd.com/en-gb/innovations/software-technologies/firepro-graphics/firerays
I saw you edited your post a little, Matt. Just for clarification: will we still get some kind of GPU acceleration?
If the integration works out, yes. FireRays uses OpenCL to take advantage of both CPU and GPU, and is apparently very carefully optimised for various combinations of devices.
Matt
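For anyone curious what "uses OpenCL to take advantage of both CPU and GPU" looks like at the lowest level, here is a small generic sketch (not FireRays or Terragen code) that just lists every OpenCL device on the machine and its type. An OpenCL-based tracer can then create contexts and queues on whichever combination of those devices it decides to use.

```cpp
// Sketch only: enumerate OpenCL platforms and devices. An OpenCL-based
// tracer can create a context/queue per device and split work across them.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint numDevices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_TYPE, sizeof(type), &type, nullptr);
            std::printf("%s (%s)\n", name,
                        (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                        (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
}
```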
Nice :) Will it work for both the 3D preview and the main render engine?
That's my hope.
Thanks Matt.
I should add that we don't know how well most people's GPUs will handle very large scenes, or how easy it will be to make sure that Terragen doesn't exceed those limits. The most I can really say is that we're going to try it out.
Matt
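On the "very large scenes" caveat: the practical limit is device memory. Purely as an illustration (again, nothing from Terragen or FireRays, and sceneFitsOnDevice is a made-up helper), an application can at least ask an OpenCL device how much memory it exposes and how big a single allocation may be, then compare that against an estimate of the scene's footprint before committing to the GPU path.

```cpp
// Sketch: read the memory limits of an OpenCL device and decide whether an
// (estimated) scene would fit. No error handling, illustration only.
#include <CL/cl.h>
#include <cstdio>

bool sceneFitsOnDevice(cl_device_id device, cl_ulong estimatedSceneBytes) {
    cl_ulong globalMem = 0, maxAlloc = 0;
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(globalMem), &globalMem, nullptr);
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(maxAlloc), &maxAlloc, nullptr);
    std::printf("global mem: %llu MB, max single alloc: %llu MB\n",
                (unsigned long long)(globalMem >> 20),
                (unsigned long long)(maxAlloc >> 20));
    // Leave generous headroom: the driver, framebuffer and other apps also
    // use device memory, so don't plan on filling it completely.
    return estimatedSceneBytes < globalMem / 2;
}

int main() {
    cl_platform_id platform = nullptr;
    cl_device_id device = nullptr;
    clGetPlatformIDs(1, &platform, nullptr);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        std::printf("no GPU device found\n");
        return 0;
    }
    // Pretend the scene needs ~3 GB of buffers (made-up number).
    bool fits = sceneFitsOnDevice(device, 3ull << 30);
    std::printf(fits ? "should fit\n" : "too big for this device\n");
}
```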
It is obvious that there will be difficulties, otherwise it would already be a standard way to render. So that doesn't bother me, Matt.
The thing that makes me happy is that, after this has come up over probably the last 7 years or so, you have stated something more concrete: that you will try to incorporate it within the v4 life cycle. That is enough for me to know for now :)
Sounds really good. I love this progress!
Quote from: Kadri on July 29, 2016, 08:39:10 PM
Quote from: TheBadger on July 29, 2016, 08:32:18 PM
Quote from: Kadri on July 29, 2016, 08:21:20 PM
Curious how much of a help this would be for Matt, if he has the time
and determination to add GPU rendering to Terragen, or for anyone who can use an SDK.
Yeah, not going to hold my breath on a new render engine for TG.
Me too, but who knows :)
...
Yes, Ulco. That was one of the times I was really happy to be wrong :D
AMD is also sponsoring developers to integrate this as a renderer for Blender ... so they're going to pay a developer so that the Blender Foundation can put it in.
https://blenderartists.org/forum/showthread.php?403194-AMD-to-sponsor-two-Blender-projects&highlight=prorender
Looks like they would really like to see it get out there in use. Would be awesome if they could somehow support integration into TG as well.
Just dreamin. :)
Very interesting. I'm curious to see how it compares to Cycles! (in Blender)
- Oshyan
Quote from: Matt on July 29, 2016, 10:53:54 PM
I should add that we don't know how well most people's GPUs will handle very large scenes, or how easy it will be to make sure that Terragen doesn't exceed those limits. The most I can really say is that we're going to try it out.
Matt
This is interesting news. I've used RedShift3D a bit, which uses NVIDIA CUDA and so of course doesn't run on AMD, but it renders bewilderingly quickly.
One thing I've had issues with there was hitting the memory limit; even on 4 GB cards it involved a lot of caching before rendering, but then after that it was suuuuuper fast. If you can get performance increases via graphics cards, most TG users will be very happy indeed.
@Kadri
We can only be wrong after the fact; until then I will go with "we are right."
But yes, being wrong and getting something nice in return is a nice thing! ;D
@Matt
You are always looking into great stuff! Good for you 8)
And it was good not to have told us too soon, too, otherwise the last year would have been all "are we there yet?" posts ;D
On the bright side, we have all of next year to nag you about it, so good times for all.
Still would like to know if it's possible to daisy-chain GPUs. I saw it done on YouTube, demoed by the guy who directed Transformers, but that was not something available to the general public.
It is not unfathomable, in terms of cost, that a normal person could have 3-4 Titans to use for desktop production. Very curious, cuz this seems to be the future, so how can it work for John Q. Public now? Even as single GPUs become very powerful and affordable, that's all the more reason to have a bank of them.
Also, getting the GPUs out of the damn box would be great; the f-ing things get so hot I have melted two now >:( Getting them outside of the machine would keep the box much cooler on the whole. Nice to have everything in one neat package, but how many PCs have been destroyed by heat? F-ing stupid to spend 6k on a doomed arrangement of parts.
Quote from: TheBadger on August 02, 2016, 01:42:40 AM
Still would like to know if it's possible to daisy-chain GPUs. I saw it done on YouTube, demoed by the guy who directed Transformers
Wouldn't trust Michael Bay with my gear. He can stay the hell away.
This is very interesting stuff to me. I've been following parallel processing and AMD's GPU progress for years. In 1988 I started designing what is turning out to be a parallel programming language. It is a shame that the software tools for parallel programming are still very primitive. There were some bad design decisions made by Dennis Ritchie, who developed the C programming language back in the 1970s, that make it a poor choice as a starting point for a parallel programming language. C++ is no better in that regard because it is basically a superset of C.

The two most popular "languages" for parallel programming on the PC/GPU platform are CUDA (from NVIDIA) and OpenCL (a standard promoted by AMD/ATI). I have looked at the OpenCL "Hello World" program and it is several pages long. By comparison, the C version can easily fit in one line (if you ignore the includes). My programming language is just as brief as the C version and, in fact, it just calls the C library's printf function. But its power for parallel programming is far better than anything else on the market today.

I'm waiting on the organization that provides the back end to my compiler to add support for the 64-bit environment before I can do much more development on it. I don't want to spend my life writing assemblers, linkers and library managers (or source-level debuggers, if I can help it). The parts of the toolchain that I have written are an editor, a parser and a code generator. If you have any interest in programming you can watch the tutorial I did on this subject. The voice you hear is my wife's :) https://www.youtube.com/watch?v=XBHc4SOL-Ms
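To make that size comparison concrete: the C version really is one printf line, while even the most stripped-down OpenCL program needs a page of host-side setup. The sketch below is generic tutorial-style code (no error checking, not taken from any particular SDK sample), using the usual "square an array" kernel that stands in for hello-world in most OpenCL introductions.

```cpp
// A minimal-ish OpenCL host program (C API, no error checking) that runs a
// trivial kernel squaring ten numbers -- the parallel world's "hello world".
// Compare with the one-line C version:  printf("Hello, world!\n");
#include <CL/cl.h>
#include <cstdio>

static const char* kSource =
    "__kernel void square(__global int* data) {"
    "    size_t i = get_global_id(0);"
    "    data[i] = data[i] * data[i];"
    "}";

int main() {
    int data[10];
    for (int i = 0; i < 10; ++i) data[i] = i;

    // Boilerplate: platform, device, context, queue, program, kernel, buffer.
    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "square", &err);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);

    // Launch one work-item per element, then read the results back.
    clSetKernelArg(kernel, 0, sizeof(buf), &buf);
    size_t globalSize = 10;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data,
                        0, nullptr, nullptr);
    for (int i = 0; i < 10; ++i) std::printf("%d ", data[i]);
    std::printf("\n");

    // Cleanup.
    clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
}
```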