NVIDIA RTX

Started by SILENCER, August 20, 2018, 02:22:21 PM

Previous topic - Next topic

SILENCER

All the crazy things this new GPU will do are truly a huge leap forward.

So the question is: will Terragen look toward any type of GPU rendering/assistance to take advantage of this?
I'm sure the rewrite to make it happen is mind-blowing, but goddamn, Terragen's glorious lighting model rendering at Warp 9 would be a huge win.

Oshyan

No specific plans for it as-yet, but the more hardware power and software resources are available to support this kind of thing, the more likely we'll be able to do it. So we're certainly getting closer with this release.

- Oshyan

WAS

#2
If I'm not mistaken, the NVIDIA RTX runs exclusively on Volta technology, and through the Unreal Engine. See my post here: https://planetside.co.uk/forums/index.php/topic,25345.0.html

Still very much in its infancy, and it requires specific execution functions. Volta, with its new microarchitecture, has these capabilities, but I believe it is a high-end GPU unit, though supposedly the "future" of all their GPUs.

PabloMack

#3
    Quote from: WAS 8/21/2018, 2:35:46 PM
>> If I'm not mistaken, the NVIDIA RTX runs exclusively on Volta technology, and through the Unreal Engine.

Apparently, NVidia's RTX is now supported by Unity:

https://wccftech.com/unity-collaboration-nvidia-rtx-engine/

I'm starting to get very interested in both.

WAS

#4
Quote from: PabloMack on December 24, 2019, 12:54:44 PM
    Quote from: WAS 8/21/2018, 2:35:46 PM
>> If I'm not mistaken, the NVIDIA RTX runs exclusively on Volta technology, and through the Unreal Engine.

Apparently, NVidia's RTX is now supported by Unity:

https://wccftech.com/unity-collaboration-nvidia-rtx-engine/

I'm starting to get very interested in both.

Yeah I saw that.

Having played with a 2080 Ti (11GB), I can tell you that RTX is a huuuuuuge gimmick. Their "specialized" cores are more conventional than specialized; when RTX is off you can magically utilize the whole card. I already had a hunch about this when AMD revealed its approach to ray tracing without any specialized cores, just a specialized API. And I can't wait for that API to be mainstream.

When you have RTX enabled, suddenly most of the settings "important" to visual quality in your engine are grayed out and defaulted to medium/low, making your game look like crap compared to ultra settings without RTX.

It's good the technology was introduced, but as usual NVIDIA messed it up. Games utilizing it besides Tomb Raider have largely been flops, built around the technology itself, which was a terrible idea from the devs. CONTROL, one of the biggest advertisers of RTX, is really a terrible game IMO, and a flashy show with heavily exaggerated lighting and light shafts to show off the tech.

PabloMack

WAS, with all that said, it seems to me that the need for real time is what forces compromises to be made just to meet target frame rates. The guys who want to make movies, though, don't strictly need guaranteed real-time frame rates. They just want very good render times. Even one frame per second would be really good compared to packages like TG that often need many minutes or even hours per frame to render detailed scenes. But movie-makers are a low priority for these tech providers, so the controls we need will probably not even be available. Where gamers are concerned about guaranteed frames per second, I am just interested in average frames per second. The controls we want are more related to guaranteed detail and image quality, and we adjust those according to the render times we can live with. I'm sure you know all this. I'm just thinking out loud.
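The trade-off above is easy to put in numbers. Here is a back-of-the-envelope sketch (the figures are illustrative examples, not benchmarks of any actual renderer) of what a 90-second shot costs at real-time, one-frame-per-second, and typical offline per-frame rates:

```python
# Back-of-the-envelope: total render time for a short animation under
# different per-frame costs. All figures are illustrative only.

def total_render_hours(seconds_of_footage, fps, seconds_per_frame):
    """Total wall-clock hours to render a clip at a given per-frame cost."""
    frames = seconds_of_footage * fps
    return frames * seconds_per_frame / 3600.0

clip = 90   # a 90-second shot
fps = 24    # cinema frame rate

# Real-time (1/60 s per frame), the "one frame per second" target,
# and an assumed offline render at 10 minutes per frame:
for label, cost in [("real-time", 1 / 60), ("1 fps", 1.0), ("10 min/frame", 600.0)]:
    print(f"{label:>12}: {total_render_hours(clip, fps, cost):8.2f} hours")
```

Even the modest "one frame per second" target turns the shot around in under an hour, while ten minutes per frame stretches the same shot into weeks of machine time.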

bobbystahr

Quote from: PabloMack on December 24, 2019, 04:36:29 PM
WAS, with all that said, it seems to me that the need for real time is what forces compromises to be made just to meet target frame rates. The guys who want to make movies, though, don't strictly need guaranteed real-time frame rates. They just want very good render times. Even one frame per second would be really good compared to packages like TG that often need many minutes or even hours per frame to render detailed scenes. But movie-makers are a low priority for these tech providers, so the controls we need will probably not even be available. Where gamers are concerned about guaranteed frames per second, I am just interested in average frames per second. The controls we want are more related to guaranteed detail and image quality, and we adjust those according to the render times we can live with. I'm sure you know all this. I'm just thinking out loud.
Good thinking though....
something borrowed,
something Blue.
Ring out the Old.
Bring in the New
Bobby Stahr, Paracosmologist

WAS

Quote from: PabloMack on December 24, 2019, 04:36:29 PM
WAS, with all that said, it seems to me that the need for real time is what forces compromises to be made just to meet target frame rates. The guys who want to make movies, though, don't strictly need guaranteed real-time frame rates. They just want very good render times. Even one frame per second would be really good compared to packages like TG that often need many minutes or even hours per frame to render detailed scenes. But movie-makers are a low priority for these tech providers, so the controls we need will probably not even be available. Where gamers are concerned about guaranteed frames per second, I am just interested in average frames per second. The controls we want are more related to guaranteed detail and image quality, and we adjust those according to the render times we can live with. I'm sure you know all this. I'm just thinking out loud.
RTX technology is exclusively real-time. You can bake textures etc. and record that live RTX show. It's not part of any renderer, and it would immediately compromise quality. They use much better systems like Cycles, path tracing in TG, etc.

PabloMack

#8
Quote from: WAS on December 24, 2019, 07:51:48 PM
RTX technology is exclusively real-time. You can bake textures etc. and record that live RTX show. It's not part of any renderer, and it would immediately compromise quality. They use much better systems like Cycles, path tracing in TG, etc.
WAS,

It's good to know that. I appreciate the information and insight. Looks like we are stuck with CPU multi-core for the foreseeable future.

Oshyan

Actually Vray and other renderers are taking advantage of RTX cores already: https://www.chaosgroup.com/blog/v-ray-gpu-adds-support-for-nvidia-rtx
Supposedly in Cycles too: https://code.blender.org/2019/07/accelerating-cycles-using-nvidia-rtx/

- Oshyan

WAS

#10
Quote from: Oshyan on December 25, 2019, 01:52:06 PM
Actually Vray and other renderers are taking advantage of RTX cores already: https://www.chaosgroup.com/blog/v-ray-gpu-adds-support-for-nvidia-rtx
Supposedly in Cycles too: https://code.blender.org/2019/07/accelerating-cycles-using-nvidia-rtx/

- Oshyan

Granted, both of these use the API for a specific calculation function. Nothing is being rendered with RTX. It seems they are using the cores to calculate acceleration structures and intersection points. Not much different from GPU acceleration, just using the OptiX API and "RT" cores, which, as I mentioned, seem more conventional than specialized, since they operate as generic GPU cores through any other API.

With NVIDIA workstation cards being premier, it makes sense to take advantage of the API for specific things that would otherwise be implemented third-party in other APIs, as Cycles did in the past.

For actual RTX, you'd be recording live. It's inherently real-time ray tracing. That's what the API is truly for, and why Cycles and Vray aren't actually using RTX, just the API for specialized math. Which is no different from pairing a GPU with a CPU toward the same end goal: faster computation.
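For context on what "calculating intersection points" means here, below is a toy CPU-side version of the ray-triangle test that RT cores execute in fixed-function hardware (this is the standard Möller–Trumbore algorithm, not NVIDIA's actual implementation, which is not public):

```python
# Toy CPU version of the ray-triangle intersection test that RT cores
# (via OptiX/DXR) accelerate in hardware. Möller-Trumbore algorithm;
# vectors are plain (x, y, z) tuples to keep it dependency-free.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray to the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# A ray straight down the z-axis through a unit triangle in the z=1 plane:
t = ray_triangle((0.2, 0.2, 0.0), (0.0, 0.0, 1.0),
                 (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
print(t)  # 1.0
```

A renderer fires billions of these tests per frame, which is why moving just this one inner loop (plus the acceleration-structure traversal around it) into dedicated silicon pays off even when everything else stays on the general-purpose cores.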

Oshyan

I'm not exactly clear what you mean when you say "RTX" then. You mean the specific RTX functionality as accessed through some particular API (e.g. DirectX)? In other words strictly how it is implemented in games?

- Oshyan

WAS

Quote from: Oshyan on December 25, 2019, 03:17:36 PM
I'm not exactly clear what you mean when you say "RTX" then. You mean the specific RTX functionality as accessed through some particular API (e.g. DirectX)? In other words strictly how it is implemented in games?

- Oshyan
I edited my post to make that clear. RTX is real-time ray tracing. The API is using the GPU to render. What's being done here is using the API to calculate for a specific end goal, which itself isn't RTX.

This is just using the OptiX API and doing some speedy calcs to accelerate rendering in Cycles/Vray.

WAS

#13
It seems the benefit here is software-based calcs done on the CPU versus hardware-based calcs on the GPU, meant to be done on RT cores.

To put it simply: Cycles is still the renderer and is still rendering; Vray is still the renderer and is still rendering. TG's path tracer or whatever would still be the renderer and still rendering. It wouldn't be RTX, which is hardware, real-time rendering on the GPU. That would be a whole new renderer.
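A sketch of the point above: the renderer owns shading and integration, while the intersection backend (a naive CPU loop here, hardware RT cores via OptiX in Cycles/Vray) is a swappable accelerator. All names and the `Slab` primitive are hypothetical, not any real renderer's API:

```python
# The "renderer stays the renderer" idea: shading logic is fixed, and
# only the nearest-hit search is a pluggable backend. Toy example.

class Slab:
    """Hypothetical primitive: a flat wall at depth z.
    Rays are 1D for simplicity: (origin_z, direction_z)."""
    def __init__(self, z, color):
        self.z, self.color = z, color

    def hit(self, ray):
        origin_z, dir_z = ray
        if dir_z == 0:
            return None
        t = (self.z - origin_z) / dir_z
        return t if t > 0 else None     # only hits in front of the ray

def cpu_intersect(ray, scene):
    """Naive linear scan for the nearest hit -- the work RT cores offload."""
    best = None
    for prim in scene:
        t = prim.hit(ray)
        if t is not None and (best is None or t < best[0]):
            best = (t, prim)
    return best

def render_pixel(ray, scene, intersect=cpu_intersect):
    """The 'renderer' proper: swapping `intersect` changes speed, not output."""
    hit = intersect(ray, scene)
    return hit[1].color if hit else (0, 0, 0)   # background color on a miss

scene = [Slab(5.0, "red"), Slab(2.0, "blue")]
print(render_pixel((0.0, 1.0), scene))  # nearest slab wins: blue
```

Passing a different `intersect` function (say, one backed by a hardware API) would speed up the search without touching `render_pixel`, which is roughly what the V-Ray and Cycles integrations linked above are doing.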

Oshyan

Arguably then you have a very specific/individual definition of "RTX":
https://developer.nvidia.com/rtx

- Oshyan