About RTP

Started by WAS, June 05, 2019, 12:28:52 AM


WAS

Who uses RTP? How does it function for you? On my AMD A10-5800K it's useless. It takes far longer to render its preview than an actual render takes. That doesn't seem in line with the system requirements at all.

Additionally, what's its real use, being so much slower than crop rendering? The only thing I really find useful about it is cloud previewing, which is especially slow on my end. I could leave an RTP going overnight and it won't be done with its passes... while the render itself would be done in about 3 hours.

Even on my Xeon it was too slow to use, and I never really understood its purpose vs the renderer.

As a note, this seems like a really good area to work on GPU rendering with TG. The GPU could handle this kind of work easily, and it would make sense, because I get additional GUI crashes when using RTP updates vs even the normal preview updates, since the CPU has to handle both the preview and the computation in real time.

raymoh

I can only agree with that!
"I consider global warming much less dangerous than global dumbing down"   (Lisa Fitz, German comedian)

Dune

It's pretty fast on my machine, but I only use it sparingly to check on procedural texturing on objects, and clouds, and sometimes the feel of an entire scene. Very useful for that! But I'm always afraid it'll crash TG when using it all the time.

cyphyr

I use it all the time (check machine specs in sig) and find it very efficient.
I don't often use it for an entire scene, though; often only a cropped area, and mostly surfaces only or atmosphere only.
Also, in more complex scenes the preview may need to be reset.
Not sure if it is using the GPU or CPU.
www.richardfraservfx.com
https://www.facebook.com/RichardFraserVFX/
/|\

Ryzen 9 5950X OC@4Ghz, 64Gb (TG4 benchmark 4:13)

Hannes

It's not too slow on my machine either. I really like it and use it in more or less the same way as Ulco and Richard.

pokoy

I'm using RTP all the time and it's super useful for lighting/atmo tweaks. Previewing clouds is hit and miss sometimes, since the voxel cache is different (and a few other things, probably), but in general it makes a huge difference for me, and TG has improved a lot with RTP in my opinion. The only thing I miss is having terrain recompute with RTP on, but that is being worked on, IIRC.

As for CPU vs GPU, I guess porting RTP to GPU is quite an effort and it could in fact end up slower than CPU once it exceeds the GPU's memory. Also, supporting all the different GPU code ecosystems (OpenCL vs CUDA) and models is not trivial, and likely impossible for small dev teams.
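The memory point above can be sketched with a toy model (all the numbers below are illustrative assumptions for the sake of argument, not measurements of TG or any real GPU): once a scene's working set exceeds VRAM, part of it has to stream over the PCIe bus at a much lower rate, and the time-weighted average throughput can drop below the CPU baseline.

```python
# Back-of-envelope model: relative render throughput (CPU = 1.0) for a
# hypothetical GPU renderer whose scene data may not fit in VRAM.
# gpu_rate and pcie_penalty are assumed, illustrative numbers.

def effective_throughput(scene_gb, vram_gb, gpu_rate=10.0, pcie_penalty=0.05):
    """Return relative throughput vs. a CPU baseline of 1.0.

    gpu_rate:     assumed GPU speedup while everything fits in VRAM.
    pcie_penalty: assumed fraction of full GPU speed for the spilled
                  portion that must stream over the PCIe bus.
    """
    if scene_gb <= vram_gb:
        return gpu_rate
    resident = vram_gb / scene_gb   # fraction of the scene served from VRAM
    spilled = 1.0 - resident        # fraction streamed from system memory
    # Harmonic (time-weighted) mix of the two access rates.
    return 1.0 / (resident / gpu_rate + spilled / (gpu_rate * pcie_penalty))

for gb in (4, 8, 16, 32):
    print(gb, "GB scene:", round(effective_throughput(gb, vram_gb=8), 2))
# 4 and 8 GB fit in an 8 GB card and keep the full 10x advantage;
# 16 GB drops to ~0.95x and 32 GB to ~0.66x, i.e. slower than the CPU.
```

With these made-up constants the crossover lands right around 2x VRAM, but the qualitative shape, a cliff once the working set spills, is the real point.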

digitalguru

I use it mostly for lighting and shader color adjustment, but on big scenes, it will eventually stop working and I can only refresh it by reloading the scene.

I'd use it more if it were more reliable.

What might be more interesting is some way to lock down the 3D viewport displacement so it doesn't update all the time (with some kind of region padding so you could move the camera about a bit), but I don't know how possible that would be.

WAS

#7
Quote from: pokoy on June 05, 2019, 07:56:49 AM
As for CPU vs GPU, I guess porting RTP to GPU is quite an effort and it could in fact end up slower than CPU once it exceeds the GPU's memory. Also, supporting all the different GPU code ecosystems (OpenCL vs CUDA) and models is not trivial, and likely impossible for small dev teams.

That's a pretty bold statement. GPUs, like CPUs, have logic for communication. OpenCL vs CUDA is just a choice: one is proprietary, one is not. They both communicate with the GPU. The difference here is that CUDA can use multiple GPUs, while OpenCL cannot (it doesn't know how).

Additionally, like any GPU game or renderer, it doesn't need to rely on only its GPU memory.

Today's GPUs are more efficient than CPUs by leaps and bounds, not only in iteration speed but also in memory usage, because they can draw on both dedicated on-board memory and the system's memory. Most GPUs people use for rendering have 6-8 GB, coupled with 16-32 GB of RAM in the machine. Workstation GPUs come with far more, such as the Vega shipping with 32 GB for Apple's workstation.

For the work TG is doing, especially moving to PT, it only makes sense to move to GPU rendering like the big boys. CPU rendering at home was really a means to fill the void SGI created with its GPU rendering, which wasn't available to consumers at home.


------------

I'm not sure what's wrong with the RTP on my CPU, then. It's absolutely worthless and doesn't live up to the program's system requirement of a quad core. My system is 3.7 GHz base, 4.2 GHz overclocked. That should be fine. For example, the "proudly CPU"-based Corona is very stable and quick on my system.

GPU rendering on my RX 480 8GB, however, is vastly quicker than Corona or TG, and in many cases with far more detail in objects and geometry.

pokoy

Quote from: WASasquatch on June 05, 2019, 11:13:59 AM
Quote from: pokoy on June 05, 2019, 07:56:49 AM
As for CPU vs GPU, I guess porting RTP to GPU is quite an effort and it could in fact end up slower than CPU once it exceeds the GPU's memory. Also, supporting all the different GPU code ecosystems (OpenCL vs CUDA) and models is not trivial, and likely impossible for small dev teams.

That's a pretty bold statement.
...
For the work TG is doing, especially moving to PT, it only makes sense to move to GPU rendering like the big boys.
...

Those big boys - Chaos Group, Arnold, Redshift, etc. - have teams of 10-20 or more people. TG would need to port every bit of the renderer - plus probably the entire node logic - to the GPU. That's a task for an entire team, and it takes a lot of time. Not to mention the support demand for all the different problems that might arise from GPU models, driver issues, etc. I just don't think it's realistic.

Still, there's some impressive GPU render tech out there, and yes, it wouldn't be bad at all.

WAS

#9
Quote from: pokoy on June 06, 2019, 07:39:05 AM
Quote from: WASasquatch on June 05, 2019, 11:13:59 AM
Quote from: pokoy on June 05, 2019, 07:56:49 AM
As for CPU vs GPU, I guess porting RTP to GPU is quite an effort and it could in fact end up slower than CPU once it exceeds the GPU's memory. Also, supporting all the different GPU code ecosystems (OpenCL vs CUDA) and models is not trivial, and likely impossible for small dev teams.

That's a pretty bold statement.
...
For the work TG is doing, especially moving to PT, it only makes sense to move to GPU rendering like the big boys.
...

Those big boys - Chaos Group, Arnold, Redshift, etc. - have teams of 10-20 or more people. TG would need to port every bit of the renderer

That's literally not an excuse. Matt is more than capable of hiring developers. It's currently his choice that he runs the show as a two-man group... Matt can do whatever he wants: hold TG back, use it as a private income source for contracts, develop it as a hobby. But that doesn't excuse what it could be for other consumers, given its potential in the industry. I've seen more complaints in here over the damn GUI than over anything actually practical to industry end-goals, with posters unknowingly admitting their novice stance.

As is, the fact that it's a CPU renderer limits its use in film because of time constraints (literally from the babe's mouth, as to why they moved to other software post-pilot). SGI saw this issue decades ago and moved appropriately to hard GPU reliance (though in the late '80s, when they were deep in research, CPUs weren't practical for rendering, even with math coprocessors; even handling the hardware was hard, which is why MIPS was used for proprietary goals).

SILENCER

#10
Ideally you'd want to be able to make a low-res camera preview in RTP. That would be boss.

The GPU side of rendering is moving at warp speed. We are using Octane on this job, and nearly ALL of our backgrounds are 16K sphericals that I make in Terragen. We've also used Gaea project data I've done on cards - single polys, I might add - for amazing terrains with VDB clouds. It renders like lightning.

WAS

#11
It seems clear that people are simply on machines much too high-end to really see the issue, and are getting more of a placebo effect.

On a machine that merely matches Terragen's system requirements, the RTP is rubbish and most certainly needs to be looked into. I'm honestly not sure why it was released without actually benchmarking it. In any situation I throw at it, it is much slower than a standard render, which is at much higher resolution and quality. See images.

Standard render is at MPD 0.5, AA 0.2

Just rendering for preview: 2.2m
RTP for preview: 3.8m (for a fraction of the quality and a smaller resolution...)

cyphyr

Each to their own way of working, of course, but I have never waited for a preview to finish.
Also, I rarely use all three rendering modes (Shaders, Visible Atmosphere, Lighting) at the same time; mostly I choose only one.
It is, after all, "only" a preview and not expected to show an end result, but rather to give a good enough approximation to judge whether a setting is working as desired or not (and there is room for improvement there, I grant, particularly with clouds).

WAS

#13
Quote from: cyphyr on June 08, 2019, 06:30:19 AM
Each to their own way of working of course but I have never waited for a preview to finish.
Also I rarely use all three rendering modes (Shaders, Visible Atmosphere, Lighting) at the same time; mostly only choosing one.
It is after all "only" a preview and not expected to show an end result but rather give a good enough approximation to judge if a setting is working as desired or not (and there is room for improvement there I grant, particularly with clouds).

Definitely to each their own. I've hardly bothered to change the node setup or disable them to check; usually they're part of the preview. Many previews I do are not for a good-enough approximation but for actual quality-control previewing. You can be very disappointed rendering a scene without actually testing it at high detail. Lower MPD settings won't give you a real idea of your surface textures and displacement, plus lighting interaction.

This is not about what it's good for, but the fact that it is not optimized for a full release. It is slower than the standard renderer, again, for far less detail; definitely a QC issue. That makes it useless for practical use on an in-spec system, as it offers no actual benefit over just rendering the whole thing or crop rendering. That's a quality and performance issue in the software.

It is not practical except to a high-end user who wouldn't even tell the difference, which is why benchmarks exist.

Oshyan

Those are the minimum system requirements. It's the minimum *that will work*. Not the minimum we recommend, not the minimum for a pleasant experience. The RTP works very well on average-to-good hardware. I have an i7-2700K in one machine - that's an 8-year-old CPU - and even there it's quite usable. It is not intended to create final render quality quickly; it's intended to create a good impression of the scene characteristics dynamically, in a minimum amount of time. And for this it works well on appropriate hardware.

We'll consider adding a note that for use of the RTP, we recommend higher-level hardware. But again, even a desktop-level Ryzen 7 1800X or so will do fine - hardly a high-end or costly CPU.

- Oshyan