Testing Top Denoisers

Started by Kadri, February 08, 2019, 09:44:50 PM


Kadri


This was a Lightwave render test with Optix.

Kadri


But my last render and your original render show the kind of noise that Optix seems to prefer, to me.
What I meant above by unfinished image noise was your original image and the Lightwave original I posted above.
But this is outside my technical knowledge. Just guessing here.

pokoy

#17
I guess the best would be to get Matt to test this. From what I heard from other developers it's pretty straightforward to implement - except for the Nvidia bureaucracy part; Intel's should be faster to get your hands on - and a renderer typically provides all the data that's needed internally. For example, you can't simply use a single (noisy) image; the tech expects a bit more than that, as was already mentioned here. To make sure TG is tested correctly, it should be done on the developer's side.

The example implementation of the command line tool referred to here may already be making a few assumptions about parameters and internal options, which is probably why it uses single images. It might be a bit different from what the tech offers per se when implemented against the renderer's internal data.
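Roughly speaking, the package a renderer hands over looks something like the sketch below; the struct and field names are purely illustrative, not from any actual SDK:

    // Hypothetical bundle of per-frame data a renderer would pass to a denoiser,
    // beyond the noisy beauty image alone. Names are illustrative only.
    struct DenoiserInputs {
        int width = 0, height = 0;
        const float* color  = nullptr; // noisy beauty pass, RGB floats per pixel
        const float* albedo = nullptr; // first-hit surface albedo, RGB floats per pixel
        const float* normal = nullptr; // first-hit shading normals, XYZ floats per pixel
    };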

EDIT
Kadri, your example from Lightwave is much more what I'd expect Optix to look like, yes.


pokoy

Some more info after asking the Corona dev team:

Optix/Intel denoisers need albedo and normal passes from the renderer (a rough usage sketch follows the lists below).

With bucket renderers (such as the final renderer in TG) there might be problems depending on what type of noise and image filtering the denoiser gets as input. Citing the developer here; I have no idea what specific problems he means.

Optix:
- Nvidia GPU only, works close to real time, meaning it updates and improves while the image is still sampling.
- Denoiser is free to use but a legal audit of the code is needed
- Nvidia can forbid its usage at any time

Intel:
- CPU only, slower than GPU of course but not slow - 1-2 seconds for an HD image
- x86 code, so it will probably run on AMD CPUs as well
- Apache license, meaning you can use it AND modify it without restrictions
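For the Intel one, the integration is basically a matter of handing those passes to the library. A minimal sketch along the lines of Intel's documented Open Image Denoise C++ API (buffer contents and image size are placeholders; a real integration would read the renderer's actual passes):

    #include <OpenImageDenoise/oidn.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        const int width = 1920, height = 1080;

        // Placeholder buffers; a renderer would fill these from its own passes.
        std::vector<float> color(width * height * 3);   // noisy beauty pass
        std::vector<float> albedo(width * height * 3);  // first-hit albedo pass
        std::vector<float> normal(width * height * 3);  // first-hit normal pass
        std::vector<float> output(width * height * 3);  // denoised result

        oidn::DeviceRef device = oidn::newDevice();
        device.commit();

        // "RT" is the generic ray tracing denoising filter.
        oidn::FilterRef filter = device.newFilter("RT");
        filter.setImage("color",  color.data(),  oidn::Format::Float3, width, height);
        filter.setImage("albedo", albedo.data(), oidn::Format::Float3, width, height);
        filter.setImage("normal", normal.data(), oidn::Format::Float3, width, height);
        filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
        filter.set("hdr", true); // the beauty pass is HDR
        filter.commit();
        filter.execute();

        const char* errorMessage;
        if (device.getError(errorMessage) != oidn::Error::None)
            std::cout << "OIDN error: " << errorMessage << std::endl;

        return 0;
    }

The albedo and normal images are optional as far as the library is concerned, but this is exactly where the extra passes mentioned above come in - without them the result tends to be much blurrier.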


Oshyan

We've tested the Intel one with the necessary passes and it just doesn't work right. It's still possible we're doing something wrong, but I'm not ruling out that these denoisers are, as we know, based on *specific training data sets*, and Terragen is not going to be in those data sets (my guess is they're all trained on straightforward path tracing-only implementations). So even though the Terragen output looks similar to us, it *may* not look like what the denoiser is expecting.

I'd love to test OptiX, even just the commandline version, but as I mentioned I don't have high-end enough hardware. If someone does, I can send them the appropriate output passes for a test. Terragen does support all the render passes needed for these (in the Pro version at least).

- Oshyan

WAS

#22
Optix, and AI denoisers, scalers, and detailers in general, have varied results because of their AI backgrounds. I wasn't able to test Optix much, but even on the same images the noise patterns created by denoising were always different. This affects pixel-level details of edges and such, and in animations I can only imagine it would translate to "chatter". Same issue with Topaz AI, or even their old non-AI denoiser.
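One rough way to put a number on that chatter is to diff consecutive denoised frames of a completely static scene; anything above zero is the denoiser re-inventing detail. A quick hypothetical sketch of the idea (not tied to any particular denoiser):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Mean absolute per-channel difference between two denoised frames of the
    // same static scene. Ideally this is ~0; denoiser "chatter" shows up as a
    // floor that wanders from frame to frame.
    float frameDifference(const std::vector<float>& frameA,
                          const std::vector<float>& frameB)
    {
        float sum = 0.0f;
        for (std::size_t i = 0; i < frameA.size() && i < frameB.size(); ++i)
            sum += std::fabs(frameA[i] - frameB[i]);
        return frameA.empty() ? 0.0f : sum / static_cast<float>(frameA.size());
    }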

I don't think denoising should really be an option unless it's very low-level, "film grain"-like noise.

Quote from: Kadri on February 11, 2019, 03:26:05 AM

This was a Lightwave render test with Optix.

I find it disconcerting that this "AI" is creating stylized edges like TopazLab, which is an entirely different feature set.