This guy is developing a renderer.

Started by Seth, March 03, 2015, 07:40:23 PM

Previous topic - Next topic

Upon Infinity

I hope I wasn't coming across the wrong way, TU.  I guess I see an attack on TG as an attack on my choices as well, as it is currently the only renderer I use.  It may not be perfect, but it is the best I've found for the work I'm presently doing (although to be honest, I haven't done much searching lately).  But I would also like to understand more of the criticisms you're levelling against it.  I'm no expert in differentiating renderers from each other or the strengths and weaknesses of each, but as I understand it, the technology behind a pure raytracer really suffers when it comes to procedural calculations, like Terragen's landscape generator?  Although, to be fair, I haven't used a lot of other renderers that much.  The reason I don't use POV-Ray anymore was mostly user-interface related.  I think I remember actually getting it to do populations at one point, but programming 3D scenes was really frustrating for me.  But if there is competition for Terragen out there that works on a raytracing engine, I suppose I haven't heard of it yet, except Vue maybe.  Or perhaps its price point was too high to be on my radar?

It's also that I saw a free program like POV-Ray just killing it on these kinds of lighting effects even before I stumbled across TG, so I just assumed most (all?) pure, unbiased raytracers could perform nearly as adequately or better, and that the mathematics behind them were basically copy and paste from a programming standpoint.  POV-Ray was a really neat program and it was my introduction to 3D, but having a non-existent UI meant it was practically unusable to me.  This Takua engine seems to me to be just another potential POV-Ray-like program.  Something that maybe a few code-heads can really make some interesting scenes with, but impractical from a user / artistic standpoint.

But also, are you saying that TG was way ahead of its time and that other tech is beginning to catch up and overlap with the things that TG does do well?  The kinds of lighting effects shown with Takua were never TG's strength, nor will they ever be.  I knew that going in.  I suppose that's the source of my defensiveness.  And I do agree that TG's progress has been snail-slow over the years.  But just when I think it's stopped progressing, they throw us a tasty bone of an upgrade.  I also agree that if pure raytracers can create terrains like TG can, then Planetside could be in some trouble.  I still see some critical weaknesses in TG, however, like importing animated objects.

I can definitely understand criticism of 3DC implementing PBR before it's standardized.  Although they are implementing it, like it or not.  I also happen to own it, so it's something I'm paying attention to.  The way they are implementing it should make it fairly easy to get Terragen to work with it, even if TG would need to be updated to do so.  I've been running experiments with PBR and Terragen, and so far they are promising.  There really is not much to differentiate them from normal textures.  I'm still confident TG can run them, if not now then in the future, with relative ease on the programming side of things.  The 3DC website mentioned that all node-based renderers should be able to implement them fairly easily.  It just might not do it "like the others".

mhaze

The problem with TG development is the size of the company!  The new plans for an SDK may well speed things up.

Tangled-Universe

#17
Quote from: Upon Infinity on March 05, 2015, 04:38:53 AM
I hope I wasn't coming across the wrong way, TU.  I guess I see an attack on TG as an attack on my choices as well, as it is currently the only renderer I use.  It may not be perfect, but it is the best I've found for the work I'm presently doing (although to be honest, I haven't done much searching lately).  But I would also like to understand more of the criticisms you're levelling against it.  I'm no expert in differentiating renderers from each other or the strengths and weaknesses of each, but as I understand it, the technology behind a pure raytracer really suffers when it comes to procedural calculations, like Terragen's landscape generator?  Although, to be fair, I haven't used a lot of other renderers that much.  The reason I don't use POV-Ray anymore was mostly user-interface related.  I think I remember actually getting it to do populations at one point, but programming 3D scenes was really frustrating for me.  But if there is competition for Terragen out there that works on a raytracing engine, I suppose I haven't heard of it yet, except Vue maybe.  Or perhaps its price point was too high to be on my radar?

There's no right or wrong in these kinds of discussions.
You can see it both ways and actually, we are seeing both ways happening here!

I can imagine you feel you want to defend your choices, the time invested etc.
I have been there too.
On the other hand, after defending my choices and the time I invested for quite a while, it turns out - at least for me - that it's getting harder and harder to either defend those choices or to justify spending time and effort on software which only seems alive by the grace of these forums and the occasional update containing features the developer previously needed for studio jobs. Not necessarily what the users are interested in.

That's a whole different discussion of course, but it is what is happening.

Regarding procedurals: just read that Reddit thread and then especially visit the links to the shadertoy website the guy posted there.
Realtime procedurals in a web browser.
That argument held for the past 5+ years, but it isn't valid anymore, especially if you're to believe the guy on Reddit, who knows a whole lot better than me what he's talking about ;)
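Most of those shadertoy terrain demos boil down to marching rays against a fractal (fBm) height function built from a cheap hash-based noise. Here's a minimal sketch of such a height function (a generic illustration of the technique, not code from any of the linked demos — the hash constants are the arbitrary ones you often see in shader code):

```python
import math

def hash2(ix, iy):
    # Deterministic pseudo-random value in [0, 1) from integer lattice coords
    # (a shadertoy-style sine hash; the constants are arbitrary).
    h = math.sin(ix * 127.1 + iy * 311.7) * 43758.5453
    return h - math.floor(h)

def value_noise(x, y):
    # Bilinearly interpolate hashed lattice values with a smoothstep fade.
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    ux = fx * fx * (3.0 - 2.0 * fx)
    uy = fy * fy * (3.0 - 2.0 * fy)
    a = hash2(ix, iy)
    b = hash2(ix + 1, iy)
    c = hash2(ix, iy + 1)
    d = hash2(ix + 1, iy + 1)
    return (a * (1 - ux) + b * ux) * (1 - uy) + (c * (1 - ux) + d * ux) * uy

def fbm_height(x, y, octaves=6, lacunarity=2.0, gain=0.5):
    # Fractal Brownian motion: sum octaves of noise, each at double the
    # frequency and half the amplitude of the previous one.
    amp, freq, total, norm = 1.0, 1.0, 0.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        norm += amp
        amp *= gain
        freq *= lacunarity
    return total / norm  # normalised to [0, 1)

h = fbm_height(1.3, 2.7)  # a deterministic height sample
```

Each extra octave adds finer detail at half the amplitude; the shadertoy versions evaluate essentially this per pixel, per frame, on the GPU, which is why they can be realtime.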

Quote
It's also that I saw a free program like POV-Ray just killing it on these kinds of lighting effects even before I stumbled across TG, so I just assumed most (all?) pure, unbiased raytracers could perform nearly as adequately or better, and that the mathematics behind them were basically copy and paste from a programming standpoint.  POV-Ray was a really neat program and it was my introduction to 3D, but having a non-existent UI meant it was practically unusable to me.  This Takua engine seems to me to be just another potential POV-Ray-like program.  Something that maybe a few code-heads can really make some interesting scenes with, but impractical from a user / artistic standpoint.

I don't know? It's in alpha 0.5 state now and as far as I know he hasn't discussed any (G)UI for this renderer. As it stands it's still command-line based.
I guess/imagine every renderer starts out in this state.

Quote
But also, are you saying that TG was way ahead of its time and that other tech is beginning to catch up and overlap with the things that TG does do well?  The kinds of lighting effects shown with Takua were never TG's strength, nor will they ever be.  I knew that going in.  I suppose that's the source of my defensiveness.  And I do agree that TG's progress has been snail-slow over the years.  But just when I think it's stopped progressing, they throw us a tasty bone of an upgrade.  I also agree that if pure raytracers can create terrains like TG can, then Planetside could be in some trouble.  I still see some critical weaknesses in TG, however, like importing animated objects.

Well, I have been advocating that opinion for quite a while, and in my previous post I summarized the answer Karl (the guy behind the renderer) gave me to my question.
Those answers kind of acknowledge my opinion.

With the current tech available online, and with the right person with the right interests (meaning landscape rendering instead of chicks/cars/buildings), that tech most definitely outperforms TG in speed and visual fidelity. I'm absolutely convinced of that, even more so now.

Yes, the shading features in those flower renders - and I especially mean things like caustics - aren't what I think TG needs straight away.
What they show, though, like all those other modern renderers, is what a modern renderer can churn out in terms of visual quality and speed.

Like I said in one of my previous posts, TG's vegetation looks mediocre at best compared to modern renderers - and that's when the sun is behind the camera. If the sun is in front of the camera and a lot of the vegetation is in shadow, then TG's vegetation looks really bad compared to those modern renderers. They are so much more accurate, for example.

Quote
I can definitely understand criticism of 3DC implementing PBR before it's standardized.  Although they are implementing it, like it or not.  I also happen to own it, so it's something I'm paying attention to.  The way they are implementing it should make it fairly easy to get Terragen to work with it, even if TG would need to be updated to do so.  I've been running experiments with PBR and Terragen, and so far they are promising.  There really is not much to differentiate them from normal textures.  I'm still confident TG can run them, if not now then in the future, with relative ease on the programming side of things.  The 3DC website mentioned that all node-based renderers should be able to implement them fairly easily.  It just might not do it "like the others".

I don't use 3DCoat, and it's not that I mind them implementing PBR. It's their choice and it doesn't bother me, since I don't use it anyway.
What I've meant all along is that many companies seem to make strange decisions, and 3DCoat seems to be on the opposite end of this compared to Planetside.
I feel 3DCoat implemented PBR "because everyone's doing it".
PBR is great, but not yet (if ever) standardized. This means you can create your model and give it beautiful shading in 3DC, then transfer it to a rendering app with its own implementation of PBR, only to find out that it doesn't match 3DC!
So to me that's a strange allocation of development resources. There are probably lots of other areas in need of improvement or revision.
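To make the mismatch concrete: the two common PBR conventions don't round-trip cleanly. Below is a hypothetical conversion from a metallic/roughness material to a specular/glossiness one, using the usual 0.04 dielectric reflectance approximation. This is purely illustrative - it is not 3DC's or any particular renderer's actual shipping conversion:

```python
def metal_rough_to_spec_gloss(base_color, metallic, roughness):
    # Common approximation: dielectrics reflect ~4% of light (F0 = 0.04),
    # metals reflect with their base colour. Blend by the metallic value.
    DIELECTRIC_F0 = 0.04
    specular = tuple(DIELECTRIC_F0 * (1.0 - metallic) + c * metallic
                     for c in base_color)
    # Metals have no diffuse component, so diffuse fades out with metallic.
    diffuse = tuple(c * (1.0 - metallic) for c in base_color)
    # Glossiness is simply the inverse of roughness in this convention.
    glossiness = 1.0 - roughness
    return diffuse, specular, glossiness

# A fully metallic red material: all colour moves into the specular slot.
d, s, g = metal_rough_to_spec_gloss((0.8, 0.2, 0.1), metallic=1.0, roughness=0.3)
```

The catch is that the inverse mapping is ambiguous - spec/gloss can express specular colours that metal/rough cannot - so a material authored under one convention and re-interpreted by another app's slightly different rules generally won't look identical, which is exactly the "doesn't match 3DC" problem.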

In that respect I can't really criticise Planetside, with very few exceptions like the object library (why oh why).

Quote from: mhaze on March 05, 2015, 06:37:14 AM
The problem with TG development is the size of the company!  The new plans for an SDK may well speed things up.

Well Mick, don't take this personally at all, but I get tired of this argument. Even if it's true, which it is.
Why?
At some point you get too old to keep using the same "excuse" for not improving on something.
You can't keep hiding behind something you're not changing. Nine years have passed by!

Yeah the SDK is interesting and may lead to interesting things. Who knows!
However, when it comes to rendering, the SDK is of limited use.
What this whole discussion is about is miles away from TG's approach to rendering.
You can write shaders with the SDK, but it won't allow you to get the same visual fidelity those modern renderers show, as far as I know.

mhaze

I know what you mean about the rendering of veg.  I'm struggling with it now.  For me the big issues, apart from the renderer, are object import, SSS, caustics and a greater range of noises to play with.  I wonder if there are any plans in the pipeline to improve the renderer and add these features?

mhaze

Oh and proper displacement on objects!

TheBadger

I am rather hopeful about the SDK too. At the very least it should allow others to work on a lot of things we need, and allow Matt to focus more on fundamental aspects of TG. I mean, procedural erosion sounds like a pretty big advantage for TG given what else TG does. And if a 3rd party can give us that, then why not better object displacement too, for example... somehow?

Matt said he is working on TG4. In that context, and with respect to this thread, wouldn't TG have to be a complete rebuild? Can't just keep adding new code to old... look at Maya. :P
It has been eaten.

Seth

https://www.youtube.com/watch?v=nwuFd5uK_xQ

I know that it's only a game engine and it's not procedural and all, but hell, wouldn't it be great to have real-time rendering like that? ;)
And guys, who in here has that production quality in a rendered animation? And if somebody could do that kind of thing, just imagine the render time!

TheBadger

#22
^^ Yeah!
I like the idea though!!!!!! I don't know, maybe I can rob a bank or something.

http://www.gamespot.com/articles/nvidia-reveals-titan-x-the-world-s-most-advanced-g/1100-6425692/
It has been eaten.

Matt

#23
Martin, why do you hate me so much ;) Talking about rendering algorithms is usually something I enjoy, but you suck the joy out of it... Oh well, let's get down to business.

Quote from: Tangled-Universe on March 04, 2015, 04:50:32 PM
The short answer confirms what we already know from various discussions before: there's no rasterized version for evaluating physically accurate and correct GI.

The whole post then boils down to this: REYES-style rendering is useless, except for calculating primary visibility of the displaced primitives.
Beyond that, REYES-style rendering is infeasible for the aforementioned reason.

Terragen uses rasterisation only for rendering primary visibility of the displaced primitives. It doesn't use it for anything else. GI, reflections and shadows all use ray tracing. It doesn't even use it for trees, by default. So I don't understand this particular issue you have with Terragen's choice to rasterise terrain. As an alpha tester you know that we have a deferred shading option which essentially makes the shading aspect of the terrain similar to a ray tracer; if you enable it, rasterisation really is just an optimisation for primary visibility, with everything else being ray traced.

Now, your real gripe is that we don't have a brute-force unbiased solution for GI and shading (physically based shading and lighting). Not yet, anyway. And that's a fair gripe. But this isn't mutually exclusive with rasterisation. Why don't we have this yet? It stems from the speed requirements of ray tracing two things in particular: procedurally-generated surfaces and procedural atmospheres. Physically based shading and lighting requires a lot more secondary and higher order rays to be fired, and the rays are usually highly incoherent. The incoherence is a problem for our current displacement engine which was designed to run on machines which have a lot less RAM than we have today while giving unprecedented detail and procedural modeling tools that are scale-independent. It is not fair to compare Terragen's displacement capabilities with those of mainstream ray tracers such as V-Ray or Arnold; even less fair to compare with a few hundred lines of code running in ShaderToy (BTW, ShaderToy is awesome). Mainstream renderers render high-poly models with moderate levels of displacement. We render procedurally defined surfaces with orders of magnitude more displacement amplitude. Some artists find that Terragen's approach provides an advantageous workflow over other apps for creating landscapes. It's not the only way, though.
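To make the split concrete, the hybrid can be sketched roughly like this (a toy stand-in, not Terragen's actual pipeline - the flat ground plane and single sphere occluder are simplifications invented purely for illustration):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def sphere_occludes(origin, direction, center, radius):
    # True if a ray from `origin` along `direction` hits the sphere
    # at a positive distance (standard ray-sphere intersection test).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False
    t = -b - math.sqrt(disc)
    return t > 1e-4

SUN_DIR = normalize((0.4, 1.0, 0.2))
OCCLUDER = ((0.0, 1.0, 0.0), 0.5)  # a sphere floating above the plane

def rasterize_primary(width, height):
    # Pass 1 (the stand-in for the rasteriser): fill a G-buffer with the
    # visible surface's world position and normal for every pixel. Only
    # primary visibility is decided here; no lighting at all.
    gbuffer = []
    for j in range(height):
        for i in range(width):
            x = (i / (width - 1)) * 4.0 - 2.0
            z = (j / (height - 1)) * 4.0 - 2.0
            gbuffer.append(((x, 0.0, z), (0.0, 1.0, 0.0)))
    return gbuffer

def shade_deferred(gbuffer):
    # Pass 2: all shading decisions are ray queries - here, one shadow
    # ray per sample towards the sun.
    image = []
    for position, normal in gbuffer:
        ndotl = max(0.0, sum(n * l for n, l in zip(normal, SUN_DIR)))
        lit = not sphere_occludes(position, SUN_DIR, *OCCLUDER)
        image.append(ndotl if lit else 0.0)
    return image

image = shade_deferred(rasterize_primary(16, 16))
```

Pass 1 only decides *what* is visible at each pixel; every lighting decision in pass 2 is a ray query, which is why in this scheme the rasteriser really is just an optimisation for primary visibility.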

There's a plan to refactor the displacement cache to optimise for the abundance of RAM on modern machines, probably next year, so that displaced primitives can be ray traced much faster. At that point we're looking forward to enabling brute force GI and then implementing physically based shading and lighting. We'll also need a way to speed up secondary rays into the sky. This could be solved by baking sky domes before rendering, or perhaps using render-time caches.

We're probably not doing it this year because we're scheduled to work on other kinds of speed enhancements in Terragen. Not just the speedups you already know about, Martin.

Quote
Furthermore he explains that REYES-style was favored in the early days when render power was expensive and lacking. Now there's plenty and it's relatively cheap.
Raytracing is economical now, and because of industry/academic effort it has outperformed REYES-style rendering in both speed and visual fidelity for a couple of years now.

I agree.

Quote
Tadaaa...

Of course I'm extremely biased about this ;) and thus "happy" to see someone who's really into this stuff kind of acknowledge my thoughts and feelings.

Ray tracing (and the family of algorithms that use it) is how we should render stuff. For years I've been saying that ray tracing is the future, if not publicly on this forum then in discussions with you on the alpha test forum. This is why alpha builds have a deferred shading option, and why this will be released with Terragen 4. Deferred shading is, basically, ray tracing with a rasterisation step for optimisation. If/when you find it to be slower than non-deferred shading, then you've found a case where rasterisation-based shading is still faster. (Rasterisation-based shading is not necessarily coupled with rasterisation-based visibility; there are lots of hybrid solutions in rendering technology, each with different tradeoffs.) But the real benefits of deferred shading (or pure ray tracing) will be realised when Terragen's sampling of lights, reflections and GI are adapted to work with the greater number of samples per pixel that are being rendered. But don't expect it to be faster and better looking all the time. If you find an application that can do all this, you should start using it!

We want to improve the micro export tools to give you the power to export high-res terrain meshes with textures, optionally with displacement maps for the finest details. I hope that we can do this for populations, too. Then you can use whatever renderer you want. Maybe you'll transfer your terrains to a renderer that has the speed and visual fidelity you want. That might be the best solution for you in future, and for many other users who prefer a particular renderer. But we're still aiming to get some of these brute force capabilities into Terragen's native renderer in the next 2-3 years.

Quote
It seems to me there's little reason, if at all, for the industry/academic field to re-invest into a REYES approach.
Heck, it seems the latest RenderMan isn't even a hybrid REYES/raytracer anymore, just pure raytracing. Hello???

I'm not investing much in REYES/rasterisation either, but Terragen is still in the process of crossing the bridge to the other side where everything will be ray traced. Probably within 2 years you will not need to rasterise anything in TG if you are allergic to the idea, and perhaps within 5 years we might be able to put the rasterisation code into retirement. That will be good because it is baggage that slows down development a little (because there are two different rendering paths whose code needs to be maintained). It would still be beneficial for optimising terrain visibility, but that benefit will be insignificant in the long term.

Terragen 2 (TGD) development started at a time when brute force GI or physically based rendering were feasible only for simpler scenes. The sheer amount of detail we could render with Terragen was almost impossible with ray tracers, even without GI, at the time. I'm talking about terrain though. Trees present problems that are better solved by ray tracing. By the way, the forests in Avatar were rendered with REYES. Those forests had the sort of complexity that I thought should be ray traced instead, but they had some clever LoD schemes that helped them, so it's hard to say whether they made the wrong choice or not. That was about the time that Terragen switched to ray tracing trees by default, and I decided not to improve LoD schemes that would only benefit rasterisation. So I'm not "re-investing in REYES" by any means.

I was recently interviewed by Seekscale for their blog. There's some overlap with this discussion, so here's the link. Hopefully it's interesting reading.

http://home.seekscale.com/blog/realistic-landscapes-generation-and-rendering-iw-with-terragen-ceo

As I look now, I see that their most recent interview talks directly about rasterization versus ray tracing, with the Embree team from Intel. Funny coincidence :) I'm going off to read that now...

http://home.seekscale.com/blog/intel-strategy-for-ray-tracing-rasterization-and-scientific-visualization

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Tangled-Universe

#24
Hi Matt,

First of all I don't hate you!  8)
I'm sorry for sucking the joy out of discussing these things for you.

But I guess seeking out contact, discussion and offering help - half a dozen times, on an extremely friendly and respectful basis - on these matters with you in private since last summer, and you not replying to me time after time - while promising me to do so! - does not help at all :(
It made me feel like a fool, ask Oshyan, and your attempt to make me look like the bad guy now is totally misplaced and as disrespectful as systematically ignoring me! At least that's how I feel.

If this makes you feel I hate you, then think again about these lines above.
You reap what you sow Matt and some self-reflection wouldn't hurt perhaps ;)

At least now you know you solely have to thank yourself for this instead of blaming me for sucking the joy out of it.

Yet you may still think I hate you, but there's Planetside Matt and Matt.
And yes, I'm deeply disappointed in how you deal with a lot of things and *especially communication*.
Hate is absolutely not the case.

By the looks of it your reply contains some answers to my long-standing questions.
Reading your reply, there are seemingly quite a lot of misinterpretations about TG's renderer, also on my side.

I need some time to think about your reply, because my first feeling is even more confusion.
I, and probably a large fraction of the community(?), was under the assumption that the majority of the rendering process was rasterized, and that deferred shading "decoupled" shading from the rasterizer to let ray tracing deal with it, similar to the leap from rasterized object rendering to RTO.
Seemingly, a lot was already happening with ray tracing instead, and this confuses me.
Consequently it makes me think: "So actually the rasterized phase of the rendering was only for visibility, and everything else was already mostly ray tracing? I thought the lack of speed and visual niceness was because of the rasterizer's limitations and not the ray tracer's? Is the situation even worse, then, than I think it is??"

You gave the answers, I just need to let it settle. All of it.

Cheers,
Martin

TheBadger

It has been eaten.

otakar

Thanks for the interview link, Matt. Fascinating. In the future, please advertise these publications more loudly! :) I can't wait for the fluvial erosion feature...


archonforest

Holy cow Matt! You were using an Amiga???
WOW, I love u man!!! (not like that :P)
Dell T5500 with Dual Hexa Xeon CPU 3Ghz, 32Gb ram, GTX 1080
Amiga 1200 8Mb ram, 8Gb ssd

Oshyan

#28
This is definitely an interesting thread. There are research and hobby renderers being developed all the time; they're often required or recommended coursework in major university graphics courses, for example. So this guy's work is interesting and he seems to be knowledgeable and reasonably forward-thinking, but it remains to be seen if he has any really new ideas (he hints at some, but nothing shown yet). Fortunately the state of the rendering industry today is such that much of the important base knowledge is readily shared or even available as open source.

I do note that this is a total redesign of his renderer, which itself has been around several years. Progress seems to have been slow until now, but his attention is back on it for the moment. These kinds of intermittent rushes of development are unsurprising for a personal/research/semi-hobby project. But anyone hoping to make any actual use of this renderer themselves is probably going to be disappointed, as his ambitions don't seem to be for public, much less commercial, use. But the more people exploring the rendering process, the better!

The Athens demo he links to in his response on Reddit is an interesting example: https://vimeo.com/71148018 I definitely recommend watching it all the way through to see some of the making-of. Note that it seems to be constructed - and possibly rendered - in layers, which certainly has some relevance to e.g. memory use, render time, etc. Other than that the workflow seems not that terribly different from TG in many respects, although we don't have automatic/progressive repopulation yet for example. But seeing things like this always makes me really wonder how well or how poorly TG might actually do in comparison.

Just showing an impressive, fairly large scale scene in another renderer does not by itself make the case that it's better than TG. We don't know per-frame render time, and scene development is another aspect that TG might have helped (not asset development, e.g. houses, of course). But there's a notable quote from Whiskeytree on their Vimeo comment thread that we should also consider:
Quote
The project had a 6 week schedule though we finished the project in 5 weeks. At the peak of asset creation we had 14 artists dedicated to the project during the first 3 weeks. Once assets and layouts were complete the crew scaled back to just a few people to finish the shot.

Now if you were just looking at the Athens scene as proof of renderers like Arnold being able to do large scale, certainly there's no question of it being feasible. In fact I believe Elysium also used Arnold for quite large scales in production for film-level work. But I'm quite curious what would happen if you put a team of 14 onto a TG project, or even take the exact same assets they generated and render them in TG. Most TG projects are the work of a single person, and if you consider for example what Ulco has been able to do *on his own*, now imagine 1 or 2 additional people helping him out, including an animator and a compositor to add that extra bit of film-like flair... I could certainly see something like this coming out of that kind of team, utilizing TG for scene development and rendering. Martin is right that TG's GI is not ideal for vegetation in some cases, but on the flip side there's better atmosphere and cloud rendering to consider.

So in the end the real question remains: if you took the exact same setups (as much as possible, avoiding procedural differences for example) and rendered them in these two systems, would one be clearly better than the other? Going a step further, if you consider the whole scene development pipeline and the number of people involved, is TG going to provide any benefits vs. working in Maya or Max with Arnold? Perhaps not... but we don't really know until someone tries it, and nobody apparently has!

Since I don't have Maya or Max, much less the expertise to properly create a 1:1 scene copy between TG and either app, I can't try it myself, but I'm genuinely very interested in the results. Obviously I have a stake in how TG fares, but if it doesn't come out ahead that's OK too. I want to know where TG falls short *in the real world*, and so does Matt, I'm sure. We both have a sense of where TG's strengths and weaknesses are, Matt more than I due to his production experience, but still I think workflow and pipeline decisions in production - i.e. whether a team will use Terragen or another app in their pipeline for these kinds of environments - are often made on an experiential or even semi-political basis.

I've actually tried several times over the years to work with artists familiar with other apps to create similar scenes and render them in TG to compare, but no one has ever stuck with it long enough to actually make a useful comparison. Some of this comes down to the fact that landscapes aren't something that many people do in other apps, even if they may be technically possible in many respects. Still, I am hopeful for some useful comparisons at some point in the (near) future. With the increasingly wide availability of instancing plugins and other related tools for apps like Maya and 3DS Max, perhaps it is a better time than ever to try it again.

Is there anyone here with reasonable familiarity in other apps who would like to try to make this happen?

- Oshyan

Matt

Quote from: Tangled-Universe on March 06, 2015, 04:36:00 AM
Hi Matt,

First of all I don't hate you!  8)
I'm sorry for sucking the joy out of discussing these things for you.

Of course I don't think you literally hate me. The wink emoticon was supposed to convey that. But the joke comes from a place of irritation with the manner in which you ask these questions (not just this one time, I'm talking about a long term pattern), so you were right to pick up on that.

I'll respond to some of the other stuff in an email.

Quote
By the looks of it your reply contains some answers to my long-standing questions.
Reading your reply, there are seemingly quite a lot of misinterpretations about TG's renderer, also on my side.

I need some time to think about your reply, because my first feeling is even more confusion.
I, and probably a large fraction of the community(?), was under the assumption that the majority of the rendering process was rasterized, and that deferred shading "decoupled" shading from the rasterizer to let ray tracing deal with it, similar to the leap from rasterized object rendering to RTO.
Seemingly, a lot was already happening with ray tracing instead, and this confuses me.
Consequently it makes me think: "So actually the rasterized phase of the rendering was only for visibility, and everything else was already mostly ray tracing? I thought the lack of speed and visual niceness was because of the rasterizer's limitations and not the ray tracer's?

It's not quite as simple as that, but mostly, yes. We could make the GI look much better with a fairly modest amount of development, but the render times with the current engine shortcomings would be pretty bad. The good news is that those engine shortcomings are on our list of things to address in the next 2 years.

Quote
Is the situation even worse, then, than I think it is??"

No, I would think your conclusion should be that the situation is better, because there are multiple possible solutions just around the corner. If you trust that we can deliver one or more of those solutions.

Matt
Just because milk is white doesn't mean that clouds are made of milk.