TG2 as a Hobby?

Started by joshbakr, January 23, 2008, 12:35:52 PM


Cyber-Angel

My understanding of the Monte Carlo method is that recent academic experiments have significantly improved its speed and performance, though at this time I don't remember the names of the researchers involved or the universities to which they belong. My understanding of rendering methods points to Metropolis Light Transport, a variant of the Monte Carlo method, which is significantly faster than pure Monte Carlo in both rendering time and CPU cycles.

Regards to you.

Cyber-Angel   

JimB

Mental Ray has an option for baking the GI/Final Gathering as a saved rendered map. The resolution of the map can be specified (low for tests, then change it to high for final), but the beauty is that you can tell it to use that map over an animation. The first frame can take some time to render, but thereafter it flies.
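The caching idea, as a toy sketch in Python (illustrative only -- this is not mental ray's actual API, and every name in it is made up):

```python
import random

gi_cache = {}  # quantized surface point -> stored irradiance

def expensive_final_gather(point):
    # Stand-in for the slow hemisphere sampling a real renderer would do.
    return sum(random.random() for _ in range(1000)) / 1000.0

def irradiance(point):
    """Return GI for a point, computing it only on a cache miss."""
    key = tuple(round(c, 2) for c in point)  # quantize to the map's resolution
    if key not in gi_cache:                  # frame 1 pays the full cost here...
        gi_cache[key] = expensive_final_gather(point)
    return gi_cache[key]                     # ...later frames mostly hit the cache
```

Frame 1 fills the map; every frame after that mostly reads from it, which is why the first frame crawls and the rest fly.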
Some bits and bobs
The Galileo Fallacy, 'Argumentum ad Galileus':
"They laughed at Galileo. They're laughing at me. Therefore I am the next Galileo."

Nope. Galileo was right for the simpler reason that he was right.

treddie

You are correct, Cyber-Angel.  In fact, Maxwell Render and FryRender use Metropolis.  But if you have ever used Maxwell or Fry, you quickly realize that you're still back in the same boat.  Unless you have access to a render farm, or at least one or more quad-core machines to network together, you are still looking at at least a 12-24 hr render for something like 3000x2000.  Of course, it all depends on whether you are using glass, caustics and so on, but it pretty much puts full-scale animation on the backburner unless you have a BIG render farm.  It appears that for the time being, Maxwell has found a niche in the viz industry for product design and architecture, because for many of those projects, the client is willing to wait an extra day or two to see a finished concept image.

JimB>  Yah, baking would be nice for static lighting, but if you want complete realism, you would have to stick to stationary light sources.  I wonder how well it would work if baking was used for all of the stationary lights, and non-baking for the moving lights, all at once.  That way the major processing time would be significantly reduced, and the renderer could concentrate on only the moving elements after frame 1.
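Something like this sketch, maybe (pure illustration in Python, no real renderer's API implied): the baked map is looked up, and only the moving lights get evaluated per frame.

```python
class BakedMap:
    """Stand-in for a precomputed lightmap: constant ambient GI here."""
    def lookup(self, point):
        return 0.2

class PointLight:
    """A moving light that must be evaluated live, every frame."""
    def __init__(self, pos, intensity):
        self.pos, self.intensity = pos, intensity
    def evaluate(self, point):
        d2 = sum((a - b) ** 2 for a, b in zip(self.pos, point))
        return self.intensity / max(d2, 1e-6)  # simple inverse-square falloff

def shade(point, baked_map, moving_lights):
    static = baked_map.lookup(point)                         # reused every frame
    dynamic = sum(l.evaluate(point) for l in moving_lights)  # recomputed per frame
    return static + dynamic

print(shade((0.0, 0.0, 0.0), BakedMap(), [PointLight((0.0, 2.0, 0.0), 1.0)]))
```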

Incidentally, I have no idea what method TG2 uses.  Metropolis, maybe?  I'm curious.

treddie

Check out this link if anyone is interested.  It's a good launching point if you want to do further web research on Metropolis (MLT), photon-mapping, etc.:
http://renderspud.blogspot.com/2006/10/biased-vs-unbiased-rendering.html

And if anyone can follow college-level math, I have some great PDFs on MLT theory.  I could send them to you if you're interested.

Cyber-Angel

I've been told in the past that TG2 uses a custom GI scheme for its renderer, based somewhat on existing methods, but I'd still love to know more! Treddie, would you happen to know the method employed by Brazil R/S? Is it traditional Monte Carlo or something else? I ask out of curiosity.

The link you provided talks about unbiased renderers. As far as I knew, only a renderer based on spectral rendering can be unbiased, as traditional RGB renderers cannot be by their intrinsic nature. Renderers of the RGB type also find it impossible to do things like polarization; spectral rendering, on the other hand, is capable of these effects and, if carefully designed, is physically accurate where renderers of the RGB type are not.

Also, the link provided talks about path tracing as the very inferior counterpart of ray tracing, which is a little odd.

Regards to you.

Cyber-Angel   

treddie

Thanks for the TG2 info there.

From what little I know (or THINK I know) about Brazil r/s, it uses Monte Carlo (not MLT) along with photon mapping and ray tracing.

Regarding spectral rendering, that's what makes Maxwell Render (and any MLT-based renderer) so cool.  It deals with the WHOLE spectrum, not just RGB.  As a result, the caustics it creates are truly amazing and faithful.  Some examples compared actual photos with Maxwell scenes duplicating the setup virtually, and given enough render time, you can hardly tell the difference.  I believe I remember someone even doing an example of the famous experiment with polarized light and 3 filters, but I might just be conjuring up a false memory there; I've gone back to the Maxwell site and can't find it.

But OH-MY-GOD...you have to wait DAYS to get a good image with caustics in Maxwell to get rid of all the noise.  Noise with MLT clears up with sharply diminishing returns: each successive unit of time sees far less noise removal than the last, so as the render progresses you see less and less improvement.  The last hours are really frustrating.  Thank god for network rendering.
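That diminishing return is just Monte Carlo's 1/sqrt(N) convergence: to cut the noise in half you need four times the samples.  A toy estimator in Python shows it (plain Monte Carlo on a simple integral, nothing renderer-specific):

```python
import random

def mc_estimate(n):
    """Monte Carlo estimate of the integral of x^2 on [0, 1]; true value 1/3."""
    return sum(random.random() ** 2 for _ in range(n)) / n

random.seed(1)
for n in (100, 400, 1600, 6400):         # quadruple the samples each step...
    err = abs(mc_estimate(n) - 1.0 / 3.0)
    print(f"n={n:5d}  error={err:.5f}")  # ...and the error only roughly halves, on average
```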

Cyber-Angel

Not to sound picky, but doesn't Maxwell just treat light as an EM wave, rather than treating light as it is, which is both a particle and a wave (the so-called duality of light)? A fully physically accurate light model would have to account for that duality. But like I said, I'm trying not to sound picky.

Regards to you.

Cyber-Angel     

king_tiger_666

Would like to see how a quad handles TG2 when multithreading comes out... is it just using 1 core on the quad right now?... running 1 instance of TG2.

My Terragen Downloads & Gallery: www.hobbies.nzaus.co.nz/

treddie

Maxwell has to be based on both the wave and particle natures of light: to do things like refraction you need wave behaviour, but to do things like shadow building (and soft-edged shadows) you need particle behaviour.

I am confused about TG2 supposedly not using multi-threading right now.  I heard them say it wasn't supported yet, but right now, I'm maxed out on both processors of my dual-core laptop, rendering a TG2 scene.

old_blaggard

You must have a time machine and have used it to steal a version from the future, because TG2 isn't multithreaded yet.  It's possible that you're running some other task that's taking up the other core.
http://www.terragen.org - A great Terragen resource with models, contests, galleries, and forums.

Will

I took a class on making your own raytracer, so I know what you mean; it gets really complex.
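The core of it fits in a few lines, and the complexity piles on from there.  A bare-bones ray-sphere intersection in Python (the heart of any toy raytracer):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t where origin + t*direction meets the
    sphere, or None.  Just the quadratic |o + t*d - c|^2 = r^2 solved for t."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                         # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0.0 else None

print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0
```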
The world is round... so you have to use spherical projection.

treddie

You might be right there.  Looking at the processors this morning (render complete), both processors are down to roughly 40%.  It's possible that since TG2 took all of one processor, the parts of other processes running on it were moved over and added to the other one.

I looked into it too, ages ago, Will, and tried to do it in BASIC.  Way slow.  And light transport methods are WAY more processor intensive than little old ray tracing.  Until I REALLY go through the MLT theory, I won't even entertain the idea of saying I know how it works.  MLT is based partly on statistical algorithms, and I never warmed up to the math of statistics, so it's not my strong point.  I imagine it will take three or four passes through the theory to get my mind around it all, and that will have to be broken up by some homework to grasp some of the basic math concepts.  I'm good with calculus and linear algebra, but statistics makes me want to yawn.  I don't know why.  UGHHH!
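From the little I HAVE absorbed so far, the statistical heart of MLT is the Metropolis-Hastings rule: mutate the current sample, then accept the mutation with probability given by the ratio of the target function at the new and old points.  A toy 1-D version in Python (sampling a simple density; real MLT mutates light paths, not numbers):

```python
import random

def target(x):
    """Unnormalized density to sample; in MLT this would be a light path's
    contribution to the image."""
    return x * x if 0.0 <= x <= 1.0 else 0.0

random.seed(1)
x, total = 0.5, 0.0
n = 100_000
for _ in range(n):
    x_new = x + random.uniform(-0.1, 0.1)  # small mutation of the current sample
    if random.random() < target(x_new) / max(target(x), 1e-12):
        x = x_new                          # accept moves toward brighter regions
    total += x

print(total / n)  # ~0.75, the mean of the normalized density 3x^2
```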

Cyber-Angel

An interesting factoid about ray tracing is that in its original incarnation it had nothing whatsoever to do with computer graphics (not until it was used for the first time for that purpose on the motion picture Tron); it was used to calculate the paths radiation could take from, say, a reactor leak or a spillage of radioactive waste.

As to light transport methods, it really depends on the implementation used; ray tracing can be as slow as a glacier if not implemented properly (Bryce 5.0 and earlier are examples). From the academic standpoint (pure theory seen in countless papers, not software you can use), there are many ways to make the existing methods faster, with some promising results, and it is more likely than not that a combination of these will produce the best results. As a caveat, I will say that making these disparate algorithms work together may be something of a challenge.

Regards to you.

Cyber-Angel   

treddie

I think you're probably right.  If 3D rendering were to rely exactly and ONLY on just how light really behaves, we would probably never get anything rendered at all.  It seems that for 3D, this is the age of optimization and really clever simulation.

cptvideo

Quote from: Cyber-Angel on February 09, 2008, 06:34:12 PM
The link you provided talks about unbiased renderers. As far as I knew, only a renderer based on spectral rendering can be unbiased, as traditional RGB renderers cannot be by their intrinsic nature. Renderers of the RGB type also find it impossible to do things like polarization; spectral rendering, on the other hand, is capable of these effects and, if carefully designed, is physically accurate where renderers of the RGB type are not.
Heya Cyber-Angel,

Here's a good, short, and pretty understandable paper on unbiased rendering:  http://www.cs.caltech.edu/~keenan/bias.pdf

DISCLAIMER:  I'm not posting this to take anything away from Maxwell, or to get into a "my renderer's better than yours" debate; I'm just posting to help clarify some things that I think are general misconceptions.  I don't intend any of this to be argumentative.

Whether a renderer is biased or not is a technical description of the rendering algorithm -- regarding how it accounts for, and where it gets, the information that the renderer uses to do a calculation.  The words "biased" and "unbiased" do not refer to the resultant image, and they don't have a lot of bearing on whether or not a renderer is "physically correct."  An important point to get here is that even if a renderer is unbiased, it may still produce incorrect images.  "Unbiased" doesn't refer to the result of the renderer, only to the algorithms used, and an algorithm only needs to stay unbiased within the realm of what it chooses to support.

In my understanding of the definition of "bias": if you have a renderer that supports glass but doesn't properly account for all light paths through that glass, you've got a biased renderer.  However, if that same renderer handles everything else rigorously but doesn't allow you to even create glass (i.e. it doesn't support glass), it can actually be an unbiased renderer.
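To make that concrete with a toy example (plain Python, not modeled on any real renderer): both estimators below attack the same integral, but the second clamps its samples the way some renderers clamp "fireflies."  It still converges -- just to the wrong number, and no amount of extra samples will fix that.  That's bias in the technical sense:

```python
import random

# Estimate the mean of f(x) = 1/sqrt(x) on (0, 1); the true value is 2.
random.seed(1)
N = 1_000_000
samples = [(1.0 - random.random()) ** -0.5 for _ in range(N)]  # x in (0, 1]

unbiased = sum(samples) / N                      # heads to 2.0 as N grows
clamped = sum(min(s, 5.0) for s in samples) / N  # clamping tames the spikes but
                                                 # converges to ~1.8, not 2.0
print(unbiased, clamped)
```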

Another important point is that it's totally possible to produce 100% correct images using biased renderers.  Generally, a renderer intentionally uses biased algorithms for performance reasons, not as a mathematical shortcut or due to misunderstandings, oversights and mistakes made by the programmers -- e.g. photon mapping is a biased algorithm, but photons are undeniably fast and they can produce correct images.

Whether a renderer is doing its calculations in RGB or some other way also has no bearing on whether it is biased or unbiased.  You could write an unbiased renderer that only calculates light intensities and produces black and white images -- again, it's a matter of whether or not everything in the scope of the simulation is accounted for in the equations.

As to spectral effects not being possible in an RGB-space renderer, that's not true.  A renderer that does the majority of its calculations in RGB can still fully support spectral effects.  This is just a guess, but I'd bet that Maxwell is actually doing the majority of its work in RGB space, but supports intelligent spectral effects (otherwise, writing new shaders for it would be a real bear).  Brazil r/s runs mostly in RGB space, but it supports spectral effects -- you can actually run glass-prism-type experiments that produce rainbow caustics and things like that.  This image shows some spectral effects via dispersion in glass:  http://brazil.mcneel.com/photos/technology/picture22.aspx
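Mechanically, that kind of hybrid isn't mysterious: sample a wavelength per ray, give it a wavelength-dependent index of refraction to drive the bending, and bin the energy back into RGB.  A toy sketch in Python (Cauchy's n = A + B/lambda^2 approximation with made-up glass-like constants; the RGB binning is deliberately crude):

```python
import random

def cauchy_ior(wavelength_nm, A=1.5, B=4500.0):
    """Cauchy's approximation n = A + B / lambda^2 (lambda in nm).
    A and B are invented, vaguely glass-like constants."""
    return A + B / wavelength_nm ** 2

def rgb_bin(wavelength_nm):
    """Crude wavelength -> RGB channel binning, purely for illustration."""
    if wavelength_nm < 490.0:
        return 2  # blue
    if wavelength_nm < 580.0:
        return 1  # green
    return 0      # red

random.seed(1)
rgb = [0.0, 0.0, 0.0]
for _ in range(10_000):
    wl = random.uniform(400.0, 700.0)  # pick a wavelength for this ray
    n = cauchy_ior(wl)                 # blue bends more than red -> dispersion
    rgb[rgb_bin(wl)] += 1.0 / n        # stand-in for the energy the ray carries

print(rgb)
```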

Quote from: Cyber-Angel on February 09, 2008, 06:34:12 PM
doesn't Maxwell just treat light as an EM wave, rather than treating light as it is, which is both a particle and a wave
If it did, you could reproduce Young's Double Slit experiment (http://en.wikipedia.org/wiki/Double-slit_experiment) - which I'm pretty sure you can't do in any production renderer out there :)
* cptvideo waves at treddie