Hi,
I just use the TG2 Tech Preview as a hobby; however, I like to print out my scenes.
For a 12"x18" print, the "Recommended Minimum" is to render at 1800x2700, and for "Excellent Quality", which I prefer, a 3600x5400 render is recommended. At TG2's current state, a render of either size would take a month. And with the GI issues, crop rendering isn't very desirable at this point. I'm just wondering if I'm being realistic in thinking this will be possible with TG2 after the next release, or the final release? I can't afford to buy or build a cutting-edge computer in the $4-7k range.
I'm also wondering which user base this program is really being developed for? At this point it doesn't appear to be users like myself who, as I said, use it just as a hobby. Or is it really being developed for pros with supercomputers and render farms, or the movie industry? I hope my questions aren't too vague, but I think most will know where I'm coming from.
PS - I suppose some of the TG defenders will be chiming in here, and I don't really care for ass-kissing responses.
What I would like is knowledgeable, honest answers, because I'm really getting tired of waiting.
Thanks,
First of all, you don't need a $4000-$7000 computer to render TG2 scenes, and 3600x5400 renders shouldn't take more than a week of non-stop rendering at high quality.
I'm not saying I'm defending TG2; I'm speaking from experience. I normally do 800x600 renders at rather high quality, and on my computer it usually takes between 2 and 10 hours maximum, depending on what the scene consists of.
my comp specs:
Core 2 Duo E6600 (2.4 GHz)
4GB Ram
820 GB HDD
640 MB XFX 8800 GTX
For TG2 renders you only need RAM and processing power! And with multi-core support coming, these render times should crank down even more.
Quote from: dhavalmistry on January 23, 2008, 12:55:37 PM
First of all, you don't need a $4000-$7000 computer to render TG2 scenes, and 3600x5400 renders shouldn't take more than a week of non-stop rendering at high quality.
I'm not saying I'm defending TG2; I'm speaking from experience. I normally do 800x600 renders at rather high quality, and on my computer it usually takes between 2 and 10 hours maximum, depending on what the scene consists of.
my comp specs:
Core 2 Duo E6600 (2.4 GHz)
4GB Ram
820 GB HDD
640 MB XFX 8800 GTX
For TG2 renders you only need RAM and processing power! And with multi-core support coming, these render times should crank down even more.
I'm speaking from experience also; I've been using a registered version of the TG Tech Preview since it was first released in Dec '06. I'm wondering if you're a registered user, and whether you've attempted rendering anything larger than 800x600?
Hi Josh
I think the question you're asking is 'will TG2 ever be fast?'. Sadly I believe the answer is no. We've been told the final release will be 'significantly faster', but to be honest I'm not holding out too much hope.
What TG2 gives us is the chance of superb quality landscapes at a really good price point, but given the nature of what it does, (calculate and render a near photorealistic surface of a landscape all the way to the horizon, including volumetric clouds) I can't see it ever being fast. I share your frustration (I'm not rich either), but TG2's nearest rival, Mojoworld is even slower - certainly for the renders I have tried to do. I have both programs, and have all but abandoned Mojo out of sheer irritation at it. The interface for TG2 is much easier to use, the application looks to have a good future, the support for XFrog plants is a huge bonus, the community here gives amazing support, and the results TG2 can produce are awesome.
But yes, it's slow. I don't know if you've searched the forum for hints and tips on speeding up renders; there have been some helpful posts.
Depending on what kind of images you want to create, you may find you can get a surprising amount out of Bryce - now a really low price at DAZ3D. Bryce renders a lot quicker (but then it does a lot less).
There's Vue Infinite of course - an amazing program, but at a steep price.
I think the only real hope for us on the horizon is that computers keep getting faster and cheaper, so maybe in say 2 years time we will both be able to afford substantially better machines.
In the meantime, I'm sticking with TG2 simply because it's the only program I can afford that even gets close to doing what I want. I guess the answer, truthfully, is 'it depends on what you want most'.
Quote from: joshbakr on January 23, 2008, 01:10:57 PM
I'm speaking from experience also; I've been using a registered version of the TG Tech Preview since it was first released in Dec '06. I'm wondering if you're a registered user, and whether you've attempted rendering anything larger than 800x600?
Yes, I do render larger than 800x600, but it has never taken longer than two days max. I render at a maximum of 1920x1200.
@joshbakr - As Oshyan has said again and again, sometimes it's just good to take a break from TG2 for a while during this process. This sort of process isn't for everyone, and I'd go so far as to say it's hardly for anyone at all. LOL.
Nevertheless, the low price point (why we're going through this right now) and the end result we expect are good reasons to be patient. It's easier to be patient by playing with something else once in a while, in my opinion. Like Silo or Call of Duty or whatever gets your mind on something else that's interesting.
Wow... I render at 2400x2400, usually at high quality settings, and my longest one was maybe 40 hours, not a month. I'd say get it.
One reason? Misery loves company... you can wait with us for the final release.
(Just yanking your chain.) Seriously, I'd go for TG2, simply because you get a great landscape tool, there's a hugely loyal customer base, and Planetside is super quick to answer questions. It's a long list... You can't model with it; it won't do greyscale or solid modelling, but it's not supposed to. It is a bit pricey, but then again Vue Infinite is more expensive, and (maybe it's just my system) its stability was horrible.
Hi J,
It really depends what you're trying to render. Scenes with lots of reflections are slow in the current release, but that is being improved for 2.0 (it will render reflections faster, more accurately, and with fewer crashes). The GI problems have been reduced for 2.0. Other changes mean that GI at high resolutions now renders a lot faster in some situations. For large renders this often results in very significant speed improvements. Scenes that use heavy volumetrics are also slow, and will probably always be slower than you want, but GI was often the culprit so you may see improvements all-round.
Multi-threading will provide a good speedup if you have more than one core or processor.
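To illustrate the idea (a toy Python sketch of bucket rendering in general, not TG2's actual code): each image tile is independent, so the work divides cleanly across however many cores you have.
[code]
# Toy illustration of bucket (tile) rendering across cores.
# NOT TG2 code -- just a sketch of why multiple cores help:
# tiles are independent, so the work divides cleanly.
from multiprocessing import Pool
import math

WIDTH, HEIGHT, TILE = 256, 256, 64

def shade(x, y):
    """Stand-in for an expensive per-pixel shading calculation."""
    return 0.5 + 0.5 * math.sin(x * 0.1) * math.cos(y * 0.1)

def render_tile(origin):
    tx, ty = origin
    return [(tx + dx, ty + dy, shade(tx + dx, ty + dy))
            for dy in range(TILE) for dx in range(TILE)]

if __name__ == "__main__":
    tiles = [(tx, ty) for ty in range(0, HEIGHT, TILE)
                      for tx in range(0, WIDTH, TILE)]
    with Pool() as pool:  # one worker per core by default
        results = pool.map(render_tile, tiles)
    print(f"rendered {sum(len(r) for r in results)} pixels in {len(tiles)} tiles")
[/code]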
This is an ongoing development. Even after 2.0 you should see incremental speed improvements as core renderer features stabilise and we are able to spend more time on optimisation.
As for target userbase... Terragen 2 has been developed with goals which are not entirely the same as TG v0.9's. That has resulted in a product which some professionals prefer over v0.9 and some don't, and the same divide exists among hobbyists. I had thought that by now we would have more than one edition of Terragen 2.0 on the market, each with different strengths and retail price, but of course that has not happened yet. I hope that in future releases after 2.0 we can give users more choice over the kind of interface they want to use and the kinds of features they value most. Render time is something that everyone wants to see reduced, professionals and hobbyists alike.
Matt
Thank You for your reply Matt.
Hey, Josh. Long time no viddy, droog.
;D
Howdie.
I assume that TG2 uses the Monte Carlo method, or its equivalent, to deal with GI. If so, rendering time will ALWAYS be slow, even when optimized. But I understand this when using programs with GI. You're talking about a s___load of numerical calculations, and whenever you have to do that (whenever there is no quick, analytical solution), even with optimization, it will still be slow.
Years ago, I played around with something I called the "Gigacube". Purely theoretical, it was a virtual volume 1 billion pixels per side, so that you could conceivably be immersed in a virtual 3D world about 20-30 miles on a side, with no pixelation visible to the viewer, in real time at 30 frames per second. And that was ONLY ray-traced, no GI. I calculated that roughly 100,000 times the present (late-1980s) processing power would be required to pull it off.
So you see, the math is what it is. We live in a particle-based universe, and to precisely model that universe you need to rely on particle-based methods, or algorithms that can SIMULATE particle behaviour. Optimization will only take you so far. The burden truly falls on the machine's processing power to take us to the next step, and I would imagine that would be heavy parallel processing in a compact form. We "ain't" there yet by a long shot, but it's coming as new technologies come to bear. 5-10 years? 20 years? Who can say, but certainly within our lifetime.
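To give a feel for the number-crunch (a toy Python sketch of generic Monte Carlo integration; I have no idea what TG2 actually uses internally): estimating even one shading point's incoming light means averaging many random samples, and a real renderer repeats something like this millions of times per frame, with actual scene visibility and radiance inside the loop.
[code]
# Minimal Monte Carlo sketch: estimate the hemisphere integral of
# cos(theta), whose exact value is pi. A GI renderer does estimates
# like this at every shading point.
import math
import random

def sample_hemisphere():
    """Uniform random direction on the unit hemisphere (z >= 0)."""
    z = random.random()                  # cos(theta)
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate(n_samples):
    pdf = 1.0 / (2.0 * math.pi)          # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        _, _, cos_theta = sample_hemisphere()
        total += cos_theta / pdf         # f(x) / pdf(x)
    return total / n_samples

for n in (16, 256, 4096):
    print(f"{n:5d} samples -> {estimate(n):.4f}  (exact: {math.pi:.4f})")
[/code]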
But back to TG2... I LOVE it. I remember back in the '60s when I dreamt so hard that I could have a COMPUTER! Like Will Robinson's little compact dealy he was playing with in episode 2. Well, here we are, and I am thankful and fortunate to live in a world where a truly lifelike scene at 3000x2000 pixels can be rendered at all. Put what you have in perspective.
For those of you with limited budgets, I can only suggest trying to find SOME way to purchase a second computer that you can let render from here to eternity without it interfering with your regular computer's use as a general-purpose machine. There are killer deals out there around $700.00 US. Not bad for 2008. Then all you need is some Zen patience, not a bad virtue to practice. And render farm prices are coming down too. The movie studios have nothing to worry about... if they feel they need 1000 machines to render some scenes, they'll fork out the bucks. And since TG2 can deliver the quality, the studios will grab it when it fits their needs.
Anyhoo....I'm out of breath.
I also just use it as a hobby ;)
I guess this is why I asked in another thread if there is, or will be, a "save incomplete render" feature. I remember when I used Bryce, I could save the scene and image file and pick up later where I left off, mainly to free up the memory for other uses.
For those of us with one "capable" TG2 computer, long renders really tie up the computer when it may be needed for work with other programs.
But wow, the terrain renders TG2 is capable of are just better than any other application I've used. Maybe multi-threading will ease some of the pain.
Treddie - good advice, and nice history.
:)
You're very welcome, Birdman.
And DEFINITELY, a "save incomplete render" would be MOST nice. I've had to cancel renders from time to time, when I REALLY needed both my machines for REAL work.
One thing for sure...TG2 has without a doubt the most amazing atmosphere system out there. Blows me away. Well worth the rendering time.
My understanding of the Monte Carlo method is that in recent experiments (pure academic research) it has been significantly improved in speed and performance terms, though at this time I don't remember the names of the researchers involved nor the universities to which they belong. My understanding of rendering methods points to Metropolis Light Transport, a variant of the Monte Carlo method, which is significantly faster than pure Monte Carlo in both rendering and CPU-cycle terms.
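For the curious, here is the core Metropolis idea in miniature (a 1-D toy sketch of Metropolis sampling, nowhere near a real MLT implementation): mutate the current sample, accept or reject based on the ratio of target values, and the chain ends up spending its time where the target is large.
[code]
# Toy Metropolis sampler: the chain concentrates samples where the
# target function is big -- in MLT terms, where the image is bright --
# instead of sampling blindly.
import random

def target(x):
    """Unnormalised density: a narrow bright peak on [0, 1]."""
    return max(0.0, 1.0 - abs(x - 0.7) * 20.0) + 0.05

def metropolis(n_steps, step=0.1):
    x = random.random()
    samples = []
    for _ in range(n_steps):
        x_new = min(1.0, max(0.0, x + random.uniform(-step, step)))
        if random.random() < target(x_new) / target(x):  # accept ratio
            x = x_new
        samples.append(x)
    return samples

random.seed(1)
samples = metropolis(100000)
near_peak = sum(1 for s in samples if abs(s - 0.7) < 0.05) / len(samples)
print(f"fraction of samples near the peak: {near_peak:.2f}")
[/code]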
Regards to you.
Cyber-Angel
Mental Ray has an option for baking the GI/Final Gathering as a saved rendered map. The resolution of the map can be specified (low for tests, then change it to high for final), but the beauty is that you can tell it to use that map over an animation. The first frame can take some time to render, but thereafter it flies.
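The principle behind that kind of baking, sketched with hypothetical names (this is not Mental Ray's actual API, just the general cache-and-reuse idea): pay for the expensive GI solve once, store it keyed by surface point, and later frames just look the values up.
[code]
# Sketch of GI baking/reuse (hypothetical names, not Mental Ray's
# API): frame 0 pays for the expensive solve; later frames hit the
# cached map, which is why they render so much faster.
import random

def expensive_gi_solve(point):
    """Stand-in for a costly irradiance calculation at a surface point."""
    random.seed(hash(point))   # deterministic per point, for the demo
    return sum(random.random() for _ in range(10000)) / 10000

gi_map = {}                    # the "baked" map

def irradiance(point):
    if point not in gi_map:    # only computed the first time
        gi_map[point] = expensive_gi_solve(point)
    return gi_map[point]

surface_points = [(x, 0) for x in range(100)]
for frame in range(3):
    total = sum(irradiance(p) for p in surface_points)
    print(f"frame {frame}: {total:.2f} "
          f"({'solved' if frame == 0 else 'cached'})")
[/code]
As the thread notes, this only holds up when the lighting is static; move a light and the cached solution is stale.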
You are correct, Cyber-Angel. In fact, Maxwell Render and FryRender use Metropolis. But if you have ever used Maxwell or Fry, you quickly realize that you're still in the same boat. Unless you have a render farm at your disposal, or at least one or more quad-core machines to network together, you are still looking at at least a 12-24 hr render for something like 3000x2000. Of course, it all depends on whether you are using glass, caustics and so on, but it pretty much puts full-scale animation on the back burner unless you have a BIG render farm. It appears that for the time being, Maxwell has found a niche in the viz industry for product design and architecture, because for many of those projects the client is willing to wait an extra day or two to see a finished concept image.
JimB> Yeah, baking would be nice for static lighting, but if you want complete realism, you would have to stick to stationary light sources. I wonder how well it would work if baking were used for all of the stationary lights and non-baking for moving lights, all at once. That way you could get the major processing time significantly reduced and let the renderer concentrate on only the moving elements after frame 1.
Incidentally, I have no idea what method TG2 uses. Metropolis, maybe? I'm curious.
Check out this link if anyone is interested. It's a good launching point if you want to do further web research on Metropolis (MLT), photon-mapping, etc.:
http://renderspud.blogspot.com/2006/10/biased-vs-unbiased-rendering.html
And if anyone can follow college-level math, I have some great PDFs on MLT theory. I could send them to you if you're interested.
I've been told in the past that TG2 uses a custom GI scheme for its renderer, based somewhat on existing methods, but I'd still love to know more! Treddie, would you happen to know the method employed by Brazil r/s? Is it traditional Monte Carlo or something else? I ask out of curiosity.
The link you provided talks about unbiased renderers. As far as I knew, only a renderer based on spectral rendering can be unbiased, as traditional RGB renderers cannot be by their intrinsic nature. Also, renderers of the RGB type cannot do things like polarization; spectral rendering, on the other hand, is capable of these effects and, if carefully designed, is physically accurate where renderers of the RGB type are not.
Also, the link provided talks about path tracing, which is the very inferior counterpart of ray tracing, which is a little odd.
Regards to you.
Cyber-Angel
Thanks for the TG2 info there.
From what little I know (or THINK I know) about Brazil r/s, it uses Monte Carlo (not MLT) along with photon mapping and ray tracing.
Regarding spectral rendering, that's what makes Maxwell Render (and any MLT-based renderer) so cool. It deals with the WHOLE spectrum, not just RGB. As a result, the caustics it creates are truly amazing and faithful. Some examples compared actual photos with Maxwell scenes duplicating the setup virtually, and given enough render time, you can hardly tell the difference. I believe I remember someone even doing an example of the famous experiment with polarized light and three filters, but I might just be conjuring up a false memory there; I've gone back to the Maxwell site and can't find it.
But OH-MY-GOD... you have to wait DAYS to get a good image with caustics in Maxwell, to get rid of all the noise. Noise with MLT clears up with sharply diminishing returns: each successive unit of time sees far less noise removal than the last, so as the render progresses you see less and less improvement. The last hours are really frustrating. Thank god for network rendering.
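That diminishing return is the standard Monte Carlo convergence behaviour: the error falls off roughly as 1/sqrt(samples), so halving the noise costs four times the render time. A quick sketch of plain Monte Carlo (not Maxwell's MLT, but the same square-root law applies):
[code]
# Demonstrates the 1/sqrt(N) error decay of plain Monte Carlo:
# quadrupling the samples only halves the noise, which is why the
# last hours of a render seem to accomplish so little.
import math
import random

def mc_estimate(n):
    """Estimate E[U^2] for U ~ Uniform(0,1); exact value is 1/3."""
    return sum(random.random() ** 2 for _ in range(n)) / n

def rms_error(n, trials=200):
    errs = [(mc_estimate(n) - 1.0 / 3.0) ** 2 for _ in range(trials)]
    return math.sqrt(sum(errs) / trials)

for n in (100, 400, 1600, 6400):
    print(f"N={n:5d}  rms error ~ {rms_error(n):.5f}")
# Each 4x increase in N roughly halves the error.
[/code]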
Not to sound picky, but doesn't Maxwell treat light as just an EM wave, rather than treating light as it is, which is both a particle and a wave (the so-called duality of light)? A fully physically accurate light model would have to account for that duality. But like I said, I'm trying not to sound picky.
Regards to you.
Cyber-Angel
I'd like to see how a quad handles TG2 when multi-threading comes out... is it just using one core on the quad right now, running one instance of TG2?
Maxwell has to be based on both wave and particle nature because to do things like refraction, you need wave behaviour, but to do things like shadow building (and soft edge shadows), you need particle behaviour.
I am confused about TG2 supposedly not using multi-threading right now. I heard them say it wasn't supported yet, but right now, I'm maxed out on both processors of my dual-core laptop, rendering a TG2 scene.
You must have a time machine and used it to steal a version from the future, because TG2 isn't multithreaded yet. It's possible that you're running some other task that's taking up the other core.
I took a class on making your own raytracer. I know what you mean; it gets really complex.
You might be right there. Looking at the processors this morning (render complete), both processors are down to roughly 40%. It's possible that since TG2 took all of one processor, the parts of other processes running on it were moved over and added to the other one.
I looked into it too, ages ago, Will, and tried to do it in BASIC. Way slow. And light transport methods are WAY more processor-intensive than little old ray tracing. Not until I REALLY go through the MLT theory, and can honestly say I think I understand it, will I even entertain the idea of saying I know how it works. MLT is based partly on statistical algorithms, and I never warmed up to the math of statistics, so they're not my strong point. I imagine it will take three or four passes through the theory to get my mind around it all, and that will have to be broken up by some homework to grasp some of the basic math concepts. I'm good with calculus and linear algebra, but statistics makes me want to yawn. I don't know why. UGHHH!
An interesting factoid about ray tracing is that in its original incarnation it had nothing whatsoever to do with computer graphics (not until it was used for that purpose for the first time on the motion picture Tron); it was used to calculate the paths radiation could take from, say, a reactor leak or spillage of radioactive waste.
As for light transport methods, it really depends on the implementation used; ray tracing can be as slow as a glacier if not implemented properly (Bryce 5.0 and earlier are examples). From the academic standpoint (pure theory seen in countless papers, not software you can use), there are many ways to make the existing methods faster, with some promising results, and it is more likely than not that a combination of these will produce the best results. As a caveat, I will say that making these disparate algorithms work together may be something of a challenge.
Regards to you.
Cyber-Angel
I think you're probably right. If 3D rendering were to rely exactly, and ONLY on just how light really behaves, we would probably never get anything rendered at all. It seems that for 3D, this is the age of optimization and really clever simulation.
Quote from: Cyber-Angel on February 09, 2008, 06:34:12 PM
The link you provided talks about unbiased renderers. As far as I knew, only a renderer based on spectral rendering can be unbiased, as traditional RGB renderers cannot be by their intrinsic nature. Also, renderers of the RGB type cannot do things like polarization; spectral rendering, on the other hand, is capable of these effects and, if carefully designed, is physically accurate where renderers of the RGB type are not.
Heya Cyber-Angel,
Here's a good, short, and pretty understandable paper on unbiased rendering: http://www.cs.caltech.edu/~keenan/bias.pdf
DISCLAIMER: I'm not posting this to take anything away from Maxwell, or to get into a "my renderer's better than yours" debate; I'm just posting to help clarify some things that I think are general misconceptions. I don't intend any of this to be argumentative.
Whether a renderer is biased or not is a technical description of the rendering algorithm -- regarding how it accounts for, and where it gets, the information that the renderer uses to do a calculation. The words "biased" and "unbiased" do not refer to the resultant image, and they don't have a lot of bearing on whether or not a renderer is "physically correct." An important point to get here is that even if a renderer is unbiased, it may still produce incorrect images. "Unbiased" doesn't refer to the result of the renderer, only the algorithms used, and an algorithm only needs to stay unbiased within the realm of what it chooses to support.
In my understanding of the definition of "bias", if you have a renderer that supports glass but doesn't properly account for all light paths through that glass, you've got a biased renderer. However, if that same renderer handles everything else rigorously but doesn't allow you to even create glass (i.e. it doesn't support glass), it can actually be an unbiased renderer.
Another important point is that it's totally possible to produce 100% correct images using biased renderers. Generally, a renderer intentionally uses biased algorithms for performance reasons, not as a mathematical shortcut or due to misunderstandings, oversights, and mistakes made by the programmers -- e.g. photon mapping is a biased algorithm, but photons are undeniably fast and they can produce correct images.
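Here's a toy example of the distinction (my own sketch, not code from any shipping renderer): both estimators below target the same quantity. The unbiased one is correct on average at ANY sample count but noisy; the clamped one suppresses "fireflies" (rare huge samples) at the cost of a systematic error that no amount of extra samples removes.
[code]
# Toy "biased vs unbiased" demo: both estimate the same mean.
import random

def sample_radiance():
    """Mostly dim, occasionally a very bright 'firefly' path."""
    return 50.0 if random.random() < 0.01 else 0.5
    # true mean = 0.01*50 + 0.99*0.5 = 0.995

def unbiased(n):
    return sum(sample_radiance() for _ in range(n)) / n

def biased_clamped(n, clamp=5.0):
    return sum(min(sample_radiance(), clamp) for _ in range(n)) / n

random.seed(42)
for n in (100, 10000, 1000000):
    print(f"N={n:7d}  unbiased={unbiased(n):.3f}  "
          f"clamped={biased_clamped(n):.3f}  (true mean 0.995)")
# The clamped estimator converges to 0.01*5 + 0.99*0.5 = 0.545,
# not 0.995 -- consistent noise reduction, permanent bias.
[/code]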
Whether a renderer is doing its calculations in RGB or some other way also has no bearing on whether it is biased or unbiased. You could write an unbiased renderer that only calculates light intensities and produces black-and-white images -- again, it's a matter of whether or not everything in the scope of the simulation is accounted for in the equations.
As to spectral effects not being possible in an RGB-space renderer, that's not true. A renderer that does the majority of its calculations in RGB can still fully support spectral effects. This is just a guess, but I'd bet that Maxwell is actually doing the majority of its work in RGB space, but supports intelligent spectral effects (otherwise, writing new shaders for it would be a real bear). Brazil r/s runs mostly in RGB space, but it supports spectral effects -- you can actually run glass-prism-type experiments that produce rainbow caustics and things like that. This image shows some spectral effects via dispersion in glass: http://brazil.mcneel.com/photos/technology/picture22.aspx
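For what it's worth, the usual way to get dispersion out of a mostly-RGB pipeline looks something like this sketch (my assumption about the general technique, not Maxwell's or Brazil's internals): sample a wavelength per ray, bend it with a wavelength-dependent index of refraction, then fold the result back into RGB.
[code]
# Sketch of spectral dispersion: per-wavelength refraction via
# Cauchy's equation, with a very crude wavelength -> RGB bucket.
import math

def cauchy_ior(wavelength_nm, a=1.5046, b=4200.0):
    """Cauchy's equation for a BK7-like glass: n = A + B/lambda^2."""
    return a + b / (wavelength_nm ** 2)

def refract_angle(theta_in, n):
    """Snell's law, air -> glass: sin(t) = sin(i) / n."""
    return math.asin(math.sin(theta_in) / n)

def rough_rgb(wavelength_nm):
    """Very crude wavelength -> RGB bucket, just for the demo."""
    if wavelength_nm < 490: return (0, 0, 1)   # blue-ish
    if wavelength_nm < 580: return (0, 1, 0)   # green-ish
    return (1, 0, 0)                           # red-ish

theta_in = math.radians(45.0)
for wl in (450, 550, 650):                     # blue, green, red samples
    out = math.degrees(refract_angle(theta_in, cauchy_ior(wl)))
    print(f"{wl} nm  ior={cauchy_ior(wl):.4f}  "
          f"refracts to {out:.3f} deg  rgb={rough_rgb(wl)}")
# Blue bends more than red -- that per-wavelength difference is what
# produces rainbow caustics like the Brazil prism image above.
[/code]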
Quote from: Cyber-Angel on February 09, 2008, 06:34:12 PM
doesn't Maxwell treat light as just an EM wave, rather than treating light as it is, which is both a particle and a wave
If it did, you could reproduce Young's Double Slit experiment (http://en.wikipedia.org/wiki/Double-slit_experiment) - which I'm pretty sure you can't do in any production renderer out there :)
/me waves at treddi
Good info there Cap'n.
I'll check it out.
treddie
No, it won't be faster... but what did you expect? TG 0.9 was more or less amateur (and yes, it produced some nasty results, but the program itself was nowhere near Vue or whatever), so it took much less time. Now you have a program that can produce absolute photorealism (not near-photorealism; if you use it well, you can make it look like a photo if you want) and is basically limitless. Not to mention you can have a super-killer machine for Terragen for about 20,000 (say you buy an Intel quad, 4-8 GB of RAM, a good motherboard, and save on graphics).
The prices of processors are falling like leaves (I remember when a basic Athlon dual-core cost like 240 bucks a year ago; now you get a better quad-core for the same price), and RAM is like cheese: you can get 8 GB for like 300. Amazing.
I can remember when someone said somebody was coming out with an 8 GB hard drive; nobody believed it at first. ::)
http://www.pcguide.com/ref/hdd/hist-c.html