I haven't worked with TG2 long enough to know whether this would yield meaningful numbers... however, what do you think about this:
Once a crop render is complete, a time estimate for a full-frame render is shown alongside the current information at the top of the render window, based on the time it took to render the crop area multiplied by the ratio of the full image area to the crop area...?
That way you could choose a complex part of the scene and get a worst-case number.
feedback?
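A minimal sketch of the proposed calculation (the function is hypothetical, not part of TG2; it assumes render cost scales linearly with pixel area, which is exactly why a complex crop gives a worst-case figure):

```python
def estimate_full_frame_seconds(crop_seconds, crop_pixels, full_frame_pixels):
    """Extrapolate a full-frame render time from a completed crop render."""
    # Scale the measured crop time by the pixel-area ratio.
    return crop_seconds * (full_frame_pixels / crop_pixels)

# e.g. a 200x200 crop of an 800x600 frame that rendered in 30 s:
print(estimate_full_frame_seconds(30, 200 * 200, 800 * 600) / 60)  # 6.0 minutes
```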
Yeah, that could easily be done and would help me get a good render time/quality ratio.
I'll third that :)
This would be extremely useful...
Also, I'd love to be able to configure the camera movement keys.
Also, Terragen crashes when setting a camera position of 0,0,0.
(hey, may as well get them all in one post??) :D
I have been thinking about such a feature myself, but in my version you'd have a small drop-down box to the right of the elapsed render time clock that you already have at render time; this drop-down would let you select an estimated completion time clock that would work the same way as the one you see when you're downloading a file. This estimated render time clock should also show the percentage of the image complete.
This completeness percentage should have two modes:
1. Start from 0 (Start of Render) and work to 100 (End of Render)
2. Start from 100 (Start of Render) and work to 0 (End of Render) [I have seen this done in some software installations, where they start with the full size of the program, say 25MB, and work their way down to zero].
This process should be aware of how the image is composed and base its estimate on things like:
Image Resolution
Number and complexity of Volumetric cloud layers
Number and complexity of population instances in an image
Global Illumination and Atmosphere Samples and other settings
Ray traced Shadows
Number and complexity of displacements, etc.
Regards to you
Cyber-Angel
A reliable "time left" indicator for renders in TG2 is virtually impossible given the variety of scenes and factors involved. The best way to deal with this is just to do a very low detail render of the same scene *at your target resolution* and then multiply the results appropriately depending on your final main Detail setting. All other settings (cloud samples, etc.) would have to remain the same. The exact formula is honestly unclear to me since my own tests have shown some variability. But in theory 0.1 detail should be 100 times faster than 1.0. I have seen it vary between 50 and 100 times, for some reason. But that may give you a reasonable starting point.
- Oshyan
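As a hedged sketch of the method Oshyan describes (assuming render time grows with the square of the Detail setting, per the "0.1 should be 100 times faster" rule, and using his observed 50-100x spread as an error band; nothing here is an actual TG2 API):

```python
def estimate_from_low_detail(test_seconds, test_detail, target_detail):
    """Scale a low-detail test render time up to a target Detail setting.

    Assumes time grows roughly with the square of Detail, so a 0.1-detail
    test should be ~100x faster than 1.0; measured speedups reportedly vary
    between ~50x and ~100x, so return a (low, high) range, not one number.
    """
    theoretical = test_seconds * (target_detail / test_detail) ** 2
    return theoretical / 2, theoretical  # crude band from the observed 50-100x spread

# e.g. a detail-0.1 test at the final resolution took 60 s; final detail is 1.0:
low, high = estimate_from_low_detail(60, 0.1, 1.0)
print(f"expect roughly {low / 60:.0f}-{high / 60:.0f} minutes")  # 50-100 minutes
```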
I wouldn't say virtually impossible. Everyone knows a "Time Left" feature will never be exactly right, but it would help, in my opinion. A lot of other renderers have this feature and there are no problems there. All you have to do is take the time each step takes and the number of pixels it covers, extrapolate a total render time from each step, and average those estimates as every new step completes (a step meaning when a new mini area gets rendered; it looks like TG2 renders in the shape of polygons).
Might I add that other renderers also have a wide variety of scenes and factors involved, but it doesn't stop them from estimating the time left to render.
Quote from: JavaJones on January 05, 2007, 09:05:41 PM
The best way to deal with this is just to do a very low detail render of the same scene *at your target resolution* and then multiply the results appropriately depending on your final main Detail setting. All other settings (cloud samples, etc.) would have to remain the same.
...something that TG2 could do for us and present in the render GUI once a first render is made: a full-frame render time estimate based on the numbers you quoted, maybe even updated when a new render detail value is set or the frame size is changed.
However...
...that would most probably incur a lot of extra coding and open multiple cans of worms... so why not just have a simple "full-frame render time approximation" value at the top of the render window when I complete a CROP render (i.e. showing crop render time divided by the crop area and then multiplied by the full-frame area)? That wouldn't touch the general UI and should be pretty easy to implement. Please correct me if I'm wrong.
//O.
I am inclined to agree with Oshyan. However, the first rendering pass (with the dots) is a good way of working out an initial approximation of the total rendering time - I'm finding that the first pass probably takes about 1/10th of the total render time.
While such a feature would be nice, I'd rather see some other features added beforehand (such as transparent water and distorted Perlin). Also, I reckon the amount of time needed to derive an algorithm for semi-reliable estimation would be better spent elsewhere. So I would recommend taking the first-pass time and multiplying it by 10.
Andrew Randle
The Geostation
...thus my hope for a simple solution that is easy to implement, so as not to take away from the important stuff :)
Quote from: The Geostation on January 06, 2007, 10:00:36 AM
However, the first rendering pass (with the dots)...
I have found that with maximum quality settings T2TP seems to render almost everything first and then fills in the few black dots left, thus making this estimation useless.
I have a feature in one of my programs for the old TG that estimates rendering times for animations based on the length of time frames have taken to render so far. I wouldn't think that implementing some time estimation in Terragen itself would cause too much trouble.
Quote from: njen on January 05, 2007, 09:35:37 PM
All you have to do is take the time each step takes and the number of pixels it covers, extrapolate a total render time from each step, and average those estimates as every new step completes (a step meaning when a new mini area gets rendered; it looks like TG2 renders in the shape of polygons).
I am inclined to think that the above idea would be the best, giving a live estimate of the time remaining. Terragen could easily go beyond my simple time averaging and look at tendencies, standard deviation and the like to produce a very accurate estimate. It could even give an estimated margin of error for the estimated time remaining.
I am not so sure about the original idea, however, of basing the estimation on a crop. That would waste time as TG would have to consider objects outside the crop to even give a rough estimate.
I'm afraid that anything even close to an accurate estimation is far more difficult than you might think. Keep in mind that not all rendering systems or methods are conducive to accurate render time estimations. Some systems are much more consistent across a frame in terms of how long elements take to render, for example. It really depends on what you're rendering and how you're rendering it.
In TG2's case things can vary quite dramatically, especially between sky and ground. So what you're arguing for is the *illusion* of knowledge about render time, not any real information as to how long you should expect to wait - it could easily be 10 times longer or 10 times shorter than even the best estimates would indicate. So what is the value in it?
As I said the best method is to do your own simple estimate with lower detail levels. This could be handled natively in the application for an added nicety, but it will definitely be a lower priority given how easy it is to do your own calculation. We may at some point publish some kind of table of real-world measured tests to allow more accurate estimation on your own, and this is the kind of thing that could form the basis of a tool to estimate final render time based on a lower detail version, but this may well be something that would come after final release (in a patch for example).
- Oshyan
Terragen is based on maths so I'm sure it would be possible to write an external piece of software that reads the tgd file and uses a load of complicated mathematics to analyse settings, objects and other stuff to give a reasonable estimate for the render time. However, a program like that would be very complex and it would have to be thoroughly tested as well as regularly updated. Unfortunately, that would require a lot of time and effort and would still only produce an estimate.
You also have to remember that almost every spec of your computer will affect the render time, even including things like hard drive optimization if your render is complicated enough. In order to make an estimate, the program would have to analyze your CPU, RAM, and motherboard, and then every background task your computer is running, your operating system, and countless other things. I have to agree with Oshyan on this one; I think this feature would be near impossible to implement. After a while, you get a feel for how long things are going to render anyway.
Quote from: MeltingIce on January 07, 2007, 05:33:09 PM
You also have to remember that almost every spec of your computer will affect the render time, even including things like hard drive optimization if your render is complicated enough. In order to make an estimate, the program would have to analyze your CPU, RAM, and motherboard, and then every background task your computer is running, your operating system, and countless other things. I have to agree with Oshyan on this one; I think this feature would be near impossible to implement. After a while, you get a feel for how long things are going to render anyway.
This is not true; none of those analyses are necessary. While a render estimate is just that, an estimate, it is not really as complicated as you are making it out to be.
For example, say you have an image that is 100 x 100 pixels (10,000 pixels in all). The first 500 pixels render in the first step and take 10 seconds; the estimated total render time now shows 3.33 minutes. Then the next step renders, and its 500 pixels take 20 seconds; averaging the two steps, the estimate now shows 5 minutes. The third step renders the next 500 pixels in 20 seconds; the estimate is now 5.56 minutes. The fourth step takes 15 seconds; the estimate is now 5.42 minutes.
That's all there is to it.
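A minimal sketch of that running average (the class and its inputs are hypothetical, since TG2 exposes no per-step timings); it reproduces the figures above, up to rounding:

```python
class RenderEstimator:
    """Estimate total render time by averaging per-step extrapolations."""

    def __init__(self, total_pixels):
        self.total_pixels = total_pixels
        self.per_step_estimates = []

    def add_step(self, pixels_rendered, seconds_taken):
        # Each completed step implies a total time: its seconds-per-pixel
        # rate applied to the whole image.
        self.per_step_estimates.append(
            seconds_taken / pixels_rendered * self.total_pixels)

    def estimated_total_minutes(self):
        return sum(self.per_step_estimates) / len(self.per_step_estimates) / 60

est = RenderEstimator(total_pixels=100 * 100)
for pixels, seconds in [(500, 10), (500, 20), (500, 20), (500, 15)]:
    est.add_step(pixels, seconds)
    print(f"{est.estimated_total_minutes():.2f} min")  # 3.33, 5.00, 5.56, 5.42
```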
Now, if it's a matter of priority as far as features go, then I completely understand it not being added for the time being.
I don't think this is possible, because TG cannot work out how long the render will take until it has actually gone through the process. It's simply too complicated. There are numerous things that will affect the time. You just have to test render and make a rough estimate of how long the full render will take. Your own estimate will be better than anything TG can come up with. I agree with The Geostation; I don't think this is a pressing issue.
That's why it's called an estimate ;)
And TG doesn't have to go through the entire process to work out how long it will take. I explained it in my last post. It calculates how much has rendered so far, and estimates a completion time.
The simplest way I can explain it is that if TG has completed 1/10 of the render and that took 1 minute, then it estimates that the total rendering time will be 10 minutes. When it has completed 1/5 of the render and that has taken a total of 3 minutes so far, the estimate becomes 15 minutes. The mathematics are quite simple.
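In code, this simpler variant is a one-liner; a sketch only, assuming the fraction complete can be read off from the pixels rendered so far:

```python
def estimated_total_minutes(elapsed_minutes, fraction_complete):
    # 1/10 done after 1 minute -> 10 minutes; 1/5 done after 3 minutes -> 15.
    return elapsed_minutes / fraction_complete

print(f"{estimated_total_minutes(1, 1 / 10):.1f}")  # 10.0
print(f"{estimated_total_minutes(3, 1 / 5):.1f}")   # 15.0
```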
Certainly simple, but as you can see the results are likely to be totally inaccurate. What is the real use of doing such a thing? Especially when the method of rendering at lower detail (but same resolution) can yield a more accurate estimate.
- Oshyan
The results are only inaccurate because I didn't average the end results (I tried to keep it simple for efflux). Plus, I only used 2 steps in my last example. In the example before that I used 4 steps, and you could see that the estimated time didn't change much by the fourth step.
When TG2 renders, it takes many, many steps (other 3D programs call them buckets, but TG2 does not render its steps in any ordered fashion or shape, so they are not truly buckets), which would, when averaged, provide a far better estimate.
Doing a low-detail render and then the final render is not only redundant (a waste of render time), but would only provide a barely usable estimate. If I had a 720 x 405 image that rendered in 1 minute with a Detail of 0.1 and AA of 2, how does that tell me how long a render with a Detail setting of 0.7 and AA of 4 will take? It could take 10 minutes, or 60 minutes...
Don't get me wrong, I am not trying to make a big deal out of this, just trying to lay to rest the claims that it is a difficult thing to do, when it is quite simple :)
You could just as easily construct a theoretical scene where the estimate is completely off, for example one with a lot of water and a simple sky, where the top half will render extremely quickly and then slow down dramatically on the water. There's really no such thing as an "average scene", especially when displacement and volumetrics are involved. So although your method may be reasonably accurate (within a 25% margin of error, let's say) on some scenes, it would certainly be dramatically inaccurate on others.
The question isn't whether it can be done, as your method is obviously fairly simple; it's whether providing such information is worthwhile and useful or simply misleading. It could just as easily upset or put off many people when they see drastically incorrect estimates. Meanwhile, the benefits are questionable, since it's only in the latter half of the render that the estimate even starts to get close to accurate, and by then you ought to know whether you want to keep the render going, with or without such an estimate.
- Oshyan
This will be my last post on this topic because it's late and alas I must sleep ;D
First of all, I understand that there is no such thing as an average scene. I've been working in 3D for well over 10 years, and have rendered everything from shaggy bears to bullets firing out of a gun to penguins to vast landscapes. I have used many 3D packages in my time.
I understand that there are areas of an image that are faster to render than others.
I also understand that the 3D packages that have had a render estimation feature have helped me manage my time more efficiently and let me know whether I have overstepped my allotted render time. It's as simple as that :)
That's perfectly fair. It's definitely a feature we'll look at implementing and we will do so if it is feasible to provide something that gives reasonably accurate estimates. :)
- Oshyan
The problem with estimation is the same as with continuing a partial render: TG2 does not currently render pixel by pixel. There are areas of some scenes I've rendered recently where an area is rendered three times! Once with the background atmosphere, then with the terrain, and finally with the water. The pixels in that area had three different values at three different times during the render, not counting the initial black. Where does that come into the estimate calculations? And what about the GI calculations that happen before any "real" pixels are calculated? :)
Rich
Quote from: MeltingIce on January 07, 2007, 05:33:09 PM
You also have to remember that almost every spec of your computer will affect the render time, even including things like hard drive optimization if your render is complicated enough. In order to make an estimate, the program would have to analyze your CPU, RAM, and motherboard, and then every background task your computer is running, your operating system, and countless other things. I have to agree with Oshyan on this one; I think this feature would be near impossible to implement. After a while, you get a feel for how long things are going to render anyway.
That's why I said it would need to be thoroughly tested and regularly updated - you would need a huge range of computers to get sample values from to give the initial estimates any real value. You would then need to automatically collect information from users to further improve the results. However, this is complicated further by the fact that some users may be using CPU limiters and/or have changed the priority of Terragen. I think that this amount of programming and data collection would begin to rival Terragen itself in terms of complexity and would therefore be pointless.
I think the rough estimate based on the length of time taken to render the pixels done so far in a picture is the best balance between complexity and accuracy.
One of the problems is certainly that various elements or areas of the whole picture take much longer to render. My opinion is that once you get used to all the settings, you can make a reasonably good judgement of how long it will take, because you can see all the parts in the picture and the settings you have made. By doing a few test renders you can make a better judgement than what the software could tell you. To be of any use, the software would have to make estimations about all the various settings and picture components, and it would have to be very complicated to equal what you could estimate yourself.
Quote from: efflux on January 08, 2007, 03:34:08 PM
By doing a few test renders you can make a better judgement than what the software could tell you.
The problem is that those test renders require time to set up and run. For those of us who don't have an army of computers to hand but still want to get the most from TG2, this wasted time needs to be avoided. I agree with njen:
Quote from: njen on January 07, 2007, 08:35:05 PM
Doing a low-detail render and then the final render is not only redundant (a waste of render time), but would only provide a barely usable estimate. If I had a 720 x 405 image that rendered in 1 minute with a Detail of 0.1 and AA of 2, how does that tell me how long a render with a Detail setting of 0.7 and AA of 4 will take? It could take 10 minutes, or 60 minutes...
In a way, this topic is related to the topic on the pause/resume option (http://forums.planetside.co.uk/index.php?topic=347.0), because they are both to do with problems actually finishing a render. The solution discussed in this topic allows you to see if a particular amount of quality is going to make the render take too long, or allows you to plan around the render. The pause/resume option allows you to come back after closing Terragen for a while, for whatever reason, to resume rendering a frame that was taking too long.
Perhaps the solutions are related too...
Well, I'm only a simple soul who would love to see the preview render percentage broken down into 1% increments after at least 80%, so that I would have at least half a chance of working out how long it's going to take to finish. Even from 60%, because sometimes you are in the same boat waiting for the 40% to increment up to 80%.
If this assumption is wrong then forgive me ;D
I assume that when the preview is rendering, the detail level shown is actually a new render at that higher detail, not a simple percentage of the finished level of detail. Adding more percentages into the equation would take much longer to get to the finished state...
M.
Hi,
We've discussed this with the alpha testers previously as well. Basically, when we are able to come up with a reasonably reliable way of predicting render time then we will add a prediction. We can't do that now. Either the prediction would be wildly optimistic and then annoy people when it took longer, or it would be wildly pessimistic and people would stop the render. Sometimes it might be pretty close, but it wouldn't be possible to tell when. You can do just as accurate an estimation right now as we could - look at how much has rendered and then look at how much time it has taken so far. A render time prediction which isn't pretty accurate just isn't useful. You could have 90% of an image render fast and then have an incredibly shiny mirror ball or something in one corner which could take an age to render. It could be the other way around.
I absolutely agree that a render time prediction would be a useful thing to have, but it's only useful and worth doing if it's pretty accurate.
Regards,
Jo
I found this paper in PDF format called "Rendering Time Estimation for Real-Time Rendering" by Michael Wimmer and Peter Wonka, which may or may not be applicable; it can be found at this link.
http://www.cg.tuwien.ac.at/research/vr/rendest/rendest_egsr2003.pdf
Not sure whether this could be adapted for use in TG2, but then again I am not a programmer.
Regards to you.
Cyber-Angel
Quote
Adding more percentages into the equation would take much longer to get to the finished state...
Ah, good point; wouldn't want that to happen. But I would still like 1% increments added to the preview after 80%.