Didn't take me long to break T2 again. Don't know what happened; preview renders on low detail were fine. Then I tried a final render - halfway through I got a whole string of ray trace errors and a heightfield error, and I wasn't even using a heightfield. Anyone know what this all means?
Thanks,
JR
Hi,
It looks to me like you've run out of memory. Looking at the file, you've set the subdiv cache size to 2600 MB and to have it preallocated. On 32-bit Windows that is very close to the limit of how much memory a process can use even with the /3GB switch (approx. 3000 MB). It's far too much. You can see that the second error message is "Unable to allocate memory for subdiv cache" - a good hint that your subdiv cache is too large. It may still work if you uncheck "Preallocate subdiv cache", but I would suggest taking it down to a smaller value, because otherwise TG2 will try to use up to the amount you've set and that may be too much. Have you tried it with the default 400 MB?
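To put rough numbers on that (the ~3000 MB figure is approximate, and the leftover is before counting anything else the app needs):

```python
# Back-of-envelope arithmetic for the situation above: a 32-bit process
# with the /3GB switch gets roughly 3000 MB of usable address space.
process_space_mb = 3000   # approx. usable space with /3GB (from the post)
subdiv_cache_mb = 2600    # the cache size set in the problem file

leftover_mb = process_space_mb - subdiv_cache_mb
print(leftover_mb)  # 400 MB left for the UI, scene data, and render buffers
```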
You might want to try reading the error messages :-). The "Unable to allocate memory for subdiv cache" error actually goes on to explain how to fix the problem.
All the "Unknown error" errors following the "Unable to allocate memory for subdiv cache" are a very good indication that you've run out of memory.
The heightfield error comes from a Heightfield load node inside your Plane 01 node. I found it by looking through the nodes in the Node Network project view list. The actual Heightfield load node was hidden behind the Default shader node in the network view, so I didn't spot it there immediately.
BTW, TG2 hasn't actually crashed here. It's done the right thing - you've tried to make it do something it isn't able to and it's stopped doing it before it actually crashes i.e. "the application terminates unexpectedly" or whatever it is Windows says when an app actually crashes.
Just a note regarding the "Preallocate subdiv cache" setting - if you set it to a large amount it may fail even though you think you've set it smaller than the amount of memory available. This is because TG2 may not be able to allocate such a large block of memory in one piece. As the application gets used, the whole memory space can become broken up into chunks, and sometimes that means there isn't a single chunk big enough for one large allocation like that. This is more likely to be a factor if you've been using TG2 for a while.
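Jo's fragmentation point can be shown with a toy model (this is just an illustration of the general phenomenon, with made-up gap sizes - not how TG2's allocator actually works):

```python
# Toy model of address-space fragmentation. Suppose the free space in a
# long-running process has been chopped into separate gaps between
# allocated blocks (sizes in MB, hypothetical numbers):
free_gaps_mb = [900, 700, 600, 500, 300]

total_free_mb = sum(free_gaps_mb)
largest_gap_mb = max(free_gaps_mb)

# Plenty of memory is free in total, yet a single 2600 MB block can't
# fit anywhere, so one big preallocation fails while small ones succeed.
print(total_free_mb)            # 3000
print(largest_gap_mb >= 2600)   # False
```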
Regards,
Jo
I did read the error messages. Why would it stop halfway through and all of a sudden not have enough memory? And what about the heightfield error message - I wasn't using a heightfield.
And the default 400MB - way too slow!
Let's break this down then:
1) I deleted the heightfield shader from the tree and later found a lone node in the node list, which I deleted as well. So why are there orphaned nodes in the node list when you delete the heightfield from the treeview pane?
2) Pre-Allocate: I think we differ on the definition of pre-allocation of memory. As a programmer, if I pre-allocate a chunk of memory (and get a successful return value) then I have the memory until I release it. I don't have to keep "pre-allocating" memory; I already have it. So my question is: what does the "pre-allocate" switch really do, and how is T2 handling memory? Does it keep releasing and requesting memory, which defeats the purpose of pre-allocating the memory in the first place? I haven't seen the code, so I'll let you tell me how T2 is using the "pre-allocation" of memory.
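For what it's worth, the "allocate once, hold until release" contract being described might be sketched like this (illustration only - how TG2 actually manages its cache is exactly the open question):

```python
# Sketch of a preallocated cache: one up-front allocation, then reuse.
# The class and its behavior are hypothetical, for illustration only.
class PreallocatedCache:
    def __init__(self, size_mb):
        # One allocation up front; if this succeeds, the memory is ours
        # until the object is released - no further requests needed.
        self._buf = bytearray(size_mb * 1024 * 1024)
        self._used = 0

    def store(self, nbytes):
        # Reuse the existing buffer rather than asking the OS again.
        if self._used + nbytes > len(self._buf):
            raise MemoryError("cache full; evict, don't reallocate")
        offset = self._used
        self._used += nbytes
        return offset

cache = PreallocatedCache(16)   # 16 MB held for the cache's lifetime
print(cache.store(1024))        # 0: first chunk goes at offset 0
print(cache.store(2048))        # 1024: next chunk follows in the same buffer
```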
Thanks.
JR
I have run into this on my computer. It is not so much a RAM problem, but it is related. The problem is with the swap file. I had to adjust mine to 8 GB. There is also a way to adjust the RAM limit by changing the large address aware flag in the registry.
I lost the link to Microsoft's knowledge base when I went to Windows 7. But I want to adjust it on Windows 7 also. Stay tuned - I'll see if I can find it again.
http://www.tomshardware.com/forum/238910-45-large-address-aware
Found some info at a site I find reliable.
Interesting. I have mine set at 2.5 times my total memory. I believe the optimum is twice the installed memory, unless MS has changed it for Win7.
I still don't see the difference though between pre-allocating memory versus allocating on the fly as far as how T2 seems to be operating.
Also conducted an experiment. I tend to like using a powerfractal on my terrain instead of the heightfield shader. So I loaded the powerfractal and moved it to first position, then deleted the heightfield shader. Looked in the node tree and, lo and behold, there was the orphaned node. Why is it not deleted from here as well, like every other node? And it still takes up memory - aaarrrgggghhhh!
Here is the info:
The virtual address space of processes and applications is still limited to 2 GB unless the /3GB switch is used in the Boot.ini file. When the physical RAM in the system exceeds 16 GB and the /3GB switch is used, the operating system will ignore the additional RAM until the /3GB switch is removed. This is because of the increased size of the kernel required to support more Page Table Entries. The assumption is made that the administrator would rather not lose the /3GB functionality silently and automatically; therefore, this requires the administrator to explicitly change this setting.
The /3GB switch allocates 3 GB of virtual address space to an application that uses IMAGE_FILE_LARGE_ADDRESS_AWARE in the process header. This switch allows applications to address 1 GB of additional virtual address space above 2 GB.
The virtual address space of processes and applications is still limited to 2 GB, unless the /3GB switch is used in the Boot.ini file. The following example shows how to add the /3GB parameter in the Boot.ini file to enable application memory tuning:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="????" /3GB
More on this can be found at: http://www.microsoft.com/whdc/system/platform/server/PAE/PAEmem.mspx (http://www.microsoft.com/whdc/system/platform/server/PAE/PAEmem.mspx)
What are your system specs?
And are you saying it runs fine at 400 MB, just "too slow"? You could try somewhere between 400 and 2600... probably more towards the 400 side.
I think the point is getting missed here - I know I can select less memory for rendering. The question I have is: if 2600 is successfully pre-allocated and works for half of a render, where did the memory go, unless T2 released it and had to reallocate it and then the operation failed? Thus my question: what does "pre-allocate subdiv cache" really do?
Hi,
AFAIK if you have "Preallocate subdiv cache" turned on then the subdiv cache is allocated when rendering begins. Before that setting was available it was always allocated on the fly up to the maximum you specified, which could lead to problems when the subdiv cache limit you'd set was reached. By preallocating the subdiv cache you can know ahead of time that the subdiv cache at least was safely allocated. Matt would need to clarify this, but I believe if the subdiv cache allocation fails then it goes back to allocating the cache on the fly, which is why the render proceeds. However I think it will continue allocating subdiv cache up to the very high limit you've set, and eventually that causes memory allocations to begin failing, which I'm 99% sure are causing the other errors you're seeing.
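If I've understood Jo's description correctly, the control flow would be something like this (an assumed sketch based on the post above, not TG2's actual code):

```python
# Assumed "preallocate with fallback" flow: try to grab the whole subdiv
# cache in one block; on failure, fall back to on-the-fly allocation.
def begin_render(cache_bytes):
    try:
        cache = bytearray(cache_bytes)  # one big up-front block
        return "preallocated"
    except MemoryError:
        # "Unable to allocate memory for subdiv cache" would be reported
        # here; rendering still proceeds, allocating on the fly toward
        # the same (too high) limit, and later allocations fail instead.
        return "on-the-fly"

print(begin_render(64 * 1024 * 1024))  # a modest 64 MB request succeeds
print(begin_render(2**62))             # an absurd request falls back
```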
The cache is disposed of when rendering finishes. It doesn't make sense to have a big chunk of memory that's only used for rendering hanging around doing nothing; the shortage it potentially causes could prevent you doing other things with the app.
I think the "Unable to allocate memory for subdiv cache" error is actually shown at the beginning of rendering, not half way through as you think. The other errors would come later in the rendering process.
Using a 2600 MB subdiv cache, preallocated or not, is too large for 32 bit TG2. You need to try something smaller.
Regards,
Jo
Quote from: jo on May 29, 2010, 07:26:41 PM
I think the "Unable to allocate memory for subdiv cache" error is actually shown at the beginning of rendering, not half way through as you think. The other errors would come later in the rendering process.
Using a 2600 MB subdiv cache, preallocated or not, is too large for 32 bit TG2. You need to try something smaller.
Thanks, that was the info I was looking for. Max would be 2048 then. As for the errors, the only error I get before I start rendering is "unable to find heightfield 01 file". I can't find the missing node, and I'm still confused as to why it is still in the node tree when I delete it from the Terrain tree view.
Why even use a 2048 subdiv cache though? It's extremely high. I know in another thread people recommended that for improving render time. Did you prove through actual testing that it made a significant difference? If not, I'd suggest testing that to be sure. If the difference is not more than e.g. 50%, I don't think it's worth the risk, especially if you're pushing memory limits in other ways (e.g. high render resolution, high AA, etc.).
Perhaps you're not aware, but there are many other things besides the subdiv cache that take memory, some of which vary over the rendering process. So even if memory for the subdiv cache was successfully allocated at the beginning, memory use through the render could still vary, e.g. with the image buffer or AA buffer, and cause a crash due to out-of-memory issues. Other things that can take up memory include images (either on objects or used as masks or textures on the terrain), heightfields, objects and populations, AA buffers, the image buffer, etc. So it's really not a good idea to make assumptions about max buffer size allocation unless you have a very good idea of what else is using memory in the scene.
Btw njeneb, although increasing swap file size may help allocate large contiguous blocks of memory, swap file ("virtual") memory space is so much slower than real memory that it will dramatically slow down your renders. It's not at all recommended to use this as a solution.
- Oshyan
Sometimes there is no other way to get the render done. I am hoping 64-bit will help with RAM address space. I just upgraded to Windows 7 64-bit to take advantage of this when the new build is released.
The Media Center is so much better!
njeneb, is that true though? I mean do we have example scenes that verifiably render correctly with higher cache values and not with lower values?
- Oshyan
It's not that they don't render correctly - it speeds up the render time.
By how much? And have you tried intermediate, safer values, like 1600MB?
- Oshyan
Personally I've never tried over 800, and that was when I told it to use 8 cores on my i7. And I run 64-bit with 8 gigs of RAM.
I'll have to make a decently memory-intensive scene and try out various memory values when I get home to see if it really plays much of a role. But as Oshyan and I mentioned earlier, even if it does improve speed, you need to find your limits - 2600 is out of the picture, so try 800, then 1200, then 1400, 1600, etc. to see what is a good balance between speed and not crashing your render.
And as Oshyan mentioned, relying on virtual memory will not help your speeds, for sure - virtual memory uses your hard drive. That's the worst storage speed available on the computer.
Which is why I was wondering about your computer specs. If you are relying on virtual memory a lot, the speed differences you see could depend on how much actual physical memory Terragen was given vs. virtual memory during those renders, and not so much on how much you set to be allocated.
I tend not to change the subdiv cache. Lately, I have been rendering while I am at work. Then I use the clip render to finish the render and combine the images in Corel Paint. For some reason my computer uses swap memory a lot when using Terragen 2. I have not looked into why.
Using a P4 HT, which is now a slow computer, the render time does not matter (much). A single 1920 x 1080 image can take days to finish.
I also watch TV and movies while rendering, net surf, etc.. So it is probably to my advantage the swap is used. Someday, I'll be getting a new computer just for rendering. This one will be the entertainment and game machine.
I've been running the subdiv cache at 2048. I've been busy with a project; once that is done I'll do some actual timings. So far I've only been judging by eye - which is only my perception of it, of course.
Specs:
2.70 gigahertz AMD Athlon II X2 215
256 kilobyte primary memory cache
1024 kilobyte secondary memory cache
Bus Clock: 200 megahertz
BIOS: Phoenix Technologies, LTD 5.49 08/06/2009
3072 Megabytes Installed Memory
Nothing fancy.
Thanks for all the input, soon as I get a chance I'll do a benchmark - I just hope it doesn't ruin the illusion...
JR
So far my test renderings have been showing the reverse. I dumped a ton of object populations in and put it on decent settings, then tried the 400, 800, and 1200 settings. I've actually seen the higher subdiv values take longer in general.
Could be the scene I'm testing on just isn't intensive enough, but it has A LOT of objects going on. I think the reality is that changing the subdiv cache isn't going to make some miracle change in render speed... only a very small fraction, if that.
Still, further tests are probably needed with a variety of scene types.
Whenever I deal with water is when I tend to jump up the subdiv cache.
It will also depend on how much physical memory you have.
Here are my results with the displacement plane scene in this thread (note that it is a somewhat unusual scene setup so it may be showing a bigger impact than a "normal" scene):
400mb cache: 2 hours 58 minutes 55 seconds
800mb cache: 2 hours 4 minutes 20 seconds
1600mb cache: 1 hour 28 minutes 24 seconds
2400mb cache: crash (out of memory, presumably)
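Working those timings out as percentage reductions relative to the 400 MB baseline (just arithmetic on the numbers above, no new data):

```python
# Render times from the post above, converted to seconds.
times_s = {
    400:  2 * 3600 + 58 * 60 + 55,   # 2h 58m 55s
    800:  2 * 3600 +  4 * 60 + 20,   # 2h 04m 20s
    1600: 1 * 3600 + 28 * 60 + 24,   # 1h 28m 24s
}
baseline_s = times_s[400]

for cache_mb, t in times_s.items():
    reduction_pct = 100 * (baseline_s - t) / baseline_s
    print(f"{cache_mb} MB cache: {reduction_pct:.0f}% reduction")
# Works out to roughly 0%, 31%, and 51% respectively.
```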
Very interesting results in this case. I shall have to do more tests with other scenes.
- Oshyan
Great results. Better than I came up with. At least I'm not imagining the render times. :)
JR
So far these are the types of scenes that seem to benefit from the subdiv cache being higher than 400 MB.
- Scenes with water, also when I use the water shader as a glass effect on objects.
- Darker scenes with high definition of clouds.
I am averaging about a 24% to 36% reduction in render times with higher cache values: 1024 through 2048 MB.
I haven't seen the code behind the render engine, but obviously in some scenes the higher cache value does indeed help decrease the render time. I can find no downside to using a higher subdiv cache value (within limits of course and max 2048 for 32-bit Terragen). Anyone else think of any downside...?
JR
The only likely downside is, as you said, running into memory limits. There is unfortunately an inverse relationship between scene complexity and the possible size of the render buffer. My assumption is that more "complex" scenes probably benefit more from larger buffers, but in those cases it is also harder to have large buffers...
- Oshyan
Quote from: jritchie777 on June 03, 2010, 10:22:45 PM
So far these are the types of scenes that seem to benefit from the subdiv cache being higher than 400 MB.
- Scenes with water, also when I use the water shader as a glass effect on objects.
- Darker scenes with high definition of clouds.
I am averaging about a 24% to 36% reduction in render times with higher cache values: 1024 through 2048 MB.
I haven't seen the code behind the render engine, but obviously in some scenes the higher cache value does indeed help decrease the render time. I can find no downside to using a higher subdiv cache value (within limits of course and max 2048 for 32-bit Terragen). Anyone else think of any downside...?
JR
Yep, I've been trying water and darker/lower sun settings + heavy cloud use, and have noticed some improvements. I think the best choice for this setting is playing it more on the safe side and guessing a slightly lower value - but still increasing it by a bit. If the render crashes, it's only going to increase the time required ;).
True, but it is easy to figure out. My desktop has only 3 GB of memory; after boot-up I have about 2.5 GB. So if I'm only running T2, I can crank it up to the limit of 2048 without fear of a crash. Task Manager's Processes tab gives a good view of memory being used and what is free. Stay within the "free memory" limits and use no more than 2048 MB for 32-bit T2.
Question: Does the 64-bit version of T2 have such limits, or has it been optimized for 64-bit OSes?
JR
p.s. Shortcut for Task Mgr (Windows only) Ctrl+Shift+Esc
There is no 64-bit version of TG2 yet. And assuming only 500 MB of memory is needed for everything besides the render cache is not really safe in many cases. I reckon you've been rather lucky, or haven't been preallocating that large a cache in every case.
- Oshyan
500 MB left for what is not safe? If T2 is the only thing I'm running...
Or is everyone just intent on discouraging me from shortening my render times? It's a 'caveat' situation anyway.
I'm intent on helping you avoid crashes that lose render time and cause frustration. I'd never want your renders to be longer than they have to be. ;D If anything I'm interested in the results being discussed here as they could inform development around optimizing the renderer and caching scheme.
500MB is not much considering everything that can use memory, from objects and textures and terrains that are loaded, to image and antialiasing buffers (separate from render buffers, as far as I am aware), to populations, and more. By all means use whatever render buffers work for you and give you the best render times, I just think it's a dangerous *assumption* to make that a 2048MB render buffer size is a generally good practice. I know for my part it's impractical to use a buffer that size on the vast majority of my scenes, but mine may be more complex than yours.
More than anything I simply want to avoid unnecessary crashes for you and anyone who follows your example.
- Oshyan
I have noticed less memory use since I loaded Windows 7. This may be because the OS is not loading itself entirely into memory. Everything I have run - games, Blender, Terragen 2, Corel Paint, Firefox, etc. - has worked much better than with Vista or XP.
As Oshyan mentioned, there are many things that can be loaded into memory. You have to remember Terragen needs access to more memory than just the renderer. It has its entire UI, anything you load in, etc.
Also, are you running the /3GB switch (though that isn't very wise considering your max is 3 GB)? If not, 2048 is the LIMIT of Terragen's access, and it still has tons of other pieces that need memory. So you need to approach it more like:
2048 - (memory the Terragen UI is currently taking) - (chunk of memory for objects etc.) - a little more as a just-in-case buffer = safe subdiv cache.
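Plugging illustrative numbers into that formula (every component size below is a guess for the sake of the arithmetic, not a measurement of any particular scene):

```python
# Hedged back-of-envelope for a safe subdiv cache on 32-bit Windows
# without the /3GB switch. All component sizes are illustrative guesses.
address_space_mb = 2048   # default per-process user space, 32-bit
ui_and_app_mb    = 300    # Terragen itself: UI, code, working data (guess)
scene_assets_mb  = 400    # objects, textures, heightfields (guess)
safety_margin_mb = 200    # headroom for buffers that grow mid-render

safe_cache_mb = (address_space_mb - ui_and_app_mb
                 - scene_assets_mb - safety_margin_mb)
print(safe_cache_mb)      # 1148, well under the 2048 MB hard limit
```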
And even if you use the /3GB switch, with only 3 gigs of RAM I don't think it really does anything, as the system will still reserve 1 gig for the kernel before giving RAM to a program - which would leave only 2 GB, so the switch wouldn't do anything.
One way to save yourself a little memory, if you have the paid version, is to run the render with nothing else up via console mode, which should give you the few extra megs the UI would usually be taking (I think - would need to check).