Tried to search for an answer, but no success. After experiencing very long rendering times and not understanding why (brand new PC), I checked the box and TG2 sees only 1 processor core. How do I modify this setting? Many thanks in advance
What box did you check?
If you are trying to override the number of cores that Terragen detects, open Terragen 2 and go to the Preferences window (on the Edit menu). On the Startup Settings panel, check "Override automatic number of cores detection" and then, in the text box, type the number of cores that you want Terragen to use.
OK, thanks! I feel so stupid not to have found it by myself. BTW, other applications detect 8 cores because of the hyperthreading. Do you know whether TG2 will work faster that way, or should I use the normal 4 cores? Many thanks for your answer,
With the amount of overhead that would generate it's not worth splitting them. I'd stick with using 4 cores at full blistering speed.
I don't really know how much increase in performance you would get on an i7, but there is a lot of overhead for adding an extra core in Terragen. My best machine is a P4 with hyperthreading (sadly) and using hyperthreading is about 14% faster on a fairly simple scene. The difference is that I am only adding 1 core where you would be adding 4. Here is a thread discussing the subject: The return of hyper threading - Any benefit to TG2? (http://forums.planetside.co.uk/index.php?topic=5032.0)
Frank Basinski (FrankB) also has an i7 920 system and he's using min. 8 threads and max. 16 threads to enable the hyperthreading of the 4 cores.
I'd say: try it :)
Thank you all for your answers. I'll experiment and let you know the results!
First off, it would be interesting to know if anyone else with a Core i7 is seeing it reported as just having the one core.
OK, on to hyperthreading. We try to detect hyperthreading and, if it's present, we set the number of cores to the number of actual physical cores. So if you have a single-core P4 with hyperthreading we report it as just one core and ignore the hyperthreaded core. This is also why we only detect 4 cores on a Core i7 instead of 8. We do this because we found that hyperthreading was slowing renders down in some cases and, as dwilson has found, on older processors hyperthreading only gets you a small speed increase anyway. 14% would be about typical.
With Core i7/Nehalem hyperthreading is apparently much better, and I think in this case we might need to add a check to see if a CPU is Core i7/Nehalem and if so count the hyperthreaded cores. That would mean you'd see 8 cores on a Core i7. There is a different issue here though, in that TG2 has scaling problems with rendering above 4 cores, it starts to get slower instead of faster the more cores you add. On Windows I think it might go ok up to 6 cores IIRC. There is no doubt that the Core i7 is a great CPU for TG2 though.
CPU detection on Windows, especially across the range of OSes we support, is a real nuisance.
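For anyone curious, the logical-vs-physical distinction Jo describes can be sketched in a few lines of Python. This is not Terragen's actual detection code, just an illustration: `os.cpu_count()` reports logical processors (hyperthreaded units included), while counting physical cores portably is harder. The Linux-only helper below infers them from `/proc/cpuinfo`:

```python
import os

def logical_cores():
    """Logical processor count as the OS reports it (hyperthreaded units included)."""
    return os.cpu_count() or 1

def physical_cores_linux():
    """Best-effort physical core count on Linux: count unique
    (physical id, core id) pairs in /proc/cpuinfo. Returns None
    if the file is missing or lacks those fields (e.g. non-Linux)."""
    pairs = set()
    phys = core = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":", 1)[1].strip()
                elif not line.strip():
                    if phys is not None and core is not None:
                        pairs.add((phys, core))
                    phys = core = None
    except OSError:
        return None
    return len(pairs) or None
```

On a hyperthreaded CPU the first number is typically twice the second, which is exactly the gap TG2's detection has to decide what to do with.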
14% improvement is better than 0% improvement though. I had a render that took 17 hours on one core and 14.5 with 2 cores. I also have 3GB of RAM, so my computer has the extra memory needed for another thread.
Quote from: dwilson on July 08, 2009, 07:25:59 pm
14% improvement is better than 0% improvement though.
Yes, I didn't say that quite how I meant to :-). It was a bit early, but what I was getting at was that with earlier hyperthreading Intel had said something like 21% was the theoretical best, and a good real world increase was about 15%. So to get 14% is pretty good going.
Quote: I had a render that took 17 hours on one core and 14.5 with 2 cores. I also have 3GB of RAM, so my computer has the extra memory needed for another thread.
It's certainly worth experimenting with if you have the patience :-). I believe we found that different scenes were affected differently; at a guess, more complex scenes might benefit more than simpler ones. We didn't decide to "disable" hyperthreading without reason.
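For what it's worth, the 17-hour vs. 14.5-hour render quoted above works out to just under 15%, right in line with the figures discussed in this thread:

```python
one_thread_hours = 17.0   # figures from dwilson's post above
two_thread_hours = 14.5

# Fractional reduction in wall-clock time from adding the hyperthreaded core
improvement = 1 - two_thread_hours / one_thread_hours
print(f"wall-time reduction: {improvement:.1%}")  # about 14.7%
```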
When I render using 1 thread on my P4 hyperthreading computer, I have found that adjusting the task priority of Terragen 2 to 'Above Normal' speeds up the renders. The other logical processor can handle the OS and other things well enough not to cause instability.
I've just got a Core i7 of my own, and it's definitely faster with 8 threads vs. 4. The caveat is that this either A: reduces cache size per thread (which can theoretically reduce performance) or B: increases memory use (if you increase the total cache size to compensate for the additional threads). So far I've found it's still a "win" to go down to 50MB per thread with 8 threads (400MB cache, the default). But that may not be true in more complex scenes. Regardless, this i7 is a beast! ;D
P.S. Reporting 4 cores (as Jo described should happen since we're ignoring hyperthread "cores" for now) on Vista x64.
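The per-thread figures above are just the total subdivision cache divided by the thread count. A quick check using the 400MB default and the thread counts from the post above:

```python
def cache_per_thread_mb(total_cache_mb, threads):
    """Subdivision cache available to each render thread, assuming an even split."""
    return total_cache_mb / threads

print(cache_per_thread_mb(400, 8))  # 50.0 MB per thread with hyperthreading on
print(cache_per_thread_mb(400, 4))  # 100.0 MB per thread with physical cores only
```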
We're having a similar problem but in a different configuration. On our render nodes we're running tgdcli.exe from Deadline, and it seems to find only 2 cores on these machines:
0: STDOUT: <<<### APP RUN STARTED ###>>>
0: STDOUT: Terragen 2 v2.0 (build 126.96.36.199)
0: STDOUT: Licensed to Polygon Pictures Inc.
0: STDOUT: Found 2 processor cores.
We did check the Render Node's advanced tab, and maximum threads is set to 16. Our render machines run Windows XP 64-bit Professional and have 2 Quad Core Intel L5430s for a total of 8 physical cores (just in case someone asks, our other rendering programs successfully use all 8 cores on these machines).
- Our command line (from Deadline) is below. Are there any extra or different command line parameters we should be using? I haven't been able to find the documentation of the tgdcli.exe command line parameters.
- I launched interactive Terragen 2 on the render node, and we have not overridden the # threads setting in preferences.
- Apart from the Render Node's Advanced tab, are there any other settings we should be checking in the Terragen scene or in the preferences?
Thanks for any ideas, and apologies if this has all been hashed out before and I just couldn't find it...
tgdcli.exe -p "c16_w_mask.tgd" -hide -exit -r -f 590 -o "f:\c16\terragen_mask\c16_w_mask.%04d.bmp" -ox "f:\c16\test\c16_w02\c16_w02.IMAGETYPE.%04d.bmp"
It's strange that only 2 cores are being found. At this time your best option is going to be to use the preferences to explicitly set the number of threads and override the core detection code. You can use the TG2 GUI to configure the preferences how you want and then just copy the preferences file out to all the render machines. For a typical install the TG2 preferences file can be found at:
C:\Documents and Settings\<user name>\Application Data\uk.co.planetside\Terragen_2\preferences.xml
Actually, if you haven't changed the prefs for any of the render machines you may need to copy the whole uk.co.planetside folder hierarchy as well, to the appropriate Application Data folder on the render machines. That way TG2 will find the preferences file when it looks for it at startup.
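If you have more than a couple of render machines, the copying step above is easy to script. Here's a minimal sketch in Python; the destination roots (e.g. admin shares mounted per node) are assumptions about your farm's layout, not anything Terragen requires:

```python
import shutil
from pathlib import Path

def push_prefs(src, dest_roots):
    """Copy the whole uk.co.planetside folder into each destination
    Application Data root, overwriting any existing copy."""
    src = Path(src)
    for root in dest_roots:
        dest = Path(root) / "uk.co.planetside"
        shutil.copytree(src, dest, dirs_exist_ok=True)  # needs Python 3.8+

# Hypothetical usage -- the UNC paths below are placeholders for your nodes:
# push_prefs(
#     r"C:\Documents and Settings\artist\Application Data\uk.co.planetside",
#     [r"\\rendernode01\C$\Documents and Settings\render\Application Data",
#      r"\\rendernode02\C$\Documents and Settings\render\Application Data"],
# )
```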
Given the apparent difficulty of reliably detecting the number of cores available on Windows we may need to look at adding a CLI param for this, although that isn't much help to you at the moment.
Remember that it's *minimum* threads that you'd want to set higher.
Ah, I didn't think of the minimum threads render node setting! That was dumb of me; it's much less hassle than propagating the preferences across the render farm. There are pros and cons, though. If you just set the minimum threads in the render node, you need to remember to do it for each project, whereas the preferences method makes sure the desired number of threads is used for every project without needing to change the render node's threading settings.
Back in the office after a busy Friday/weekend. I'll try to implement the preferences solution -- as you said, I think that makes more sense in a RenderFarm environment. We know how many cores each machine has!
Just to follow up with everyone, this worked out pretty well. The preferences file lets us set the number of cores up on the farm, but leave it normal in the project file for local rendering.
In addition, to get any benefit out of the extra cores we had to up the subdivision cache size substantially (I think the TD is currently running with around 2GB). It's a bummer to have the total subdivision cache size set in the project file; I think it would be a lot more portable if that setting were the per-thread size. But in any case, we're getting jobs through the farm a lot faster now!
Glad to hear it worked out! I'm surprised that increasing the subdivision cache had such a large effect though. In most cases I've found that it hardly changes the render times. It would be interesting if you could provide some specific numbers, or better yet a test version of your scene which shows the difference. It might help us in optimizing in the future.