Started by Jack, April 17, 2010, 12:54:52 am
Quote from: domdib on April 17, 2010, 04:44:04 am
Yes, remember the Core i7 has hyper-threading, so it's effectively like having an 8-core processor. I don't think the AMD has it (but please, correct me if I'm wrong).
Quote from: PabloMack on April 23, 2010, 01:05:26 pm
Quote from: domdib on April 17, 2010, 04:44:04 am
Yes, remember the Core i7 has hyper-threading, so it's effectively like having an 8-core processor. I don't think the AMD has it (but please, correct me if I'm wrong).
A lot of people are saying that hyper-threading effectively doubles the number of cores, and this isn't true. All hyper-threading does is more context switching in hardware when a thread change needs to happen; it doesn't do any of the thread's application processing. According to Intel, hyper-threading can improve performance by about 15-30% (corrected), and performance can even be worse with hyper-threading. This improvement does not come close to the gain you would get by doubling the number of cores. However, when your app thinks it has 8 threads, twice as much memory has to be allocated for stacks. This can put the squeeze on your 32-bit app's address space and make your program more unstable if you are rendering a complex scene.
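The arithmetic behind that point can be sketched in a few lines. This is only an illustration, not a benchmark: the 25% figure below is just the midpoint of the 15-30% range quoted above, and the function name and numbers are assumptions, since real gains vary heavily by workload.

```python
def throughput(physical_cores, smt_gain=0.25):
    """Rough throughput relative to one core.

    Hyper-threading adds a fractional gain per physical core, not a
    doubling. smt_gain=0.25 is an assumed midpoint of Intel's quoted
    15-30% range; real workloads vary.
    """
    return physical_cores * (1 + smt_gain)

# A hyper-threaded quad-core looks like 8 logical cores to the OS,
# but behaves nothing like 8 real cores:
print(throughput(4))  # ~5.0x one core
print(2 * 4)          # 8x: what doubling the real cores would give
```

So an i7 reporting 8 logical processors is, by this rough estimate, closer to a 5-core machine than an 8-core one for compute-bound rendering.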
Quote from: PabloMack on April 23, 2010, 01:05:26 pm
As you say, Deneb does not have any hyper-threading, so there is more software overhead for context switches. Thuban (to be released on April 26) is supposed to have some acceleration, but details are sketchy. I don't think hyper-threading accounts for much of the performance advantage the i7 has over Phenom II. AMD is supposed to have full-fledged hyper-threading when Bulldozer arrives late this year or next year. Once we are running the 64-bit TG2, hyper-threading will not put the squeeze on the app's address space, provided you have plenty of RAM. But I think by then AMD processors will have this technology on the market. Don't forget that, if it were not for AMD, the i7 wouldn't even exist. Remember the Itanic? Competition is good for the industry. I, for one, like to have the freedom of choice.
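To see why 64-bit relieves the stack squeeze, compare the slice of address space that thread stacks alone reserve in each case. All numbers here are assumptions for illustration (a common 1 MiB per-thread stack reservation, a 2 GiB 32-bit Windows user space, and a 128 TiB x86-64 user space), not TG2's actual settings:

```python
MIB = 1024 * 1024

STACK = 1 * MIB                    # assumed per-thread stack reservation
ADDR_32 = 2 * 1024 * MIB           # 2 GiB user space (typical 32-bit Windows)
ADDR_64 = 2 ** 47                  # 128 TiB user space (typical x86-64)

def stack_fraction(threads, addr_space):
    """Fraction of the user address space reserved by thread stacks alone."""
    return threads * STACK / addr_space

# 8 worker threads (one per logical core on a hyper-threaded quad-core):
print(f"{stack_fraction(8, ADDR_32):.4%}")   # a measurable slice of 32-bit space
print(f"{stack_fraction(8, ADDR_64):.10%}")  # vanishingly small on 64-bit
```

Stacks are only one of many competing reservations in a 32-bit process (heap, memory-mapped files, DLLs), which is why doubling the thread count can tip a large render over the edge there while being a non-issue on 64-bit.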
Quote from: penang on April 24, 2010, 03:39:43 am
I thought Bulldozer won't arrive before 2011?
Quote from: penang on April 24, 2010, 03:39:43 am
But if a hyper-threaded CPU is used for heavy-duty work like what TG2 requires, I have serious doubts about the ability of Intel's hyper-threading technology to handle millions to billions of double-precision floating-point calculations per thread.
Quote from: PabloMack on April 24, 2010, 09:34:47 am
Quote from: penang on April 24, 2010, 03:39:43 am
I thought Bulldozer won't arrive before 2011?
This is probably true, but 2011 is now little more than 8 months away. What any one system will be able to do with TG2 in the meanwhile will be quite limited. AMD seems to be adding more cores step-wise to its Phenom II offerings, but 45nm can only take that so far.
Quote from: PabloMack on April 24, 2010, 09:34:47 am
Without something like hyper-threading, Phenom II should have more silicon real estate to work with, so AMD can put more real cores on a die than Intel can at the same scale of geometry. As for TG2, the single thread that constructs the pre-render window seems to need the boost from hyper-threading to run faster, and people with i7s are benefiting from this small margin of improvement. But AMD is working on automatically clocking up single threads to take up the slack in near-future offerings. I saw an article that said someone over-clocked a Phenom II X4 to run reliably at over 7 GHz and claimed it was a world record; that system, though, was set up with liquid cooling. It is good to know that the silicon can go much faster than it is being pushed and that heat buildup is the limiting factor in the Phenom II line. I do think Bulldozer involves a new core design, because it will have some sort of multi-threading capability to address the hyper-threading issue.
Quote from: PabloMack on April 24, 2010, 09:34:47 am
But AMD is planning to put a GPU on the same die.
Quote from: PabloMack on April 24, 2010, 09:34:47 am
This could speed up processing by much more than what you see with today's GPUs once software companies like PS start to use it. I've been reading semiconductor news, and it seems there are more problems with going to 32nm than people realize. It involves immersion processes that depend on equipment made by suppliers that customers like Intel and AMD rely on, and the technology is not there yet. Apparently there is a considerable cost involved in the development, and no one is stepping up to the plate. The article seemed to be addressing the often-cited coming end of "Moore's Law". I think the industry could be approaching the limit of what light can do at such small scales. What is the next step? X-ray or scanning electron beams?
Quote from: piggy on April 24, 2010, 11:12:42 pm
Quote from: PabloMack on April 24, 2010, 09:34:47 am
But AMD is planning to put a GPU on the same die.
Hmm... This is the first time I've heard that.