Good News!

Started by Jack, April 17, 2010, 12:54:52 AM


Jack

Thank you Oshyan for clearing this up ;D
I will be getting a 650W Cooler Master PSU,
a Core i7 860 with the cheaper motherboard,
and 8GB of RAM ;)
My terragen gallery:
http://wetbanana.deviantart.com/

PabloMack

#31
Quote from: domdib on April 17, 2010, 04:44:04 AM
Yes, remember the Core i7 has hyperthreading, so it's effectively like having an 8-core processor. I don't think the AMD has it (but please, correct me if I'm wrong)

A lot of people are saying that hyper-threading effectively doubles the number of cores, and this isn't true.  All hyper-threading does is perform more of the context switching in hardware when a thread change needs to happen.  It doesn't do any of the thread's application processing.  According to Intel, hyper-threading can improve performance by about 15-30% (corrected).  But performance can even be worse with hyper-threading.  This improvement does not come close to the gain you would get by doubling the number of cores.  However, when your app thinks it has 8 threads, twice as much memory has to be allocated for stacks.  This can put the squeeze on your 32-bit app's address space and make your program more unstable if you are rendering a complex scene.  As you say, Deneb does not have any hyper-threading, so there is more software overhead for context switches.  Thuban (to be released on April 26) is supposed to have some acceleration, but details are sketchy.  I don't think hyper-threading accounts for much of the performance advantage the i7 has over Phenom II.  AMD is supposed to have full-fledged hyper-threading when Bulldozer arrives late this year or next year.  Once we are running the 64-bit TG2, hyper-threading will not put the squeeze on the app's address space, given you have plenty of RAM.  But I think by then AMD will have this technology on the market.
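To put numbers on that squeeze, here is a minimal C sketch (assuming Linux/pthreads; the 8 MB default stack and the 8-thread count are illustrative figures, not TG2 internals) of how per-thread stacks eat address space, and how an application can shrink the reservation:

```c
/* Sketch: per-thread stack cost in a 32-bit process. Assumes
   Linux/pthreads; the 8 MB default stack is a common glibc value,
   used here for illustration only. Build with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

#define DEFAULT_STACK (8UL * 1024 * 1024)  /* typical default reservation */
#define SMALL_STACK   (1UL * 1024 * 1024)  /* explicit, smaller reservation */

static void *render_worker(void *arg) {
    /* placeholder for one render thread's work */
    return arg;
}

int main(void) {
    unsigned long threads = 8;  /* what the app sees with HT enabled */
    printf("%lu threads x 8 MB stacks = %lu MB of a ~2 GB 32-bit space\n",
           threads, threads * DEFAULT_STACK / (1024 * 1024));

    /* Asking for smaller stacks frees address space for scene data. */
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, SMALL_STACK);

    pthread_t tid;
    pthread_create(&tid, &attr, render_worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

With these illustrative numbers, eight default stacks reserve 64 MB; modest on its own, but it is address space a complex scene near the 2 GB ceiling can no longer use.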

Don't forget that, if it were not for AMD, the i7 wouldn't even exist.  Remember the Itanic?  Competition is good for the industry.  I, for one, like to have the freedom of choice.  

old_blaggard

Not quite.

Hyperthreading is Intel's proprietary term for the general chip-design concept of simultaneous multithreading (SMT). In SMT, you duplicate various elements of the processor pipeline, allowing more than one thread to occupy a pipeline stage at the same time. This means there are a lot of variables in the equation: the number of duplicated elements, the length of the pipeline, the size of the caches, and cache and memory response time.

The number you cited almost certainly comes from the Pentium 4 era of hyperthreading. With the Core i7's improvements in memory management, larger caches, and redesigned pipeline, we almost certainly get more. The Wikipedia article on hyperthreading cites 30%, but I remember hearing reports when the i7 first came out that gains of 50% to nearly 100% were routine for highly threaded applications like Terragen. Anecdotally, I can certainly tell a huge difference between rendering on 8 cores and 16 cores on my Mac Pro, even though technically 8 of those 16 cores are virtual.

http://en.wikipedia.org/wiki/Simultaneous_multithreading
http://en.wikipedia.org/wiki/Superscalar
http://en.wikipedia.org/wiki/Intel_Nehalem_(microarchitecture)
http://en.wikipedia.org/wiki/Hyper-Threading
http://www.terragen.org - A great Terragen resource with models, contests, galleries, and forums.

PabloMack

#33
I read the same Wikipedia articles, but I misquoted the improvement percentages.  Thank you for the correction (now edited into my earlier post).  It was Intel's numbers, which estimate the difference in performance due to hyper-threading alone, that I was trying to quote from memory.  Unless you know how to explicitly turn off hyper-threading on your machine, you will not know how much of your performance gain is due to hyper-threading alone.  I would think that Intel improved their pipelines as well, which probably accounts for part of the gains that early benchmarkers were seeing.  Most of the improvement, though, came from multi-threading making use of multiple cores that the P4 didn't have.
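For anyone who wants to isolate hyper-threading without a BIOS switch, one rough trick is to confine the process to one logical CPU per physical core and run the same render both ways. A Linux-only C sketch; it assumes logical CPUs 0..N/2-1 sit on distinct physical cores, which you should verify against /sys/devices/system/cpu/cpuN/topology/thread_siblings_list:

```c
/* Sketch: approximate "hyper-threading off" by pinning the process
   to one logical CPU per physical core. Linux-only; the core
   numbering is an assumption to verify per machine. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long logical  = sysconf(_SC_NPROCESSORS_ONLN);
    long physical = logical / 2;   /* assumption: 2-way SMT throughout */

    cpu_set_t set;
    CPU_ZERO(&set);
    for (long cpu = 0; cpu < physical; cpu++)
        CPU_SET((int)cpu, &set);

    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to %ld of %ld logical CPUs; run the benchmark now\n",
           physical, logical);
    /* launch or call the render benchmark from here */
    return 0;
}
```

Comparing that run against an unpinned one gives a decent estimate of what hyper-threading alone contributes.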

penang

Quote from: PabloMack on April 23, 2010, 01:05:26 PM
Quote from: domdib on April 17, 2010, 04:44:04 AM
Yes, remember the Core i7 has hyperthreading, so it's effectively like having an 8-core processor. I don't think the AMD has it (but please, correct me if I'm wrong)
A lot of people are saying that hyper-threading effectively doubles the number of cores, and this isn't true.  All hyper-threading does is perform more of the context switching in hardware when a thread change needs to happen.  It doesn't do any of the thread's application processing.  According to Intel, hyper-threading can improve performance by about 15-30% (corrected).  But performance can even be worse with hyper-threading.  This improvement does not come close to the gain you would get by doubling the number of cores.  However, when your app thinks it has 8 threads, twice as much memory has to be allocated for stacks.  This can put the squeeze on your 32-bit app's address space and make your program more unstable if you are rendering a complex scene.
Applause!

Hyper-threaded machines can perform VERY WELL if the threads are used to perform VERY SIMPLE tasks, like sending out a few lines of JavaScript, for example.

Intel developed hyper-threading technology for the server market, where busy servers need to serve hundreds of thousands to millions of hits per second.

But if a hyper-threaded CPU is used for heavy-duty work like what TG2 requires, I have serious doubts about the ability of Intel's hyper-threading technology to keep up with the millions to billions of double-precision floating-point calculations needed per thread.
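That doubt is easy to probe, because the two hyper-threads on a core share one set of floating-point execution units: a dense double-precision loop should scale poorly from 4 to 8 threads on a quad-core i7. A minimal C sketch (the workload and iteration count are arbitrary stand-ins for render math):

```c
/* Sketch: dense double-precision workload for comparing thread
   counts (e.g. ./a.out 4 vs. ./a.out 8 on a quad-core i7).
   Build with: gcc -O2 -pthread (and -lrt on older systems). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERATIONS 100000000UL

static void *fp_worker(void *arg) {
    double x = 1.0;
    for (unsigned long i = 0; i < ITERATIONS; i++)
        x = x * 1.0000001 + 1e-9;      /* dependent chain keeps FPU busy */
    *(double *)arg = x;                /* defeat dead-code elimination */
    return NULL;
}

int main(int argc, char **argv) {
    int n = argc > 1 ? atoi(argv[1]) : 4;
    pthread_t *tids  = malloc(n * sizeof *tids);
    double    *sinks = malloc(n * sizeof *sinks);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, fp_worker, &sinks[i]);
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("%d threads: %.2f s\n", n,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    free(tids);
    free(sinks);
    return 0;
}
```

If 8 threads finish in about the same wall-clock time as 4 (double the work in the same time), hyper-threading is pulling real weight; if 8 threads take nearly twice as long, the shared FP units were already saturated.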

Quote from: PabloMack on April 23, 2010, 01:05:26 PM
As you say, Deneb does not have any hyper-threading, so there is more software overhead for context switches.  Thuban (to be released on April 26) is supposed to have some acceleration, but details are sketchy.  I don't think hyper-threading accounts for much of the performance advantage the i7 has over Phenom II.  AMD is supposed to have full-fledged hyper-threading when Bulldozer arrives late this year or next year.  Once we are running the 64-bit TG2, hyper-threading will not put the squeeze on the app's address space, given you have plenty of RAM.  But I think by then AMD will have this technology on the market.

Don't forget that, if it were not for AMD, the i7 wouldn't even exist.  Remember the Itanic?  Competition is good for the industry.  I, for one, like to have the freedom of choice.
I thought Bulldozer wouldn't arrive before 2011?

Tangled-Universe

Well penang, the facts are quite simple:
I have a 2.4GHz Q6600, and FrankB and others have an i7 920 @ 2.66GHz.
Their renders are at least 2x faster than mine, and sometimes even a bit more.

nikita

Someone with HT should do a comparison render with and without hyperthreading, if it hasn't already been done.

As for the PSU: some manufacturers have a tool on their website to calculate how much power you'll need.
I myself got a 380W PSU, and I'm running a Q6700 quad-core plus a GF 7900GS without any problems.
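Those calculators basically sum rough per-component figures and add headroom. A back-of-the-envelope version in C; every wattage below is an illustrative guess, not a measurement:

```c
/* Sketch: back-of-the-envelope PSU sizing. All wattages are rough
   illustrative guesses; vendor calculators use per-model tables. */
#include <stdio.h>

int main(void) {
    int cpu         = 95; /* quad-core TDP, Q6700-class */
    int gpu         = 80; /* mid-range card under load */
    int board_ram   = 50; /* motherboard + memory */
    int drives_fans = 40; /* disks, optical, fans */

    int load = cpu + gpu + board_ram + drives_fans;
    /* aim to run the PSU at ~70% of its rating for headroom */
    printf("estimated load: %d W -> suggested PSU: ~%d W\n",
           load, load * 10 / 7);
    return 0;
}
```

With these particular guesses it comes out around 380W, which is consistent with what I'm running.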

If you want a silent PC, remember that the PSU and graphics card have (possibly loud) coolers of their own too.

Henry Blewer

I would go with at least a 750W power supply. Graphics cards run in series (SLI/CrossFire) and some other peripherals can draw a lot of power. Buying a powerful power supply now will pay off in future upgrades.
http://flickr.com/photos/njeneb/
Forget Tuesday; It's just Monday spelled with a T

PabloMack

#38
Quote from: penang on April 24, 2010, 03:39:43 AM
I thought Bulldozer wouldn't arrive before 2011?

This is probably true, but 2011 is now little more than 8 months away, and what any one system will be able to do with TG2 in the meanwhile will be limited.  AMD seems to be adding more cores to the Phenom II line step by step, but 45nm can only take that so far.  Without something like hyper-threading, Phenom II should have more silicon real estate to work with, so AMD can put more real cores on a die than Intel can at the same scale of geometry.  As for TG2, the single thread that is used to construct the pre-render window seems to need the boost from hyper-threading to make that single thread run faster.  People with i7s are benefiting from this small margin of improvement.  But AMD is working on clocking up single threads automatically to take up the slack in near-future offerings.  I saw an article that said someone over-clocked a Phenom II X4 to run reliably at over 7 GHz and claimed it was a world record.  That system, though, was set up with liquid cooling.  It is good to know that the silicon can go much faster than it is being pushed, and that heat buildup is the limiting factor in the Phenom II line.

I do think that Bulldozer involves a new core design, because it will have some sort of multi-threading capability to address the hyper-threading issue.  But AMD is planning to put a GPU on the same die.  This could speed up processing by much more than what you see with today's GPUs once software like Photoshop starts to use it.  I've been reading semiconductor news, and it seems there are more problems with going to 32nm than people realize.  It involves immersion lithography processes that depend on equipment made by suppliers that customers like Intel and AMD rely on, and the technology is not there yet.  Apparently there is considerable cost involved in the development, and no one is stepping up to the plate.  The article seemed to be addressing the often-cited coming end of "Moore's Law".  I think the industry could be approaching the end of what light can do at such small scales.  What is the next step?  X-rays or scanning electron beams?

freelancah

I bought a setup like this a few months ago. It might suit you too:

Processor:    Core i7 920 2.66GHz LGA1366
Case:         P183
Motherboard:  P6T, X58, LGA1366, DDR3, SLI/Crossfire, ATX
Cooler:       Prolimatech Megahalems, LGA775/LGA1156/LGA
PSU:          Corsair 650W 650HX, ATX2.2 80+
Graphics:     ATI Radeon HD 4650 XXX, 512 MB DDR2
RAM:          6 x 2GB Extreeme Dark Tri-Channel kit, DDR3 1333

The price was a little less than 1000 at that time. I suspect you'll have to drop one of those RAM kits and go with 6GB, since the price has almost doubled. I paid 99 for a 3x2GB kit.

PabloMack

#40
Quote from: penang on April 24, 2010, 03:39:43 AM
But if a hyper-threaded CPU is used for heavy-duty work like what TG2 requires, I have serious doubts about the ability of Intel's hyper-threading technology to keep up with the millions to billions of double-precision floating-point calculations needed per thread.

I did find that TG2 lets you override the number of cores the application sees, in the Preferences menu.  For people with memory limitations, you can work the threads/render-time tradeoff yourself to try to optimize.  But with more than one variable at work here, it is up to the OS to decide how to balance the processing between real cores and these "pseudo-cores" (or maybe the hardware just takes care of it and it is beyond the control of the OS; I don't know).  One i7 pseudo-core may be less powerful than one Phenom II real core, so AMD users might be getting more processing power per unit of stack space on final renders.  But then, stack usage is not an issue for scenes that are not pushing memory limits.  I wonder, do i7 users see better performance on the single-threaded process of updating the pre-render window?  And can a single thread benefit from hyper-threading when the rest of the system is idle?
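The tradeoff can be framed as plain arithmetic before you ever touch Preferences: each render thread costs a fixed slice of address space, so a 32-bit cap bounds the useful thread count. A C sketch with purely illustrative numbers (TG2's actual per-thread overhead isn't published, so these are assumptions):

```c
/* Sketch: bounding render threads by 32-bit address space.
   Every figure is an illustrative assumption, not a TG2 internal. */
#include <stdio.h>

int main(void) {
    unsigned long addr_space = 2048; /* MB usable by a 32-bit app */
    unsigned long scene_data = 1700; /* MB for a heavy scene */
    unsigned long per_thread = 60;   /* MB of stack + buffers per thread */
    unsigned long cores      = 8;    /* logical cores the app reports */

    unsigned long fit = (addr_space - scene_data) / per_thread;
    unsigned long threads = fit < cores ? fit : cores;
    printf("%lu MB to spare -> cap the render at %lu threads\n",
           addr_space - scene_data, threads);
    return 0;
}
```

With these numbers the cap lands at 5 threads, below the 8 the hardware advertises; that is exactly the situation where overriding the core count in Preferences buys stability.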

piggy

Quote from: PabloMack on April 24, 2010, 09:34:47 AM
Quote from: penang on April 24, 2010, 03:39:43 AM
I thought Bulldozer wouldn't arrive before 2011?
This is probably true, but 2011 is now little more than 8 months away, and what any one system will be able to do with TG2 in the meanwhile will be limited.  AMD seems to be adding more cores to the Phenom II line step by step, but 45nm can only take that so far.
The 45nm versions of Bulldozer are trial runs. These have been taped out and will be produced later this year. Their 32nm cousins will have to wait.

AMD still owns part of GlobalFoundries, and GlobalFoundries is going to produce 28nm FPGA chips for Altera starting August or September of this year.

What this means is that AMD is willing to wait just a bit longer: wait until they gain some first-hand experience from the 28nm production for Altera, and then apply it to the 32nm design of Bulldozer that will come out next year.
Quote from: PabloMack on April 24, 2010, 09:34:47 AM
Without something like hyper-threading, Phenom II should have more silicon real estate to work with, so AMD can put more real cores on a die than Intel can at the same scale of geometry.  As for TG2, the single thread that is used to construct the pre-render window seems to need the boost from hyper-threading to make that single thread run faster.  People with i7s are benefiting from this small margin of improvement.  But AMD is working on clocking up single threads automatically to take up the slack in near-future offerings.  I saw an article that said someone over-clocked a Phenom II X4 to run reliably at over 7 GHz and claimed it was a world record.  That system, though, was set up with liquid cooling.  It is good to know that the silicon can go much faster than it is being pushed, and that heat buildup is the limiting factor in the Phenom II line.

I do think that Bulldozer involves a new core design, because it will have some sort of multi-threading capability to address the hyper-threading issue.
Bulldozer represents the next chapter for AMD.

AMD has been riding on the Athlon 64 architecture for the past 9 years. Almost every core from AMD, from the Athlon onwards, has been based on the Athlon architecture.
Quote from: PabloMack on April 24, 2010, 09:34:47 AM
But AMD is planning to put a GPU on the same die.
Hmm... This is the first time I've heard that.

I do not think AMD will stick a GPU on the Bulldozer architecture itself. Sure, some Bulldozer-derived CPUs may have an ATI GPU glued to them, but it isn't AMD's intention to serve Bulldozer only to the gaming market.

AMD's aim for Bulldozer is data centers and the supercomputing industry, where massive, heavy-duty data/number crunching is the topmost priority.
Quote from: PabloMack on April 24, 2010, 09:34:47 AM
This could speed up processing by much more than what you see with today's GPUs once software like Photoshop starts to use it.  I've been reading semiconductor news, and it seems there are more problems with going to 32nm than people realize.  It involves immersion lithography processes that depend on equipment made by suppliers that customers like Intel and AMD rely on, and the technology is not there yet.  Apparently there is considerable cost involved in the development, and no one is stepping up to the plate.  The article seemed to be addressing the often-cited coming end of "Moore's Law".  I think the industry could be approaching the end of what light can do at such small scales.  What is the next step?  X-rays or scanning electron beams?
GlobalFoundries, Intel, Micron, IBM, and TSMC are first-tier fabs. They get first priority for immersion technology.

GlobalFoundries, being filled to the brim with oil money from the Arab countries, already has plenty of the new equipment installed; that is why they have signed on Altera to produce the 28nm FPGAs.

Immersion lithography uses deep-ultraviolet light, which is very close to X-rays. It's safe to say that deep ultraviolet is usable down to the 15nm generation. Beyond that it would be X-ray (radiation) lithography, and beyond THAT it would be gamma rays (heavy radiation).

I do not foresee that happening, though.

By the time 15nm hits, bottom-up technology, a.k.a. nanotech, should have ripened. In the future, electronic chips will be "grown" particle by particle, line by line, gate by gate, instead of "etched" the way they are today.

PabloMack

#42
Quote from: piggy on April 24, 2010, 11:12:42 PM
Quote from: PabloMack on April 24, 2010, 09:34:47 AM
But AMD is planning to put a GPU on the same die.
Hmm... This is the first time I've heard that.

Watch this video: http://sites.amd.com/us/fusion/Pages/index.aspx

Sure, the video is 90% marketing...

So far AMD has been pretty vague about when they plan to do this.  Maybe it is planned for some generation after Bulldozer.  But when they do it, parallel performance could go up quite dramatically.  GPUs have already been proven to speed up many applications by 10x with existing graphics boards.  I have been reading about CUDA and OpenCL, and one thing that keeps GPUs from improving parallel performance even further is the delay from the serial communication between GPU and CPU.  Once the GPU is on the same die, much of this delay can be eliminated.  Let's hope things go pretty smoothly down to 11nm.  I don't think the industry has all the problems figured out yet, though.
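The delay is easy to put ballpark numbers on: compare the time to push a buffer across PCIe against the time the GPU needs to chew through it. A C sketch; the bandwidth and throughput figures are era-appropriate guesses, not measurements:

```c
/* Sketch: why CPU<->GPU transfer can dominate short kernels.
   Bandwidth and throughput numbers are ballpark assumptions
   for ~2010 hardware, not measurements. */
#include <stdio.h>

int main(void) {
    double buffer_gb   = 0.5;   /* data shipped to the GPU */
    double pcie_gb_s   = 6.0;   /* ~PCIe 2.0 x16, effective */
    double gpu_gflops  = 500.0; /* sustained throughput */
    double flops_per_b = 10.0;  /* arithmetic intensity of the kernel */

    double copy_s    = buffer_gb / pcie_gb_s;
    double compute_s = buffer_gb * flops_per_b / gpu_gflops;
    printf("copy %.3f s vs. compute %.3f s -> copy is %.1fx the compute\n",
           copy_s, compute_s, copy_s / compute_s);
    return 0;
}
```

At that arithmetic intensity the bus transfer takes several times longer than the computation itself, which is exactly the overhead an on-die GPU would mostly remove.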