Nvidia: Moore's Law is Dead, Multi-core Not Future

Started by Kadri, May 06, 2010, 08:56:29 AM


neuspadrin

It was obvious Moore's law would die off eventually at that rate; it was never really a law, more an observation that held up remarkably well for as long as it did. They've basically maxed out the power they can pump into a chip, and they've maxed out the frequencies they can run without melting it right out from under you. Multicore was really all they had left to go to.

And really, Mr. GPU man? Multicore not the way to go? Odd, considering GPU architectures are themselves multi/many-core; that seems hypocritical. And CPUs are too heavy? Seriously? My GPU draws far more power than my CPU does. CPUs have become remarkably efficient lately.

Also, yes, programming needs to move in a more parallel direction, but GPUs have only ever dealt with graphics, which is super easy to split up into nice independent sections. General programming isn't that easy to parallelize. Multicore CPUs really make a ton of sense given the current market: more people will benefit from a few fast cores than from a TON of slower ones.
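
To make that concrete, here's a rough sketch (purely illustrative, not real rendering code) of why per-pixel work splits up so naturally while a lot of general code doesn't:

// Illustrative CUDA sketch: each pixel's value depends only on its own
// coordinates, so thousands of these threads can run at once.
__global__ void shade(float *out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = ((float)x / width) * ((float)y / height);
}

// Contrast: a running total where each step needs the previous result.
// This loop-carried dependency is what makes a lot of "general" code
// hard to split across cores.
float running_total(const float *in, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum = sum * 0.5f + in[i];   // depends on the previous iteration
    return sum;
}

The kernel has no dependencies between pixels, so it scales with however many cores you throw at it; the second loop doesn't, no matter how many cores you have.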

nikita

I wonder whether the guy who wrote the caption actually read the article.

leafspring

Quote from: neuspadrin on May 06, 2010, 10:33:06 AM
And really, Mr. GPU man? Multicore not the way to go? Odd, considering GPU architectures are themselves multi/many-core; that seems hypocritical.
First of all, he says that multicore systems with a focus on serial processing are not the way to go. Streaming processors (or CUDA cores) are optimized for parallel, not serial, processing. No hypocrisy here.

Quote from: neuspadrin on May 06, 2010, 10:33:06 AM
And CPUs are too heavy? Seriously? My GPU draws far more power than my CPU does. CPUs have become remarkably efficient lately.
If you compare CPU power consumption with that of a graphics card, remember to add memory and mainboard to the equation, since the graphics card has to power its RAM and board too.
And sure, a Core i7 needs only 130 W (without motherboard and RAM) compared to the 180 W of a GTX 260, but efficiency isn't about overall power consumption; it's about what you can achieve with it. For tasks that are parallelizable (is that even a word? ^^), a GPU outperforms any CPU while using about the same amount of energy (and current GPUs aren't even fully optimized for general-purpose work), and his statement was specifically about parallel computing: "Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs".
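
A quick back-of-envelope on that "energy per instruction" point, using the wattages above; the throughput figures are my own rough assumptions, not measurements:

// Rough sketch: energy per operation = power / throughput.
// 130 W and 180 W are the figures quoted above; the ops/sec numbers are
// assumed ballpark values for illustration only.
#include <stdio.h>

int main(void)
{
    double cpu_watts = 130.0;          /* Core i7, from above */
    double gpu_watts = 180.0;          /* GTX 260, from above */
    double cpu_ops_per_sec = 5.0e10;   /* assumed: a few cores * ~3 GHz * a few ops/cycle */
    double gpu_ops_per_sec = 5.0e11;   /* assumed: order of GTX 260 peak FLOPS */

    printf("CPU: ~%.2f nJ per op\n", 1e9 * cpu_watts / cpu_ops_per_sec);
    printf("GPU: ~%.2f nJ per op\n", 1e9 * gpu_watts / gpu_ops_per_sec);
    return 0;
}

On those (admittedly hand-waved) numbers, the GPU lands roughly an order of magnitude lower per operation on a parallel workload, which is exactly the point the quote is making.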

Long, long is the road for Aslaug
Long does fortune wait for Kraka