Rendering via CUDA

Started by PorcupineFloyd, October 01, 2009, 07:25:47 AM


Kadri

The next 3-4 builds?

Just kidding :D

Oshyan, maybe you can't say anything about this, but is there any more relaxed object handling
(Jo said something about TG2's OBJ handling being more standard, while other 3D programs generate OBJ files more loosely)
or are more file formats in the works?

Sorry, it's off topic, but since you mentioned it I wanted to ask.

Kadri.

Oshyan

Object handling has not yet been updated, but we plan it as a likely part of finalizing the Animation Module, since it helps with interoperability.

- Oshyan


penang

Quote from: PG on November 07, 2009, 04:44:27 PM
Quote from: penang on November 06, 2009, 11:29:22 PM
Nvidia doesn't have any GPU that can do double precision yet (maybe they will have one available by 2011), but ATI does now.

The GTX 300 series has double precision, and CUDA 3.0 will fully utilise it too. They also account for about 70% of the GPU market.
Yes, the GTX 300 has double precision, but it is still something that no one has been able to purchase yet.

The complications with TSMC's 40nm process might delay the deployment of the GTX 300 even further, until the middle of next year.
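
For anyone who wants to check what their own Nvidia card supports today, the CUDA runtime reports a "compute capability" per device, and double precision in kernels requires compute capability 1.3 or higher. A minimal sketch using the standard CUDA runtime API (this only checks capability, it says nothing about how fast double precision actually runs):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);

        /* Double precision in kernels needs compute capability 1.3 or higher. */
        int hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);

        printf("Device %d: %s (compute %d.%d), double precision: %s\n",
               i, prop.name, prop.major, prop.minor, hasDouble ? "yes" : "no");
    }
    return 0;
}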

matrix2003

InstinctTech DogFighter CudaDemo:  4000 individual objects utilizing CUDA to navigate and avoid each other!

http://www.youtube.com/watch?v=Z-gpwCspxi8

Amazing! - Bill

penang

This is the link I got from my friend who does video programming:

http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx

According to my friend, the SDK, math library, kernel analyzer and power toys are very useful for harnessing the double-precision floating-point power of ATI's GPUs.

penang

Just came across this: Nvidia demoed ray tracing using CUDA.

Info available at http://developer.nvidia.com/object/nvision08-IRT.html
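
The idea behind demos like that is simple to sketch, even if the real thing is far more sophisticated. Here is a toy CUDA kernel of my own (the names and numbers are made up for illustration, this is not how the Nvidia demo is written): one thread per pixel, each thread fires a primary ray from a camera at the origin and tests it against a single sphere.

#include <cuda_runtime.h>

/* Toy primary-ray vs. single-sphere test: one thread per pixel.
   Writes 1.0f where the ray hits the sphere and 0.0f where it misses. */
__global__ void traceSphere(float *image, int width, int height,
                            float cx, float cy, float cz, float radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    /* Camera sits at the origin looking down -z; map the pixel to a ray direction. */
    float dx = (x + 0.5f) / width  * 2.0f - 1.0f;
    float dy = (y + 0.5f) / height * 2.0f - 1.0f;
    float dz = -1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    /* Ray-sphere intersection: with the ray origin at 0, solve |t*d - c|^2 = r^2 for t. */
    float ox = -cx, oy = -cy, oz = -cz;
    float b = ox * dx + oy * dy + oz * dz;
    float c = ox * ox + oy * oy + oz * oz - radius * radius;
    float disc = b * b - c;

    image[y * width + x] = (disc >= 0.0f && (-b - sqrtf(disc)) > 0.0f) ? 1.0f : 0.0f;
}

It would be launched over a 2D grid, for example: dim3 block(16, 16); dim3 grid((width + 15) / 16, (height + 15) / 16); traceSphere<<<grid, block>>>(devImage, width, height, 0.0f, 0.0f, -3.0f, 1.0f); The point is that every pixel's ray is completely independent, which is exactly the kind of work a GPU eats for breakfast.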

penang


PG

Yeah, most GPUs are averaging that now; the GTX 3 series is looking to push up to 1 TFLOP. Fermi, the technology behind the 3 series, is an update to Tesla, which is nVidia's line of high-performance GPGPUs. The Tesla 10 series already runs at 1 TFLOP but doesn't have a video output and is geared entirely towards raw computation, whereas GeForce is designed for rendering triangles (in a very, very basic explanation). It won't be far off though. GPUs are typically designed in a 'trickle up' process where consumers get the newest technology, and general-purpose products for intensive computation, like farms and supercomputers, get stuff nearly a generation old. Tesla is based on experiments from the 8 series.

penang

Quote from: PG on November 19, 2009, 04:18:02 PM
Yeah, most GPUs are averaging that now; the GTX 3 series is looking to push up to 1 TFLOP. Fermi, the technology behind the 3 series, is an update to Tesla, which is nVidia's line of high-performance GPGPUs. The Tesla 10 series already runs at 1 TFLOP but doesn't have a video output and is geared entirely towards raw computation, whereas GeForce is designed for rendering triangles (in a very, very basic explanation). It won't be far off though. GPUs are typically designed in a 'trickle up' process where consumers get the newest technology, and general-purpose products for intensive computation, like farms and supercomputers, get stuff nearly a generation old. Tesla is based on experiments from the 8 series.
The 1 TFLOP figure given by Nvidia for their GTX 3 chips (Fermi-based) is for single precision.

The figure that ATI quotes (544 GFLOPS) is for double precision.

As a comparison, the best i7 (the 965) can only churn out about 70 GFLOPS with all its cores.

Price-wise, one i7 965 chip is equal to at least twenty 5870 chips from ATI (a chip-versus-chip comparison, not including supporting peripherals such as RAM).

Performance-wise, one i7 965 chip can do 70 GFLOPS. Twenty 5870 chips, on the other hand, can churn out over 10 TFLOPS.

That's a ratio of about 1:140!
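
If anyone wants to check the arithmetic, here is a quick back-of-the-envelope version (the per-chip numbers are the usual published theoretical peaks and are approximate, so treat the result as an order-of-magnitude figure rather than a benchmark):

#include <stdio.h>

int main(void)
{
    /* Approximate theoretical peaks (published figures, not benchmarks). */
    double i7_965_gflops    = 70.0;   /* Core i7 965, all four cores      */
    double hd5870_dp_gflops = 544.0;  /* ATI HD 5870, double precision    */

    double twentyCards = 20.0 * hd5870_dp_gflops;   /* ~10.9 TFLOPS       */
    double ratio = twentyCards / i7_965_gflops;     /* roughly 150:1, the
                                                       same ballpark as
                                                       the 1:140 above    */

    printf("20 x HD 5870 (DP): %.0f GFLOPS, ratio vs. i7 965: about %.0f to 1\n",
           twentyCards, ratio);
    return 0;
}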

Mandrake

"The power to price ratio offered by today's GPUs makes leveraging them in tasks not related to graphics a no-brainer. Intel recognizes this and plans to leverage it with the "Larrabee" CPU+GPU on the same silicon, a move that will see the interaction between the two brains become more efficient"

A combo? Sounds great!

http://blogs.zdnet.com/hardware/?p=6289&tag=nl.e550

Cyber-Angel

In twenty years or perhaps fewer (twenty is the number I've seen quoted most often) we will hit the law of diminishing returns as far as silicon technology can be taken, due to the physical laws governing how small you can make a silicon pathway before the electrons start running into one another. Twenty years or so will see the end of the silicon processor and of the micro-scale of transistor manufacture; for the semiconductor sector to keep manufacturing, there would have to be a transition to nanometre-scale transistors using graphene technology.

A combined CPU/GPU on the same substrate has been touted for a number of years, but no one has yet done it, so it is interesting that Intel is developing such a chip. It will be interesting to see how such a chip handles the hand-offs between CPU and GPU tasks without causing pauses and other such delays, how it handles basic tasks like handshaking, and lastly how it will manage the thermal loading.

Regards to you.

Cyber-Angel                     

penang

Quote from: Cyber-Angel on December 01, 2009, 08:25:21 AM
A combined CPU/GPU on the same substrate has been touted for a number of years, but no one has yet done it, so it is interesting that Intel is developing such a chip. It will be interesting to see how such a chip handles the hand-offs between CPU and GPU tasks without causing pauses and other such delays, how it handles basic tasks like handshaking, and lastly how it will manage the thermal loading.

Regards to you.

Cyber-Angel
I think Larrabee is a mistake.

Intel may be able to push Larrabee into the market, and there may be some acceptance of it, but I think it will be a mistake.

Intel's main job is to see to it that the CPU can scale well into the 64-256 core arena, along with moving up from the current 64 bits to 128-bit and even 256-bit CPUs.

Instead of doing all that, Intel is diverting its resources into developing Larrabee, which only results in (relatively speaking) less attention on Intel's own core competency, the CPU.

Kadri

Maybe it is a mistake... but time will tell. They have to take this route or some other one, because the CPU alone isn't enough any longer.
To sell CPUs they have to make them more valuable.
As I wrote somewhere here, ordinary programs don't need (at least for now) so many cores (4, 8, 16 and so on).
So they are searching for other ways.
I think the five years ahead will bring interesting things for us: CPU/GPU wars and the merging of the two.
It seems Nvidia will have the toughest time... but who knows  :)

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3683&p=1

Kadri.

penang

Once upon a time, some experts were predicting that mainframes were enough and there was no use for any more advanced type of computer.

Much later, another group of experts was saying that 640K of memory was more than enough.

Now, yet another group of experts is telling us "ordinary programs" don't need so many CPU cores.

See the pattern here?

Back in the days of monochrome, who would ever have thought of a "GPU" having 1600 stream processors?

The most advanced "video game" back then was "Pong". Graphical simulation programs were unheard of.

Not even the best experts of that time could imagine the "ordinary" applications we are using today, like Terragen, Maya, Blender or POV-Ray, for example.