Rendering via CUDA

Started by PorcupineFloyd, October 01, 2009, 07:25:47 AM

Previous topic - Next topic

Kadri

#45
No...no no Penang  ;D

I am one of the last people who would think like that.
From the beginning I have always wanted the most out of my computers,
and it never seemed like enough. It is difficult for me to say exactly what I mean in English.
Even the fastest computer a thousand years from now wouldn't be enough for me.

What I am trying to say is that there has to be mass-market demand for Intel, AMD, Nvidia and so on.
For people like us in 3D there is no upper limit; even real-time rendering would not be enough for us.
We would want rendering 2-4 times faster than real time, and so on.
But as you know, not every program can handle 4 cores or is 64-bit right now (look no further than this forum).
Those are today's problems. And I am sure that over the next 5 years we will see many programs
that were not practical on the old CPUs pushed, little by little, into the mainstream:
voice recognition, more interactive GUIs, stereo (3D) monitors and games, and maybe things we don't even know about yet...

That is what I was hinting at with the "( at least for now )".

And in my opinion, consoles are hurting us in this evolution.

I wish I could make myself clearer... Do we understand each other better now?  :)

Edit : Capitalism will make this happen. They have to sell new things to make money. I am sure they will create the demand for it.

Cheers.

Kadri.

Oshyan

Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan

Kadri

#47
Maybe AnandTech was right in its closing paragraphs about the future:
"In recent history AMD's architectural decisions have predicted, earlier than Intel,
where the microprocessor industry was headed.
The K8 embraced 64-bit computing, a move that Intel eventually echoed some years later.
Phenom was first to migrate to the 3-level cache hierarchy that we have today, with private L2 caches.
Nehalem mimicked and improved on that philosophy.
Bulldozer appears to be similarly ahead of its time, ready for a world where heterogeneous CPU/GPU computing is commonplace.
I wonder if we'll see a similar architecture from Intel in a few years."
(the same link as in the previous  page: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3683&p=1 )

Edit: http://www.anandtech.com/weblog/showpost.aspx?i=659

Kadri.

penang

Quote from: Oshyan on December 04, 2009, 11:37:36 PM
Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan
Hmm... I thought Intel had just decided that Larrabee is dead.

Please correct me if I am wrong.


Kadri

Quote from: penang on December 08, 2009, 05:36:26 AM
Quote from: Oshyan on December 04, 2009, 11:37:36 PM
Well, you can stop waiting with bated breath for the magic of Larrabee to make everything render instantly. ;D
http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/

- Oshyan
Hmm... I thought Intel had just decided that Larrabee is dead.

Please correct me if I am wrong.


http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3686

penang

Just in case anyone is interested...

AMD has released a document detailing the instruction set for their GPUs.

It's a 392-page PDF file, available from http://developer.amd.com/gpu_assets/R700-Family_Instruction_Set_Architecture.pdf

Use it to make Terragen faster, pls !

penang

#51
The supercomputers are GPU-based, built by the tomography group at the University of Antwerp in Belgium.

Tomography is about reconstructing images, in other words about drawing pictures.

Same as Terragen. :D

So, to cut a long story short, they built the first version about 18 months ago, with 8 GPUs, costing a little under 4,000 euros.

Now they have version 2, with 13 GPUs, costing around 6,000 euros.

Both versions outperformed the university's own 512-core CPU-based supercomputer.

That's the kind of GPU power Terragen should tap into.

Pictures of the machine, plus benchmark, power consumption, and energy efficiency charts, are available at the following link:

http://www.dvhardware.net/articles25_fastra_2_desktop_supercomputer.html

TheBlackHole

Well, I LIKE the fact that TG only renders with the CPU. Then I can run Celestia and it'll render every frame insanely fast compared to TG. Celestia can use the GPU so I can fly through wormholes and galaxies while TG's trying to render a scene.
They just issued a tornado warning and said to stay away from windows. Does that mean I can't use my computer?

PG

My guess is it would be an option rather than forced.
Figured out how to do clicky signatures

penang

Quote from: TheBlackHole on December 17, 2009, 11:10:43 AM
Well, I LIKE the fact that TG only renders with the CPU. Then I can run Celestia and it'll render every frame insanely fast compared to TG. Celestia can use the GPU so I can fly through wormholes and galaxies while TG's trying to render a scene.
LOL !

And I thought I was all alone in thinking that !

Yeah, while TG2 is slowly crunching away on the CPUs, Flam4 is warping along on the GPUs.

I like that ! :D

TheBlackHole

How did this thread last 2 months?
They just issued a tornado warning and said to stay away from windows. Does that mean I can't use my computer?

penang

#56
I have found more information regarding the GPUs from ATI.

They are programming manuals, available from

http://www.x.org/docs/AMD/R6xx_R7xx_3D.pdf

and

http://ati.amd.com/developer/open_gpu_documentation.html

and

http://www.x.org/docs/AMD/R6xx_3D_Registers.pdf

There is even a disassembler for ATI's BIOS (HD4000 and up), called AtomDis:
http://cgit.freedesktop.org/~mhopf/AtomDis/



haldun

Quote from: TheBlackHole on December 17, 2009, 07:44:36 PM
How did this thread last 2 months?

Hi, I am new here, and I registered on this forum specifically to answer this question in this fashion:

I personally would buy Terragen THE MOMENT IT DOES GPU RENDERING :D

grtz
Haldun

penang

http://code.google.com/p/gpuocelot/

The link above is to Ocelot, a just-in-time compiler for CUDA that allows the same programs to run on NVIDIA GPUs or x86 CPUs.

Check it out !
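
Just to give a feel for what such a CUDA program looks like, here is a tiny sketch of my own (purely illustrative, nothing to do with Terragen's actual code): an embarrassingly parallel kernel that evaluates a toy height value for every cell of a grid. According to the Ocelot page, the very same compiled program could also be run on x86 CPUs.

// Minimal illustrative sketch (hypothetical, not Terragen code): a per-cell
// kernel of the embarrassingly parallel kind that Ocelot can retarget to x86.
#include <cstdio>
#include <cuda_runtime.h>

// Evaluate a toy "terrain" height for every grid cell in parallel.
__global__ void heightKernel(float *height, int width, int rows)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= rows) return;

    // Stand-in for a real fractal/displacement evaluation.
    float u = (float)x / width, v = (float)y / rows;
    height[y * width + x] = sinf(12.0f * u) * cosf(9.0f * v);
}

int main()
{
    const int W = 512, H = 512;
    float *d_height;
    cudaMalloc(&d_height, W * H * sizeof(float));

    // One thread per grid cell.
    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    heightKernel<<<grid, block>>>(d_height, W, H);
    cudaDeviceSynchronize();

    // Copy one value back just to show the round trip.
    float sample;
    cudaMemcpy(&sample, d_height, sizeof(float), cudaMemcpyDeviceToHost);
    printf("height[0,0] = %f\n", sample);

    cudaFree(d_height);
    return 0;
}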

penang

#59
Just came across an interesting review:

http://www.brightsideofnews.com/news/2009/12/23/machstudio-pro-can-a-gpu-replace-a-cpu.aspx

Not very long, only 3 pages.

However, it is about a rendering software package, bundled with a GPU card, that renders on the GPU rather than the CPU.

What interests me are the following sentences:

"The scenes we tested with rendered anywhere between 10 and 20 times faster than on our powerful quad-core/octa-thread processors."

"Do note that if we would really push the details into overdrive, we would get a single 1080p frame in 10 seconds on bundled FirePro V8750, while 3.33 GHz CPU would probably take 20-30 minutes per single frame."