Graphics Processing Units (GPUs): The Future of Scientific Distributed Computing

[Photo: an AMD/ATI Radeon GPU card]

General-Purpose Computing on Graphics Processing Units
It is pretty clear to me that when it comes to scientific distributed computing, GPUs are the future. Folding@home has already demonstrated as much, both by pulling far ahead of other projects in raw processing power (it has been certified by Guinness World Records as “the most powerful distributed computing cluster in the world”) and by the way that processing power breaks down:

Out of a current total of about 2.7 petaFLOPS, roughly 1.3 come from nVidia and ATI/AMD GPUs and 1.1 from PlayStation 3 gaming consoles. That leaves only about 0.3 petaFLOPS for regular CPUs, even though CPUs outnumber GPUs and consoles by about 5 to 1. To put things in perspective, 0.3 petaFLOPS is still about 4-5 times bigger than Rosetta@home, my favorite project, which is generally considered to be among the top five projects by processing power but doesn’t use GPUs.
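To make that imbalance concrete, here is a quick back-of-the-envelope sketch in C. The petaFLOPS figures are the ones quoted above, but the client counts are hypothetical round numbers I picked only to reflect the rough 5-to-1 ratio, so treat the output as illustrative rather than as real Folding@home statistics.

    #include <stdio.h>

    /* Back-of-the-envelope: average throughput per client type.
     * The PFLOPS figures are from the stats quoted above; the client
     * counts are hypothetical round numbers chosen only to match the
     * rough 5-to-1 CPU-to-(GPU+PS3) ratio. */
    int main(void)
    {
        double cpu_pflops = 0.3;
        double gpu_ps3_pflops = 1.3 + 1.1;
        double cpu_clients = 250000.0;     /* assumed */
        double gpu_ps3_clients = 50000.0;  /* assumed, ~5:1 */

        double per_cpu = cpu_pflops * 1e15 / cpu_clients;
        double per_gpu = gpu_ps3_pflops * 1e15 / gpu_ps3_clients;

        printf("Average per CPU client:     %.1f GFLOPS\n", per_cpu / 1e9);
        printf("Average per GPU/PS3 client: %.1f GFLOPS\n", per_gpu / 1e9);
        printf("Ratio: about %.0fx\n", per_gpu / per_cpu);
        return 0;
    }

Under those assumed counts, the average GPU or PS3 client is crunching dozens of times more per machine than the average CPU client.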

Clearly, then, if the data you need to crunch can be crunched on a GPU, that’s the way to go. So far it hasn’t always been easy to do, but that should get better as GPGPU frameworks and standards appear (Apple’s OpenCL, for example) and as the hardware itself becomes more flexible and powerful (gaining the ability to do double-precision floating-point calculations, which most scientific applications require, and so on).
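To give a taste of what code written against such a framework looks like, here is a minimal OpenCL kernel sketch of my own – it is not taken from Folding@home or any other project. Note that double precision in OpenCL is an optional extension (cl_khr_fp64) that the device has to expose, which is exactly the kind of hardware capability scientific applications have been waiting on.

    /* Illustrative OpenCL C kernel (my own sketch, not from any project).
     * Double precision is an optional extension the device must expose. */
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable

    /* One work-item per array element: y[i] = a*x[i] + y[i], a classic daxpy. */
    __kernel void daxpy(const double a,
                        __global const double *x,
                        __global double *y)
    {
        size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

The kernel source is compiled at run time for whatever device happens to be present, which is what makes vendor-neutral GPGPU plausible in the first place.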

What the Future of GPUs Might Look Like
The best-case scenario would be an open framework that makes it easy to write scientific apps for GPUs and to port and parallelize existing code. Ideally, it would work at a level that makes supporting GPUs from all manufacturers (AMD/ATI, nVidia, Intel, etc.) easy without having to write hardware-specific code (which is what the Folding@home GPU apps require right now, as far as I know).
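As a sketch of what that vendor neutrality could look like in practice, the host code below simply lists every GPU exposed by every installed OpenCL platform, whoever made it; an application would then compile its kernels for whichever device it picks, with no vendor-specific code paths. Again, this is my own illustration – it assumes an OpenCL implementation is installed and says nothing about how Folding@home actually works.

    #include <stdio.h>
    #include <CL/cl.h>

    /* Illustrative sketch: list every GPU from every vendor's OpenCL
     * platform. The same application code would then build its kernels
     * for whichever device it picks -- no vendor-specific paths needed. */
    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            cl_device_id devs[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                               8, devs, &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint d = 0; d < ndev; d++) {
                char name[256];
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof name, name, NULL);
                printf("GPU: %s\n", name);
            }
        }
        return 0;
    }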

This would create healthy competition among GPU makers to build hardware that is desirable to the scientific crowd. Scientists won’t be alone in using GPUs for general-purpose computing, but like gamers, they tend to be early adopters and to spend more on hardware than the spreadsheet crowd, so they are worth catering to. It would also most likely give a double benefit to distributed computing projects: they would get massive amounts of extra crunching power (the recent AMD/ATI Radeon HD 4870 X2, for example, delivers around 2.4 teraFLOPS of peak single-precision throughput per card), and they would gain from the fact that people tend to upgrade GPUs more often than the rest of their computers. Even older computers, if they have recent GPUs, could be useful data crunchers.

Helping Science Between Gaming Sessions
This move toward GPGPU is especially exciting for biology and physics. Now that both sciences have a large information-technology side, breakthroughs can be expected from things like the Materials Genome Project and computational protein and enzyme design, as well as from GPGPU pioneers like Folding@home itself.


3 Responses to “Graphics Processing Units (GPUs): The Future of Scientific Distributed Computing”

  1. anonymous Says:

    You might find “Sutherland’s Wheel of Reincarnation” an interesting observation.

    Bottom Line: Specialized hardware almost *ALWAYS* loses out to software on general-purpose processors. Specialized hardware costs more, has fewer features, and is less flexible. Moore’s law will not be denied. Specialized hardware either disappears or gets subsumed onto the motherboard, then into the CPU (e.g. MMUs, FPUs, caches). GPUs, if they are lucky, have 5 years left.

  2. Michael Graham Richard Says:

    Hey anonymous,

    That’s an interesting point. Even if the hardware we now know as a GPU eventually gets moved to a chip on the motherboard or packaged inside the CPU, I think the point still stands that we’ll be using its power for things like distributed computing, simply because you can get a lot of FLOPS out of it. Granted, we might not call it a GPU by then.

    I’m not sure the external GPU will die so quickly, or so completely, though. For people like gamers, it’s very useful to be able to swap out the GPU, and there may always be companies that specialize in graphics and don’t want to depend on mobo or CPU makers. We might see a two-tier system, with ‘integrated’ graphics for almost everybody and specialized GPUs for hardcore gamers/crunchers/etc.

  3. MikeH Says:

    I’d have to agree. AMD have been working on GPGPU designs for a while now – the Fusion processor – and Intel is moving in the same direction with its Nehalem processor design. These architectures are touted as the ‘next generation’. They would give us the best of both worlds: the general-purpose functionality of CPUs and the parallel processing power of GPUs. Moreover, it won’t be long before GPU floating-point math is fully IEEE-754 compliant, now that the manufacturers can see the benefit of using GPUs for scientific computing.
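    To see why IEEE-754 compliance and real double precision matter, here’s a little sketch of my own (nothing to do with any particular GPU) showing how single-precision arithmetic drifts when you accumulate a long sum:

        #include <stdio.h>

        /* My own sketch: naively summing ten million terms of 0.1 in
         * single vs. double precision. The float total ends up visibly
         * far from the expected 1,000,000; the double stays close. */
        int main(void)
        {
            float  f = 0.0f;
            double d = 0.0;
            for (int i = 0; i < 10000000; i++) {
                f += 0.1f;
                d += 0.1;
            }
            printf("single precision: %f\n", f);
            printf("double precision: %f\n", d);
            return 0;
        }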

    anonymous’s claim that GPUs, if lucky, have 5 years left may therefore be true of their current form – but certainly not altogether, judging by what the big chip makers are working on.

