Amazon Adds GPU Instances to Cloud Service

In a move that will allow users to accelerate the performance of its cloud services, Amazon Web Services has launched Cluster GPU Instances.

The new service, launched today, adds a new instance type to Amazon's cloud portfolio. Each Cluster GPU Instance is built around two NVIDIA Tesla M2050 GPUs, which Amazon said deliver a peak performance of over 1 trillion double-precision floating-point operations per second (FLOPS).

"This incredible power is available for anyone to use in the usual pay-as-you-go model, removing the investment barrier that has kept many organizations from adopting GPUs for their workloads even though they knew there would be significant performance benefit," wrote Amazon CTO Werner Vogels, in a blog posting.

Vogels added that Cluster GPU Instances will appeal to users requiring high performance computing (HPC), such as those performing complex financial processing, traditional oil and gas exploration, or integrating complex 3D graphics into online and mobile applications.

"We believe that making these GPU resources available for everyone to use at low cost will drive new innovation in the application of highly parallel programming models," he said.

Jeff Barr, Amazon's lead Web services evangelist, added in his blog post today that each account can use up to eight Cluster GPU Instances by default, with more available by contacting the company. "With the ability to cluster these instances over 10Gbps Ethernet, the compute power delivered for highly data parallel HPC, rendering and media processing applications is staggering," he wrote.

In his blog post, Barr pointed out that users will need to write specialized code in order to achieve optimal GPU performance. The Tesla GPUs implement NVIDIA's CUDA architecture, which can be programmed in several different ways, he noted. Among the options he pointed to, users and developers can:

  • Write directly to the low-level CUDA Driver API
  • Use higher-level functions in the C Runtime for CUDA
  • Use existing higher-level languages such as FORTRAN, Python, C, C++, or Java (a brief Python sketch follows this list)
  • Build new applications in OpenCL (Open Computing Language), a new cross-vendor standard for heterogeneous computing
  • Run existing applications that have been adapted to make use of CUDA
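
For a sense of what the Python route looks like in practice, here is a minimal sketch using PyCUDA, an open source Python wrapper for CUDA that is not part of Amazon's announcement; the kernel, array size, and scale factor below are purely illustrative.

    import numpy as np
    import pycuda.autoinit                  # picks and initializes a CUDA device
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Compile a tiny CUDA C kernel at runtime; each thread scales one element.
    mod = SourceModule("""
    __global__ void scale(float *out, const float *in, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = in[i] * factor;
    }
    """)
    scale = mod.get_function("scale")

    data = np.random.randn(1024).astype(np.float32)
    result = np.empty_like(data)

    # drv.In/drv.Out handle the host-to-device and device-to-host copies.
    scale(drv.Out(result), drv.In(data), np.float32(3.0),
          block=(256, 1, 1), grid=(4, 1))

    assert np.allclose(result, data * 3.0)

The same kernel could instead be written directly against the CUDA driver API or the C runtime; PyCUDA simply compiles the embedded kernel source at run time and handles the device memory transfers.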

Amazon Cluster GPU Instances run on the Amazon EC2 Cluster network, which was launched earlier this year. As the name implies, Cluster Compute Instances let users create clusters of instances on Amazon's network. The addition of Cluster GPU Instances targets HPC workloads, letting users take advantage of the parallel computing capabilities of GPUs, according to the company.
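
To make the clustering step concrete, the following sketch shows how such a cluster might be provisioned programmatically with the AWS SDK for Python (boto3); the region, AMI ID, key pair, and placement group name are placeholders, and cg1.4xlarge is the instance type Amazon introduced for Cluster GPU Instances.

    import boto3

    # Placeholder region; substitute values appropriate for your account.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A "cluster" placement group puts instances on the low-latency 10Gbps network.
    ec2.create_placement_group(GroupName="gpu-cluster", Strategy="cluster")

    # Launch two Cluster GPU Instances into the placement group.
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",          # placeholder AMI with the NVIDIA driver installed
        InstanceType="cg1.4xlarge",
        MinCount=2,
        MaxCount=2,
        KeyName="my-key",                # placeholder key pair name
        Placement={"GroupName": "gpu-cluster"},
    )

Launching into a shared placement group is what gives the instances the 10Gbps connectivity described above, up to the account's instance limit.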

About the Author

Jeffrey Schwartz is editor of Redmond magazine and also covers cloud computing for Virtualization Review's Cloud Report. In addition, he writes the Channeling the Cloud column for Redmond Channel Partner. Follow him on Twitter @JeffreySchwartz.
