High Performance Computing: Speeding up research in the cloud

With the massive amount of data available today, researchers are looking for more effective and efficient ways to process and analyze it. While high performance computing (HPC) can accelerate research tasks such as modeling and simulation, its extensive infrastructure requirements often make it an expensive investment that is out of reach for most higher education institutions and researchers.

This is where the cloud is rewriting the rules. It enables cost-friendly access to HPC resources, allowing researchers to drastically increase the speed at which discoveries can be made—as in the case of the University of Newcastle, where simulation run-time was cut from 80 days to two. And that is not all. The cloud also lets researchers flexibly design and scale HPC architectures according to their needs, and encourages seamless collaboration between researchers for faster innovation.

Economical HPC access from the outset, and in the long run

HPC requires low-latency storage, high-speed connectivity and large amounts of memory. This means an on-premises supercomputing center demands considerable capital outlay, power and space, plus an extensive setup period of 6 to 24 months. Maintenance and upgrades also call for specialized technical expertise, further racking up the costs—which is why only a small group of researchers currently has access to HPC.

As Wu Feng, Professor of Computer Science at Virginia Tech, said, HPC on the cloud means that researchers and their institutions “no longer have to find millions of dollars to establish their own supercomputing center”.

By eliminating the large capital outlay, maintenance costs, and power and space requirements, and reducing deployment time to a mere 20 minutes, the cloud democratizes HPC access. The capital, time and manpower can instead be put towards acquiring compute power and other resources that move the research forward. Upgrades and maintenance are also taken care of by the HPC cloud service provider, ensuring researchers always have access to the latest hardware to support their applications and workflows, and speed up output—without having to invest in new infrastructure.
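
To give a concrete sense of how quickly such a cluster can come online, here is a minimal sketch (illustrative only, not from the article) that provisions a pool of HPC compute nodes with the Azure Batch Python SDK. The account name, key, endpoint, VM size and OS image below are placeholder assumptions.

    # Illustrative sketch: provisioning a pool of HPC nodes with the Azure Batch
    # Python SDK (pip install azure-batch). All names and values are placeholders.
    from azure.batch import BatchServiceClient
    from azure.batch.batch_auth import SharedKeyCredentials
    import azure.batch.models as batchmodels

    credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
    client = BatchServiceClient(
        credentials, batch_url="https://mybatchaccount.eastus.batch.azure.com"
    )

    # Define a pool of 16 dedicated nodes on an RDMA-capable HPC VM size
    # (Standard_HC44rs is an assumed example), running Ubuntu 20.04.
    pool = batchmodels.PoolAddParameter(
        id="research-hpc-pool",
        vm_size="Standard_HC44rs",
        virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
            image_reference=batchmodels.ImageReference(
                publisher="canonical",
                offer="0001-com-ubuntu-server-focal",
                sku="20_04-lts",
                version="latest",
            ),
            node_agent_sku_id="batch.node.ubuntu 20.04",
        ),
        target_dedicated_nodes=16,
    )
    client.pool.add(pool)  # the pool's nodes typically come online in minutes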

Flexibility and scalability, optimized for research needs

The cloud also provides researchers and universities with greater flexibility and freedom to configure HPC capabilities to meet the unique needs of their research, applications and existing infrastructure. This means the various research teams in the same institution can each have a dedicated and customized supercomputer that meets their specific project needs—instead of having to share HPC clusters, servers and architecture.

This flexibility also extends to changes in computing demand. If the amount of data to be processed grows, researchers can easily scale up compute capabilities in the cloud to handle the influx, ensuring that progress is not affected. For institutions with existing on-premises HPC clusters, cloud HPC can complement current infrastructure by providing additional clusters or expanding compute capabilities to include access to state-of-the-art graphics processing units (GPUs)—without introducing new tools and software on researchers’ personal devices or on-premises clusters.
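
As a minimal sketch of what this elastic scaling can look like (again illustrative, not from the article), an Azure Batch pool can be set to grow and shrink with demand via an autoscale formula. The pool name, node cap and formula below are assumptions, and "client" is the Batch client from the earlier sketch.

    # Illustrative sketch: enabling demand-driven autoscaling on the pool.
    # The formula samples the number of pending tasks over the last five
    # minutes and sizes the pool to match, capped at 64 dedicated nodes
    # (formula syntax per the Azure Batch autoscale documentation).
    import datetime

    formula = """
    pending = max($PendingTasks.GetSample(TimeInterval_Minute * 5));
    $TargetDedicatedNodes = min(64, pending);
    """

    client.pool.enable_auto_scale(
        pool_id="research-hpc-pool",
        auto_scale_formula=formula,
        auto_scale_evaluation_interval=datetime.timedelta(minutes=5),
    )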

The City of Hope medical research and treatment center, for instance, was already using on-premises HPC clusters in its research, but was eager to find a cost-effective way to leverage GPUs to further accelerate its drug discovery process. Azure Virtual Machines N-Series provided just that, cutting simulations from a month on the CPU-based HPC machines to just days.

Greater collaboration for faster, better research output

HPC on the cloud streamlines and supports collaboration, not only within an institution but across countries. Not only can compute resources be shared; data, applications, workflows and outputs can also be made available to every user involved.

This lets researchers work with peers from all around the world, sharing insights and exchanging expertise for greater collaboration and faster discoveries.

Driving faster, better research outcomes with the right supercomputing partner

While most HPC cloud vendors provide affordable, scalable and secure HPC resources, what differentiates each provider is its ecosystem of cloud solutions. Researchers and higher education institutions may find the solutions they need already available within the provider’s ecosystem—saving the time it would take to architect their own.

For example, the Genome Institute of Singapore (GIS) not only adopted Microsoft Azure’s HPC capabilities to rapidly develop, test and deploy applications without having to set up extensive infrastructure; it also leveraged Microsoft Genomics, an on-demand HPC implementation of the Genome Analysis Toolkit (GATK), to sequence genomes much faster than before.

“Azure and its services have not only enabled us to do our research faster, it has also allowed us to be more focused on our research,” said Shih Chih Chuan from the GIS. “Azure’s large ecosystem of services meant that we had a ready solution in GATK and did not need to spend time coding or designing an application for our genome sequencing needs. This enabled us to concentrate our efforts on our research.”

Through its Azure cloud and its collaboration with Cray, as well as outreach programs including workshops at events like SuperComputing Asia, Microsoft is committed to empowering researchers and higher education institutions to do better research, and to creating a unified research community that enables greater collaboration for faster, better outcomes.

For more information, please visit Microsoft Azure High Performance Computing.
