Q&A: Microsoft Announces Strategy for High-Performance Computing

REDMOND, Wash., Nov. 15, 2005 — Advancements in high-performance computing are on the minds of thousands of IT professionals and scientists attending conferences around the world today. Bob Muglia, senior vice president, Microsoft Server and Tools Business, talked about Microsoft’s commitment to high-performance computing in his keynote address in Barcelona at the Microsoft IT Forum 2005, the company’s annual infrastructure conference. Meanwhile, Microsoft Chairman and Chief Software Architect Bill Gates, in a keynote address at Supercomputing 2005 in Seattle today, will describe how innovations in high-performance computing will improve global collaboration by connecting scientists and researchers, computational resources and data, all of which can help reduce time to insight and discovery.

In conjunction with today’s keynotes, Microsoft also announced the Beta 2 release of Windows Compute Cluster Server 2003, which is designed to accelerate time to insight by providing an integrated, easy-to-use and manageable high-performance computing solution. The company also said it would invest in helping to establish 10 Institutes for High Performance Computing at key universities worldwide. Both developments signal Microsoft’s entry into the high-performance computing (HPC) market.

To learn more about Microsoft’s high-performance computing strategy, PressPass spoke last week with Muglia.

PressPass: How would you describe Microsoft’s vision for high-performance computing, and how does Microsoft plan to make that vision a reality?

Muglia: For the past two or three decades, supercomputing has been the domain of the few, with specialized custom hardware and software systems – a niche market focused on raw computation. However, scientific discoveries in the last decade have signaled a need for increased collaboration among scientists, as well as the ability to manage the growing amount of data generated by automated tools such as sensors and satellites. We’ve also seen new segments of the market – such as departments within commercial industry – asking for access to HPC technology, which has historically been difficult to manage and afford. What’s needed now are software technologies that will interconnect computing systems, people and data to produce more rapid answers to the questions of science, and to help researchers use computation in the most effective manner.

Microsoft is committed to making this vision a reality, in the same way we enabled computing to go from a curiosity in the ‘80s to a pervasive reality for consumers and businesses today. We understand that customers need an integrated software infrastructure and tools to run their applications successfully – and we are striving to provide that with Windows Compute Cluster Server 2003.

Bob Muglia, Senior Vice President, Microsoft Server and Tools

PressPass: What made Microsoft decide to enter this market now, and what work has the company done up to this point to understand customers’ needs?

Muglia: There are two main reasons why Microsoft decided to develop an HPC solution now. First, there is clear industry evidence that HPC is moving into mainstream commercial segments where it didn’t previously exist. Second, customers are asking us for a Windows-based HPC solution so they can leverage their existing Windows platform technologies.

That said, Microsoft actually began work in the HPC space almost five years ago in a research capacity. However, to understand how and why we entered at that point in time, it’s necessary to provide some context.

In the late ’90s, increases in computing power driven by Moore’s Law contributed to the emergence of clusters of industry-standard servers for tackling some of the challenging problems in this space. This, in turn, led to two further developments: an increased number of scenarios in which these systems could be applied, such as research in engineering, the life sciences, and oil and gas; and more ISVs (independent software vendors) developing and offering HPC applications.

Microsoft began work in the HPC space by establishing research relationships with a set of partners using Windows 2000 Server. We funded the Cornell Theory Center at Cornell University, which supported all research on campus with a completely Windows-based HPC center. Because of its resulting unique expertise with Microsoft technologies, the center briefed, trained and supported a number of Microsoft customers deploying Windows HPC solutions. We also worked closely with various OEM (original equipment manufacturer) partners to learn about the hardware requirements of the HPC space.

By the time Windows Server 2003 shipped, the HPC market was beginning to utilize advances in 64-bit computing, and we saw a further increase in the use of HPC in enterprises. At the beginning of 2004, we put a team in place to build a product, now called Windows Compute Cluster Server 2003. The team has been conducting extensive market research and customer visits throughout product development to learn more about customers’ needs. We’re also working with some of the leading members of the HPC community to create a long-term roadmap and a design for the first version of Windows Compute Cluster Server 2003. We’ve been testing this design in the community and with customers, and we released the first beta of the product in September at the Professional Developers Conference. We received very useful and positive feedback, and we are now releasing the second beta, which will be available for testing and evaluation online at Microsoft.com.

PressPass: What’s the significance of today’s Beta 2 milestone?

Muglia: We are very excited about the Windows Compute Cluster Server 2003 Beta 2 milestone. It represents a major achievement for Microsoft: Beta 2 includes all the features we have designed and developed for Windows Compute Cluster Server 2003, along with performance tuning for InfiniBand and Gigabit Ethernet. With this public release, customers and partners can begin evaluating our solution and putting plans in place for integrating it into their infrastructures and product offerings.

PressPass: How is Microsoft’s approach to HPC with Windows Compute Cluster Server 2003 different from other approaches?

Muglia: Based on our research to date, we know that for mainstream customers to take advantage of HPC systems, these systems need to be easier to deploy, use and manage – and more cost-effective. We’ve also heard that the solutions available today are too costly and too complex – both major barriers to adoption.

In releasing Beta 2, we have tackled three dimensions of Windows Compute Cluster Server 2003. First, we focused on making it easy for ISVs to move their code so that they can take advantage of the Microsoft platform. This should ensure that a broad range of commercial and open-source applications is available for this platform when it ships next year.

Second, we focused on ensuring that HPC clusters are easy for system administrators to deploy and manage. Our customers have told us they want HPC clusters fully integrated into their existing infrastructure, rather than having to deploy and manage them as standalone islands of functionality. Windows Compute Cluster Server 2003 gives administrators the familiar Windows interface for managing these systems, and it integrates with existing, broadly used tools such as Microsoft Management Console, Microsoft Operations Manager 2005 and Active Directory.

Finally, we focused on ensuring that end users can access and use HPC clusters as transparently as they do the other components of the enterprise infrastructure, such as storage, printing or the Web. We believe computational resources should be seamlessly woven into the desktop and applications so that they are easily accessible from anywhere and fully integrated into users’ workflows. The integration efforts with our software partners will simplify the way customers access HPC clusters and the rest of the infrastructure.

So what we’re really doing with Windows Compute Cluster Server 2003 is extending the traditional benefits of the Windows platform to a new set of customers.
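The interview itself stops short of naming programming interfaces, but the ISV portability Muglia describes generally rests on industry standards such as MPI (the Message Passing Interface), which Windows Compute Cluster Server 2003 supports through its MS-MPI implementation. The short C program below is an illustrative sketch rather than material from the article; because it calls only the standard MPI API, the same source can be rebuilt for a Windows compute cluster or any other MPI-capable platform.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the MPI runtime down cleanly */
    return 0;
}

Porting such a program is chiefly a matter of recompiling it against the local MPI headers and libraries, then resubmitting it through the cluster’s job scheduler.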

PressPass: Can you provide details about Microsoft’s plan to invest in Institutes for High Performance Computing? How will this investment further the company’s goals for this market?

Muglia: Microsoft brings to the HPC space a unique strength, which is the ability to design, build and support software that is used by hundreds of millions of users, from small businesses to large enterprises. We plan to extend Microsoft’s investments in the academic community by establishing 10 Microsoft Institutes for High Performance Computing at university research centers worldwide.

This multi-year, multi-million-dollar investment builds on the relationship we started with the Cornell Theory Center years ago, and it will allow us to understand much more deeply the requirements and opportunities for applying the results of research and innovation in high-performance computing. The institutes – among them the University of Southampton in the U.K., the Tokyo Institute of Technology, the University of Tennessee and the University of Texas – are leaders in their fields. The research centers at these institutes have been running HPC platforms for years in critical research environments, so their scientists are experienced not only in advanced computing but also in operating a broad range of HPC platforms that serve large numbers of users.

We’ll work closely with the institutes on research projects that utilize Microsoft technologies, both to gain insight into the unique needs of this segment and to collect product feedback.

PressPass: What type of partner support have you garnered and how will partners benefit from Windows Compute Cluster Server 2003?

Muglia: From the outset, we heard universally from our hardware and software industry partners that they were very excited to see the potential for HPC to go mainstream. But we also heard that customers face significant complexity in deploying and managing HPC solutions, which has really prevented the market from expanding more rapidly.

Software vendors today are challenged with creating solutions that support upwards of 10 operating systems and expensive hardware platforms. As industry-standard hardware has become available for HPC, software vendors say they’d like to see a supported platform they can target that minimizes the complexity and cost of developing their solutions, and streamlines sales. So one of the main elements of our strategy around high performance computing involves reducing the costs our partners face in developing HPC applications.

The result of this strategy is that we’re seeing tremendous interest from the software and hardware community – from major software vendors in the life sciences, oil and gas, and manufacturing industries to the hardware and microprocessor vendors delivering solutions in this space. We’re talking about the mainstays of the supercomputing business. There are 20 HPC applications on display in our booth at Supercomputing 2005, and more than 15 ISVs have already announced plans to release applications in conjunction with the launch of Windows Compute Cluster Server 2003 in the first half of 2006.

PressPass: Will Windows Compute Cluster Server evolve into Microsoft’s grid computing offering?

Muglia: Grid computing has unfortunately become a very broad term that encompasses a range of technical meanings, depending on the audience using it. Many leaders in the grid computing community acknowledge that the term is problematic – the Global Grid Forum [http://www.gridforum.org/] meeting in Boston last month devoted a panel discussion to this topic, and most of the leaders there agreed that grid computing is used too broadly. So it’s difficult to answer that question without first defining the term more precisely.

When businesses talk about grid computing for the enterprise, they generally mean virtualization or data-center management – IBM and Oracle refer to grid computing in this way. When people in the scientific and technical computing community use the term, they mean connecting the resources of disparate computational centers, data sources and, increasingly, the researchers themselves. This is the meaning Microsoft has in mind when it talks about grid computing.

The vision that Bill will outline today at Supercomputing 2005 paints a picture of the importance of this interconnection. Much of the work Microsoft has been doing in Web services, large-scale databases and collaboration applies directly to this vision of connecting scientists, computation and data.

Windows Compute Cluster Server 2003 plays a very important role in this superstructure as a solution that makes computation easily accessible, easily consumed and fully integrated into the rest of the enterprise infrastructure, as well as more affordable for mainstream users. These benefits will hopefully create more opportunities for users to accelerate their innovation and discovery, and spur critical advancements around the world.
