Bob Muglia: Microsoft CIO Summit

Remarks by Bob Muglia, president of the Microsoft Server and Tools Business, on Changing the Game with Dynamic IT
CIO Summit
Redmond, Wash.
April 28, 2009

BOB MUGLIA: So, good morning. It’s great to have a chance to be here, and I thank you very much for taking time out of your busy schedules in these difficult and challenging economic times to come out and spend a few days with us. We hope that the conference will be very valuable.

Today, what I want to do is talk about the datacenter, some of the changes in the way users will work with information, and some of the things we’re doing that we think will have fairly transformative impacts on the way people work with information and on the way costs can be reduced in managing the datacenter.

It’s interesting hearing some of the questions that were asked of Chris; they very much echo the kinds of questions and conversations that I have been having with CIOs and industry analysts over the past few months, where there are a number of things on people’s minds.

It certainly is an interesting time. It’s always an interesting time in my experience; in the 20-plus years I’ve been here and the 30 years I’ve been in the industry, times have always been interesting. But with the effects of the economy and some of the changes that are happening, it’s a time when there are a lot of things on people’s minds in terms of business results.

And it’s true pretty universally that cost cutting is on everybody’s mind right now, and from the discussions I’ve had with CIOs, that is certainly hitting most IT departments. There are a few lucky ones that are not, but most budgets are being cut, some of them pretty substantially.

One thing that I do see that’s fairly different right now from what was true in the last recession, around 2000, 2001, is that the conversations about whether IT is important are not happening as much. At least that’s what I’m hearing. What I’m hearing a lot is that our companies, our CEOs, our organizations are looking to us as IT professionals to help bring us out of the recession and the economic difficulties we’re having. They are looking toward how technology and technical systems can enable new products to be delivered more quickly and services to be made available more effectively to customers, and, in fact, get a flow of innovation going that can help us all transform our businesses and grow ourselves out of these difficult times.

There are lots of challenges, and lots of things are changing, and yet the importance of technology, particularly information technology, is top of mind at many, in fact most, of the companies I’ve had a chance to speak with.

The last question that was asked, about young people coming into organizations with a different set of expectations, reflects a challenge that I hear a lot as well. We are all living in a world where we have multiple generations of people within our organizations, with different levels of familiarity with technology, and there are different opportunities for us as we take advantage of the capabilities of these different individuals.

Certainly young people who were weaned on the Internet, so to speak, have an ability to work with technology, and perhaps a lack of concern and fear, that is quite different from that of some of the more mature people we have working for us, and that poses both opportunities and challenges.

I mean, it poses great opportunities, because if we can empower these people, we can get them to do more, more effectively, and to be transformative in their parts of the organization. IT can play a very important role in that, really shifting from delivering full systems and fully packaged solutions to delivering parts of solutions that can then be composed and brought together by individuals within different departments to build the very specific solutions that are appropriate for the business.

The more we can get IT into the mode of building the underlying foundations of applications and solutions that then can be customized by business unit members, the more effective those business units can be, and the more agility we can enable for our overall organization. And there’s a huge opportunity given the technical capabilities of many of these end users, and particularly those that are just recently getting out of college.

The flip side of that, of course, is that those same young people have a view of the way information is shared, perhaps learned on Facebook or Twitter, that might be different from the policies we need to have within our organizations around privacy of information and compliance regulations. So there’s a need for us to enable their capability, enable them to be effective, and at the same time have the appropriate policies, controls, and information protection in place so that we ensure the integrity of our businesses, because that is such a key concern. But it was a great question, and it is a great opportunity.

A third area that I think is extremely interesting, and perhaps one of the most exciting sets of things, and I’ll show a little bit of it in a demo, is some changes that are happening in the underlying technology that will enable new kinds of IT systems that frankly we’ve never even dreamed of before.

There are a few shifts going on. I mean, we could talk about virtualization, which I will talk about; we could talk about the cloud, which I will talk about. But the one I’m actually referring to right now we’re not hearing as much about, and that is the transition to solid-state memory with the storage capacity to run major business systems.

That’s happening partly through the advent of solid-state disks, which have very, very different performance characteristics and now have the reliability and availability characteristics that we would expect in production systems, so they can be used in a database.

And those solid-state disks operate, depending on the particular operation, somewhere from 10 to 100 to even a thousand times faster than their rotating-media counterparts, which of course are limited by the physics of a spinning disk.

The other thing that’s happening is the advent of very large memory as we transition to 64-bit and begin to break through the 4-gigabyte barrier, and the affordability of very large amounts of memory, both in servers, where these days 32 gigs, 64 gigs, 128 gigs or even more is becoming commonplace for the kinds of business applications we want to run, and in clients, where 4 gigs is becoming very common and we’ll begin to see 8 gigs and beyond.

The kinds of things that can be done in memory when working with data are dramatically different, and dramatically faster, than what can be done when you’re disk-bound, and I’ll show a very cogent example of that in a demo in a few minutes.

So, you know, the net of it is that it’s an interesting time; it’s always an interesting time. But the bottom line for me is that there’s no question the role of IT in helping our organizations break through the economic barriers affecting the global economy is very, very real, and it is incumbent upon all of us, as the leaders of our IT communities, to drive that change and enable our businesses to really differentiate themselves in these difficult times.

Now, putting this into context, Microsoft has been working for a number of years on something we call Dynamic IT. It just so happens that today is the opening day of our 10th annual Microsoft Management Summit, which we hold every spring. We have a lot of really interesting things being shown there, and some of you may have people attending that summit, where there will be a lot of drill-down sessions.

In that context, what we’ve talked about over certainly the last six years is this idea of a Dynamic Systems Initiative, or what we now call Dynamic IT. The fundamental idea of Dynamic IT is to focus on how we can transform the business process associated with IT: making IT more effective at delivering more at a lower cost, changing the lifecycle of the process IT uses to build applications, reducing the cost of datacenter management, and at the same time transforming the way users work with information and the way we manage user desktops.

Certainly I’ll talk a little bit about this – in fact, I’ll talk about it right now, the idea that the way users work with information has been transformed over the past few years, and certainly the way we manage users has been transformed.

If you go back five, six, seven years ago, the average way that IT managed users’ desktops was very, very primitive and frequently involved sending people out to desktops to do simple things like software installation and patch update.

Software technology has evolved pretty considerably since then, and most of the companies we work with have achieved significant cost savings by deploying systems management technologies to broadly manage the desktops within their organization: application installation, software patch updates, configuration management, inventory, things of that nature.

We view ourselves as being at the start of a major change, a major breakthrough, in the way user information and user desktops will be managed, and that shift is moving from the device centricity that almost all management systems have today to user centricity.

And this is simply a reflection of the reality of the way users work with information. If you go back five or six years, most users worked with information from their desktop inside the office. That just isn’t the case today. Users perhaps have a desktop machine, perhaps a portable that they take on the road (I’m sure many of you brought your portables with you). They work with mobile phones and BlackBerrys, and they’ve got access to a significant amount of the corporate information through their mobile device. In fact, users are working across multiple devices with an effectively common set of business information that is presented to them, as appropriate, on the device they are working with at the time they are working.

We don’t manage our users that way. Really, nobody is effectively managing their users that way. We’re managing their devices: we’re probably managing their desktops, we’re probably managing their portables. I’ll ask the question: how many of you are managing mobile smartphones? Raise your hand if you feel you have an adequate management system for mobile phones. I see a few hands going up. Most people have less than an adequate solution for that. But even if you do have an adequate solution, it is almost certainly disjointed from what you’re doing today to manage desktops, and it’s not providing a consistent user experience across devices.

So, what we’re in the process of doing is transitioning the way we think about users, again taking into account that our users are becoming more and more sophisticated, that the things they expect from us are more and more sophisticated, and that the capabilities of what they can do are sophisticated. We need to operate in a world where the policies, privacy rules, and compliance restrictions appropriate for our given industry and business are enforced for those users regardless of where they are and regardless of how they’re accessing information.

And yet we want to enable them, for example, when they’re on the road, to have access from their portable machine to all of the information that is appropriate from a policy perspective, but perhaps only a subset from their mobile phone. When they’re not working on a corporate-owned resource, perhaps we present them with a much narrower slice of information, or enable them to reach business applications through some form of desktop virtualization or terminal services, so they can get at business data within the confines and protection of the datacenter environment. So we’re providing these different sets of capabilities directed at the end user, based on what they’re doing and what they’re allowed to do at the time, and in the process helping to enable them to do more.

Now, there’s a whole set of things that we are working on, collectively with the industry and collectively with many of you, to help enable that. The biggest shift is flipping our approach upside down, from today’s device-centric management to user-centric management across a wide variety of heterogeneous devices. That’s the first thing.

In conjunction with this, we’re working to solve some of the most problematic and most costly attributes of running an IT operation, such as simplifying, and in a sense eliminating, the process of application installation through technologies such as application virtualization, which we ship in a product we call App-V.

The key to this is no longer requiring a software install process to run on a given piece of hardware, but instead allowing the application to be streamed down dynamically based on the user’s needs, and then removed with effectively a simple delete command, without having to worry about the interactions it might have with other aspects of the system. And the potential cost savings from this are pretty dramatic.
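
To make that lifecycle concrete, here is a conceptual Python sketch of the idea being described; it is not how App-V actually works internally, and the package layout and paths are illustrative assumptions only:

```python
import shutil, subprocess, tempfile
from pathlib import Path

def run_virtualized(package_dir: Path, entry: str) -> None:
    # "Stream" the self-contained app package into an isolated sandbox;
    # nothing is written to shared system locations, so applications
    # cannot interfere with one another.
    sandbox = Path(tempfile.mkdtemp(prefix="appv-sketch-"))
    try:
        shutil.copytree(package_dir, sandbox / "app")
        subprocess.run([str(sandbox / "app" / entry)], check=True)
    finally:
        # Removal really is just a delete: no uninstaller, no leftover state.
        shutil.rmtree(sandbox, ignore_errors=True)
```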

At the same time, we recognize that all of these things are connected very closely to the way we manage the security and identity of our users. In fact, Microsoft has taken a real leadership role in looking at the industry and saying that the issues of security management and the issues of identity management are highly related and overlap in a lot of ways, and that we are going to think about these problems holistically.

So, beginning with widely used technology like Active Directory, we are expanding out and really focusing on providing our customers with a security and identity foundation that they can use across their organizations. That means ensuring, for example, that user accounts are appropriately structured and have the appropriate group memberships; that we can maintain and update the credentials associated with a user, including certificate-based and Smart Card-based credentials; and that if passwords are used, we can simply reset a password when a user has lost it. It means connecting that to anti-malware software and to information leakage protection, worrying about what happens with day-zero malware, and connecting all of these things together in a coherent solution, which is what we’re really building out with our Forefront security and identity products.

One opportunity that we see is the use of online services, and I’ll talk a little bit more about this. Chris talked a little about what we’re doing around online; I’ll talk about some focus areas we have there. One of the most exciting new online services you’ll see us introduce (actually, we’re introducing it this week, and it will be delivered next year) is a management online product, a System Center online product, which will enable desktop management for users in organizations of all sizes, delivered as an online service.

And this is particularly interesting for companies that either have not fully deployed a desktop management solution, or have deployed one but struggle to maintain it on a global basis for their remote workers. This can leverage the global footprint of Microsoft datacenters to provide access to remote users and users in distant branch offices.

So, it’s kind of an exciting new thing that we’ll be working on. Again the focus is how can we reduce costs, how can we enable people to get a more effective management solution.

And the last thing here, which I’ll talk a little bit about, is different: it’s not a management thing, but is focused instead on how we can enable end users to do more. Given that end users are becoming more and more capable, how can we enable them to do more? It’s some work we’re doing between our SQL Server team and our Office team to enable end users to work with vast amounts of information without tying up very expensive corporate BI resources.

So, what I want to do now is switch over to a demo that we’ll do, if we can get the demo on the screen. There we go.

What I’m going to be demonstrating is a new version of Excel and Office, what we call internally our Office 14 products. They’ll be coming out next year and are actually going to beta very shortly; we’ll be releasing them in beta later this quarter, together with a new version of SQL Server Analysis Services that will ship roughly coincident with the new version of Office.

When we think about BI and about empowering end users, one of the attributes we know is complicated is that today people using BI have separate BI systems and separate BI applications that are quite costly to purchase and acquire, and even more costly to train and get their users working on.

So, as you might expect at Microsoft, when we think about BI, we think about an application that’s perhaps more broadly used called Excel.

So, what I want to do here is work with some information. I can go ahead and load some data in; I’ll load it from a database and bring that in, and I now have access to all that information.

And I’ve got several tables that I just imported here. By the way, this application is an application for an online movie service, and the BI scenario that I’ll be running is one where I’m looking at and comparing our results for the online service versus the box office results for different classes of movies.

PARTICIPANT: (Off mike).

BOB MUGLIA: That’s correct. The question is whether the tabs are individual tables, and that’s exactly right. In fact, I’ll show you here: this is the different media types, here is some pricing information, and then we’ve also got purchases. This is actually a table that has all of the purchases that occurred during this particular time period for this organization.

And the interesting thing here, let me just pause and tell you, is that this is all being run on a laptop. By the time we ship, this will run very effectively on a 4-gigabyte laptop; it is actually running on a laptop today.

And just to give you an idea, it’s sort of interesting to see that this laptop has 100 million rows of data in it. I can just scroll through that data, and as you see it’s 40 million, 50 million, 60 million, 70 million rows, et cetera.

And to put this in context, if you remember early versions of spreadsheets, Excel through the 2003 release could only support 65,536 rows of information, and the new version of Excel supports just over a million rows. But if I wanted to do a query on this data, say a sort on purchase date, I would ordinarily need to come back later in the morning to see the results. Here I can just do the sort, like that, and we’re getting millisecond response on queries against hundreds of millions of rows of data. In fact, I can even do a little query: I can look at purchase geography and, instead of selecting all, select just Europe and the UK, and the query result is returned essentially instantly.
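
For the technically inclined reader: the speed being demonstrated comes from scanning compact in-memory columns rather than paging a spreadsheet off disk. A minimal Python/numpy sketch of that columnar idea, not the actual engine being demoed, with invented column names:

```python
import numpy as np

# A toy columnar, in-memory fact table: one numpy array per column.
# 10 million rows is enough to see the effect on a laptop; the demo's
# 100 million rows work the same way, just with more RAM.
n = 10_000_000
rng = np.random.default_rng(0)
geography = rng.integers(0, 5, n, dtype=np.int8)           # 0=US, 1=Europe, 2=UK, ...
purchase_date = rng.integers(0, 3_650, n, dtype=np.int16)  # days since some start date
amount = rng.random(n).astype(np.float32) * 20.0

# A filter like "Europe and UK only" is one vectorized scan over a
# small column: tens of milliseconds, not a trip to disk.
mask = (geography == 1) | (geography == 2)
europe_uk_revenue = amount[mask].sum()

# Sorting by purchase date is likewise a pure in-memory operation.
order = np.argsort(purchase_date, kind="stable")
print(f"Europe+UK revenue: {europe_uk_revenue:,.2f}; first row after sort: {order[0]}")
```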

Now, again, the interesting thing is that this is all working on a user’s laptop as they operate on this set of information. But the really interesting power comes when users can apply the capabilities that are present in Excel, which they know already, to put together BI applications.

So I’ll create a pivot table with this, something that is reasonably well understood, and we’ll put a few different pieces of information in there. I’ll put the number of tickets sold in there, I’ll put genre in, and then I’ll put in some of my purchase information, the number of purchases, and create a little pivot table from that.

Let me go ahead and make this into a chart, so I’ll make it a pivot chart. It’s probably better shown as percent of total, so that I can look at the relative share of our sales versus the box office. And one of the new features in Excel is the ability to create something called Slicers; Slicers are views of data of the kind that would commonly be used as a BI application is put together. So I’ll now create a few Slicers: I’ll take media rating and create a Slicer on that, and media provider. Just to speed up the demo, I have a little macro that will put in the rest of the Slicers. And I may as well make this look pretty, so I’ll go ahead and apply a theme.
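
For readers who want the pivot made concrete outside Excel, here is a small pandas sketch of the same shape of analysis; the tables and column names are invented stand-ins for the demo’s movie data, not its actual schema:

```python
import pandas as pd

# Hypothetical stand-ins for the imported tables: media holds one row per
# title (with its box-office tickets), purchases one row per transaction.
media = pd.DataFrame({
    "title_id": [1, 2, 3, 4],
    "genre": ["Action", "Drama", "Action", "Comedy"],
    "tickets_sold": [900, 400, 700, 300],
})
purchases = pd.DataFrame({
    "title_id": [1, 1, 2, 3, 4, 4, 4],
    "year": [2008, 2008, 2008, 2009, 2008, 2008, 2009],
})

# The pivot: genre down the rows, online purchases vs. box-office tickets
# as the measures, shown as percent of total like the pivot chart.
joined = purchases.merge(media, on="title_id")
pivot = pd.DataFrame({
    "num_purchases": joined.groupby("genre")["title_id"].count(),
    "tickets_sold": media.groupby("genre")["tickets_sold"].sum(),
})
print(pivot.div(pivot.sum()))

# A Slicer is essentially an interactive filter applied before grouping,
# e.g. restricting the view to 2008 purchases only:
print(joined[joined["year"] == 2008].groupby("genre")["title_id"].count())
```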

OK. So what I have now done, in a period of just a few minutes, is build a BI application, and as an end user I can now work with it. For example, I can ask: what are all the movies from 2008 that were rated PG-13 in the United States? And, boom, I can get different views of this data. If you think about what it typically takes to build a BI application, you usually need OLAP specialists within your organization who take your data, pull it out of your transaction systems, and then configure it into the form of OLAP cubes to be manipulated and worked with. That’s necessary because access to the bits on disk is so slow that, to build a high-performance BI application, the data must be very carefully structured into a cube so the information can be manipulated based on the needs of that specific application.

But in this case, I don’t have to do that. Because the data is all in memory, it is just as fast to access any one piece of data as any other, and the system actually builds the relationships among these different pieces of data for the end user. The end user did not have to lay out those tables; the tables were all effectively laid out for them, because the relationships could be inferred from the access patterns, and there were no performance implications because it’s all in memory. And the compression associated with this is very, very powerful: we’re seeing on the order of 10 to 20 times compression. So on a 4-gigabyte system you’ll be able to get 40, 50, even 100 gigs of information available to an end user, and manipulated by an end user, because we’re compressing it down so effectively.
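
A large part of such compression typically comes from dictionary encoding of low-cardinality columns. Here is a toy Python sketch of that one technique, with illustrative data; real columnar engines layer bit-packing and run-length encoding on top, which is how ratios reach the 10-20x range cited above:

```python
import numpy as np

# Dictionary encoding: store each distinct value once, then keep one
# small integer code per row instead of the full string.
rng = np.random.default_rng(1)
genres = rng.choice(["Action", "Drama", "Comedy", "Horror"], size=1_000_000)

dictionary, codes = np.unique(genres, return_inverse=True)
codes = codes.astype(np.int8)        # 4 distinct values fit in one byte

raw_chars = sum(len(g) for g in genres)               # naive string storage
encoded = codes.nbytes + sum(len(g) for g in dictionary)
print(f"dictionary encoding alone: {raw_chars / encoded:.1f}x smaller")
```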

Now, this is a simple BI application that I just put together, and it’s quite interesting; I might want to share it with some folks. The way I would share it, of course, is to publish it to SharePoint, and we’ll show you a little prototype of our early version of SharePoint 14. This is the set of BI applications that have been published, and I happen to have the one that was previously published here. Now another end user can work with this information: an end user created this BI application in just a few minutes and published it to SharePoint and to SQL Server, because it’s published in conjunction with a backend SQL Server database. And now, from a Web application, any other end user collaborating with that person and interested in the information once again has access to that same information. So, again, I can get these different views, and in this case the Web application is connected back to an in-memory SQL Server database running on a production server.

One of the fascinating attributes of this is the performance implications, and the kinds of results we’re seeing. We have seen queries in our lab that today at Microsoft take 40 servers 15 minutes to run. So for a query that takes a key production BI resource in our backend datacenters 15 minutes, we are now seeing results where end users can work with that same information and get a sub-second, or few-second, response time.

The exciting thing to me about this is what happens when we empower end users. If it takes an expensive corporate resource 15 minutes to run a query today, and I can put that capability in the hands of an end user running a $1,000 computer who gets the same question answered in just a couple of seconds, what kinds of questions can they ask that today they could not, because of the costs and the inaccessibility of the information? What sorts of discoveries can be made? What sets of business analysis and marketing questions can be answered?

I mean, we’re already beginning to use it internally, and it’s kind of exciting. We’re looking at our business: server sales are down, all these changes are happening in our markets. We’re now beginning to empower our marketing people to use this sort of technology to get questions answered that they simply couldn’t answer before. So I think these are the sorts of things where IT can be very transformative and have a massive impact on the business results of an organization, because technology will enable us to do things we couldn’t do before. That’s a fast demo of something you’ll see coming: it goes to beta in just a few months, and it will be available next year in the final product.

Now what I want to do is shift to the datacenter and talk about the trends in virtualization, the trends coming in the cloud, and how they will have a transformative impact on the way we build applications and the kinds of solutions our organizations can create.

Let me just ask you a question: how many of you are using virtualization in production today? (Show of hands.) So, the majority of people. That’s pretty typical; I think most organizations have begun to use virtualization, and some have consolidated on it heavily and rolled it out very broadly. That’s certainly true in larger organizations, which I know all of you run. So it’s a very common thing. Typically people today are using virtualization for consolidation: to drive up utilization and reduce their power and datacenter footprint. And in many cases people have begun to look at virtualization as an enabler of business continuity, with secondary datacenters being failed over to achieve the continuity their organizations need.

There is an important transition coming over the next few years, which is the transition toward cloud-based systems, whether those systems are on premises within our organizations in the form of private clouds, or hosted in the form of a public cloud: the idea that we will move to a pool of computing resources where applications can elastically take advantage of that pool. The transition here is really the next stage in the move away from the physical system. Several years ago, as I’m sure you’re familiar, when a business unit came and asked for a new application to be hosted, the classic response from IT was to provision a dedicated server, or set of servers, for that business application. That meant an ordering process, and it was not uncommon for the answer to be: we can have that hosted for you in four, six, eight, 12 weeks. We’re now moving to a world where, through virtualization and a consolidated, virtualized set of resources, we can bring up a virtual machine for a new application in a matter of minutes or hours, and that’s a fairly major enabler in making businesses more agile and responsive.

Well, the next step is really the step toward the cloud, where instead of having an administrator create that virtual machine, the system will allocate virtual machines based on ongoing needs. So, for example, if you’re a retail organization with a major push right after Thanksgiving, more Web servers will get allocated based on the needs of the Web site on that major shopping day, and the system will dynamically adapt; or perhaps at end of quarter your financial systems will utilize a larger percentage of your resources. All of these things will be elastically shrunk or grown based on the behavior of the application.
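
A minimal Python sketch of that elastic-allocation idea; the names, numbers, and policy are illustrative assumptions rather than any real product’s controller, but the shape is a policy that sizes the pool from observed load:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float,
                      min_instances: int = 2,
                      max_instances: int = 100) -> int:
    # Size the pool to the observed load, clamped to a declared range.
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# The post-Thanksgiving spike: the pool grows for the shopping day,
# then shrinks back as traffic subsides.
for load in (500, 12_000, 800):                    # requests per second
    print(load, "->", desired_instances(load, capacity_per_instance=250))
```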

Again, this takes knowledge of the application. It takes a set of management systems that understand not just what the underlying virtualization infrastructure can do, but what the applications actually require, in order to grow and shrink them. It will also mean that, as IT professionals, we will be interested in building applications that can scale out to take advantage of a broad set of resources. Many of you today are building Web service or SOA-based applications; that’s the common application pattern today. We see a world where we shift to what I frequently think of as the fifth-generation application model: a scale-out application model.

Going back through history, when I was in college the classic application model was the monolithic, or mainframe, application model. I have the same story many of you do about dropping my punch cards on the floor on the way to a college computer science exam. That was the first application model.

When I first joined Microsoft in the late 1980s, we had the advent of the client-server programming model; the first presentation I gave when I started on SQL Server 21 years ago was about the brand-new client-server programming model. In the mid-1990s, we saw the advent of the Web programming model, the Web server programming model, with HTML and Web-based systems. And then around 2000-2001 we saw the advent of SOA, or Web services, where business applications are wrapped in Internet protocols so they can be accessed.

The next-generation programming model, the scale-out programming model, is really an extension of this: taking a Web services or SOA-based application and enabling instances of it to scale out or scale in based on the load on it. And that will come with the advent of cloud computing.

So it’s an evolution, like all of these others, a step forward. Our existing applications will need to run in that environment, but they will not have the full ability to take advantage of those scale-out capabilities.

So what are the things we’re doing to invest in this and help drive it forward? Well, we have several. We’ve obviously got virtualization; virtualization is a key investment Microsoft has made over the last five years, and obviously there’s quite a bit of competition in this space. We think we have an incredible offering that provides for the needs of the vast majority of companies at a fraction of the price of what is available from others. More importantly, and perhaps to the real point, one of the key things we’re doing is focusing on management at a service level: thinking about managing the physical infrastructure, the virtual infrastructure, and the services or applications, and combining those holistically to enable a common management solution across both physical and virtual environments, most importantly with an understanding of the application itself.

When you talk about management, and about where management is going with the cloud, it is insufficient to talk simply about virtualization. Virtualization is a requirement and a key enabler, but it is not sufficient to solve the entire management problem. You need to think about managing the entire application holistically. This is an area where we have a very large amount of resources and a very strong, growing business in overall services management; in fact, I think it’s fair to say we’re the fastest-growing management company on the planet right now.

I talked about managing services; one of the key enablers of that is modeling, defining models to describe what an application is all about. I talked about Dynamic IT as thinking about how you can reduce costs across the lifecycle of a business application: the lifecycle beginning at the point where the application requirements are defined by a given business unit, going through the architectural description of that application, through the development and design process, into the transition to operations, through ongoing operations and management, and ultimately to the retirement of the application.

That’s a cycle. All of those stages are connected, from the time a requirement is defined, to the point the app is developed, until it’s deployed and ultimately lives in the datacenter, because that’s what’s really running. But today the connections across that lifecycle are essentially nonexistent at a system level. There is no system on the planet, not Windows, not Linux, not UNIX, not the mainframe, where all those pieces of the application lifecycle are connected together.

We have had CASE tools and modeling tools for 30 years, but they’ve not been connected throughout that entire lifecycle. Modeling, and abstract modeling, is an incredibly important part of IT development, but by not operating with executable models, physical models that define the reality of what is in the production system, we are missing a big opportunity. Today the pieces of the application lifecycle are connected in the heads of our people. They’re connected through e-mail, through instant messaging, through hallway conversations and casual conversations. They are not built into our business process. In fact, we really don’t have what you could think of as an ERP system for our overall IT application development lifecycle.

What we’re doing is investing in creating that, and the most fundamental step Microsoft is taking is building a modeling system that can describe the service running in production at a very detailed, executable level, and connect that back into, and through, the development process. That means codifying the characteristics of the application (what are all the components, what do they consist of, what’s important about them, and how do they relate to other application components) in the form of a model that is understood by the management system, and by the development system as well, and connecting those together. That’s an important set of investments; you’ll see it come in over the next few years.
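
To illustrate what such a model might capture, here is a hedged Python sketch; the schema and names are hypothetical, not Microsoft’s actual modeling format, but it shows how something like a deployment order can be derived from declared relationships instead of living in someone’s head:

```python
# A hypothetical service model: components, their kinds, and dependencies,
# in one artifact that development and management tools could both read.
movie_service_model = {
    "service": "OnlineMovieStore",
    "components": {
        "web_frontend": {"kind": "web_role", "depends_on": ["catalog_api"]},
        "catalog_api": {"kind": "worker_role", "depends_on": ["movies_db"]},
        "movies_db": {"kind": "sql_database", "depends_on": []},
    },
}

def deployment_order(model: dict) -> list:
    # Dependency-respecting order (a topological sort over the declared
    # relationships), derived mechanically from the model.
    comps, ordered, seen = model["components"], [], set()
    def visit(name):
        if name not in seen:
            seen.add(name)
            for dep in comps[name]["depends_on"]:
                visit(dep)
            ordered.append(name)
    for name in comps:
        visit(name)
    return ordered

print(deployment_order(movie_service_model))
# -> ['movies_db', 'catalog_api', 'web_frontend']
```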

When we first started talking about Dynamic IT, we said it was a 10-year vision. We’re six years into it. It will still take us a few more years to roll out and connect all of these modeling systems together, but that is inexorably where we’re headed. And in doing so, we think we’ll fundamentally be able to transform the software development process within an organization, enabling applications to be built much faster, and maintained and kept reliable at a much lower cost.

Speaking of scale out: you have to have models to do scale-out programming. The idea that you can implement scale out through virtualization alone is almost absurd; you need more than that. The applications themselves have to participate. For the system to allocate more instances of an application, there has to be a model: the system has to understand when a parameter is being stretched within an application, and how to respond when the load is heavy. Today people have that in their heads. They get it right sometimes; they get it wrong sometimes. By describing the application, its relationships, and its characteristics in a model, the system will be able to drive that scale out or scale in based on load, and again all of those things connect.
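
Continuing the hypothetical model above, a scaling rule the system could evaluate might look like this sketch; the thresholds and names are illustrative assumptions:

```python
# The model, not an operator's memory, declares when a component may
# scale out or in.
scale_rule = {"metric": "cpu_percent", "scale_out_above": 70,
              "scale_in_below": 30, "min": 2, "max": 50}

def next_instance_count(observed_cpu: float, current: int,
                        rule: dict = scale_rule) -> int:
    if observed_cpu > rule["scale_out_above"] and current < rule["max"]:
        return current + 1             # load is heavy: grow the pool
    if observed_cpu < rule["scale_in_below"] and current > rule["min"]:
        return current - 1             # load is light: give capacity back
    return current                     # inside the declared band

print(next_instance_count(85.0, current=4))   # -> 5
print(next_instance_count(12.0, current=5))   # -> 4
```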

The world is heterogeneous; we understand that, and we’re investing more and more in heterogeneous interoperability. This week we’re announcing the availability of our heterogeneous support for operations management: managing Linux-based systems and a wide variety of UNIX-based systems within our System Center product. As we move forward with this modeling work, we will be releasing our intellectual property under licenses that enable others to implement it on heterogeneous systems. We will do some of that work ourselves, we’ll work with partners, and we will work with a variety of standards bodies.

We just had a conversation earlier this week about the DMTF and some work being done there that will likely end up having an impact on this modeling technology. We’ll be working with those standards bodies, as well as other emerging standards bodies as time goes on, to make sure that, while I think Microsoft will provide the spark of innovation and will be driving the industry on this, it will in fact be an industry-wide effort that extends beyond the Microsoft platform.

So all of this now connects to services, because when you think about cloud services, the idea of services is critical: the idea that applications are described as a service, and that applications are virtualized and can be consumed as part of the overall IT resource pool that you put together. There are many other key pieces needed to enable services; what I just talked about is very important, but it is not the full picture. The kinds of things that services require include issues like federation of identity.

You can understand how complicated it is within your own organization today, where you may use a few services. Even in the early days of consuming services, when employees come on board or are terminated, you need to be able to change their access to resources immediately. Now imagine the complexity of having multiple services within an organization, maybe five, 10, 20, someday maybe 100 different suppliers that you’re getting services from, and imagine the complexity if each of them had its own identity system that you had to manage.

Obviously there’s a need for those things to be connected and federated to a centralized identity system. That’s the kind of fundamental work Microsoft is doing in addition to the cloud-based services I talked about. We are rolling out a number of these in a variety of ways. We’re providing a set of business productivity services; in some sense those are the first services we’re rolling out, with things like Exchange Online and SharePoint Online. Quite a few companies are beginning to work with those, and in fact we have quite a few significant organizations that have rolled out broadly on Exchange Online and SharePoint Online, using the breadth of Microsoft’s online services to lower their IT costs and gain great advantages.
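
A minimal Python sketch of the federation idea, with deliberately simplified key and token handling that stands in for real protocols: every service trusts tokens signed by one central identity provider, so provisioning and de-provisioning happen in exactly one place:

```python
import hashlib, hmac, json, time

IDP_KEY = b"secret-shared-with-the-identity-provider"   # illustrative only

def issue_token(user: str, groups: list) -> str:
    # The central identity provider signs a small set of claims.
    claims = json.dumps({"user": user, "groups": groups,
                         "expires": time.time() + 3600})
    sig = hmac.new(IDP_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + sig

def service_accepts(token: str) -> bool:
    # Any of the 5, 10, or 100 federated services runs the same check;
    # none of them keeps its own account store.
    claims, sig = token.rsplit(".", 1)
    good = hmac.new(IDP_KEY, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False                   # tampered or forged token
    return json.loads(claims)["expires"] > time.time()

print(service_accepts(issue_token("alice", ["finance", "uk-office"])))  # True
```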

It’s a straightforward thing, in the sense that we know customers spend a fair amount running their internal systems. All of our studies show that the majority of dollars spent running these production systems typically goes to the people costs of managing them. The mailbox example, the Exchange example, is pretty clear. Having talked to a large number of customers and looked at what people do, we know that on average customers spend about $12 a month running a mailbox. Some spend less, some spend more; most, frankly, don’t know how much they’re spending. But if you work all the way through it, what you discover is that it’s about $12 a month on average.

How does that break down? Well, the software is a small percentage of it: less than $2. Even the hardware is just $3 or $4. The majority of the dollars spent are spent on the people costs of administering it. What we’re trying to do by building these services at very high scale is lower the hardware acquisition cost as well as the operations cost.

So we can provide our customers with great value, pricing our service at less than $10 a month against that $12 figure: cutting costs for an organization; providing a highly available, scalable service that in essence is a requirement, like the electric lights, for most companies; providing customers with an always up-to-date service, et cetera; and at the same time saving customers money. That’s the sort of potential these things have.
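
Worked out as arithmetic, using the figures cited above; the fleet size and the exact $10 price point are illustrative assumptions:

```python
# The mailbox arithmetic from the talk as a worked example. The $12
# average and the sub-$2 software and $3-4 hardware slices are the
# figures cited; the people cost is the remainder.
mailboxes = 10_000
on_prem = 12.00                          # average cost per mailbox per month
software, hardware = 2.00, 3.50
people = on_prem - software - hardware   # roughly $6.50: the big slice

online = 10.00                           # "less than $10 a month"
annual_savings = (on_prem - online) * mailboxes * 12
print(f"people share of on-prem cost: {people / on_prem:.0%}")  # ~54%
print(f"annual savings for the fleet: ${annual_savings:,.0f}")  # $240,000
```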

I mentioned some similar things happening on the infrastructure side, System Center online being a good example of something quite similar, through which we think we can lower the cost of desktop management. Remarkably, when you talk to IDC or Gartner, et cetera, and ask about the roughly $3 trillion worldwide IT spend, give or take, about $100 billion of that is spent managing desktops.

We kind of created this problem at Microsoft. We think we can lower that cost pretty substantially; we have already lowered it with products like System Center Configuration Manager, but we can lower it further through these services. And that’s the first step that really excites me: being able to provide customers with a much better value, with a service they can consume and use to make their business more effective.

The other piece of this, and perhaps the most complicated piece, is the broad set of application platform services that will be emerging. We’re investing pretty heavily in the application platform right now: taking our existing app platform, which exists today with Windows Server, SQL Server, and .NET, and providing a hosted equivalent of it, one that enables compatibility on the one side and, at the same time, enables this new scale-out, cloud-based programming model on the other.

So whether those services are hosted privately in your own on-premises datacenter, enabling you to build your own private cloud; whether they’re run through Microsoft Azure, which is our brand for the online service we will be providing and hosting; or whether we’re selling our platform to other hosters that you choose to use, running it in a public or hosted cloud operated by the literally thousands of hosters worldwide that run the Microsoft platform, we want to provide that set of choices for you.

Again, the idea is that there is a platform shift happening, to a scale-out platform that you will begin to want to build upon, taking your existing investments in things like SOA and carrying them to the next step. We want to enable you to leverage that on-premises, with a variety of hosters, or through Microsoft Azure in our cloud-based services, and, again, with a common programming model across all of them.

We’re pretty excited about this stuff. We think, again, it has some strong value propositions in lowering the cost of building applications and providing them at global scale. One of the big advantages of these cloud-based services is the global scale we can give and enable without the associated datacenter build-out costs.

So, for example, if you’re trying to expand out and reach markets in Asia or other parts of the world, we have Asian datacenters, and you can leverage those capabilities for building business applications. Obviously, it can also be used as additional capacity on demand to meet changing business requirements; that elastic capability is one of the more interesting aspects we often think about. So this is still pretty early days, but we do think it’s pretty exciting in its potential.

I guess the bottom line is that we are very strongly committed to being a partner with you as you build out your systems into the future, as you invest in the future. We very strongly believe that IT will help companies around the world provide differentiation and innovation within their organizations, enabling new business solutions to be built that drive business results, all focused on the business results and the bottom line.

We’re committed to being a strong partner and to working with you. We are investing very heavily: my organization is investing north of $2.5 billion this year in R&D. So we have a massive investment in new applications and new underlying platform technologies that we think will help you drive business results. Whether it’s examples like the in-memory business intelligence, the management examples I provided, or a lot of different things I didn’t talk about, for example the work we’re doing in our developer tools to lower the cost of building business applications and let you roll them out more quickly, all of those are good examples of things we’re focused on, innovation we’re spending a lot of money on so that you can use and leverage it to drive down your costs and, most importantly, build business solutions that provide you with differentiated value.

-end-
