Remarks by Bob Muglia, Senior Vice President, Windows Server, Microsoft Corporation
Microsoft Windows Server Platform: The Next Three Years
Microsoft Professional Developers Conference 2005
Los Angeles, California
September 15, 2005
ANNOUNCER: Ladies and gentlemen, please welcome Microsoft Senior Vice President, Windows Server, Bob Muglia.
BOB MUGLIA: Welcome to PDC Day Three, Server Day. This morning we’re going to talk about servers, in particular we’re going to talk about Windows Server, and where we’re going for the next three years. What I want to do here is give you a platform guided tour and sort of a taste of Windows Server for the next three years. Now, there’s a lot of things that are coming in Windows Server. We’ve got releases coming about every six months that will impact developers. There’s a ton of additional functionality that we’re delivering that affects other audiences, and a lot of things for IT as well. But, we’re not going to cover that today. Today, I just want to talk about the platform, and the way the platform is going to evolve. And that evolution really starts now as we move into the end of ’05 with the next release of Windows Server, Windows Server 2003 R2.
What do we do with R2? Well, what we’ve been doing is, we’re building an interim release that adds functionality on top of 2003. It has a wide range of features, a lot of great stuff, branch office, replication, a whole set of great functionality being added to Windows Server 2003 to really enhance the overall experience. But there’s some key things in R2 for developers, and that’s what I want to focus on.
First, the one that is unambiguously the most important, the one that will impact the most people here, and the most people who develop for Windows is the .NET Framework, the 2.0 version which we’re shipping in R2. So, of course, it’s all the things you’ve been hearing about, all the things that are being delivered with Visual Studio, but it’s just being included in the R2 release of Windows Server. So, as customers deploy new versions of servers or upgrade them to R2, this will just be there, and you can count on it as a standard operating system service.
There is a lot more that we’re doing. We’re doing things around storage, in particular making it easier to connect to SANs in this release, making it more straightforward to take Windows Server and storage-based applications and configure and connect to SANs with VDS 1.1.
Active Directory has a key new feature. I’ll talk more about Active Directory and how it is going to unfold over the next few years, but I want to put the focus on what’s coming now, which is Federation Services. This is an open protocol, WS-Federation, and an open set of services that allow Active Directory systems to federate, and user accounts to be authenticated based on the credentials that exist within a partner organization. It’s a great feature for IT, there’s lots of developer opportunity with this feature of Active Directory, and lots more to come with Active Directory.
Another thing I want to highlight in R2 is our UNIX interoperability. We’re putting a lot of focus on helping developers who have UNIX applications move them on top of the Windows platform without having to rewrite the whole thing. The new Services for UNIX, which are included in R2, really provide the substance to simplify that development and that porting, allowing for UNIX-based semantics to be included in the application. There are a lot of great features here, and a lot of great UNIX utilities for UNIX administrators.
One thing I want to highlight here is the database interoperability. The biggest and most difficult problem for developers who have built applications on UNIX and want to move them onto Windows is how to integrate their Oracle and other databases into the platform. We now have services to allow for that connectivity directly, a big step forward.
Other features in R2 that impact developers: WS-Management, a new standards-based protocol. We just submitted that protocol, actually it’s being announced today, to the DMTF for standardization. This is a protocol with a lot of industry rallying around it; companies like Intel and others are working with us to build WS-Management capabilities directly into the hardware. But you can extend WMI-based applications with WS-Management as well, allowing for heterogeneous management. When we think about heterogeneity, we think about how open protocols connect to other systems. Having an upgraded management protocol that’s secure and Web services based is critical, and that’s what WS-Management is all about.
Another really important thing coming in R2 is the next generation of the Microsoft Management Console, MMC 3.0. For the last 10 years or so there has been a quandary in management about what to do. If you take a look at server-based applications that need management consoles, what have people chosen to do? Well, some have chosen to write to MMC, and that provides some integration benefits, but today’s MMC consoles aren’t all that great, and MMC doesn’t provide the level of heterogeneous support that developers, customers, and IT want.
Some applications have chosen to go with the least-common-denominator approach and build a Web application. That certainly works from anywhere, but it isn’t a very rich experience. And a lot of people have chosen Java, because it seems to provide a balance across the board, but it has a lot of tradeoffs and compromises, too. Well, MMC 3.0 provides the first step to a no-compromise management experience on Windows. The key to MMC 3.0 is that it enables you to write a managed-code .NET application, so it’s incredibly straightforward to build an MMC 3.0 app. Since we introduced this at Microsoft we’ve had an explosion of MMC 3.0 work going on across the server organization, with developers building really great management consoles very, very rapidly.
We’ve updated the guidelines, which will go on the Web shortly, to talk about what these consoles should look like. The next step, to really provide heterogeneous support, is coming in Longhorn; that’s when we take Terminal Server and allow for application remoting. You’ll be able to take any Windows server in Longhorn and remote an application to another machine. That’s great for terminal server applications, but it’s also great for management, because it means you can run the management console remotely, with just the Remote Desktop client present on the client machine. We will also have a gateway in Longhorn to allow this to go through firewalls. So really what that means is that with MMC 3.0 you can build a console that runs anywhere. It’s a good step.
Another really important thing starts in this timeframe. It’s separate from R2, but it’s really being delivered in the 2005 timeframe, so I want to put a focus on it: SDM, the System Definition Model, and the work that’s being done integrating it with Visual Studio. We’ve talked about how models will change the way management works. Models provide, for the first time, a clear description of what an application is. It’s the first time that developers can specify all of the components of an application and their relationships. Visual Studio 2005 provides the foundation to allow that to be built. Now we’re working with partners to take those models and turn them into really complete solutions that close the loop between operations and development. This is the first key step in that delivery of SDM.
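To make the idea concrete, here is a minimal sketch, in Python rather than the actual SDM tooling, of what a model like this captures: named components plus the relationships between them, which tools can validate before deployment. All class and component names here are illustrative, not part of SDM itself.

```python
# Hypothetical sketch of an SDM-style application model: components
# plus the connections between them, validated before deployment.

class AppModel:
    def __init__(self):
        self.components = {}   # name -> kind, e.g. "WindowsApp", "WebService"
        self.connections = []  # (consumer, provider) pairs

    def add_component(self, name, kind):
        self.components[name] = kind

    def connect(self, consumer, provider):
        self.connections.append((consumer, provider))

    def validate(self):
        """Return names referenced by a connection but never declared."""
        return [n for pair in self.connections for n in pair
                if n not in self.components]

model = AppModel()
model.add_component("CallCenterClient", "WindowsApp")
model.add_component("ActionManager", "WebService")
model.connect("CallCenterClient", "ActionManager")
print(model.validate())  # -> [] (no dangling references)
```

The point is only the shape: once an application’s parts and relationships are declared as data, deployment and monitoring tools can consume the same description.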
With that, what I’d like to do is invite Dmitriy Nikonov to come up to show us how partners like Macrovision and Avicode have taken and built on SDM and Visual Studio to close that loop.
DMITRIY NIKONOV: Thank you, Bob.
Good morning, everybody. What you’re seeing right now is an application diagram, which is a visual presentation of an SDM model. This represents an insurance company that has a call center application, a Windows application connected to an Action Manager Web service, where policy actions are recorded, and to a management component that is the interface to legacy systems, live systems and other systems. With this, the developer can now specify how and what will be monitored in the application by placing a MOM management pack on the application diagram. This MOM management pack is designed by our partners as code, and allows you to produce and configure the actual MOM packs on the MOM server.
BOB MUGLIA: So essentially what you’re laying out through this visual diagram is an SDM model. What Avicode has done is they’ve taken and extended that by enabling a management pack to be built directly.
DMITRIY NIKONOV: Yes, they’ve extended it through the SDM extensibility SDK, and now you can specify what kinds of exceptions and performance issues you want these applications to be monitored for, such as which function types and entry points. By default all the entry points are included; the System.Web.UI base ProcessRequest is one of them.
BOB MUGLIA: We can set the threshold to actually trigger an event to be sent to MOM for operations to be monitored and then an alert to be raised.
DMITRIY NIKONOV: Yes, exactly. And on the logical data center diagram you can specify where the MOM packs will be generated and what kind of binding types, which really means whether you’re going to do WMI instrumentation or use the Windows Event Log.
BOB MUGLIA: Great.
DMITRIY NIKONOV: So the next logical step is to prepare my applications for deployment. I’m going to map everything on my deployment diagram. So now I’m mapping my MOM packs to a MOM server, and all the other applications to their respective logical servers.
BOB MUGLIA: So once you’ve done that you’re really ready to take and create an SDM-based deployment report that really describes all of these attributes, the characteristics of an app, and helps the developer to really lay out for operations what this app consists of, it helps from a documentation perspective.
DMITRIY NIKONOV: Yes, it’s a great reference too. It’s a great documentation tool. It also creates everything you need for deployment, which is your binary for Windows applications, content files for Web applications, and whatever Windows project groups contain. And this is an example of the deployment report.
BOB MUGLIA: Then from that you can directly generate a management pack?
DMITRIY NIKONOV: Absolutely, yes. So once we’ve created the deployment report and verified that our application can be deployed, we do the actual deployment. And our partner Macrovision has designed this great add-in, Macrovision Install Shield Packager for Distributed Applications. It will take input from our SDM, it will take input from our deployment report, which I’m going to specify right now, it’s going to take the files that were generated by the deployment report, and I’m going to specify a directory.
BOB MUGLIA: So really for the first time when the developer is laying out and building the application, all the relationships, the deployment characteristics are included inside the application, and then what Macrovision is doing here is creating the MSI install packages. And they can actually create ones for developers to use, for tests, to use in the test deployment, and then for production across multiple servers.
DMITRIY NIKONOV: Yes, and they created by separating them in a logical server grouping so a developer can deploy them in their development environment, they can deploy them in a test environment, or they can be pushed through Macrovision Admin Studio to SMS for enterprise-wide deployment.
BOB MUGLIA: So this is the development side of this. Let’s take a look at what happens from an error reporting side on the back end, when the user is actually running the application.
DMITRIY NIKONOV: Once we have our build deployed, this call center application is an example of that deployment. I’m going to put in some data for the claim number, FI123, I’m going to populate some claim data, and I’m going to run through some user actions.
BOB MUGLIA: Now, this error that’s generated here is actually an error that’s being run on the back end service. So that error is an error that’s within the production server application, and that’s being picked up, of course, by MOM, based on what we just did in the SDM model.
DMITRIY NIKONOV: Now, because we designed the SDM model in a way that allows us to track four major kinds of errors, application faults, application performance faults, connectivity and security faults, operations can clearly see that this is an application fault, so a developer should be notified about that, they should know about it.
BOB MUGLIA: So the connection, there’s actually a connection here between the operations console and the developer Visual Studio, and then the developer can go in and actually see that, in fact, an error has been raised, and then take a look and find out where that error occurred.
DMITRIY NIKONOV: Take a look at the diagram, now Action Manager has a red sign on it. So the developer now knows that this application for some reason had a problem.
BOB MUGLIA: Now, it would be interesting to be able to take that problem and go deeper, and figure out where the problem existed, what was the event that caused it, and where the code was that caused the actual problems.
DMITRIY NIKONOV: I can go ahead and integrate into Visual Studio by providing the intercept plug-in, and what I see here now, it tells me that there was a PGC test exception that occurred at 8:40 a.m. this morning on this machine, and in this specific ASMX. Obviously, this information is not enough for the developer to debug the issue.
BOB MUGLIA: They need to be able to get in and see what happens from a code perspective.
DMITRIY NIKONOV: And that’s what we do now. They have the entire exception chain they can trace, and the complete stack they can trace. We know the parameters, the claim number FI123 that I just passed to the function, local variables if there are any, and finally they can actually jump to the source code. Here we show a snippet of the source code, but if I click on the link it will open the actual source file and highlight the line where the error occurred.
BOB MUGLIA: So that’s closing the loop. We’ve been talking about closing the loop between developer and operations, and I think with Visual Studio 2005 and this great work with some of our partners we’re really seeing it happen.
DMITRIY NIKONOV: Yes, definitely that’s the case. And now you can take it, you can fix it, and you can run through this loop to redeploy it back into the production environment again.
BOB MUGLIA: Thank you.
DMITRIY NIKONOV: Thank you, Bob. (Applause.)
BOB MUGLIA: So from a developer perspective, that’s really our first major deliverable on SDM, and I think it’s a very important step forward. We’re going to continue this process. In closing the loop between developers and operational systems there’s a tremendous amount of work left to be done, but we see some real progress happening now, and things that you can take advantage of.
Okay. So that’s this year, 2005; those are things that are coming very, very shortly. Let’s move on and talk about the things that are coming next year. A couple of major things: we have a new product, a new member of the Windows Server family that we’re introducing next year, the Compute Cluster Edition. We’ll be releasing the first release of Monad, our command-line scripting language. And then WinFX comes out coincident with Vista, and of course there is a redistributable for that which runs on Windows Server 2003 at the time Vista ships. So in addition to our client work, a lot of things are coming next year that affect the server.
Let me start by talking about compute clusters. In the scientific computing marketplace there’s a very large number of these things that are getting installed. There’s a good set of ISVs that are building applications, it’s a thriving ISV community, and they’re really solving compute-intensive problems for engineering organizations, oil and gas, pharmaceutical, financial, government, and universities, all sorts of different scenarios are being solved. Up to now really the focus on this has largely been on the Linux platform. The reason is that a complete solution set has not been delivered by Microsoft to pull this together on Windows. And while Linux is working for some of these situations there are some tradeoffs, and some drawbacks. The apps are not integrated into the company’s business environment. They tend to be built on a one-off basis, so there’s not a lot of consistency, the parts have to be pulled together. And we’ve heard from ISVs that there are real support issues associated with this.
So what we’re doing next year, in the first half of next year, is releasing a version of Server 2003 that’s targeted at the engineering and scientific computing market, the compute-intensive market. The focus is to build a complete platform that has the set of services required to write applications that target this market. Many of those applications are open source, many are delivered by ISVs in this space, and some are built in-house by companies and universities. But our goal here is that complete platform.
The idea is that you can take a cluster of machines. We’re building a cluster-based solution where you might have a very small cluster of, say, 4 to 8 machines, or a larger cluster of 32, 64, or 128 machines; when you think about dual-core systems, these are machines with a lot of capability given today’s hardware. You take a cluster of machines, take applications, submit a job to the cluster solution, and have that job run across the different sets of machines. Some of those applications require tight communications, so there’s a high-performance, MPI-based messaging stack to connect applications together. Some don’t require that tight coupling. We allow for that level of flexibility and deliver a complete solution that integrates with the business environment, that works with Active Directory, that can be developed and designed with Visual Studio, and that gets all the benefits of Windows. We think the ecosystem is ready for this, and we’re very excited about this opportunity. The beta of that is available now. We just went to our first beta, and that’s available on Microsoft.com/hpc, so we’re starting this process, with major things coming later this year.
With that, what I would like to do is invite Kyril Faenov up to show us a demo of the compute cluster solution.
KYRIL FAENOV: Hello, Bob.
BOB MUGLIA: Kyril, good morning.
KYRIL FAENOV: How are you doing?
You mentioned financial services as an example. What I wanted to show is an application that can take advantage of compute clusters. The Excel client has been used as an application platform in financial services, and here we have an example app that models the risk of an options portfolio using Monte Carlo simulation. This spreadsheet has about 50,000 rows, and takes several minutes to run, with the results then being displayed in this sheet.
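As a rough idea of the per-row work such a spreadsheet performs, here is a small, self-contained Monte Carlo sketch in Python. It prices a single European call option under geometric Brownian motion; all parameters are made up for illustration and have nothing to do with the demo’s actual portfolio.

```python
# Illustrative Monte Carlo pricing of one European call option under
# geometric Brownian motion -- a stand-in for the kind of simulation
# each row of the risk spreadsheet might run.
import math
import random

def monte_carlo_call(s0, strike, rate, vol, t, n_paths, seed=42):
    rng = random.Random(seed)          # fixed seed for reproducibility
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((rate - 0.5 * vol ** 2) * t
                            + vol * math.sqrt(t) * z)
        payoff_sum += max(s_t - strike, 0.0)
    # discount the average payoff back to today
    return math.exp(-rate * t) * payoff_sum / n_paths

price = monte_carlo_call(s0=100, strike=105, rate=0.05,
                         vol=0.2, t=1.0, n_paths=20000)
print(round(price, 2))  # close to the Black-Scholes value of about 8.02
```

Each row’s simulation is independent, which is exactly why this workload farms out so naturally to a compute cluster.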
So, the Excel client over time has become a very popular platform for developing applications, and we know companies that have tens of thousands of spreadsheets encompassing their applications. With the Excel Services functionality in Office “12” next year, you will be able to run these same worksheets on the server, taking advantage of server capabilities, and compute cluster solutions can provide scalable and highly available compute resources for this application.
So, let’s see what that would look like. So, instead of inputting it directly into the Excel client, you have a Web part where just the input parameters can be specified. You can request specific output parameters, and click submit. At this point, the Web part contacts the computational cluster and submits a job on one of the available resources. That same spreadsheet that you just saw is running on one of the compute nodes in the cluster, actually it’s the cluster right here of four nodes. So the results will be rendered right back here into the Web part as it executes on the compute node. So here we will see the results, and just the results you want to see are rendered.
BOB MUGLIA: Now, in most industries the applications for compute intensive solutions are developed by third parties, but in the financial services space Excel is really popular.
KYRIL FAENOV: Exactly. So, let’s take a look at what’s going on in the cluster. Right now we have two nodes handling load, as you can see in the performance monitor, and if you look at the queue, there are two jobs running and a bunch more queued. You might imagine that you want to grow the capacity of this cluster to increase throughput. Let’s bring online a preprovisioned node that will immediately jump in and start taking the load, improving the throughput of the cluster. You saw it took the load; now, after a refresh, we have three jobs running in parallel. Clearly, this is a mission-critical application, and failure is not an option. So, Bob, what I’m going to ask you to do is unplug one of these machines.
BOB MUGLIA: Okay.
KYRIL FAENOV: Bob just unplugged the network connection. The job scheduler for the compute cluster is monitoring the compute nodes through heartbeats, and when it senses a failure it will requeue the job that was running on the failed node, putting it on top of the queue so it runs on the next resource that becomes available. This takes a few seconds because of the heartbeat mechanism. We see the node become unreachable, and when we look at the console, the job gets requeued.
BOB MUGLIA: The job was pulled off that, and it just requeues.
KYRIL FAENOV: It requeues on top of the job queue.
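The heartbeat-and-requeue behavior just described can be sketched in a few lines. This is a conceptual toy, not the actual job scheduler; the class names, the tick-based timing, and the timeout value are all illustrative.

```python
# Toy sketch of a heartbeat-monitoring scheduler: when a node goes
# silent past a timeout, its job moves to the FRONT of the queue.
from collections import deque

class Scheduler:
    TIMEOUT = 3  # ticks of silence before a node is declared dead

    def __init__(self, nodes):
        self.queue = deque()                    # pending jobs
        self.running = {}                       # node -> job
        self.last_beat = {n: 0 for n in nodes}  # node -> last heartbeat

    def submit(self, job):
        self.queue.append(job)

    def assign(self, node):
        if node not in self.running and self.queue:
            self.running[node] = self.queue.popleft()

    def heartbeat(self, node, tick):
        self.last_beat[node] = tick

    def check(self, tick):
        """Requeue jobs from any node that missed its heartbeats."""
        for node, beat in list(self.last_beat.items()):
            if node in self.running and tick - beat > self.TIMEOUT:
                self.queue.appendleft(self.running.pop(node))  # front
                del self.last_beat[node]        # node is unreachable

sched = Scheduler(["node1", "node2"])
sched.submit("risk-job")
sched.assign("node1")
sched.heartbeat("node1", tick=1)
sched.check(tick=10)        # node1 went silent, so its job requeues
print(sched.queue[0])       # -> risk-job
```

Putting the recovered job at the head of the queue, rather than the tail, is what makes the failover feel immediate in the demo.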
So, what I want to show next is how easy it is to integrate this functionality of talking to the cluster into your applications. Let’s look at the source code for the Web part that we just saw. Right after you click the submit button, here is the code required to integrate with the cluster: you create the configuration, you connect to the cluster, you create the job, and you specify the resources required to run it. Right here, you add the task, you submit the job, and you set the credentials. When the job runs, it will use Active Directory to run as the user that submitted it, so administrators will be able to use the same policy and security mechanisms they already have in place.
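For readers following along, the submission flow just walked through (connect, create a job, set resource requirements, add a task, attach the submitting user’s credentials, submit) might look roughly like this. The `ClusterClient` class below is a hypothetical Python stand-in, not the actual compute cluster API.

```python
# Hypothetical mirror of the job-submission steps from the demo.
# ClusterClient is an illustrative stand-in for the real cluster API.

class ClusterClient:
    def __init__(self, head_node):
        self.head_node = head_node
        self.jobs = []

    def create_job(self, name, min_procs, max_procs):
        # a job declares its resource requirements up front
        return {"name": name, "min": min_procs, "max": max_procs,
                "tasks": [], "user": None, "state": "created"}

    def submit(self, job, username):
        job["user"] = username     # job runs as the submitting AD user
        job["state"] = "queued"
        self.jobs.append(job)
        return len(self.jobs)      # job id

cluster = ClusterClient("head-node")
job = cluster.create_job("portfolio-risk", min_procs=2, max_procs=4)
job["tasks"].append("run-monte-carlo.exe")   # the work to execute
job_id = cluster.submit(job, username="CONTOSO\\kyril")
print(job_id, job["state"])  # -> 1 queued
```

The key design point is the last step: carrying the submitter’s directory identity with the job is what lets existing security policy apply to cluster work.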
BOB MUGLIA: That’s actually a pretty big deal for a lot of these companies, because up to now these compute clusters based on Linux had to run outside of the company’s credential management system and be treated as separate islands, and just getting access to the resources, we’ve heard from customers we’ve talked to, is a real problem.
KYRIL FAENOV: Exactly. And so, with this, you can integrate advanced cluster functionality very easily into the application. I’d like to invite you to stop by the hands-on lab. We actually have a cluster there where you can try your hand at writing parallel applications using compute cluster solutions. With that solution next year, we’ll make it easy for developers to take their compute-intensive apps and scale their functionality across the cluster, and easy for IT professionals to deploy and administer these clusters.
BOB MUGLIA: Thank you.
KYRIL FAENOV: Thanks, Bob. (Applause.)
BOB MUGLIA: There’s not a tremendous amount of developer work required for the Compute Cluster Edition. The real key is to take these classic scientific and compute-intensive applications and have them run on the Windows platform, and our goal is to provide a complete platform solution where those applications can be built. We think it’s a great new market opportunity. It’s a great, growing part of the server market, and we’re really glad to be bringing out a solution here.
Now, let’s talk about Monad. Monad is an object-based command language. It’s been under development for a while, and we’ve spent a lot of time trying to understand the needs that IT in particular has to build command scripts, and what the management issues are. What we’ve done is build a next-generation scripting language focused on meeting those needs. Monad has a lot of great capabilities, the most important of which is that it’s built on the .NET Framework and integrated with all the services that are there. When you want to take your application and write management extensions, write management commands for it, you can do that using standard .NET code. And the key thing Monad delivers is this object-based orientation. It’s really, in a sense, the first command language focused on allowing objects, with the full set of services an object possesses, to be passed between different commands. From your perspective, you just build a .NET object that focuses on the management side, and then add a little bit extra to define the cmdlet. From that, a very thorough set of commands can be generated to do command-line-based management.
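The object-based orientation is easiest to see by contrast with text pipelines. Here is a small Python sketch of the idea, with the command names, process list, and handle counts all made up for illustration; it is an analogy for the pipeline concept, not Monad syntax.

```python
# Sketch of Monad's key idea: pipeline stages pass structured objects,
# not flat text, so downstream commands filter and sort on real
# properties instead of parsing columns of output.

def get_process():
    """Emit objects with typed properties, not lines of text."""
    return [{"name": "sqlservr", "handles": 812},
            {"name": "notepad",  "handles": 53},
            {"name": "w3wp",     "handles": 640}]

def where(objs, predicate):
    return [o for o in objs if predicate(o)]

def sort_by(objs, key):
    return sorted(objs, key=lambda o: o[key], reverse=True)

# Roughly analogous to a pipeline like:
#   get-process | where { handles > 100 } | sort handles
result = sort_by(where(get_process(), lambda p: p["handles"] > 100),
                 "handles")
print([p["name"] for p in result])  # -> ['sqlservr', 'w3wp']
```

Because each stage receives whole objects, no stage ever has to re-parse another stage’s textual output, which is what makes the commands composable.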
We are going to undertake a project over the next few years to get a full set of Monad commands across all of Windows Server, and across all of our server applications. As we move forward, our design point for building management tools, the MMC-based GUI consoles, will be to have those graphical consoles call the same .NET objects that are used for Monad. One set of underlying management objects can provide a very consistent set of capabilities for both graphical administration and the command line.
We also think it will be a giant leap forward in terms of the manageability of Windows Server, and I am explicitly asking for your help. This is one where I really want your help in building these management objects, building these cmdlets, to make your server-based applications administrable. So I look forward to that. It’s a really important step from a management perspective, and I think it’s a great opportunity.
WinFX. A lot has been talked about at this conference on WinFX, and I’m going to focus on the identity side here. But there are many, many attributes of WinFX that affect the server very deeply. The ones I’ll focus on are inside this communications box. This has been discussed in a lot of depth, so I’m not going to spend a lot of time talking about the Windows Communication Foundation or the Windows Workflow Foundation. Both have been discussed in keynotes and throughout this PDC. Suffice it to say, these are core foundational services upon which our servers are going to be built in the future. When we look at new protocols we’re introducing, we are building WS-* protocols, and we are building them on top of the Windows Communication Foundation. One of the core reasons these services are in the operating system is that we are going to use them ubiquitously. We are going to use them to build operating system services. We use them to build our Microsoft server applications, and, of course, they’re therefore available for you to use as you build applications on top of these things.
The Communication Foundation is a very critical component. It is a revolutionary component that makes it straightforward to extend applications and speak a standards-based, heterogeneous protocol, with incredible sets of services such as message queuing, transactions, and security built in. The Workflow Foundation builds core workflow services into the platform, to be enabled across a wide set of applications. If you look at business apps, all of them have workflow in them. Even within Windows Server, we have many things that require workflow. Having a standard platform for this, the Windows Workflow Foundation, is critical. So these are key things.
But the one I really want to focus on today is identity and Active Directory. Active Directory is an incredibly important part of what we provide to our customers, both developers and IT. In a sense, all services that an operating system provides are critical. But in some senses, identity and security is the most critical, because everything is built on top of it. Everything has to use it. When we introduced Active Directory five years ago, the problem space our customers had was how to get single sign-on across their enterprise. Active Directory has largely solved that problem. When we survey our customers, 75 percent of enterprise customers use Active Directory as their primary authentication system. They may have other authentication systems; in fact, all enterprises have some other authentication system. But when users log in every day, when services run, they are being authenticated by Active Directory. What this means, and this is growing at an incredibly rapid rate, by the way, is that from your perspective you can count on Active Directory to be there to provide you identity and authorization services. So when you’re building an application and you think, gee, how can I get a certificate out there, Active Directory is the answer. If you want to use smart cards and credential management, Active Directory is the answer. If you want to protect the rights of a document, of something that’s being created, and have claims put on it that are more sophisticated than just an ACL, Rights Management Services is the answer there.
I talked earlier about how we’re introducing Federation Services for cross-company, partner-based authentication; that’s available. Meta-directory services to connect to other systems, whether they be HR systems or other directory services, are available. Those are now all built in, a standard part of what sits on top of that single sign-on capability we introduced five years ago, and these are services that should be taken advantage of. It’s an incredibly unfortunate thing when we look at systems our customers have to deploy where they have to maintain separate user accounts and passwords. That’s a mistake. If you have an app, even if you want to keep your own database, integrate with Active Directory; make that an option for your customers, so they have one place to do authentication. This is something available to developers today, it makes a difference for customers, and it will differentiate your product, because right now the marketplace is littered with all kinds of different authentication systems, and no customer is happy about that. Active Directory is the answer.
But, moving forward, there is more that needs to be done. We look at what’s happening in the identity space, and it’s no longer just about single sign-on. It’s about compliance, and dealing with HIPAA and all the different regulations that are out there. It’s about B2B transactions and online customers, how to do that securely, and how to ensure privacy. Think about identity theft. It’s one of the key issues companies face today, and one of the most embarrassing things that can happen to a company is to have its customers’ identities stolen. Authentication that is password based is not secure. We need to move away from these things, and we can do so in a way that simplifies deployment costs and complexity.
We’re focusing on evolving Active Directory to meet those needs, and on building a common set of services that developers can use to create solutions that leverage the identity capabilities of Active Directory. So think about these services as implicit in the Windows Server platform; think about capabilities like access control, auditing, credential management, and certificate distribution. All of those things are intrinsics that Windows can provide, and as you build applications you can take advantage of them, whether you’re writing for Win32 or writing to WinFX and, of course, whether you’re using a standard protocol like LDAP or the WS-* based protocols. So, the goal here is to expand what Active Directory can do, and to provide these services in a way that all developers can take advantage of.
Let me highlight one service here that’s perhaps new to you, the security token service. This is a service that will provide the back-end capability for the info cards that Jim showed earlier this week. So, this is our implementation of a security token service for info cards. It will come some time after Longhorn ships. But, of course, info cards were designed in a way such that other STSs can plug in as well in heterogeneous systems, and clearly we’re making investments in allowing those credentials to be managed as well.
With that, what I would like to do is invite Dave Martinez up to give a demonstration of Active Directory, so Dave can show some of the things that are being developed here.
DAVE MARTINEZ: Hi, Bob. How are you doing? Hi, everyone. (Applause.)
As Bob was saying, identity and access management has always been an important topic, but with the emergence of connected systems, where the services we write could theoretically be connected to by anyone or anything with an Internet connection, how we control access to those services and to their data becomes even more critical.
Now, in this service-oriented world, identity can be just another service. And, in fact, with Active Directory we provide a rich set of identity services that are easy for developers to leverage through WinFX, and I have a few demonstrations to show off how we might do that.
BOB MUGLIA: That really is an important point. When you think about both the standards-based protocols like WS-* and WinFX, we have a wide variety of services that we’re providing, and in a Web services world there are a lot of different kinds of services. Identity is just one of those services, and that’s the way we’re evolving and building Active Directory.
DAVE MARTINEZ: That’s exactly right. So, today, in our demonstration, I’m going to be with Fabricam, and we have an order management application that’s composed of two parts, a smart client and a service, both built using WinFX, which is going to be available in the Vista timeframe. So, when I go ahead and start the client as a Fabricam employee, the first thing that you’ll notice about the app is that I don’t have to log on, and the reason is that the application is leveraging Windows Integrated Authentication to provide single sign-on. Users don’t have to have a separate password for accessing this application. Now, an interesting question that might come up here is, how much work did I have to do to actually make this possible? In the past this has been possible with Windows Integrated Authentication, but it took specific code. Now, using WinFX, that process can be made so that it doesn’t require any specific code. So, if I go into Visual Studio here, I can actually show you my service, and you can see that I don’t have any security code in my interface definition. If I scroll through here a little bit, you’ll see that I have no security code in my implementation either, and that’s because I’m declaring everything in configuration. So, if I go ahead and close this out, you’ll see that I’ve specifically set my Web service to use a WSHTTP binding, which uses message-level security by default and leverages Windows Integrated Authentication. And since I’m setting all the security settings declaratively in config, if for any reason I have to change the security in the future, I can do so easily without having to recompile or redeploy.
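The declarative setup Dave describes might look roughly like this in the service’s config file; the service and contract names here are illustrative, not taken from the demo:

```xml
<!-- Sketch of a declarative service endpoint; Fabricam.OrderService and
     Fabricam.IOrderService are made-up names for the example. -->
<system.serviceModel>
  <services>
    <service name="Fabricam.OrderService">
      <!-- wsHttpBinding uses message-level security with Windows
           credentials by default, so the service gets integrated
           single sign-on with no security code of its own. -->
      <endpoint address="http://localhost/orders"
                binding="wsHttpBinding"
                contract="Fabricam.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

Because the binding lives in configuration rather than code, swapping it later is an edit to this file, not a recompile.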
BOB MUGLIA: So, if IT decides they want to change the way they’re providing security services, those things can be done just through configuration files, not by having to go back and change the code. The code is isolated from that?
DAVE MARTINEZ: That’s exactly right. In fact, a good example of that would be, let’s say that I wanted to federate this application with a business partner. Would I have to make a lot of code changes for that? Well, the answer is no, because what we’ve actually done here is add a new binding, called a federated binding, to our config file. What this means is that my service can now trust an identity service over at our partner, in this case Contoso. And if you’re wondering what an identity service is, it’s what Bob was talking about before: a security token service, or STS. So, for federated users to get access to my application, they need to provide a security token from an STS that my service trusts, in this case a Contoso employee STS. Now, Contoso has an option: they can use Active Directory to generate the security token with the Active Directory STS, or they can choose any WS-* compliant identity service.
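A federated binding of the kind Dave describes might be sketched like this; the binding name and the issuer address are made up for the example:

```xml
<!-- Illustrative sketch only: the service is configured to accept
     security tokens issued by the partner's STS. -->
<bindings>
  <wsFederationHttpBinding>
    <binding name="ContosoFederation">
      <security mode="Message">
        <message>
          <!-- Tokens must come from an STS the service trusts;
               the address here is hypothetical. -->
          <issuer address="http://contoso.example.com/sts" />
        </message>
      </security>
    </binding>
  </wsFederationHttpBinding>
</bindings>
```

The point of the sketch is that federation, like the earlier security settings, is expressed entirely in configuration.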
Now, how does my application actually get the security token? For that I actually want to go ahead and show you an experience here. I’m going to go ahead and close this out. What I’m going to do is, I’m going to switch to an instance of the order management application that is configured for use by Contoso employees. So, when I click here on Contoso order entry, the smart client is going to contact my service and the service is going to request a security token from their STS.
Now, Contoso could have just chosen to use user names and passwords to protect the STS, but instead they did something else: they decided to leverage info cards, a new client-side technology available in the Vista timeframe that enables users to select the appropriate identity for use with different applications while, at the same time, providing a secure alternative to user names and passwords.
So, you can see here that through Web services protocols, first, info card and WinFX read the access policy of my service. Then they match that security policy to the different identities available to the user. The grayed-out card here is an instance of an identity service that can’t meet the security requirements of my service. But the Contoso card here, which is not grayed out, can, because of the federated arrangement set up between Contoso and Fabricam.
BOB MUGLIA: Because this isn’t using a password, that’s a much more secure, certificate-based card?
DAVE MARTINEZ: Absolutely. If I go ahead and select the card and click submit, again using Web services, info card and WinFX go get a security token from Contoso’s STS, and they pass that security token to the service using a cryptographic key in lieu of a user name and a password.
So, now that I’ve done that, you might ask the question, on the client side, how much work did I have to do in the app to manage all this info card interaction? Again, like the answer before, with WinFX a lot of this is abstracted away from the code. So, if I go ahead and bring back up Solution Explorer and take a look at the config file for the partner instance of my service, you can see that here is where I’ve identified the federation bindings for communicating with the actual service, and here is where I’ve set up info card to be the credential type that I’ll use for my STS. Everything else that happens, including getting the security tokens, reading policies, passing them to the service, et cetera, is all handled for me.
BOB MUGLIA: It’s handled by the services, the identity-based services we’re creating, and you’re just using standard WinFX calls to get to that.
DAVE MARTINEZ: To get to that, exactly. So we talked a little bit about how we’ve used WinFX and AD to control access to my application, but what about information that’s inside of my application, particularly when that information is no longer under the control of my app, or even of my organization? For example, if I bring up the app, you’ll see that one of the features in it is this view price list feature, which generates a custom price list. We consider that sensitive information; we wouldn’t want Contoso, for example, to share it with our competitors.
Is there anything we can do to actually control how they use that data? In fact there is, using the information protection capabilities provided by Rights Management Services, or RMS. If I go ahead and come back here, let me show you where I actually protect my price list document. I’m using this “system.security.rightsmanagement” assembly in WinFX to protect the document, which creates a series of, we’ll call them, usage licenses appended to the document that correspond to the entitlements of the person requesting it.
BOB MUGLIA: And RMS actually encrypts the document itself, and those entitlements have to be satisfied, the claims have to be satisfied, in order for the document to go back into a clear form.
DAVE MARTINEZ: Exactly. And when the document is actually received by the user, let me go ahead and click view price list, one thing that you’ll notice is that we use the XPS document sharing format in this case, in part because of its rich integration with RMS. Here in the XPS viewer we can see that there is, in fact, rights protection on this document. If I drill in a little bit, you’ll see that it tells me I have the right to view the document. Of course, that means there are a number of things a Contoso user cannot do. I’ll show you that. I can’t save the document, I can’t print the document, I can’t send the document to somebody in e-mail, I can’t even copy and paste from the window. So in a federated world where access gets so much broader, the idea of still being able to have this sort of fine-grained control over specific things can address security and regulatory compliance issues that organizations might have.
So anyway, what we’ve done here today is a brief review of how Active Directory and WinFX can help organizations leverage the rich services of Active Directory to provide identity and access in their applications, basically building identity-aware applications for a service-oriented world. And because WinFX makes it so easy to do this, ultimately Active Directory can help organizations address some of the bigger identity and access issues that you were talking about before, like regulatory compliance, extending access to business partners, et cetera.
BOB MUGLIA: And the key to what we’re doing here is that we’re building these things on the same WinFX-based foundations, the same WS-* protocols, that are available to you as you build Web service based applications. Identity is a Web service; Active Directory is an instantiation of that Web service. Active Directory is broadly used today, and as these new services come online it provides more and more capability for you as developers to take advantage of. There are things to do today, and there’s a lot more to do tomorrow.
DAVE MARTINEZ: Thanks a lot. (Applause.)
BOB MUGLIA: So as you can see, Active Directory, certainly identity and security, those are key things that we spend tons of time focusing on. The security problems that people are facing today have moved from viruses that are spread broadly to attacks that are targeted at individual companies. And being able to provide within the platform a set of services that enable you to secure the information of your customers is a key part of what we’re developing. And as developers, it’s really critical that we understand these things and build our applications using these facilities.
Okay, Longhorn, 2007. Vista will ship in 2006; Windows Server Longhorn will take a little longer to bake. We’re planning on shipping it in the 2007 timeframe. There are a lot of things in Longhorn. I’m going to focus today on really just a few of the developer-focused things that are coming in Longhorn Server.
Before I do that, let me just say that today, as part of the goods, the things that are available to you, we are handing out the first availability of IIS 7 and an updated copy of Windows Server Longhorn. So that’s something that’s now available for you to add to your goods kit. It’s a new set of bits that’s really worth taking a look at, and we look forward to seeing it evolve with more community technical previews over the coming months.
Developer services in Longhorn: a lot of things, a lot of new capabilities are being built into this base platform. Let me highlight a few of them. I talked about access for Terminal Services and remote applications across firewalls. This is a key thing, because it really enables developers to build applications that can be projected anywhere. So they can run on the desktop, they can run in a Windows environment, and they can also run from the server, anywhere. They can take advantage of the new capabilities that are being built into Longhorn as well.
Storage: there are a lot of things happening in storage. I’m going to highlight one, the transacted file system. TXF is an update to NTFS that, simply put, enables all file operations to be transacted. The great thing about TXF is that if you aren’t using it, it has no overhead at all. If you are using it, the overhead is extremely minimal, single-digit percentages. So it’s very, very efficient from a file system perspective. What can you do? Begin transaction, copy file, copy file, move file, do this, do that, then either commit or roll back. The number of applications where this is interesting is very, very large. Certainly anything that coordinates files with other data stores, such as databases, wants to do that in a transactional sense; finally, it’s possible.
The same goes for any application that wants to move a lot of files. We will use TXF as we do software deployment and patch deployment, so that if a failure occurs during an update the changes can simply be rolled back. This is a very core foundational service. It’s also a core service that will be used across SQL Server and WinFS and other places moving forward.
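The begin/copy/move/commit pattern Bob describes maps to Win32 calls such as CreateTransaction, CopyFileTransacted, MoveFileTransacted, and CommitTransaction. As a rough analogy only, not the TxF API itself, here is a sketch of the same all-or-nothing semantics in plain code:

```python
import os
import shutil
import tempfile

class FileTransaction:
    """Sketch of TxF-style all-or-nothing file semantics.

    TxF does this natively in NTFS (CreateTransaction /
    CopyFileTransacted / CommitTransaction); this analogy stages
    work in a temp directory and publishes it only on commit.
    """
    def __init__(self):
        self.staging = tempfile.mkdtemp()
        self.pending = []          # (staged_path, final_path) pairs

    def copy_file(self, src, dst):
        # Work happens in the staging area, invisible to other readers.
        staged = os.path.join(self.staging, os.path.basename(dst))
        shutil.copy2(src, staged)
        self.pending.append((staged, dst))

    def commit(self):
        # Publish every staged change; before this, nothing is visible.
        for staged, dst in self.pending:
            shutil.move(staged, dst)
        shutil.rmtree(self.staging, ignore_errors=True)

    def rollback(self):
        # Discard the staging area; no destination file is ever touched.
        shutil.rmtree(self.staging, ignore_errors=True)
        self.pending = []
```

The difference, of course, is that TxF gives these guarantees at the file system level, coordinated by the kernel transaction manager, rather than by application-level staging.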
The event log, I’ll just talk briefly about this. We’re trying to get more information from developers to operations and back to developers. A simple way to do that is to make sure that metadata about an application’s failure conditions and health states can be logged in the event log. The old event log calls all still work; the new ones have extensions that allow metadata to be logged, providing a lot more information about the specific state of an application. This is something we’re building into the operating system, taking advantage of these metadata extensions for our own services, and as you build services these are things that you can take advantage of as well.
Longhorn, one of the attributes of Longhorn is its modularity, the componentization that we’re doing. We’re putting an enormous amount of effort into this major release to build a modular operating system. This particularly applies on the server. Customers on the server often want to deploy just the set of core services required to run the role or workload that their application needs. They don’t want all these other features available if they’re not using them. It reduces the surface area that has to be patched, it reduces vulnerability, and it frankly makes maintenance simpler.
So what we’ve done is build Longhorn in a way that is modular, and it’s something that you need to be aware of as you build applications. There is something called Server Core; it’s the basic foundation for what’s within Windows Server, and on top of that there are a set of roles, a set of services that can run, things like networking, security, Active Directory, and management. Those are all services that can run simply on top of Server Core. Perhaps you’ll develop applications that can run here. This is an environment that has no graphical UI. It’s a very, very basic Windows kernel plus a basic set of services. And we think it’s a great thing that our customers are going to want.
Then on top of that we built what we call Core Plus, which adds a broad set of Windows Server capabilities that typically run there. That’s where you bring in the shell; that’s where many, many other server applications would want to run, because they want some of those services. Then there are even steps on top of that that bring in the full set of client features, if you want to, say, run a full terminal server to actually emulate a desktop. Most servers won’t be deployed in that mode. Most will be deployed either with Server Core Plus, providing a broad set of services, or just Server Core itself, particularly in enterprises. Keep that in mind: Windows Server Longhorn is modular, and as you build your applications, think about that.
IIS 7, a major, major new release of IIS is coming here. This release also is modular; one of the key things in IIS 7 is its modularity. The fact is that the monolithic nature of IIS, which has been a total pain for people building applications, has been eliminated in this release. IIS is now built as a set of modules with standardized, public interfaces. That’s how we do extensibility, and that’s how you can extend IIS. We’ll show that in a demo in a few minutes.
IIS is built on a standardized activation service, the Windows Activation Service. That is how we activate either Web-based protocols or WS-* based protocols. So we’re building in a standardized service to allow communications-based applications to activate higher-level services such as IIS, or such as the WinFX Communication Foundation. ASP.net is integrated into IIS 7 in a deeper way than before, allowing applications to take advantage of ASP.net features without requiring the full set of ASP.net services. Finally, IIS 7 has a very, very thorough set of diagnostics and tracing information, so when something goes wrong with a Web application you’ve built, you can pinpoint precisely where that’s happening in your code and work to get it resolved as quickly as possible.
What I’d like to do is invite Bill Staples up to give a demonstration of all these great things that are coming in IIS 7.
BILL STAPLES: Good morning. Thanks, Bob.
It’s great to be here today. IIS 7 is a big release for us, and it’s great to be able to show the work we’ve done in direct response to developers. Essentially, what I’d like to show off today is how we’ve taken the rock-solid security and reliability of IIS 6 and melded it with the kind of modularity that Apache is known for, and the power and ease of extensibility of ASP.net. The result is a Web server platform that offers much better integration across the stack, and the same security, reliability, and management that IIS is known for today.
So let’s go ahead and show that off. What I’ve done is basically built a really simple Web site to help demonstrate the IIS 7 features I want to highlight today. You’ll notice it’s got a home page, I’ve named that BillsHome.html, a style sheet, some images, et cetera. When I go ahead and pull up that Web site, you see that instead of getting the home page I get this ugly error.
BOB MUGLIA: Why?
BILL STAPLES: IIS doesn’t recognize my home page, BillsHome, as a default document. If this had been any other version of IIS, I’d have to be a machine administrator to change that setting, or any other IIS setting. Why? Well, today IIS configuration is stored in the metabase, which is a machine-centric, administrator-only store, and that’s not very convenient for me, the developer, because I have to be a machine admin to change even a basic setting like the default document.
BOB MUGLIA: Let’s just say that the metabase is a pain in the butt.
BILL STAPLES: I’m happy to say that with IIS 7 the metabase is dead. (Applause.) Instead, we’ve got the ability to create Web.config files. Now, the ASP.net developers out there will automatically recognize this file, right? It’s the same configuration system that’s used today by ASP.net, and it will be used across the Web platform. IIS uses it, ASP.net uses it, and Indigo uses it as well. So all configuration data is now stored in the same place, accessible via the same set of APIs.
Now, the cool thing about this is that I can come in here as a developer and change my default document. Instead of index.html it will be BillsHome.html. Save that change, and now we can check out my site. Now, are there any bloggers out there?
BOB MUGLIA: Yes.
BILL STAPLES: Okay, a few, and maybe some press. I’d really like to help you write an article on IIS 7. So I had them take this picture of me backstage right before I came out, so you can have a picture of me up on your article about IIS 7, I hope. There we go. Is the resemblance not uncanny? That’s how I felt right before I came on stage, a demo monkey, that’s what I feel like today.
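The default-document change Bill makes might look roughly like this in the site’s web.config, sketched in the IIS 7 configuration schema:

```xml
<!-- web.config in the site's content directory; no machine-admin
     rights or metabase access needed. -->
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <clear />
        <add value="BillsHome.html" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```

Because this is just a text file in the content directory, it travels with the site when it is handed off or replicated.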
All right, so let’s get back to business. I’ve changed the default document simply by editing a text file in my content directory. That’s pretty simple, but it actually enables some pretty powerful things. It means when I get ready to hand off this site to operations, I can give them the code, the content, and my configuration. I don’t have to spend an hour on the phone describing to them how to set up IIS to make it work with my application. (Applause.) It also means when they get ready to scale out the Web site to a server farm, they don’t have to worry about metabase replication anymore. They can take the code, content, and config, put it on a Windows file server, and have multiple front-end Web servers reference that configuration data as well as the content.

Pretty cool stuff, but that’s just configuration. Let me go on and show some of the componentization work we’ve done. What I’ve got here is a list of all the modules that we’re going to ship with IIS 7. Now, you may not realize this, but all of these features today exist in IIS in a single .dll. Essentially, we have a private interface that we’ve built all of the core IIS feature set on top of, and then we’ve made ISAPI, the public interface, available to all of you.
Well, with IIS 7, as Bob said, that’s going to change. We have a brand new Win32 API that we’re introducing that’s a superset of the ISAPI filter and extension capability. And we’re taking it so seriously that we’ve ported all of the IIS features on top of that public API. That means you have the same API, the same fidelity, that the IIS development team has for building new IIS features. That also enables something even more powerful. It means each of these features of IIS is now implemented as a discrete module, a discrete binary on the box. What that enables is, it enables me as a developer to customize the surface of IIS to fit my application’s needs. So, in our case, we’ve got all the modules, and I’ve made them an explicit part of my site’s configuration data. I can show the site still works as expected. But we can come back and say, you know, did we really need that CGI module loaded for my site? I’m not using CGI, so let’s get rid of it. Do we really need ISAPI, or access checking? No, it’s an anonymous Web site. In fact, just for fun, let’s go ahead and clear all the modules in this list. I’ve left this clear tag here in the module list, which basically clears inherited configuration. So, with that, I will go ahead and save the change, and watch this really carefully because it goes by fast. That is the most secure and highest-performing Web server Microsoft has ever built. (Applause.) Now, it’s not terribly functional, so let’s go make it work again. You’ll notice that the first module I’ve got listed here is the anonymous authentication module. That basically allows anonymous users to access the site. Default document serves up that BillsHome.html. The static file module is probably self-explanatory; it spits back the static files I have as part of my site. And directory listing serves back that gallery link that I had. Everything else I don’t need. So, let’s go ahead and trim down this list.
So, we’re left with these four modules. Go ahead and save that configuration, come back to our site, refresh, and without rebooting or restarting IIS, the configuration is applied. The big difference here, obviously, is that instead of 40 modules, I have about four modules for this Web site.
BOB MUGLIA: Four modules to maintain, four modules that are kept up to date.
BILL STAPLES: Four modules loaded in memory, four modules to patch, much better experience.
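The trimmed-down module list Bill ends up with might look roughly like this in web.config; the module names match IIS 7’s built-in modules for the four features he keeps, though the exact form is a sketch:

```xml
<!-- Illustrative sketch: clear the inherited module list, then add
     back only what this static site needs. -->
<system.webServer>
  <modules>
    <clear />
    <add name="AnonymousAuthenticationModule" />
    <add name="DefaultDocumentModule" />
    <add name="StaticFileModule" />
    <add name="DirectoryListingModule" />
  </modules>
</system.webServer>
```

With `<clear />` alone and no `<add>` entries, you get the "most secure Web server ever built" that serves nothing at all.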
The next thing I wanted to show, though, is how we’re taking the componentization even further. I mentioned we have a new Win32 API. We want to make extensibility as easy and powerful as possible, and so we want to make it available to ASP.net, for managed code developers, basically. What we’ve done, instead of inventing yet another managed API, is we’ve taken the existing ASP.net extensibility interface, the IHttpModule interface, and plugged it directly into this new IIS pipeline. That means whether you’re writing C or C++ code against our new Win32 API, or C# or VB.net code against the existing .NET interface, you can now plug directly into IIS, intercept every request, and do some pretty powerful things. For example, the ASP.net forms authentication module that I’ve just plugged in here is implemented on top of that public interface, and so are these others: role management, URL authorization. These are features today that work great for ASP.net applications. If you have ASP.net pages, you can do forms authentication against those pages. It’s a pretty powerful feature.
But now I’m plugging that same module that exists today directly into IIS. That means any request that comes into IIS, whether it’s a classic ASP application, a static file Web site like what I have right now, or even a PHP application, can take advantage of ASP.net forms authentication.
So, let’s go ahead and actually show that off with my static file Web site. I’m going to go ahead and grab some additional configuration that I need to plug in to configure the site for forms authentication. And the last thing I need to do is actually create a log-in form. I’ve actually already created that, for expediency’s sake. Go ahead and open it up. You’ll notice I didn’t even have to write any code for the log-in form. That’s the great thing about ASP.net 2.0: you have a log-in control that’s completely declarative. I don’t have to figure out what identity system I’m going against; that’s all part of the membership system. The log-in control implements all the access-checking functionality, everything, for me.
So, I’ve got the log-in form deployed. Let’s go ahead and refresh the site, and we hit the log-in form as expected, even though it’s a static file Web site. Go ahead and log in now with a user I precreated for the demo, and there we go, we’re authenticated and we’re in. Now, that’s how (applause) that’s how we’re making ASP.net even more powerful, building on top of IIS 7.
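The forms-authentication wiring for a static site might be sketched like this; the log-in page name here is an assumption for the example, not taken from the demo:

```xml
<!-- Illustrative sketch: require forms authentication for the whole
     site, even though the content is static files. -->
<configuration>
  <system.web>
    <!-- Unauthenticated requests get redirected to the log-in form;
         "Login.aspx" is a hypothetical name. -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" />
    </authentication>
    <authorization>
      <deny users="?" />   <!-- deny anonymous users -->
    </authorization>
  </system.web>
</configuration>
```

In IIS 7’s integrated pipeline, the managed FormsAuthentication module runs for every request, which is what lets this protect static files, classic ASP, or PHP alike.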
The last thing I wanted to show, though, is how you can use these extensibility APIs to actually replace IIS functionality. So, if you don’t like the way we do logging, say you want to log to SQL Server or your own XML log file format, you can take out the IIS module and plug in your own. If you don’t like the way we do authentication, maybe you want to do basic authentication against an LDAP directory or some other database, you can now implement your own basic authentication module and plug it directly into IIS.
Now, I’m a photographer on the side, of course, and I like pictures. So when I look at this directory listing of some of my favorite images that I’ve taken, I was a little disappointed, honestly. This looks pretty much the same in IIS 7 as it did in IIS 6, and 5, and 4, and 3, and 2, and 1, for that matter.
BOB MUGLIA: What can you do about that?
BILL STAPLES: I can actually replace this built-in module with my own. Let’s go ahead and do that. I’ve got a built-in module here, the directory listing module; I’ll replace it with another listing module. I implemented mine using the ASP.net IHttpModule interfaces, so I have to give it a type. The next thing I’m going to do is configure a couple of handlers that I also built, and I’ll explain what those do in just a minute; plug those into my configuration. The last thing we do is actually deploy the module, so I’ll go ahead and drag this bin directory, an X-copy deployment of the module and the handler. Now we can come back to the directory listing, and again, without restarting the server or anything, I’ve just deployed a new module, and when IIS gets the request, instead of the built-in module getting it, my module gets it and puts out much better looking HTML.
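The module swap Bill performs might be sketched like this in configuration; the custom module and type names are made up for the example:

```xml
<!-- Illustrative sketch: remove the built-in directory listing and
     substitute a custom managed module; PhotoDirectoryListing and
     BillsSite.PhotoDirectoryListingModule are hypothetical names. -->
<system.webServer>
  <modules>
    <remove name="DirectoryListingModule" />
    <add name="PhotoDirectoryListing"
         type="BillsSite.PhotoDirectoryListingModule" />
  </modules>
</system.webServer>
```

The managed assembly itself is just X-copied into the site’s bin directory; no registration step or server restart is required.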
BOB MUGLIA: So IIS does some really great stuff. Thanks a lot, Bill.
BILL STAPLES: Thanks, Bob. (Applause.)
BOB MUGLIA: When I think about the combination of Visual Studio 2005 together with the Communication Foundation and IIS 7, I think those sets of services really provide the richest application platform on the planet for developing the most sophisticated Web and Web service-based applications. So we’re very excited about the opportunities that exist for all of us to build great solutions on top of this platform.
Now, on top of all the other things we’re doing in Windows, we’re also putting a lot of focus, and a lot of investment, into virtualization. Today we have Virtual Server and Virtual PC, and with Virtual PC we have an important update coming. All of you have a production copy of Virtual Server that you can use as part of your development process. But after we ship Longhorn, we will be building a hypervisor and virtualization capabilities directly into the Windows Server operating system. So we’re actively working on a new set of technologies, designing from the ground up a very modern hypervisor as a part of Windows Server to allow for virtualized sessions.
Stay tuned, we have a lot more to talk about with regards to virtualization in the coming months, but be aware, we have a very, very competitive Virtual Server product available today, you all have it as a part of your kit, and we’re building virtualization capabilities very, very deeply into the operating system as a core service for you to build applications on top of.
So there’s a lot happening over the next three years with Windows Server. We talked about R2 this year; we talked about 2006, with WinFX, Compute Cluster, and Monad; and 2007, of course, with Longhorn Server and these virtualization capabilities. Being successful with this is really, of course, all up to you. In order for you to be successful, you need the software. That’s why we’re delivering the goods, so to speak, the goods with the CDs that you already have. You might notice that when you got your goods CD case, there was a CD missing. That’s CD 6, and it’s available today. I very strongly encourage you to take it, add it to the set, and start working on it. Hopefully you’ve been working at night, back in your hotel rooms, on some of these CDs, taking a look. Maybe you’ve worked through CD 2 or CD 3, but let’s get to CD 6.
There are all sorts of things coming in Windows Server. I talked about management, how SDM and management packs are available now, and how Monad will change things. Compute Cluster Edition is available in beta today, so you can start building applications there, and of course there’s WinFX, which is available to you as part of the PDC material we’ve given you to help build the next generation of distributed Web service applications. So we’re very excited about all this.
You may notice that I’ve gotten to my last slide and I haven’t said anything about 64-bit, not a single thing. Why is that? Because everything we’re doing, we’re doing in 64-bit. 64-bit is here today. Windows Server will move very aggressively to 64-bit. Everything we’re building as we move forward, we’re building for a 64-bit world. All these services are available to 64-bit applications. And we are incredibly excited when we take a look at what’s happening in real-world deployments with 64-bit.
We said that 64-bit would run 32-bit applications well. It does. Your existing 32-bit apps run great on Windows Server 64-bit. We also felt that for data-oriented applications, like databases, we would see phenomenal performance improvement. We are; the numbers are phenomenal. What we didn’t realize is the breadth of applications that benefit when run in a memory-rich environment, where you go beyond two, three, four gigabytes of memory, and the kinds of performance we’re able to see. Microsoft is seeing phenomenal performance improvements on Web server-based applications just by running them in 64-bit and throwing more memory at them.
As we move forward with Windows Server I really think of 32-bit as legacy. We’re going to be compatible and support that legacy, but the future is all 64-bit. The future is all the work that you’re doing. I appreciate all your support. Thank you very much. Have a good day three at PDC. (Applause.)