Bob Muglia: PDC10

Remarks by Bob Muglia, President, Server & Tools Business
Professional Developers Conference
Redmond, Wash.
October 28, 2010

BOB MUGLIA: The day was July 6th, 1992. It was the second PDC we’d ever run. First one was here in the Seattle area, and I think people remember that PDC as the Windows NT PDC. It was a very busy time for me personally. My daughter had been born less than a month earlier, and I had been working 12-hour days from before she was born right up through that day, something my wife will never let me forget, I’ll say.

But it was worth it because it was a time when it was a dawn of a new era, and it was a time when we saw the industry undergoing significant change. And that change really was the advent of industry-standard computing for servers. It was a change that brought applications to small businesses who could never see it before. It was a change that enabled big businesses to connect to offices around the world in distributed systems and really go global like they had never been able to do before.

Ultimately, we saw it bring about and enable the Internet to be possible, which of course runs on industry-standard computers. So, it's sort of appropriate that, as we stand here today a generation or so later (my daughter just happened to start as a freshman in college this fall, so it's 18 years later), we see a new age really beginning, a new age that I think will open up possibilities beyond even what we saw 18 years ago. And of course that age is the age of the cloud, the age where new cloud applications will rule the land.

And I wanted to talk a little bit about the cloud and the different kinds of clouds that exist. People have been talking about this, and it's getting standardized in a lot of ways: conversations around infrastructure as a service, software as a service, and platform as a service. Now, Microsoft is a provider that has offerings in all three of these areas. We just announced Office 365 last week, an important offering in software as a service, and a week or two from now at TechEd in Berlin, we'll be talking a lot about infrastructure as a service and how Windows Server, Hyper-V, and System Center provide a fantastic solution for people who want to build private clouds and public clouds using infrastructure as a service.

But today what I really wanted to do was focus in on platform as a service because I think it’s very clear that that is where the future of applications will go. Platform as a service will redefine the landscape, and Microsoft is very focused on this. This is where we’re putting the majority of our focus in terms of delivering a new platform and of course that platform is Windows Azure.

And so let me talk a little bit about platform as a service and give you an idea of some of its characteristics, how it looks relative to existing applications. I think it's worth answering that question, because what is it that makes platform as a service so unique and different compared to existing applications in an existing environment?

First off, you look at today, all of us who are familiar with writing applications know that there’s a lot of infrastructure we have to deal with. We think about the underlying server, we think about managing VMs, now in some cases hundreds or even thousands of VMs. We worry about the network. We worry about the way the storage is configured. And then we can get to the app.

Well, with platform as a service, all of that infrastructure is handled for you and you, the developer, can focus on the app. What really matters, what matters to your organization, what distinguishes your business. In fact, if you look at this, one of the ways you can tell for sure that something is platform as a service is if you’re dealing with any of that infrastructure stuff, if you’re managing the VM, it’s not platform as a service. And so you can focus on what really matters to you and to your business, which is the app.

Another interesting thing about the platforms today is that you’re dealing a lot with change that isn’t really helpful to you, change in the form of patches that happen on a frequent basis, change in the form of service releases. New versions that come in and give you some features, but also tend to bring with it some incompatibilities and issues you have to deal with.

Well, with platform as a service, all of that is maintained for you. The entire operating system environment is maintained and it improves and enhances over time, gives you new features and new capabilities, but it stays compatible. It’s maintained, you worry about your application.

With today’s world, there’s a lot of assembly required. You have to take pieces from all over the place, put them together. You want this middleware component, you want this set of services, you want this runtime, this library, you put it together, you assemble virtual machines, you test the image out and deploy it — that’s all a lot of work. In the case of platform as a service, there are a whole broad set of highly available scale-out services that are just available to you to call. Again, you just focus on what matters to your business and these services become things that make you much more productive and allow you to get your applications into market much, much faster.

Now I'll point out that the top three things I listed here are advantages for platform as a service even relative to new infrastructure as a service offerings. Platform as a service has distinct and unique advantages, which is why we see it as the destination for where tomorrow's apps will be written. But people aren't living in a platform as a service world yet, or even an infrastructure as a service world, so let's talk about the world today that almost all of us live in.

One of the attributes is the datacenter environment. I’ve been to a lot of datacenters, and they’re all different. The environments, the software, the hardware, the different kinds of networks, the different kinds of topologies. They’re all different, they’re custom, and they’re very inconsistent.

I find it’s interesting, in some senses, the industry is providing one after another complicated solution to fix problems created by yesterday’s complicated solution. So, one of the great opportunities we have with platform as a service is to get rid of all that grunge and focus instead on a standardized environment, a standardized hardware environment, a standardized operating system environment that, again, enables you to focus on what matters, the app.

In today’s world, applications need to be designed to handle the peak load, whether that’s an end-of-quarter time, or whether it happens just before Christmas. Whatever the period of time when an application is under its most stringent demand, the infrastructure has to be provisioned to be able to handle that because it needs that level of resource. With platform as a service, scale is available on demand based on what the application requires, and in many cases with a global provider like Microsoft, that scale is even available globally, allowing you to reach parts of the globe that you never could easily reach before with your applications.
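The economics behind that point can be made concrete with a toy calculation. Every number below (the hourly rate, the load profile) is invented purely for illustration:

```python
# Toy comparison of provisioning for peak load vs. scaling on demand.
# Every number here (rate, load profile) is hypothetical.

HOURLY_RATE = 0.12        # cost per server-hour (invented)
HOURS_PER_MONTH = 720

# Hypothetical month: quiet most of the time, end-of-quarter spike.
load = [10] * 700 + [100] * 20   # servers needed in each of 720 hours

# Fixed provisioning must carry the peak for the whole month.
peak_cost = max(load) * HOURS_PER_MONTH * HOURLY_RATE

# On-demand provisioning pays only for what each hour actually needs.
elastic_cost = sum(load) * HOURLY_RATE

print(f"provision for peak: ${peak_cost:,.2f}")
print(f"scale on demand:    ${elastic_cost:,.2f}")
```

The gap between the two totals is the cost of the idle capacity that peak provisioning forces you to carry.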

And so finally, for years and years, applications have been designed to focus on avoiding failure. People were afraid of applications failing because it had business impact. So, an enormous amount of work goes into focusing on keeping the system from failing. And when a system does fail, it can be a crisis where people are paged in the middle of the night and they come in to fix a database or a disk that’s been corrupted.

Well, platform as a service takes a very different design point. It recognizes that failure will happen. Failure will happen within components of the application, failure will happen certainly in the hardware, and it’s designed to be able to keep the application up and running through failure. So, when something fails, it’s no big deal, another instance is just spun up. When a network connection or a network switch goes down, it doesn’t matter because there are other instances running everywhere.
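The "expect failure" design point can be sketched in a few lines. The 30 percent failure rate and the instance model below are invented for illustration; this is not how Windows Azure's fabric actually works:

```python
import random

# Sketch of the "expect failure" design point: when an instance dies,
# the job is simply retried on a fresh one. The 30% failure rate and
# the instance model are invented; this is not the real Windows Azure
# fabric controller.

random.seed(42)   # deterministic for the example

def run_on_instance(instance_id, job):
    """Pretend to run a job on one instance; it fails ~30% of the time."""
    if random.random() < 0.3:
        raise RuntimeError(f"instance {instance_id} failed")
    return f"{job} done on instance {instance_id}"

def run_with_failover(job, max_instances=5):
    """Keep spinning up fresh instances until the job succeeds."""
    for instance_id in range(max_instances):
        try:
            return run_on_instance(instance_id, job)
        except RuntimeError:
            continue   # no big deal: another instance is just spun up
    raise RuntimeError("all instances failed")

print(run_with_failover("render-frame-0001"))
```

The point is that failure handling becomes a routine retry rather than a middle-of-the-night page.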

So that’s the kind of distinction that we see with platform as a service, and it’s why we’re so excited about it. It’s why we’re putting our energy on it, and it’s why we’re building Windows Azure. Windows Azure was designed to run as the next generation platform as a service. It is an operating system that was designed for this environment. And let me really talk about it for a second as an operating system because, first and foremost, that is what Windows Azure is.

Now, if you’re only focusing on the infrastructure, I can understand why you might be confused and think that an operating system isn’t important. But if you’re trying to build a platform as a service that provides some top-to-bottom capabilities, all the way from the hardware infrastructure all the way out through a set of services to enable applications, you need an operating system. That’s what operating systems do. They manage the underlying hardware and they provide services to applications.

Now, if we look back over time, I talked about the Windows NT PDC and the introduction of Windows NT, it was an operating system for its time. It was designed to run on a server, a computer. Well, Windows Azure is an operating system for now. It’s designed to run on hundreds or even hundreds of thousands of computers. It’s designed to run an entire datacenter, and it’s designed to run across multiple datacenters around the globe. It’s been designed to work in today’s cloud environment and to provide a very broad set of services to applications.

With that, let's talk about those services. One of the themes of today is that there's amazing focus within Microsoft on Windows Azure, and across the industry on what can be built on top of this next-generation operating system. What we've done with Windows Azure is focus on building a very integrated, highly comprehensive platform of services that are available for you to write your application against.

So if you want to, for example, use Facebook or Google or an Active Directory, a corporate Active Directory as your authentication source, there’s a service to enable that. It’s a couple of lines of code for you. The service is up and running, it’s a scale-out, multi-tenant service that’s available.
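The idea of federation reducing to "a couple of lines of code" can be illustrated generically. This is not the actual Windows Azure access control API; the token format, the shared key, and every function name below are invented for the sketch:

```python
import base64
import hashlib
import hmac
import json

# Generic sketch of federated-token validation. NOT the real Windows
# Azure access control service; the token format and names are invented.

SHARED_KEY = b"demo-signing-key"   # hypothetical key shared with the issuer

def issue_token(claims: dict) -> str:
    """What a hypothetical identity provider would hand back."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate_token(token: str) -> dict:
    """The app-side check: verify the signature, then trust the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"sub": "alice", "idp": "Facebook"})
print(validate_token(token)["sub"])   # the app sees a verified identity
```

The application never touches passwords or the identity provider directly; it only checks the signed claims it is handed.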

If you want to speed up your applications, your Web applications and get pages out faster, we have a shared, coherent, caching service that’s available to you. It’s just a couple of lines of code. And all of these services are designed to work together. We’ve been working on them for a long time. They’re not some set of ragtag acquisitions that we’ve acquired in the last 12 months and just thrown together. These are a set of comprehensive services that we know you need to develop this next generation of applications.
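The pattern behind that "couple of lines of code" is essentially cache-aside, which can be sketched generically. A dict stands in for the distributed cache here, and the "database" is a simulated slow call; all names are illustrative:

```python
import time

# Cache-aside sketch of what a shared caching service buys a web app.
# A dict stands in for the distributed cache, and the "database" is a
# simulated slow call. All names here are illustrative.

cache = {}

def slow_page_from_db(page_id):
    time.sleep(0.05)                  # pretend this is an expensive query
    return f"<html>page {page_id}</html>"

def get_page(page_id):
    if page_id in cache:              # cache hit: skip the database entirely
        return cache[page_id]
    page = slow_page_from_db(page_id)
    cache[page_id] = page             # the "couple of lines of code"
    return page

t0 = time.perf_counter(); get_page("home"); cold = time.perf_counter() - t0
t0 = time.perf_counter(); get_page("home"); warm = time.perf_counter() - t0
print(f"cold: {cold*1000:.1f} ms, warm: {warm*1000:.1f} ms")
```

With a shared, multi-tenant cache service, every web role instance sees the same warm cache instead of each keeping its own.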

The services are built on technologies and products that you know really well. So, Windows Azure and SQL Azure, they’re built on Windows Server and SQL Server. Those are environments that you have a strong understanding of, and they’ve been enterprise tested for decades so they start out with incredibly broad capabilities that are clearly ready for the most demanding applications. Of course things like .NET and world-class tools like Visual Studio and operational management tools like System Center, all of these are being designed to work together, building on the strength that they’ve had for so many years, meeting the needs of the most demanding business applications, but now being reengineered for the cloud.

So what we’ve done with these products is it’s not like we’ve just moved them into Windows Azure and are running them in a VM, we’re not doing that. We are saying to ourselves, what does the cloud require of the underlying platform? What does it have to deliver? Well, it needs to deliver a fully-available, globally scaled, shared, multi-tenant service. And that’s how we’re redesigning these underlying services as a part of Windows Azure.

So take SQL Server and compare it to SQL Azure. SQL Server was designed to run on one server or perhaps a cluster of servers that work together. That’s not how SQL Azure was designed. Sure, it’s SQL Server underneath, it’s the same SQL engine underneath, it’s the same stored procedures and the same SQL, all that stuff is the same. So, from the developer perspective, it’s the same database environment.

But underneath, SQL Azure runs on thousands of computers across multiple datacenters, currently six datacenters globally, and data is replicated across them, and all of that is handled for you. You don't think about it. You don't think about how you're allocating files or building up the underlying physical infrastructure. SQL Azure handles that, because that's what a real PaaS database system should do.

And so we built these things based on their strength, but made them world-class for this next generation of the cloud. And yet, while that is true, the underlying platform we're creating with Windows Azure is a very open platform, designed to let you run the language of your choice, the framework of your choice, the development tool of your choice, the management tool of your choice, or in fact the datacenter of your choice. With the Windows Azure appliance that we introduced this summer, we're taking the Windows Azure that we run as a public cloud at Microsoft and making it available to customers and partners to run in their own datacenter environments. So this is a very important point: everything you hear at the PDC, this morning and over the next few days, in all of the sessions, when we talk about Windows Azure and the services we're making available to Windows Azure developers, those services will also be available within private cloud datacenters that run Windows Azure. If your organization wants that kind of environment, the same set of services will be made available to you.

The success of Windows Azure in the long run does provide you with this kind of choice, the choice of datacenter, the choice of language. We’re making Java a first-class citizen with Windows Azure, the choice of the underlying framework, the choice of the development tool. Sure, we’ll show Visual Studio, we’ll talk about how great Visual Studio is, but we’re working hard on making Eclipse great to build applications against Windows Azure. We’ll make PHP great, we’ll make Ruby great, we’re focusing across a broad set of underlying environments so you can choose the environment that best suits your application because Windows Azure is a general purpose platform as a service. In fact, if you take a look, it’s the only general purpose platform as a service on the planet.

So what that leads us to is that we're providing this set of services in Windows Azure so that you can focus on what matters to you, which is the application, and ultimately the business problem you're trying to solve, because all this other infrastructure used to just get in your way. Our job is to get that stuff out of the way and enable you to get applications written as fast as you possibly can, get them live and into the market, get data back on them and learn, and then be able to respond to that data very quickly. It's all about applications, and particularly, it's about your applications.

So that's kind of the rest of the theme today. It's about applications and all the sets of services that are coming in Windows Azure. What I wanted to do was show you a video and give you a demonstration of an application that you might not be familiar with, but I can tell you for sure that the end product of this application, everyone is familiar with. The characters it's created, the way it's animated people, the whole worlds that have been created because of this application are things that we're all familiar with. And I'll tell you, the company that builds it is a household name. Let's go ahead and run the video.

(Video Segment.)

BOB MUGLIA: Please welcome Chris Ford from Pixar Studios. (Applause.) Chris, good morning.

CHRIS FORD: Good to see you, Bob.

BOB MUGLIA: Great, it’s great to have you here.

CHRIS FORD: Hi, everyone.

BOB MUGLIA: So, tell us about RenderMan.

CHRIS FORD: So, I’m sure when you all think of Pixar, you probably think of us as a feature animation film studio, and we are, but you may not be aware that we also pioneered the visual effects rendering technology that lies behind the visual effects revolution that has occurred over the past 20 years.

This technology is called RenderMan, sometimes Photorealistic RenderMan, because it can create images at such high levels of photorealism that you can combine them with live action in a seamless manner. It's both an interface standard and a product, one that we use not only for all our films going back to Toy Story, but that we also supply to the entire visual effects industry. Pretty much any visual effects blockbuster you've seen, going back to Jurassic Park, uses RenderMan in some form or another. The last 16 Academy Award winners for visual effects used RenderMan. And today, about 70 to 80 percent of all the effects movies use our software. So, we're pretty much the standard.

BOB MUGLIA: And really it’s not just the monsters and things like that, right? I mean, RenderMan is really used almost everywhere for much more subtle things, right?

CHRIS FORD: Yeah, I think the key thing here is that it’s cinematic quality images; that is, images where photorealism is paramount. So, it is used in, for example, in visualization, in broadcast television, in high-definition. All of these require that cinematic level of detail.

BOB MUGLIA: That’s great. So, why is the cloud important for RenderMan?

CHRIS FORD: Well, I think what I’ll need to describe is the magnitude of the problem. Let’s just take one of our films. Toy Story 3 is our most recent. This is a 103-minute movie. At 24 frames a second, that is about 148,000 frames.

Oh, you want it in 3D. You want to wear glasses in stereo. That’s 290,000 frames. And each of those frames at the moment will take eight hours on average to render. So, you can see this is a big computational challenge.

Now, that type of quantity requires a lot of resources. If we just had one processor, it would take us approximately 272 years to render a movie. (Laughter.)

Now, I'm not sure about you, but we're not that patient. So, generally, if you're a studio, you'll have a large datacenter called a render farm, these days averaging four, five, six thousand processors or many more, to calculate the final movie, but even then it still takes, you know, weeks and months to generate all those final frames.
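Chris's arithmetic is worth restating, since the numbers drive the whole argument for elastic rendering. Using his own figures, the single-processor estimate lands within a year of his 272, and the 5,000-core farm figure comes from his own range:

```python
# Restating Chris's arithmetic for Toy Story 3, using his own figures.

minutes = 103
fps = 24
frames_mono = minutes * 60 * fps        # frames in the 2D version
frames_stereo = frames_mono * 2         # doubled for stereo 3D
hours_per_frame = 8                     # his average render time

one_cpu_years = frames_stereo * hours_per_frame / (24 * 365)
farm_days = frames_stereo * hours_per_frame / 5000 / 24   # 5,000-core farm

print(f"{frames_mono:,} frames, {frames_stereo:,} in stereo")
print(f"one processor: ~{one_cpu_years:.0f} years")
print(f"5,000-core farm (perfect scaling): ~{farm_days:.0f} days")
```

Even assuming perfect scaling, a dedicated farm takes weeks per film, which is the gap elastic capacity is meant to close.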

But the real story here isn’t so much about large studios who are already invested in their datacenters, it’s more about medium-sized studios and other content creators who really want to access cinematic rendering, but they can’t afford the expense of a large render farm, and this is where we’re particularly excited about the prospect of the cloud and Azure, that essentially a small studio with very creative people can upload their assets and gain access to hundreds, thousands of processors without actually having to pay for a physical installation.

BOB MUGLIA: So, the cloud lets RenderMan come to places that it never went to before, it never could get to before, because the cost was just too expensive.

CHRIS FORD: Absolutely. Yes, I mean, there is at the moment, you know, an ongoing diffusion in visual effects, in graphics production, and a growth in that production, especially outside of the USA on a worldwide basis. And always the cost of building the infrastructure is a key limiting factor, and the cloud has the potential to actually change that equation.

BOB MUGLIA: So, Chris, what led Pixar to Windows Azure and choose Windows Azure as the cloud platform?

CHRIS FORD: I would actually summarize three main points. First is scalability. You know, the idea of a physical render farm with a render for hire type service is not new, it’s been done many times before, but that’s the whole point, it’s a physical farm. It’s a finite resource.

The whole point about the cloud is its extreme elasticity, the fact that you can scale it to enormous sizes, which means that you can potentially push jobs through extremely quickly, just because it’s very wide as well as very deep.

Second is sustainability, and this is very key for us. Remember, we’re in a business where production time scales can sometimes take, you know, two years or more. We need to ensure that a solution we put in place is going to be there in a few years’ time. Confidence is a key part of the value proposition. So, that’s critical for us.

And thirdly, frankly, it just works. You know, we just uploaded the HPC Windows version of RenderMan up into the cloud, and it worked. We didn’t have to do anything else at all.

So, we can focus on what we do well, which is develop rendering software, knowing full well that we can run the same software in the cloud without any extra effort.

BOB MUGLIA: That’s great. Well, let’s take a look at it.

CHRIS FORD: Absolutely. So, what I'm going to show you here is a proof of concept demo that we did on Windows Azure with the assistance of Microsoft, and this is not necessarily what the actual final thing will look like, but it does give you a sense of the key elements that we think are important. What we're looking at first of all is not actually anything in the cloud; you're looking at a character that some of you may recognize — (laughter) — our friend Buzz Lightyear, and he currently is standing inside Maya. Maya is a 3D application which is very commonly used in the production industry to aggregate all of the scene assets that will be uploaded for the final render process.

What I mean by that is all the models, the sets, the backgrounds, the vehicles, and also the light sources and the cameras that are positioned relative to those assets, all of that is embodied into one big scene description file called a RIB file. This is a RenderMan standard file.

And these files can get pretty large, because you’ve seen the scenes in many movies these days, they’re pretty complicated.

So, Buzz essentially is written out as a RIB file, and then we get to the stage where we want to upload it into the cloud. So, this kind of shows you the experience, what it looks like.

First of all, it should be simple, right? It should be clean and it should be attractive. It’s on the Web, as we see here.

And what I do, of course, is I first of all enter my account area, so I know my work is safe from prying eyes. And I can create a new job.

Now, you can see that there is already a job running on the cloud, but we can also see that there's a history of the jobs that have been submitted in the past. But in this case we want to submit a new job. So, we need to upload that RIB file, which I mentioned earlier. So, I'm going to choose it, and here it is, and we're going to upload.

And as it uploads, there are a number of concepts I would like to relay, one of which is the fact that as we go through the file, it will start to identify if there are any changes. And if it sees changes, it will only upload those new elements that are required. In other words, we can analyze the file so we're not redundantly uploading the same thing time and time again, which of course would cost time and expense.
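The change detection Chris describes is essentially content hashing: upload only elements whose hash the service hasn't seen. A minimal sketch, with invented element names and an in-memory stand-in for the service's index:

```python
import hashlib

# Sketch of content-hash change detection: upload only scene elements
# the service hasn't seen before. Element names and the in-memory index
# are invented for illustration.

uploaded_hashes = set()   # stands in for the service's index of known blobs

def upload_changed(elements):
    """Return the names of elements that actually need uploading."""
    to_send = []
    for name, data in elements.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in uploaded_hashes:
            to_send.append(name)
            uploaded_hashes.add(digest)
    return to_send

scene_v1 = {"buzz.model": b"mesh-v1", "set.bg": b"background"}
scene_v2 = {"buzz.model": b"mesh-v2", "set.bg": b"background"}  # only Buzz changed

print(upload_changed(scene_v1))   # first upload sends everything
print(upload_changed(scene_v2))   # second upload sends only what changed
```

Hashing by element rather than by whole file is what lets a tweak to one character avoid re-uploading the entire scene.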

Another key concept which is implicit is the concept of an estimator. Now, you must remember that we are dealing with a creative industry here. We have directors, we have designers, and they just love to change things, they just love to tweak things, normally right up in front of the deadline. And the elasticity of the cloud offers the potential to essentially be able to accommodate those changes very close to the deadline.

So, for example, if you need it tomorrow, we can just increase the number of rendering units, if you like, which are purchased, and you can see that the estimated rendering time here goes way down.
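An estimator like the one in the demo boils down to dividing total frame-hours by the number of units and attaching a price. The sketch below is hypothetical; the rate and the burst surcharge are invented to mirror the faster-costs-more behavior Chris shows:

```python
# Hypothetical estimator for the demo's 1,440-frame job. The rate and
# the burst surcharge are invented to mirror the demo's behavior:
# more units means less wall-clock time but a higher price.

FRAMES = 1440
HOURS_PER_FRAME = 8
RATE_PER_UNIT_HOUR = 0.50   # invented price per rendering-unit-hour

def estimate(units):
    """Return (wall-clock hours, total cost) for a given unit count."""
    hours = FRAMES * HOURS_PER_FRAME / units     # assumes perfect scaling
    surcharge = 1.0 + units / 10000              # invented rush premium
    cost = FRAMES * HOURS_PER_FRAME * RATE_PER_UNIT_HOUR * surcharge
    return hours, cost

for units in (100, 500, 2000):
    hours, cost = estimate(units)
    print(f"{units:>5} units: ~{hours:7.1f} h wall clock, ${cost:,.0f}")
```

The slider in the demo is just this function run in reverse: pick a deadline, solve for units, and show the price.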

Now, there will be a price attached to that, of course. So, equally you can take the more economical path, and you have a longer rendering time.

The whole point is that here we’re really leveraging the elasticity of the cloud to give us all of these options, which I think is pretty exciting.

BOB MUGLIA: So, what's actually happening here is that when you uploaded the file, you were uploading the RIB file into Azure blob storage —

CHRIS FORD: That’s right.

BOB MUGLIA: — and then as you choose the different tiers of rendering time, that will determine the number of worker roles in Azure that actually work on this particular job simultaneously.

CHRIS FORD: Absolutely.

BOB MUGLIA: And the job, correct me, RenderMan is a C++ application —

CHRIS FORD: Absolutely.

BOB MUGLIA: — a Windows C++ application that you were able to just quickly move to Windows Azure.

CHRIS FORD: Yes, it was really easy.

So, once we've selected the number of rendering units we're going to use, I'll just enter the output image filenames that will come from the renderer, and I'll start the job, and here we are rendering.

Now, there are 1,440 frames in this sequence, and I will admit that watching rendering is not often the most exciting thing, but what is interesting is seeing the frames come out. So, while this is actually rendering, we can go to the job details, and you can see each of the frames begin to pop out as each one finishes.

These little proxy images, most importantly, will enable us to see if there's anything going wrong, so, of course, at any time we can cancel the job and we can start again.

But we can see the completion time is printed here, and at all times we’re in control of the job. We know essentially how much it’s going to cost, how long it will take.

So, finally, those jobs will just literally go straight down to your local hard drive where you just simply put them straight out to film or to video or whatever output medium you are using, and you have a full record of all of the jobs that are submitted. Because remember, a visual effects shot or any sequence you’re working on may be composed of many individual shots, and some of these can get quite complicated.
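The pattern Bob and Chris have been describing, a scene file in blob storage with many worker roles pulling frames and writing results back, can be sketched generically. No real Azure APIs appear here; a thread pool, a queue, and a dict stand in for worker roles, the job queue, and blob storage:

```python
import queue
import threading

# Generic sketch of the worker-role pattern: frames go into a shared
# queue, many workers pull and "render" them, and results land in a
# dict standing in for blob storage. No real Azure APIs are used.

frame_queue = queue.Queue()
blob_store = {}                     # stands in for Azure blob storage
store_lock = threading.Lock()

def render(frame_no):
    return f"frame-{frame_no:04d}.png"   # pretend this took eight hours

def worker():
    while True:
        try:
            frame_no = frame_queue.get_nowait()
        except queue.Empty:
            return                   # queue drained: this worker retires
        result = render(frame_no)
        with store_lock:
            blob_store[frame_no] = result   # each frame "pops out" as it finishes

for n in range(1, 1441):             # the demo's 1,440-frame sequence
    frame_queue.put(n)

workers = [threading.Thread(target=worker) for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(blob_store), "frames rendered")
```

Scaling up the job in the demo amounts to raising the worker count; the queue and the result store don't change.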

All of us on Pixar’s RenderMan team are really pretty excited about the prospect of the cloud to significantly alter the cost and production equation for this type of work, and I really would like to thank, Bob, you and Microsoft, for all of your help in this proof of concept, and really helping us share it with you here today. Thank you.

BOB MUGLIA: Chris, we really appreciate it. Thank you so much. We look forward to working with you.

CHRIS FORD: Thank you. Thank you very much.

BOB MUGLIA: Thank you. (Applause.)

I love that example, because it’s something that we’ve all experienced. We’ve all seen what RenderMan can do, and enjoyed it in so many wonderful pictures that it has helped to create over the years, and it really speaks to the promise of the cloud, because it takes something that was only previously available to the largest studios and now makes it available to really anyone who wants to create the next wonderful animated feature or, in fact, augment an existing regular motion picture.

So, coming back to the apps, it’s all about the apps and it’s all about what we can do to help you be more effective in writing apps by providing wonderful new Windows Azure services.

This morning, what I'm going to be doing is talking about the broad set of new features that are coming in Windows Azure to make your job easier, and I'll talk about how those features can help you take your apps and move them to the cloud, how you can enhance your existing applications, and then some new things that are very transformative in the way applications will be written with the cloud and platform as a service.

And I’m going to start by talking about moving existing applications. And interestingly, I’m going to talk for just a minute about infrastructure as a service. This is the only feature I’m going to talk about this morning that is really an infrastructure as a service feature, and what we’re now doing is bringing a VM role out on Windows Azure so you can take a Windows Server 2008 R2 image that you’ve built with Hyper-V in your environment, and move that into the Windows Azure environment, and run it as is with no changes. (Applause.) Yeah, glad to have that feature.

And over the next year, we'll be bringing out support for Windows Server 2003, and we'll also be enabling image creation within the cloud.

So, we're treating this as something that helps you take your existing applications forward. We again believe very strongly that the destination is PaaS, and the destination for applications is inside Windows Azure roles, but we know that many of you have applications that would need some work to move there, or that may not be worth taking that step for, and so having this infrastructure as a service feature is a helpful transition.

Another feature that we're announcing today to help you move your application is something that allows you to take an existing application and move it to PaaS, and that's server application virtualization, Server App-V.

So, if you’re familiar with App-V on the Windows client, what it does is it takes an existing Windows application that writes to the registry and does a whole lengthy, complicated install process, and it packages it into a file, which can essentially be X-copied onto a desktop.

Well, we’ve taken that technology and now built it for the server environment, and for the Windows Azure cloud environment, and what it does is it lets you take an existing application, and then deploy that application without going through its installation process into a Windows Azure worker role, and we think it’s a very exciting way to help you get compatibility with existing Windows Server applications in the cloud environment.

We're also announcing a broad set of new features for Windows Azure. These are things you've all been asking for, important features that you've been waiting for.

And by the way, the features on this page will be available later this year. All of these features will be in beta before the end of the year.

A smaller instance of Windows Azure, so if you don't need a full core for a role, you can have a sub-core instance.

Remote desktop capability, something I think everybody needs in order to get access to the roles.

Full IIS support, enabling a broad set of new applications, smooth streaming applications, and any number of websites on a single server, in a single role.

Virtual network capability, which I talked about at PDC last year under the code name Sydney. That will virtually connect the datacenters, databases, information, and apps in your existing corporate environment into the Windows Azure applications that you have.

Admin mode and elevated privileges, finally. It's a great feature that lets you install applications that require admin mode and elevated privileges.

Server 2008 R2 roles, so the latest instance of Windows Server now available in the role form within Windows Azure.

And then support for multiple administrators to allow greater flexibility in terms of how you manage the environment.

And so now to show you all of that, what I want to do is invite Mark Russinovich to come on stage and talk about and show you some of the great new features in action in Windows Azure. Mark? (Applause.)


MARK RUSSINOVICH: Hi, everybody. Well, let's start here on a site that I think a lot of you are familiar with. This is the Channel 9 website, of course. This is where you go to watch videos of developers talking about the products that they're working on.

What you might not know, though, is that Channel 9 has taken a really big bet on Windows Azure. This used to be an application that ran on-premises, on a server, and now it runs up in the cloud on Windows Azure.

And what I’m going to do is take you behind the scenes of that application to show you how they’ve managed to do that by taking advantage of some of the features that Bob just mentioned.

So, let’s go to the new Windows Azure management portal. Those of you that have used the existing portal, you’ll probably see this as a little bit of an improvement.

It starts here with this dashboard, which shows you at a glance the status and health of your various deployments, their roles, and their role instances. And then over here on the compute services node you can get detailed information about all of those instances and those roles and those deployments.

What I’m looking at here is the Channel 9 service, and that Channel 9 service consists of two roles. This first role here is the memcache-v role. This is a worker role front-end which caches the Channel 9 content. And then you’ve got this role right here, which is a standard Web role, the Channel 9 view role, which is actually hosting the website.

There are two instances of that role. You can see those here, for availability. And when I click on them, you can see buttons light up in the toolbar. I think the one that a lot of you will be excited about is this one right here, which I’ll go ahead and press. And if you’ve ever had a problem with one of your services up in the cloud, and you’ve only had Windows Azure diagnostics as your way of trying to figure out what’s going wrong, then this button is the one you want, because it lets you, like Bob said, connect right inside a running role instance.

So, I’m going to run my favorite diagnostic and troubleshooting tool to show you around here, and that’s Task Manager. No, come on, really you thought I’d run Task Manager? (Laughter.) This is, of course, Sysinternals Process Explorer, and what you’ll notice here, some of you might recognize the W3WP. This is the IIS worker process. This is full IIS. And if I click open the DLL view to show you some of the components loaded inside it, you can see that the Channel 9 team is using some third-party components. If I scroll down here, and I’ve got to stop here, I didn’t realize that rhinos hibernated until I saw this. I bet a lot of you didn’t either.

And what I’m really trying to show you here is this DLL right here, the Passport Relying Party Support DLL, this is actually the Windows Live authentication COM object. It’s a COM object that requires installation the first time the role starts up. It requires admin rights. And so I’ll show you in a little bit how they managed to configure that.

As further evidence that this is full IIS, you’ve got InetMgr, the IIS Manager, here. If I click that open, you can go look at the website instance that Windows Azure created when the role started. Here it is. And if I click Explore, I get taken right to the root directory of that website installation.

And the reason that I’m taking you here is to go into this startup folder, and this is where various executables and their support files are stored that get executed the first time the role instance starts up. The one of interest is this install RPS executable, and that’s the one that installs that Windows Live ID COM object.

So, let’s take a look at how hard or how easy it was for the Channel 9 team to set this up. And for that I’ll go back to my desktop and open up Visual Studio. We’ve got the Channel 9 Windows Azure service loaded here into Visual Studio, and I’m looking at the service definition file. There are some new sections here that, even if you’ve programmed in Windows Azure before, you haven’t seen. The first is this startup section, and this is where you configure the tasks that execute the first time the role instance starts up.

This is the task that installs that RPS executable we just saw, and as a COM object it requires admin rights, which it’s given here with the “elevated” value on the execution context property.
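A startup task like the one Mark describes would be declared in the service definition file (ServiceDefinition.csdef) roughly as follows. This is a sketch, not the actual Channel 9 definition; the service, role, and script names are illustrative.

```xml
<ServiceDefinition name="Channel9"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="Channel9Web">
    <Startup>
      <!-- Runs when the role instance starts; executionContext="elevated"
           grants the admin rights needed to register the COM object. -->
      <Task commandLine="startup\InstallRPS.cmd"
            executionContext="elevated"
            taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```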

If we scroll down a little bit, this is where the website is defined. And it’s got standard HTTP and HTTPS port bindings. You notice it’s in the section called “sites,” and that’s because it’s as easy as copying and pasting this code to create another website. I just have to give it a new name, and I’ve got another website. So, it’s taking advantage of some of the full IIS features.

It exposes a number of other powerful IIS features like virtual directories, virtual applications, and you can even do things like create custom host headers, which that’s supposed to auto-complete. There we go. Well, that should auto-complete.
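The “sites” section Mark is scrolling through might look roughly like this; the site names, directories, ports, and host header are illustrative, but the shape shows how one role can host multiple websites, virtual applications, and custom host headers under full IIS.

```xml
<Sites>
  <Site name="Channel9" physicalDirectory=".\Website">
    <!-- A virtual application hosted under the main site. -->
    <VirtualApplication name="admin" physicalDirectory=".\AdminApp" />
    <Bindings>
      <Binding name="Http"  endpointName="HttpIn" />
      <Binding name="Https" endpointName="HttpsIn" />
    </Bindings>
  </Site>
  <!-- A second website on the same role: copy, rename, bind a host header. -->
  <Site name="Channel9Blog" physicalDirectory=".\Blog">
    <Bindings>
      <Binding name="Http" endpointName="HttpIn" hostHeader="blog.example.com" />
    </Bindings>
  </Site>
</Sites>
```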

Let me scroll down a little bit further, and show you one last thing here. This is the import section, and that’s new. This is where you configure Windows Azure plug-ins to be run in your service when it starts up. And I’ve got one plug-in configured here, the remote access plug-in, and that’s actually what we used when we Terminal Server’d, or RDP’d, into the running instance.
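In the Windows Azure SDK of that era, the import section enabling RDP looked roughly like this (module names as shipped in the SDK; treat this as a sketch of the pattern, not Channel 9’s exact file):

```xml
<Imports>
  <!-- RemoteAccess enables RDP into role instances; RemoteForwarder must be
       imported on one role in the service to relay the connections. -->
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
</Imports>
```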

But the Channel 9 website is more than just that front-end. As you know, there are a lot of videos hosted on the website, and they’ve got a big job encoding the videos they capture into the eight different formats they publish.

That video encoding process is centered around this file share, which is on a server on-premise, and the way that the encoding process kicks off is that a video is dropped into this incoming folder. And the way that videos are dropped into that incoming folder is using this tool right here. I’ll drop a video in there, and then we’ll talk about what happens when a video gets placed there.

And let’s give our video a name. I’m going to do some search engine optimization, because I want people doing Web searches to come to Channel 9 and watch videos. So, I’ll give it a name that I think will be picked up on a lot of searches. (Laughter.) I’ll submit that video. And now that job is going to be picked up and executed.

But the way that happened in the past is that there was a Channel 9 encoding client that ran on that server, picked up the job, encoded it to the multiple formats, and then uploaded it.

The problem with that architecture is that around events like the PDC and TechEd there are literally hundreds of videos that need to be encoded for uploading, and if just that one server was responsible for it, it would literally take days to get those videos up into the cloud.

So, what the Channel 9 team has done in the past around those events is provision additional servers, even take machines from home, even take their children’s PCs and bring them in, and there’s nothing worse than a child waking up in the morning and not being able to check on Justin Bieber’s status. (Laughter.)

So, thanks to Windows Azure, though, that’s not going to happen tomorrow. And let me tell you how they did that, because this is a legacy application that they wanted to burst into the cloud to help offload that share.

So, they needed to get that media encoder client software up into a Windows Azure service, but they faced two challenges in doing that. The first is that that application requires manual configuration. They haven’t had a chance to update it yet to be scriptable for its install.

And the second is that they obviously needed to connect back into the Channel 9 domain and get access to this share, so that the role instance can pull that video up.

And they also need to connect back in with credentials that are valid in the Channel 9 domain. You can see that only Magic Folder users have access to that share.

So, let’s talk about how they managed to address those two challenges in getting their application migrated to the cloud, and for that I’m going to go back to the management portal and take a look at the magic folder encoder service, which we see right here. It’s got one role, the magic folder encoder role, and its role type is different from what we see for the other ones. It’s the special VM role that Bob mentioned. This means that the Channel 9 team was able to take a Hyper-V VM, install the encoder client, configure it, take a snapshot, and then upload it to the cloud — you can see the VHD here — and then connect the service back to it.

So, that’s the way that they addressed the first challenge of the client application requiring manual configuration.

As far as the second one, being able to connect back into the Channel 9 domain with Channel 9 credentials, they leveraged this new feature, the virtual networking feature.

And the way this feature works (again, this is Project Sydney from last year, now called Windows Azure Connect) is that with the Windows Azure Connect plug-in, these role instances are automatically joined to the Channel 9 domain as they start up, and given fully qualified Channel 9 domain names.

Now, I could show you connecting into those instances the way that we did before by going back and pressing the connect button, but to demonstrate that they really are part of this domain, I’m going to copy the short name of this server and then RDP right into it using that.

Now, part of the way that connection was made between the cloud and on-premises, like I said, was the Windows Azure Connect plug-in specified in the service definition file, but here is where the connections between the two, the on-premises and the cloud resources, are configured. You’ve got a group here, which is the Channel 9 domain, with the on-premises resources. This is the server that hosts that file share, and it’s set up with a connection back up to the cloud and this role instance.

Now let me type in my credentials.

And with that, the connection between the cloud and on-premises is automatically made. It’s as simple as configuring, in the service definition file, admin credentials for an account that can join back in. And now when that login actually takes place, we’ll be able to see the Magic Folder client software encoding that video that I dropped in there. And this will take a second, because we’re actually making a round trip back into the Channel 9 domain.

And further, I’m going to be able to double-click on that file share, which is visible right here. There’s the encoding software running, taking that job. Here is that share on the Magic Server machine, back here on the Channel 9 domain, and if I double-click in, we’re able to see the files you saw there from on-premises, just as if this server was down the hall. So, we’ve got full connectivity back into the Channel 9 corpnet.

So, that is a look at Channel 9, and how they took advantage of a lot of the features that we announced today to get the Channel 9 application up into the cloud and get their encoding process burst up into the cloud as well. (Applause.)

MARK RUSSINOVICH: Thanks, Bob.

BOB MUGLIA: Thanks, Mark.

So, a lot of great new features in Windows Azure, which I think came from your feedback, features that you needed and wanted to make it easier for you to take your applications and bring them up in the Windows Azure environment.

And, you know, as I said, this is all about applications, and we thought it would be interesting to show you an internal application that we’re working on, a Microsoft app that we’ll be making available in CTP form next year, which is Team Foundation Server, an application that many of you are familiar with as developers writing code and working together in teams. Today, you need to install Team Foundation Server locally within your environment.

We will be bringing Team Foundation Server out into the Windows Azure environment, and I thought it would be very helpful for you to know that we’re doing that, but we also thought it would be interesting for you to get a better feeling for what it’s taken us to take what is a pretty substantive application, and get it running in the Windows Azure environment.

And so with that, I’d like to invite Brian Harry up, our technical fellow, and chief architect of Team Foundation Server. Brian. (Applause.)

BRIAN HARRY: Thanks, Bob. I appreciate it.

So, Team Foundation Server is the heart of Visual Studio application lifecycle management. It sort of enables collaboration between all the people involved in creating software.

And what I’m going to show you is something we’ve been working on for the last few months. I first talked about it on my blog a few months ago. And I want to show you how far we’ve gotten, and talk a little bit about what it’s taken.

So, let’s start by talking about what it might take to get started using Team Foundation Server in the cloud.

So, let’s start a trial, and pick a project name that I want to use, create my account. Now I’ll logon.

BOB MUGLIA: Using your Yahoo! ID here.

BRIAN HARRY: Using Yahoo!

BOB MUGLIA: I guess they’re a partner of ours; that’s OK.

BRIAN HARRY: Indeed. (Laughter.)

So, as Bob mentioned, today you have to install Team Foundation Server on-premises. And we worked really hard in Team Foundation Server 2010 to make that really easy. And we were very proud to get to the point where going from the time you stuck in the CD to up and running took about 20 to 30 minutes.

With the cloud we’re done. I’m done. My project is now provisioned, it’s ready for me to use, and it takes about 20 seconds. No software to install, simply say you want an account and you’re ready to go.

BOB MUGLIA: Plus developers get access really from anywhere with this.

BRIAN HARRY: Yeah. You can now use — your global friends can now connect to the project and use it.

So, that’s kind of what getting started would look like.

What would it look like to use it? Is it a completely different experience? Well, let’s start like any good developer would do and launch Visual Studio.

The first thing you’ll notice is that it asks me to sign in. And Team Foundation Server today, of course, only supports Active Directory authentication. Using AppFabric access control services we’re now able to support a bunch of different authentication mechanisms, things like Live ID, Google, Yahoo!, Facebook, and we can even federate with your corporate identity so you can use your corporate identity to connect to TFS.

So, let’s sign in with Yahoo! And here we go, we’re in Visual Studio. It looks like Visual Studio. It’s connecting to TFS. And it really is no different than your TFS as you know it today. I can open up my work items and see what work I have to do. That’s all coming down from the Azure cloud. I can browse my source code, go find a project I would like to work on, and open that in Visual Studio just by double-clicking it. That downloads it from the Azure cloud onto my desktop, opens it in Solution Explorer. And there it is, the project is open.

I can build that code, I can run my unit tests, I can do whatever I do to get things ready. And I have some nice sample code that I’ve modified here to show you what a check-in would look like.

So, I’ve got some changes. I’m happy with them. And I want to do a check-in.

I just click check-in, it’s going to upload that into the Windows Azure cloud, and again completely seamless inside Visual Studio, nothing special.

BOB MUGLIA: Nothing different.

BRIAN HARRY: Yeah, no different.

Now, it happens that I’ve configured this project on my Windows Azure TFS instance for continuous integration. Continuous integration is a process whereby every check-in is built, unit tests are run, static analysis is run, whatever it is you’d like to do to validate the code that you check in.

So, as I check that in, it automatically kicked off a build on the new Windows Azure VM role, and that build is right here in the queue. And if I open up that build status, I can see that it’s been in the queue for eight seconds. I could look at the build details, and I can watch the build progress.

BOB MUGLIA: So, it’s doing a build in the cloud, in Windows Azure, and in this case it’s using the VM role, and that’s really useful because we know many of you have very custom build environments.

BRIAN HARRY: That’s right.

BOB MUGLIA: We’ll also have a more standard build environment that will just use a standard Windows Azure worker role to actually do the build.


BRIAN HARRY: So, you know, that’s it, that’s sort of the developer experience, completely transparent. I don’t have to have any local development infrastructure. It’s all in the cloud. My TFS, my source code, my work items, my builds, everything is in the cloud.

So, that’s cool, it’s very simple for the developer, but as I mentioned at the beginning, TFS is about enabling collaboration among all of the people involved in the software development process. And there are a lot more people than developers.

And we have a lot of interfaces involved in Visual Studio application lifecycle management. We have a Web interface, we have this here, Microsoft Test Manager, which is for test professionals who want to be able to do test plans, execute test cases, track the quality and the test progress. That’s set up here, connected to that exact same TFS project running on Windows Azure.

And apparently I don’t have permission, but OK.

We’ll even enable our Eclipse client. So, we have an Eclipse client for TFS that connects, and that will also work with this.

BOB MUGLIA: So, anything that TFS works with today will just work in the Windows Azure environment with the hosted TFS.

BRIAN HARRY: That’s right.

BOB MUGLIA: That’s great.

BRIAN HARRY: So, you know, totally transparent experience.

Now, let me take one moment to sort of talk to you a little bit about behind the scenes, under the covers what it’s taken for us to do that.

TFS started, of course, as a pretty standard Web application with the standard three-tier architecture. As we moved that to the cloud, we took our Web application pieces, which were our Web services and our Web UI, and those moved pretty transparently into a Web role in Windows Azure. We took our background processing, which we call our job agent, and that moved very easily into a worker role. We took our TFS build component, and that, as Bob mentioned, can run in a VM role; you’ll also be able to do it in a worker role. And we took what was an on-premises SQL Server database; part of that has now moved to SQL Azure, and part of it moved to Windows Azure blob storage.

Now, there are some interesting questions about why we did that and how we decided what to move where, and later today at 4:30 we’re going to be doing a whole 70 minutes on the process we went through porting TFS. So, come if you’d like to learn more about that.

BOB MUGLIA: But this is a pretty significant application. I mean, a lot of stored procedures and a lot of code, right?

BRIAN HARRY: Yeah, yeah, TFS is a large application. The SQL Server part especially: it’s almost a thousand stored procedures, over 250 tables, and about 180,000 lines of T-SQL.

We were able to port all of that to SQL Azure with about two people in one month.

BOB MUGLIA: That’s great.

BRIAN HARRY: And then on the Web tier it’s about 350,000 lines of C# code, and that took one person one month to get it running on Windows Azure.

BOB MUGLIA: And, in fact, I think you said one of the more complicated things was taking what was a single-tenant application and making it multi-tenant, because that’s something that’s necessary for this kind of app.

BRIAN HARRY: Yeah, that’s by far the biggest chunk of work we had. Getting it to run on Azure was not hard. It’s turning it into a service that we can operate 365 days a year, 24 hours a day. That’s the more challenging part.

BOB MUGLIA: Well, I think the insight that you’ve gained in terms of moving an existing application onto Windows Azure, and then doing the things like taking advantage of the different identity providers, and then making it multi-tenant will be very useful to people this afternoon.

That’s great. Thanks a lot, Brian. Thank you so much.

BRIAN HARRY: Thank you, Bob. Appreciate it. (Applause.)

BOB MUGLIA: So, this is a great example of how we’re looking at what it takes to help you take existing applications and move them into the Windows Azure environment.

And one of the key attributes of this is how we can help by providing a very broad set of highly scalable, always available, multi-tenant services that you can just incorporate into your applications with a couple of lines of code, and we have some great new services and some great new features that we’re announcing today.

The access control service is supporting more providers, things like Facebook and Google, obviously things like Active Directory for corporate access, and, of course, Windows Live as well, and a broader set of protocols, particularly the OAuth protocol.

We’re pleased that the AppFabric caching service, which has been available on Windows Server since earlier this year, is moving to Windows Azure and will be available later this year for you to start working with. It gives you a highly available cache to accelerate your applications, and it’s incredibly easy to get fantastic performance.

Service bus is being enhanced with durable message support, and with the ability to connect through BizTalk to existing business applications. That’s a huge feature in terms of enabling a broad set of applications that require the consistency and stability of a durable message queue.

SQL Azure, we’ve got some great new things coming in SQL Azure. Reporting services: a fully available set of SQL Server reporting services, now hosted as a multi-tenant service, so that all you need to do is call it and create reports against SQL Azure, something that people have been asking for, for a long time. There’s also data sync capability, to synchronize multiple instances of SQL Azure within the cloud, but also to synchronize your on-premises SQL Servers to SQL Azure running in the cloud, connecting your existing datacenter. And, of course, that works very well with the Virtual Private Network connection that allows you to take a particular database server and connect it to a particular Azure application within the Azure cloud. So, a lot of great new features are available for you as developers to begin using.

And so with that, what I’d like to do is invite Don Box and Jonathan Carter up to show you some of those great new features in action. (Applause.)

DON BOX: Hello! How are you guys?


DON BOX: Are you awake? The keynote is going to be over at some point, I’m sure by 4:30, because Brian said he’s got a talk, so I know we’ll be done by then.

Thanks for coming to PDC. It’s great to be here in Los Angeles, in Boston, in New York, in London, all my favorite cities. I get to program in all these cities at once.

This is my friend Jonathan Carter. What we’re going to do is write some programs against this notion of a platform as a service. Bob just talked about it. He showed you a list of some of the platform services that we provide, that we operate, that you as a developer don’t need to think about so you can just focus on writing your app, mostly the way you do now, using the tools and languages that you’re used to.

So, what we’re going to do is take an app that we’ve done a little bit of work on, and we’re going to go add some more platform services to it.

So, Jonathan, can we go to Visual Studio?

So, what we’ve got here is a classic ASP.NET MVC application that we’ve written, and what we’re going to do is actually deploy it to Windows Azure. What we’ve done in preparation for this is we’ve used the feature Russinovich talked about earlier, full IIS. So, we’ve got a virtual machine already deployed out into Windows Azure, using full IIS, and what we’re going to do is just use Visual Studio with Web Deploy to push our app out there. So, rather than waiting for the standard staging of our deployment with Web roles and worker roles, we can just use Web Deploy to get this thing up and running so we can do interactive development with our own private piece of the live cloud. And by the way, everything you’re going to see today is against the live Azure bits.

Great. So, those of you who have written an MVC app before, this should look astonishingly familiar. We see the standard forms auth page. So, what we do is we type in some credentials that are specific to our application. And what this application does is it says, what topics are you interested in, ASP.NET and OData, of course, right, and it helps you find people you can go talk to here at this event. And this app will actually be live, and you can actually go use it, if you want.

So, this app is running in Windows Azure right now. We instantly deployed it using Web Deploy. And it’s using one of the platform services, which is SQL Azure. So, all the data that we’re fetching which has all the tables of experts and topics and their information has already been deployed out to SQL Azure.

So, the next platform service we want to add to this is the access control service. So, remember when we started the app, we were using forms-based auth. So, it was very specific, we had to manage all the accounts, we had to keep track of who could do what.

What we’re going to do now is use a platform service, which is ACS, to go set this up so we can use different kinds of logons.

So, what Jonathan is going to do is we’ve already kind of set up our app inside the portal to use several authentication providers. One of the authentication providers we’re using is the Active Directory federation service wired up to the Microsoft domain controller, so that if you’re a Microsoft employee, you can come use our app using your standard credentials that you use every day when you log into Microsoft’s corpnet.

How many people in here have Microsoft corpnet credentials? Show of hands. You’re not supposed to be in this room. No employees are supposed to be in this room. I encourage you to consider whether you should be in here or not. How many — so that’s an issue. Not many of you guys can use our apps, because there were like five people who snuck in with their Microsoft corpnet account.

The other identity provider we’re using is Facebook. How many people have Facebook accounts, show of hands? In New York raise your hands if you have Facebook accounts, in Tokyo. Oh, yes, I can see in Tokyo everybody has got Facebook; it’s awesome. Great.

So, everybody can get into our app because we’re using Facebook.

So, we’ve set this up so that we can support both kinds of authentication in our app, but what we want to do is we’re going to do some features that we want to only be available to folks who work at Microsoft.

So, what we’re going to do is use ACS to add a claim so we can detect in our program whether or not you came in from a Facebook logon or from a Microsoft logon.

So, to do this we’re going to go in and add a new rule which when we come in through the Microsoft identity provider, which you see at the top of the screen, we’re going to add a new claim. The name of the claim is going to be role, and the value of that thing is going to be something clever, which Jonathan has just put in, and it’s called ‘softy.’ Great.

So, if someone logs in as a Microsoft account, we can just go look up in our identity information and see if this claim is present. If they come in from Facebook, it ain’t gonna be there. Great.
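Checking for that claim in the application can be an ordinary role check; here is a minimal sketch, assuming Windows Identity Foundation has turned the ACS token into the current principal and mapped the “role” claim onto roles. The action name and view property are hypothetical; only the claim value “softy” comes from the demo.

```csharp
// Inside an ASP.NET MVC controller action.
public ActionResult Index()
{
    // "softy" is the claim value the ACS rule adds for the Microsoft
    // identity provider; Facebook sign-ins never carry it.
    ViewBag.ShowInternalFeatures = User.IsInRole("softy");
    return View();
}
```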

So, now what I want to do is go grab the address of our ACS service, and that address that Jonathan is copying off the Web page is a platform service that we’ve deployed at Microsoft, that we manage for you; you don’t have to think about it, you can just use it. And so what we’re going to do is go to Visual Studio and say add STS reference, and what add STS reference does is it allows us to say, look, we’ve got an application with this name, the name we configured back in ACS before we came out onstage. And then we give it the address of the actual platform service that is being hosted by Microsoft, and we’re set to go.

So, now when we build and deploy the app — and by the way, every time he builds and deploys the app, we’re actually pushing the bits back out to Windows Azure. So, we’re actually always running live in the cloud. The only thing that’s not running in the cloud is Jonathan himself and Visual Studio. Those are running here onstage. Great.

So, now when we get into our app, because we’ve wired it up to use ACS, instead of seeing the standard forms auth page, we’re presented with the ACS logon page, where we can pick one of the credentials. It looks like Jonathan has decided to use Facebook. Notice, those of you who are watching carefully, it’s [email protected]. If you would like to be ignored, you can send a friend request to Jonathan right now, and he’ll be clicking all of his ignore buttons, or accept and confirm later on. Great.

So, now that we’ve logged back into the app, we can type in whatever topics we care about. So, ADO.NET or ASP.NET and OData, and we’ll see the standard list of characters, including, of course, Pablo Castro. You can’t have a demo without Pablo in it or without OData in it. Great.

So, we’re set, we’ve already done that. So, we’ve now used SQL Azure and we’ve used ACS. So, let’s use another platform service.

So, one of the things we did was we wrote an OData server as part of the preparation for this demo, which talks to the Microsoft Exchange Server inside of corpnet here.

So, we can actually get access to the free/busy information from Exchange, and we want to go make that available to our application. So, what we did was we wrote an OData service that talks to Exchange, and then we stood it up on another platform service, which is the service bus. And the service bus allows me to put a name out into the global namespace of the Internet, and then use that as a relay point where I can send messages back and forth.

So, what we’re going to do is go talk to that thing. When I click the show availability checkbox, hitting find recommendations will not only do the SQL Azure query, but also send the OData request over the service bus, so I can get the information about free/busy. And you can see, because Jonathan did that, we got free/busy information down below.

But notice up underneath by “four results found,” it took about, what did that take, 2.391 seconds. That’s actually faster than it normally runs. We got lucky today. Sometimes it takes 10 seconds, sometimes it takes 15 seconds. Usually it takes about five or 10. Great.

So, what’s happening is we’re going off and doing all these round trips back to the Microsoft corpnet server that’s serving up the OData information, and what we’d like to do is use another platform service to make this thing go faster.

So, one of the new services that we’re announcing today is the cache service. So, we’ve got a cache service as part of our platform that you can use, either in your on-premise apps or in your Windows Azure hosted apps, which is what we’re building here. So, we’re going to go use that right now.

So, what Jonathan just did is he opened up our Web.config file in our application, and notice that under the data cache client config section we’ve identified where the cache service is that we’re going to use in our application.

And notice that the DNS name that we’re using is actually in a part of the Windows Azure world that we manage.

So, we’re going to go use that, and let’s go actually write some code.

So, we’re going to open up the program that is behind the find recommendations button, and we’re going to write the code using the exact same API we would use if we were doing the on-premise caching technology called Velocity. Some of you folks in this room are probably using Velocity right now. This is basically a hosted version of that, which I no longer have to worry about deploying and managing.

So, we’ll do the standard go off and get a default cache. And the key for the cache entry, we’re going to take that list of topics that you put into the text box, we’re going to alphabetize it so we get a canonical key, and then we’re going to use that as the key when we go off and do the lookup.

So, we’ll do a cache.get using the key, and if we get back a non-null value, that means we got a cache hit. And if we got a cache hit, we don’t need to go make those service bus calls, we can actually just use the cache results and return those immediately.

So, that’s what we’ll do here. MarshalByRefObject? That’s so old school.

JONATHAN CARTER: I got away, yeah.

DON BOX: That’s awesome. You’re going to CoCreate the free-threaded marshaler or something?

JONATHAN CARTER: Oh yeah, a little remoting code.

DON BOX: Great.

So, we’ve done the cache lookup; if we got a hit, we’re done. If not, we’ll go ahead and do those service bus calls, and notice that we’re doing basically standard OData calls using the OData URL, which is where we stood up the service bus entry. And then we do a for-loop, which is why it takes so long. You can criticize us for doing for-loops across the Internet, but whatever.

And so now what we’re going to do is once we’ve got that information, we’ll stick it in the cache, so the second time around, as long as you’re looking for the same information, we’ll be able to get a cache hit and it will go a lot faster.
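The cache-aside pattern Don is describing – canonical key, get, then put on a miss – can be sketched in a few lines. The demo’s actual code is C# against the hosted cache client (GetDefaultCache, Get, Put); this is just an illustrative Python sketch, with all names invented:

```python
# Cache-aside lookup sketch. The topic list is alphabetized so that
# "sql, azure" and "azure, sql" produce the same canonical cache key.

def canonical_key(topics):
    """Alphabetize and normalize the topics to get one canonical key."""
    return ",".join(sorted(t.strip().lower() for t in topics))

def find_recommendations(topics, cache, slow_lookup):
    """Return cached results on a hit; otherwise call the slow service
    (the OData round trips in the demo) and cache the result."""
    key = canonical_key(topics)
    hit = cache.get(key)
    if hit is not None:            # cache hit: skip the service bus calls
        return hit
    results = slow_lookup(topics)  # expensive round trips over the wire
    cache[key] = results           # prime the cache for next time
    return results
```

The second call with the same topics (in any order) never reaches the slow service, which is exactly the speed-up shown in the demo.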

So, Jonathan, are we done?


DON BOX: So, what we’re going to do is again deploy it back out to Windows Azure, and use yet another platform service. We’ll log back in with our Facebook account. That’s Jonathan.Carter<at> (Laughter.)

JONATHAN CARTER: I love friends, so it’s fine.

DON BOX: The first thousand friend requests he will accept. After that, he’s done. Great.

So, the first time we went out it took 2.403 seconds, but the second time when we do the request, notice it took about .4 seconds. So, this time we got a six-times speed-up. And again, your mileage will vary. Typically we get about a 10x.

Great. So, we’ve now added multiple platform services to our app. We’ve used SQL Azure, we’ve used ACS, we’ve used the cache service, and we’ve used the Service Bus.

There’s one more service we’re going to go use before we get kicked off the stage, and that’s SQL Azure reporting.

So, we’ve got a little reports tab up here. Why don’t we go click it?

Now, notice that when we click the reports tab, we got a 401, we got an access denied. Let’s talk about why we got that. Remember when we started this demo, we were in the portal and we were configuring that if we came in through Facebook we had one set of claims, but if I came in with a Microsoft logon, we got that extra claim, which was role equals softy?

So, if I go look in my standard ASP.NET config, which has been around forever – this is like a 10-year-old config schema – and I look at the location for the reports site, which is what we were just clicking, notice we have an authorization rule which says if you have the softies role, we’ll let you in, and if you don’t, we’ll deny you. Because Jonathan was logged in with his Facebook credentials, he didn’t have that softy claim, so he wasn’t allowed in, which is why we got the 401. So, I was able to use that platform service to get my access control pretty trivially. Actually, in this case, I didn’t have to write any code, though I could have checked for the claim programmatically.
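In the standard ASP.NET config schema, the location-based rule Don describes looks roughly like this (a sketch; the path is illustrative, and the role name comes from the demo):

```xml
<!-- Sketch: deny everyone without the "softies" role access to the reports site -->
<location path="Reports">
  <system.web>
    <authorization>
      <allow roles="softies" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
```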


So, what we’re going to do now is use SQL Azure Reporting. Now, I am not a great user of SQL Azure Reporting. Jonathan, are you?

JONATHAN CARTER: Never used it once in my life.

DON BOX: Actually, you’re about to use it.

JONATHAN CARTER: Except for right now.

DON BOX: I’m going to walk you gently through the world of SQL Azure Reporting. So, what we did was, we got someone we know who is an expert at SQL Azure Reporting to produce this beautiful report using the Business Intelligence Development Studio, also known as BIDS. This is the standard BIDS version that ships with SQL Server 2008 R2 – standard file format, unmodified tool.

What we’re going to do, though, is now we’re going to take this report, and we’re going to deploy it not to a local server, but to a SQL Azure Reporting Service, which Microsoft maintains in the cloud. We don’t have to maintain it. We don’t have to deploy it.

So, we’ve just done the deployment, and now what we’re going to go do is go to our app. We’re going to go to the ASPX page behind that tab that we haven’t been able to see yet, and we’re going to actually go implement it. Now, to implement it, we simply go grab the standard reporting services control, drag it out onto the ASPX page, and throw in a little bit of information. We’re going to tell it what sizing to use. So, we’ll say width is 100 percent, size-to-report-content equals true. And then we’re going to tell it –

Whoa, dude.

JONATHAN CARTER: I was going local again, look at that.

DON BOX: So what we’re going to do is say where to go, which cloud service to go use. This service has not been released yet, so this is a demo you cannot do. You can actually go use the caching service imminently; the reporting service is not up live yet. So, we have a very funky looking DNS name there. I promise we’ll have one that’s obscure, but not quite this obscure, by the time we’re done. We also say which particular report to go grab. So, we’ll go get rafiki/top-matched experts.
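The control markup Don is filling in would look something like this (a sketch; the server URL is the service’s funky DNS name and is deliberately omitted, and the report path is taken from the demo):

```aspx
<%-- Sketch of the standard ReportViewer control wired to a remote server.
     The actual ReportServerUrl (the long DNS name) is omitted here. --%>
<rsweb:ReportViewer ID="ReportViewer1" runat="server"
    ProcessingMode="Remote" Width="100%" SizeToReportContent="true">
  <ServerReport ReportPath="/rafiki/top-matched experts" />
</rsweb:ReportViewer>
```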

Now, what we’re going to do is go deploy the app one more time. And really, if you think about it, we’re deploying this app five or six times during this demo, and this very interactive, very immediate experience is actually quite an advance from where we were before.

Notice now that when Jonathan logs in – do not hit sign in – please, please, Jonathan, hands away from the keyboard. Step back. Awesome. Great.

Notice that when we logged in, we didn’t log in with our Facebook credentials, because had we done that we would have gotten the 401. Instead, we’re logging in with our Microsoft credentials, and that’s actually a page that’s getting us into Microsoft IT. Great. We’re on the Microsoft campus, but we didn’t have to be. We could have been anywhere in the world, and we could have used ADFS to get into the Microsoft corpnet with our credentials and get an actual valid set of claims.

So, go ahead, Jonathan, you may now step back. Great. Sign in.

So, now when we log in, we’re going to be logged in with our Microsoft credentials, which means we have that extra claim, the softy claim, which means I will be able to hit the report tab. And drumroll – notice that we get the familiar SQL Server Reporting Services dialog, but it’s not being served up by SQL Server Reporting Services. It’s being served up by the SQL Azure Reporting Service that we maintain in the cloud for you. Got it.

So, what we’ve seen is an app, using the same tools and the same technology that we’re accustomed to using as developers, deployed into Windows Azure using a bunch of platform services: SQL Azure, Service Bus, ACS, the Cache Service, and finally SQL Azure Reporting. Got it? There you go. When you guys get back to a computer, please go write programs against this stuff so you can get the same experience that we just got. And I would like to thank you very much.

(Applause.) Thanks, Bob?

BOB MUGLIA: So, lots of great services that are available for you to work with, and that’s the whole point: how can we make it as easy as possible for you to focus on your application, and focus on what only you can do, which is solve the particular issue and meet the customer need that you have in front of you?

So, we showed how these services can be worked with through standard coding. What I want to do now is talk about some of the great things that are coming, and how we can enable you to get these services and the applications that you’re writing out into the market.

Today I’m announcing that we’re going to make a marketplace available to you, the Windows Azure Marketplace. And in particular, what I want to announce is that we are going to make available starting today the Windows Azure DataMarket. We’ve been talking about the Windows Azure DataMarket in the form of “Dallas” for a little while now, and we’ve been working broadly across the industry to get a large number of data providers up into the Windows Azure and SQL Azure environment, and to bring that out to you. And today we’re announcing that this is generally available. We have over 35 providers – actually now over 40 providers – that are available live today, available for either free download or for purchase in the Windows Azure Marketplace, in the DataMarket aisle of it. And we are working with a lot of other providers that we expect to be online in the coming months.

So, this is an example of how existing applications and whole new applications can be written using new sets of services and tools that you’ve never seen before, and the DataMarket is a great example of that. And all of this data is available in a standard way, through OData, for you to incorporate within your applications. It’s all done in a very consistent fashion.

So, we’re talking a lot about all of these different services. And now we have the DataMarket service available. What I think is really interesting is how we can change the way applications are written in a PaaS world, where you have this broad set of services that you can begin to work with. And, as Don and Jonathan showed, you can do that by writing code to put the services together. But we also realized that we could run a service for you that would allow you to composite these different services together, and essentially define the core, the backbone of an application, as a composite application that is managed through workflow.

So, what we’re announcing today is, with Windows Azure AppFabric, we’re taking and creating something that’s very, very new – a composition service that takes all of these other services that are available within Windows Azure and also available, frankly, anywhere else on the Internet. Because you can actually composite non-Windows Azure services connected together through a consistent workflow, and build the backbone of an application in this shared multi-tenant environment. You’ll still be writing code to actually work with and manipulate those services, but essentially this is a declarative environment that builds the foundation of your application.

And with that, what I would like to do is invite James Conard up to show you how AppFabric Composition Model and Composition Service can change the way applications are written.

James. (Applause.)

JAMES CONARD: Thank you.

Good morning. I’m excited to be here today to show you a demo of the AppFabric Composition Model. Earlier you saw how Don and Jonathan quickly created a Windows Azure application that consumes several new services: they consumed caching, access control, even the new SQL Azure Reporting Service. What they effectively created was a composition of services. And in this demo what I want to show you is how, using Visual Studio, we make it easier to build, deploy, and manage these composite applications.

So, I will just minimize the application they created earlier. And I’ll just drop into Visual Studio here. Inside of Visual Studio 2010, you’ll notice that we’re introducing this new project template for distributed cloud applications. What you’ll also see is that I get this design surface where I can start building these relationships to services.

Now, Jonathan and Don already did the work for me of writing that application, so I’m just going to pull that in here. And you’ll see that in addition to our MVC app coming into Solution Explorer, we’ll also get that application represented here on the composition model surface. Drag and drop that over. And, of course, that application uses a database in the cloud, the SQL Azure database. So, I’ll add that database here on the design surface as well.

And so, many of you in this room, of course, know data-tier applications today, a capability in Visual Studio 2010 where I can represent my database, be it a SQL Server or a SQL Azure database.

BOB MUGLIA: That’s the DAC capabilities that got added in SQL Server 2008 R2, right?


BOB MUGLIA: And SQL Azure now supports that. And here with the AppFabric Composition Service, you could take and just configure that very directly.


And so what I’m doing here is running through the same wizard you’ve already seen in Visual Studio, where I’m actually going out to the SQL Azure database, reflecting on the database, identifying the tables, the stored procedures, all the other database objects. I’m actually pulling that in to have a database model as part of my app as well.

What’s interesting about this, coming back to the composition model, is I can wire up that relationship from my Web application to my SQL Azure database. And what I’m doing here is I’m making that relationship explicit. And you’ll see a little bit later how because I have that relationship model here, we could actually automate some of the things you may do manually today, like setting up firewall rules, like setting up affinity groups, we can manage that in a more automated fashion.

I’ll add in the access control service that you also saw Don and Jonathan use earlier, and I’ll just go into the property sheet, and there you see I can turn on these services directly through the properties as part of Visual Studio, where it will be captured as part of my model. I’ll add in caching, of course, and draw that relationship, creating this dependency of my code on the applications and services I want to consume in the cloud.

Now, what’s interesting, as I draw some of these relationships like caching, is that we can also automate some of the configuration behind the scenes, such as making caching the default session state provider for MVC applications. So, what you see here is effectively the same app we created; what this is doing is making these relationships a little more declarative, or a little more explicit.

BOB MUGLIA: And at development time you’re really designing this essentially for how it will run within the production Windows Azure environment?

JAMES CONARD: Exactly. Now, it would be nice actually to add a little more functionality to the app they created – for example, allowing the community to submit additional nominations for experts that should be in our database. I’m sure there are many folks in this room that should actually be in that database as experts. So, I’m going to drop an AppFabric container onto this design surface. What this container really does is allow me to go and execute workflows in the cloud. This is a scalable, multi-tenant host for running workflows within the Windows Azure platform.

BOB MUGLIA: And that’s a really important point to make. This is designed as a fully scale-out service, so you can run a very complicated workflow within the Windows Azure environment through this AppFabric container; you don’t have to worry about whether the workflow will exceed the capacity of any given machine.

JAMES CONARD: Exactly. You see I’ve just dropped my workflow into the container and wired up that relationship. You also notice that over in Solution Explorer we have a new Windows Workflow Foundation project, same exact project you’re familiar with in Visual Studio today, just using the Windows Workflow Foundation 4.0.

What I’ll do here, since many of you are using Workflow today, is just do add existing item, and I’ll just browse the directory here. And I’ll just go ahead and pull in the XAML file that was created earlier. So, it’s a simple workflow. As we open it up here in the designer, you’ll see that it actually takes in that incoming request for adding experts to the database. We’ll send that off for review and then receive back the confirmation of whether we should accept or decline that given nomination. So, it’s a very simple workflow here.

Now, what I really need to do, though, is wire up this workflow as part of my application. So, I’ll go over here to my home controller for my ASP.NET MVC app, and I’ll just navigate down here to the suggest expert action. Now, one of the important things that we’ve applied as a principle as we’ve been building out this composition model designer is that we don’t want to do code changes as part of this design. We want to leave your code in your control. What we want to do is just allow you to model those relationships so we can automate them as we move forward.

So, what I’ll do here is I’ll just go ahead and take this to full screen, and I’ll just go ahead and type in a little code snippet for starting the approval workflow. And what this does is, you see I’ve taken some of the incoming parameters for my ASP.NET MVC app, and down below here you see I can actually get access to that composition model at runtime. So, this model is not just about drawing relationships within the designer. I can actually access it at runtime to get at my connection string information, my URLs, my services, and even change different aspects, like the metrics for our application, at runtime.

BOB MUGLIA: So, it’s a very dynamic service, and because it’s done through dynamic composition at runtime, what it means is that as your applications running in Azure learn from your customers what works, what doesn’t work, different offers, things like that, you can much more easily modify your application to respond to those needs.

JAMES CONARD: Exactly. So, we have our application here, Bob. And so what I’m going to do is right-click on the distributed cloud app, and I’ll select publish to AppFabric. I get, of course, a visual representation of what I just designed, but I can also go into a table view where I can see exactly each of the services that we’re going to configure within the cloud, along with the default attributes and some of the ones I’ve overridden through the property designer. Now, in an enterprise environment I would probably package this up and hand it over to my operations specialists. They would deploy it and manage it using all of the procedures that they have in place. If I were an ISV, I’d probably give this to my customers. In this case, I’m just going to use the self-service capability in Visual Studio, and I’ll just select deploy and start here.

Now, as I’m doing that, of course, we’re doing a build and compile across all these different projects in the background, but we’re also taking that model and calling out to a new service in the cloud that you just mentioned, which is that composite app service. What this composite app service does is effectively act as a controller. It takes in that model, which defines the relationship between our code and all of the services that we consume, and it goes through and provisions and configures each one of those services – caching, access control –

BOB MUGLIA: Database.

JAMES CONARD: It configures the database as well, exactly. And what you see here is that we get another view of our model, within a portal experience. Now, what I can do from here is drill into that Web application and actually just click on a link to launch that Web app. One of the important things to notice here is that our application started up very, very quickly. The reason we’re able to do that is because that composite app service, or that controller I just mentioned, is also managing a number of instances for us. So, this is just taking instances out of the pool, allowing us to quickly spin up and launch our applications.

BOB MUGLIA: The composition service is designed just like all of the other services in Windows Azure, which is as a fully scale-out set of services that are always available for your applications. In this case what we’re doing is enabling you to put workflow and to put the design of an application within this composition service. But, it’s really consistent with the way all the other services are designed.

JAMES CONARD: Exactly. Exactly. So, I can go over here to suggest an expert, and of course kick off the workflow that we just designed a few minutes ago. We’ll just launch over here to Visual Studio. And inside of Visual Studio I have a quick little test here that will simulate some load against our application. So, I’m going to run some queries against our app and submit some additional nominations for experts, so we can really see how our application behaves and make sure that it’s going to scale to our requirements.

As I go to launch that test, I’ll navigate back over here to the AppFabric portal. And you can see here I get that visual view of our model again, but in this case I actually have an alert. So, what’s happening is that the composite app service is using all that model information to know how to watch our applications and alert us if there’s anything that should draw our attention. In fact, if I scroll down here, you can see there’s a number of different metrics that that service knows about our app, because we’ve explicitly defined those relationships.

I’ll expand ASP.NET requests, and you can see here that by default we were configured to support 5,000 page requests. As our load test is running we’re obviously applying more than that to this particular application. Let’s go ahead and correct it. I’ll drill into the ASP.NET Web app here and I’ll just change this attribute from 5,000. Let’s just go ahead and crank it up to 15,000. Now, as I do that what we’re doing is we’re scaling out our applications to support this additional load, by just taking some more instances out of the pool and continuing to execute our Web application across those instances.

I’ll just navigate back over here to the dashboard, and it looks like in this case that alert has been resolved now that the increased capacity is being applied for our app. So, what you’ve seen here is that we’re really taking platform as a service to the next level. What we’re doing with some of these tools is allowing you, within Visual Studio, to start modeling that relationship between your code and the cloud services you’re going to consume.

We’re also going to start introducing this composite app service, which acts as a controller: it takes in that model, provisions and configures and even monitors those services on our behalf. And then finally, what you saw is how we’re going to bring workflow into the cloud by providing a scale-out, multi-tenant host for running Windows Workflow Foundation workflows within the Windows Azure platform.

BOB MUGLIA: It’s a new way to build apps.


BOB MUGLIA: For the cloud. Great. Thanks, James.

JAMES CONARD: Thanks a lot.


BOB MUGLIA: So, what we’ve seen today is a very broad set of new capabilities that are available in Windows Azure to help you write this next generation of applications.

Now, we’ve talked about a lot of things, and covered a lot of ground. And what we wanted to do was make it easy for you to get a list of all of these new services, and get clarity on what dates they will be available: when they’ll be in CTP so you can first start working with them, when they go into beta, when we’ll begin charging, and when ultimately they’ll be available for GA. So, we’ve put the next milestone for all of these things up on the Web, so you can take a look at that and get all those details.

So that really brings me back to the beginning. It brings me back to where I started, the PDC, the cloud, platform as a service, and of course your applications. With Windows Azure, what we’re doing is enabling you to take your existing applications to places that they’ve never been able to go to before. And we’re allowing you to write a whole set of new applications that in the past you never could have dreamed were possible.

Windows Azure is ready. Today what we’re about at the PDC is giving you a lot of information to make you ready. It all begins now.

Thank you very much. Have a great PDC. (Applause.)