DJ DMZ: Ladies and gentlemen, please welcome president, Server & Tools Business, Satya Nadella. (Applause.)
SATYA NADELLA: Good morning and welcome to TechEd 2012. First of all, thank you so much for being here. I know it takes a lot to travel over the weekend and spend an entire week with us. So, on behalf of all of us at Microsoft, we deeply appreciate your commitment and everything you do, and hopefully this will be a great conference for you.
This is a pretty special event, so I wanted to also have some special welcomes for all the first-time TechEd attendees out there. And one special group that we have here is the U.S. finalists of the Imagine Cup, our student tournament, from Arizona State University. (Applause.) They have built a very cool Web app with a mobile front end that deals with an issue I think all of us care a lot about: how do we get food, perishable food in particular, to shelters and the homeless. It's a fantastic app, and they're going to go to Australia for the global finals. So, good luck to the team from Arizona State.
This is also a very special event. It happens to be the 20th anniversary, the 20th TechEd. I had the good fortune of being here in 1993 for the very first TechEd in Orlando. I remember it was one year into my Microsoft career, and I remember coming in one night, and I was immediately recruited I think to stuff the conference bag all night. And also, if I remember right, it was sort of probably the first and last time I had to room with people I didn’t know in a hotel room, because we just completely blew all the forecasts. We had more people show up than we thought, we had lots of sessions added; it was an amazing time and an amazing conference. And here we are 20 years later in 2012 and that excitement continues into a new era.
In 1993, if you look back, it was really the dawn of the client-server era. The Pentium had just been announced, so 32-bit chip architectures and 32-bit operating systems were becoming mainstream, and Windows NT was arriving in that same timeframe. So, it was possible to start imagining that Intel-based servers could run the back end of many of the compute jobs that traditionally ran on minis or mainframes, and with a full 32-bit operating system in Windows NT, that started becoming a lot more real.
The tools innovation, especially with Visual Basic coming online, made it possible for many of us to start building these new applications and really get the first generation of client-server started, and we subsequently saw that revolution play out in a very major way over the last 20 years.
We are at the beginning of another inflection point, a major shift. And whenever we talk about shifts I think it’s best to conceptualize them in terms of application architectures, because applications then drive the infrastructure and the entire cycle around this.
The new era is something that we’ve been referencing and calling the world of connected devices and continuous services. This is a very distinct phase. It’s different from client-server in a variety of ways, and we’ll talk a lot about that.
But the key question is, how much of a departure is it from what we’ve been doing, and I think it is very important for us to approach this as a huge opportunity to reinvent ourselves.
Clearly there are things that we do today that are going to change, some of them fairly dramatically, but the opportunity that we have collectively as a community is to reinvent ourselves to drive even more value. Because if you look at the history of computing, one thing has held true across these generational changes: each shift has created more agility, more innovation, and more value for the business, and that's going to be absolutely true of this next shift as well. We will have to rethink the nexus between end users, application developers, and IT, and how we work that virtuous loop, but at the same time none of us should be confused: the opportunity to add value will only amplify. So, that's really what this entire conference is about, and that's what this talk is all about.
And for us in Server & Tools in particular, the folks responsible for the back-end computing infrastructure inside of Microsoft, it really manifests as a new operating system. Just as in 1993 we were talking about a full 32-bit operating system with Windows NT, we now get to talk about a new operating system and the beginning of a new era.
And anytime you talk about operating systems, two things really pop out. The first thing is the new hardware abstraction. In other words, every time you talk about an operating system its major job is resource management and resource management at the scale of the hardware, and as that shifts you need to sort of re-conceptualize the basic meaning of an operating system.
The new hardware abstraction that we are dealing with is at a datacenter or a multiple datacenter level. So, you’re really thinking about compute, storage, network as a unit and at datacenter scale and multi datacenter scale, and really thinking about resource management with all of that in mind. So, that’s really the modern datacenter for us.
The next element is the application platform. To run these continuous services, both stateful and stateless, you need to be able to re-conceptualize the runtime, the frameworks, and the tools and the lifecycle around the tools so that you can run these services 7 x 24.
So, that really is the gist of what we are going to discuss over the next 90 minutes, the modern datacenter, the modern application framework that make up the cloud operating system, the basic underpinnings for this new era of connected devices and continuous services.
When you talk about the modern datacenter, it’s perhaps best to start with what’s happening at the system level, what’s happening at the silicon, what’s happening to a single blade, a single system, and then a cluster.
The fundamental thing that all of us at this point are tracking pretty closely is the notion that storage, compute, and network are co-evolving. I mean, if you think about the compute power, for sure Moore’s Law is still continuing to work in its full glory. It probably is resulting in more core density versus perhaps single-thread performance, but still we’re able to pack amazing amounts of compute power.
Once you have a lot of compute power, there is not much use for it if you can't also have the IOPS to go with it. And so the revolution in storage, especially around tiering, is in full play. Disk speeds themselves are not getting faster, but falling SSD and Flash costs give us a huge opportunity to rethink the balance between CPU utilization and storage access. Especially if you consider what's happening in the database world with in-memory technology, you can start thinking about applications, application performance, and IOPS per dollar in a very different way.
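The "IOPS per dollar" framing above is easy to make concrete with a back-of-the-envelope comparison. This is a minimal sketch; the per-device IOPS figures and prices are illustrative assumptions for that era, not vendor data:

```python
# Rough IOPS-per-dollar comparison between a 15k-RPM SAS disk and an SSD.
# All IOPS figures and prices below are illustrative assumptions.

def iops_per_dollar(iops: float, price: float) -> float:
    """Random-IOPS capacity delivered per dollar of hardware spend."""
    return iops / price

hdd = iops_per_dollar(iops=180, price=250)       # ~180 random 4k IOPS, ~$250
ssd = iops_per_dollar(iops=40_000, price=1_000)  # ~40k random 4k IOPS, ~$1,000

print(f"HDD: {hdd:.2f} IOPS/$")   # 0.72
print(f"SSD: {ssd:.2f} IOPS/$")   # 40.00
print(f"SSD advantage: {ssd / hdd:.0f}x")
```

Even with these rough numbers, the SSD delivers tens of times more random IOPS per dollar, which is why tiering hot data onto flash changes the economics of the whole stack.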
But one artifact of the CPU-to-storage connection is the network. It's really the era of the fast interconnect between storage and compute, and that's driving a lot of innovation.
And the key to this co-evolution of storage, compute and network is really software control. You really cannot afford to have the control fragmentation, because if you do that then you’re not going to be able to achieve the economic benefits, the agility benefits and the innovation benefits that these new systems at high density can provide.
Now, perhaps you could say, well, that's been true all along. After all, Moore's Law has played out in the past, over the last 10 years in particular, as we have gone to these clusters and blades and software-based solutions. But one of the fundamental things that I believe has changed is services at scale, and this is a very big difference for me personally since 1993. When we were building Windows NT, we didn't have in-house at-scale workloads on NT. Subsequently we got onto a fantastic virtuous cycle: having hit workloads in Exchange, in Lync, in SharePoint, in SQL Server ensured that with each release of our server operating system we were able to get feedback from you, learn continuously, and make the product better and more robust, and that's a testament to all the deployments that we have.
But for the first time now the same kind of cycle of learning is playing out when we talk about Internet scale services. Just think about the depth and breadth of the first-party workloads that Microsoft is running today on a daily basis. We have Xbox LIVE that’s doing some fascinating GPU simulation in the cloud for some of their games. You have Office 365 which is Exchange and SharePoint at scale. You have Dynamics CRM which is a stateful transactional application in the cloud. You have Bing, which is really a big data-applied machine learning application in the cloud. You have things like HealthVault, which is secure transactional consumer applications. So, you have a very broad spectrum. So, we run approximately 200 very diverse workloads across Microsoft.
That diversity is what’s really making us build the right operating systems, the right management stack, the right tools. In fact, perhaps the best way to illustrate it is what happens to us on a daily basis. For example, just in terms of the physical plant we have around 16 major datacenters across the globe, we have around a thousand access points, we have a couple of hundred megawatts of power powering hundreds of thousands of machines. We have terabits of network out of our datacenters. We have petabytes of data. In fact, Bing itself has got approximately 300 petabytes of data. We write something like 1 terabyte of records each day.
Now, you could say all of that is fascinating statistics; what does it really have to do with the infrastructure that we build? The answer is that we are battle-testing every piece of software. In fact, just last week we upgraded all of the Bing front ends to Windows Server 2012 RC. So today, Bing is running the release candidate of the next server operating system on a full production workload.
That type of feedback, where we are constantly able to take the learning internally, is what's shaping the host OS, the guest OS, the frameworks, the tools, and the performance. And that, we believe, is not something you can easily head-fake; you can't just go in and say we've built it for scale without having run it yourself, and I think in the long run that's going to be one of the biggest distinctions.
Of course, none of this matters if you can't scale it down, because it's not as if every deployment of a private cloud or a virtualization instance is going to be at the scale we run. So, the key is for us to be able to take all of that power, all of that learning, and package it up into the smallest of clusters: half a rack, a full rack, or what have you. That's really what our intent is.
When you think about that as the backdrop, there are four key attributes one should look for in a modern datacenter. The first is scalability, and the elasticity to go with it, especially in the context of a heterogeneous set of workloads running in highly virtualized, distributed environments. You want to get utilization up, and without elasticity you're not going to achieve that no matter how much scale you have.
The second one is always up, always on. There’s no point having all of the scale and elasticity without the continuous availability.
The third is shared resources: building for multitenancy from the ground up. Even in your private cloud, the fact that you're running two departments, two applications, means you want to be able to isolate them.
And the fourth is automation, because you can't linearly scale your operations with your infrastructure. That means automation, automation, automation, and it's super important that the system provides the hooks for you to achieve that and lower your costs.
So, that's what really inspired us to build Windows Server 2012. It's an amazing, amazing release. In fact, as we were preparing for this event, perhaps the biggest struggle we had, with a 90-minute keynote and a lot to show, was deciding which features of Server 2012 we even get to demo and talk about. Windows Server 2012 has hundreds of features, and I just wanted to highlight a few of them in the context of this notion of a modern datacenter.
When it comes to scalability and elasticity, the performance gains, the sheer capability gains of the host operating system, Hyper-V 3.0, are just stunning. Just one of them: we now have the ability to support 64 virtual processors and 1 terabyte of memory in a single VM, which means pretty much 99 percent of tier-one SQL workloads can now be virtualized on Hyper-V 3.0. It's a pretty stunning figure, and you'll see a lot more of that.
Always up, always on is something that, again, has been built deeply into the system. The feature I love most is the ability to update a cluster without having to bring down the cluster nodes, so you get continuous availability.
Continuous availability of storage, huge gains in that dimension.
Shared resources. Again, multitenancy both with System Center and Windows Server has been built into the foundation. The ability to have network virtualization, storage virtualization to go with server virtualization is what makes it possible for you to have a fully virtualized environment that is sharable.
And if you have these multiple workloads from multiple departments you can isolate them using policy, you can monitor the resource usage using policies and make sure that there isn’t one workload that takes away all of the resources. So, a lot of gains again when it comes to sharing of resources across the virtualized infrastructure.
And lastly, when it comes to automation and self-service, we have done a lot in terms of expanding the surface area of PowerShell. For those of you who are big PowerShell users, it's actually a pretty amazing release in terms of the cmdlet explosion we've had, so that you can automate pretty much anything in Windows Server: we have 2,400 cmdlets in PowerShell. We have built-in standards-based management, and of course with System Center you have a fully capable datacenter management suite.
So, to show you some of this in action I wanted to invite up onstage Jeff Woolsey from our Windows Server 2012 team. Jeff? (Applause.)
JEFF WOOLSEY: Thanks, Satya. It’s a pleasure to be here.
How’s everybody doing? (Cheers, applause.) Oh, come on! How’s everybody doing? (Cheers, applause.) Awesome. Welcome to Orlando. It’s a big, big, exciting show.
Well, Windows Server 2012 is about making your business more agile. It’s about making your datacenter more flexible, and providing you the ability to extend your datacenter to the cloud securely on your terms. Quite simply, Windows Server 2012 is about providing the best cloud OS.
Let’s start with scale. With Server 2012 we want to virtualize those workloads considered non-virtualizable, workloads that require dozens of cores, hundreds of gigabytes of memory, are likely SAN attached and with exceptionally high IO requirements.
Well, today, we want to redefine performance, we want to redefine scale. So, today, with Server 2012 and Hyper-V we’ll support up to 320 logical processors per server, up to 4 terabytes of memory per server, and up to 64 virtual processors per VM.
In addition, you can see I've got 100 gigabytes of memory allocated to this virtual machine, but we'll support up to a full terabyte of memory for a VM. And whether this VM has been allocated 10 gigabytes, 100 gigabytes, or a full terabyte, it still costs the same.
In terms of virtual storage our virtual disks now support up to 64 terabytes per virtual disk. That’s 32 times anyone else in the industry.
We also support the largest clusters with 64 nodes and up to 4,000 virtual machines in a single cluster.
Now, if I give you a virtual machine with 64 virtual processors and a terabyte of memory, quite honestly that's irrelevant if I can't also give you the IO to actually keep those workloads and those resources busy.
So, let’s take a look at Hyper-V IO performance. Now, before I do, let me tell you a little bit about the hardware I’m about to show you. This is an industry-standard four socket server. It’s got 80 logical processors, 256 gigabytes of memory. It has five LSI HBAs attached to 40 SSDs.
Now, you may be thinking, hold on here, why is he using SSDs, why is he not using traditional spinning media? Well, for this next demo we certainly could have used 15k SAS disks. However, we would have needed 4,000 disks in 10 full-sized 42U racks full of disks. So, we decided to opt for SSDs instead.
Let me show you. I'm going to switch on over here to Iometer. Iometer is an industry-standard tool and, in fact, the configuration and test that I'm going to run is industry standard: 4k random IOPS, the hard stuff, not the easy sequential stuff, with a queue depth of 32 and 40 concurrent threads.
By the way, the guys over at VMware claim that they can deliver up to 300,000 IOPS from a single VM.
Well, let me show you with Windows Server 2012 we’re delivering 985,000 IOPS from a single virtual machine. Let me say that one more time: over three times more IOPS from a single virtual machine. (Cheers, applause.)
And let me be very clear: This is not a Hyper-V limitation. We can go much, much higher. This is as fast as the hardware will go. We couldn’t put any more host bus adaptors in this machine.
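The demo's headline numbers are easy to sanity-check. The per-disk figure below is an assumption (~250 random 4k IOPS is a typical ballpark for a 15k spindle), as is the rack density:

```python
# Sanity-check the demo math: how many 15k SAS disks would roughly match
# 985,000 random 4k IOPS, and how does that compare to the quoted 300,000?
target_iops = 985_000
iops_per_15k_disk = 250   # assumed ballpark for a 15k-RPM spindle
disks_per_rack = 400      # assumed density for a 42U rack of disks

disks_needed = target_iops / iops_per_15k_disk
racks_needed = disks_needed / disks_per_rack
speedup = target_iops / 300_000

print(round(disks_needed))   # ~3,940 disks, i.e. "about 4,000"
print(round(racks_needed))   # ~10 racks
print(f"{speedup:.1f}x")     # ~3.3x, i.e. "over three times"
```

Under these assumptions the arithmetic lines up with both claims: roughly 4,000 spinning disks in about 10 racks, and a bit more than a 3x margin over the 300,000 IOPS figure.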
So, with support for up to 64 virtual processors, a terabyte of memory, and nearly a million IOPS in a single server, we can run over 99 percent of the world’s SQL Servers.
Now, while we're talking about storage, let me talk about some of our other investments there. For example, in Windows Server 2012 we've made some huge investments in file-based storage, such as the new scale-out file server. Because the scale-out file server has an active-active architecture, adding more nodes inherently means more scale, but it also means more continuous availability, because I can remove or add nodes without any downtime. It's an extremely powerful new capability in Server 2012.
And then there's what we've done with SANs. Quite honestly, this is earth-shattering: offloaded data transfer, or ODX. With offloaded data transfer, Windows Server 2012 can leverage the native capabilities of your SAN array.
Let me show you. In this first example I'm going to copy a 10 gigabyte file using non-ODX storage. You can see that from a CPU standpoint I'm getting somewhere between 35 to 40 percent CPU utilization. In terms of networking, you can see we are saturating Ethernet at about 78 megabytes per second; not too bad, but in this case the server is performing all of the copying, reading from the source and writing to the destination over and over.
Well, now on the split screen let me actually copy the same file, 10 gigabyte file, using ODX enabled storage.
Now, make sure you don’t look away. I’d hate it if you missed the demo here.
Again this is a 10 gigabyte file, and what are you seeing? You’re seeing that I’m copying and getting over a gigabyte per second. I’m copying a 10 gigabyte file in 10 seconds. (Applause.) Awesome ODX-enabled storage from our partners over at EMC. And by the way, there was no network utilization at all, because this was leveraging the capabilities in the array. When you couple ODX with a bunch of our other enhancements in storage, virtual fiber channel, cluster enhancements for replication and synchronous replication, as well as a swath of other capabilities, quite simply if you own a SAN, Windows Server 2012 is a no-brainer, it’s really that easy.
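The throughput claim in the demo is straightforward arithmetic: a 10 gigabyte file copied in 10 seconds versus a conventional copy at about 78 megabytes per second. A quick check (sizes in MB, using 1 GB = 1,024 MB):

```python
# Compare the ODX copy (10 GB in ~10 s) with the non-ODX copy (~78 MB/s).
file_mb = 10 * 1024   # 10 GB file expressed in MB
odx_seconds = 10
non_odx_mb_s = 78

odx_mb_s = file_mb / odx_seconds          # ODX throughput
non_odx_seconds = file_mb / non_odx_mb_s  # how long the slow path would take

print(f"ODX throughput: {odx_mb_s:.0f} MB/s")         # 1024 MB/s, "over a gigabyte per second"
print(f"Non-ODX copy time: {non_odx_seconds:.0f} s")  # ~131 s versus 10 s
print(f"Speedup: {odx_mb_s / non_odx_mb_s:.0f}x")     # ~13x
```

So the array-offloaded copy is roughly an order of magnitude faster, while also consuming no host CPU or network bandwidth at all.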
Now let's talk about networking. In Server 2012 we made a huge investment in networking, for example network virtualization. With network virtualization I can have multiple companies, disparate organizations, all sharing the same physical fabric with secure multitenancy. In addition, we have features like Windows NIC Teaming, which brings load balancing and failover (LBFO) into the box, and literally dozens of new capabilities when it comes to Windows Server networking.
In terms of the Hyper-V switch we’ve done a tremendous amount of work in the Hyper-V switch for performance, security, manageability, automation, and one of the things we did was we knew that we couldn’t be all things to all people. So, what we decided to do was also make it open and extensible.
So, for example, I’m going to go here to the virtual switch manager and you can see I’ve got the Cisco Nexus 1000V for Hyper-V running right here.
Now, in this case you can see in the split screen I've got a couple of VMs over here that are using quite a bit of bandwidth, and my network admins like to keep an eye on their network utilization and want to apply a QoS port policy. No problem. Because I'm using the Cisco Nexus 1000V, I can manage this in the same way I manage my other infrastructure. In this case I'm simply going to use the Nexus 1000V admin tool, modify the port profile, and apply a QoS port profile. And like that, I'm applying QoS port profiles on my virtual switches just like I can on my physical switches.
Now, this is just one example of the ecosystem we're creating with the Hyper-V extensible switch. While you're here, make sure you check out the tech expo. We've got a lot of partners plugging into the extensible switch, and, in fact, there's a lot of excitement in the industry around where networking is going right now, including a lot of people embracing software-defined networking.
Now let's talk about automation. One of the best ways to reduce costs and improve efficiency at scale is pervasive automation. With Windows Server 2012 we're dropping in a V12 of a world-class automation engine: PowerShell. With over 2,400 PowerShell cmdlets, everything you want to do in Windows Server can now be automated. (Cheers.) Got a winner out there.
For this next demonstration, I wanted to show how we're coupling PowerShell with site-migration capabilities, using one of our hotly anticipated features, Hyper-V Replica.
So, in this case I’m going to bring up System Center, and I’m going to start my runbook. Now, what I’ve been doing is I’ve been using Hyper-V Replica to replicate virtual machines from one site to another. Now what I want to do is I actually want in a systematic and methodical way to actually bring them up on my new site.
So, first, I’m going to type in my destination host, going to type in my source host, and I’m going to provide the server that’s actually going to do the runbook automation, and I’m going to click start.
Now, while that's happening, let's move over to instances and view the details, where you can get a high-level overview of what's actually happening. Through runbook automation, System Center is using PowerShell as the automation engine to make sure that everything is in place to begin migrating my workloads from one site to the next. It will then bring up my virtual machines in the correct order, with dependencies configured in the runbook. All of this is very cool, brought to you by Hyper-V Replica and, at its core, PowerShell.
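Bringing VMs up "in the correct order with dependencies configured" is, at its core, a topological sort over the dependency graph. A minimal sketch of that logic, with entirely hypothetical VM names and dependencies (a real runbook engine would call into the hypervisor instead of printing):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each VM lists the VMs it depends on.
# A runbook would only start a VM once everything it depends on is up.
deps = {
    "sql-01": [],                    # database tier comes up first
    "app-01": ["sql-01"],            # app tier needs the database
    "app-02": ["sql-01"],
    "web-01": ["app-01", "app-02"],  # web tier needs the app tier
}

# static_order() yields each VM only after all of its dependencies.
for vm in TopologicalSorter(deps).static_order():
    print(f"starting {vm}")  # placeholder for a real start-VM call
```

With this ordering, sql-01 always starts before either app VM, and web-01 always starts last, which is exactly the guarantee the runbook provides during a site failover.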
Now, what if you don’t want to just migrate your workload, but what you’d really like to do is extend your datacenter to the cloud using capabilities and capacity from a provider. Well, let me show you how we do that with Windows Server 2012 and System Center 2012 SP1.
You can see here I've got a view of my on-premise clouds: dev cloud, infrastructure, preproduction, and production environments. But what I'd like to do is connect to my service provider. So, I'm going to go to connections, which is where I broker connections. Here I'm going to click on connect, and you can see I have the option to connect to another VMM server or use SPF, the System Center Service Provider Foundation. This is a powerful new capability that allows me to take on capacity provided to me by my service provider.
Now, in this case I’ve bought capacity from Orlando Hosting, and they have provided me a URL. Of course, I need a certificate for pretty obvious reasons, encryption. Type in my password and click OK.
And what you're seeing, in a few easy steps, is System Center brokering the connection with Orlando Hosting so that I can consume that capacity and manage it under my control. In fact, if I go back over here to clouds, what do you see? Orlando Hosting now appears in my console alongside my other clouds running on-premise. Very cool stuff here.
So, in just a few moments we've flown through a whole bunch of technologies and capabilities, but one thing I want to be very clear about: quite honestly, I haven't even scratched the surface of what's new in Server 2012, with massive scale, massive performance, complete VM mobility, the only virtualization platform that allows you to live migrate servers with nothing but an Ethernet cable, PowerShell automation, offloaded data transfer, and the ability to extend your datacenter to the cloud with System Center. Quite simply, these are just a few of the dozens of reasons why Windows Server 2012 and System Center 2012 are the best way to cloud-optimize your business.
Thank you very much. (Applause.)
SATYA NADELLA: So, hopefully you got a quick glimpse of the power in Server 2012. It’s a fantastic release, and I think over the course of this conference you will get a chance for many drilldown sessions on the hundreds of features in Server 2012.
Ever since the beta there has been tremendous traction with our customers: over 300,000 customers used the product in the first three months after the beta. And since the RC launch, in the first week we had 80,000 customers download the RC. Internally, as I said, we have the RC deployed in production.
We have 150 customers in our TAP program whom we work with very closely, and many of them have already taken the RC and put it into production workloads that we support, so a tremendous amount of progress.
So, let’s roll a video with some of the comments from the customers who have been using Windows Server 2012.
SATYA NADELLA: And as Jeff was mentioning, we are building Windows Server 2012, of course, to power your datacenters and your private clouds, but we're also building it with a broader ecosystem in mind. We want to make sure that there is a consistent world of Windows Server across the service provider, Windows Azure, and your datacenter, and that's one of the most important technical and strategic goals for us at Microsoft.
When we say consistency, the key is to ensure that identity, virtualization, management, and development are consistent across the service provider cloud, your datacenter, and Windows Azure.
And in that context, last week we announced a major revamp and a set of new features for Windows Azure, and one of them was our infrastructure as a service. With the launch of infrastructure-as-a-service capabilities in Windows Azure, you now have virtual machine portability with no changes to format: the ability to take an app and a workload and move it transparently from your own private cloud to Azure, to a service provider, and back, with no lock-in.
So, I wanted to give you a feel for some of the new capabilities in Windows Azure and the infrastructure as a service, and to do that I wanted to introduce up onstage Mark Russinovich from our Windows Azure team. Mark? (Cheers, applause.)
MARK RUSSINOVICH: Good morning, everybody.
So, I know most of you like automation, especially with PowerShell, but we wanted to make it so easy to create virtual machines in Windows Azure that even your boss can do it. And so for that I’m going to switch over to the newly designed and Metro-optimized Windows Azure portal.
Now, there’s been an explosion of a certain class of devices and a specific device that I suspect many of you are using. So, we wanted to make sure that this new portal works on all operating systems and all browsers. So, to answer your question that I know you’ve got in your head, yes, it will look great on your Nokia Lumia 900 Windows Phone 7 device.
Now, here you can see all the resources that we can manage in the portal, including virtual machines. We've got a consistent experience for creating new resources, which you can get with this new button down here. I'm going to show you how easy it is to create a new virtual machine. When I select this menu item I've got two options. One is quick create, which lets me pick the most common options in a single dialog box and create a virtual machine in one click. But I'm going to show you some of the more advanced features we've got with this release by picking the gallery option.
And so you can see the list of platform images I can select from, and no, we haven’t been hacked, we’ve actually got Linux up here in Windows Azure. We’ve worked closely with these companies making these distributions to support them on Windows Azure. But for this demonstration, of course, I’m going to pick the best operating system in the list, the one that is so good that it doesn’t even need an icon, and that is Windows Server 2012. (Laughter.)
Press next here, I’ll give it a sample name, a password that will make it happy. I hope I’ve got those matched. And press next.
Now, in this dialog I get asked whether I want to create a standalone virtual machine or to add this virtual machine to an existing virtual machine to create a cloud service that consists of multiple virtual machines. I’m going to go ahead and select standalone virtual machines, give this virtual machine the DNS name that I can access it over the Internet with, and now I pick the storage account into which I want the operating system VHD to be placed, and that’s because we’re using Windows Azure storage underneath to store VM VHDs.
I click to use an automatically generated storage account and it will create one for me, or I could pick one that I've already got, and then I get this dropdown here that asks me where I want to place this virtual machine. I can pick from any of the datacenters that we've got Windows Azure running in across the world, or I can even pick to deploy it into a virtual network, which is a VPN gateway subnet up in Windows Azure that connects back to corpnet. So, I'm going to pick the VPN network that I've already got up there, the corp network, and press next.
And this final dialog lets me pick some of the scale-out and high availability features that we’ve got that I’ll demonstrate actually in a few minutes. So, I’m going to go ahead and skip that slide and just press next to go create that virtual machine.
Now, as Satya said, one of our goals was to make it easy to migrate virtual machines back and forth. So, let’s say that you’ve got an application running on-premise in your Hyper-V private cloud like this one right here. It’s a simple events manager app that’s built on IIS and SQL Server, and I want to take that application and migrate it up into Windows Azure. Because Windows Azure virtual machines are based on Windows Azure storage, I can simply upload them to Windows Azure storage blobs and then create virtual machines from them.
But what makes migrating virtual machines even simpler is System Center 2012 App Controller, which Jeff introduced you to. I’m going to go over to the App Controller dialog here, where you can manage private clouds and you can also manage Windows Azure. I’ve got that application running here on my private cloud. If I go to the virtual machine menu entry you can see there it is, events manager local.
I previously made a backup of that virtual machine’s VHDs and stored it in a library here, and when I right-click on this and select migrate, I’ll be guided through a simple wizard that will let me push that entire VM with its VHDs up into Windows Azure.
I pick the cloud that I want to deploy to. In this case it will be Windows Azure. Pick the cloud service I want to deploy into. Here I can create a new cloud service or I can add this VM to an existing one. I’ll add it to this one right here, press OK.
Final step, pick some of the options you saw me pick in the portal there with the create virtual machine wizard like the instance size. I’ll pick an extra large. The storage account I want to upload into, so just navigating through my Windows Azure storage accounts. I’ll put it in the migrated VMs container.
And the nice thing about App Controller is this virtual machine actually consists of two VHDs, an operating system VHD and a data disk with a SQL Server on it. App Controller knows that and will automatically migrate both of those VHDs up when I press the deploy button.
But how about the reverse? Let’s say that I’ve got an application running up in Windows Azure and I want to bring it back on-premise, maybe for disaster recovery, maybe for backup, maybe I want to just take a look at it. I’ve got that events manager application. I’ve already migrated it up with app controller. You can see it here in this virtual machine list. When I click on it, here you can see a virtual machine dashboard we’ve got. We’re actually having the infrastructure collect performance information and surface it up in the portal, including CPU usage, network usage, in and out, as well as disk I/Os.
Here in the URL you can see the DNS name assigned to that virtual machine. And just to prove it’s actually the same app let’s log in, and we’ll see the same exact interface we saw, because it is the same virtual machine with the same VHDs.
You can see down here there are the two disks that we migrated up sitting in Windows Azure storage. Because it’s Windows Azure storage, not a separate storage service, it uses the same storage APIs, and that means that off the shelf storage utilities for Windows Azure just happen to work against it.
I’ve got an example utility here, Cloud Explorer, and if I go take a look at the migrated VHDs that I’ve copied up into the cloud, the data disk and the OS disk, I can simply copy and paste them into a backup folder up in the cloud. Because this is a copy-on-write copy, it’s almost instantaneous. Then I can copy and paste those again to download them to my local system, and at that point I can just use Hyper-V to create a virtual machine with those VHDs and get the application back up and running on-premise.
So, that’s a fairly simple application. How about serious enterprise applications like ones that are built on SharePoint and Active Directory? We also have features that support those and we’ve got people that are already building those kinds of applications.
So, to show you one, I’m going to invite onstage the senior vice president and chief information officer at AFLAC, Mike Boyle. We can’t resist, we’ve got to take this opportunity to welcome him with a big AFLAC. So, why doesn’t everybody give him a big AFLAC?
AUDIENCE: AFLAC. (Applause.)
MICHAEL BOYLE: Good morning, TechEd!
Just a few seconds on AFLAC. We were founded in 1955, and we’re the No. 1 provider of guaranteed renewable insurance in the United States. In Japan we’re the No. 1 life insurance company in terms of the number of individual policies that are in force. And globally we provide service to over 50 million different customers.
So, let’s talk about Windows Azure and AFLAC. We’ve been collaborating with Microsoft product teams and their strategy consulting practice to evaluate Azure for our Web services. There are really three primary drivers for what we want to do inside of AFLAC. Agility is the first one. We want to be able to rapidly deploy computing services faster than we possibly could on-premise. The second is flexibility, having the ability to add computing power on the fly for the peak activity in enrollment periods that we have during the year. And last of all, performance. We’re anticipating that Azure services will give us the ability to provide a consistent user experience to the AFLAC user base by virtue of Microsoft’s geographically dispersed cloud datacenters.
So, what we’d really like to do right now is take you through an example of using AFLAC.com. So, Mark, you want to fire it up?
MARK RUSSINOVICH: Sure.
I’m going to switch over here to the AFLAC application running up in Windows Azure.
MICHAEL BOYLE: So, this is AFLAC.com. Any of you can go and utilize it any day. You can go and find out information about AFLAC policies.
But let’s say you wanted a quote. You’re a business owner and you want to get some information.
So, what is happening with this particular application, it’s built on SharePoint 2010 and SQL Server 2012, and it’s hosted on virtual machines running in Windows Azure today.
So, basically we go over here and we’d select some products that we want to see. So, let’s select accident, cancer, and dental, I guess. We go over to the next page; here is where you would fill out your personal information. We’ll move on by clicking next, and now we’re going to pick a time that you want to be contacted. It could be right away, but let’s say you want to wait until the keynote is finished today. So, we’re going to choose 10:30 and go ahead and hit next. What happens at this point is the contact information is fed from the cloud back through a secure VPN into our datacenter, where it feeds into our AFLAC lead system. It’s then fed into our call center’s auto dialer for the time the person wants to get their call back, and at 10:30 that call goes out and connects them to one of our contact center associates.
MARK RUSSINOVICH: So, let’s take a look under the hood to see how that application is built. I’m going to switch back to the portal here, and we’ve got the virtual machines that make up that cloud service. I’m going to sort them by name, and now we can see the tiers pretty clearly by the naming convention we’ve adopted, starting with the SharePoint application servers here. Then we’ve got two SQL Servers. They’re mirrored, and so we’ve got a SQL witness for failover. Then we’ve got four front ends for scale-out. We’ve got domain controllers, two of them up here in the cloud, because SharePoint requires Active Directory. And then finally we’ve got a System Center Operations Manager server monitoring the whole thing.
Now, the reason that there are at least two instances in each of these tiers is availability, scale-out, or both, and we can see both of those in action with the Web front end when I click on one of them.
If I go to the endpoints tab we can see that this HTTP endpoint is actually load balanced, and it’s of course load balanced with the other three SharePoint front ends as well, so customer requests coming in get spread across the front ends for scale-out.
We also don’t want that application to go down if there’s a single failure in the datacenter. For example, if a top-of-rack switch fails, we don’t want that application to come crashing down. So, we can instruct the infrastructure to spread the application’s instances across what we call fault domains, the single points of failure in the Azure datacenter, by putting them in what’s called an availability set. You can see I’ve got several availability sets here for this application. This one is the front-end availability set, with the front-end instances in it, and that means if a switch fails in the datacenter only half of those instances will be affected; the other half will continue to operate and serve customer requests, and the application will remain online.
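The availability-set behavior just described can be sketched in a few lines: spread an application’s instances round-robin across fault domains (racks behind one switch), so a single hardware failure takes out only a fraction of them. The instance names and the two-domain count are illustrative assumptions, not Azure internals:

```python
# Minimal sketch of the availability-set idea: round-robin instances
# across fault domains so one failure leaves the rest serving traffic.

def assign_fault_domains(instances, fault_domains=2):
    """Map each instance name to a fault domain, round-robin."""
    return {name: i % fault_domains for i, name in enumerate(instances)}

def survivors(assignment, failed_domain):
    """Instances still serving traffic after one fault domain fails."""
    return [n for n, d in assignment.items() if d != failed_domain]
```

With four front ends across two fault domains, a switch failure in either domain leaves two instances online, which is exactly the "half will continue to operate" guarantee in the demo.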
So, tell us about some of the other applications you’re looking at migrating up to Windows Azure, Mike.
MICHAEL BOYLE: Absolutely, Mark.
We’re looking at Azure’s viability to host the Web properties that service our policyholders, employers, and different types of agents and producers that we have around the world. We believe that Microsoft’s technology and services have the potential to dramatically improve the user experience and give us competitive advantage in the marketplace. We’d like to thank Microsoft for this partnership. We’re thrilled to be one of the first companies that are actually utilizing this particular space.
MARK RUSSINOVICH: Well, thanks for working with us, Mike. And if you’d like to learn more about these features, come to my afternoon session on Windows Azure infrastructure as a service, where we’re actually going to give a few of these AFLAC ducks away.
All right, so I hope to see you this afternoon. Thank you.
MICHAEL BOYLE: Have a good one. (Applause.)
SATYA NADELLA: Thank you, Mark and Mike.
Windows Azure. Hopefully you got a good flavor for the capabilities in Windows Azure. It’s the most enterprise-grade public cloud service. Last week we announced a set of features that we updated as part of our spring wave; we have a fall/spring rhythm with Windows Azure, and we are continuously improving the service. We think we’re really ready for the mainstream of the enterprise, especially with the coming together of IaaS and PaaS.
In terms of the feature capabilities, all the things that we talked about for Windows Server in terms of the release criteria, so to speak, apply to Windows Azure. The first thing in terms of scalability and elasticity, that’s what it’s really been built for at the core. You can scale the virtual machines, you can scale the Azure website, you can scale the cloud services that you build.
In terms of always up/always on, that is of course the underpinning of the Windows Azure design. The availability set feature that Mark demoed is something you inherit even for the IaaS infrastructure, from the core underlying storage, network, and compute and the way they’re constructed, so that it’s resilient to hardware failure or network failure. Shared resources: you can make Windows Azure a seamless part of your datacenter, and the network virtualization capabilities are what make that possible. And, of course, you can automate everything. Mark showed a lot of the capabilities in the management portal, but everything is also exposed through PowerShell and APIs, so that you can automate it and make it part of your own management suite.
So, we think Windows Azure is really ready to take some of those very mission critical workloads and use them on the public cloud. And hopefully you’ll give that a try as we’re in the early access program for infrastructure as a service.
So now I want to switch gears. We talked about the modern datacenter. We talked about both Windows Server 2012, what it powers in other datacenters with service providers, as well as what it does in Windows Azure. After all, Windows Azure runs on top of Windows Server 2012. Now the idea is, what about the apps? How do we talk about the next generation of apps? And perhaps the best way to get a feel for the modern app is to look inside it and try and get a feel for what is it that is going into all of the application development that many of you are involved in.
One of the first key attributes is this notion of being very personal. It’s tied to your core identity, so something like Active Directory and core identity becomes very critical in application development, because you want that identity, and the notion of a personalized experience, to be pervasive across all of the various devices you use to access that particular application. So, personal app experiences are something that all of us are dealing with.
The second one is social. You’re building in all of the social semantics: things like sharing are not a bolt-on but are built into the applications, and you go far beyond just sharing email or sharing links to having things like follow and like, so all of the semantics of social are built into your application.
And lastly, you want to make the application intelligent. In fact, take even just the social features: once you have opted in and friended somebody in, say, the Bing experience today, bringing in all the photos associated with your friends and making them part of the result set from a search perspective is the kind of experience we will build even for our corporate applications. That means we’re going to reason over large amounts of data and generate recommendations, auto-suggest, and features that make your applications that much more powerful.
So, this coming together of personal, social, intelligent applications is what all of us think we are really going to have to do as we reinvent the applications for this cloud operating system. Now, not only do you want to build these applications, you want to rethink how you manage the lifecycle around these applications. So, the build, measure, learn loop of how development and operations come together also requires a fundamental rethink. And some of the innovation in Visual Studio and System Center that we have done in order to enable that new lifecycle is, again, going to be very, very important.
And so, when we think about what our core platform and tools enable you to build for these mobile, social and data-driven applications, we think about it in a couple of different layers. At the foundation of any application platform is a rich set of runtime services. With the combination of Windows Server and Windows Azure, you have the full spectrum of services: storage, compute, Web, media services, and a bunch of middle-tier services like the cache and the service bus. Above that you have a rich set of frameworks: .NET 4.5, an amazing set of features for building this new type of modern application, with async all the way to caching on the client. And then, on top of that, you have this new toolset: Visual Studio 2012 is a fantastic toolset for building these Web and rich client applications for the world of connected devices and continuous services.
And to give you a flavor for how to go about building some of these applications I wanted to introduce both Scott Guthrie and Jason Zander, but Scott is the first person up on stage to show you some of the capabilities of our application platform across Windows Server and Windows Azure.
SCOTT GUTHRIE: What Satya just talked about are some of the great new capabilities that we’re going to be shipping this summer. I’m now going to spend a few minutes walking you through how you can build an application with them. Specifically, we’re going to build a mobile Web application that we’re going to host in the cloud, and we’re going to build it using ASP.NET, Visual Studio 2012 and Windows Azure. We’re going to get started here inside the new Visual Studio 2012 RC that just came out, and we’re just going to build a new project.
And I’m going to pick the new ASP.NET MVC 4 application project template. One of the cool things about ASP.NET MVC 4 is that it comes with a bunch of new capabilities, and one of the really nice new features is built-in support for building mobile Web applications. If I just pick the mobile application template, this is going to give me a project that has all the files I need in order to build a Web application that works on any type of phone or tablet device. And I’m going to customize it here a little bit. We’ll just say hello, and I can run it in a desktop browser, or, another nice thing about VS 2012, it allows me to plug in new emulators and new browsers, and I’m just going to launch this here using a third-party emulator to see what the app looks like.
You can see here it’s a very simple app. It just says “Hello TechEd,” customized. But it has a nice navigation and animation UI that works great with a variety of different phones and tablets. So, I’ve got a Hello World app going here. That’s building Hello World inside VS 2012. Let’s now deploy this on the Internet, and specifically, we’re going to use Windows Azure.
So, I’m going to drop into Windows Azure’s admin portal. You saw Mark use this a little bit earlier when he was setting up virtual machines; he created a new virtual machine inside the tool. I’m going to create what’s called a website, which is a new feature we just shipped with Windows Azure last week, and specifically we’re going to call this thing TechEd. I can choose what data center I want to create it in, and I’m just going to go ahead and hit “create.” This is going to provision for me a website that I can deploy any Web-based application into.
One thing you’ll notice is how fast it is now inside Windows Azure. We literally just stood up a new website in one of our data centers. If I click in I see a dashboard view similar to the one that Mark showed with virtual machines, but optimized for websites. Now, I don’t have any monitoring data showing up yet, because I don’t actually have my website deployed. So, that’s the next step we’re going to take here.
Now, there are a couple of different approaches I can take to deploying a website inside Windows Azure. I can use standard FTP tools and just copy bits up. One of the cool features that we also support is integration with source control. So, if you have Team Foundation Server running either on-premise or hosted using our online service, you can easily link your TFS account and your projects with Windows Azure, and then any time someone checks source code into that project we can automatically do a build, run your unit tests and, if they succeed, deploy the project automatically into Windows Azure. A really cool feature, and even better, today we’re pleased to announce that we’re taking our TFS online capability out of preview and opening it up so that any of you can sign up for free and try it out. (Applause.)
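The check-in gate just described, build, run the unit tests, and deploy only on green, can be sketched language-agnostically. The callables below are stand-ins for TFS and Windows Azure; this is the shape of the pipeline, not its real API:

```python
# Sketch of the check-in -> build -> test -> deploy gate. The three
# callables are stand-ins for the real TFS build and Azure deploy steps.

def on_checkin(build, run_unit_tests, deploy):
    """Build the project, run its tests, deploy only if everything passed."""
    artifact = build()
    if artifact is None:
        return "build failed"
    if not run_unit_tests(artifact):
        return "tests failed"       # nothing reaches production
    deploy(artifact)
    return "deployed"
```

The key property is that `deploy` is unreachable unless both earlier stages succeed, which is exactly the continuous-deployment promise in the demo.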
What I’m going to do here, though, is just click a fourth link here called “download publish profile.” This is just going to download an XML file that has all of my publish settings, and then I’m going to set this up inside Visual Studio so we can deploy directly within the IDE. And doing that is really easy. All you need to do is right-click, choose “publish,” pick that publish settings file we just downloaded, and click “publish.”
This is going to cause Visual Studio to package up our files, upload them into Windows Azure into that east U.S. data center and basically launch a browser with our app. And so I hit refresh here, publish, and of course the one time I deploy, it won’t work. Let me actually switch now to another machine. There it is. Look, it deployed. So, it’s running here on this account. What you can see here is we have this app running inside Windows Azure. In fact, if we go back to our portal you can see dashboard statistics around the app we deployed, including the number of requests that have hit it. So, I start getting real-time statistics around this deployment and can see exactly what’s going on. And now I have a basic app up and running and working fine within the system.
Now, this is sort of Hello World. Let’s actually make this a little bit richer; I want to extend this application to add some additional capabilities to it. And so specifically we’re going to do a couple of things. One is I’m going to modify this Web app so that it has a nice little HTML form on the front end, and what I want to do is make this a little social and interactive, and basically build an app so that all of you can send me messages and we’re going to display them on the screen. So, we’re going to have this mobile app that you can use on your phone (so get those ready): you can go and hit a URL, type in a message, and then we’re going to handle every message someone posts to this mobile app.
We’re going to write some server code that’s going to take that message, put it in what’s called a message object, and stick it in what’s called a service bus queue. The Service Bus is another cool feature of Windows Azure, a really powerful messaging capability that allows me to link code running in the cloud with code running on premise. And then I’m just going to build another app, which is just going to pull that message down and output it to the screen. And so hopefully I can see all of your messages showing up on my local system here. So, we’ll go back now to the Visual Studio and let’s walk through the code necessary to make that happen.
So, what we’re going to do here is drop back into Visual Studio, and step one is I’m just going to add a reference to the Service Bus. I’m going to use the NuGet package manager to add the assembly, and this gives me all the APIs that I need to program against. I’m then going to update the HTML form in our mobile app; I have a nice little form here that lets people post. I’m going to get rid of some of the default actions in my controller and replace them with a little bit of code that uses the Service Bus APIs we just imported. Specifically, every time someone posts a message, this code creates a message object and sends it off into the Service Bus queue. And then last but not least I’m going to update my Web.config file and put in a connection string, which is just the connection my application uses.
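The messaging pattern in this demo, a web front end that enqueues message objects and a console app that drains them, can be sketched with Python’s standard `queue` standing in for the Service Bus brokered queue. The real .NET Service Bus API differs; this only shows the producer/consumer shape:

```python
# Sketch of the demo's pattern: the MVC controller enqueues, the console
# app on the laptop dequeues. queue.Queue stands in for the Service Bus.

import queue

message_queue = queue.Queue()

def post_message(text):
    """What the controller does on each HTTP POST: wrap and enqueue."""
    message_queue.put({"body": text})

def drain():
    """What the local console app does: pull and return every message."""
    received = []
    while not message_queue.empty():
        received.append(message_queue.get()["body"])
    return received
```

Decoupling the two sides through a queue is what lets code in the public cloud feed code running on-premise, since neither end needs a direct connection to the other.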
Now, I could run this locally, but let’s be bold and run it remotely. One of the things you’ll notice that we now support inside VS 2012 and inside Windows Azure is what we call incremental deployment. What this allows me to do is see the differences between the files locally and in the cloud, so rather than redeploy an entire application when I want to ship an update, I can just go ahead and hit “redeploy” there, and it will deploy only the incremental changes to the application up to the hosted cloud environment. And now it actually works in the cloud.
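The incremental publish idea can be sketched as a file-hash comparison: fingerprint the local files, compare against what the server already has, and upload only the changed or new ones. The filenames below are illustrative; how Azure actually computes the diff is not specified in the demo:

```python
# Sketch of incremental deployment: upload only files whose content
# hash differs from the copy already in the cloud.

import hashlib

def fingerprint(files):
    """Map filename -> content hash for a dict of {name: bytes}."""
    return {name: hashlib.sha1(data).hexdigest()
            for name, data in files.items()}

def changed_files(local, remote_hashes):
    """Files that need uploading: new, or different from the remote copy."""
    local_hashes = fingerprint(local)
    return sorted(n for n, h in local_hashes.items()
                  if remote_hashes.get(n) != h)
```

For a one-file edit to a large site, this reduces the publish to a single upload, which is why the redeploy in the demo finishes so quickly.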
So, I can now say, “Hi, Scott.” This is going to connect to our Service Bus and post a message, so now I should have a message waiting for me in a queue. And to read it from the queue I’m just going to run this application here, a command-line console app running on my laptop, which pulls the messages from the queue and displays them on the screen. So, when I run this now, there we go, someone already found the URL. How wonderful.
So, what you can do here is, on the left-hand side I’ve got the app targeting mobile devices running in the cloud. I can say, let’s see, hello to the guy in the next queue. We can make it interactive. And the cool thing is you can all participate in this, too.
So, pull out your phones and go to MessageScott.net. You can go ahead and run this app as well. We’ve basically connected this system up: we’ve got an app in the cloud that anyone can try sending a message to. We’ve built it in only a few minutes. (Laughter.) We’ve built it in only a few minutes. You can run it anywhere on any device. And the cool thing is we’re also showing connectivity between the public cloud and the private cloud. There’s no end of fun you can have with this application. And you can scale it out anywhere.
So, a simple, fun demo. Hopefully it shows off some of the power of what you can do with Windows Azure. The cool thing is all the features I showed here you can do now, all the bits are available. You can download and sign up for free, and build all these things on your own. We’ve got a lot of great sessions. I’m going to be doing one a little bit later this morning where you can learn more.
Now, we have a large feed of very interesting comments coming in. I’m going to bring on Jason Zander, and he’s going to show you how you can build even richer device applications that take advantage of it.
So, here’s Jason.
JASON ZANDER: Thanks, Scott.
Good morning, everybody. So, we just saw a great example of building some fantastic cloud functionality in the backend. Make sure you keep that website up, because we’re going to keep using that.
The next step we have, once we get the cloud services up and going, is to hook them up in all sorts of interesting ways to all these clients that Satya has talked about.
Now, I’ve got a couple of devices I want to dig into here. The first ones right here: I’ve got a couple of ARM tablets running Windows 8 with Metro style, and these are really dev kits. The little barcode sticker says CLR, so if the CLR team, my team, is wondering where these two machines are, they’re in Florida. So, we’re going to show these off.
Now, what I’ve got is Visual Studio 2012, the release candidate, right here. As you may have noticed, we got a lot of great feedback around the beta, and some of you gave us good feedback on the UI, so I appreciate that. One of the things you will see here is that we’ve added quite a bit of color back into Visual Studio 2012, a popular request.
We’ve done a few things with the UI, too. We’ve got some additional color here. You see you can actually drill down to symbols, and those sorts of things. There’s a lot of color on the outside. We’ve also gone through and made it easier to find docking windows, and things like that. And we’re still maintaining the focus on the core portion, which is your code, your forms experience, those sorts of things as well.
Well, let’s go ahead and build this application. So, I’ve got my ARM device over here. And I’ve got Visual Studio 2012, and I’m in a Metro application. In this case a C# application. And you see I’ve got exactly the same code that Scott just showed working from the Web, but now we’re inside of the client code here as well.
Now, if I go up to the remote machines, you can see I can attach. If I go back and look, yes, there’s my two machines. And so, as Scott showed, we can actually attach to whichever one we like. I’m going to go ahead and hit “remote.” And if we show a split screen here, we’re going to go ahead and do the build. We’ll kick off the deploy. You’re going to see Visual Studio on the left-hand side, and you’re going to see the ARM tablet running on the right-hand side. And we’re going to attach with our connectivity.
Of course, we’re going to keep running Visual Studio itself. Now, let’s go take a look at that. Let’s try this one more time. So, on the remote machine, you can see we’re actually running the debug monitor. I’ll tell you what, let’s cancel that and go to the second machine. That’s why we have two.
So, we’ll go to one, and we’ll switch to that. There we go. So, if we can switch the machine on the overhead to 14.8, please. OK, I don’t think this is going to connect either. I’m going to cancel this one out. What we’ve got is, with the remote debugging, let me just talk about this for a minute and tell you. We’ll stop this. We’ll go ahead and terminate that.
OK. Now, what I will do is, I will back up here, and we’ll take a look at the simulator machine that we’ve got right here. So, we’ll go ahead and launch this application. Let me set a break point though over here and run.
Now, the application that we’re actually running, again, is connected up, but we also have the simulator option; the remote option is there to let us very easily connect another device. And the cool thing about this is, we’ve got our messages coming in across this. We can do a nice Metro interface, so we’ve got all the tiles set up to connect and pull back different information, and for each one of those we can go back and pull individual items by country code and those sorts of things. When I hit my break point, of course, you’re going to land inside a normal break point with the call stack and all the rest of the stuff.
In this case, we’re in C# and we’re running against the simulator. The simulator gives me all the debugging capabilities that I have, since it’s actually running on the same machine, so I can get all that content. If I hit F5 and go over here, the simulator also gives us additional settings. You probably don’t have every single type of configured machine, and some of these screens are much, much smaller, so with those settings I can change the resolution, make it really large, and look at the layout and those sorts of things as well.
Now, the next thing I want to do is talk about other types of clients. That was an example of building a Metro style application, but we also want to be able to hook up other types of applications. Many of us here are probably building line-of-business applications, things that have lots of data, lots of screens, maybe used as internal apps, and those sorts of things.
Let me show you an example of that with Visual Studio LightSwitch. LightSwitch is a development environment that we’ve created to make it really simple to create the type of application that you see here. You can basically just add data, you can add screens. And with the previous release, we could also include things like SharePoint. With Visual Studio 2012, LightSwitch is built in, and we’ve added support for OData as well. Now, OData is becoming a very, very popular feed format; it’s an easy way to get hold of data from all sorts of vendors, and it has actually been submitted for standardization.
One of the things I want to show you here is, we’ll go ahead and do a search on customers, and if I pull up, let’s look at the Chloes (ph) for example. And this application that we have, we’re a moving company. So, we keep track of our employees, we keep track of all of our appointments, those sorts of things. If I go ahead and look at this particular customer, one of the things you’ll see here is I have my appointments, but I’m also using the SAP NetWeaver Gateway, which exposes OData. And so with this I can actually very easily attach my SAP system, and pull both of those in and kind of mash it up in one system running right here on top of my desktop. So, lots of cool support in there, and that’s neat. This is content you can get now with Visual Studio 2012 with the RC.
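The mash-up pattern just shown, joining records from the app’s own database with rows from an external OData feed such as the SAP NetWeaver Gateway, can be sketched as a simple join on a shared key. The field names here are invented for illustration; LightSwitch does this wiring for you:

```python
# Sketch of the mash-up: attach rows from an external feed (a stand-in
# for the SAP OData source) to each local customer record by shared key.

def mash_up(local_customers, odata_rows, key="CustomerID"):
    """Return local records with matching external rows attached."""
    by_key = {}
    for row in odata_rows:
        by_key.setdefault(row[key], []).append(row)
    return [
        dict(cust, external=by_key.get(cust[key], []))
        for cust in local_customers
    ]
```

The design point is that neither data source has to know about the other; the join happens in the client layer, which is what lets one screen show appointments and SAP data side by side.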
Now, we have some additional support that’s coming, and I want to give you a preview of that. In this case, I’ll go ahead and pull up the application itself. That’s a nice application; it works pretty well. Now, the problem that we’ve got with my moving company is that we work with a lot of independent contractors, the folks who actually go out and drive the trucks and help customers. We want to make sure they have access to our internal systems too, because we can give them their job list and all the rest of this kind of stuff. The problem is, I don’t control their IT. I don’t know what kind of systems they’re running. They may just have a smartphone, I don’t even know which type. So, I would like to be able to expose this data as well, but we need a way to do that. So, I’m happy to announce here at TechEd for the very first time, we’ve not told anybody this, that we’ve added HTML support for LightSwitch.
Now, I can go ahead and add an HTML client on the front end out here, and we’ve actually done this already. So, if I expand this client over here, let me go ahead and set the mobile target as the startup client. And then I’m going to go ahead and rebuild the application and run it. In this particular case, again, we haven’t made any changes to the backend systems, so we’re still working with the same database and the SAP system, with all the OData feeds and that kind of stuff coming through, but I want to write this companion application that’s going to run on all sorts of devices.
Now, again, one of the goals for LightSwitch itself is to make this really fast and easy. So, I’m going to go ahead and double-click on the home project, or the home page, and select the list that we’re displaying. And I don’t have to write code if I don’t want to. In fact, I haven’t really written any yet. But if I do want to, I can just go in and write code and add some formatting elements.
Now, I also know that some of you may be carrying around other devices that have weird pictures on them. You probably want to have access to that, too. So, let’s switch over to one last example here. I’m running the iPad emulator, and if I go ahead and hit the same website that we were just showing, I can go ahead and load it. So, if a person does have that kind of device, that same app that we just did the formatting and all the rest of it on, here it is on this device as well.
All right. So, very simple, very easy to get going. I want to go ahead and switch over to the slides here. So, that’s a couple of different ways to build great applications. I can build very rich clients. I can also build standards-based clients; Scott showed you how to do that with websites on the back end, and we have LightSwitch for easy HTML generation. So, you’ve got the kind of standards support that Satya talked about.
Now, the next thing, though: building great software is necessary but not sufficient. All of us in this room, IT professionals as well as developers, have got to work very, very closely together if we want our customers to be happy. I’ve got to build the software, and I’ve got to deploy the software and keep the thing up and running.
So, we’ve got a lot of fantastic tools already with Visual Studio. In fact, just last week Gartner released their Magic Quadrant for Application Lifecycle Management (ALM). And you’ll find that Visual Studio is at the top of the quadrant in that report. So, we’re very excited about that. And that’s for Visual Studio 2010; we have even more coming. We’ve got a lot of great support coming for agile development, for being able to do testing, and testing in production, all those sorts of things. That helps us on the left-hand side.
But, again, all of us in this room have to be working together if we’re going to be successful. So, we also need tight integration, and when we get issues, and they’re going to happen, how do we get that mean time to resolution to be as fast as possible? I want brand new features coming in, and when I have issues, let’s go resolve things quickly.
Let’s go look at some new examples of how we can do that. I’m going to start off, first of all, as a developer. I have a website here, a B2C sort of site, and we sell toys and those sorts of things off of this particular site.
Now, we’d like to go worldwide and have a global presence. So, this is a pretty straightforward app for me to go build. Now, when I’m building the application, I’ve got things like Web tests, which you see here. The Web test allows me to go through and automate the functional testing. In this case we’re testing the checkout cart. So, I want to make sure that when I add items and go through the checkout, it is successful. That makes sense: I’m going to run functional tests in order to validate things. OK, that’s great.
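The shape of that checkout test can be sketched like this. The demo uses a recorded Visual Studio Web test against the live site; this Python version substitutes a toy in-memory cart, so the class and SKUs are hypothetical, but the assertion, add items, check out, expect success, is the same idea.

```python
# Toy in-memory cart standing in for the real storefront.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, sku, price):
        self.items.append((sku, price))

    def checkout(self):
        # Checkout succeeds only if the cart is non-empty.
        if not self.items:
            return {"status": "error", "total": 0.0}
        return {"status": "ok", "total": sum(p for _, p in self.items)}

def test_checkout_succeeds():
    """Functional test: add items, check out, verify success and total."""
    cart = Cart()
    cart.add("TOY-TRUCK", 19.99)
    cart.add("TOY-ROBOT", 34.50)
    result = cart.checkout()
    assert result["status"] == "ok"
    assert abs(result["total"] - 54.49) < 1e-9

test_checkout_succeeds()
print("checkout test passed")
```

Because the test is an artifact checked in alongside the code, the same asset can later be reused by operations for monitoring, which is exactly where the demo goes next.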
Now, once I’m done with that, I want to hand it off to ops, and I want us to be able to get customers on board. So, let me put on my operations hat for a moment. We’ll go over here to my favorite tool, which is System Center. Now, I’m happy to say that we’re going to show you some new functionality coming out with System Center 2012 SP1: Global Service Monitoring, or GSM. GSM gives us the opportunity to do monitoring worldwide, using Azure and all the points of presence that we have around the world, in order to make sure that your machines are up and running.
Now, if I go ahead and click “add” on this wizard right here, you can see I have that same test. The cool thing to notice is I’m using an asset I created as a developer to do the functional testing, and I can share that with my operations partner on my team and say, let’s go ahead and use that to do the monitoring. When I add this guy in, I’m able to go through and say, well, where would you like to run it? If I do a search, I can pick all sorts of points of presence using Azure, again, around the world. Go ahead and add these in. OK, that all looks good.
Now, with this I’m able to very quickly configure and share an asset with my dev team, and now I’m going to do monitoring. For the sake of time, I’m just going to cancel out of this one, so we don’t need to get it going. We’ve already run it, and we’ll shut down that application.
Now, I’m going to start getting a couple of views to monitor the apps. One, I’ve got a worldwide view, a map showing me how things are going. It looks like it’s mostly OK, but we do have an issue in London. I’m also going to get the response times, the timing, so you can see how well the application is doing. Now, we again have this problem with London, which seems like an issue. So, at this point, of course, I would go through my normal set of things to look at: the network, the machines, and the configurations.
If I do decide that this is just not something I can resolve, I really need to assign it to engineering. So, I’m just going to right-click and go ahead and do the assign. Now, the cool thing about that is we’re letting the software do the work for us. We are running the tests that came from engineering, and we actually collected a whole bunch of diagnostics information that we want to get over to the engineering team.
So, again, if I head back over and put on my engineering hat, one of the things I’m going to see in TFS is that the issue has been routed to me. So, I’m getting, in TFS, the bug report, and it’s saying, “Hey, you’re getting a SQL exception.” If I look through the attachments, I can see I’m getting an IntelliTrace log that was captured for me from ops; nothing special they had to do, but it gets routed. I can look through the exception data, and clearly I am getting some SQL Server exceptions. If I go ahead and do a start debugging, that will debug the log. So, it’s not a live attach; it’s debugging the log locally. So, as a developer I can see that, and I get local variables and those sorts of things.
Now, I can see that in this code it’s pretty clear what’s happening: I’m doing a lookup of a U.K. country code, and it’s not finding it. So, that’s clearly an issue I have to deal with. At this point I know how to resolve it: I can go fix my data, or the service that’s surfacing that information. And when I resolve it, that will go back into TFS and be reflected there. The cool thing is, everybody in this room is probably using Visual Studio for dev and System Center for ops. We’re going to be able to start cooperating very closely with each other, and we’re going to be able to start releasing features much, much faster. So, that’s the vision around developers and operations, and we’re adding a whole bunch of support for that going forward.
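The bug pattern in that demo, a reference-data lookup failing on a missing key, can be sketched in a few lines. The country table and function names here are hypothetical; the point is to surface a clear error for missing data and to show that the fix is in the data, not the code.

```python
# Hypothetical country-code reference table; "UK" is missing, which is
# the kind of data gap that surfaced as a SQL exception in the demo.
COUNTRY_CODES = {"US": "United States", "DE": "Germany", "FR": "France"}

def lookup_country(code):
    """Return the country name, raising a descriptive error instead of
    letting a bare KeyError bubble up as an opaque exception."""
    try:
        return COUNTRY_CODES[code]
    except KeyError:
        raise LookupError(f"country code {code!r} not in reference data")

# The fix: add the missing reference row, after which the lookup resolves.
COUNTRY_CODES["UK"] = "United Kingdom"
print(lookup_country("UK"))
```

Wrapping the lookup this way means the diagnostics log captured in production points straight at the missing value rather than at a generic database exception.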
Now, with that, what I’d like to do is show off a case study that we’ve got from ING Direct. They had a very similar sort of problem, trying to figure out, “Hey, how do we reduce the number of people and the amount of time it takes to develop software?” Using private cloud, Windows Server 2012, System Center, and Visual Studio all together, they were able to get some pretty dramatic reductions in that production pipeline. I think the excitement about that solution shows through pretty well in the video.
So, with that we’ll go ahead and roll the video and I’ll say thank you very much.
SATYA NADELLA: All right. So, that was a great set of demos that showed you development of new apps across devices, and how DevOps and a new lifecycle come together. Someday I’ll even convince Guthrie that Service Bus is for durable app-to-app messaging, not an instant messaging system for him. But that’s another day. We have a very, very rich set of runtimes, tools, and frameworks to build applications. But it’s not really the end of the story when it comes to modern apps.
One of the other most important considerations for us is how we rethink access and device management. If you build these applications, take for example one of these SAP-connected business applications that you build using LightSwitch, you want to make them available across all devices, but you can’t attest to the health of all of those devices. You want to set policy based on the type of application, and based on the user and the device, give restricted access. You may want to encrypt data.
So, there is a real rethink in terms of enterprise access and device management. And the fundamental thing is that you want to make it much more people-centric. It’s not about a particular device; it’s about the particular person, the device they are using, and the applications they want access to. You want to make all of the applications and all of the data available to that user on that device, but at the same time, from an IT perspective, have the control and governance framework.
And so to that end, we are extending System Center with Windows Intune. You should think of Windows Intune and System Center as providing that combination of control and governance, so that you can give the right access to the user on their device for their applications and data.
The first thing that we have done with this new release of Windows Intune is integrate with Azure Active Directory. So, for example, if you’re an Office 365 end user today, you’ll log in with the same single sign-on into Windows Intune. Then you can go into a self-service portal and look at all of the applications your IT administrator has given you access to, depending on the type of device you’re using, because there’s an inventory of the device and the ability to interrogate the device to find out whether you can attest to its health. And based on that, you may get a VDI [Virtual Desktop Infrastructure] session or a rich application, or you may encrypt the data before sending it down; all of those mechanisms are things that you can set policy on.
So, Windows Intune will do this across all of the devices: Windows Phone, Windows 8, which you’ll hear a lot more about tomorrow, as well as the other devices, from iPhone to Android. So, we have multi-device management, people-centricity with Azure AD integration, and a self-service application portal, if you will, to complement everything that you’re already doing with System Center.
The last dimension of a modern application is data. At some level, a lot of the applications that Scott and Jason showed already had the power of data. But for us, thinking about data as a first-class part of our application platform is pretty important. With SQL Server and SQL Azure, that’s one of the places where we’re investing very, very heavily. We want to have a complete storage portfolio for any type of data, from the NoSQL pattern to the SQL pattern, from OLTP to BI to data warehousing; at the core level, we want to have a complete data platform.
Second, we want to have rich capabilities to reason over data. Basically, how can you add intelligence to your application by doing things like applied machine learning? How can we connect up all of the enterprise data and look at it together in terms of data mining, and connect to the data that’s in Bing in order to reason over the union of your data and the world’s knowledge? And, lastly, how do we connect these petabytes of data, in many cases, to real insight by having Excel connect with big data?
To give you a demonstration of that last point, where more data doesn’t necessarily mean insight if you don’t connect petabytes to end users, I wanted to invite up on stage Amir Netz from our SQL team to show you what we’re doing with some of the new capabilities in SQL.
AMIR NETZ: Thank you, Satya.
Hi there. So, modern business applications generate massive amounts of data, big data, and we talk a lot about building these large Hadoop clusters to store all of it. But just as important as storing the data is making sure that users can make sense out of all of it, to make real business decisions.
To show you how this is done, we have used our own Hadoop cluster to collect a bunch of tweets, 12 million tweets about movies. So, we have this large cloud of unstructured data, 12 million 140-character strings, and let’s see what we can learn from it. And to do that, we’re going to use Power View in SQL Server 2012.
So, what you see here is SQL Server 2012 on the screen, and we have here the 12 million tweets. One thing that we can do with Power View is model the data and follow our hunches. So, instead of just looking at the total, I’m going to change it to a column chart, and we’re going to see how it behaves over time.
So, just like that, we can see here tweets over the last five months. And we can see some really interesting patterns emerging. There’s a nice spike here of tweets that is appearing on March 23rd. There’s a big pile of tweets here at the beginning of May. So, something is going on, and I, as a business user, might be interested in following some hunches and figuring out what’s going on. Maybe it has something to do with movie releases.
Let me go in here and try it out. I’m going to take the tweet count, look at the movie titles, make it into a bar chart, make it a bit larger, and we’re going to sort it. And we can see here the tweet counts for the various movies. The movie that had the most tweets was “The Hunger Games.” Maybe that can explain some of the spikes we’re seeing here.
I’m clicking on the bar of “The Hunger Games,” and immediately I can see that “The Hunger Games” is truly responsible for that peak that we had at the end of March. In fact, “The Hunger Games” was released on that date, March 23rd. “The Avengers,” well, that one is responsible for the spike that we’re seeing at the beginning of May. Obviously with “The Hunger Games,” there was anticipation for the movie because of the book. So, all that is great insight we’re getting here from just looking at the tweets.
But what kind of business decision can we make? Well, put yourself for a moment into the shoes of a theater operator. Maybe the most difficult decision that you have to make every week is: how many screens do I allocate to each one of the movies? Well, maybe the tweets can help us make that decision. So, we’re going to go and do some experiments here. I’m going to go and maximize the chart, and focus only on the top-grossing movies for my analysis. And I want to look only at the tweets that were tweeted before the movie was released. That’s our leading indicator.
I’m going to go in here and change it to count only tweets from before the movie was released, so a different number of tweets. And now let’s compare and see if it correlates well with the revenue of the movie. So, we’re going to go here and take a look at the revenue in the first week, and the size of the bubble is the overall revenue. Now we can see that on the X axis we have the tweet count, and on the Y axis we have the revenue, and there’s a very strong correlation: the more tweets the movie is getting before it is released, the better it will do in the first week.
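The correlation Amir eyeballs on the bubble chart can be quantified with a Pearson coefficient. This sketch uses made-up numbers, the tweet counts and revenues below are illustrative toys, not the demo’s data, but it shows the computation behind the "strong correlation" claim.

```python
import math

# Toy, hypothetical numbers: pre-release tweet counts vs. first-week
# revenue (in millions) for a handful of imaginary movies.
tweets  = [120_000, 45_000, 300_000, 80_000, 15_000]
revenue = [60.0,    22.0,   150.0,   35.0,   8.0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(tweets, revenue)
print(f"correlation: {r:.3f}")  # strongly positive on this toy data
```

A coefficient near +1 is what the bubble chart shows visually: more pre-release buzz, more first-week revenue, with outliers like “The Avengers” sitting off the trend line.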
Of course, there is one exception here, “The Avengers” is getting way more revenue than the tweets would let you believe, and actually tried to figure out why the outlier here? I went to see the movie, and I still cannot figure out why it did so well.
So, this is kind of the first decision a theater operator has to make. The second decision comes a week later, when we get to the second week. And in the second week, generally, the movies are doing less, you know, generating less revenue than in the first week. And you can see it here: the blue bar is the revenue in the first week, the red bar is the revenue in the second week, and it’s always going down. But we can actually see that the drop between the first week and the second week is not consistent. It’s actually different for different movies. I’m actually displaying the wrong one. Here we go. The drop between the first week and the second week is inconsistent: some movies are dropping much faster than other movies. And the question is, what is leading to that drop? You can see that it’s not really the number of tweets that can give us the clue. What can really give us the clue here is the sentiment analysis.
And what we’ve done on our Hadoop cluster is we’ve already calculated the sentiment of the tweets, whether people are saying good things or bad things about the movie. And when you look at the correlation between the sentiment and the drop between the first week and the second week, you’ll see a very clear correlation: the higher the sentiment, the smaller the drop.
So, if we look here at the movie that had the worst sentiment, and click on it, we can see this was “John Carter.” “John Carter” was a megaflop. All the revenue was made in the first week. People went to the theater, saw it, told their friends, tweeted about it, and nobody came the second week.
Now, on the other hand, if we look at the movie that had the best performance in terms of sentiment, it’s “The Dictator.” It has phenomenal sentiment, and it was so good that it did better in the second week than in the first week. People went to the theater and then tweeted and said it was actually much better than the reviews would have led you to believe. Word of mouth gets out, and people go to the theater in droves, even more than in the first week. That’s a really rare phenomenon.
If I’m a theater operator and I’m not listening to the sentiment on Twitter, I’m missing the opportunity to allocate more screens and make more money. So, critical business decisions can be driven here by the sentiment.
Now, the other thing you can do here with Twitter is manage brands. And in the movie industry, some of the biggest brands are actor names. So, let’s take a look here; I want to move to the next screen. We see here a list of actors, and this is not just any list. If you look at it, you might recognize all the names, but there’s something special about it, too: all of these actors were nominated for an Academy Award, either in a leading role or a supporting role. So, this is a fairly distinguished set of actors, and each one of them is a brand name. Their name is worth money. They use it in commercials; they use it in so many other appearances. And so awareness of the brand is very important. So, we can look at the tweets about the actors and see who has the most awareness.
You can see here very clearly that Brad Pitt, George Clooney, and Meryl Streep are A-list actors. They have way more awareness than any of the other actors. It could also be interesting for us to take a look at how the tweets behave over time. Let’s go here and change things a little bit. Instead of looking at the tweets by actor, we’re going to look at the tweets by date, and we’re going to change it to a line chart. And we can see that the pattern of tweeting about actors has a really spiky behavior. You see a very large spike here on January 24th. Something very special happened to this group of actors on January 24th: this is the date when the nominations for the Academy Awards were announced, and the whole Internet was blazing with those names; Twitter was full of tweets mentioning them. But the behavior for those actors is different. Let’s just look here at Brad Pitt, highlighting Brad Pitt. And you can see that for Brad Pitt, January 24th was just another day at the office. Nothing special. He doesn’t need the Academy Award nomination. But for other actors, like Melissa McCarthy here, January 24th was really important. You see, nobody remembered Melissa McCarthy at all, and then January 24th comes, a giant amount of tweets mentioning her name, and then everybody forgets about her.
Now, we can go and take all these things and combine them together and see how they really operate. So, I’m going to go here and maybe change the chart to add some data labels. What you see here are the various male actors. On the X axis, we have the percent of tweets to date, and on the Y axis we see the sentiment. Let’s see how this nomination affected each one of them. We’ll start with Max Von Sydow. Max Von Sydow is one of the elders of the movie industry. He started making movies in 1946. He’s 83 years old today, and he was nominated for a supporting role in a movie called “Extremely Loud and Incredibly Close.” You can see that before the nomination, Max Von Sydow had very few tweets. Almost nobody was talking about him; almost nobody had any significant sentiment about him. But if we highlight Max here, we’re going to see what happened.
So, we get to January 24th, and suddenly you see a giant spike in awareness about Max Von Sydow. And the sentiment is going up; people say, wow, he must be a great actor because he’s nominated for the Academy Award, and then it tapers off. But overall, Max Von Sydow ends up in a much better place than he started.
And that kind of makes sense, and you’d think it always happens this way, but it actually doesn’t. So, let’s take a look at another actor. We’re going to look at Demian Bichir. Now, Demian Bichir was nominated for a leading role in “A Better Life,” and we’re going to see that he actually had good sentiment and little awareness. Then January 24th comes, and there’s a giant spike in awareness, but sentiment is actually going down. For some reason, people aren’t liking the fact that he’s nominated. In fact, it’s going down, down, down. He’s ending up in a very, very bad place. If you are a PR manager managing the brand of Demian Bichir, you have some work cut out for you to figure out what’s going on. Some business decision has to be made about how to make sure that the public perception of Demian is much better.
And, of course, we have our last hero here: Brad Pitt. Brad Pitt is a household brand name. For him, January 24th comes and goes. He just has giant volume, the same sentiment. People have already made up their minds about Brad. Brad doesn’t have to worry about anything. He’s just focused on the wedding.
So, you see here what we’ve done, right? We took 12 million tweets, 140 characters apiece. From this pile of unstructured data, we were able to figure out how to predict the first-week sales and the second-week sales. We were able to manage a brand, with awareness, with sentiment, and with the events backing those up. And we did it in a way that is fun and easy, that anyone can do. Isn’t that just amazing?
So, just imagine what you can do when you give this tool to your users. Thank you.
SATYA NADELLA: Thanks, Amir.
That was a whirlwind tour of all of the capabilities that make up the cloud operating system. If you look at what we talked about, we talked about the next generation of data center infrastructure, the next generation of data center management, application runtimes, and device management. All of these capabilities across the modern data center and the modern applications are what we’ve been hard at work building over the last year.
You see that in the wave of products that are launching. And that cloud operating system is the core capability that we wanted to make sure we first did a great job of building out, comprehensively and consistently, so that we can equip you with it, so that you can usher in the era of the cloud operating system within your own organization, and together we can drive additional business value for your business.
Thank you very much and have a great rest of the conference. Thank you.