Satya Nadella and Terry Myerson: Build 2016

Remarks by Satya Nadella, chief executive officer, and Terry Myerson, executive vice president, Windows and Devices Group, at Build 2016 in San Francisco on March 30, 2016.

SATYA NADELLA:  Good morning!  Good morning, and welcome to Build 2016.  It’s fantastic to be back here in San Francisco.  Welcome to everyone here with us, as well as everyone joining us on the Web.

You know, for me it feels like I’ve marked all my adult life with Microsoft developer conferences.  In fact, when I think about even the birth of my children, I think about the various platform eras at Microsoft.  My son belongs to IaaS, and my daughters are all .NET.  And so it’s so fantastic to be back here and talking to developers.

I mean, there’s one thing that’s so unique about developer conferences.  Developers come here excited already about technology.  They get to see more technology, meet more people who are talking technology.  But most importantly, they walk away inspired by what they see, what they learn, to build even cooler technology.

That uniqueness of developer conferences is what we want to celebrate here over the next three days.  That ability of developers to amplify technology is what we want to talk about over the course of this conference at Build.

Talking about all of this technology, in fact, there is a much more mainstream dialogue about the role of technology in our society.  And it’s the right time to have that dialogue, because technology is so much more mainstream.  It’s embedded in our daily lives.  It’s embedded in our companies, in our industries, in our economies and countries, much more so than ever before.

And so we have these profound questions and issues in front of us.  Is technology driving economic growth for everyone or is economic growth stalled in spite of technological spend?

Is technology empowering people or is it displacing us?  Is technology helping us preserve our enduring values such as privacy or is it compromising them?  These are the issues that are being discussed, and they’re the right issues for us to have a broad dialogue about, not just in one company, not just in our industry, but as a society.

I am an optimist.  We as a company are optimistic about what technology can do for us.  I believe technology can, in fact, drive economic growth all over the world.  I believe technology can empower us in our daily lives.  I believe technology can be used to preserve our enduring values.

We do, however, have to make choices about how we go about building technology.  We need to make design choices, economic choices, and social choices that ensure that the way we build technology, the way we use technology, helps us make progress as a society.

That optimism and that set of choices are embedded and coded in our mission: to empower every person and every organization on the planet to achieve more.  We want to make things so that others can make things and make things happen.  That is our broad platform approach.

And when we think about driving success and empowering people and businesses through digital technology, we have to start with all of you as developers.  We have to empower you with technology, but more importantly we have to create that opportunity for you to be able to express your creativity that can change the world.  And throughout this developer conference that’s what we’re going to talk about, what are the platforms that you get to work on that really are going to help you change the world.

We live in a mobile-first, cloud-first world, and we’ve talked about this many times before, but it’s always good to go back to make sure that we are grounded in what we mean by mobile first and cloud first.

For us, mobile first is not about the mobility of any single device.  It is the mobility of the human experience across all the devices and all the computing in our lives.

Cloud is not a single destination, cloud is a new form of computing that, in fact, enables that mobility of experience across all our devices.  It infuses those experiences with intelligence, because it has the ability to reason over large amounts of data using a distributed computing fabric.  That’s what is the rich mobile-first, cloud-first world for which we’re building.

And we are building three interconnected platforms.  Tomorrow, you’re going to hear from Scott Guthrie about our intelligent cloud and Azure.  Scott’s going to talk extensively about how, across all of the new application platforms, from IoT to mobile to Web to machine learning, you can use data and infrastructure to build these applications.  More importantly, he’s going to talk about how building on Azure gives you the opportunity to reach over 5 million businesses that are already in Azure Active Directory.

Qi Lu is going to talk tomorrow about the opportunity with Office 365.  We’re opening up Office 365 unlike ever before.  Your applications, your data through Office 365 connectors and add-ins, can be found within the UI scaffolding of Office very naturally by the users of Office.  That means 1.2 billion users of Office will ultimately be able to get access to all of your applications because they get built into the UI scaffolding.

We also want to take the rich semantic information underneath Office, that is people, their relationships, their schedules, their files, and all the other artifacts, and expose it as the Microsoft Graph so that you can use it and extend it as part of your applications.  So Qi’s going to talk about what that rich world looks like.

And this morning, Terry and I will talk about more personal computing.

Windows 10 is off to an amazing start.  It’s the fastest-growing version of Windows with both consumers and enterprises.

We have tremendous innovation in devices.  In fact, new categories of Windows devices, whether they are IoT or HoloLens, are getting created.

And this morning, you will hear from Terry and team about all of the advances with Windows, how we are bringing the natural user interface, whether it be touch, ink, voice, or even image recognition, so that you can use them to build new categories of applications.

We’re also going to talk about how in gaming and consumptive applications we’re going to open up both the PC and the console for new opportunities for developers.

We’re also going to talk about how Windows is your ultimate dev box, where you can do all your application development for Windows and beyond, right there on a Windows PC.

We also are going to talk about a new emerging platform.  We’re in the very early days of this.  It’s actually at the intersection of all of our three ambitions.  We call it Conversation as a Platform.  It’s a simple concept, yet it’s very powerful in its impact.  It is about taking the power of human language and applying it more pervasively to all of our computing.

That means we need to infuse into our computers and computing intelligence, intelligence about us and our context.  And by doing so we think this can have as profound an impact as the previous platform shifts have had, whether it be GUI, whether it be the Web or touch on mobile.  And so we’ll have an extensive conversation about the beginnings of this new platform journey.

But we want to get started with Windows 10 and what that opportunity represents to all of you.  And to do that, let me please welcome Terry Myerson up onstage.  Thank you.


(Video:  WDG Momentum.)

TERRY MYERSON:  Hello, developers.  (Cheers, applause.)  I’m so excited to be back here at Build with all of you.  I mean, it really is such a great time to be a Windows developer and today it’s going to get even better.

Satya shared our ambition with Windows.  It’s to make computing more personal and change the way all of us interact with and benefit from technology.

Enabling us to interact with devices more intuitively, using natural language, conversations, evolving us from mechanical keyboard and mouse to touch and beyond, taking advantage of more human capabilities like our voice, pen, our face, gestures, fingerprints.

Expanding beyond today’s PCs and phones, bringing computing into our world with holograms, immersing ourselves in games across devices.

Windows 10 is our home for this evolution, making all of us more productive and enabling us to have more fun.

Windows 10 has been out for eight months, and it’s already being actively used by over 270 million people.  (Cheers, applause.)

And customers are more engaged than ever before, spending over 75 billion hours on Windows 10.  And our hardware partners, they’ve launched over 500 new devices designed for Windows 10 — devices with large screens, small screens, no screens, and everything in between.

We’re humbled by this response.  Across the globe from Dubai to right here in San Francisco.

Now, Windows 10 is off to the fastest adoption of any release of Windows ever.  And it’s not just consumers.  It’s enterprise customers also.

Consider the United States Department of Defense: 4 million devices will upgrade to Windows 10 just this year.

So on behalf of the entire Windows team, we’re happy to welcome all of these customers to Windows 10, whether they have a brand-new PC, a five-year-old PC, or a brand-new Mac.  Join us, you’re welcome on Windows 10.  (Applause.)

Now, every day we’re working hard on making Windows 10 better, to delight all our users all around the world.  And today we’re so excited to share with you what we’ve been working on.

The next wave of innovations from Microsoft.  The Anniversary Update to Windows 10 coming this summer for free to all of our Windows 10 customers.  (Cheers, applause.)

Now, how many of you are Windows Insiders?  Come on, how many Windows Insiders out there?  (Cheers, applause.)  All right.  That’s so awesome.

I mean, from the whole team, just thanks again.  You guys have helped shape everything we’re doing at Microsoft, giving us feedback every day on every pixel, every ounce of performance we can squeeze, and of course today you can go download the latest build and give us feedback on everything we’re talking about today.

With this Anniversary Update, more personal computing really does come to life.  Take Windows Hello.  Today, you can log into your device more securely and easily than ever before.  But with this Anniversary Update, you’re now going to be able to use this secure and easy biometric authentication with your apps, or even better, with Microsoft Edge, which will now be the first and only browser to support this secure and easy biometric authentication on all supporting websites.  (Cheers, applause.)

With the Anniversary Update, we’ll introduce an all-new Windows Ink experience.  Our goal is to make using a pen with your device as seamless and as easy as pen and paper.

The Anniversary Update will come to your Xbox One, now bringing all of your Windows applications into your living room and enabling you to turn any retail Xbox One into a dev kit.

And, of course, the Anniversary Update will come to HoloLens, enabling all new ways to work, communicate, learn, and play.

And Cortana, in the Anniversary Update, will bring incredible new ways to get stuff done.

So please welcome Bryan Roper to do a demo of the Anniversary Update.  (Cheers, applause.)

BRYAN ROPER:  Thank you, Terry.  Ladies and gentlemen, how are y’all feeling this morning?  (Cheers, applause.)  Oh, man.  This side was louder.  I’ll give this side a chance now.  How you feeling?  (Cheers, applause.)  Awesome.

Now, with this Anniversary Update, man, I’m pumped.  We’re going to completely modernize the PC.  We’re going to make it safer and more secure, we’re going to make it more personal and productive, and we’re going to keep you really engaged with this more personal computing idea.

I want to start with that safer and more secure.  All right?  Make some noise if you use Windows Hello today.  (Cheers, applause.)

You remember when we launched this, biometric authentication was enterprise grade, keeping folks secure with the most personal things possible: your finger, your face.  It’s awesome.

We see folks loving this, all right?  It’s coming to apps.  People are buying things in the Windows Store using this; Dropbox has integrated it.

But today, I want to talk about another partner that’s very excited, that’s USAA.  All right?  They’re the largest financial services provider to our U.S. military, and they’re actually going to extend Hello support for their website using Microsoft Edge.

Because of a new W3C spec that was implemented in Edge, this convenience and security can actually light up here.

So, check it out, we’re going to show you the website now.  This is so easy.  I’m on a device right now that’s controlled with my fingerprint.  All I’m going to do is just touch — log on with Windows Hello.  There you go.  And now it’s asking me to touch my fingerprint.  And just like this, it logs in.  It’s that fast, it’s easy, it’s safe, it’s secure.  We are pumped about it.  (Cheers, applause.)

Now, let’s talk about more personal computing and get a little bit productive here, all right?  Terry mentioned a new experience that’s going to put Ink front and center.  All right?  We’re going to help you with Windows Ink create, ideate, and collaborate better.  OK?

And why would we do this?  Bryan, why would you put Ink front and center?  Because a lot of you all still use pen and paper.  Make some noise if you still write stuff down all the time on pen and paper.  (Cheers, applause.)

Why do we do that?  We do that because it’s fast, it’s easy, it’s right there, it’s immediate, it doesn’t require setup.  But then sometimes you lose that piece of paper and, oh, man, where was that?

So our goal with Windows Ink is to really combine the naturalness and the speed of pen and paper with the power of a PC.

There are three parts to this.  No. 1, we’re going to put this pen front and center so you know what to do with it.  No. 2, we’re going to make sure that we solve for the tasks that people are reaching for pen and paper most for.  And No. 3, I’m going to show you how Windows Ink is actually a deeper platform that enables faster and more fluid Ink flow throughout the entire Windows experience.  All right?

So let’s start with that first one.  Putting the pen front and center.  OK, so when I got my pen, I was super excited.  But I wasn’t always sure what to do with it or which apps to use.  So we’re solving for that.

I’m going to push this pen button.  Check this out.  This is the Ink workspace.  All things Ink are here.  So these are the three tasks people do most.  I’m going to jump in on that in a second.

But look at this, my most recently used pen apps, I can get right back in and do what I was doing.  Also down here, tips and tricks.  So users are going to know what to do with this pen.  We’re going to walk you through that.

But another thing I want to call out to you folks, this right here, suggested apps.  This is one place where if I touch this, I’m going to go to a dedicated section of the store with amazing apps that support pen.  All right?

So we’re going to light that up for folks.  They’re going to know what to do with the pen, we’re putting it right front and center.  What do you think about that?  (Cheers, applause.)  I’m just getting started, all right?

What about the tasks we do most?  No. 1, we jot down things to remember.  Make some noise if you use sticky notes in your life.  (Cheers, applause.)  I’m not surprised.  All right.  3M sells 50 billion sticky notes a year, all right?

Today on Windows 10, we have 8 million monthly active users of sticky notes and 3 million daily active users.  So, of course, we’re going to make this better.

I can get to my sticky notes right here, check this out.  Here, I had some stuff I had to make sure I showed you guys, we’re doing good on that so far.

But watch this.  I’m going to make a new note here.  My mom always gets worried when I do these demos.  She calls me.  You know, she speaks “Spanglish” she mixes it up, we’re Latin.  Oh, yeah, Bryan, I’m worried that the demo goes good.  How are you feeling?  So I’ve got to always remember to call my mom after this.

So I want to make sure that tomorrow I call mom.  But watch what happens when I write “tomorrow.”  OK.  First of all, look how easy and smooth this ink is flowing.

But watch what happens next.  I wrote the word “tomorrow” and because we’re combining the power of the pen and the PC, you see it’s turning blue.  That’s because we’re smart enough to know that it was handwriting, to know that it was a day.  So that if I touch it, my homey who rocks at reminders, Cortana, is easily able to come up and actually set that reminder for me.  OK?

And beyond just setting reminders with Cortana, Bing is integrated.  We’ll be able to recognize places, maps, all kinds of cool stuff.  This is going to be very smart.  These sticky notes are geniuses.  What do you guys think about that?  (Cheers, applause.)  Yes.

Why else do we write things down?  Sometimes when you scratch out an idea, it’s just easier to communicate visually.

How many times when you’re moving furniture, you rip off a piece of paper, you start jotting on it.  You devs, I know you run up to the white board and write a bunch of stuff.  And then at the end, you’re like, oh, wait, let me photograph the white board and later try to decipher what those things were that made sense at the time.

OK.  So we want to make sure you have a way to do that.  And that’s why there’s a sketch pad now.  So check this out.  The sketch pad.  I already wrote some stuff here for you.  And what you’re going to see is I wrote, of course, Windows loves Ink.  Can I get an “ah”?  Ah.  Y’all did that nice.

So check this out.  Why else are we putting Ink front and center?  I’m going to hit you with some stats here.  We did a little survey and found that 72 percent of people still use pen and paper or various writing instruments a lot.  That’s 72 percent using them for more than one hour per day.  That’s crazy, all right?

Now, of that 72 percent, 32 percent actually use them for more than three hours a day.  So there’s a lot of people using pen.  All right?

Now, check out how I did that.  It was quick, it was easy, it was immediate, I got there, I could illustrate an idea to you.  But here’s what I want to call out, because I have the power of the PC, I can do cool stuff like have a ruler.  So maybe these lines are not as cool as I wanted them to be.  Look how I’m easily able to come in and rotate this.  I can come in and get my lines super straight.  I can come in and do this.  (Cheers, applause.)  Oh, y’all like it?  You like it.  Cool.

And, of course, the last thing — this is going to bother me if I don’t do it.  Y’all remember back in grade school the first thing you learned is that whenever you have a heart, you guys put the little arrow, and you got to do that.

OK.  And then because this is the power of the PC, yes, that person was excited about the heart.  I wrote that heart for my person.  (Applause.)  I’m not going to lose this white board because I have the power of a PC.  And you know what?  Just so y’all can have this for later, I’m going to go ahead and tweet it to you so this way you all have it.  You’ll remember that we love Ink.  What do you guys think about that?  (Cheers, applause.)

Now, yes, and of course we don’t want the tech to get in the way.  So if I actually go and lock my PC right now, check this out, I’m going to lock it.  That entire Ink workspace is available above lock.  I can get right back to what I was doing.  So when it’s time to rock and roll, you grab your device, hit the pen button, and you’re ready to go, man.  OK.  (Applause.)

Now, beyond all this cool stuff, inking is actually a platform.  So I’m going to showcase a couple of things now: how this is actually extending great inking experiences into Office, into a maps app that we have, and also into Adobe.  So check this out.  I’m going to start with maps.

Sometimes it’s hard to actually explain something to somebody as far as where it is on a map.  Look at this.  I want to make sure you see.  I’m just going to take my pen and make a little dot right here.  Watch what happens, though.  That dot — this is smart ink, man — it becomes a point of interest.  That’s a registered point of interest.

And if I want to go over here and meet somebody over by the water, I make another dot.  Now, watch what happens.  I just draw a line connecting the dots and look what happens.  This Ink is smart enough to know those two points, to know that distance, and actually see what’s happening there, OK?

And, also, you’ll be seeing we can easily get directions there.

Now, beyond that, here’s Mount St. Helens.  Make some noise if you’ve played with the 3D section of this map before.  (Applause.)  Yeah.  If you have not, you’ve got to check this out.

Let’s say I want to go for a hike and I found a really cool route to go do this.  Now, watch what happens.  This ink is going to be smart enough, again, to dry and know that that’s 2.1 miles.  But watch this.  I can make a little place and say here’s where we’re going to meet.  And then up here, I’ll say maybe we’re going to stop and do some lunch.  OK.

And then right up here, maybe, I’m going to go and say here’s where we’re going to do a picture.  All right?  So I’m making these custom routes, and these custom notes are available across all my Windows devices.

But watch what happens.  Look at this, when I turn this to 3D, that Ink is smart enough to actually stick to that thing.  It knows that that’s a topographical map and it’s sticking to it.  And watch this, when I rotate this, those little billboards that I wrote are smart enough to face me the entire time.  That’s the power of the pen and the PC coming together.  How do you feel about that?  (Cheers, applause.)  That’s right!

Now, I’m going to revisit some childhood trauma here with my teachers marking red stuff on my papers consistently.  But check this out.  We’ve all done it, whether you’re helping your children with their homework or if you’re a teacher yourself.  Look how easy this is now in Word.  I crossed out a word, it’s gone.  Maybe I don’t like this whole paragraph, OK?

We’ve always had inking in Word, but watch how the text is actually reflowing now.

Also, if you’re like me and you’re not artistically inclined, maybe when you highlight stuff it looks like this.  That’s cool, look what happens.  You all laugh because you’re, like, I could never write it.  (Cheers, applause.)  Yes.  You can do a whole paragraph.

Now, also, today, PowerPoint does a great job at aligning things naturally on vertical and horizontal axes, but check this out.  Let me show you what the ruler can do here.

So, two things, when I move this ruler with two fingers, I’m able to easily control the rotation.  When I use one, that rotation is locked.  So what I can actually do is move this down here and snap these objects alongside this ruler, all right?  So this is easily coming together.  (Cheers, applause.)  Yeah.

And now when I line this up, check this out.  It’s really easy for me to draw some cool little diagonal lines to make my slide look awesome, that’s the power of the pen and the PC coming together.  All right?

Now, beyond Office, beyond maps, we know that Windows Ink is a platform for everybody and Adobe’s embracing this.  I’m going to talk you through something they made for us, check this out.

What you’re seeing right now, it’s going to come up, so that’s a stencil called a French curve.  Adobe does not want their tools to get in the artist’s way.  All right?  But watch what happens.  Simultaneous pen and touch is enabled with the Windows Ink platform.  Look how that artist is easily able to use touch and pen together to be able to actually make these perfect lines on the shoe with the French curve stencil.  I mean, that’s awesome.  This is empowering new experiences.  The tech is not getting in the way, it’s enabling.  That is the goal.

What you’re seeing here is an example of the amazing latency — or lack thereof — of the Windows Ink platform.  The Ink is just flowing completely smoothly out of that pen.  I mean, you would never know that it’s actually a device doing that, because that’s our goal.  How do you guys feel about that stuff?  (Cheers, applause.)

Ladies and gentlemen, that is just the beginning.  There is no way I could show you all this on stage.  There is so much more coming.  And to kind of give you a glimpse into what we’ve got going on, there’s a little sizzle video, take a look.  (Applause.)

(Video:  Redstone Features.)

TERRY MYERSON:  Was that awesome?  (Cheers, applause.)  Now, all of those experiences Bryan just showed are built on our Universal Windows Platform.

We designed this platform for all types of apps from productivity to gaming to holographic.  We designed this platform for all types of devices from phones to tablets to PCs to IoT to large screens like Surface Hub and HoloLens or Xbox and more.

And we designed this platform for all of our customers, for all the needs of our commercial customers and for those of our developer partners targeting consumers.

And this is an open platform.  For over 30 years, Windows has welcomed an open ecosystem of hardware partners, software partners, developer partners, wherever they are around the world, and nothing changes with the Universal Windows Platform.

The Universal Windows Platform brings together all that Windows history as well as everything that today’s users expect from a modern app platform like a seamless and robust install, uninstall, and seamless updates.  (Cheers, applause.)

Now, our 270 million users of Windows 10, they’ve visited the store over 5 billion times.  And our Universal Windows Platform developers have responded with some incredible new apps.

The number of new developers is growing month over month with a 60 percent growth in just the last few months alone.  And these developers are bringing some incredible new apps to the store, some coming very soon.

Partners like Twitter, Bank of America, Starbucks, Uber, Disney, Yahoo, WWE, and more.

And Facebook is bringing a new Universal App for Facebook, Instagram, and Messenger to the store very soon.  (Cheers, applause.)  I’m very excited to share that Facebook will also be bringing their Facebook Audience Network and app install SDK to all of our Universal Windows Platform developers so that 3 million Facebook advertisers can reach their customers through your apps.  (Cheers, applause.)

Now, this is not just momentum in the consumer space.  We’re also seeing great adoption from our commercial customers.  Thousands of organizations are now piloting the use of the Windows Store for business within their organization and building incredible apps to take advantage of it.

Let’s take a look at one partner who is really leading the way.

(Video:  Boeing using UWP.)

TERRY MYERSON:  (Applause.)  And you can do things with the Universal Windows Platform you just cannot do on any other platform.

This is a developer conference, so I think it’s about time we looked at some code.  Please welcome Kevin Gallo.  (Cheers, applause.)

KEVIN GALLO:  Thank you, Terry.  Good morning, I’m so excited to be here.

We know that every innovation and step forward with Windows is only as powerful as the ecosystem that rallies around it.

That means not just building investments in Windows for customers, but more importantly, investing in Windows for you, our developers.

And not just for building Windows Apps, either.  Whether you’re building your cloud services, websites, or even apps for other platforms, we want Windows to be the place where you can be the most productive and successful.

We want Windows to be home for developers.  (Applause.)

Let’s start with some of the newest innovations coming to the Universal Windows Platform and how we’re enabling developers to push the envelope with natural input and intuitive design.

So here is an application that I’ve built.  I’m just going to launch it here.

Now, one of the great things about native applications is the ability to really harness the power of the GPU and build stunning, and more importantly, differentiated user experiences for your apps.

We’ve added hardware-accelerated, 60-frame-per-second animation and effects.  These effects bring richer and more immersive designs that create a sense of depth, layering, and fluidity which help make your apps more delightful and intuitive to use.

So in my app, I have a few of these that I want to show you.  So the first one here, the blur effect.  As I move my fingers, I get this blur that is synchronized to my fingers in perfect harmony here, absolutely no glitches, and just looks stunning, it’s layered, it’s all in real time.

We also have the ability to do mouse-based effects here, or just any kind of animated effect as a layer.  So as I move my mouse, I get these bars in this chart.  You can see that as I hover over one, I get this layering, which gives you a great feel for the actual bar and the actual graphics here.

So when I showed these new GPU effects to designers, I can’t tell you how excited they were about what they’ll be able to go do with their designs.

Now I want to move on to kind of my favorite, which is Windows Ink.  You know, we really want all devs to be able to quickly and easily integrate Ink into their experiences and make the pen come alive for end users.

So I’m going to show you what is typical at a conference: the Hello World for Windows Ink.

I’m going to start with the code.  So here I have my XAML.  So I built this in XAML.  And I have two controls.  The first one here is an Ink canvas control.  We introduced this last year.  This is where I get all my strokes and drawing, all that capability is all built into this single control.

We’re introducing a new control, it’s a stock control now built into the SDK.  This is the Ink toolbar that you also saw Bryan use.  This has the pen, the tips, eraser, and that ruler that we showed you.

And all I had to do was do a binding statement here from the actual toolbar to the canvas so it knows which canvas it’s acting on.  Two lines of XAML, that’s all you have to do.
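As a rough sketch, those two lines of XAML might look like the following (control and attribute names are assumed from the preview SDK and may differ in the final release):

```xaml
<!-- An InkCanvas to collect strokes, and an InkToolbar bound to it
     for the pen tips, eraser, and ruler. -->
<InkCanvas x:Name="inkCanvas" />
<InkToolbar TargetInkCanvas="{x:Bind inkCanvas}" />
```

The binding statement on the toolbar is what tells it which canvas it is acting on.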

And let me show you what I get for those two lines, just so you know it’s real.  So here I can just go start drawing right here on top of this chart.  I can highlight something here and point it out, and I can also just pop up my ruler and here I can move the ruler and kind of maybe try and — I’m not as good as Bryan at this.

And so I can maybe draw some lines here between it.  And you see all of that with just those two lines of XAML.  (Cheers, applause.)

It’s never been easier to add rich inking features to your apps.  But, of course, we have a host of additional Ink APIs.  So you can build some of the advanced experiences that Bryan showed you.

So these are just a few of the many features coming to the Universal Windows Platform.  In total, there are over 1,000 new APIs and features.

But, you know, at the end of the day, things only become real for devs when you get your hands on them.  So I am excited to announce that today we are releasing Visual Studio Update 2 and a preview of the Anniversary SDK.  (Cheers, applause.)  Give them a spin, and let us know what you think.

Next, I want to move and talk about Web development.  Last year, we introduced hosted Web apps.  This allows developers to easily bring their Web experiences to the Windows Store.  They can leverage their existing Web investments and at the same time get full access to native capabilities like Cortana, Bluetooth, Ink, and more.

This is resonating with our Web community.  We’ve seen lots of highly rated Web apps submitted to the Windows Store including American Express, Yahoo Mail, Zulily, and more.

However, when we talk to Web developers, they tell us they still struggle with using Windows as their primary dev box, as many of them have workflows that rely on open source command-line tools, scripts, and frameworks.

We’re going to fix that so Windows can be your home.  So today I am so excited to announce the Bash Shell is coming to Windows.  (Cheers, applause.)  Yes.  The real Bash is coming to Windows.  This is not a VM, this is not a cross-compiled tool.  This is native Ubuntu Linux binaries running on Windows through the magic of a new Windows subsystem.

We’ve partnered with Canonical to deliver this great experience which you’ll be able to download directly from the Windows Store.

Inside of Bash, you have access to the native file system, VT100 support, SSH, and all of your favorite command-line tools.  Let’s dive into the tech and take a look.

So here I have a website I’m running in the Edge browser.  This is just a Ruby website that we built.  It’s hosted in a Linux VM in Azure.  I also built a hosted Web app with it.

So I can just launch that app.  All I had to do was submit it to the Windows Store, which produced an AppX package that downloads when the user installs the app, right here.

So I haven’t done anything extra to it yet, but I want to.  I want to start making this thing come alive and do better.  This is a banking app, so I decided to add the ability to use the native camera and really get a great camera experience on the device, like you’d expect.

So I’m going to hop over here to Bash.  Here you can see, I just run ls and I can see my local file system.  I’m going to move over to my C drive; nice auto-completion here.  First I’m going to modify my JavaScript.  There are two parts:  the JavaScript that comes down and lights up the Windows functionality, and the Ruby code on the back end that I need to change.

And, of course, my favorite editor is Emacs.  So I’m going to use Emacs, but you can use any of your favorites; they’re all there, so everybody can choose the one that they enjoy the most.

You’ll see here in my JavaScript that I do a check first; this is how I make sure I can share my code.  If I’m running inside of the application, I just check to see whether the Windows definition is there.  If it is, then I load the Windows APIs; if not, then I’m being hosted in the browser and I don’t light up this code.

And you can see here, I’m just calling the standard WinRT APIs that I get access to.  And I get that native experience because they are the native APIs that I’m actually calling.
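The check described above can be sketched like this (a minimal illustration; the helper names and the fallback strings are hypothetical stand-ins, not the actual app’s code):

```javascript
// Detect whether we're running inside a hosted Web app on Windows,
// where the WinRT "Windows" namespace is injected into the global scope.
function isHostedWindowsApp(globalObj) {
  return typeof globalObj.Windows !== "undefined";
}

// Shared page code: light up native features only when they exist.
function setupCamera(globalObj) {
  if (isHostedWindowsApp(globalObj)) {
    // In a real app you would call WinRT here, e.g. the camera capture UI.
    return "native-camera";
  }
  // Plain browser: fall back to the regular web experience.
  return "web-fallback";
}
```

In the browser the `Windows` global is undefined, so the fallback path runs; inside the packaged app the WinRT namespace is present and the native path lights up, which is why the same JavaScript can be shared between the website and the store app.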

So I’m going to save that.  Now I’m going to SSH into my Linux VM in Azure there.  I’m going to try to figure out my password.

So what I’ll do is, again, use Emacs, and I’ll modify this file.  So, you know, like on all of these demos, I’m going to comment out some code because it just takes too long to actually go modify it all.  So I’m going to comment out that code there in Ruby.  It’s saved; all I need to do is restart the application so I get the latest version.

I’ll just pop over here, launch it.  You might notice it added a new menu item; that’s what my UI change did, so I can hit Scan Check here.  And I get this native experience.  I got this check here from Giorgio for $1,000.  But I’m really not going to actually take the picture, because the last time it bounced.  (Laughter.)

So I’m just going to exit out of here and be done.

So, you know, with Bash coming to Windows, we’re bringing the power of open source command line tools to Windows.  What do you think?  (Cheers, applause.)  Pretty awesome?  (Cheers, applause.)  We’re excited.

Now I want to ask, how many people here have built either a Win32 app or a .NET application?  (Cheers, applause.)  Woo!  All right.  Thank you for being a part of our Windows family.

And so today I’m excited to announce that we’re adding the desktop app converter for Win32 and .NET apps and games.  What this converter does is take an existing desktop app and convert it into a Centennial package, what we now call a modern desktop app.

Developers can now take existing desktop apps and games and submit them to the Windows Store.  With this conversion, code continues to run with very minimal changes.  In addition, modern desktop apps also have access to all Universal APIs when running on Windows 10.  Let’s see this in action.  Yes.  (Cheers, applause.)

So here I am.  I’m going to kick this off.  I’m running the desktop app converter, which takes an existing installer; in this case it’s an MSI, but it can be any installer out there.  We’re working with Sage, an industry leader in accounting software, to convert their Sage 200 application into a modern desktop app.

The converter then produces an AppX package that Sage can take and submit to the Windows Store, just like any other universal application.

So what I’ll do here, this is the AppX that it produced.  So I can just double-click on it now.  We’ve added support for double-clicking AppX packages to make it easy to install and debug locally.  (Applause.)  Yes, that little delighter right there.

But so that I can get live tiles and full integration, I’ve actually already installed this from the store.  And so you see here in the start menu that Sage’s Win32 application now has access to live tiles.

It shows some of the figures; I can click it here and it launches the application.  I’ll minimize that.  And you’ll see it’s also able to pop a toast, which I can go dismiss.

Now, let me show you in Visual Studio what I had to do to make this work.

So this is a single solution here, and I have two projects.  The first is the Win32 code.  So this is just your standard Win32 code here with no modification.  So while I modified it, I didn’t have to.  It would have just worked.

Then I added this extra project, which is the Universal code that adds all the live tiles.  And so what happens is when Sage loads, it detects if it’s running on Windows 10; if so, it loads that DLL and uses the extra functionality; otherwise, it continues like it was.

And over here, you can see that boilerplate, standard notification and live tile code that is just straight out Universal Windows code.
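For flavor, a live tile update is ultimately just a small XML payload handed to the platform’s tile updater.  Here is a hypothetical sketch of building one (the helper, the template choice, and the text are illustrative, not Sage’s actual code; in a packaged app this string would be fed to the WinRT tile update API):

```javascript
// Build the XML payload for a simple adaptive live tile update.
// Only the string construction is shown here; handing the document
// to the tile updater happens inside the packaged app.
function buildTileXml(line1, line2) {
  return [
    "<tile>",
    "  <visual>",
    '    <binding template="TileMedium">',
    `      <text hint-style="caption">${line1}</text>`,
    `      <text hint-style="captionSubtle">${line2}</text>`,
    "    </binding>",
    "  </visual>",
    "</tile>",
  ].join("\n");
}
```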

So I’m also happy to share that Sage 200 will be coming to the Windows Store this summer.  (Applause.)  It’s awesome.

For those of you who use InstallShield or the WiX toolset, we’re working with both Flexera and FireGiant to produce modern app packages directly, so you don’t even have to use the converter.  Just update, add to your project, and now you get both an AppX as well as your MSI.

We’re making it easy for developers to bring more than 16 million Win32 and .NET apps to the Windows Store.

Now, finally, we want Windows to be a home not just for developing apps for Windows, but for developing apps for all devices.

When I talk to developers, I hear that many of you use an architecture where you have a shared common core, you build native UI for each of the platforms that you’re targeting, and finally you light up native capabilities with platform-specific APIs for things such as the camera, notifications, and commerce.

C# developers typically start with UWP to target the ever-growing base of Windows 10 devices, and now, with Xamarin, you can target iOS and Android as well.

Let me hop over to Visual Studio and show you.

So here we’re looking at the code for the real Microsoft Health app, which is a companion app to the Microsoft Band.  How many of you guys have got Bands out there?  Absolutely awesome.  (Cheers, applause.)

So Microsoft Band users have Windows, Android, and iOS devices.  So the health team needs to have an app for all of them.

The UWP app that they started with uses a XAML UI on top of a C# library containing the core business logic and the service connectivity.

Because the health app was designed this way, it was easy to use Xamarin to take it to iOS and Android.

Let me show you here the project.  So at the bottom here is one of the projects that I have.  This is the bulk of the code.  This is all the shared code among all the different platforms.

And you see the huge amount of code.  Like I said, this really is the actual Health application.  We’re not showing the code on the left side; we didn’t want to share that.  (Laughter.)  Like a lot of you, we don’t always want to show the real code.

And now I have a project for each of the heads.  One for their Universal App that they started with, and then one for iOS and one for Android.

The nice thing is that with Xamarin, you can not only debug in an emulator for Windows, but we now also have the ability to do inner-loop development for the Android head.  So you have the Android emulator here that you can go run, and the Windows emulator as well.

In addition, for iOS developers, we’ve also added the ability to have a remote designer.  So you can stay in Visual Studio the entire time.  Who loves staying in Visual Studio the entire time?  (Cheers, applause.)  I do.

So here you just get a remote designer, they can just tweak that UI, you can go tweak it for iOS as well.

So Windows can now be your home for developing apps for all devices.

Now, before I go, I want to personally thank you, our developers, who have taken the time to give us feedback and to partner with us to build a great platform together.  Keep the feedback coming.  We cannot build the platform without you.  Now back to Terry.  (Cheers, applause.)

TERRY MYERSON:  Thanks, Kevin.

With the Anniversary Update, we’ve really been focused on making Windows 10 the best home for all of your development.  Enabling you to choose whatever shell you want, whether that be PowerShell, DOS, Bash, or more coming soon.

Enabling you to embrace new device capabilities like Windows Hello or Windows Ink.  Enabling you to take your existing .NET and Win32 applications and distribute them through the Windows Store and enhance them with all the new device capabilities we’ve been talking about.

We’re enabling you to build new .NET code bases and take them cross platform.  And, of course, if you’re looking for the highest-performance precision laptops and tablets to do all of your development on, we have Surface Book.  There really has been no better time to be a Windows developer.  (Applause.)

Now, with this Anniversary Update, we have experiences that go beyond the PC.  Let’s take gaming:  you want to go beyond your mobile devices into your living room, taking advantage of the game console you have there, your Xbox One.  Or HoloLens.

This is the world’s first fully untethered holographic computer.  In the last two years, HoloLens has gone from a dream to a prototype, and now today we get to see what real customers are doing with it every day.  (Applause.)

So now please welcome Phil Spencer to discuss gaming, and then Alex Kipman will join us to discuss holographic computing.  Thank you.  (Cheers, applause.)

(Video:  Forza.)

PHIL SPENCER:  (Cheers, applause.)  Hello, everyone.  Good morning.  I’m Phil Spencer.  I’m proud to be here at my very first Build Conference to represent the talented and creative team responsible for gaming at Microsoft.

You’ve heard this morning about how Windows is the best platform for developers.  Today, we’ll spend time talking about how Windows is the best platform for the vast community of game developers.

The Universal Windows Platform provides developers the ability to write a game and deploy it across all Windows 10 devices from desktop PCs to tablets and phones and Xbox One.  (Applause.)  Thank you.

This morning, we will demonstrate how Windows 10 provides the most productive and efficient platform for developers of all sizes because Windows is and will continue to be an open development ecosystem where anyone can build, deploy, sell, and service their games and applications.  (Applause.)

In the opening video, we got to hear directly from one of our first-party studios, Turn 10, on the impact the Universal Windows Platform had on bringing Forza Motorsport 6 Apex to Windows 10 and how UWP can benefit both developers and gamers alike.

Through the Universal Windows Platform, our plan is to deliver games that run better on Windows, with more predictable performance; more robust install, uninstall, and servicing capability through a modern application platform; greater safety for users through a protected runtime environment; and distribution of modern desktop applications in any store, including the Windows Store, and via any deployment mechanism.

We recently launched three blockbuster franchises in the Windows Store:  Rise of the Tomb Raider, Gears of War Ultimate Edition, and just yesterday, Killer Instinct Season 3.  And I’m looking forward to the upcoming launch of Quantum Break next week.

We have heard the feedback from the PC gaming community loud and clear, and we’re working to ensure Windows 10 has a great game experience.

We will be enabling the ability to disable V-sync and adding support for G-Sync and FreeSync in May.  DirectX 12 added support for new and multiple-GPU scenarios, which work today for both Win32 and UWP games.  And we’re committed to meeting or exceeding the performance expectations of full-screen games, as well as delivering the additional features requested, including support for overlays, modding, and more.

We also know many of you have Win32 game code bases today, and we want to give you the ability to easily package your games as modern desktop applications through the desktop app converter.

Let’s take a look and see how that works.  So one of my favorite games, and I hear it’s a favorite of our founder, is Age of Empires.

So what we did is we took Age of Empires 2, you can see it pinned to the start menu.  Now this is the package you would get from Steam, where it’s available today.  And, in fact, if you watch, you’ll see the live tile turn and you’ll see the number of people that are actually playing this game live right now on Steamworks.  We have 5,223 players.

So what we did is we took the package from Steam, ran it through the converter, and turned it into a modern desktop application.

So let’s launch it to see how it runs.  You can see it launched.  Our splash screen.  Steamworks kicks up.  And here it is.

So we’re now running just as it would run if you bought it from Steam today.  Multiplayer is here.

One of the requests we’ve heard from people is support for mods and how that’s going to work.  So if you know anything about Age 2, you know in the Steam workshop, here’s the list of mods that are available for Age 2.

I can select any of these, apply them against the running version of the game here, and it would work just fine.  But I want to actually boot the game and just show you, the proof is here.  And there you go.  So Age 2 HD running as a modern desktop application taking advantage of all the features that Windows 10 enables.

But while I love Age 2, we wanted to challenge ourselves a little more to see how our platform was holding up.

So we did an experiment, and this is just an experiment, with our friends at CD Projekt RED.  We took The Witcher 3, which for many people is the 2015 PC game of the year, an amazing game, ran it through the same converter, and let me show you how it works.

So here it is pinned to the start menu, it’s a bigger game, takes a little bit longer to load.

Here it is.  So what you’ll see is there it is, The Witcher 3, running at full frame rate as a modern desktop application.  (Applause.)

This allows these games to take advantage of the common services and technology in both Xbox and Windows 10:  platform features like live tile support and notifications, like you just saw in this demo and in Kevin’s demo earlier.

We have consistent input support with controller and mouse and keyboard across all of our devices.  And distribution in the Windows Store or any other store.

We’ve said from the start we want Xbox One to be a great place for games, and also a great place for developers of all sizes so you can create new games and application experiences.

Today, I’m pleased to announce Xbox Dev Mode, giving developers the ability to convert their retail Xbox One into a development kit.  (Cheers, applause.)


PHIL SPENCER:  Hi, Ashley.

ASHLEY SPEICHER:  We’re building a truly universal platform where your apps run everywhere, including the Xbox One, which provides a huge opportunity for app developers to bring their experiences to life in the living room.

So now I’m going to show you how anyone can take a retail Xbox One and turn it into a dev kit.

OK, so I was playing Forza over here, but what I really want to do is work on my app, which I’ll show you in a minute.

First, I need to put my console into Dev Mode.  So I’ll go over here and I’ll run the Dev Mode Activation App, which I’ve already downloaded from the store.

This app allows you to register your console as a dev console, which I’ve already done using my same account in Dev Center, where I submit all my other UWP applications.

This means that I can switch to Dev Mode with a single button press.

Once the Xbox is finished rebooting, it will be in Dev Mode, which means I can deploy and debug my app.

Now, while it’s rebooting, let’s go over here to the PC and look at it running on the PC where I’m actually writing it.

So you can see how the UI elements move and adjust as I change the size of the window.  And now let’s see how we can deploy this same app to the Xbox One.

So I’ll stop debugging over here on the PC, and I’ll switch the deployment target from Local Machine to Remote Machine; this is the IP address of my Xbox.  And then I’ll just hit go.  And since I’ve already paired Visual Studio with my Xbox, I can do a remote deploy just like to any other Universal Windows device.  (Cheers, applause.)  Yeah, right?  (Cheers, applause.)

OK.  Right, so that’s my console, and here’s my app running on my Xbox One.

So this is the exact same code that we were just debugging over here, but now that it’s on the console, you can tell that it’s been optimized for the living room.  The good news is that the Universal Windows Platform did a lot of the heavy lifting.

For example, the controller just works.  All the XAML controls have already been optimized to work with the gamepad just like you would expect, including focus navigation.  And not just on Xbox:  it works the same way on the desktop when you have a controller plugged in.  In fact, all XAML controls are designed to work great with touch, mouse, and controller.  (Applause.)  Yeah.  That was a lot.

It also means that you can continue to test when you don’t have your Xbox, like on your flight back home.

Now, since this is a Universal Windows Application, it also has access to the same universal APIs, including APIs for speech recognition and synthesis.

So I’ve added a fun feature, which allows me to add speech bubbles to my photos.  I can simply use the speech recognizer to fill them in, which on an Xbox is way easier than typing.

So let me show you how this works.  We’ll caption this photo right now.

Lions bark, right?  Boom.  There we go.  (Applause.)  So much easier than typing.
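That speech-to-caption flow can be sketched roughly like this (a hypothetical helper; the recognizer object mimics a WinRT-style async recognizer and is an illustrative stand-in, not the app’s actual code):

```javascript
// Fill a speech bubble from a recognizer when one is available,
// falling back to typed text otherwise. The recognizer is expected
// to expose an async recognizeAsync() that resolves to { text }.
async function fillSpeechBubble(recognizer, typedFallback) {
  if (recognizer) {
    const result = await recognizer.recognizeAsync();
    return result.text;
  }
  // No recognizer (e.g. no microphone): keep the typed caption.
  return typedFallback;
}
```

Because the platform APIs are the same across devices, the identical caption logic can run on the PC during development and on the console after a remote deploy.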

OK, I want to show you guys one more thing.  So in addition to the Dev Mode Activation App I showed earlier, there’s another app called Dev Home, which will automatically get loaded onto your console once you’ve put it in Dev Mode.

And when you run it, it has everything that you need as a developer, from account management to developer settings.  Most importantly, once I’m done working on my app for the night and I want to go back to playing Forza, I can just go ahead and press Leave Developer Mode, and I’m back to retail mode.

I should mention that a preview of Dev Mode will be available starting today.  (Applause.)

PHIL SPENCER:  Thanks, Ashley.  Our commitment to turn every Xbox One into a dev kit isn’t a hobby, it’s a commitment.  It’s a commitment to empower every developer on the planet to reach the largest addressable TV audience on one open app platform.

But enabling developers to create and deploy UWP apps and games on Xbox One is just the beginning.  With the upcoming Windows 10 Anniversary Update, we will continue our progress that brought Xbox Live to all Windows devices and the Xbox app to Windows 10 where millions of gamers are connected around the world.

We will also be adding a single unified store across devices, giving developers new features and consumers a consistent experience.  We will bring features that game developers have come to expect and require on Xbox One into the Windows Store, including support for bundles, season passes, preorders, and more.

We will also be bringing Cortana to Xbox One.  Cortana will become your personal gaming assistant, helping you find great new games and new challenges, or helping you with tips and tricks.  And it lays the foundation for some of the most-requested new features, features like background music, which will also come with the Anniversary Update.  (Cheers, applause.)

We’ll announce new Anniversary Update features for Xbox One and Windows 10 at E3 in June.

Finally, I want to close with a technology that allows developers to maximize their own investment across Xbox One and Windows 10.  DirectX 12 allows developers to unlock the full potential of the graphics hardware in PCs and in Xbox One.  It’s the most powerful and efficient graphics API we’ve ever shipped.

We’re seeing incredible industry adoption of DirectX 12.  In fact, it’s the fastest adoption of any DirectX version in history.

Let’s take a look at what that power can deliver when put in the hands of the greatest developers on the planet.  Thank you.  (Applause.)

(Video:  DX12.)

ALEX KIPMAN:  Good morning, everyone.  Wow, what a phenomenal time to be a Windows developer.  Do you guys realize that from Xbox to holograms, Windows and our Universal Windows Platform are where this future is being created, a future that isn’t possible on any other platform or set of devices?

Last year, here at Build, we asked you to share in our dreams for our holographic tomorrow, a future where we bring 3D holographic content into the real world, enhancing the way we experience life beyond our ordinary range of perception.

And today, today is the day where we leap from dreaming about mixed worlds to having Microsoft HoloLens become a reality exclusively on Windows 10.  (Applause.)  Thank you.

And to help us celebrate this moment, help me welcome on stage HoloLens’ co-creator, Kudo Tsunoda.  (Applause.)  Hey, Kudo.

KUDO TSUNODA:  Thanks, Alex.

We’re elated to announce, and we couldn’t be more proud, that today, March 30th, 2016, Microsoft HoloLens, the first and still the only untethered holographic computer, will start shipping to our Windows developers and to our enterprise partners.

ALEX KIPMAN:  Our journey starts right here, right now, with all of you, our creators, our dreamers, our Windows developers.  (Applause.)

I have waited for this moment for a very long time.

And because HoloLens is a Windows 10 device, all of our tools are familiar.  And your investment in UWPs will continue to carry forward.

But to help a little bit with inspiration, let me update you on a project that we launched earlier this year called Share Your Idea.

The community submitted ideas for applications that we could build.  All of you voted, and we built it.  We built Galaxy Explorer in just six short weeks, and we chronicled the project all along the way.

This week, we finished the project.  And today we are announcing that the project is going to go up on the Windows Store.  And all of the code will be made available online via GitHub.  Let’s take a quick look.  (Cheers, applause.)

(Video:  Share Your Idea / Galaxy Explorer.)

ALEX KIPMAN:  (Applause.)  From individual developers to customers like Volvo, Autodesk and Trimble, thousands of people have experienced HoloLens over the last year.  Dreaming about holographic experiences that eventually become pilots, and ultimately become real experiences deployed in production that change the way we work, communicate, learn and play.

Take Trimble and virtual buildings as an example.  They are rethinking the entire construction planning process from first pitch through groundbreaking development.

Or take Japan Airlines, on the other hand.  They have a vision for holographic computing that will impact every area of their training and operations.

Now, these are just two examples of companies and organizations of all shapes and sizes across a wide range of industries who are piloting amazing Universal Windows Applications created specifically for mixed reality and our new holographic landscape.

A pilot and proof of concept will eventually move into production.

Last year here at Build, Professor Mark Griswold joined me on stage.  And we showed you a proof of concept where we used HoloLens to innovate on how medical students learn.

At the same time, Case Western Reserve University and the Cleveland Clinic broke ground on the construction of their brand-new medical school, a facility, an amazing facility, that will use HoloLens as a key part of their curriculum.

Now, over the last 12 months, this proof of concept has moved into production and today I am honored to have Dr. Pamela Davis, dean of Case Western Reserve University School of Medicine, and two of her team members, Henry and Jeff, join us to tell us more about teaching medicine with HoloLens.  Welcome.  (Applause.)  Dr. Davis, thank you for coming on stage.

PAMELA DAVIS:  Hi, Alex.  Good morning.  At Case Western Reserve, our motto is, “Think beyond the possible.”  And today, we and the Cleveland Clinic are constructing a state-of-the-future health education campus.

Within this building, our students will learn using the most forward-looking educational programs.  HoloLens is a key part of this.

Now, last year, we showed a few things that we thought might be possible with HoloLens.

Since October, we’ve had a small team of three computer programmers turning our ideas into reality.  Today, we’re going to show you some of our work on our holographic anatomy program, an example of the hours of curricular content that we have created.

Now, anatomy is all about mastering the complex systems in the body.  And you can see from the digestive system here that there are a lot of parts in a small space, and students need to understand not only how these parts fit together, but how they work together.

Henry, can you tell us which organ in the body aids in digestion and also makes insulin?

HENRY EASTMAN:  Yes, that would be the pancreas.  It can be a little hard to see it behind the stomach here.  But, thankfully, with HoloLens, it’s really easy to get the best view of things.

PAMELA DAVIS:  Now, another area that’s difficult for our students to fully grasp is the nervous system.  HoloLens can make this much easier by improving our interactive learning beyond the single classroom.

Professor Mark Griswold can teach us about the anatomy of the brain, even though he’s in Cleveland.  Let’s show you how this works.  Are you there, Mark?

MARK GRISWOLD:  I am.  Hi, Pam.  Hi, everybody.  Welcome to our new classroom.  This is our new system that allows me to teach and interact with you even though I’m not there at Build.  I’ll tell you, this feels really natural.  The three of us can see each other as we’re talking, which makes us feel connected.

And even though you can only see my head and hands, you get a sense of my body language.  This is really changing what it means to be “in class.”

PAMELA DAVIS:  Mark, let’s show them the white matter tracts.  These tracts act like super highways, allowing messages to travel from one part of the brain to another.

This example comes from an MRI scan of a real patient from the laboratory of Professor Cameron McIntyre at Case Western Reserve University.

MARK GRISWOLD:  It’s critical that our students know how these structures relate to one another.  I can tell you, I’ve looked at data sets like this for 10 years, and I never fully understood their 3D structure until I saw them in HoloLens.

Let’s say we have a patient with a brain tumor, which we’re showing here with this red area.  Hey, Jeff and Henry, can you see how this tumor intersects the light blue tract, but not the green tract or the yellow one?  This allows us to predict the impact of the tumor on the patient’s symptoms and their outcomes after surgery.

JEFF MLAKAR:  This could impact the occipital lobe back here, which may harm the patient’s vision.  It might also influence the parietal lobe here, which would affect their sense of touch.

PAMELA DAVIS:  The quicker our students learn facts like these, the more time they have to think with them.  We’re teaching them to think like a doctor.

Thanks, Mark, for joining us today.

MARK GRISWOLD:  Bye, everybody.

PAMELA DAVIS:  Being untethered and able to walk around 3D holographic content gives our students a real advantage.

Students have commented that a 15-minute session with HoloLens could have saved them dozens of hours in the cadaveric lab.  When we have only four short years to train them, this is invaluable.

We’re so grateful to Microsoft for giving us the opportunity to develop this technology, and we can’t wait to collaborate with others using HoloLens to think and to see beyond the possible.  (Cheers, applause.)

ALEX KIPMAN:  That’s amazing, thank you so much.  Thank you so much.  Wow.

Now, NASA was our first partner, and they have also made incredible progress over the last year.  They have developed several applications that are already deployed in production.  Let’s take a quick look.

(Video:  NASA/JPL.)

ALEX KIPMAN:  (Applause.)  Thank you.  And here’s my favorite part, NASA brought Mars here to the Build Conference so that all of our Windows developers can be the first people in the world to view Mars from the same perspective as NASA scientists working today.  (Applause.)

HoloLens is shipping today.  Thank you.  This marks another great step on our journey of interacting with computers in ever more-personal ways.  My ask of all of you:  actually join us and help us create this new reality.

Our partners have done exactly that over the last 12 months, and the progress is nothing short of spectacular.

So let me leave you today with a snapshot of their tremendous creativity.  I hope to see all of you this week here at Build.  Thank you.  (Applause.)

(Video:  HoloLens Commercial Partners Montage.)

SATYA NADELLA:  It’s fantastic to see the progress on the Windows platform.  One of the things that I’m most excited about is how Windows can become the dev box of choice.  Whether you’re writing Win32 apps and converting them into modern desktop apps, writing universal Windows apps, Web apps, even Linux back ends, or iOS or Android apps, you can do them all right on Windows.  And it’s fantastic to bring that developer love to Build.

We’re also really excited about the opportunities, the new frontiers, that Windows represents for all of you as developers.  Whether it’s that next pen app, a speech app, computer vision, or holograms, these new frontiers are a fantastic opportunity.  You can learn a lot about them at Build, and we really hope to see your creations in our store.

So I want to now switch gears to talk about Conversations as a Platform, which, as I said, sits at the confluence of all three of our platforms:  Azure, Office 365, and Windows 10.  It is best brought home with an anecdote that Qi Lu shared with me about his mom, who is 80 years old and lives in China.  Because of Qi’s influence, she had been trying to use computers, and she found it very hard.  Even the Web, although it is one of the greatest democratizing forces of computing, she found hard to navigate; she was uncomfortable clicking on links and moving between websites.  The mobile phone and the app revolution further democratized things, but even there, finding the right apps was difficult.  And so she never got around to using them, but she was using SMS.  And now with WeChat she’s able, at the age of 80, not only to have all of her conversations with people seamlessly, but also to interact with businesses and bots.

And that shows the power of human language.  We want to take that power of human language and, as I said earlier, apply it more pervasively to all of the computing interface and the computing interaction.  And to do that, though, you have to infuse into the computers and computing around us intelligence.  That means you have to bring forth these technologies of artificial intelligence and machine learning so that we can teach computers to learn human language, have conversational understanding, teach them about the broad context, people, places, things, context about your preferences, your personal knowledge, so that they can really help you with your everyday tasks and everyday life, both at work and elsewhere.

And so as we infuse intelligence into everything I think it’s very important to have a principled approach, have a way to guide our design as well as how we build things.  So at Microsoft we are focused on an approach that’s grounded in our mission to empower people and organizations all over the planet to achieve more.  There are two core principles here.

First, we want to build intelligence that augments human abilities and experiences.  Ultimately it’s not going to be about man versus machine.  It is going to be about man with machines; each excels at very different things.  We have creativity, empathy, emotion and judgment, which can then be mixed with fast computation, the ability to reason over large amounts of data and fast pattern recognition.  It’s that bringing together that I think is going to help us move our society forward.

We also have to build trust right into our technology.  That means you have to have technology with built-in protections for privacy, with transparency, and with security as well as compliance.  And, lastly, all technology that we build has to be more inclusive and impactful.  We need to build technology so that more people can use technology.  And we want to build technology such that it gets the best of humanity, not the worst.

And this is already guiding us as we build Skype Translator to have conversational understanding, as we build out Cortana to have more expertise, and as we build HoloLens to have environmental understanding.  We also use this approach when we get it wrong.  In fact, just last week when we launched our incubation, Tay, which is a social bot in the United States, we quickly realized that it was not up to this mark.  And so we’re back to the drawing board while we continue to incubate XiaoIce and Rinna in China and Japan and learn why the social bots there work differently.  And that’s how we’re going to make progress.  This approach is always going to be front and center for us.

When we talk about Conversations as a Platform, there are three aspects.  There’s us, people: we want to be able to have the most natural conversations with other people.  We want, in fact, to enrich our conversations, and Skype Translator is a fantastic example of that, because I’m able to talk across two different languages and have automatic translation.  We may also have presence.  You saw in the holographic demo how you can have remote presence.  These are all ways for us to enhance human conversations.

But we want to take that same power of human conversations and apply it to everything else.  And that’s where these other actors come in, a personal digital assistant that knows you, knows about your world and is always with you across all your devices helping you with your everyday tasks.  And bots, bots are like new applications that you converse with.  You’re not really looking for multiple applications, pages and pages of websites or apps, but you’re just able to call on any application as a bot right within a conversational canvas.

So this is the rich world of conversations that we envision: people to people, people to your personal digital assistant, people to bots, and even personal digital assistants calling on bots on your behalf.  That’s the world that you’re going to get to see in the years to come.

And so you can conceptualize this platform where human language is the new UI layer.  Bots are like new applications, and digital assistants are meta apps or like the new browsers.  And intelligence is infused into all of your interactions.  That’s the rich platform that we have.

And so let’s start by talking about Cortana.  We introduced Cortana two years ago, and ever since then it has been getting smarter every day because of its ability to know you, to know about your organization, to know about the world, and to reason about all of this on a continuous basis.  It’s built natively into Windows, but it’s not limited to Windows.  It’s going to be available on all your devices, whether iOS or Android, and across all of your applications, because if it’s going to be a useful personal digital assistant it has to be about you, not a single device.  And that’s really how we are building out Cortana.  (Applause.)

And we’re also making the expertise of Cortana extensible.  It’s truly unbounded in that way.  Developers have the opportunity to add their expertise, the expertise that they have in their applications and their services, into Cortana, so that it can further drive engagement for their applications and services.  That developer opportunity is something that we want to talk about.  So to show you how this unbounded personal digital assistant works across all of your devices, and creates this ability for developers to extend it with your expertise, please help me welcome Marcus Ash from the Cortana team.  (Applause.)

MARCUS ASH:  Good morning, everyone.  It’s great to be here at Build to get a chance to give you an early preview of our future plans.  First I want to talk about a new experience in the anniversary update.  We want Windows to be the best place to have a voice conversation with Cortana.  Cortana is answering 1 million voice questions per day.  We have almost 1,000 applications that integrate with Cortana voice commands, and today there’s a new Cortana App Store collection available so customers can check out some great examples.

We want to go even further and make a voice conversation with Cortana as easy as a voice conversation with a human.  So now I’ve got this beautiful all-in-one with a nice big screen.  You can see at the top left corner Cortana is ready for me to have a conversation.  I’ve got to go to Sausalito tonight; I don’t know where that is, and I don’t know how long it’s going to take.  Cortana can help me with that.

Hey, Cortana, how long will it take to get to Sausalito?

CORTANA:  It will take 27 minutes to drive to Sausalito.  It’s about 12 miles away.

MARCUS ASH:  All the things you’d expect to work in this experience on the lock screen, I can play my music, I can set an alarm.  And this also gives us a much bigger stage to let Cortana’s personality shine through.

Hey, Cortana, tell me a science joke.

CORTANA:  Sodium, sodium, sodium, sodium, sodium, sodium, sodium, sodium, Batman.  (Applause.)

MARCUS ASH:  Na-na, na-na, na-na, old school Batman.  Are you guys with me?  All right, this experience is going to be shipping in the anniversary update.  It’s also going to be available in 13 countries.  Cortana is going to be here all week at the expo fair; please go check her out.  But now I want to show you how she helps me get stuff done.

Here’s an early preview of some new experiences that we’ll release in the coming months.  Notice that Cortana has come into Outlook and scheduled some appointments for me; they’re the blue ones with the circle.  The idea is that Cortana is now in Outlook and, with my permission, she’s looking at my email and my calendar, helping me get things done and stay on top of things.  A good example: this week I was exchanging a bunch of email with our Microsoft travel service trying to figure out the best time to get home on Friday.  Cortana was able to understand that conversation without me doing anything.  I didn’t create an appointment; she automatically put all that flight information on my calendar, making it really easy for me to track.

She applies the same conversational intelligence to help me stay on top of commitments in email.  I get a ton of email, which makes it easy for me to forget things I said I would do.  So here Cortana has flagged something I said I’d send my manager today.  I was actually going to get ahead of this on the plane, but the siren song of that flight infotainment system got me again.  No problem.  Cortana is there and she knows the last time I made an update, she knows where the document is.  I can ask her to go take care of this for me.

Send Chuck the PowerPoint that I worked on last night.

CORTANA:  Sure thing.  I have found this PowerPoint document.  Is this it?

MARCUS ASH:  I can go ahead and get that sent.  Since Cortana is not bound to specific devices ‑‑ (applause) ‑‑ thank you.  Now since Cortana is not bound to a specific device she helps me keep track of notifications that are happening right here on my Android phone while I’m doing work on my PC.  So here I’ve got a notification, a text message that came in from my friend Ben.  He’s here in the Bay Area.  We went to Michigan together, Go Blue.  And we were talking about getting together after the keynote.

Any Michigan grads out there?

There we go.  I can go ahead and respond right here from my PC.  Cortana knows that I’m talking about meeting with Ben, so she offers to add this to my calendar.  And then she tells me I’ve got a couple of other things there, and asks, can I help you move these?  Let’s see, Ben or some work-related stuff?  I choose Ben.  And what she did there is automatically shift my appointments from noon on Wednesday to the free block of time I have on Thursday.

Now, since that meeting with Ben is over lunch, you can see here that Cortana has brought in some proactive actions that she thinks can help me.  Cortana has a team of experts that she relies on.  The idea is that Cortana doesn’t need to know everything; she can go and ask experts for help.  Experts are all of you developers who are creating these amazing apps and services that can integrate right into Cortana.

So in the case that this meeting is over lunch, I might want to book a table; or we might not have time, so I might need to bring in some food; or she can offer some helpful suggestions about things to do.  Here you can see a bunch of options.  City Food Tour sounds great; here is some information about that.  Notice the option to buy tickets.  Extensibility also means helping me get that action done.  Here we’ve created a very simple sample application, City Food Tour, and notice Cortana has passed in some context for me: there are two of us in the meeting invite, and here is some other information.  I can confirm my order and now I’ve got tickets purchased.

This same experience also works for business applications.  So the next time I go to Cortana she’s queuing up another thing she thinks I might want to work on.  In this case I took a taxi last night, Cortana found the receipt in my email, and she’s offering to help me file that expense report.  We use the Microsoft Expense application internally to track this kind of thing.  When I click on that, notice that all of the information and details from that receipt are put into the application, and I can go ahead and choose to save it.

Since Cortana is not bound to a particular device, I can finish all these tasks here on my Android phone.  So when I go ahead and launch Cortana, you can see that my meeting with Ben is here, I’ve got the tickets to the City Food Tour, and the Microsoft Expense application is here.  I can go ahead and choose to submit that, and it’s all taken care of.

Since the phone is my most personal device and it’s with me all the time Cortana can augment me in very interesting ways.  So I have a 5-year-old son.  And I started this tradition two years ago when I brought him home a toy from a business trip.  Now every time daddy goes on a business trip little man thinks it’s his birthday and daddy better deliver and bring home yet another toy.  I don’t remember the name of the toy store that I went to last year when I was here.  But it has great Transformers and things that he wants.  So since Cortana is with me she can help enhance my memory.

What toy store did I go to last year at Build?

Here you can see she finds the correct toy store and, again, she brings in helpful proactive actions.  Now let me show you how easy it is to create these actions.  Here I’m on the developer portal.  Today we’re announcing an invite-based preview for proactive actions.  And here’s the portal that you can use to connect your apps and websites to Cortana.

I want to share two examples of partners that have already integrated: Glimpse and Just Eat.  Glimpse provides an easy way to safely share your location with someone in real time.  They had a great idea for improving the “I’m running late” experience in meetings.  Instead of fumbling around trying to send a text message and figure out when I’m actually going to get there, I can just send a Glimpse, which gives all the attendees real-time tracking of me on a map, along with my ETA.  Just Eat is a leading food delivery service based in the U.K., and they’re very excited about scenarios like helping you order dinner when you’re working late.

Let me show you how easy it was for Just Eat to configure the portal, only a few steps.  The first thing you do is you go and add insights.  Insights control when Cortana surfaces your action.  That’s this column right here.  This column is the contextual info.  This is the information that you want your app to request from customers.  So in the Just Eat example, Just Eat would like to know my cuisine preferences, because they could do a much better job of giving me helpful suggestions if they knew that I loved sushi.  Customers are always in control of the data that Cortana shares.  So she’s always going to ask permission before any data is sent to applications or services.

Here are the deep links that you create to open your app.  These deep links are not restricted to a particular device or platform.  We support Windows applications.  We support Android applications, iOS applications are coming in the future, and we also support websites, and then finally some metadata that you add that shows how the experience shows up in the Cortana UI, along with an icon.  So here you can see a sample appointment and both Glimpse and Just Eat are here as actions.  It only takes ‑‑ it took Just Eat 15 minutes to get their app registered to work in this experience.  (Applause.)
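The registration Marcus walks through — an insight that triggers the action, contextual info the app requests, deep links per platform, and display metadata — can be pictured as a simple declarative record.  This is a hypothetical sketch only: the field names, link formats and the `resolveDeepLink` helper below are invented for illustration, not the portal’s actual schema.

```javascript
// Hypothetical registration record in the spirit of the Just Eat example.
// Contextual info is shared only after Cortana asks the user's permission.
const justEatAction = {
  insight: "mealtime_meeting",          // when Cortana surfaces the action
  contextualInfo: ["cuisinePreferences", "partySize"],
  deepLinks: {                          // one link per supported platform
    windows: "just-eat:order?context={context}",
    android: "intent://order?context={context}#Intent;package=com.justeat;end",
    web: "https://www.just-eat.example/order?context={context}"
  },
  display: { title: "Order food with Just Eat", icon: "justeat.png" }
};

// Pick the right deep link for the user's device, falling back to the
// website, and fill in the context Cortana passes along.
function resolveDeepLink(action, platform, context) {
  const template = action.deepLinks[platform] || action.deepLinks.web;
  return template.replace("{context}", encodeURIComponent(context));
}

resolveDeepLink(justEatAction, "web", "partySize=2");
```

Because the deep links are plain URIs, the same registration reaches Windows and Android apps as well as websites, which matches the cross-platform claim above.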

We’re looking forward to seeing all the great ideas you have for integrating with Cortana.  Please request an invite to the Cortana developer preview.  And I hope to see you at the Cortana session.  Thank you.  (Applause.)

SATYA NADELLA:  Thanks, Marcus.

That gives you a glimpse of how Cortana is a truly unbounded personal assistant that’s always with you.  And for developers it represents a new opportunity for you to be able to take the expertise that you have in your applications, the intelligence that you have in your services and then register them as extensions and insights and actions.

So we want to now move to another canvas, another set of tools where conversations are taking place.  These are our communications tools.  There are a wide variety of them.  In fact, we envision a world where there are going to be more and more communications tools, because that’s one of the most innate things that we as people do.  And we want all of these conversational tools to become rich canvases for computation.

In our case we have Outlook and Skype, and we want to turn these into rich conversational canvases.  Skype has over 300 million connected users each month.  Every day there are over 3 billion minutes of voice calls happening on Skype.  So it’s a tremendous amount of activity.  And so now imagine if we can open up Skype not only to have all the rich people-to-people conversations, the things that we are doing with Skype Translator or Skype on the HoloLens, but what if we can now bring your personal digital assistant right into your conversational canvas?  What if we brought the world of bots, the new applications, to be able to do anything, like hire a cab or book a ticket, anything that you may have done historically with applications and websites, now with bots right from within Skype?

We don’t even stop there.  We want to add richness to these bots.  So it’s not just text bots.  We want to have animation, we want to have videos and we also want to have holograms.  And to show you this rich world of conversational canvases, starting with Skype, I wanted to invite up on stage Lilian from the Skype team.

Lilian?  (Applause.)

LILIAN RINCON:  Thank you, Satya.

All right, hello, everyone.  I’m super-excited to be here with you today.  I get the pleasure of showing you our vision for how we see intelligence integrated into one of the most popular ways that people communicate today, Skype.  The Skype that you know and love is going to be smarter, more helpful and entertaining.  We’re moving into a world where you will soon be able to plan trips, shop, even talk to intelligent bots, all from within your Skype chats.

I’m going to show you a demo of this next generation of Skype.  Since it’s Skype it will work on iOS, Android, Mac, Web, you name it.  But I’m going to show it to you on my Windows phone.  So here I see that I have one missed notification, let’s see who that’s from and it’s from Gurdeep.  For those of you that don’t know Gurdeep is our corporate vice president for Skype.  So I can’t ignore this.  So let’s go into it.  And you see I have a visual video mail from Gurdeep.  I’m going to play it.

GURDEEP SINGH PALL (via video mail):   Hey, I just learned that you’ve been asked to keynote Codess again this year.  Congratulations, and thanks for everything you do for Skype.

LILIAN RINCON:  That is amazing.  I’m a huge fan of Codess.  I’m going to send him a quick thank you.  So right away, as you’re seeing here, there are some new things in this next generation of Skype.  You see this visual video mail.  So we have a video message with the transcript right below.  This is one of the examples where we’re taking intelligence from things like Skype Translator and bringing them directly into everyday features like video messages.  And now Gurdeep is telling me the team has sent me a little something.  So that’s really nice.  I’ll send him a quick little happy emoticon.

So in this next generation of Skype the other thing that you’ll notice in the canvas is my personal assistant Cortana is always there in the upper right hand corner.  Cortana is there if I want to talk to her one-on-one, or in a group, or even as you can see here she’s steadily improving my messages by highlighting key points in my text, like Codess.

So I can tap on Codess, and you see this rich card, powered by Bing, showing us that Codess is a real organization that Microsoft sponsors to promote women in technology, which is something that I’m very passionate about.  So now you see Cortana is actually sending me a private message.  I’m going to tap that and go into the one-on-one conversation I have with Cortana.  She’s telling me the Custom Cakes Bot would like to know my location for delivery, and yes, I will share it.  And would I like to track delivery?  Also yes, I’ll share it.

So right away what you’re seeing here is that the agent, Cortana, is actually brokering the conversation with the third-party bot, the Custom Cakes Bot.  Now I see this rich card showing me that the cupcake is on its way and will be here within a few minutes.  All of this is happening within my Skype chats.  This is just one example of the many ways that you’re going to start seeing businesses use the power of Skype and Cortana to interact with their customers.
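The brokering pattern described here — the assistant holds the user’s context and forwards only the fields a third-party bot asked for and the user approved — can be sketched in a few lines.  This is an illustrative sketch; the object shapes and the `brokerToBot` function are invented, not Skype’s or Cortana’s actual interfaces.

```javascript
// The assistant shares only fields the bot requested AND the user approved.
function brokerToBot(userContext, permissions, bot) {
  const shared = {};
  for (const field of bot.requestedFields) {
    if (permissions[field]) shared[field] = userContext[field];
  }
  return bot.handle(shared);
}

// A toy third-party bot in the spirit of the Custom Cakes Bot.
const cakesBot = {
  requestedFields: ["location", "trackDelivery"],
  handle: (ctx) => `Delivering to ${ctx.location}` +
                   (ctx.trackDelivery ? " (tracking on)" : "")
};

// The user said yes to both prompts, so both fields flow through;
// anything else in the user's context (calendar, contacts) never does.
brokerToBot(
  { location: "Moscone Center", trackDelivery: true, calendar: "private" },
  { location: true, trackDelivery: true },
  cakesBot
);
```

The key design point is that the bot never sees the user’s full context, only the approved slice, which is how the permission prompts in the demo translate into code.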

Now, since I’m already talking to Cortana and I’ve committed to doing this keynote, I’m going to go ahead and try to plan the rest of my trip.  Please block my calendar for Codess from April 10th through April 12th.  So you saw there I used push-to-talk, because I didn’t feel like typing.  And by the way, the other thing you’ll notice here is that Cortana actually has context from my previous chat with Gurdeep.  So even though I only told her the dates, she actually knows the location from the conversation with Gurdeep.

Now Cortana is proactively thinking about the next steps in coordinating my trip.  So she’s actually suggesting that I connect with the Westin Bot.  Anybody that knows me knows I normally stay at a Westin.  So what you see here now is Cortana has actually introduced the Westin Bot into the conversation.  But not only has she done that, she has given the Westin Bot context, so I don’t have to repeat myself.  The Westin Bot knows this is for Dublin and it’s for those particular dates.  All of this saves me a lot of time and effort.  Now there are suggested rooms.  I’m going to choose the normal room.  Confirmation, and just like that in two simple steps I was able to block my calendar and also book my hotel room.  (Applause.)

Now there’s nothing else the Westin Bot can do for me.  So I am going to say no, thank you.  And now because Cortana has all my relevant details the Westin Bot can actually leave the conversation.  Now we’re seeing a third aspect of Cortana, she knows my connections.  She knows who I communicate with.  And so here you’re seeing her actually suggest that when I go on this trip to Dublin that I communicate with my friend Jani (ph).  And I see this card, I could call her directly from here or video call her.  But I don’t know what time it is and so I’m going to choose to message her.

Clicking that takes me to the conversation I had with Jani.  The other thing you’ll notice is that Cortana, because she has context about why I’m pinging Jani, she’s prepopulated the text with context around when I’m going to Dublin and basically why I want to talk to her.  And fantastic, Jani is available, and she’s even going to show me around town.  So this is just a glimpse into how Skype and Cortana, powered by Bing, can make our conversations more productive, informed and fun.  (Applause.)

Now, as you know, real-time video is a big part of how Skype is used today.  I just showed you intelligence applied to messaging conversations, and I now have the pleasure of announcing that we will also be bringing intelligence into real-time video.  Skype video bots will reinvent the way that friends, family and businesses get together.  For the first time, we will enable a very personalized experience for you to connect with your favorite character, business or brand.  And the best part is that starting today developers will have access to our first-ever Skype Bot SDK.  (Applause.)

And as you can see there, you can also sign up to one of our global hackathons starting with the first one in May, which is happening here down in Silicon Valley.  You’ll be able to seamlessly integrate into chats, like I showed you, or even invent totally new audio and video experiences for an audience of millions.  Also, to make this even sweeter for developers, as of today Skype consumers will have access to bots in our latest Windows, iOS and Android apps.  (Applause.)

But this is just the beginning.  With HoloLens becoming available we are also bringing intelligence into your virtual conversations with Skype for HoloLens.  Please roll video.

(Video segment.)

SATYA NADELLA:  Now that you’ve seen what these rich conversational canvases as well as personal digital assistants can do for you, as developers, we think that this represents a huge opportunity for you to write new types of applications.

We want to empower all developers to do this new work.  That means, we want every developer to build expertise for Cortana.  We want every developer to be able to build bots as the new applications for every business and every service.  We want all developers to be able to infuse intelligence into their applications.

And to really make this possible, we think of a new runtime, an intelligence runtime, that comes together as part of the Cortana Intelligence Suite running on Azure.  It’s going to have a rich Bot Framework, a Bot Framework that will allow you to build rich dialogue capability into your application, into your bot.  And it’s a Bot Framework that will allow you to build bots not just for any one canvas; we don’t envision such a world, we envision a world of many conversational canvases.  So you will be able to take the Bot Framework and, in fact, integrate with all of them: Slack, Skype, Line, Outlook, all of the canvases.
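The "one bot, many canvases" idea above amounts to each channel adapting its message format into a common shape, so the bot’s logic is written once.  The sketch below is a hypothetical illustration of that pattern; the adapter shapes and `makeBot` helper are invented, not the Bot Framework’s actual connector API.

```javascript
// Each channel adapter normalizes its raw message into { user, text },
// so the dialog logic never needs to know which canvas it came from.
const adapters = {
  skype: (msg) => ({ user: msg.from.id, text: msg.content }),
  slack: (msg) => ({ user: msg.user,    text: msg.text }),
  sms:   (msg) => ({ user: msg.sender,  text: msg.body })
};

function makeBot(logic) {
  return (channel, rawMessage) => logic(adapters[channel](rawMessage));
}

// One piece of logic, reachable from every canvas.
const echoBot = makeBot((m) => `Hello ${m.user}, you said: ${m.text}`);

echoBot("slack", { user: "dan", text: "hi" });
echoBot("skype", { from: { id: "dan" }, content: "hi" });  // same reply
```

Adding a new canvas then means adding one adapter, not rewriting the bot, which is the portability claim being made here.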

We also want to give you a set of microservices, these Cognitive Services, so that you can have language understanding, speech understanding and computer vision built into your applications, along with rich machine learning capability.  We think this intelligence runtime, the Cortana Intelligence Suite, is going to be core to all the applications that you build going forward, just like the .NET runtimes were to the applications before them.

And to give you a glimpse of this world and what Cortana Intelligence Suite can enable, I want to invite up on stage my colleague Dan, Lili, as well as Cornelia, to give you a sense for what development looks like with these new set of tools and micro services.

Dan, first you.  (Applause.)

DAN DRISCOLL:  Thanks, Satya.

In today’s demos of Windows, Cortana and Skype, we’ve shown you how Microsoft is creating new conversational experiences.  If you’re a developer or a business professional, you may want to add bots and conversations to your own products, and we want to share our services and open source our tools so you can use them to build your own great bot.  So today I’m so excited to introduce a brand-new platform for creating intelligent and connected bots, the Microsoft Bot Framework.  (Applause.)

Whether you’re building a simple tic-tac-toe bot, creating a productivity bot to make your business smarter or faster, or adding conversation to an existing brand like you saw with the Westin Bot, the Microsoft Bot Framework has everything you need to get your bot talking.

Let me give you an introduction.  If you’ve already got a bot, the Microsoft Bot Framework can seamlessly connect that bot to users on Skype, Slack, Telegram, SMS, email and more.  Don’t have a bot?  Our open source Bot Builder SDK, available on GitHub, has everything you need to give your Node.js or C#-based bot great dialogue and conversation skills.  It also makes it easier to integrate natural language processing and machine learning services.

Coming soon, our bot directory will contain many fantastic bots written using the Bot Framework, including the Build Bot, which is available to give you information about the Build event, like session information, nearby restaurants and more.

But I’m a developer, and I’m really, really excited to show you how easy it is to write code using the Microsoft Bot Framework.  To show you what it’s like I’ve put together a simple bot that orders me a pizza from Domino’s.  This is going to be great.  You’re all going to see some code, and I get to order myself lunch.

Here’s the source code for my bot.  When a user sends my bot a message, the Bot Framework calls into this method right here, and if the user said “/order” we drop into this block of code.  First, I put together my pizza, a medium hand-tossed with spinach and mushrooms.  Then I put that pizza into an order where I’ve specified the delivery address and a credit card that I found.  Satya assures me he’s buying.  And then I place the order through Domino’s secure REST API.
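The demo’s handler is C#, but the flow Dan describes can be sketched in Node.js, the SDK’s other language.  This is an illustrative reconstruction, not the demo’s actual code: `handleMessage` and the `placeOrder` callback are invented stand-ins, and the real secure REST call is replaced by a fake ordering service.

```javascript
// When a message arrives, check for the "/order" command, assemble the
// pizza and the order, and hand the order to the ordering service.
function handleMessage(message, placeOrder) {
  if (message.text !== "/order") {
    return "Hi! Say /order to get a pizza.";
  }
  const pizza = { size: "medium", crust: "hand-tossed",
                  toppings: ["spinach", "mushrooms"] };
  const order = { pizza,
                  deliveryAddress: "1 Keynote Way",     // illustrative
                  payment: "card-on-file" };            // never send card data in plain text
  return placeOrder(order);                             // stand-in for the secure REST call
}

// A fake ordering service for demonstration.
const fakeApi = (order) => `Ordered a ${order.pizza.size} pizza`;

handleMessage({ text: "/order" }, fakeApi);
```

Keeping the payment as a token rather than raw card data mirrors the point made next about transmitting the order over a secure channel.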

Now, since I’m dealing with a credit card, I don’t want to transmit that in plain text.  And so I want to make sure that it’s sent securely over a secure channel.  To try out our bot I’m going to show you my bot’s webpage in our bot directory.  Now, it can be really hard to try out new bots.  So as part of the Bot Framework, we included a reusable and securable chat control that developers can put on their own webpages so customers can talk directly with their bot.  Let’s try it out.  I’ll say hi to my bot, and it will respond.  And if I said “/order,” it will take just a second and there we go, my order is on its way.  (Applause.)

This is great for a simple bot, but interesting bots need real dialogue and language skills so that they can hold a real conversation.  I want my bot to be able to accept an order from a real customer where they could specify the size of their pizza, the toppings and more.  And you heard Satya talk about trustworthiness.  It’s important for a bot to get the user’s permission before it actually sends the order on its way to Domino’s.

Now, this is where our Bot Builder SDK really shines.  Our Bot Builder SDK is available in Node.js and C#, and since I’m writing in C# I’ll show you that.  To use the Bot Builder SDK I’m just going to remove the existing simple bot code and add in a couple of lines here, and then up at the top, where I’ve configured the Bot Builder SDK so that my bot talks the way I want it to, I’ve added a welcome message, a confirmation prompt, and the code to actually place the order.  The Bot Builder SDK does all the work of walking my user through filling in the object that represents my pizza, which is just a plain C# object with a few annotations.
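The form-filling behavior described here — a plain object lists the fields an order needs, and the dialog walks the user through whichever ones are still empty, then confirms — can be sketched without the SDK.  Everything below is a hypothetical illustration in JavaScript; the `nextPrompt`/`fillField` helpers are invented, not the Bot Builder API.

```javascript
// The "form" is just a plain object; null marks an unfilled field.
const pizzaForm = { size: null, crust: null, toppings: null };

// Ask for the first missing field; once everything is filled,
// ask for confirmation before placing the order (the trust step).
function nextPrompt(form) {
  for (const field of Object.keys(form)) {
    if (form[field] === null) return `What ${field} would you like?`;
  }
  return "Shall I place the order? (yes/no)";
}

// Fill one field without mutating the previous state.
function fillField(form, field, value) {
  return { ...form, [field]: value };
}

let form = pizzaForm;
nextPrompt(form);                       // asks about size first
form = fillField(form, "size", "large");
form = fillField(form, "crust", "thin");
form = fillField(form, "toppings", ["pepperoni"]);
nextPrompt(form);                       // all filled: confirmation prompt
```

The explicit confirmation prompt at the end is the code-level counterpart of the trustworthiness point made earlier: the bot asks before the order actually goes out.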

Now, next we’re going to actually connect my bot to users.  So I’m going to go to my bot’s dashboard in the Bot Framework developer portal.  You can see here it’s easy to connect my bot up to services like Skype, Slack, Telegram and others.  We built the Bot Framework to be an open platform, and so we include many third-party services, including pervasive services like SMS, which is accessible worldwide and available even on inexpensive devices like this $14 mobile phone.  (Applause.)

Now we’re really excited to introduce the Bot Framework, but we’re even more excited to see the kind of things you write using it.  We have a session coming up immediately after the keynote at 11:30 where we’ll walk you through the process of building a conversational bot from 0 to 60 using the Bot Framework.  We hope to see you there.

Now, up next I would like to introduce Lili Cheng, who is going to give you a sneak peek at some intelligent tools coming to the Bot Framework in the future.  (Applause.)

LILI CHENG:  Thanks, Dan.

As Dan said, I’m going to give you a sneak peek at some tools we’ve been experimenting with to help you build your bot’s brain.  All right.  So here I have my chat control, and I can type in “please send me a large pepperoni pizza.”  And you can see that my bot understands the size and toppings and asks me to clarify what kind of crust I want.

Now, in order to do this your bot needs to understand natural language.  So as you begin, before you have real users using your bot and before you have real user response data, we’ve provided some simple tools to help you build your natural language rules.  Let’s take a look.

All right.  So these are the rules behind my bot’s brain.  Just like Dan showed you, except we’ve provided a UI to make them a little bit easier to read.  So if you look at the first rule it matches responses such as deliver me some quantity of some size pizza with some kind of topping.  That’s pretty cool, right?  (Applause.)

So I can just click into the deliver blurb to understand it more.  And here you can see that not only does the word “deliver” trigger this rule, but so do “order” and “buy.”  So that’s pretty cool, too.  But what if people use other words?  Because we mine the Web and understand language, things like verb tenses, translations into other languages and common misspellings, we can help you by providing suggestions for synonyms.  All you need to do is click, select the ones you want, and you’ve expanded your dictionary.  It’s really easy.  (Applause.)
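A rule like "deliver me some quantity of some size pizza with some topping", with its trigger verb expanded by synonyms, can be sketched as a pattern over a word list.  This is a hypothetical illustration; the regular expression and word lists below are invented, and real rule engines handle far more variation.

```javascript
// The trigger verb list, as if expanded via the synonym suggestions.
const triggerVerbs = ["deliver", "order", "buy"];

// Match "<verb> me <quantity> <size> <topping> pizza(s)" and pull out
// the slots; return null when the utterance doesn't fit the rule.
function parseOrder(text) {
  const verbs = triggerVerbs.join("|");
  const re = new RegExp(
    `(?:${verbs})\\s+me\\s+(\\d+|a|an)\\s+(small|medium|large)\\s+(\\w+)\\s+pizzas?`,
    "i");
  const m = text.match(re);
  if (!m) return null;
  return { quantity: m[1], size: m[2].toLowerCase(), topping: m[3] };
}

parseOrder("Order me a large pepperoni pizza");
parseOrder("deliver me 3 large pepperoni pizzas");
```

Sharing `triggerVerbs` between several such parsers is the code-level analogue of sharing dictionaries across bots, which comes up next.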

What’s also great is that if I have other bots, like the cupcake bot that Lilian showed you earlier, I can share these dictionaries across my various bots.

Now rules are great and critical for getting started.  But as you start getting more and more users and more and more response data, rules can be very complex to manage.  And this is where we provide you with another tool.  Here we use machine learning and deep learning to create your conversation model.  So building upon what I just showed you, we automatically add your semantic dictionary and your rules.  We also make it really easy for you to upload additional training data to get started.

Now we know that people can be very unpredictable as they use your bot.  So here you can see a user has said, can you deliver three large pepperoni pizzas to my crib.  So you’re laughing, so you know and I know that my crib is slang for house.  But the system doesn’t know that.  So it’s really easy for me, if I think it’s appropriate, to tag it, add it to locations and save.  So it’s that easy to help teach your bot and help it become smarter.  And you don’t need to be a data science expert.  And, in fact, you don’t even really need to know how to code to help your bot improve.  So that’s pretty cool, right?  (Applause.)

All right.  So finally, we know intelligent systems have their limitations.  So here I’ve typed something like “send me a large pepperoni pizza to Moscone at 2:00.”  And here my bot is telling me, “I don’t understand.”  For tasks that are rare, complex, or that it doesn’t understand, the bot can automatically escalate to a person.  And this is something we think is a great example of what Satya mentioned earlier: combining what people do best with the intelligent system.
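The escalate-when-unsure behavior is a simple pattern to implement.  A minimal sketch, with all names hypothetical: route the conversation to a person whenever the interpreter returns no intent or low confidence, rather than guessing.

```python
# Minimal sketch of confidence-gated escalation: anything the bot cannot
# interpret confidently is handed to a human instead of being guessed at.
CONFIDENCE_THRESHOLD = 0.5  # illustrative cutoff

def handle(utterance, interpret):
    """interpret(utterance) -> (intent or None, confidence in [0, 1])."""
    intent, confidence = interpret(utterance)
    if intent is None or confidence < CONFIDENCE_THRESHOLD:
        # Rare, complex, or not understood: escalate to a person.
        return {"action": "escalate_to_human", "utterance": utterance}
    return {"action": "fulfill", "intent": intent, "confidence": confidence}

# Stub interpreter standing in for a real language-understanding model.
def stub_interpret(utterance):
    if "pizza" in utterance and "Moscone" not in utterance:
        return "order_pizza", 0.9
    return None, 0.0

print(handle("send me a large pepperoni pizza to Moscone at 2:00", stub_interpret)["action"])
```

In the demo, the human worker’s corrections then flow back into the training data, which is what lets this loop make the bot smarter over time.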

So let’s take a look at what this might look like for the worker.  So here, as I’m working, the system is automatically suggesting responses to me in real time.  And you can see those responses below my text entry in the chat window.

And then on the side, the system is automatically pre-filling my order form based on what it knows.  Now, I can see here the system has incorrectly classified Moscone as a person.  So all I need to do, again, is click, correct the annotation and save it.  The system prompts me to clarify that it’s Moscone Center, and again my suggested response is updated.  I can just click and hit enter to facilitate my interaction with my customer.  (Applause.)

So this is exciting because while working I can continue to teach my bot without needing to know natural language, machine learning, deep learning or even coding.  All of my input can make the system more intelligent, and the intelligent system can help me get my work done more effectively.

That’s it for my sneak peek.  We know we have a lot to learn.  As we continue to experiment, we will make these tools more broadly available.  (Applause.)

And, of course, as you’ve already heard, you can get started today with both the Bot Framework that Dan showed you earlier, to build your bot and connect it to the places people communicate, and the Skype Bot Tools.  And Cornelia is up next to show you the new Cognitive Services, which will help make your bots more intelligent.  Thank you.  (Applause.)

CORNELIA CARAPCEA:  Thank you, Lili.

All right.  Hello, everybody.  A year ago we introduced the first portfolio of intelligent services from Microsoft.  We had five APIs for you to go use and add intelligence to your apps.  Well, today I want to introduce you to our new portfolio of intelligent services, and we have 22 APIs for you to get started with for free today.  We have vision APIs, from image to video understanding; speech and language understanding APIs; knowledge APIs; and, of course, search APIs through Bing.  Twenty-two APIs for you to get started with.  This is years of machine learning and AI research from Microsoft, ready for you to use at your fingertips.  (Applause.)  Thank you.

Now, you can just imagine all the cool stuff you can build with these APIs.  And to showcase what’s possible, I’m going to show you a handful of demos.  The first demo is a quick app we put together.  All I need to do in this app is either take a photo or upload an image URL, I’ll just take this one here and use it, and the app calls into the new vision APIs, which are able to recognize objects in the image.  (Applause.)  Thank you.

There are thousands of objects we can recognize, and we also give you a confidence score.  If the confidence is low, you know we’re not quite sure we’re seeing what we say we’re seeing.  If the confidence is high, you can probably trust it.  Now, object recognition is cool, but we can take it a step further, and that’s what I’ll show you in my next demo.
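That thresholding step is easy to picture in code.  The result below is a made-up example shaped like a tag list with confidence scores, loosely modeled on what a vision API returns; it is not captured output from the demo.

```python
# Illustrative result: a list of recognized objects with confidence scores.
result = {
    "tags": [
        {"name": "grass", "confidence": 0.99},
        {"name": "cattle", "confidence": 0.92},
        {"name": "sheep", "confidence": 0.18},
    ]
}

def confident_tags(result, threshold=0.5):
    """Keep only the objects the service is reasonably sure about."""
    return [t["name"] for t in result["tags"] if t["confidence"] >= threshold]

print(confident_tags(result))
```

Raising the threshold trades recall for precision: at 0.95 only "grass" would survive, while a low threshold lets uncertain guesses like "sheep" through.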

We can take it a step further by not just recognizing objects but also figuring out the relationships between them, and then using natural language to create a coherent description of what you’re seeing in the image.  This is CaptionBot.ai, a new demo that we’re going to let you play with today.  I went in earlier and picked up a couple of random images off of Bing.  And now let’s see what our little Caption Bot is able to say.

It thinks it’s a herd of cattle standing on a dry grass field.  I love that it gets the vocabulary right.  It doesn’t say “group of cattle” or “group of animals,” no, it’s a herd of cattle, and it also sees that it’s a dry grass field.

Let me try a different one.  This is the Mona Lisa, as most of you here know.  Let’s see what our little Caption Bot can do.  All right.  So it says, I’m not really confident, but I think it’s Leonardo da Vinci sitting in front of the mirror and he seems neutral faced.  (Laughter.)  This goes to show you our bot wasn’t taught any art, so it doesn’t recognize the masters of the Renaissance and their masterpieces.  We haven’t shown it any of that data.  We’ve only shown it landscapes, and gatherings, and your average day-to-day life to learn from.

And I showed you this example to illustrate just how important data is when you’re trying to take advantage of these machine learning services and APIs.  We’re working on letting you bring your own data to these algorithms and customize them for your needs, which brings me to my third demo.

This demo is a quick little app we put together to show you a service we call CRIS, the Custom Recognition Intelligent Service.  It lets you bring your own audio files and customize our speech-to-text technology, the same technology that’s already shipped in products like Cortana and Bing.  So if your scenario calls for audio with a lot of background noise, think of a busy drive-through where you’re trying to understand what your customer is ordering, or say you’re trying to understand what people with heavy Romanian accents like mine are trying to say, you bring those files and you’re able to train the speech-to-text algorithm to have higher precision for your scenarios.

What I’m showing you here is an example we trained with child speech, and I hope the audio is hooked up, because it’s important to realize just how bad the audio can be here and the difference between regular speech-to-text and CRIS.  The left side shows you regular speech-to-text; the right side is CRIS.  And now let’s see what our system is able to recognize.

We had some kids here read a book about giants and you can clearly see that CRIS gets that the kid is not talking about dancing; he’s talking about giants and the fact that they love large bags.  (Applause.)  Now these demos are all fun.  I’m going to finish up by showing you a real customer scenario.

Vidigrove (ph) is a startup that does real-time Twitter analysis and monitoring.  They create Twitter analysis for anything from brands to politics.  And I went in earlier and used the product, the dashboard they offer their customers, to create a dashboard for the current presidential election.  You can see the mood of the entire election in this little blurb here.

Now, the cool thing about Vidigrove is that it doesn’t just give you these mood indicators, it also offers extra context, like the gender of the people who tweet, or their age ranges.  And in creating these gender and age range indicators they actually use the vision APIs from Cognitive Services.  They use the vision APIs to look at the profile image and add extra precision to the age or gender identifier.  They further use Cognitive Services to filter out any tweets or content that are just irrelevant to the dashboards they’re trying to create.

Now, just like Vidigrove, one of the partners who got started testing our APIs early on, you too can get started for free today.  We can’t wait to see the stuff you build with these cool APIs.

Thank you.  (Applause.)

SATYA NADELLA:  Thank you so much, Cornelia.

So hopefully you get a feel for the richness of these intelligent services that you can use.  And now it is up to each of us as developers to imagine what’s possible.  Imagine what Boeing can do for business applications in the aerospace industry, what Case Western Reserve can do for medical education, what NASA can do for space exploration, what a fast-food restaurant can do for order-taking through a drive-through using CRIS, what a news organization can do to be able to interpret all text, all images, all videos on a social media stream in real time.

And one developer truly inspired me this last year, very close to home for us at Microsoft.  I want you to see what he dreamt of.  Roll the video.

(Video segment, applause.)

SATYA NADELLA:  For me it’s such a privilege to have a chance to share the stage with Saqib, who took his passion and his empathy and is going to change the world.  And to me that’s what Build is all about.  He’s here at Build, in fact, to network, inspire other developers, and learn how to make his own applications better.  But more importantly, he’s also here to teach; he’s got a couple of sessions that he’s going to do, as well.  And to me that’s what makes these developer conferences magical.

It is about being able to take our passions, our empathy, and go after the opportunities that we see, go after the dreams that we have.  And what Saqib has done has been a real inspiration for me.  And we as developers have this tremendous opportunity and a tremendous responsibility, because not only do we get to dream of the future, we get to build the future.  I can’t wait to see what you dream up and what you build.

Thank you so very, very much.  Have a fantastic Build.  Thank you.  (Applause.)