Bill Gates: Microsoft’s Security Vision and Strategy, RSA Conference 2006

Remarks by Bill Gates, Chairman and Chief Software Architect, Microsoft Corporation
“Microsoft’s Security Vision and Strategy”
RSA Conference 2006
San Jose, California
February 14, 2006

BILL GATES: Well, thank you and good morning. Happy Valentine’s Day. (Laughter.) I’m really glad to be here at RSA. My other invitation was to go quail hunting with Dick Cheney. (Laughter, applause.) I’m feeling very safe right now.

Well, I’ve had a chance to keynote this conference a number of times, and talk about some of the things Microsoft is doing. I want to do that this year, but first I want to start off talking about the vision of how we see the industry coming together and really delivering the broad trustworthy environment that we all need for computing.

Why is this important? Well, the move towards digital approaches in everything we do is accelerating. Whether it’s medical records, tax records, buying and selling, scientific data, important communications, including national security; all of these things more and more are using the Internet.

Big Dreams

And this has become such a critical infrastructure for productivity, for reliability, for privacy that the dreams we have can only be realized if we not only build secure approaches but make those easy to administer and make it so the users understand exactly what to expect: how will their information be used, how often can they expect the system to be working totally on their behalf.

And so that means a lot of invention, a lot of improvement from where we are today. I think we’re making progress, but it’s a very big challenge to make sure security is not the thing that holds us back.

One way to talk about that is to use the term Trustworthy Computing. For us, that’s a very broad term, because it takes in issues like reliability, so avoiding denial of service, and privacy expectations, if I reveal information in a certain way, how will that be used; all of those things are encompassed in this goal.

And so it is a very ambitious goal but one that I think absolutely can be achieved. In fact, the level of inventiveness going all the way into the universities, the startups, the big companies working here together, the advances in the standards make me very optimistic that we’ll be able to pull this together.

Aspirations for the Industry: Trust Ecosystem

So what would the industry have to do to fulfill this? Well, I think there are four key things, each one of which I’ll dive into: the trust ecosystem, an ability to engineer for security, a simple approach so that the models are quite clear, and finally, fundamentally secure platforms where the capabilities really are designed in, in a way that you don’t have to pay much attention to them.

So the first and I think a crucial element is this trust ecosystem. What is the trust ecosystem? Well, we have code, we have devices, we have users. And all of those things have certain characteristics. Users are members of groups; when somebody authenticates themselves, it’s based on a secret that was provided to them by somebody else. And so we have chains of trust, not just a simple single level, but many levels of indirection taking place.

What we need here is an ability to track those trust relationships, to be able to grant permissions, and to be able to revoke those trust relationships, to develop reputation over time; if a piece of code is not behaving appropriately, it should be marked that way and therefore blocked from being used on different systems. If a problem comes up on something that was trusted, you should be able to make sure that it’s no longer running, even if it’s gotten out very broadly.

Today, people live without this trust ecosystem either by limiting their activities, for example, forgoing sharing confidential information with partners because they can’t know it won’t be abused; or by simply taking risks, putting it in as an e-mail attachment, knowing that it can be forwarded on to lots of people.

The trust ecosystem has to have a very rich design, because after all, trust comes from many sources. We have companies that are focused on trust relationships, we have banks, we have governments, we have employers, we have affinity organizations, we have friends that put us on their buddy list and have a certain level of trust and willingness therefore to receive messages or expose their presence to other people. And so all of these trust relationships need to be taken into consideration.

So it can’t be something where there’s one unique piece of software, one unique organization, but rather it has to be totally federated so that all those trust statements can be understood and reasoned against. And so with that, we get reputation, reputation for code, reputation for users across all the different activities they do.

There’s been a lot of great work on this trust ecosystem around the Web services protocols in the trust area, and so I really think we are laying the foundation for what we need here. It’s a very important system that has got to be part of the vision of what we create to have the secure environment that we need to have.

Engineering for Security

So let’s move to the second piece, engineering for security. Again, this is an element there’s no way around. There will be portions of code in the system that act on your behalf to perform an operation, and those pieces of code have to be written reliably, reliably in a way so that even if somebody malicious is putting in an extremely long answer that wasn’t expected, or trying to fool the input by putting in a SQL string where there was just supposed to be a literal, the code has to operate as expected.

Now, one of the architectural techniques that we can do is to make sure that the portion of the code that has to be correct is much, much smaller than it is today. Today, virtually all the code has the privilege to do various things. And so, for example, say you have code that’s just parsing a file; that code which is doing that parsing, in fact, you know very well that it’s not supposed to go off and read files or transmit information. And so you can take that code and run it in a process so that even if somebody manages to make the parser break, the kind of operations they can do are very, very limited.
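As a rough sketch of that idea, here is a hypothetical Python example (the names and the toy parser are invented for illustration): the parsing runs in a separate OS process, so even if crafted input breaks the parser, it cannot touch the caller’s memory or act with its privileges. A real sandbox would additionally drop OS-level privileges inside the child.

```python
import multiprocessing

def parse_untrusted(data: bytes) -> dict:
    """Toy 'parser' standing in for any code that handles untrusted input."""
    text = data.decode("utf-8")
    key, sep, value = text.partition("=")
    if not sep or not key:
        raise ValueError("malformed input")
    return {key: value}

def _worker(data, queue):
    # Runs in the child process; a production sandbox would also drop
    # privileges here (restricted tokens, seccomp, chroot) before parsing.
    try:
        queue.put(("ok", parse_untrusted(data)))
    except Exception as exc:
        queue.put(("error", str(exc)))

def parse_in_sandbox(data: bytes, timeout: float = 5.0):
    """Run the parser in its own process so a crash or exploit in the
    parsing code is contained rather than compromising the caller."""
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=_worker, args=(data, queue))
    child.start()
    child.join(timeout)
    if child.is_alive():          # hung or spinning: kill it
        child.terminate()
        child.join()
        return ("error", "parser timed out")
    return queue.get()
```

The point is the structure, not the parser: only the small broker in the parent needs to be fully trusted, while the bulk of the code runs where a failure does limited harm.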

So this kind of design approach, where we look at what code do we have to trust, to have the right review process for that code, the right tools to automatically check for the patterns that people run into there is absolutely critical.

So we have to design things to be secure, we have to deploy them so that their defaults are very secure and you have audit logs in terms of how things are used, and you have to have really very simple management tools that let you go out and verify. If you want to ask, is my network isolated by IPSec or 802.1X, that ought to be a simple administrator operation to see; whether all my systems are up to date needs to be very straightforward.

And so thinking these things through from the beginning and not bringing security in at the end where we say, OK, what’s the threat model after you’ve written the code, that’s very, very important.

This has been a big shift for Microsoft in terms of the expertise we’ve developed, the way we do the design, the whole process we go through, and this is important at all the different layers of the system.

So that would be the second element of what we need as an industry.


Simplicity

The third, you can say, is pretty much common sense, but if you look at the security systems that are out there today, we don’t achieve it, and this is the idea of simplicity. How many different software products do people have to think about, how many different user interfaces do they have to see, how many different audit logs do they have to go through in order to see exactly what happened and try to track down the source of an activity?

Today, whether it’s end users or developers or IT professionals, the number of firms and products and screens, the things they have to know, is probably an order of magnitude higher than it needs to be in order to have these systems administered very, very effectively.

After all, if a system is not run securely, say it’s a partner system that you grant some trust to, then you become vulnerable to it. ISPs today have lots of consumer machines out there, and many of those have been exploited; the inability to scan and do protocol blocking means that this can compromise the quality of what goes on in that network. So we have an overly complex situation today, and even if we do a better job on tracking things, we have to have this simplicity idea in mind.

One of the architects at Microsoft, Butler Lampson, who has long experience in the industry going back to Xerox’s Palo Alto Research Center, is always forcing us in all the security discussions to think through the user model: what is the user model where there’s a clear expectation of how the system works, and how many concepts have we had to expose people to?

So simplicity for end users, simplicity for IT professionals, and simplicity for developers, making them write far less code than they’re writing today, making them understand the patterns that are really key and so they’re doing the right things.

So this is one element that even if we did the other three, if we don’t do this right, we won’t get the results that we need.

Fundamentally Secure Platforms

Finally, and some people might have expected me to start with this, but another key element is fundamentally secure platforms; that is, you can’t layer on top of the system the elements that really make something secure, in terms of being able to track it and keep it up to date; you just get too much of a mismatch between the elements, and so that’s not going to work.

What are some key elements of fundamentally secure? Well, the first is isolation. If you go back historically and say why were computer systems largely secure, it wasn’t because people wrote better code – in fact, they had none of the proof tools and scanning tools and the rich things that we do today; those systems were secure because they were isolated, there wasn’t an Internet pipe that allowed arbitrary packets to come in and see what code paths might have flaws in them.

And isolation exists at many levels. It exists at the network level, a very important level; it exists in terms of process privilege, where you want to make sure that a process is only allowed to do a very, very limited number of things. It exists in the user model; even for a user who from time to time can install software, you want to make sure that when they’re just, say, running a normal application, that privilege is not active, because they’re not intending that that program can go and, say, change things in the Startup group or install a driver that ends up being a rootkit. And so having it so that you only have the privileges you need and that you’re fundamentally isolated is very, very important. When you want to bring a new system in, you have to be able to check that system to see if it meets certain criteria.

Another weak link is in authentication. Today, we’re using password systems, and password systems simply won’t cut it; in fact, they’re very quickly becoming the weak link. This year, there was a significant rise in phishing attacks where sites that pretended to be legitimate would get somebody to enter their password and then be able to use that to create exploitive financial transactions.

And so we need to move to multifactor authentication. A lot of that will be a smart-card-type approach where you have challenge/response, so you don’t have a single secret that you’re passing to the other party that they could keep and reuse. It’s a significant change, and it needs to be built deep into the system itself.
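A minimal sketch of the challenge/response idea, in Python with invented names (real smart cards do this with keys locked inside tamper-resistant hardware): the shared secret never crosses the wire; only a fresh random challenge and a keyed hash of it do, so an eavesdropper who captures the exchange has nothing to replay.

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server side: pick a fresh random nonce for each logon attempt."""
    return os.urandom(16)

def card_respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Card side: answer the challenge with a keyed hash.
    The key itself never leaves the card."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def server_verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected answer and compare in
    constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is fresh, a response recorded today fails tomorrow, which is exactly the property a single reusable password lacks.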

We need to use policy-based controls so that somebody managing systems, instead of configuring system by system, can say users of this group are allowed to do these things and not allowed to do those things, making the management tools very, very simple.
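The policy-based idea can be illustrated with a tiny, hypothetical Python model (the group names and actions are made up for this sketch): permissions are stated once per group, and every member inherits them, so nothing is ever configured machine by machine.

```python
# Permissions are declared once, per group, never per user or per machine.
POLICY = {
    "sales":  {"read_crm", "send_mail"},
    "admins": {"read_crm", "send_mail", "install_software"},
}

# Users acquire rights only through group membership.
MEMBERSHIP = {
    "david": ["sales"],
    "erin":  ["sales", "admins"],
}

def is_allowed(user: str, action: str) -> bool:
    """A user may perform an action if any group they belong to allows it."""
    return any(action in POLICY[group] for group in MEMBERSHIP.get(user, []))
```

Changing one line of policy then changes what every member of the group can do everywhere, which is what keeps the management model simple.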

And finally, the ability to track what went on, and very quick recovery because you have checkpoints that allow you to restore data and restore system state, those are very big things. And that kind of checkpointing and simple restoration won’t work as something done outside the system; it’s got to be built in to provide a fundamentally secure platform.

Evolving Landscape

Well, we do have an evolving landscape. In fact, many companies get a view of this, and Microsoft does in a very strong way through a lot of things we have out there: the Hotmail mail service, which is the biggest free e-mail service; Exchange, which is the biggest commercial e-mail product; the kind of reports we get back from the Watson tool that’s built into Windows, where we literally have hundreds of millions of systems and we see any time there’s an application error or a hang on any one of them; and our scanning tools, what we call the Malicious Software Removal Tool, which has been run many billions of times and has literally found millions of systems where there was a problem. So we can see this evolving landscape.

The move, you could say, is toward attacks that are more malicious: instead of seeking publicity, which meant an attack wasn’t targeted at a single company and wasn’t aimed at financial gain, attackers have really shifted. The publicity seeking is still there somewhat, but in some ways that’s the benign element now, one that may have forced all the things we need to do to block the really bad activity, which is the specific targeted attacks. And, of course, we’re seeing those not just on the PC as a device but also on the phone and all the other elements that connect up as well.

Microsoft Leadership

Well, so Microsoft has a big responsibility here to play a role, both in terms of participating in these standards, sharing the experiences we have, and doing a lot of things in our products so that they adopt the principles that we need. For example, in the area of spam, there has been good progress. If you ask users, over 90 percent say that spam is significantly down; that doesn’t mean it’s gone, there’s still work to be done.

A key, key tool there is this thing called Sender ID, where a DNS record lets you verify that the sender is authorized for that domain. A lot of people have implemented that today, and it’s a clear call to action from us that everybody should implement the Sender ID capability, because knowing that mail actually comes from a particular source makes the separation of legitimate mail from illegitimate mail dramatically easier.
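In outline, the check works something like the following simplified Python sketch; the real Sender ID/SPF mechanism publishes its records in DNS with a much richer syntax, and the domain, addresses, and record format here are invented for illustration.

```python
# Stand-in for a DNS lookup of the domain's published sender policy.
PUBLISHED_RECORDS = {
    "example.com": {"192.0.2.10", "192.0.2.11"},   # authorized mail servers
}

def sender_id_check(mail_from_domain: str, source_ip: str) -> str:
    """Receiving server: is this IP authorized to send for this domain?"""
    authorized = PUBLISHED_RECORDS.get(mail_from_domain)
    if authorized is None:
        return "none"                # domain publishes no record
    return "pass" if source_ip in authorized else "fail"
```

A spammer forging example.com addresses from an unlisted machine gets a "fail", which the receiving server can weigh heavily in its filtering decision.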

With the new version of Outlook, we’ve actually put in computational proof as well as Sender ID, so that if you’re a stranger sending something for the first time from an unknown place, and you know the recipient doesn’t have you on their safe-sender list yet, you simply include a little bit of computation that’s not economic for a spammer, and you get through. It’s not something that costs money, it just costs a little bit of computation.

And in all the mail clients now we have an information bar that educates you about the status of this e-mail, did it come from a verified domain, did it come with a proof, things like that that involve the user in a very straightforward way.

We have new work we’re doing with our anti-spyware Defender product, a new version of that coming out with a richer scanning capability that’s very important.

In terms of standards, I’d say the progress on these Web services protocols is the most exciting thing, particularly using those protocols for identity federation. It’s a technological innovation that’s key for the identity metasystem that we all know is very, very fundamental. This is something that’s industry-wide, that is going to let all these systems work together and yet have these rich trust relationships. And so I think it will join other standards like TCP/IP and HTTP as a key underpinning of the system that we use.

Trust Ecosystem

When we think about this trust ecosystem, of course, it starts with the directory itself, and there the idea is federating: Microsoft saying, for example, that we have a relationship with Intel, so that as we’re using things like SharePoint and sharing documents, you don’t have to create a new account; if we’re using a vendor’s site or exchanging e-mail with a law firm, we have that security of who’s who, that trust, just by logging in one time in a very strong way. And so having this plumbed for that type of security pass-through, and not just using the one domain, is a very, very important element.

There are a lot of things I’ve got here on the trust ecosystem: using IPSec to do the isolation and making that a strategy; making smart cards work very well, including in the logon process itself; and a lot of things about code signing, making it so that anything that’s going to run at the driver level can’t perform the operations that malicious code would perform, and forcing those things to be signed; we think that’s a very important thing.

The trust ecosystem, a key use of it will be about people and the need to manage certificates. We have a certificate lifecycle manager, so if somebody comes in that doesn’t have their smart card they can get that renewed very easily. Having the revocation and issuance work as easily as passwords do today is a critical element here, and I don’t pretend we’re going to move away from passwords overnight, but over, say, a three or four-year period for corporate systems, this change should take place and can take place, and there’s no need to give up simplicity as we do that.

Another key initiative is making sure that site certificates say more than just that somebody paid a small fee, that if it’s a Citibank site certificate it really says this is a reliable business. We call those High Assurance Certificates, and you’ll see that in the browser we now indicate to the user when you’re on a site that uses one of those versus a site that you probably shouldn’t trust and shouldn’t give out specific information to.

So the trust ecosystem is something that we’re seeing a lot of great progress on.

Well, to see that come together, let’s go ahead and see a demonstration. Let me ask Howard Ting from our Windows Server group to come out and show you some of the things I was talking about. (Applause.)

HOWARD TING: Thanks, Bill.

So in this demonstration I’ll be playing the role of David Barber, a sales rep for a company called ADatum.

So this morning, on my way back from a business trip, I had a very, very terrible experience: I lost my laptop, my smart card, and my mobile phone. And it’s the end of the quarter, and I need to close deals, so I’m back in the office and I’m trying to get back online.

The first thing that I need to do is get a replacement smart card. So I’ve gone to see my manager, who’s provisioned a one-time password for me; then I picked up this blank smart card from the receptionist, and I’ve been instructed to go to this nearby kiosk to get the card provisioned, and you see the kiosk here.

Now, let’s go ahead and get this started. So the first thing I need to do is put in that one-time password that my manager gave me. What you’re looking at is Microsoft Certificate Lifecycle Manager, a product that Bill just referenced. Certificate Lifecycle Manager or CLM is a product which just entered beta today, and it will vastly simplify the process for issuing digital certificates and provisioning smart cards.

The next thing I need to do is put in a new PIN. This PIN will protect my card in case I lose it again.

Now what’s happening is CLM is getting my certificates from Active Directory and then it will take those certificates and place them on the card to provision it.

Now, in the past, when I lost my smart card, I would have to go to a corporate security office and wait an hour, maybe several hours while someone manually provisioned my card. With the kind of day that I’m having, I think that’s the last thing that I would want to do.

Now, as you can see here, CLM has finished provisioning my card, and I’m ready to go; it was really that easy.

OK, so now I’m going to take my newly issued smart card, and I’m going to try and get onto the network. So I’ve pulled out an old laptop that I haven’t used in some time. This is a machine that, as you can see, is running Windows Vista. (Laughter.) And like most information workers, the first thing I’m going to want to do is check e-mail. As you can see, Outlook is having some trouble connecting to the Exchange Server. And the reason for that is my network access has been restricted. In this case, there’s a new feature of Windows Server “Longhorn” and Windows Vista at work here, known as Network Access Protection or NAP. And NAP is informing me here that my machine doesn’t have the latest security updates, and so I don’t comply with the corporate health policies and I need to go and install these current updates. So let me go ahead and do that now.

So as you can see, I’m installing the updates manually, but in most cases NAP can be configured to automatically remediate my machine, and bring it into compliance with corporate health policies, without having me take any action.

Now, while my machine is updating, my IT administrator has configured what’s known as a quarantine zone. While in quarantine, I have limited network access, so in this case I can go get the updates I need but I don’t have full network access, and that’s why I couldn’t connect to the Exchange Server.

So if I had lost my laptop before NAP was available, I would have to hand my laptop to an IT administrator and wait while that person updated my machine. And like I said, I really need to get back online.

So as you can see here, NAP has now restored my network access because the update has finished installing, and I’m back on the network. And notice here Exchange has now synched up and I have some new mail. It was really that easy.
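The health check NAP just performed can be modeled with a short, hypothetical Python sketch (the policy fields and values are invented for illustration): the network compares the client’s statement of health against policy and routes non-compliant machines to a quarantine zone where only remediation servers are reachable.

```python
REQUIRED_PATCH_LEVEL = 42            # hypothetical corporate policy value

FULL_ACCESS = "full"
QUARANTINE = "quarantine"            # reach only the update servers

def admit(client_health: dict) -> str:
    """Decide network access from the client's statement of health.
    Missing fields are treated as non-compliant."""
    compliant = (
        client_health.get("firewall_on", False)
        and client_health.get("antivirus_on", False)
        and client_health.get("patch_level", 0) >= REQUIRED_PATCH_LEVEL
    )
    return FULL_ACCESS if compliant else QUARANTINE
```

Once the quarantined machine installs its updates, it simply presents a new statement of health and is admitted, which is the flow the demo just walked through.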

So now let’s go ahead and take a look at this mail here. This mail is from my procurement department, and they’ve been alerted that I lost my cell phone, so they’re instructing me to go buy a replacement cell phone. And watch what happens when I click this link. I’m being brought over to a third-party Web site, the Phone Company. The Phone Company is a company that we partner with to provide mobile telephony and data services.

And notice I’ve already been logged into this application. This application already knows what company I represent and who I am. Furthermore, notice there’s some additional information here such as a spending limit of $300 and a 15-percent discount. This is all based on my corporate identity.

What happened was Active Directory Federation Services, a product which is available in Windows Server 2003 R2, has federated my identity or sent my identity using secure Web Services protocols to the Phone Company, and based on the existing relationship that my employer, ADatum, and the Phone Company have, I’ve been logged into this application.

So now let’s see what happens when I try to buy a phone that’s a little too rich for my IT department’s tastes. As you can see, the application has gone ahead and rejected that order.

Now, let me go ahead and get a phone that does fall into compliance with policy, and there. The phone has been ordered and the phone is on its way to me.

Notice, in this entire transaction I have not entered a single piece of information about me. This all occurred because of Active Directory Federation Services.

So in just a few minutes I was able to provision myself a new smart card, get my laptop updated with the latest security updates, and then go order my replacement mobile phone. So I’m now back online and I’m ready to go close some deals.

Thanks to Microsoft, my day just got a little bit better. Thank you. (Applause.)

Engineering for Security

BILL GATES: Well, now let’s talk about progress in the second area, engineering for security. I mentioned an overall process that we’ve created, working with others, called the Security Development Lifecycle, and that’s exactly this idea of going through, thinking about the threat models, understanding what code to run at what privilege level. Some of this involves the creation of new tools, tools that do extremely deep static analysis of our code, and for the first time we’re actually able to prove properties of the code, understand does it ever get into certain states or not, and if it does, be able to show exactly the path that would create those states.

A lot of this is really going to the developers, getting them to write the security architecture as one of the very first things they do. We’ve documented this and we’re sharing it; there’s been lots of good feedback and active community involvement, so you can scope it to work for projects of different types. Obviously, the ones we do are very large scale, but these principles can actually be applied even for simple Web sites and simple applications; it’s still very, very important.

We have the tools built into the Visual Studio compiler. These are the tools that we built ourselves to do these analyses, and so, for example, catching a memory overrun, or catching a call to an API where we’re not passing in the right kind of information, those will get flagged. And these tools run fast enough that you don’t wait for some fancy build process; literally this runs on every developer’s workstation before they can do any code check-in. So making it quick and getting it up front, we’ve found that that works extremely well.
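As a toy illustration of the flag-it-before-check-in idea, here is a hypothetical Python scanner for a few classically unsafe C string functions; Microsoft’s real SDL tools go far beyond this kind of pattern matching, up to proving properties about reachable program states.

```python
import re

# Unsafe C functions and the bounded alternatives a reviewer would suggest.
BANNED = {
    "strcpy":  "strcpy_s",
    "strcat":  "strcat_s",
    "sprintf": "snprintf",
    "gets":    "fgets",
}

def scan_source(source: str) -> list:
    """Return (line number, banned call, suggested replacement) for every
    call to a banned function found in the given C source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for bad, better in BANNED.items():
            if re.search(rf"\b{bad}\s*\(", line):
                findings.append((lineno, bad, better))
    return findings
```

A gate like this runs in milliseconds, which is what lets it sit on every workstation in front of check-in instead of in a slow nightly build.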


Simplicity

Let’s talk about simplicity. I’d be the first to say that some steps have been taken here, but if there is an area where we absolutely need to do dramatically better, this would be it. The number of screens that you have to get involved in, the number of places you have to go to find out what went on, are still too high, and more integration of the design is part of the way we’ll deal with that.

For end users we will have this year a thing called OneCare, we have a dramatic improvement in the Security Center built into Vista, and we have this new idea of “InfoCard.” This is the idea that you don’t always want to present all your information: you’ll have different cards, cards that just give your location, cards that are more secure and give your credit card, and cards that you would protect very carefully and even require a PIN for every use, where you might authorize access to your medical information.

And so thinking of those different cards really has gotten people understanding that there are different types of authentication, and even in those authentication steps, in some cases you need a level of indirection so that you can have privacy even as you’re doing those authorization steps.

A lot of governments around the world are looking at issuing smart cards, and now they understand that there needs to be many of these “InfoCards” on there to deal with the different scenarios that users are involved in. And the user interface for this is actually built in to the next version of Windows and the Internet Explorer.

For IT professionals it’s all about making simple group policy statements work in a very broad way, combining the work that’s done today on the directory and on policy with security. Security and management are not really two separate things; you want an architecture that allows for lots of extensibility, lots of plug-ins, and third-party products, and you want to bring that user interface back together into one simple place that works for the IT person.

Fundamentally Secure Platform

Now let’s talk about the fundamentally secure platform. We did put a lot of our energy, engineering energy into XP SP 2. That was released a little bit before my talk a year ago. We’ve had very good results with that. Our goal was to actually reduce by an order of magnitude the vulnerability of the system, and now we can see just by the scanning we’re doing we’ve actually achieved about a factor of 13, which is right about what we hoped to achieve by that major, major investment. If you get factors of tens at various levels in terms of the code, the isolation, things like that, then you’re really substantially changing the landscape for these things.

Now, the next release we do later this year will take us to another level in a number of key areas. For example, when you browse, you can browse at a lower level of capability so that the code you bring down in the browser doesn’t even have access to your files. When you use the system even with normal applications, you won’t always have your admin capabilities turned on, you can either run as standard user or protected admin, so you’re only going into that admin state very rarely. And we’ve taken a lot of the operations that required admin and broken those off so that standard user becomes a very common choice that we expect people to use.

The User Account Control, the user logon, we’ve changed that so it can work with the smart card. We’ve made Kerberos pervasive there, so we expect that this is where smart cards will really start to kick in and be a very key thing that people use.

There are also things like making sure that when you lose your laptop, that drive is something people can’t break into, making that just a standard part of the system, and starting to show how the TPM, the Trusted Platform Module, is going to be used for a lot of these secure operations.

So if you took the investment we’ve made in this next version of Windows, security would jump out as the thing that we’ve spent the most time on, first and foremost, the review of using that Security Development Lifecycle but then all the different elements that go into the specific security features as well.

Well, now for another look at these things, a second demo, and here you’re going to see protecting identities and information. So let me ask Austin Wilson and Richard Turner to come on out. (Applause.)

AUSTIN WILSON: Thanks, Bill.

So let’s take a look at what we’re doing to provide a fundamentally secure platform in Windows Vista. Today, in Internet Explorer you can click around in the options and accidentally end up browsing the Web in an insecure state. In Windows Vista you’ll notice that Security Center is always monitoring your Internet security settings, and will turn red and tell you immediately if you’re not in a secure state. In this case, I can simply open Security Center and with a single click I’m back to a secure state; I’m all green.

Now, within Internet Explorer, in Windows XP SP 2 we introduced the concept of the information bar to prevent Web sites from slamming ActiveX controls on users’ machines. ActiveX is how we provide a rich browsing experience through controls like Flash or Windows Media Player.

But there are many controls that were never designed for use on the Web, and here’s a great example. This is a Web site that’s going to offer me digital prints for a penny each, and I’ll click that. That Web site is going to try to invoke a control that’s already on my machine that has a vulnerability that we introduced as part of this demo. You’ll notice that we’ve extended the information bar in Windows Vista to allow me to provide consent the first time I run any ActiveX control. So I’m going to go ahead and run this control.

Now, had I ignored the information bar like many users will do, the control just wouldn’t have run, the page couldn’t have used it. But this is actually a malicious site that’s going to do everything possible to get a piece of malware on my machine, and let’s take a look at what it will try.

I’m going to go ahead and click this link to get started. And I see this message that says the service is unavailable, so it looks like the Web site is nonresponsive. In reality, that site tried to exploit a vulnerability in the control I just enabled to write an evil executable to my Startup folder. But let’s open my Startup folder up here, and you can see that it’s empty. And if I look at this other folder, I can see the evil executable. That executable is really in a harmless area under the temporary Internet files; that’s where it was written. Why? Because I’m using the new protected mode that’s part of Internet Explorer in Windows Vista. In protected mode, Internet Explorer is limited to writing only to the temporary Internet files folder and its own part of the registry. It’s unable to write outside that area. To preserve application compatibility, we redirect general writes to the file system to an area under the temporary Internet files folder. Any files written to that area are restricted to the same access to the file system as Internet Explorer itself.
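That redirection policy can be modeled in a few lines of hypothetical Python (the sandbox path is invented, and the real protected mode works on Windows paths and registry keys through a broker process): writes aimed outside the sandbox are transparently re-rooted inside it.

```python
from pathlib import PurePosixPath

# Invented sandbox root standing in for the temporary Internet files area.
SANDBOX = PurePosixPath("/low/temporary-internet-files")

def redirect_write(requested: str) -> str:
    """Given an absolute path a low-rights process asked to write to,
    return where the write is actually allowed to land."""
    path = PurePosixPath(requested)          # assumed to be absolute
    if str(path).startswith(str(SANDBOX)):
        return str(path)                     # already inside the sandbox
    # Re-root everything else under the sandbox, preserving the layout
    # so the application still finds the file where it "wrote" it.
    return str(SANDBOX / path.relative_to("/"))
```

This is why the demo’s “evil executable” landed under the temporary Internet files area instead of the Startup folder: the compromised control still believed its write succeeded, but the damage was contained.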

Let me move back to the browser. There are lots of sites that will do everything possible to get a piece of malware on a user's machine. And back on this site, while it's telling me the service is unavailable, it has this great, helpful link telling me to click here to get help. So I'll go ahead and click that link and run this.

This is actually a link to a piece of spyware that the site tried to get on my machine, but in Windows Vista we have Windows Defender integrated right in there to scan downloads and keep spyware from getting on users’ machines. So with a single click, the spyware is gone.

That’s a quick look at the multiple layers of protection we’re providing in Windows Vista for a fundamentally secure platform. Now Richard will show you what we’re doing in the area of the trust ecosystem. (Applause.)

RICHARD TURNER: Thanks, Austin.

Good morning. In this demo, we’re going to see a glimpse of how “InfoCard” can help me much more effectively manage my personal identity information.

We’re going to be logging onto this Web site, and we’re going to be booking a car and getting a discount against that booking, without ever entering any personal identity information by hand.

Now, one of the first things you’ll notice about this browser is that the address bar has gone green. This is a new feature of Internet Explorer 7 which essentially indicates this Web site is protected by a High Assurance Certificate.

In order to obtain a High Assurance Certificate, Contoso has to go through several extra steps to positively identify themselves to the certificate authority. The result is that I have a much higher degree of confidence that this is, in fact, Contoso that I'm dealing with, and not someone masquerading as Contoso.
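Conceptually, the browser's green-bar decision comes down to whether the site's certificate carries a policy the browser recognizes as High Assurance. The sketch below is a toy model of that check; the policy OID shown and the list-based lookup are assumptions for illustration — real validation also requires verifying the whole certificate chain to a vetted root:

```python
# Toy model of the green address bar decision. The OID below is used as an
# example of a CA-specific High Assurance policy identifier; real browsers
# keep a per-CA list and validate the full chain, which this sketch omits.
HIGH_ASSURANCE_POLICY_OIDS = {
    "2.16.840.1.113733.1.7.23.6",  # example High Assurance policy OID (assumed)
}

def show_green_bar(cert_policy_oids):
    """True when any policy in the certificate marks it as High Assurance."""
    return any(oid in HIGH_ASSURANCE_POLICY_OIDS for oid in cert_policy_oids)

ev_result = show_green_bar(["2.16.840.1.113733.1.7.23.6"])  # vetted site
plain_result = show_green_bar(["1.2.3.4"])                  # ordinary cert
```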

So now that we know this is the Web site we expect to be at, let’s go ahead and log in. Now, I could log into this site using my username and password, or at least I could if I could remember which username-password combination to actually use at this particular site.

So instead, I'm going to go ahead and log in with my "InfoCard." Now, watch this screen carefully when I press this button. This is the "InfoCard" UI. This is where I manage and control my "InfoCards" containing my personal identity information. It also helps me select cards that are appropriate for completing a given action, in this case logging onto the Web site.

So let's go ahead and look at the details of this card. Now, this is a self-issued card, a card that I manually created myself, containing a bunch of claims that I'm willing to assert about my personal identity; in this particular case, my first name, my last name, and my e-mail address.
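A self-issued card like this one can be thought of as a small set of claims the user asserts about themselves, of which only the claims a site requests are released. This is a minimal conceptual sketch — the field names and values are illustrative assumptions; the actual "InfoCard" technology exchanges signed, XML-based security tokens:

```python
# Minimal model of a self-issued card: claims the user asserts about
# themselves. Field names and values here are illustrative assumptions;
# the real system uses signed XML security tokens, not plain dicts.
self_issued_card = {
    "card_type": "self-issued",
    "claims": {
        "first_name": "Richard",
        "last_name": "Turner",
        "email": "richard@example.com",
    },
}

def claims_for_site(card, requested_claims):
    """Release only the claims a site asks for, and nothing more."""
    return {name: card["claims"][name]
            for name in requested_claims if name in card["claims"]}

# The login page asked for name and e-mail; only those are disclosed.
token = claims_for_site(self_issued_card, ["first_name", "email"])
```

The key property being modeled is minimal disclosure: the site sees only the claims it requested, not the whole card.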

So let's go ahead and submit this card to the site, and we can see that the site has now logged me in and has actually recognized who I am.

Now, that was a fairly simple scenario of logging into a Web site, but let’s go ahead and take a look at a slightly more advanced scenario where we actually apply for a discount.

I’m going to complete this form, I’m going to blow some expenses on a decent car for a change, and submit this. Now, I’m being asked for a payment card; I’m going to use a card that I previously used on a prior visit to this site. And I’m also being asked to enter a discount code, a membership number or something along those lines. And again I’m particularly bad at remembering those numbers. So once more I’m going to use “InfoCard.” Now, notice I once again get this same rich user experience. You’ll notice that my desktop has actually faded underneath this shell. This shell is running in a secure desktop, separate from my logged-in session, under a different user account, making it far, far harder for malware which might have made it onto my machine to actually attack this identity infrastructure.

Now, the other thing you'll notice about this UI this time around is that the only card I can use to complete this operation, to apply for this discount, is my Fabrikam auto group membership card. This is a membership card I can use at various auto sites.

Now, let's take a look at the details of this card. This is a managed card, different from the first card that you saw; it's issued by a trusted third party who's willing to corroborate that the set of claims made about my identity actually relates to me in some way.

You'll notice that this card contains several claim names: membership number, membership type, membership expiration date, and a name on the card. But you'll notice that there's no actual data associated with them. You don't, for example, see my membership card number. The reason for this is that all of that data is actually stored within Fabrikam's systems, and only exposed on demand from their secure token server.

So let's go ahead and submit this card. Now, I'm being prompted at this point to authenticate myself once more, this time to the secure token service running at Fabrikam, so that they know it's me that's requesting the release of my identity information, which gets sent back to "InfoCard." When I click the approve button now, "InfoCard" will retrieve that token and then submit it to the Web site. There, the Web site has now received my Fabrikam auto membership card and has applied my 20 percent discount.
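The managed-card flow just demonstrated can be sketched as follows: the card on the user's machine lists only claim names, and the values are released by the issuer's secure token service after the user authenticates. The class and field names below are assumptions for illustration; the real system exchanges WS-Trust security tokens with a Secure Token Service:

```python
# Illustrative model of a managed card and its issuer's token service.
# Names, passwords, and data are made up; the real flow uses WS-Trust
# tokens issued by a Secure Token Service after user authentication.
class SecureTokenService:
    """Stands in for the issuer's server, which holds the real claim values."""
    def __init__(self, claim_store, password):
        self._claim_store = claim_store
        self._password = password

    def issue_token(self, member_id, password):
        # Claim values are released only after the user authenticates.
        if password != self._password:
            raise PermissionError("authentication failed")
        return dict(self._claim_store[member_id])

# The card on the user's machine lists claim *names* only -- no values.
managed_card = {
    "issuer": "Fabrikam Auto Group",
    "claim_names": ["membership_number", "membership_type",
                    "membership_expiration", "name_on_card"],
}

sts = SecureTokenService(
    {"m-1001": {"membership_number": "m-1001",
                "membership_type": "gold",
                "membership_expiration": "2007-01-31",
                "name_on_card": "Richard Turner"}},
    password="secret",
)

# Authenticating to the issuer releases the token that goes to the site.
token = sts.issue_token("m-1001", password="secret")
```

The design choice being modeled: sensitive values never sit on the client; they exist only transiently in the token the issuer mints on demand.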

So let’s go ahead and complete this operation. Here we can see the discount being applied to my bottom line, and we finish and pay.

So there in just a few mouse clicks, without having to manually enter any personal identity information, we’ve gone ahead and logged in, we’ve booked a car, and we got a great discount on that vehicle.

Now, unfortunately, I've only been given five minutes up here on stage. If you are interested in this, we are running a session at 2:00 this afternoon to delve much more deeply into the identity metasystem and "InfoCard."

Thank you. (Applause.)

BILL GATES: Well, let me wrap up by talking about the need to focus as an industry on each of these four areas, to continue to make progress. After all, the opponent in this case, the person that's trying to invade these systems for financial gain, is not standing still. For every improvement we make they look for additional vulnerabilities, and so the idea that we have to improve, really improve broadly and in a consistent way, with tools that allow for broad adoption of these things, is I think extremely important.

In terms of the trust ecosystem, there are some key elements: the move to smart cards; the idea of the "InfoCards" that hold different pieces of information that can be stored in different places; and the support for the protocols that have come out of the standards process, which make it possible for people writing service-type applications to avoid duplicating any of that security code, so there's a lot of adoption there. We're really just at the beginning of the trust ecosystem, which is very, very fundamental. Even federation, most companies today are not doing that, and if you look under the covers, there's a lot of insecure activity and a lot of lost productivity because that's not in place.

Secure development practices, having code written at the highest level, sharing the code that does have security attributes, knowing whether you have the expertise internally or you need external review to make sure that that’s done well, and having the tools themselves be better and better to reduce the amount of code that people have to write, that’s very important.

Simplicity: There are a lot of ways to measure this, and one of the things we're really getting better at is looking at how many of those screens there ought to be in these systems. If people aren't setting things up the right way, it must be that we made it too complex to do it that way. Updating was a great example of that. We had tools for updating, but if you go back two years, actual adoption of those was at about the 15 percent level. On that one we've made enough progress that now about 80 percent of people are doing the right type of regular updates; obviously we want to get that to 100 percent, because updating is just one element of making sure there are no widespread attacks.

And finally, the fundamentally secure platforms; there's a lot of work still to go into that.

I’d say across all of these areas it’s very important to us, given the magnitude of investment we’re making, that we get your input, that we’re prioritizing things, working with others, driving the standards in a way that meets your needs, because we’ve all got a common challenge here, and yet an amazing opportunity to let these digital systems be used in the broadest way.

Thank you. (Applause.)