RSA Conference
Remarks by Scott Charney, Corporate Vice President, Trustworthy Computing
San Francisco, Calif.
March 2, 2010
ANNOUNCER: Ladies and gentlemen, please welcome corporate vice president, Trustworthy Computing, Microsoft, Scott Charney. (Applause.)
SCOTT CHARNEY: Good morning. It’s a pleasure to be here again. You know, I was here last year talking about our end-to-end trust vision and progress we made. And this is actually my anniversary with Microsoft. I’ve now been with the company since 2002, so eight years.
It’s really interesting, because when I went to the company, we had just started the Trustworthy Computing initiative. I joined on March 1st of 2002. Bill Gates had announced the initiative on January 15th of that year. So, you know, we’re in our ninth year of this journey, and it’s been a very, very interesting journey, and it continues to change. It started, of course, with us focusing on the PC. When I first got to Microsoft, we started with SD3: secure by design, secure by default, secure in deployment. And it was really about protecting the PC from the onslaughts that were coming.
But over time, of course, we recognized that it wasn’t just about protecting the PC, it was about protecting the ecosystem. So, we started to think about the end-to-end trust vision, how do we actually create a safer, more private ecosystem.
And then, of course, in the last year or two the cloud has become the big box; everyone is talking about the cloud.
So, it’s kind of natural for me to think about, okay, we have this end-to-end trust vision; how does it relate to the cloud? Does it work in the cloud? What changes?
And so I’m going to talk about a couple of things today. One, I want to talk about our progress on end-to-end trust generally. I also want to talk about how end-to-end trust is affected by the cloud. And I’m also going to show you a video and talk a little bit about our products.
And the reason for that is I am in a corporate staff job. I think about strategy and policy issues. But whether we’re actually effectuating against that strategy depends on whether it actually turns up in your products and services; not just for Microsoft but for the entire industry. It’s great to talk about how important it is to think about identity differently, but if people aren’t building identity technologies, what’s the point?
And it’s great to talk about the fact that we can harmonize identity and privacy, and we can have authenticated worlds but still protect anonymity, but what’s the point of having that discussion if you don’t effectuate it in your products?
So, I’m going to talk about all of those things today.
Now, before I do talk about the cloud, I want to remind people about what the end-to-end trust vision was, and I also want to talk about the current boxed product world, because while the cloud is the buzz today, the reality is there are still a lot of people all over the world running traditional boxed products, dealing with a range of threats.
But here in a nutshell was the end-to-end trust vision. It really has four different components. The first is that we have to do the fundamentals right. We have to design products to be secure, we have to deploy them in a secure state, we have to help customers keep them secure, we need to do defense in depth technologies — we all know that there’s no silver bullet, that this is about risk mitigation, not risk elimination — and we need to be able to respond to specific threats.
Additionally, we need to start building the trusted stack, and we heard some of that in the earlier keynotes; Art, for example, talked about the need to root trust in hardware.
We need to know that the software is from the source we think it’s from, that the applications are signed, that the data is coming from the people we expect.
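To give a feel for what “knowing the software is from the source we think it’s from” means mechanically, here is a minimal sketch of a publisher signing a package and a client verifying it before installing, using an Ed25519 key pair from the third-party Python “cryptography” package. This is a simplification for illustration only: real code signing uses X.509 certificate chains rooted in trusted authorities, and nothing here describes any particular product’s implementation.

```python
# Minimal sketch of publisher signing and client-side verification.
# Assumes the publisher's public key is already trusted by the client;
# real code signing distributes that trust via certificate chains.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the bytes of the package being shipped.
publisher_key = ed25519.Ed25519PrivateKey.generate()
package = b"...application binary bytes..."
signature = publisher_key.sign(package)

# Client side: verify before installing, using the publisher's public key.
trusted_public_key = publisher_key.public_key()
try:
    trusted_public_key.verify(signature, package)
    print("signature valid: package came from the expected publisher")
except InvalidSignature:
    print("signature invalid: do not install")
```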
We also need to think about identity. Identity is so important in the context of the Internet generally, and becomes such an amplified issue in the cloud, that it gets its own place in the stack.
You know, I’ve been involved in the identity space now for, oh, 18 years. I used to go to conferences (Center for Democracy & Technology events and, predating that, the Computers, Freedom and Privacy conferences), and they would put government people up and they would put civil libertarians up for the big debate about identity.
And the question was always phrased, do you want anonymity or accountability on the Internet. The answer is, yes, we do. And when you have an answer like that to a question, you realize that you’re asking the wrong question.
And so one of the things we’ve come to recognize is we can’t think about the Internet as the thing, you have to think about the application layer as the thing.
There are places where you want robust accountability on all sides of the transaction. A classic example: online banking. The bank wants to know it’s me, I want the bank to know it’s me. But there are other places, particularly things like free speech, where you may want a large degree of anonymity, so that you can protect the value of free speech, so people can say controversial things.
So, it’s important to have a very nuanced discussion about identity, and of course as you build this trusted stack and you put identity on the top, how are we going to manage all of this, how are we going to audit it so when things go wrong we can roll things back and figure out what happened?
And finally, end-to-end trust is about creating alignment, alignment between social forces, political forces, economic forces, and IT products and services. If you don’t create that alignment, you may have great technology that is economically unsustainable or objected to by many.
And so you have to recognize that sometimes politicians want to do something like protect children online, a noble cause, but the technology doesn’t allow reliable age verification, so the laws end up overbroad and get struck down by the Supreme Court.
You always have to think about this alignment issue. If you can’t find alignment, you will have trouble.
So, I wanted to talk about the traditional threats for a minute, boxed product threats and botnets in particular, because they pose a very interesting problem.
You know, I just heard a moment ago in the awards ceremony the discussion about how we have to be careful about how we describe the threat, that we neither overhype it nor underestimate it. And I agree with that completely — completely.
But in a lot of my dealings, both with industry, with government, with consumers, people have really struggled to understand the threat. Sometimes there’s sky-is-falling rhetoric. Some people diminish the threat, some people exaggerate the threat. Everyone is struggling to get their arms around it.
So, I just started thinking about why it is so hard to understand this threat. Because, you know, we deal with all sorts of things in our lives. When we cross the street, we know what the threat is: we might get hit by a car running a red light. And we know how to mitigate it, and we have statistics that show how often that threat actually materializes, what percentage of the time people get hit. We can get our arms around it, and then we can mitigate it. But in the cyberworld we have so much difficulty getting our arms around the threat.
So, I started to think about why that’s true. And ultimately I think there are five issues that really highlight the problem. First of all, there are a lot of bad actors out there, and there are many types of bad actors. There are criminals, there are organized crime groups, there are hacktivists, there are those committing economic espionage or military espionage. You see reports about nation-state activity. You have individuals, teenagers, and sophisticated organizations. There are so many types of actors.
And there are so many types of motives. There are traditional motives like fraud or distribution of child pornography. There are more sophisticated motives like economic espionage or military espionage. There are cutting-edge motives like cyberwarfare.
So, you have lots of actors and lots of motives.
The next problem is their attacks can look the same. You see an attack on a network, it doesn’t tell you anything about who the actor or the motive might be. And if you don’t know who the actor is and what their motives are, it’s very hard to figure out how to respond.
And so you have this kind of amorphous threat. It’s some actor — there are many — with some motive — there are many — doing something, and I can’t tell what it is.
And they’re doing it in a shared and integrated domain. It’s shared by consumers, by businesses, by governments.
And it’s integrated. You can’t tease these things apart. You can’t put consumers over here and the military over here. We’ve done that in the physical world. We know where the battlefields are, we know where the hospitals are, but here it all munges together.
The fifth point is that the worst-case scenarios are devastating and dramatic and scary. So, if you don’t know who’s acting, what their motive is, you can’t tell from their attacks, it’s in this integrated domain where so many things can go wrong, and the worst-case scenarios are horrible, suddenly you get stuck, you get into paralysis. And that’s why we have trouble grappling with the threat.
Against that backdrop, we can start to think about how to handle these things in a slightly different way, at least where we have attribution and can assign motive, and we can start thinking about what to do in the cases where we can’t, and where we can create a different framework for this.
Against that context it’s very interesting to think about botnets, because botnets exemplify the problem in this environment. So, it can be very challenging. There are many botnets now that are responsible for a lot of types of criminal activity, everything from spam to denial of service attacks. There are millions of botnetted computers around the world, and most of them are consumer computers, which raises interesting problems about how we protect consumers on the Internet.
So, we have to start thinking about, you know, innovation, innovation in both disruption and prevention.
So, one of the things we did at Microsoft is we looked at a particular botnet that was responsible for a lot of nefarious activities. This botnet was a top-10 botnet in 39 countries. Hotmail alone blocked over 650 million connections from infected computers. The Malicious Software Removal Tool that we run as part of automatic update cleaned almost 100,000 infected machines.
So, knowing that we had this problem, it was an opportunity to innovate around disruption and to start thinking about innovating around prevention.
So, how did we think about disruption? Well, one of the things we did is we turned to the courts. As I said, we always have to align not just social, political, economic, and IT forces, but we need to use social mechanisms and political mechanisms to reinforce values: If you think this behavior is wrong, then society should do something about it in a structured way.
So, one of the things we did is we used the court process. We went and did a court filing. It was supported by industry partners and academia. We got temporary orders to sever the domains. We notified the people whose domains were affected. We waited 14 days, and then we could basically cut off the head of the botnet.
Now, it’s an interesting idea, and is something that we will continue to pursue as appropriate. I don’t mean to suggest this is a remedy for the botnet problem; it isn’t. The point is that just like we do defense in depth in IT, we have to do defense in depth in response. We have to think about technical solutions such as cleaning machines, and we need to think about social solutions like using the courts and getting people involved.
So, to me, of course, this was an interesting way to proceed, but it also raises an obvious question. This is reactive. What do we do proactively to change the problem? So, we’ve started to think about this.
I actually think that the health care model, particularly as practiced by the World Health Organization and the Centers for Disease Control (other countries have similar organizations and authorities), might be an interesting way to think about the problem.
With medical diseases we basically educate people, and sometimes if you’ve flown to certain countries they’ll scan you for your temperature as you get off the plane. If you seem to be infected, you’re quarantined and you’re treated.
The question is, why don’t we do this for consumers? Why don’t we think about access providers who are doing inspection and quarantine, and cleaning machines prior to access to the Internet?
You know, and the reason this becomes so important is if you look back at the end-to-end trust paper, which was actually written a couple of years ago, there’s a line in it that says, “Who’s the CIO for the public sector?” The reason governments and enterprises can manage the botnet risk is because they have professional IT staff; this is their job, they manage this risk. It is much more complicated in the consumer space.
I mean, one of the things I love to do is just look at my own life, and one of the things I’ve done is, you know, I have three children and I have my mother, who’s fortunately still healthy. And so I’m one of the sandwich generation. I’ve got the mom and I’ve got the kids. And every now and then I juxtapose my four and a half year old with my 80-year old mother, in part because they behave so much alike it just astounds me. (Laughter.)
But let me tell you one way they also behave alike. My four and a half year old has learned to navigate with a mouse, and it’s just great to watch. He navigates with the mouse, and up pops this security dialogue. He can’t read. (Laughter.) He doesn’t understand it. He clicks okay.
Then I go to my mom. She’s got a PhD in education. She gets the dialogue box. She can read, she doesn’t understand it, and she clicks okay. (Laughter.) Okay? (Laughter, applause.) We can’t do it that way anymore.
The attacks are happening at light speed; we have to respond at light speed. So, we should think about inspection and quarantine.
Now, there are some obvious questions. Why should we be doing this for people? Will people accept it socially and politically? Well, we’ve done it with smoking. People used to smoke, and we said, look, you’re going to kill yourself, but if you want to die, go ahead. You’re imposing costs on the health care system; we’ll eat those costs, go ahead and kill yourself.
Then, of course, the EPA comes out with secondhand smoke. Suddenly, smoking is banned everywhere. You have a right to infect and give yourself illness, you don’t have the right to infect your neighbor.
Well, computers are the same way. We’ve told people: run anti-virus, patch, back up your data. But if you don’t do that and you lose all your stuff, that’s a risk you can accept. But today you’re not just accepting it for yourself, you’re contaminating everyone around you, right?
And we do this in other areas like vaccinations. If you have kids who go to public schools, they get vaccinated or they don’t go. We do that through enforcement.
And then there’s a question of who would pay for that. Well, maybe markets will make it work, but if not, there are other models: use taxes for those who use the Internet. We pay a fee to put phone service in rural areas, we pay a tax on our airline ticket for security. You could say it’s a public safety issue and do it with general taxation.
And there’s also a good role for government here. If access providers are going to scan machines, what can they scan for? If you want to protect networks, you have to limit the activity to network-protecting activity, not worry about copyrighted material or speech or other kinds of things.
But we do have to innovate, and this example gives you another reason to constantly think about alignment: how would you get social acceptance, political acceptance, find the right economic model, and use IT, which of course already does things like NAP and NAC today, to achieve these objectives. So, we have some interesting opportunities here.
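To make the inspection-and-quarantine idea concrete, here is a minimal sketch, in Python, of the kind of health-check gate an access provider could apply before granting a machine full connectivity, in the spirit of NAP and NAC. The health claims, thresholds, and segment names are hypothetical illustrations chosen for this sketch, not a description of any shipping product or provider policy.

```python
from dataclasses import dataclass

@dataclass
class HealthClaim:
    """Statement of health a device presents when it asks for network access.
    All fields are hypothetical examples of what a provider might check."""
    antimalware_running: bool
    signatures_age_days: int
    patches_current: bool
    known_botnet_traffic: bool  # e.g., flagged by the provider's telemetry

def admission_decision(claim: HealthClaim) -> str:
    """Return the network segment a device is admitted to.

    'quarantine' grants access only to remediation resources (updates,
    anti-malware signatures), mirroring the medical model of inspect,
    quarantine, and treat before re-admission.
    """
    if claim.known_botnet_traffic:
        return "quarantine"          # actively infectious: isolate and clean
    if not claim.antimalware_running or claim.signatures_age_days > 30:
        return "quarantine"          # missing or stale defenses
    if not claim.patches_current:
        return "quarantine"          # unpatched machines are easy targets
    return "full_access"             # healthy: admit to the open Internet

# Example: a machine with stale signatures is routed to remediation first.
print(admission_decision(HealthClaim(True, 45, True, False)))  # -> quarantine
```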
Now let me talk about the cloud, because the cloud is the big thing. First of all, let me make sure we’re talking about the same thing. You know, the National Institute of Standards and Technology has done a great deal of work on cloud definitions. There’s infrastructure as a service, where you just rent the hardware. There’s platform as a service, where you get the operating system; Microsoft has Azure at that layer. And then there’s the total outsourced model: infrastructure, operating platform, and applications. And there will be different kinds of clouds. There will be public clouds, private clouds, hybrid clouds.
To be clear, these models are not completely new. People have outsourced in the past, and put their data with third parties. Consumers are used to things like Hotmail, which is a software as a service.
But what is different about this cloud, what makes it different, is a couple of attributes. There will be some new platforms, like the operating system for the cloud that dynamically allocates resources. There will be global elasticity: you’ll be able to park data in various places all over the world on the fly, and you might not even know where your data is. And there is co-tenancy. Art talked about how Coke and Pepsi wouldn’t want to share the same hardware, but there will be co-tenants in the cloud.
And that creates a bunch of implications for those of us in the security and privacy space. The first has to do with shifting shared accountabilities. In the boxed product world we develop a product, we give it to you, and then you go out and configure it and patch it and do that management work. Depending on which cloud model you use, you take on more or less responsibility in the cloud. And there will be challenges for compliance, because the fact that your data is in the cloud doesn’t mean you don’t have compliance obligations.
Moving all the data together in the cloud can create rich targets for bad guys to go after. And there will be the question of how you do shared investigations. I’ll talk more about this in a minute. There will be a lot of information aggregation in the cloud, and questions about what the cloud provider can do with that information. And there will be questions about jurisdiction, which I’ll talk about, too.
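As a rough illustration of how accountability shifts across the service models, here is a small sketch mapping layers to whoever typically manages them under infrastructure, platform, and software as a service. The exact division of duties varies by provider and by contract, so treat this mapping as an assumption for discussion, not a definitive allocation.

```python
# Illustrative only: who typically manages each layer under IaaS, PaaS,
# and SaaS. Real contracts and providers vary.
RESPONSIBILITY = {
    #  layer              IaaS        PaaS        SaaS
    "physical hosts":   ("provider", "provider", "provider"),
    "operating system": ("customer", "provider", "provider"),
    "application":      ("customer", "customer", "provider"),
    "data & identity":  ("customer", "customer", "customer"),
}

def customer_duties(model: str) -> list:
    """List the layers the customer still has to secure under a given model."""
    column = {"IaaS": 0, "PaaS": 1, "SaaS": 2}[model]
    return [layer for layer, owners in RESPONSIBILITY.items()
            if owners[column] == "customer"]

# Even in the fully outsourced model, compliance for data and identity
# stays with the customer.
print(customer_duties("SaaS"))  # -> ['data & identity']
```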
So, if we think about the cloud implications, then I can start to think about the end-to-end trust vision and how it’s impacted by the cloud.
First, it’s more important than ever that we do the fundamentals right, and that your cloud provider do the fundamentals right, because you’re increasingly going to be relying on that third party. If you’re using a platform, was that platform developed with a Secure Development Lifecycle in mind? Whether the cloud provider is applying the same rigor that you would use is one of the things to think about.
We have to think about privacy-centric models: what rights, if any, does the cloud provider reserve to use your data, and how are they going to respond to threats in the cloud?
The trusted stack becomes important, as Art talked about. We need to root the trust in hardware, but it’s going to be virtualization that is the key to keeping the tenants separate. So, how are we going to make sure those virtualized compartments are secure as we move up the stack from the hardware?
Audit is going to increase in importance as we try and figure out what happened to the data.
And how are we going to do forensics investigations in the cloud? You know, not long ago, there was a hospital that got contacted by a bad guy who said they had stolen data, and if they weren’t paid money they would release the data. And they shared some of that data to show there was really a compromise.
Suppose that data had been in the cloud. The hospital calls the cloud provider and says, we think you lost the data. The cloud provider says, no, we think you didn’t manage your IDs right.
Now someone has to investigate. The cloud provider says, well, we’ll investigate the cloud components, and the hospital says, no, we don’t want you to investigate what we think is your own misconduct; we want to investigate. And then the cloud provider says, how can you investigate? We have co-tenants on this machine; you can’t see their data.
How we are going to manage those kinds of problems going forward is going to be very interesting, but having a trusted stack and having good audit trails will be critical to getting trust in the cloud model.
Identity and privacy are going to be key. Why? Because as people move more and more of their data to the cloud, their identity is the access point to that data. And it’s going to be critically important that companies produce technologies, something we’ve been talking about for years, that allow us to better manage identities, particularly across trust domains.
So, we have some good news to announce in this regard. First of all, we’re releasing Forefront Identity Manager 2010, which is a policy-based identity management system with a self-service model in it, so it’s much easier to manage identities.
The other thing that’s super important, though, is we’re releasing U-Prove technology. This is something that Microsoft got involved in a few years ago, and the key to understanding U-Prove technologies, and I’ll show you a video about it in a minute, is that it allows people to have multiple IDs. If you remember my presentation last year, the way you avoid the national identifier arguments is to grant people multiple IDs so they can use different IDs at different times. It’s a claims-based model, so you can pass claims about yourself instead of your whole identity. And it provides for limited disclosure tokens, so you can limit the amount of information you disclose and execute a transaction without revealing too much about yourself. At the same time, we have Active Directory Federation Services that will now allow you to federate IDs across trust domains.
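To give a feel for the claims-based, limited-disclosure idea, here is a hypothetical sketch: a user holds a set of issuer-attested claims and releases only the minimum needed for a given transaction. This is deliberately not the U-Prove protocol itself, which uses cryptographic tokens so a verifier can check claims without learning anything else; the claim names and sample data below are made up for illustration.

```python
# Hypothetical illustration of limited (minimal) disclosure with claims.
# NOT the U-Prove cryptography; just the disclosure principle in plain code.

FULL_IDENTITY = {
    "name": "Erika Mustermann",
    "date_of_birth": "1985-07-12",
    "address": "Example Str. 1, Berlin",
    "over_18": True,               # a derived claim the issuer attests to
    "resident_of_country": "DE",
}

def limited_disclosure_token(identity: dict, requested_claims: list) -> dict:
    """Release only the claims a relying party actually needs."""
    return {name: identity[name] for name in requested_claims if name in identity}

# An age-restricted service only needs proof of age, not name or address.
token = limited_disclosure_token(FULL_IDENTITY, ["over_18"])
print(token)  # -> {'over_18': True}
```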
So, how can we put this stuff together? Last year, I showed you a small prototype that we were doing with the Lake Washington School District where children got proofed in the school’s office and then had a credential.
Well, we’ve continued along this path, and we have now been working with a partner in Germany, getting ready for the German eID card that is coming, and I wanted to share with you a video to show you the work that is going on in Germany.
(Video segment.)
SCOTT CHARNEY: So, that’s an example of how we can take some of the principles we’ve been talking about for the last couple of years, and actually start applying them in real-world environments.
But we’ve also said that this is not a Microsoft problem, this has always been an ecosystem problem, and so it’s critically important that everyone participate.
So, today, we’re announcing a couple of things. First of all, with regard to U-Prove, we’re releasing the patented U-Prove crypto algorithms under the Open Specification Promise, and we’re donating two reference toolkits implementing the algorithms under a BSD license.
Additionally, we’re releasing a second specification under OSP for integrating U-Prove into Open Source identity selectors. That will be accompanied by preview code integrating U-Prove, ADFS, Windows Identity Foundation, and CardSpace.
The key is to get more people to embrace these kinds of technologies so that we can create the identity meta system that we’ve been talking about for quite a while.
And then finally, I want to talk about the cloud and social-political alignment. One of the things about the cloud is this elasticity and the fact that data can go anywhere. But what’s really interesting to me is that the cloud has the potential to alter the balance of power between the individual and the state.
In a pre-IT world if the government wanted data from me, they had to come to me to get it. They could bring a search warrant and search my house, or they could bring a subpoena and I could move to quash the subpoena or produce the documents.
Of course, the world has changed. First came the telephone and the capability to wiretap. But wiretapping was about ephemeral communications; you either captured them in real time or they were gone. Then came the first killer app for the Internet, e-mail, and suddenly there was a store and forward technology, and suddenly governments could go to a service provider and say, here’s a subpoena or a court order or a search warrant, you have to turn over certain records relating to this subscriber, and even the content of their mails. And so over time, there’s been the ability of the government to get more data out of this IT environment.
But as we migrate to the cloud, there will be even more of this. Everything will go to the cloud if the vision is right: your health records, your tax records, your diary, which you’ll want to be able to access from all sorts of different devices, from your phone to your PC to your Xbox to whatever. And the fact of the matter is as we move more and more of this data to the cloud, it means governments and litigants can go to the cloud and get that data without ever coming to the citizen.
The question is, is that the right place to be or not? The Center for Democracy & Technology has done some interesting work looking at reforming the Electronic Communications Privacy Act. But I think a big issue for social acceptance of the cloud will be whether people want to park their data in the cloud knowing that this may give not only their government but other governments around the world access to that data, because this cloud is going to have datacenters all over the world, and governments will go to companies in whatever country they’re in and seek data.
And so I have always believed that information technology should not dictate social policy. You create the social policies you want, and then you align your technology. This will be a big issue.
So, let me close by saying that it’s really important that everyone participate in these debates and in these issues. We need people to deploy robust identity solutions that are good for both security and privacy. Start using the latest products and technologies and get involved. Start thinking about how to innovate in both the prevention and response to the different kinds of attacks that we see on the Internet. And start thinking about how we can encourage governments to set normative behaviors for government activity on the Internet, and think about how consumers, businesses and governments have to manage this shared and integrated resource.
Have a great conference, thank you very much. (Applause.)