Scott Charney: RSA Conference 2009

Remarks by Scott Charney, Corporate Vice President, Trustworthy Computing, About Moving Towards ‘End to End Trust’
RSA Conference 2009
San Francisco, Calif.
April 21, 2009

ANNOUNCER: Ladies and gentlemen, please welcome corporate vice president, Trustworthy Computing, Microsoft Corporation, Scott Charney. (Applause.)

SCOTT CHARNEY: Thank you very much. Last year at RSA, in conjunction with the RSA program, I published a paper called “Establishing End to End Trust.” It was kind of my vision for the future, but I want to explain to you how I arrived at that vision.

You know, in 1991 I was assigned to prosecute cybercrimes for the Justice Department, and in my initial indoctrination there were really three cases in the 1980s that made it clear that the U.S. government wasn’t prepared for what was about to come. The first was the Cuckoo’s Egg, which was the KGB hacking of the Department of Defense that was documented by Cliff Stoll. The second was the Morris Worm, which shut down the Internet in 1988. And the third was the Legion of Doom case in Atlanta in 1989, where three hackers penetrated BellSouth and had the ability, by their own admission, to shut down the phone system for the southeastern United States. It was the first attack on critical infrastructure.

In 1991, when I took over cybercrimes, I convened all the usual industry players to talk about what we were going to do in this new threat environment. These were the earliest of the CSO community, and they were all passionate about security but said markets wouldn’t support it.

Throughout the 1990s there was growing awareness of cybercrime as a problem, and the problem just kept getting worse and worse and worse.

And ultimately I coined the Charney Theorem. I was just watching the trends. I came up with the theory; I named it after myself. Here it is. Get ready to write it down. You ready? There’s always a percentage of the population up to no good. That’s the whole theory. (Laughter.)

But the reason it was important is in the pre-Internet world those people were in physical space, and we had police patrols and neighborhood watches, and in the Internet world, as the whole population migrated to the Internet, so did the criminal population.

And there were four things about the Internet that made it a great place to commit crimes: it was globally connected; it was anonymous (we can argue about how anonymous); it lacked traceability (it’s hard to find the source, sometimes technically hard, sometimes politically hard); and it was full of really rich targets: financial data, personally identifiable information, military information, business information.

If everyone’s vision of the Internet is correct, global connectivity is going to continue to grow. There are a billion people online, and 5 billion who aren’t.

If our vision of cloud computing is right, there will be more rich targets online as more and more people do more and more things in the cloud. So, if global connectivity is going to continue to grow, and there will be more and more rich targets, what are we going to do about the criminal population? That is a huge challenge.

And ultimately you realize in the physical world we managed the crime problem because we have preventative things that work — police patrols, neighborhood watches — and we have reactive things that work as well — court systems, law enforcement agents. The Internet doesn’t provide those kinds of mechanisms.

And, of course, it’s important to understand, as was talked about a little bit this morning in Art Coviello’s keynote, that the threat model continues to change in important ways. Things are moving up the stack. Attacks are getting more sophisticated. It is a truly global problem.

So, ultimately in 2002 I went to Microsoft as the chief security strategist to work on security. Initially my friends laughed, because I used Microsoft and security in the same sentence. But it turns out in the years that followed I think we’ve proven we’re very serious about security, and we made a lot of changes in the way we build products. We changed our process, and we incorporated SD3 – secure by design, secure by default, secure in deployment. We embraced the Secure Development Lifecycle, the SDL, so that we build threat models at design time, and we put security milestones throughout the process. And we do final security reviews, so we don’t ship products that we know have important or critical vulnerabilities.

We did all that work, and yet as we were coming up to RSA last year, you still got the sense that most people are still very worried about their safety online, that security and privacy are not yet at acceptable levels on the Internet.

So, I started thinking about what to do next. The fact of the matter is, the things that we did were right. They were fundamental things. We talk about secure coding and automatic updating. All those things are critically important.

But we had to do more, and ultimately I wrote this paper, “Establishing End to End Trust,” that has four major components. The first is that we have to do the fundamentals right. You have to build security in at the beginning, throughout the development lifecycle, and keep people secure after they’ve deployed your product in the marketplace. You have to focus on defense in depth, because you know no single solution will work in all cases. And when bad guys do become adaptive and change their techniques, we need to do specific threat mitigations, like we’ve done with phishing filters or the Sender ID Framework.

And that’s where we were from 2002 to last year, in large part. And while it’s important work and has to continue, it would be a mistake to think that will ever be enough. Why? Secure development is great, but we’re not going to get vulnerabilities to zero. Defense in depth is great, but we’ve learned from all sorts of examples that defense in depth can be penetrated by persistent adversaries. Specific threat mitigation is great, but it’s reactive.

So, I started thinking about what we do next, not just Microsoft, the ecosystem, all the partners, all the people building applications, what the policy people should be thinking, and this is the vision. It consists of a trusted stack. We have to root trust in hardware, because it’s less malleable than software.

We need to sign operating systems and applications so people know the source, and that they haven’t been tampered with.

At the application layer it’s somewhat harder, because there will be three buckets of signing. There will be software signed by people or organizations you trust, there will be software signed by organizations you don’t trust, and there will be software signed by Joe’s Software, and who’s Joe? There we have to use the Internet and its reputational platforms to figure out how people decide what they want to install on their machines. And we need to know the source of data: in recent attacks, it’s very often an attachment that contains the malware, and the recipient thinks it’s coming from a known source, but actually it’s spoofed and it’s not.

And then you need trusted people. We need to know who we’re dealing with on the Internet. The way we do identity today is completely flawed. I go to a Web site, they challenge me for some personal information — social security number, date of birth, mother’s maiden name — they validate that information, and they give me a credential. Of course, those secrets aren’t secret at all. Yet that is the way we’ve done identity on the Internet.

And so we need a different model for thinking about identity, one that allows authentication in the right places but not in all places, and not an authentication model that strips away anonymity and the values that anonymity protects such as free speech, such as political debate.

And when you talk about trusted people, the subject is so sensitive that it gets its own place at the top of the stack. That is, we have to think about identity in its own sphere and figure out how to manage identities online, and I’ll talk more about that.

And then we have to figure out how we manage and audit all of the things that we have built, building on the keynotes we heard earlier about how you know what’s actually going on in your organization.

And finally, and this is important, we need to have alignment. We need alignment between social forces, economic forces, political forces, and IT.

Too often the information technology community has a solution, but they can’t figure out how to monetize it or it’s not acceptable for some other reason. Too often the politicians may have an objective, a worthy one like protecting children online, but the technology is not supportive and it has too many unintended consequences. Too often good ideas fail because the alignment isn’t there, and that in a way makes them not such a great idea.

So, the goal is to build this trusted stack, and do it in a way that builds alignment between all of these forces.

So, I want to talk this year about some of the proof points, some of the things we have done in each layer of the stack. We published this paper last year. This is a long term vision, but we are working. So, I want to talk about the fundamentals first.

In our products, of course, one of the things we have to do is make people more secure. SmartScreen technology in IE 8, for example, helps users avoid downloading malware from suspicious sites by warning them and blocking access to those sites. That kind of just-in-time, actionable information is critically important. Too often in our user interfaces people get information that’s not readily actionable.

The SDL has been a key to Microsoft’s reduction in vulnerabilities in generation after generation of our products. One of the things we decided very early on is that we were going to share the SDL with the developer community, because it’s critically important to understand this is an ecosystem problem.

So, in the early years we published a book on writing secure code, which is how we trained our developers. We’ve published a book on threat modeling, which we do at design time. We published a book on the Secure Development Lifecycle itself.

But in the last two years we’ve done two more things. One is we’ve publicly released a threat-modeling tool. One of the challenges with threat modeling is that it can be as much art as science. If you want to do something at scale, if we remember that attacks are moving up the stack and we need ISVs, everybody, to do threat modeling, you have to build tooling capabilities. So, we built a threat-modeling tool and we made it publicly available.

The second thing is we had a lot of customers and organizations coming to us and saying we’ve seen the results from the SDL, can you teach us how to go through this process so that we can emulate the success that you have had?

The problem is we’re a software development company, not a teaching organization per se.

So, one of the things we did is we created the SDL Pro Network, we went to some of our key partners who you see there, and we did a train the trainers program so they can go into organizations and help them deploy the SDL.

The second important part, once you go past the fundamentals, is to make progress on the trusted stack from the bottom up, and you will see in our products, and in some of our plans for the future, that we’re constantly focusing on how we can build trust into the stack.

Let me be clear what I mean by trust. I do not mean absolute trust. This is not a binary concept. Trust has to be reasonable and relative to what you’re trying to accomplish. There are some people I trust a lot, there are some people I trust a little. There are some people I used to trust but I don’t trust them anymore.

Sometimes it’s about harm. Maybe I’m willing to give my credit card to a vendor who I do not know and don’t necessarily trust well, because I know if the transaction doesn’t work I can roll it back. So my risk is low, so I’m willing to engage in a transaction that in other circumstances I might require a higher level of trust. This applies not just to people, of course, but machines. You may trust the machine to connect to your network. Then patch Tuesday happens; now you’re not sure if you should trust the machine to connect. So, trust is not a binary concept, and this is true throughout the stack.
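That non-binary view of trust can be expressed as a toy model: whether to proceed depends both on how much you trust the other party and on how much harm a bad outcome would do, discounted when the transaction can be rolled back. Every number and threshold below is invented purely for illustration.

```python
# Toy model of graded trust: the required trust level scales with the
# exposure, and a reversible transaction (e.g. a credit card with
# chargeback) caps the exposure. All constants are invented.

def proceed(trust: float, loss_if_bad: float, reversible: bool) -> bool:
    """trust is in [0, 1]; loss_if_bad is the worst-case loss in dollars."""
    exposure = loss_if_bad * (0.1 if reversible else 1.0)  # rollback caps risk
    required_trust = min(1.0, exposure / 1000.0)           # more exposure, more trust needed
    return trust >= required_trust

# Credit card with chargeback: low trust in the vendor is acceptable.
print(proceed(trust=0.2, loss_if_bad=500.0, reversible=True))   # True
# Irreversible payment to the same barely trusted party: declined.
print(proceed(trust=0.2, loss_if_bad=500.0, reversible=False))  # False
```

The same shape applies to machines: a client's "trust score" for network access can drop after Patch Tuesday until it demonstrates it has been patched.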

So, what have we been doing? The first thing, of course, is we’re starting to use the TPM, hardware-based encryption, to do security. In Vista, of course, we had BitLocker, which is full-volume encryption, and in Windows 7 there is also BitLocker To Go, which doesn’t rely on the TPM but encrypts your portable removable devices so they remain secure when you pull them out.

The other thing that we’re doing is what’s called AppLocker. One of the key points I made earlier is that you need to know the source of the applications you’re running. In Windows 7, AppLocker allows a system administrator, by Group Policy, to require code signing before things are installed on client machines. As a result, it gives you the ability to block unsigned code, or code from organizations that you don’t trust or for other reasons just don’t want running on your internal network.

The other important thing that we talked about earlier, in Art’s keynote, is this idea of a security ecosystem. In Microsoft products we’ve had information rights management for some time, which has been very helpful. You basically create an e-mail, attach a document, click a button, and by clicking that button you ensure that the mail does not proliferate across your organization, because you can set permissions for the recipient of that mail; for example, they can only read it, without forwarding or printing it.

One of the challenges with IRM though has been that it works great within an organization but not across organizational boundaries. This is a classic case where we are better together when we partner with others than we can be alone, and by doing this partnership with EMC we take the capabilities of IRM and go cross-boundary.

It goes to the concept that we heard about earlier, that we have to be more information-centric. You all know about the rumored death of the firewall, right? The idea was that we were eventually going to move to an environment where, instead of walled gardens, we focus more on individual devices and on discrete pieces of information. It’s very important to understand that some of that comes true in something called DirectAccess in Windows 7.

I’m going to explain kind of how this approach came about. I’ve been using DirectAccess now for quite some time. It is a huge productivity gain. Here is what happened. It used to be, in the old days, that if you were remote from work you would RAS in, and you’d go through this kind of laborious RAS-in process, and then you’d connect to the corporate network, and you’d have access to the entire network.

It turns out most people who were RAS-ing in just wanted e-mail and calendar. So, by using RPC over HTTP in Outlook we created an environment where you can just load Outlook, connect to your mail and your calendar, get that stuff without going through the RAS-in process.

The challenge was when you got a mail that asked you to approve an expense report, you’d click on “approve,” and of course it would say “server not found” because you didn’t RAS into the corporate network.

In DirectAccess we have a different model. It uses IPsec over IPv6, and when you’re connected to your mail and you click on the expense link, your machine, your client, goes out and makes a peer-to-peer connection with the expense server, and you approve the expense report.

The fact is, wherever I am, whether it’s next door at the W hotel, in this conference center, or at home, it’s like being in the office.

And the interesting thing from a security perspective is that it means your machine, your box, your client becomes all the more important, because it has credentials that give you access to your network in this model.

So, one of the things we require is two-factor logon to the client. Whenever you close the lid, whenever you boot it up, you’ve got to do two-factor.

I tell this story because we’ve always had the ability to enforce two-factor, for years, but customers didn’t want it. Users at Microsoft didn’t want to have to pull out a Smart Card or a USB dongle with a Smart Card in it, so that they could just connect to their clients. Suddenly, though, when you say, well, if you’re willing to do that you get this productivity gain, suddenly they’re all over it.

It really goes back to the model that security for security’s sake doesn’t work well. You really need to think about how you encourage people to embrace and adopt new technology by giving them a productivity gain or features that make the security tax, if there is one, worth paying.

Now, one of the challenges, of course, is that as we build this trusted stack we have to figure out how you manage what’s happening in your environment, and how you audit what’s happening. This is not about a single point. There are many security features running in an organization: firewalls, anti-malware, intrusion detection systems. There is a range of things that organizations run. And the problem is how you harness all that information, manage it, and actually get it massaged so you can make actionable decisions in near real time.

I’d like to show you a quick video to show you the work that we’re doing in this space.

(Video segment.)

SCOTT CHARNEY: Now, one of the things that’s not pointed out there, but is important, is that in a heterogeneous organization you will have security applications running that are not Microsoft applications. And so one of the things that Stirling enables is an API so you can pass data from other third-party products into this console and get an all-up view. Because, as Art noted earlier, it really is about understanding the ecosystem, and taking a lot of disparate pieces of data and getting a picture of what’s going on in your enterprise.

Now, the other very important part of the stack has to do with the identity meta-system. To some extent this is the most controversial part of the trusted stack concept, because of concerns about privacy.

So, I want to talk a bit about identity. It’s really important to understand how I think about the problem, because again it’s not binary, it’s not that I know who you are or I don’t know who you are; there are some very important nuances that need to be understood.

My real revelation in identity management actually occurred three and a half years ago when my son was born. My wife and I did not know if we were having a boy or a girl, and we picked out one name for a boy and one name for a girl. My wife said to me, by the way, when the baby comes out, if I look at him or her and the name doesn’t fit, I’m going to rename it on the spot. (Laughter.) I said, well, you’ve done most of the work, I’m OK with that. (Laughter.)

So, we go to the hospital, out comes the little guy, I look at my wife, she goes — I go, OK, we’ve got a name. And the doctor says, what’s the name, and we said, Dillon. Our custom is to name children at birth; that’s not true everywhere. So, I say the name is Dillon, and the doctor writes out a birth certificate, and this is the way we do ID: a combination of social custom plus government document, right?

When we went to take him to preschool and we went to register him, they said, who is he? We said Dillon. They said, how do we know? We said, here’s his birth certificate. Social custom, government document.

When he’s older, over my dead body, he’s going to want to drive. (Laughter.) He’s going to go to the DMV and say, my name is Dillon, and they’ll say prove it. So, he’ll pull out his birth certificate.

Then one day he’s going to want to travel overseas at his own expense. (Laughter.) Hopefully. So, he’s going to go to the post office, because they do the in-person proofing for the State Department passport, and they’re much more rigorous than the driver’s license bureau. They won’t just take one document; they want two forms of ID. So, he’ll produce his birth certificate and his driver’s license, which he got because of his birth certificate, because we named him in the hospital, right?

All identity is based on in-person proofing and then derivative identity. That’s how it’s done.

We don’t do that on the Internet; we do the shared secret thing I talked about before. We need this model, in-person proofing, followed by derivative identity.

Imagine an environment where somebody is charging something with your credit card number, and at the end of the transaction, before it’s finalized, they say, show us your e-government ID so we can validate that the name on your ID is the same as the name on the credit card. Suddenly what happens? The ability to do that transaction and commit the fraud goes away, right?

Now, it’s very important to understand that in your life you get multiple forms of identification cards based on in-person proofing. So, I have a birth certificate, in-person proofing when I was really tiny; I have a driver’s license, in-person proofing by the state; I have a passport, in-person proofing by the post office, federal; I have a Microsoft ID, in-person proofing by Microsoft based on my driver’s license; I have an alumni card for my school, based on in-person proofing with my driver’s license. I have lots of IDs, and the point is, if I can pass different IDs at different times, it becomes much harder to correlate the data about me.

So, I might use a state ID for state transactions, a federal ID for federal transactions, a Microsoft ID for business transactions, and a bank ID for financial transactions; you go to the bank, and what do they look at? Your driver’s license, and they have you sign a signature card.

The other important thing about identity is that it’s not always about who you are; sometimes it’s just about a claim about you, an attribute: How old are you? What is your state of residency? The classic example I like for this: when I was younger, I used to get proofed at bars. Before they would serve me a drink, they would ask to see my driver’s license.

Now, my driver’s license has a picture, my hair color, my eye color, my address, a driver’s license number, and a date of birth. They actually didn’t want to see all of that data. They really wanted to see the picture (is this really him?) and the date of birth (is he old enough that I can sell him alcohol?). The other identity attributes they didn’t care about, but I had to share them, because I had to either give them the whole driver’s license or none of it.

On the Internet we can do it differently. We can pass data attributes, and it’s critically important that we do, because it also allows things to happen that are good both for commerce and for individuals. Think, for example, about things like targeted anonymous advertising: if I know a bunch of people share an attribute, I might be able to target ads to them, but I don’t need to know who they are; I don’t need any of those particulars.

So, it’s really important when we talk about identity to understand that it has to be based on in-person proofing, that all identity is derivative, but if we focus on the fact that people have multiple identities and identity attributes may be all we care about, we can create an identity meta-system that allows us to achieve the right objectives.

And, in fact, we have a product now, codenamed Geneva, that is a claims-based identity system. Essentially what it does is it allows you to pass claims about a person as opposed to full identity.
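The claims idea can be sketched in a few lines: an identity provider signs a claim (say, "over 21"), and the relying party verifies the signature without ever seeing the rest of the driver's license. Everything below is a simplified, hypothetical illustration; the shared HMAC key stands in for the real token-signing standards a claims system would use, and the field names are invented.

```python
# Minimal sketch of claims-based identity: the bar learns only the
# "over 21" claim, not the address or license number. A shared HMAC key
# stands in for real token signing; invented for illustration only.

import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"   # hypothetical; a real issuer uses PKI

def issue_claim(subject: str, claim: str, value: bool) -> dict:
    """Identity provider: sign a single attribute claim about a subject."""
    token = {"sub": subject, "claim": claim, "value": value}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return token

def verify_claim(token: dict) -> bool:
    """Relying party: check the issuer's signature, then the claim value."""
    payload = json.dumps(
        {k: token[k] for k in ("sub", "claim", "value")}, sort_keys=True
    ).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["value"]

token = issue_claim("alice", "over_21", True)
print(verify_claim(token))   # True
```

The relying party never handles the underlying document, only the signed attribute, which is the selective-disclosure property the driver's-license story is getting at.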

Now, one of the things that a lot of people have been concerned about is child safety. So, what I’d like to now do is show you a video of a prototype project that we’re doing in Washington State.

(Video segment.)

SCOTT CHARNEY: So, before I go, I want to highlight two more things. The first is, I talked about this issue of alignment. There is a lot of activity now going on between government and the private sector, within industry, and with consumer groups to get that alignment. The CSIS Commission report is one example of that. The Berkman Center’s task force on child safety is another. The key is understanding that on many of these issues many different constituencies care about the same thing, and if we can get the right kind of alignment and the right technologies in place, we can achieve real solutions.

The second thing I would say is that you, as security professionals, really need to get involved. You can learn about the threats; we heard a lot about those this morning. You can focus on the fundamentals and the things that you build. You can develop new scenarios that enable the use of TPMs, for example, enable the use of InfoCards, enable the identity meta-system to fulfill its promise, and you can participate in and enable change.

You will hear other presentations during the course of this week, like Melissa Hathaway talking about her 60-day review. You should hear about what the U.S. government and what other governments are doing, and participate in informing public policy.

Have a great conference. Thank you very much. (Applause.)