Craig Mundie: RSA Conference 2008

Remarks by Craig Mundie, Chief Research and Strategy Officer, Microsoft
Christopher Leach, Chief Information Security Officer, Affiliated Computer Services
RSA Conference 2008
“Enabling End-to-End Trust”
San Francisco, Calif.
April 8, 2008

ANNOUNCER: Ladies and gentlemen, please welcome Chief Research and Strategy Officer, Microsoft, Craig Mundie; and his guest, Chief Information Security Officer of Affiliated Computer Services, Chris Leach. (Applause.)

CRAIG MUNDIE: Good morning, everyone. Let me add my welcome to Chris Leach, who I asked to join me today, and start by explaining a little bit about why I did that.

Basically in early 2001, I was asked to take on two things at Microsoft, neither of which I had a lot of experience with, and neither of which were highly coordinated inside the company. One was security broadly, and the other was privacy.

As I started to learn about those things, to provide oversight to them and to develop our strategy, I was really struck in early 2001, because the security people would come and say to me, you know, we’ve really thought a lot about this, and if we just had perfect identity on everything, we wouldn’t have such security problems; we’d be able to track everything down.

And then I’d have another meeting with the privacy people, and the privacy people would say, you know, we’ve thought deeply about this, and if we just had perfect anonymity, we wouldn’t have any real privacy problems. (Laughter.)

I began to realize that there was a fundamental tension between these two important ideas.

In early 2001, on a global basis, I would have said that the privacy needle was shifting upward; there was a lot of focus and a lot of concern about that, and a lot of talk about legislation around it. Then along came September 11 that year. By then we had already begun to move down the path to what Bill Gates and I announced a few months later as Trustworthy Computing, as a concept and as a program inside Microsoft.

It really came from that fundamental realization that there was a tension between privacy and security, that the battle against the problems that we had in security at the time would continue to evolve, and that over time we’d have to find a way to balance these things.

I remember when we gave the first talks about Trustworthy Computing, both inside and outside, people would ask, well, when will you get it all fixed? I said, it isn’t clear we’ll ever get it all fixed. Society has battled crime for a long time, still does. It’s an evolving threat landscape, and the privacy issues will probably get more extreme over time and not less extreme.

So, we sit here in 2008 and it’s really clear to me now that despite huge progress on the security side, largely I’ll say more remedial than proactive, we do find ourselves in a situation now where the intimacy with which computing touches people’s lives, whether at work or at home, really is escalating the challenges we have in privacy.

So, I wanted to have a discussion not just as the technology provider but basically with someone who lives these things on an international basis on the operational side every single day. So, I asked Chris to come and chat with me today for the next 30 or 40 minutes with you about these issues, how they’re evolving, what we can do about them, and then talk specifically a little bit about some of the things that Microsoft is going to do to try to engage more prospectively on these issues.

So, Chris, welcome.


CHRISTOPHER LEACH: Glad to be here, Craig; appreciate the invitation.

So, let me ask you a question. As I listened to you speak, and as we’ve had some dialogue previously, you came into a very siloed view of security and one of privacy. I’m sure, as we’ve talked, that those are challenges in themselves, but have you seen a convergence of the two, like we have, where you can’t have one without the other, and there are some tradeoffs going back and forth between them?

CRAIG MUNDIE: No, it’s clear once we started down this path we actually combined both of these activities under one executive in the company, who reports to me, and we have always now found ourselves having to work in this push/pull relationship between security and privacy.

We just last fall announced HealthVault where we’ve entered into the personal Electronic Medical Record area, and the greatest concern that the company had as it went toward the launch of that was not whether we could secure the records, but whether the privacy issues surrounding this would overwhelm people’s view of the benefits. So, most of the effort went into getting the privacy policies right.

CHRISTOPHER LEACH: I guess from our perspective the struggle that we deal with is that if you look at privacy, the way we do privacy is through security. And then it becomes an issue of at what cost: how much is too much, how much is not enough? And that whole balance of security and privacy becomes an ever-changing landscape as the threats change. But from my point of view, if you were to look back at Trustworthy Computing and the other initiatives that have gone on, it’s really more: where are we focusing today, and then how do we get convergence, and then operationalize that and get people involved going forward?

CRAIG MUNDIE: Yeah, in this changing threat landscape, so much of the focus five, six years ago was on the desktop or the servers. Today, we have cell phones, we have people carrying laptops around everywhere. As you said, a lot of the focus was on making sure that you didn’t lose the data because of a security problem in the network or in the software. And while we’ve made a lot of progress on that, the number of threats that exist because of failures of physical security, or people taking thumb drives around, or other things, all of these are changing the way in which the bad guys seek to get access to the data. So, we’ve had to come up with mechanisms that are just practical ways of dealing with those threats, in addition to the traditional issues of blocking failures in terms of remote access.

CHRISTOPHER LEACH: I guess from our analysis and the things that we deal with on a daily basis in that regard, if I look at where we’re putting most of our resources today, it’s looking at where our losses have been. Have they been a breach of our network, or have we put processes and facilities in place that are securing our network? I think that’s where we are today, which is what you were saying. And where are our risks today? They tend to be around people wanting information, which is more valuable than raw data, and how are they getting it? The losses we’ve experienced have tended to be around, like you said, thumb drives going out the door, an unencrypted PC, Blackberrys being lost, and so those tend to be the issues we need to focus on, and those become privacy issues as well.

CRAIG MUNDIE: Sure. You know, over the last four or five years, we’ve started building what I just think of now as a foundation for dealing with these kinds of problems. We had to change the way we actually did the development work of our products inside the company.

CHRISTOPHER LEACH: And we thank you for that, by the way.

CRAIG MUNDIE: You’re welcome.

But it’s been clear as we got better at the bottom levels of the stack, the attack just moves up the stack. The last couple of years, the vectors that go in through the Office products and other things just show that we’re going to have to continue to focus on these issues.

So, I think that the foundation has been laid for good design practices, for good, basic security.

I’d say the biggest challenges we have now are moving into some of the management issues and more broadly some of the identity related questions as they draw us into the question of both how do you define what you want to secure and in context how do you establish the right privacy constraints around that.

CHRISTOPHER LEACH: And I would agree with that. If you look at it from ACS’s perspective, our number one audit issue continues to be, and has been for quite some time, identity. And I don’t mean just me as Chris Leach having access to the network; it’s also the identity of what we’re accessing with. So, that whole end-to-end piece is very, very critical for us when you talk about identity, and I think too often we end up focusing very narrowly on who Chris Leach is, as opposed to the broader point of view that we’re starting to take today.

CRAIG MUNDIE: Yeah, as we’ve thought about it now, and today we’re actually going to release a paper that’s targeted at helping to explain this question of end-to-end trust, you know, what are all of the issues that the industry, as well as the user community are going to have to focus on, what are the different layers of this problem?

Having got I’ll call it the core stuff in place, we now look at the next requirements being sort of a trusted stack of software. You can’t just look at any one piece. You can’t say, okay, the operating system is pretty hardened; the applications may or may not be. We really need to stitch these things together in some complete way.

So, today, we think of five different components that collectively we need to work toward to get this next layer of the trusted stack.

First is we’re going to have to have the devices be trusted, and today we’ve started to get things like TPM hardware in phones; some of the hardware-based credentials are already present in the phones. The question is, how do we bring these together, use them as a root of trust in order to bootstrap ourselves up.

I think the next thing that’s going to have to happen is that we have to have a trusted version of the operating system, not trusted just in the sense that it doesn’t, quote, make mistakes or won’t have a lot of vulnerabilities, but you have to actually know what you’re running. So, there’s going to have to be more energy applied to figuring out how you bring up the stack of software on a machine and can really have some attestation of what’s there.

The applications are going to have to be trusted as well, and we’re going to have to know those that are essentially certified or attested relative to the practices that have been brought to bear on their construction, just like we do today for the operating system. That doesn’t mean others won’t exist, but I think we’re going to have to know which ones are in the category that we trust, and which ones are not.

Increasingly the identity question is part of how we deal with trusting people, and the processes of how we manage people and their operation.

And then finally, I think we’re going to have to put more energy into maintaining the provenance of the data that we’re dealing with, so that if it’s been altered or you don’t know exactly where it came from, you can take some special action there, too.

CHRISTOPHER LEACH: Let’s back up just for a second. We start talking about operating systems, hardware, devices, all those, and then we kind of go into identities and there are some policy issues around that, and then the A word, the audit word or attestation word.

But one of the concerns that we look at is interoperability of all that, because if it’s difficult, if it’s not seamless, if we can’t go end-to-end quickly and efficiently, we don’t do it. I don’t think that’s necessarily out of band from anybody; we just don’t have the time to do it. So, interoperability to me will be critical to make this thing work. But I agree with you, I think end-to-end it needs work, but when we start talking about hardware standards and so forth, do we need to consider those types of things, interoperability?

CRAIG MUNDIE: It’s clear, I mean, in the 15 or 16 years I’ve been at Microsoft, you know, I’d say that the era that we’ve entered in the last few years in terms of the pressing customer demand and requirements is this question of interoperability.

In fact, interestingly, about two years ago — we started the Trustworthy Computing program, as I said, in 2001, but two years ago, we actually added a formal component to it, which was interoperability, because the trust that people had in these systems wasn’t just whether they could get the one system to work by itself, but could they trust that the data could be moved and that the systems could operate between them.

So, we’ve as a company made a lot of commitments lately to interoperability, and I think we’re very focused on that.

And, in fact, as we go forward now in working to deal with all of these end-to-end trust issues, one of the things that we want to do now is formalize a dialogue with a lot more people about how do we specify these things, and then how do we negotiate the things where we think interoperability will be mandatory.

CHRISTOPHER LEACH: So, you’re not looking at this as a Microsoft end-to-end solution; you want dialogue?

CRAIG MUNDIE: Absolutely, not just dialogue but ultimately we need collaboration with other people who are building some parts of the products in the system. You know, we’ve recognized that it’s just the practical reality is that people are going to have a lot of heterogeneous systems. In fact, as you move beyond just thinking of enterprise computing, and you think of this as it relates to all the different forms of computing in your life, even in your residential environment, and all the portable and mobile things that people carry around, there’s no way that we could posit a solution to this that doesn’t somehow seek to address all of the interactions of those devices.

CHRISTOPHER LEACH: I think you keyed on something that’s very critical as I think about things coming at us from a business environment. It’s the anywhere, anytime, anyhow access.

So, when you look at up-and-coming individuals — and this relates back to identity — people who are in high school today, who pick up a cell phone and text faster than I can put an e-mail in a Blackberry, who have grown up in that environment and want to have access anytime, anywhere. If I can’t provide that for them because I’m concerned about this end-to-end trust, they’re going to go to work for somebody else, the competitor, and I’m going to have a workforce issue. So, I think this is something that we have to deal with today.

CRAIG MUNDIE: You’re right. Actually, on this stage at this conference last year, Bill Gates and I did one of these types of conversations, and one of the things that we talked about in that conference was this anywhere access requirement. We’ve been steadily moving down that path to create the mechanisms that will allow literally any device to access any data set, run any application from any location. We talked last year about the kind of technological steps that would have to be taken to do that. Many of those are being put in place. I think that they create the next level of the foundation on which we can build this sort of interoperable collaborative environment.

As we move beyond some aspects of just getting the platform right, we do find ourselves in a combination of technical and policy areas having to deal with another five issues, which I think we should talk about briefly.

The first, which we touched on, is identity, and the claims around identity. These we think are going to be critical in terms of how we find the structural balance between the privacy requirements in a given context and the security requirements.

In computing, identity started out as username and password. We’ve made some progress with two-factor authentication and other ways of presenting credentials or tokens, but those things have still been more or less an all-or-nothing kind of capability.

Recently, a few weeks ago, we announced that we had acquired Credentica, and their U-Prove technology, which we think is going to be an example of a way to realize this requirement where we can tease apart some of the individual claims around identity or elements of identity and present them individually, and therefore be able to prove certain pieces of information without disclosing too much.

CHRISTOPHER LEACH: I think you’ve hit on a key point. I think if you look at an identity, Chris Leach as an identity, or Craig as an identity, there are many facets to our identities. I have my work identity, I have my online identity when I’m at home, I have other types of identities, and so it’s never just an easy question of here I am; it has to be contextual as well.

CRAIG MUNDIE: Right. And so I think that there are two things that require a lot more work and are yet to come, but one is the ability to establish policies based on these individual identity claims, not just on who you are, or even just that you have multiple personas. I think that we will get down to finer granularity. For example, you should be able to present a cert that just says, hey, I’m over the age of 18, and not disclose anything else, but allow a Web site or a business to know that they’re dealing with someone who qualifies as an adult.
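The minimal-disclosure idea described here (proving "over 18" and nothing else) can be sketched in a few lines. This is a hypothetical Python illustration, not the actual U-Prove protocol: real minimal-disclosure systems use blind or asymmetric signatures, whereas this sketch uses a shared HMAC key purely to show the shape of issuing and verifying a single-claim token.

```python
import hashlib
import hmac

# Placeholder key; a real issuer would use asymmetric or blind signatures,
# never a secret shared with verifiers.
ISSUER_KEY = b"issuer-secret-key"

def issue_claim(claim: str, value: bool) -> dict:
    """Issuer derives a token covering exactly one claim, nothing else."""
    payload = f"{claim}={value}".encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "value": value, "sig": sig}

def verify_claim(token: dict) -> bool:
    """Relying party checks the signature; it learns only this one predicate."""
    payload = f"{token['claim']}={token['value']}".encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["value"]

# The holder presents only the over-18 token: no birthdate, no name.
token = issue_claim("over_18", True)
print(verify_claim(token))
```

The point of the sketch is the shape of the data: the verifier sees one predicate and a proof of its provenance, rather than a full identity record.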

I think another reason this is going to be quite important is the variability, on a global basis, of the laws and regulations that govern a lot of this stuff. We now have a set of technologies, in terms of Internet and online capabilities, that don’t stop readily at a customs station on the way in and out of a country. So, it is a challenge for businesses, more and more of which are global businesses, to deal with the vagaries of different international laws.

So, I think we’re going to have to be able to represent a lot of these facts in ways that will allow systematic, programmatic determination of the legality or legitimacy in these different jurisdictions.

CHRISTOPHER LEACH: It seems to me we’re talking about two things here. One is situational privacy: being 18, I should be able to say nothing else in the cert than that I’m 18, with proof around it. I think there are times when that’s important. I think there are other times when I want to know everything about you. In my mind, taking the Internet from a transactional environment, where we buy and sell goods, to one where we’re able to collaborate, I want to know a little bit more about you. I want to know what information you’re sharing, a little bit about that data. And for us to move from where we are today, from transactional to collaboration, where I think we can get a lot more bang for the buck out of the Internet, those are some of the things we need. So, situational privacy is going to be an issue we’ve got to deal with.

CRAIG MUNDIE: Yeah, another way that situation enters this equation I think shows up as roles. You have a lot of things where we want to authenticate and then authorize access, but we don’t want to do it to just a person, we want to do it to a person in a role. This is more of the contextual environment, and not just in a geographical context, but in an application context.

Another example of this is the new software we’ve been doing for use in the healthcare industry. Even if you want to give a doctor access to a particular set of your records, you don’t want to grant each doctor access individually; you want to say, look, any doctor who’s in the emergency room should be able to see this. And so you have to be able to establish roles and then essentially manage people and their functioning within those roles.
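A role-based check of the kind described above can be sketched minimally; the roles, permissions, and user names below are all illustrative assumptions, not drawn from any real product.

```python
# Permissions attach to roles ("emergency-room doctor"), not to people.
ROLE_PERMISSIONS = {
    "er_doctor": {"read:allergies", "read:medications"},
    "billing_clerk": {"read:insurance"},
}

# Users are mapped to roles; adding a new ER doctor requires only a role
# assignment, never a per-record policy change.
USER_ROLES = {
    "dr_jones": {"er_doctor"},
    "pat_smith": {"billing_clerk"},
}

def can_access(user: str, permission: str) -> bool:
    """Authorize through the user's roles rather than the user directly."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(can_access("dr_jones", "read:allergies"))   # any ER doctor qualifies
print(can_access("pat_smith", "read:allergies"))  # billing role does not
```

The design choice is the one Mundie describes: the record's policy names the role, and the management system handles who currently occupies it.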

This is another thing that’s going to have to percolate through our management systems, and it creates an identity on another class of objects, if you will, security objects that we have not universally applied within the system, but it is something that we’re focused on adding as we move up the stack.

CHRISTOPHER LEACH: One of the things that we have to deal with again on situational issues — and roles-based identity management you mentioned — is when you cross international boundaries it becomes a whole different world.

For example, one of the things that we’ve learned to deal with is that here in the United States my employee ID number is not a protected piece of information; however, if I cross the pond, that same employee number is now protected. So, it’s very difficult for us, when we talk about policies and the people side of this, to have a single policy that applies. And if we start looking at crossing those boundaries even within companies, with federated identity, how do I have a policy? I mean, what is your view? How do you think this will all play into federating an identity, having a single policy or not, and dealing with those international issues?

CRAIG MUNDIE: Well, I think one of the issues is that, given just identity or role management tools, it becomes too difficult to deal with these things; you have to be able to state, in some logical sense, what you expect to happen and where, as a function of these things.

Another thing that we worked on, developed, and I think released from a specification point of view to many other people, is a technology we called SecPAL, the Security Policy Assertion Language. We had never really had a language, in the sense of a computer programming language, that would allow people to make logic statements about what systems should do and when they should apply it. Some of the research people at Microsoft had worked on this problem for a while. It actually showed up first in the work we were doing with people in grid computing, where you had all of these heterogeneous machines that existed in different security domains, and yet people wanted to collaborate and allow some type of collective use of the computing elements. They needed to be able to describe who was allowed to do what, where they could do it, and when they could do it, and that grew into this policy language.
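The flavor of such a policy language can be suggested with a toy evaluator. The code below is an illustrative Python sketch of logic-style authorization statements with delegation, in the grid-computing spirit described above; it is not actual SecPAL syntax or semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    issuer: str    # who asserts the statement ("A says ...")
    subject: str   # who the statement is about
    action: str    # what is permitted
    resource: str  # where it is permitted

# Illustrative facts: a root authority authorizes alice directly, and
# alice in turn vouches for bob on the same resource.
facts = {
    Fact("GridAdmin", "alice", "submit-job", "clusterA"),
    Fact("alice", "bob", "submit-job", "clusterA"),
}

def authorized(subject: str, action: str, resource: str,
               root: str = "GridAdmin") -> bool:
    """Grant if the root authority says so directly, or if an issuer who
    is itself authorized has delegated the same right. (No cycle
    detection; fine for this tiny acyclic example.)"""
    direct = Fact(root, subject, action, resource) in facts
    delegated = any(
        f.subject == subject and f.action == action and f.resource == resource
        and authorized(f.issuer, action, resource, root)
        for f in facts if f.issuer != root
    )
    return direct or delegated

print(authorized("bob", "submit-job", "clusterA"))  # granted via alice
```

A real policy language adds conditions (time, location, role) to each statement; the point here is only that authorization becomes a derivation over declared facts rather than a hard-coded access list.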

I think that tools like that are going to become both another interoperable component between heterogeneous systems, and they will move us to another level in terms of the rigor with which we can describe these situations and describe what actions the systems should take to conform to the different laws and regulatory requirements.

CHRISTOPHER LEACH: Coming back to interoperability, something that just crossed my mind as we’re talking: we always focus on interoperability of boxes, machines, hardware, and so forth. One of the things I think we avoid, because it’s very difficult, is interoperability of the laws and regulations we have to deal with. From our perspective as an outsourcer, where we deal with multiple regulations in multiple countries, we find that there’s a lot of overlap where one law doesn’t apply. So, I think there’s a need to look at the interoperability of laws and policies so that they can come together, and so we can reduce some of the complexity we have to deal with, complexity that also makes security and privacy extremely hard.

CRAIG MUNDIE: I think that’s true. Both of the earlier keynote speakers mentioned along the way some of the issues of regulation or legal environments. This is a complicated space, and one of the reasons that we put together, and released this morning, the whitepaper called End-to-End Trust was to create a framework where a lot of people could begin to collaborate on a discussion of these points.

You know, it’s very difficult, particularly in a rapid moving environment, to legislate the answer for the future. It oftentimes represents an overly constrained solution, because people become prescriptive about what you might do to solve today’s problem, and that ends up reducing the freedoms you need to solve the long term problem.

So, this End-to-End Trust proposal that we’ve put together is not a product roadmap, it’s a way of framing the problem not about the remedial security and technical issues, but, in fact, all of these things that get to the questions of authentication, authorization, access, audit, and then what the various social, economic, political and technical issues are, and how they come together.

We would like everybody in the industry, whether you’re on the legislative side or the government or regulatory side or the users or technical side, to really get more organized in this discussion, and to be able to begin to think about how we’re going to address those problems. So, that is something that we really think is a big requirement and one that everybody is going to have to play a more active role on.

CHRISTOPHER LEACH: Another thing I would add to what you said is the cost. We always have to balance that cost in there.

Let me ask you another question about something you mentioned as we started our dialogue, our fireside chat, so to speak: that you took on the security issue and found out what the security problems were, and then the privacy issue, and we’ve talked about how they’ve been integrated. Do you feel that in order to have this end-to-end trust that we’ve been talking about, I now need to give up my privacy, my anonymity, or is that something that can continue? That’s something we struggle with.

CRAIG MUNDIE: Well, I think in the real world people make these choices every day. I mean, I remember when the credit card was first introduced, people said, oh my goodness, isn’t this a problem, the bank will know everything about what I spend my money on? But over time people felt that the benefit of getting credit and the ease of that was a good thing, and they didn’t feel abused by the information that the bank collected in the process of doing that.

So, there’s always a balance required between the people who gather the information and whether the person whose information you’ve collected is feeling good about what you do with it.

I think at Microsoft we made sort of I’ll call it again a foundational decision a few years ago, which was to say, look, everything starts in privacy with notice and choice, that if you at least notify people what you’re collecting and you give them choice as to whether they participate or not, so that it isn’t an inadvertent collection, you know, then you start with at least a foundation of trust in this privacy battle.

But to some extent the intimacy with which computing touches people now makes this problem a lot more pervasive, or has the potential to make it a lot more pervasive, than we’ve seen in the past. So, I think there is a new set of challenges.

But I also think there are going to be new technologies that can be brought to bear on that. We have a researcher in our Mountain View lab named Cynthia Dwork, who has been working on privacy-enhancing technologies that protect individuals when you are examining things across a large data set. So, for example, if you have information that people want to use to study a drug or a drug interaction, and people choose to participate, even then they don’t want to face the prospect of losing control of their personal medical histories as a function of that. But it’s clearly a good thing to participate if you have a disease and the drug might resolve that disease in a favorable way. So how do you deal with that?

Well, one of the things they’ve been able to do is calculate how to inject noise into a data set: enough noise that no particular piece of individual information can be reliably deduced from it, so triangulation can be defeated to some extent, but at the same time an amount of noise that is statistically insignificant, relative to the sample size, for the purposes of the analysis that was declared at the outset.
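The technique described here is in the spirit of what Dwork’s line of work formalized as differential privacy. A minimal sketch, with an illustrative epsilon value: a count query changes by at most one when a single person joins or leaves the data set, so Laplace noise with scale 1/epsilon masks any individual while leaving large-sample statistics usable.

```python
import random

def noisy_count(true_count: int, epsilon: float = 0.5) -> float:
    """Return the count plus Laplace(1/epsilon) noise: enough to hide any
    one individual's presence, negligible against a large cohort."""
    scale = 1.0 / epsilon
    # The difference of two independent Exp(1) draws is a Laplace sample.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# For a 10,000-patient cohort the noise is a handful of counts in typical
# draws; for inferring any single patient's record it is decisive.
answer = noisy_count(10_000)
```

The balance Mundie describes falls out of the scale parameter: smaller epsilon means more noise and stronger individual protection, at some cost in analytic precision.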

And I think that there will be more clever things that people do along those lines that will help us find a balance not by just saying, nope, you know, I’m going to hold my information close and not let anybody have access to it, or I’m going to deny myself access to useful services that might be advertiser supported, simply because I have an overriding fear of misuse of the data.

I do think that the key is going to be about choice, and we’re going to have to give people not just the opt-in/opt-out level of choosing, but a bit more refined ways of saying, look, I’ll authorize a particular use of my information in return for a particular potential benefit, as long as I know what my risks are.

CHRISTOPHER LEACH: But I would like to add that there will be times when you want complete anonymity. In my opinion, part of the growth of the Internet has come because anonymous things are going on; a lot of stuff has been moving forward that way. So, while I agree that end-to-end trust will be critical to get to collaboration, to make sure that we understand who we’re dealing with, why we’re dealing with them, and when we’re dealing with them, I also believe there will be situations where we want privacy: islands of privacy, where I know that when I go there I don’t have to give any information, I can be completely anonymous, as opposed to a B2B type of scenario where I want to know everything.

CRAIG MUNDIE: I think that’s true, and certainly this comes down in a sense again to the choice question, not just the choice to participate but you have to be able to identify the regions or the zones, if you will, of the Internet where the ground rules are complete anonymity, and likewise society I think will increasingly demand that they know that there are certain places where identities are really well known.

So, for example, if you’re putting together an online playground for young school kids, you really want to know about all the identities of the people who have access to that. You don’t want people lurking in your playground if you don’t really know who they are in a reliable fashion.

So, I think that, much as happens in the physical world, we will come to understand that there are certain places where you’re not expected to have to be identified, where you enjoy the ability to move around.

Even in the physical world today, you know, people are worried that different types of surveillance are going to impinge on their personal privacy, and I think as we get in this more connected society we’re going to continue to always find these challenges.

CHRISTOPHER LEACH: Do you think that the bar we have in the electronic world for identity and all those kinds of things — you mentioned surveillance — is at a higher level than we have in the physical world today?

CRAIG MUNDIE: No, I actually think it’s lower today. I think we started in the Web world where identity was not really present, and so we started with a world where it wasn’t that you were seeking anonymity; you just didn’t have to present credentials to do most anything.

But I think as this becomes a substitute for more and more of the things that we did or it becomes a surrogate for more and more of the things that we do in the physical world, that the society will come to demand more reliable presentation of credentials and information about people in order to feel comfort, and we will see the emergence of the need for these new forms of credentialing. I think it’s a natural thing, and as long as people are given the choice between having it and not having it, as a function of what they seek to gain access to, then I think we’ll find a happy medium.

CHRISTOPHER LEACH: But I guess I’m back to if it’s too difficult to do this end-to-end, if there are too many moving parts, I’m not going to do it. If those parts cost me too much money, they’re too difficult, whatever that is, go back to what you said; it was easier when the Internet — I mean, the Internet exploded because it didn’t matter what I was doing. But I think we need to balance that with all the problems that anonymity has caused in many, many different ways. So, there is a balance we need.

CRAIG MUNDIE: Two things are clear: the overall management systems today are not integrated enough, and they’re too complicated. That has been a major area of focus for Microsoft, and I think later today some of the product groups here at RSA will make some announcements about the next steps that they’ve taken in building a more integrated way to deal with access technology, security, identity, and a management solution for all those things.

So, for the people on the professional side, you know, we’ve recognized that it’s too hard — therefore it costs too much, or you just don’t get it right. There’s been a lot of focus within the product groups on that lately, and there will be more that you can hear, I think, as they hold their press conferences later today.

I think on the consumer side there are two things that I hope will come together in a positive way. We put the CardSpace mechanism into Vista as a baby step toward introducing a GUI that people would find familiar, the way they use credit cards and driver’s licenses. In the physical world there is some token that people use to present particular types of credentials, and we wanted some way on the Internet to do that, too. And if you take the Credentica technology and merge it with the CardSpace GUI, I think for the first time we may be getting close to a model that people can understand at a personal level, to the degree that they understand how to present credentials in the physical world. That may make them comfortable managing the different elements of their persona, or the different facts that they want to deal with, and not just a username and password. So, in one fell swoop we get beyond the liabilities of those overly broad and overly simplistic identity mechanisms, and into a world where there’s a manageable strategy for presenting credentials on the user’s side across different device categories, a much more integrated solution for managing it, and, hopefully, a way to deal with the cost issues on the enterprise side.

CHRISTOPHER LEACH: So, we’ve got a long road is what you were saying. It’s a very long road.

CRAIG MUNDIE: A long road.

CHRISTOPHER LEACH: What about today?

CRAIG MUNDIE: Today, I think we’re in a transitional situation, at least at Microsoft, where we are focused on moving beyond what we did in our first generation of trust — on security and privacy — and into this trusted stack environment. In fact, I only brought one slide, which maybe they’ll put up here, that basically says in our mind there are three parts to this End-to-End Trust thing, and we want to move toward that.

We’ve done a lot of work in the foundation level. We’re starting to do work in this middle trusted stack level. And we’re adding more and more of these core security components that people can use to build the next generation of applications.

Getting there requires that we focus today on improved management. Today, we can’t get these things established in the enterprise in a uniform way, and a lot of the focus now is on that, at least in the things you’ll hear about from the company today and in the weeks ahead.

But this End-to-End Trust model — and there’s a URL on the bottom of the slide — is where we’d like people to go and begin the dialogue. We need a lot of work; we can’t do this by ourselves. Even if we did it just for our products, that would be fine, but it wouldn’t work in the world that you work in every single day. We need to get ahead of the power curve in thinking about how we bring these things together, what protocols and formats are going to be required to ensure interoperability, what regulatory environment we want to wrap around that, and how we deal with that on an international basis. So, I guess the call to action today is: get good at operating what you have, and help us think about going to the future.

CHRISTOPHER LEACH: So, we have an open invitation to participate in this road with you?

CRAIG MUNDIE: Absolutely.

CHRISTOPHER LEACH: We’ll be there.

CRAIG MUNDIE: Thanks a lot.

Thank you, everyone. (Applause.)
