Speech Transcript – Craig Mundie, RSA 2002 Conference

RSA 2002 Conference
Remarks by Craig Mundie
Feb. 20, 2002

ANNOUNCER: Ladies and gentlemen, please welcome Vice President of Product Marketing, RSA Security, John Worrall.

(Applause.)

JOHN WORRALL: Well, thank you very much. Welcome to day two of the keynote sessions for the RSA Conference 2002.

We hope that you’ve really taken advantage of all the opportunities to learn about the market, the technologies and where they’re going in the past couple of days and will continue to do so, and that includes going to the class tracks. It includes the vendor expos and certainly the keynote presentations, and of course it includes the after hours networking opportunities with your peers that can be so valuable and a great learning experience for you at such a conference.

As you look through the program for this afternoon, you’ll notice that we’ll be quite busy. We’ll be looking at the road ahead with Microsoft, lifting the bar with infant technologies, browsing through Compaq’s customer files a bit, and I think finally we’ll be wrapping up with steering into the changing face of risks with Computer Associates.

So today’s lineup features leading companies in the IT industry and, in fact, leaders of those companies. So let’s get started.

I have to say that we’re very pleased to have such strong participation from the Microsoft Corporation at this year’s conference with Monday morning’s .NET tutorials, yesterday’s participation in the identity management panel with Brian Arbogast and with today’s keynote from Craig Mundie, Vice President and Chief Technology Officer for Microsoft, entitled “Microsoft and Security: The Road Ahead.”

Craig Mundie is the Chief Technical Officer of Advanced Strategies and Policies. He reports to Bill Gates, Chairman and Chief Software Architect, and he works with him on developing a comprehensive set of technical, business and policy strategies for the Microsoft Corporation.

His role includes coordination of aspects of these strategies, where their implementation spans multiple Microsoft product groups.

He focuses on Internet scale platform architectures, the definition of consumer computing experiences as part of Microsoft .NET initiatives and technical and policy issues surrounding critical infrastructure protection, intellectual property and trustworthy computing.

Please give a warm welcome for Microsoft Corporation’s Craig Mundie.

(Applause.)

CRAIG MUNDIE: Good afternoon, everyone.

What I’d like to do in the next 40 minutes is share with you a bit of the focus in Microsoft on how we’re trying to move the company and to some extent the industry forward in this quest to make computing and telecommunications more trustworthy.

This is not a new initiative at the company. There are many people who have been at it for quite some time. But getting the entire corporation to focus on this as one of its foremost objectives represents a significant change. That change was brought about in January by a now well-publicized memo that Bill Gates wrote to the entire corporation, saying it really won’t matter what we and other people do in the future if ultimately people won’t trust their computers. And so if we want to continue to build these franchises and all succeed together, then clearly we’re going to have to put more priority on the trust factors than we have in the past.

This conference is important because security, we’ve learned, is a key component of achieving many of the different objectives that are key to people feeling a sense of trust about the computer systems that they use today and will use in their lives in the future.

So this is an industry-wide problem. It isn’t something that any one programmer or any one company, even any one country ultimately will be able to address and completely deal with by themselves.

The security breaches are quite common, despite a lot of effort by our company and many others. It doesn’t matter whether you’re big or small, the problems seem to persist and, in fact, the threats continue to evolve at a fairly rapid rate.

The coincidence in September of the terrorist attacks on the World Trade Center and on Washington, DC, coupled that same month with the Nimda virus as a successor to Code Red, really did produce a palpable change in the psyche of the user community, and that was true around the world. It became very clear that there was a demand, and also a receptivity to change, that had really never been there at any time in our corporate history.

Ultimately, people are going to have to believe in three different things. They have to believe in the technologies themselves, the products. They’re going to have to believe in the companies and ultimately the services that come from companies, because together these things will create a reputation that either is trustable or isn’t.

The other problem that we’ve found in the last year or so as particularly I and some people at Microsoft started to focus on this at the highest level in the company is that there’s really no common framework for discussion and management of these problems. There are lots of ad hoc mechanisms inside and outside companies but there’s no really good way to have a consistent dialogue about it. And so we decided that we would set out at least for our own account to try to address that problem in a methodical way in the company and then ultimately to share some of that thinking with other people.

Security is just one part of trustworthy computing. We won’t be able to ignore the things that we don’t trust. This problem in my mind is a fractal problem. A lot of the discussion, and in fact my own focus on this problem, started four years or so ago around the question of critical infrastructure, which most people think of as the power grid and the water supply and the telephone network. But as computing gets diffused into every part of our daily lives, it will extend to the entire range, from the ultimate personal computing devices of the future, perhaps like digital, wirelessly communicating pacemakers, to data centers and Web services, and you’ll be just as unhappy to have your pacemaker hacked as you would to have your data center hacked.

These are not just technological questions, but business policy and practice issues as well.

So as a company in communicating to ourselves we’ve said to the employees, “Look, ultimately computing has to find its way into many devices and people shouldn’t worry any more about that than they do about telephony or electricity today.” You certainly worry about them when they’re not there for some reason but fortunately, at least in most places, they’re there most of the time and we don’t get up every day wondering whether it’s going to be a problem for us to really worry about.

But we don’t have the luxury, to some extent, of the telephone system or the electric power grid, where, in fact, there’s a fairly centralized form of control of those environments and there’s not a lot of complex system diffused at the edge of the network; yet that will be the characteristic, more and more, of the computing world that we all live and work in.

So when I’ve thought about this I think that there are really three different time horizons that we have to focus on.

In the short term we’ll work harder, everyone will, to improve the designs, to improve our implementation techniques, and to change policies in favor of security and privacy. These have been underway for some time, but clearly we’re ratcheting up the level of focus on these short-term questions.

Then there’s a set of medium term issues: How do we actually deal with the fact that most of the computers will be in an unmanaged environment in the future? Therefore you don’t have the crutch of an IT organization to lean on to deal with the administration and remediation of these problems. And that leads you to believe that the service components and even the systems themselves have to exhibit more properties that are almost like the biological metaphors where they’re self-organizing and self-healing and without that it will be hard to believe that people will be able to keep up with the amount of computing capability that surrounds them in their daily lives.

Longer term, I think there has to be an increased focus on education and on academic and even government and commercially funded research. Microsoft and companies like IBM actually do a disproportionate share of the world’s computer science research today, and ultimately that probably has to change as well. And some of these things have to be guided by much longer-term thinking than we typically apply when we think about solving problems in, let’s say, a three to seven year time period.

The product cycle, if you will, for infrastructure products like Windows is actually much longer than people think about. This is year 12 for Windows NT as a code base and it’s really the code bases in their entirety that represent generations of these products, not the individual releases.

And so the question is when will Microsoft and when will other companies decide that the fundamental way in which we build these things in the long-term has to incorporate new techniques or technologies.

So in November Microsoft sponsored a Trustworthy Computing Conference in Mountain View at our campus there, and we invited lots of people from both the privacy world and the security world. We had held one other of these conferences the year before, where the focus was on safety on the Net, but this time I decided that I really wanted to try to elevate the discussion and not end up with a situation where you had the privacy people on one side complaining about the security people.

And this was, of course, in the aftermath of the terrorist attacks. Prior to September, many people who talked about these issues would be very focused on one or the other and, in fact, they would worry about the dynamic tension between the two, but it was only the cognoscenti that seemed really focused on it. After the September events, every time you turned on the radio or television you could frequently find somebody talking about whether you were willing to trade your privacy in order to have greater personal or societal security, and it seemed like the world tilted at that moment in favor, at least near term, of security.

So we at this conference rolled out a thing that we called the Trust Taxonomy. And the idea was to tell people how we were thinking inside the company about the issues that would ultimately lead to trust and to suggest that it became both a taxonomy for a discussion and a way to encourage other people to do what we’re doing, which is to elevate their concern about these issues and to try to develop their own methods of managing toward these goals.

So we decided that there were five things that ultimately lead to trust; privacy, for example, is a key one. What was interesting as we developed the taxonomy was the conclusion that security per se wasn’t a goal; it was, in fact, one of the more important means used to achieve some of these goals. If you were perfectly secure, that in itself wouldn’t give you anything. It’s just an enabler on which you can move forward with comfort to achieve the things that are really, ultimately, your objective, whether it’s personal, business, governmental, what have you.

So we looked at a lot of these things and have begun to think of this as an inventory. Then we stopped and thought one level further that even if we knew what the goals were and we really focused a lot on the implementation issue, the means, it also became clear that you can screw it up in execution.

And so we began to think separately about measuring things like what are intents in terms of management assertion and business policies and practices underneath that, what are the risks you’re really trying to manage for, and I think the industry to some extent has succumbed to the binary nature of computing and said, “Oh, it’s a zero or a one; you’re either secure or you’re not secure.”

But nothing else in our life really operates with those kinds of absolutes. There are always tradeoffs that exist between convenience and economic considerations in everything that we deal with, and I think that more and more those tradeoffs will have to become evident and made more consciously against a broader array of parameters than we’ve historically thought about. The implementation has to be something that is really focused on meeting these objectives.

And one of the things that the industry, I think, has to have more focus on is the question of the evidentiary record. That deals not only with evidence in the sense of audit mechanisms that would allow the creation of better tools, but also, perhaps, the ability to create evidentiary records in a legal sense, because, in fact, one of the big problems that we have in this world today is that there really is insufficient deterrent. Many things in our societies reach an equilibrium because, in fact, there’s a balance between the deterrent that comes from law and law enforcement and what people decide their risk is.

And today the reality is that you can argue whether the laws are sufficient. They’re clearly not harmonized on a worldwide basis, even though the computing and Internet environments really don’t stop easily at geopolitical boundaries, and as such we have really never focused as an industry on creating the things that are necessary to aid and abet law enforcement or to increase the policy and legal status of deterrence in this area. And I think those are going to have to change as time goes on as well and they will alter the balance a little bit.

So we thought about this trust scorecard and started thinking about it as a three dimensional cube, where the goals are in one axis, the means are in a second axis and then finally the execution parameters are in the third, because they all interact with one another.

And the goal was to think about how we could use this inside Microsoft to give ourselves grades, to create management mechanisms that would allow us to methodically focus, to put weights, as business policy and leadership objectives and responsibilities demand, on the intersection of these different parameters, and to be able to communicate those things clearly to different parts of our organization.
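To make the scorecard idea concrete, here is a minimal sketch of how such a three-axis cube of weights might be represented; the axis values, weights and function names are purely illustrative assumptions, not Microsoft’s actual taxonomy or tooling.

```python
# Hypothetical sketch of the "trust scorecard" as a three-axis cube:
# goals x means x execution, each intersection carrying a management weight.
# All axis values and weights below are invented for illustration.

GOALS = ["privacy", "availability", "safety"]
MEANS = ["security", "quality", "development practices"]
EXECUTION = ["intent", "risk", "implementation", "evidence"]

# weights[(goal, means, execution)] -> relative priority assigned by management
weights = {}
for g in GOALS:
    for m in MEANS:
        for e in EXECUTION:
            weights[(g, m, e)] = 1.0  # default; leadership raises or lowers per cell

# Example: emphasize privacy as the goal, achieved via security,
# judged by the quality of its evidentiary record.
weights[("privacy", "security", "evidence")] = 3.0

def trust_score(cell_ratings):
    """Combine per-cell ratings (0.0 .. 1.0) into one weighted score."""
    total = sum(weights[cell] * rating for cell, rating in cell_ratings.items())
    return total / sum(weights[cell] for cell in cell_ratings)

print(trust_score({("privacy", "security", "evidence"): 0.6,
                   ("safety", "quality", "risk"): 0.9}))
```

The only point of the structure is that every cell, a goal reached through a means and judged by an execution parameter, can carry its own explicit weight and be discussed on its own terms.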

And so we’re moving along in order to create this, and interestingly some of the auditing companies, the big five auditing companies, after I rolled this out publicly in November, have looked at it and said this is an important concept, because ultimately the auditors are going to be asked to opine as to whether companies have risks in these areas and today they don’t have a good scorecard, they don’t have a technological basis to interact with a company, particularly one that’s moving from a world of paper to a world of all electronic record keeping. And so I think there are many benefits to using this as a way to guide the discussion.

One way to think about this is it helps us clarify the different issues and how they relate to one another. Back months ago when we talked about Passport, one of the things that was engendered was what I called “Passport anxiety.” And there were different questions that arose and a lot of times they all get blended together in the dialogue in the press or in different conferences, but at least by taking some of the questions and parsing them along this taxonomy — for example, will my data be safe; well what does safe really mean in this environment — I guess you can interpret that as saying well if privacy relevant to your data is the goal and security is a means to help achieve that, then the risk that you’re trying to mitigate or manage is the unauthorized access to the user’s data and therefore you can say, “Okay, I understand the role and relationship of these things, and therefore if I quantify the risk I can decide what a suitable investment is in this area.”

On the other hand, if you say that the question is, “Well, will my data be shared,” you’re not really asking a technological question. You’re really asking a question about the business practice or policy of the person who has your data.

And to the extent you worry about that, then, in fact, having an evidentiary record or an audit mechanism that would allow you to gain some assurance as to whether that business practice is actually being followed is important. But it’s a completely different question and, in fact, not one that you ask of the programmer by and large but one that you ask mostly of the business manager.

So another question that has been around for a while is what I’ll call Kerberos anxiety, and today I want to make an announcement in this speech. There have been many people who, in a similar way to the privacy and data question about Passport, had asked a lot of different questions about Kerberos and Microsoft’s implementation. And many customers and partners have told us that they want to be able to interpret the Kerberos authorization data and as such to allow interoperability between our products and others.

And so after some consideration, today we’re announcing that Microsoft will now publish and grant a royalty-free license to the specification for the group membership PAC data, which is part of the application specific fields that we have been using in our system, and we think this is actually what people have been asking for.

On this chart the blue part on the left is sort of the generic use of Kerberos for authentication. But in the application specific fields we’ve actually done two different things. The first is where we record and pass along between the systems the group membership data, and that was a specification that we have made available to people but only under a license that allowed them to use it within our own environment.

So today we’re changing that. We’ve made this available today to the IETF as an informational specification and we’re also publishing it on the MSDN Web site, along with the attendant royalty free license that will allow people to take it and build whatever system they want to build that will then interoperate with this part of the authorization mechanism.

Just to be completely clear, I want to talk a bit about the last part. There are really two sets of things that Microsoft passes around within the application specific environment. The first, which is necessary for broad interoperability, is this group membership PAC data.

The second is what we call the interactive logon data. This is essentially the private exchange of policy and other related information between Windows machines and a Windows domain controller. It’s part of the security environment of Windows. That part we’re still essentially continuing to use as a private mechanism between Windows machines. It doesn’t deal with the interoperation between a Windows environment and a non-Windows system, and as such we think that we’ve given people here what they actually need and asked for. But I wanted people to be completely clear about what’s in there and what we’ve done and what we haven’t done.
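To illustrate the kind of interoperability this enables, here is a purely hypothetical sketch of a non-Windows service consuming group membership carried in the Kerberos authorization data; the field names and layout are simplified stand-ins for illustration, not the published PAC wire format or license terms.

```python
# Hypothetical sketch: a non-Windows service using group membership that a
# Windows domain controller placed in the Kerberos authorization-data field.
# The structure here is a simplification; the real layout is defined by the
# published specification, not by this example.

from dataclasses import dataclass

@dataclass
class GroupMembershipData:
    user_sid: str          # identity of the authenticated user
    group_sids: list[str]  # groups the domain controller asserts for the user

def authorize(resource: str, pac: GroupMembershipData, acl: dict) -> bool:
    """Grant access if any of the user's groups appears on the resource's ACL."""
    allowed_groups = acl.get(resource, set())
    return any(group in allowed_groups for group in pac.group_sids)

# Usage: a third-party file server could map these identifiers onto its own
# access-control lists and so interoperate with a Windows domain.
acl = {"/finance/reports": {"GROUP-FINANCE", "GROUP-AUDITORS"}}
pac = GroupMembershipData("USER-ALICE", ["GROUP-FINANCE"])
print(authorize("/finance/reports", pac, acl))  # True
```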

But we think that this is consistent with our now long-stated goals of federation, for example in Passport and other areas, where different systems want to be able to federate with Microsoft, whether it’s at the level of somebody building a file server that wants to integrate into the domain and understand group membership, or people who ultimately want to exchange things in this area of creating trust hierarchies.

So going forward what matters, to some extent, is what Microsoft does, certainly for our own customers and accounts. To the extent that we are successful in this, I think we will be a force that tends to raise the water level and a lot of the boats will float higher as a result.

To do this, we’re focused in the near term on three different concepts. Broadly I think of them as secure by design, secure by default and secure in deployment, and I’ll talk very briefly about each aspect of that and how we’ve changed our practice or thinking about it.

Right now we are completing a process that began in the fall of last year of putting every developer and, in fact, everybody in a development organization in the platform groups, and ultimately every group in the company, through a special training course to sensitize them to these issues, to teach them some best practices, and to make them understand what we’ve learned in the course of the last few years, through our Secure Windows Initiative and other programs, about how to train people to avoid some of the long-standing classes of problems.

We’ve also built a lot of new tools, which we’re putting in the hands of people to assist in the reassessment of some of the existing legacy code and products.

So we’re changing the coding practices, we’re changing the default behavior of some of these systems upon installation, and more on that in a minute.

And the business managers are very, very hardcore about this now. For example, Brian Valentine, who runs Windows, has said, “Look, not only will every person who works on Windows go through this course and adhere to these things going forward, but every group in the company who has any code that gets contributed and ships using our common mechanism, like on the same CD, will have to comply to this same set of rules.”

So we’re creating a level of focus and accountability in the company that goes well beyond what we’ve done before and extends from the senior leadership of the company down to the people writing the code every day or doing the testing.

We’ve at this point trained about 9,000 people in the Windows, ISA and .NET developer communities. There are other programs underway in Office and other places.

We’ve instituted a policy of involving more third party organizations in audits of the products, and the third parties in this case can be other parts of the company. I’ve actually got a security strategy and architecture team that works for me that’s independent of any product group and we essentially do audits and consulting with product teams about these issues, and that didn’t use to happen before.

And we’re actually hiring companies from outside the company to come in and look at these things much earlier in the development process than we have in the past, and all of these things we think contribute to better design.

And in the future, the .NET Server and the XP update mechanisms will follow what I’ll show in a minute as a templated model for how we want to deal with these things in a very uniform way in our company and across all the product lines, including a uniform way of dealing with problems on a response basis. All of these things we’re institutionalizing at a level that we have not done at Microsoft in the past.

And the product groups are taking what used to be a good practice at Microsoft in terms of getting a lot of customer feedback, and formalizing how that gets fed into the between-shipment learning process.

So today we’re also moving to what we think of as security by default. And here you could say this started almost two years ago when the first wave of e-mail viruses propagated around and the customers came to the people who ran Outlook, for example, and said, “Hey, we just need more help, Microsoft. We can’t manage this. We want you to make it so that these things don’t happen unless we take some conscious steps to do that.” And so we delivered service packs and new versions of the product, which essentially disabled these features by default.

And similarly, when we did XP we decided to put in a basic firewall capability, but again we struggled with the tradeoff between breaking applications in the enterprise environment by automatically enabling the firewall, for example, and locking everything down, and making it easy for customers who didn’t know what a firewall was to get the benefit of one, for example, in the home.

And so we came up with algorithmic ways or heuristics to guide whether the thing would be disabled or not disabled by default, but more often than not we try to tilt in favor of having this on by default.

In the .NET Server, which will come out relatively soon, IIS 6 will be disabled by default as opposed to enabled by default. In general, the company is adopting the posture that we should reduce the surface area that’s available for people to attack and require people to make conscious tradeoffs to enable features that they think are needed but which have the attendant expansion of the surface area that can be probed.
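As a rough illustration of the secure-by-default posture described here, a minimal sketch might look like the following; the feature names and the audit behavior are invented for the example and are not an actual Windows or IIS configuration mechanism.

```python
# Illustrative sketch of "secure by default": ship optional subsystems off,
# and require an explicit, recorded opt-in to enable them. Feature names are
# made up for this example.

DEFAULT_FEATURES = {
    "web_server": False,      # analogous to a web server being off until asked for
    "remote_admin": False,
    "file_sharing": False,
    "basic_firewall": True,   # protective features can default to on
}

def enable_feature(config: dict, name: str, reason: str) -> dict:
    """Turn a feature on only as a conscious, logged decision."""
    if name not in config:
        raise KeyError(f"unknown feature: {name}")
    print(f"AUDIT: enabling {name} because: {reason}")  # evidentiary record
    updated = dict(config)
    updated[name] = True
    return updated

config = dict(DEFAULT_FEATURES)
config = enable_feature(config, "web_server", "this machine is a public web front end")
```

The design choice is simply that every expansion of the attack surface is a deliberate act that leaves a trace, rather than something a default installation does silently.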

The next issue then is what I call security in deployment, and here we deal with a set of very real problems. One is that there are lots and lots of machines out there. Two, these systems are big and complicated, and given the way that we all build them today, it’s probably a practical impossibility to say that they would ever be perfectly correct or secure. But even if they were, I contend that there would continue to be a need to move things forward and to make fixes and offer new versions, even if you never cared about another new feature.

The Windows 2000 group has now come up with a security rollup, which was released. Enterprise Security Configuration Assessment and patch rollups are part of the STPP program we announced and accelerated in October. A lot of the focus in SP3 for Windows 2000 is around security issues. With Visual Studio .NET, even though it was fairly near the end of their development cycle, we asked that team to go back and think again about different ways, not to change the way they coded, but to add, sort of, suspenders to their belt: to change the way installation was done, to change the levels of privilege that things ran under, and to provide lockdown tools as an integral part of the installation mechanism. These are all cases where an increased ask on the part of upper management was responded to very, very positively by these teams, and I think that was the basis in the company for Bill ultimately writing the memo in January saying, “Look, the need is there. The focus has to be there. We’ve got a lot of good experience and practice to share in the company, and the whole company is going to focus on this issue now.”

The idea of having federated Windows Update for corporations allows us to begin to have an automated mechanism to push fixes, ideally achieving a rate of fix distribution that could be faster than the rate of virus propagation, because one is essentially a big multicast broadcast environment where the other is essentially a programmatic propagation.
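As a back-of-the-envelope illustration of that race between push distribution and worm propagation, here is a toy model with invented numbers; it is not a claim about actual Windows Update behavior or any particular worm.

```python
# Toy model (invented numbers): a broadcast-style patch push that reaches a
# fixed fraction of unpatched machines per hour, versus a worm that spreads
# roughly multiplicatively until it runs out of vulnerable hosts.

machines = 1_000_000
patched, infected = 0, 10
patch_rate = 0.10        # assume 10% of unpatched machines get the fix each hour
spread_factor = 1.8      # assume infections grow ~80% per hour while hosts remain

for hour in range(1, 25):
    patched += int(patch_rate * (machines - patched))
    vulnerable = max(machines - patched - infected, 0)
    infected = min(int(infected * spread_factor), infected + vulnerable)
    print(f"hour {hour:2d}: patched={patched:8d}  infected={infected:8d}")
```

Under these made-up parameters the push reaches most of the population before the worm does; change the constants and the worm wins, which is exactly why the distribution rate matters.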

One of the challenges that we and everybody else will have to deal with is one that Dick Clarke has pointed to. While I didn’t hear his talk yesterday, I know from my own interactions with him and comments he made at our conference in November that he has said, “Look, many people have not really focused on what it will cost them to maintain currency, to maintain a capability to defend against the evolving threat.”

And, of course, what we’re dealing with now is an infrastructure at a worldwide level that is really quite large in its install base. This is a graph that shows in the colored boxes what the current contribution to each of the succeeding versions of Windows is and its use within the installed base today.

And so what you can see is, as you naturally would expect, these things roll through the install base in big ways. You know, the aggregate number always seems to be increasing, but the tail effect is actually pretty pronounced. One of the biggest challenges that we face as a company, and that the customers face in dealing with this, is that they usually have a mixed infrastructure deployed in the organization, and they have to manage that and deal with the lifecycle of change. If you look at this from 1996 to 2001 and what’s already rippled through there, the machine that was the target design point for Windows 95, which was largely chosen around 1993, is a dramatically different computer from the one that’s the design point for a system being developed today, or even for Windows XP. And the threat basically evolves at the speed of the capability of these machines and their broad use, even though the install base essentially moves at a much slower rate of evolution.

So one of the big questions that we struggle with, both in practical and business terms and also in, you’d say, technical and architectural terms, is how far back to stretch. I think of your ability at any one moment to deal with these things as a rubber band, and it has limited elasticity. You can hook it some distance forward in time to deal with the current capabilities or threats, but because the elasticity is what it is, you can’t essentially stretch it arbitrarily far back into history.

And so this is that same set of bars but essentially plotted as waves, where at any time you can see the relative contribution. And while our best technology, you could say, is Windows XP, it represents a tiny, tiny sliver of this.

To put in scale the numbers we’re dealing with, and what it means to try to get everybody to do something at the same time, the little black line running across near the bottom is the slowly growing population of all the humans in New York City, and it’s a pretty small number. But if you think, “Well, what would I have to do to get every single person in New York City to do anything,” you’d realize that could be quite a challenge, and to some extent we’ve got many, many more people and many more systems that we’re ultimately asking to do something to deal with this evolving threat.

And so while we can all work to make these things as good as they can be, I think we ultimately are going to have to find mechanisms that tend to aid people in getting this done and make it far more automatic than it’s been in the past, because if we depend on people to do this work we’re just going to get swamped, and perhaps you could say already are swamped.

The dotted line, by the way, just shows the rate of growth and adoption of the worldwide Internet phenomenon, and so because the install base was there you were able to get a lot of people engaged in this quite quickly. That holds some promise, as did things like the Napster phenomenon that said if you can create some automatic mechanism and use the Web itself as a distribution vehicle, then, in fact, there may be a level of automation possible here that has never been there before.

Microsoft has tried to come up with, and be very consistent internally about, a severity rating system that we’re using for the different classes of problems, the different risks and threats associated with those, and the different kinds of systems affected. We’re using this consistently in the company now to modulate where the investment gets made, what level of urgency we apply, and what we choose as the shipment mechanism for fixes that relate to any of these particular problems.

So we now have several different delivery vehicles, which range from the broadly distributed single critical fix on down. This is in contrast to what we used to call “quick fix engineering.” There, the QFE was usually developed and deployed on a selective basis, usually in consultation with a technical support organization, and there really wasn’t much mechanism in the company to get a fix produced and distributed that had broad requirements and great urgency.

So now we’ve got a set of these things, the single critical fix, multiple patches delivered together, a security rollup, which is a historical repackaging of these things, and then service packs, which would include the security rollups as well as fixes for non-critical problems or lower severity problems.
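A simple sketch of how such a severity-to-vehicle mapping might be expressed follows; the severity names and the routing rules are paraphrased from the description above, not an exact statement of Microsoft’s internal policy.

```python
# Sketch of mapping a severity rating to a delivery vehicle, paraphrasing the
# vehicles described in the talk. Names and rules are illustrative only.

DELIVERY_VEHICLES = {
    "critical": "single critical fix, worked 24x7 and released on an expedited basis",
    "important": "multiple patches delivered together",
    "moderate": "security rollup (historical repackaging of prior fixes)",
    "low": "next service pack (rollups plus lower-severity fixes)",
}

def choose_vehicle(severity: str) -> str:
    try:
        return DELIVERY_VEHICLES[severity.lower()]
    except KeyError:
        raise ValueError(f"unknown severity rating: {severity}") from None

print(choose_vehicle("Critical"))
```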

And so the way that we’re thinking about evolving the company is to where we have a service or a product security template that guides our release mechanisms, and we have a uniform characterization phase, as I’ll call it; then, as a function of whether we’re dealing with a service like MSN, Hotmail or Passport or whether we’re dealing with a product, we deal with these things in a slightly different way.

In the case of the services, as a function of the criticality, what matters is how quickly we propagate these changes inside the service operation or, in fact, out through the service operation if there are specific things in the client.

If it’s a product issue, then, in fact, the prior graph about criticality becomes a factor in determining which of the mechanisms we use and the urgency with which we push that thing out.

But the commitment is to take things of critical severity and deal with them on a sort of seven day a week, 24 hour a day basis and try to deliver these on an expedited basis.

So what matters at Microsoft, in our view, is really what we continue to do over time. There is nothing we or anybody else can do that fires a silver bullet at this problem so that it all just gets better overnight, or even in probably a small number of years. We are taking best practices that we’ve begun and learned about over the last three years in the Windows world and the .NET divisions in how to discover, develop and deliver better technology, and we’re using them. We’re doing these external audits. This is a graph from an external audit that was done on, I think, Visual Studio .NET. And we’re trying to participate in more industry-wide initiatives, for example to frame a voluntary way of dealing with vulnerability disclosures that is, you could say, societally responsible and yet something where everybody understands what the process is for dealing with it. Getting something like that to be uniformly dealt with at the industry level would certainly be helpful.

We also want to offer things like this trustworthy computing framework as a basis for having a common vocabulary to talk about the problems, and for basically beseeching the press to focus with some clarity on the issue of whether you are asking me about a business practice or about a technological problem or a bug.

And the development of standards: For example, in the auditing area Microsoft produces audit facilities for a lot of our products, but we don’t do them with the level of uniformity that I would like to see, and we don’t have a way of describing these things.

So moving perhaps toward XML descriptions of these things and then being able to facilitate the creation of more tools to deal with analysis and monitoring, you know, may be valuable and I think we’re very open to those kinds of discussions.

So for all of us this cycle really has no end. You can say in some sense that the threat model keeps up with both Moore’s Law and Metcalfe’s Law; that is, with the rapid evolution in capacity of the computer system itself, and, of course, you’ve got storage in there as a factor too, and with Metcalfe’s Law to some extent, in that the more you connect together the more power you have. Well, that power can be used for good and not so good.

The unfortunate truth is that programmers today are still human beings, and despite training them and asking them to think about these things, it is very difficult, particularly over a large population, for people to grasp the implications of these future trends. I mean, the Y2K phenomenon is the most well known example. And you could say, “Oh, that was trivial,” and, in fact, it was, and unfortunately people are even less capable of predicting what the future holds in a number of these areas.

And that’s why I say to people even if we could be perfect coders all the time, I’m quite convinced that we would still have to offer regular updates of systems to deal with the growth in capability and with it the growth and evolution of the threat model.

And so you could say there is lots of job security for people in the industry, no matter whether you’re focused on one aspect of the means of trust or whether you’re a product person or a support person. People are going to have these problems for quite some time.

So just in closing, I want to highlight a bit about the way I think about the much longer term future, the 10 to 20 year timeframe.

I think of this as a bunch of difficult problems, the confluence of which gives all of us an interesting challenge looking ahead. There are lots of processes out there, arguably too many relative to the way we program and manage them today but they’re spreading fast despite that. And I think you’re starting to see more and more instances of what you might call chaotic or mega system behaviors where the systems are of such complexity that no one can actually predict exactly what is going on in them or what will go on. And we’ve seen these in the past in terms of network storms but we also see them in the form of the new forms of attack on the network itself.

I think programming as we’ve all grown up with it is too error prone, and while it isn’t clear that there’s an obvious path to a better methodology, my own belief is that, necessity being the mother of invention, something will emerge, probably in the 10 to 20 year horizon, which will force us to rethink the way that we actually code computer programs and applications.

I think that in some sense the people are losing ground to the machine. Certainly by the numbers they are. And, in fact, if you look at our ability to train people to deal with the growing complexity of this environment, we’re clearly losing ground. We don’t have academic programs sufficient to graduate people who are experts in any one of these dimensions and in this field, of course, we know there’s a paucity of academic programs.

The whole notion of administration of computing systems and networks today has been based on the idea that there are professional IT managers for this infrastructure and that won’t be true for most of the computers in your life in the next five to 10 years.

We already crossed last year the boundary where more microprocessors are deployed in devices that are in unmanaged environments than there are in managed environments. And while not all of them are things that people can change or add things to, which increase the vulnerability, clearly some of them are. The natural trend is to diffuse the capability into the devices at the edge and the new use of the Internet in the next five years will be, in fact, to write the programs to exploit all the capability that exists at the edge. That will be the fundamental change in the paradigm, but that will also bring with it a whole new range of potential threats and vulnerabilities.

There are also lots of policy issues. Legal frameworks, of course, are retrospective in nature and should be. As a result, in a rapidly evolving environment like this, all of us have to think about how we mitigate the risks, recognizing that the legal support you could say and deterrent will actually always be lagging what we perceive to be today’s threats.

The norms for how to do this on a worldwide basis are clearly evolutionary at this point, and these things will always lag technological change.

Society has never really dealt with something that is as pervasive as information technology in its use and importance, and which also evolves at such a high rate of speed, and that just exacerbates these problems.

In terms of future direction, technologically I think we’re entering an era now where we’re going to have much more machine-to-machine interaction. The Internet today has been more of a publishing environment, where you have people clicking on their mouse button and getting Web pages served up to them.

But the world we’re moving to is one where we expect all of these things to deal with each other on some type of peer-to-peer or orchestrated basis or where there’s a lot more computing and storage being done at the edge of the network. This will change the paradigm and, in fact, millions more programmers will be involved in dealing with the computing platform of the Internet than have been today.

Today, arguably, there are only a couple of killer apps for the Internet computing platform, the e-mail clients and the Web browsers, and that is not a steady state condition. In 1984 there were the two killer apps of the PC, word processing and spreadsheets, which forced its diffusion at the grassroots level, and it was the diversity that made the current model of computing. I think the same will now be true of the Internet: the killer apps of publishing and electronic messaging have created the adoption and diffusion, and the task at hand now is to recognize that that platform will be used by the world’s programmers to do many more interesting things; there are just more people who are going to be involved.

And that creates new classes of problems. Some of the attacks in the Web environment, for example cross-site scripting, are, you could say, a problem that exists at the application layer, because someone was opting for convenience in the way a customer would go from working in one Web site to working in another Web site, but it created a time window of vulnerability, and I think there will be more and more of these kinds of things.
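For readers unfamiliar with cross-site scripting, a minimal sketch of the underlying application-layer mistake and its fix follows, using only the Python standard library; no particular site or product is implied.

```python
# Cross-site scripting in miniature: a page that echoes user input verbatim
# versus one that escapes it first.

from html import escape

def vulnerable_page(search_term: str) -> str:
    # Reflecting the input unmodified lets injected <script> tags run in the
    # victim's browser when the page is served back to them.
    return f"<p>Results for: {search_term}</p>"

def safer_page(search_term: str) -> str:
    # Escaping at the application layer closes this particular window.
    return f"<p>Results for: {escape(search_term)}</p>"

payload = "<script>steal(document.cookie)</script>"
print(vulnerable_page(payload))  # the script would execute in a browser
print(safer_page(payload))       # rendered as harmless text
```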

And at the end of the day the customer doesn’t care where the vulnerability comes from — is it a coding error down in Windows or is it some application level problem. Frankly they don’t care. They want to think of this like watching HBO on your cable system; just give me the movies, thank you, I don’t want to know about the details. And that is the environment all of us will have to deal with more in the future. I think this will require new methods of development, testing, operations and auditing that go well beyond what we do today.

And, of course, the hardware world and the networking world will continue to evolve. We’re working with many people now in the industry to try to figure out what could we do to use all of this underlying capability to make the system more impervious to physical modification, you know, to theft and to loss.

I think we will see more of a trend toward rigorous authentication and key management, with rigor applied to the identity of people, machines, programs and enterprises in a much more thorough way, and, of course, the people at this conference are, I think, proponents of these things.

So with that, I want to thank you for your attention and hope you enjoy the rest of the conference.

(Applause.)

JOHN WORRALL: Thank you very much, Craig; very helpful. I believe as security professionals we’re all very supportive of vendors that take important strides to improve the security of their products and certainly to improve the level of interoperability that they also offer within their products.

END
