Speech Transcript – Jim Allchin, Microsoft Windows Briefing

Remarks by Jim Allchin, Senior Vice President, Personal and Business Systems Group, Microsoft Corporation

Wednesday, July 23, 1997, Seattle, WA

MR. Allchin: Welcome back. Paul said we were going to do some exciting stuff this afternoon. Well, you saw a bunch of exciting stuff already, earlier this afternoon. And my job is to talk about concrete and plumbing. So we'll see just how exciting it can be, and we'll see whether the demos can take us to a new level of Windows.

There are two trends happening that are driving all the changes we're making in Windows, from an architectural perspective. First, there's pervasive communications. The other is simplifying it for the end user. In order to accomplish all the things that you saw this afternoon, as well as where we're going, we have to continue to evolve that architecture. In order to accomplish all the things that we want to in the pervasive communications trend, we have a vision called distributed computing. The promise of that vision is organizations with no boundaries.

You've got the Internet; it lets businesses communicate with their suppliers, and have consumers communicate with the business. It lets the organization become virtual, where you can have remote sales offices between the organizations. It lets mobile workers take advantage of corporate resources, no matter where they're at.

We also have the promise of the technology being available to everyone, where, regardless of whether you're in a small business or a large business, you'll be able to get your computing work done. Greater economy of scale: we don't really believe in a single machine being able to do everything. We believe in taking the technology into the PC, having it go remote, and getting the economies of scale that you get from the PC industry.

On the mainframe, or in the glass house area, we believe in taking high-performance, industry-standard systems and adding on top of them industry-standard software, Windows, to gain higher availability and greater scaling. We also believe the promise of distributed computing offers better apps with less effort. But the reality today is that we're missing a few features; the operating systems that are available today do not have all the features that you need. And we also lack comprehensive tools to manage it all. One of the things that I'm going to talk about today is how we're addressing total cost of ownership in Windows.

It's also a reality that writing distributed apps is very hard. While there have been client/server applications available, writing those has been very difficult. And today, distributed computing is only used by large corporations. It's much too complex for any small business to address. There are many components needed in a platform in order to take this reality and drive it toward that promise. The way we view it is sort of analogous to a car: you can go buy a car, and you can decide not to have the CD player put in, or decide to have a smaller set of tires, and then later on add those. It will work. It's just more complex.

In the operating system area, for all platforms, it used to be that memory management was separate. You may remember those days, when memory management was always an add-on. Well, you probably had problems with your PC because of this. That same trend existed in networking. There were always add-ons to the system. The result was a lot of complexity. Today, people talk about directory systems being add-ons. And looking ahead, people are talking about transactioning, scheduling, queuing, scripting, all of which could be added on. Well, it's like going and buying a car in a lot of pieces. It'll probably work, but it's a lot more complicated.

What are the fundamental problems of not integrating all of this together in the operating system? Increased cost and complexity. I've got a couple of examples here. They're fairly technical, but they get this point across. If you take something like a Web server that has its own account database, or a separate directory system sitting on top of an operating system that already has a directory, you end up with multiple user accounts. You end up having to log in twice to the system, with perhaps two different security systems that you're dealing with, and just a lot more complexity.

You also have a problem: suppose you've got a local printer on the computer that you're using, and you also have access to that printer across the network. The result is that they may not be unified in their names, or in how they're doing security access. You may have to set up two different levels for gaining access to that printer, which is just added complexity.

But there's another fundamental problem with this: it just makes the OS inefficient. It makes this dual layer of a mini operating system on top of a real operating system very inefficient. It's slower performance, because you're having to recreate things that are already in the operating system. It's bigger, because you've added more code to the system. And finally, you get this least-common-denominator principle, which is that many vendors today are trying to take this mini-OS approach and put it on other operating systems. The result is that you end up missing some of the key functionality available in the operating system.

Windows happens to be pretty advanced. We think that, compared to other operating systems, this layer that runs on top would have to duplicate much of Windows. But people won't do that. Other vendors won't do that. They'll instead go to a least common denominator. So users, developers and administrators would be limited to the lowest functionality level of all the operating systems that are supported. So for a variety of reasons, that's not our strategy.

Our strategy is pretty simple: it's integration. We're going to tie together, just like we did with memory management, networking and directory, the services that you would expect in order to write distributed applications: Web, transaction, scheduling and the like.

In order to talk about this, and how we're investing in this area, I've put together a block diagram of six pieces that we're going to walk through one slide at a time, talking about what we're doing in each of these areas. Some of the technology I'm going to talk about is coming out in NT 5.0 and Windows 98. Some of the technology is going to come out after that. But all of it is fundamental for moving ahead in this distributed computing vision.

In terms of the networking area, we have a very simple view. It should be networking for the masses. We believe every person should be able to gain access to information, without being a guy in a lab coat, no matter where it's at. They shouldn't have to go through 26 steps to configure a dial-in line in order to get their mail. They shouldn't have to worry about what type of media is being used, whether it's Ethernet, token ring, or the like. It should just be built into the system.

Second, we believe in the ability to have a programmable infrastructure. There is no reason why you can't leverage the operating system to do more than just function as a Web server, or application server, or file server. It can have communications built into it. And that's another fundamental thing that we're doing. One API can get you to a lot of different services that are built into the operating system.

The last thing we're doing is integrating a set of media: whether it's voice telephony, video, or data, we believe one wire should be able to subsume all that. So you should be able to just walk up to an electrical outlet or a telephone outlet and plug in, and the system should automatically configure itself and do all the things necessary to get all the information that you need. Networking should be just transparent, hidden behind the walls.

In terms of security, the things that are driving our implementation come from scenarios that customers are asking for. One scenario is single sign-on. We have this today in NT. People don't want to walk around with multiple card keys, and as we move ahead in the future, whether the keys are electronic or mechanical, they don't want multiple keys. I have an electronic card key for Microsoft and I have an electronic card key for where I live downtown. It's very easy to get those two card keys confused. I don't want to deal with that. I want one sign-on to the world. I'm the same person no matter where I'm at or what I'm doing.

People also want private communications. It could be an email, or it could be between businesses. They want to be able to have secure conversations. People want to be able to have secure transactions. They want to be able to have goods processed, and know that nothing has been tampered with. They also want a simple, secure desktop. They want to know that if they leave their office for a second, someone won't rush in with another operating system, load it up, or do something to steal their hard drive or whatever. And we're doing a bunch of things in that area.

Throughout the system we're methodically adding capabilities to enrich the security level. These are some areas in the base of the operating system we're adding to in NT 5.0. We already have a crypto API, but it's getting enriched in NT 5.0. Encrypted file systems, so that even if you have the information on your local disk and you leave your office, the information is still secure, because it's encrypted. And secure boot, so we can tie the operating system to the hardware, so that you can know that someone can't sneak in and do something devious to your machine; more auditing and the like.

In the protocol area, in this pervasive communication arena, we want to continue to enrich the security of the communications. So there are a bunch of buzzwords here that are protocols we're adding to the system. We have SSL today; we're enriching that. IPSec is a new Internet standard for secure IP communications that we're adding. And we already have RPC and DCOM, for being able to have secure remote procedure call communications.

We're also improving the authentication of the system, so that regardless of whether you're using public key systems or a private key, it doesn't matter. No matter what security system you want to use, it's available inside the operating system. And it's seamless, the way it's been integrated. Finally, safety is also an issue. Because of the connections that people have to the Internet and between businesses, it's important to know what code you're actually running on your client or on a server.

Work that we're doing there includes Authenticode, which you've heard about, and which I believe is the ultimate and will be the way that we will have secure systems. You have to trust, at the human level, where code came from, because even if you think you trust a particular piece of code, who knows, there could be bugs in it. You're going to have to rely on the vendor that you've chosen this technology from. And centralized code distribution, so that you can have more safety about where it comes from.

So we've done a variety of things, trying to address all the scenarios that are articulated, and there are a lot more. It's a very complicated subject; one where, in the case of NT, it was built and designed from the ground up to be a secure system, and we'll continue to push on that in the future.

The third area is the directory area. Now, we have in Windows NT 4.0 Server a directory system today. It's very comprehensive; it allows large corporations to take advantage of it. It's got single sign-on, regardless of how many servers. But it's a far cry from what we know is possible, and from how we see the vision for distributed computing.

So in NT 5.0 we are working on a new directory called Active Directory, which we'll give you a demonstration of in just a minute, and which is tremendously powerful. It integrates the best of what's been done with the domain name system on the Internet with the best of what's been done inside corporations, with things like X.500. But it's not a heavyweight, comprehensive, confusing system. It's very simple. There's a lot of technology underneath the covers. But the whole goal is to simplify: just put in servers and have it grow from the grassroots, within each of the departments, to form this tree of information that anyone can then go and browse.

We're very excited about this technology. It's also unique in the sense that, instead of just having clients and servers and users and printers, and the usual things that you might expect in a directory service in a network, it will have the capability to hold all the gunk in between the clients and servers. All the things dealing with hubs and routers and switches will be able to get their configuration information out of the directory. So with one administration model, you'll be able to manage the network infrastructure, the fabric of a company.

In terms of storage: today, sitting on a Windows 95 client or NT Workstation is a variety of different storage systems that you can go and access. You might access a file of your documents on an office server, you might reference a SQL database for a transaction, maybe a warehousing system, or you might have groupware and be accessing storage inside Exchange. It works today, but we can go a lot further. One of the key things that we're doing is really driving for location independence of the data. No matter where you are, no matter where the data is, and no matter when or where the data moves to, you'll be able to find it.

We started this with NT 5.0, with something called the Distributed File System, to help you transparently move information around and still be able to find it. But we have a lot more dreams of having that data automatically migrate between clients and servers, and between servers, to automatically load balance.

Another thing that we're doing is creating an interface that programmers can use to gain access to the information no matter where it's at: if it's in Exchange, if it's in SQL, or if it's in the file system. So no matter what type of data it is, or what type of manipulation of the data you want, you can have a programmer just write a simple script, in VB or JavaScript or whatever, and be able to manipulate that data.
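The idea of one scripting interface over many stores can be sketched in a few lines of JavaScript. Everything here is illustrative (the provider names, record fields, and `findByOwner` helper are made up for this sketch, not any shipping API), but it shows the shape: each store exposes the same query interface, so the script never cares where the data lives.

```javascript
// Illustrative sketch, not a real API: one query shape over many stores.
// A "provider" wraps a store and exposes the same query() interface.
function makeProvider(name, records) {
  return {
    name,
    // Return every record whose fields all match the filter.
    query: (filter) =>
      records.filter((r) => Object.keys(filter).every((k) => r[k] === filter[k])),
  };
}

// Two very different stores behind the same interface:
const fileStore = makeProvider("filesystem", [
  { type: "document", owner: "kim", title: "Q3 report" },
  { type: "spreadsheet", owner: "lee", title: "Budget" },
]);
const mailStore = makeProvider("mailserver", [
  { type: "message", owner: "kim", title: "Re: Budget" },
]);

// One script, one query shape, any store:
function findByOwner(stores, owner) {
  return stores.flatMap((s) =>
    s.query({ owner }).map((r) => s.name + ": " + r.title)
  );
}

console.log(findByOwner([fileStore, mailStore], "kim"));
// ["filesystem: Q3 report", "mailserver: Re: Budget"]
```

The point of the sketch is only the uniformity: the script that queries a file store is character-for-character the script that queries a mail store.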

Last, we have a very aggressive vision of unified storage, to the degree possible. We're going to take the best of file systems and the best of database systems and continue to merge them and blur them, so that standard with the operating system comes a very comprehensive storage system in which groupware applications, simple structured tables, as well as arbitrary documents and things like spreadsheets can all be addressed in one storage system.

Now, we took the first step at that in NT 5.0, and we have been working on it all along. The file system that's in NT 5.0 is a significant step up from what we've provided before. It'll have the ability to put properties, something like a case number if you're in a legal company, on a particular file, and then search for it automatically. So it will automatically index the properties and content. It will have things like quotas, so you can restrict the size of what you're working on. It has links between documents. And if you move a document, it will automatically find it; even if you move it from one machine to another, it will automatically track it down.
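The link-tracking idea can be sketched like this (a minimal illustration with made-up names, not the NT implementation): a link stores a stable ID rather than a path, so moving the document does not break it.

```javascript
// Illustrative sketch: links hold a stable ID, not a path.
const locations = new Map(); // id -> current path

function createDoc(id, path) {
  locations.set(id, path);
  return { id }; // a "link" records only the ID
}
function moveDoc(id, newPath) {
  locations.set(id, newPath); // the system updates its index on a move
}
function resolve(link) {
  return locations.get(link.id); // the link still finds the document
}

const link = createDoc("doc-42", "C:\\reports\\q3.doc");
moveDoc("doc-42", "\\\\server\\archive\\q3.doc"); // even to another machine
console.log(resolve(link)); // \\server\archive\q3.doc
```

Because resolution goes through the ID, the same link works before and after the move.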

You'll see us continue to work on this, to try to unify these two storage systems. It doesn't mean there won't be applications for high-end clustering. This is just trying to enrich the file system with some of the attributes that you expect, or that you have, in SQL systems. What we want to do now is a demonstration of the Active Directory. Now, most of the demonstrations that you see of directory systems are, frankly, pretty boring, because they're all designed for the administrator. The administrator has this mess of users, and he's got multiple directories. If somebody changes their name, or somebody changes their phone number, they have to move them between organizations, and it's a bunch of trees, and it's great for the administrator, and we're focused on making it great for them. But what does it do for the end user?

I think for the first time, we're doing breakthrough technology in the directory area, to let the directory be something that end users can take advantage of. Usually directory systems are designed for looking up a particular item in the directory. You want to look up a particular user, get their properties.

What we're interested in is the ability to do things like fuzzy searches, so that, just like the phone book, you can look quickly, alphabetically, or you can look in the Yellow Pages and search for a particular categorization. We want to do that very fast. The other key breakthrough that I think we're doing for end users is the ability for them to use off-the-shelf components in the system that do new things with the directory.
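A toy version of those two kinds of lookup (illustrative data and function names, not the Active Directory API): an alphabetical prefix search like the white pages, and a category search like the Yellow Pages, over the same entries.

```javascript
// Illustrative directory entries; not real directory objects.
const entries = [
  { name: "Anders, Pat", category: "printers" },
  { name: "Andrews, Sam", category: "users" },
  { name: "Baker, Lee", category: "users" },
];

// White pages: every entry whose name starts with the prefix.
function byPrefix(prefix) {
  const p = prefix.toLowerCase();
  return entries
    .filter((e) => e.name.toLowerCase().startsWith(p))
    .map((e) => e.name);
}

// Yellow Pages: every entry in a category.
function byCategory(cat) {
  return entries.filter((e) => e.category === cat).map((e) => e.name);
}

console.log(byPrefix("An"));      // ["Anders, Pat", "Andrews, Sam"]
console.log(byCategory("users")); // ["Andrews, Sam", "Baker, Lee"]
```

The real system would index these lookups for speed; the sketch only shows the two query shapes an end user thinks in.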

Let me switch topics into the management area. We have a vision of no-touch clients and servers. We don't want to take the complexity that currently exists in the client and move it to the server, so that you have to deal with more people, very large servers, and a lot more administrative tasks. We want to take the complexity away from both client and server.

And we're doing this through the Zero Administration initiative, which we started some time ago. The vision of this is to take away the complexity of managing change. When we talked to our support organization, Compaq, HP, Digital, and a variety of corporate customers, it was very clear: any time there is change, there is additional complexity. That's where corporate customers were having end users call their help desk. They changed something and got confused.

So the biggest culprits, if you will, were when you add an application, take off an application, update the operating system, or change hardware in some way, even a small change; or, in a different sense, when a user's machine crashes and it had user data on it. That's a major disaster for them. In terms of the network, we just want to manage change in the network completely automatically. And we want to do this whether the client is thick or thin.

Now, we approach the Zero Administration initiative in terms of both software and hardware, because there's no way that Microsoft alone can really get to this no-touch vision. So in Windows 95 and I.E. 4.0 we did certain things to increase the control administrators would have and to decrease support costs.

Earlier this summer we released something called the Zero Administration Kit, which is a set of add-on components that configure Windows 95 and NT 4.0 in a variety of different modes, so that if you had a very task-oriented user, perhaps on the manufacturing floor, or perhaps a bank teller or the like, who only needed to do one specific task, you could control that, so that changes weren't possible.

Where we're going in NT 5.0, and with a subset of it in Windows 98, is a much more comprehensive vision and implementation of that vision. Automatic system upgrades: a change to the operating system out on the net will automatically flow to the client without you having to do any work. You can also put applications out on code servers, and those will flow too, all depending on what is in the directory system about whether or not that user should or shouldn't have that particular version. It also will support roaming users. If you go from one machine to another, your applications and your data can follow you automatically.

The Management Console is another part of this Zero Admin work, in that we now have one combined console, from the user interface and management perspective, that aggregates all of the individual management systems that people typically deal with. And we'll show you a little bit about how that makes the administrator's life better.

We also have a breakthrough technology that Paul mentioned, which is this IntelliMirror technology. The whole idea here is that the network becomes a cache of the data that you have in your local environment. So if something happens to your data, or you happen to be roaming between stations, or you're in your mobile environment working on it, it can be automatically synchronized without you having to think about it. It's a very, very powerful capability, certainly a breakthrough.

In terms of hardware, we've been working with the industry on hardware specifications defining reference platforms for the PC. And the PC, as you saw earlier, has been evolving at a very rapid pace. We believe we can make it much simpler by working with Intel and the other OEMs of PC platforms.

The PC 97 specification, which many devices will soon adhere to, includes OnNow: the ability for your computer to always be up, similar to what you might expect in your stereo component system or in a TV, so you can just quickly touch a key, or maybe a fax or telephone call comes in, and the system automatically wakes up. No booting; it's just always available. And there's a series of other technologies that we put into PC 97 in order to make the PC a better, simpler appliance.

PC 98 is under development right now, with dramatic improvements as well. One of the key things is remote boot. The capability these machines will have is that you can just plug them in, and if the operating system isn't there, if the disk has no information on it, suppose the disk crashed, it can get a new operating system automatically. The vision doesn't stop there.

What we want to accomplish, and we get partway there in PC 98, is to eliminate things that you've perhaps heard of, or experienced yourself if you've tried to add any hardware to your PC: DMA, IRQs, I/O addresses, these things that nobody in their right mind has any business knowing. It's like looking inside your TV, seeing the transistors, and trying to identify them by the color bands on them. It's not something that you should be doing. So that's just going to go away.

If you plug a card in, we will auto-sense that device. A key part of PC 98 is that any device you plug into a system that adheres to it, we will auto-sense; the operating system will find that device. So all the problems of trying to do this configuration just go away.

In the future we want to go further, where you can take these devices, just plug them in, and the system just reaches out, like it's got another part of itself, and says, I've got another limb, and I can just start using it. That's so key on the high end, to be able to serve the mission-critical environments where the machine can't go down. But it's also key if you're in your kitchen and you just want to plug something in. You don't want to go through a reboot. It's just OnNow. You should be able to just plug in the connection there.

In terms of what we think we will accomplish by this effort: today, if you just take a Win 3.1 system and move to a 32-bit Windows system, you'll save 11 percent. Just that simple change. If you go and use some of our 32-bit Zero Administration Kit and get a little bit more structured in how you're managing the system, you can save more; basically, you can save 30 percent. In addition, you can use things like the Windows Terminal that we just showed. But more importantly, with the vision of Zero Administration, we think we can drop the cost in half.
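The arithmetic behind those claims, with a made-up dollar baseline just to make the percentages concrete (only the percentages come from the talk; the baseline figure is hypothetical):

```javascript
// Hypothetical baseline cost of ownership per seat; only the
// percentage savings below are from the talk.
const baseline = 10000;

const win32 = baseline - baseline * 0.11;      // Win 3.1 -> 32-bit Windows: save 11%
const zakManaged = baseline - baseline * 0.30; // add ZAK-style management: save 30%
const zeroAdmin = baseline * 0.5;              // full Zero Administration: half

console.log(win32, zakManaged, zeroAdmin); // 8900 7000 5000
```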

The benefit of this is that it doesn't matter whether the machine is connected at the time. The whole IntelliMirror technology is designed so that mobile users get the benefits as well. So unlike a particular locked-down station that has to be plugged in or the system doesn't work (a very dumb terminal environment, or a dumb Java environment), the environment of taking your PC and going mobile, or having it at a desktop with that same technology, we think is huge.

I'm going to switch gears. It's all part of the application services that we're building into Windows. And the way that we think about it is all driven by what developers are telling us. They want no-compromise applications. What do I mean by that? Well, they want the best of the PC and the best of the Web. They want to make sure that they can take advantage of all the power that they've been used to in a PC, but also take advantage of all the cool things that are happening in HTML processing and the like.

They want a unified infrastructure in the operating system that they can call and take advantage of as well. They want integration. They also want flexibility. They want to program in a lot of different languages. They don't want to be restricted to just one language. They want to take advantage of all the system services. Remember, the operating systems are evolving very quickly. You saw some of the technology of DirectX, for example; that technology is moving very fast. Great multimedia is key to new applications. Developers want to take advantage of that.

Another aspect is that developers want the flexibility of leveraging component technology. All the components that third parties have gone and created are a huge market, not only for the developers, but also for corporations and other people who aggregate those components together to build applications very fast. Finally, they want an integrated, simple development tool set. And they want from that development tool set common access to all those system services.

We see two trends going on in terms of application development. The first is the continuation of the client/server trend, where components are used in building business applications, and you're getting access to rich services and rich information. In terms of the number of developers, Visual Basic is still the predominant language, and then at the very top, the smaller number is C++. Java is someplace in the middle. It's, of course, a derivative of C, so it's much more complicated than something like Basic.

But that's not where the trend is in terms of the Internet. The Internet is a much larger audience. A lot of people aren't programmers; they're content-authoring people. They know something about the content, and they want to get rich, interactive media to the largest number of people. The Internet trend is basically using HTML, scripts and components tied together, and then leveraging the power of server-side computation. Using scripts and HTML doesn't take a programmer, and that's why there is such a huge stampede toward using HTML, or, in the case of what we believe, Dynamic HTML, to create these exciting Web sites.

Our focus is to make the infrastructure the same between the client and the server. A tremendous benefit of doing that is that programmers can count on the same services being available: the same rich multimedia, the same common storage APIs, and the same capable storage system. Whether it's a mobile system or a Windows Terminal, they can count on it, client or server. It integrates the best of the Web and the best of PCs together.

What we're doing is, of course, tying Internet Explorer into Windows, and tying Internet Information Server into NT Server. It turns out that the server is perhaps the more complicated part of the equation, because if you go to HTML and scripts, it doesn't take a programmer to produce that content anymore. But today, if you want these rich environments on the server, mission-critical systems, it's very complicated. It's been a stumbling block in distributed apps for a very long time.

So if you were to write a business app yesterday, you would be dealing with all the complexity of multiple calls coming in that you're having to schedule, multiple users asking for requests. You'd have to think about what happens if the server crashes: will my data be safe or not? You have to think about security. You have to think about scaling up. What happens if the network goes down? Will your messages be queued when talking to your supplier, your business partner?

Our approach is just what I mentioned at the very beginning of this presentation: integration. What we're doing is taking application services and tying them directly into the operating system, making them available to everyone. So instead of writing 70 percent of your code dealing with the complexity of things like transactions, you can spend your time on the 30 percent tied to business-rule logic, or the other things that your application should be doing. We've made a tremendous amount of progress in this area. And the way we've done that is by tying Web services, scripting, transactioning and queuing all into NT Server, creating this platform for developers to count on. So you don't have to go and buy a CD from a separate vendor. It's all in NT Server.
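What that 70/30 shift looks like can be sketched with a toy transaction runner (purely illustrative: `runTransacted` stands in for plumbing the platform would supply, and the account data is made up). The application supplies only the business logic; commit and rollback happen underneath.

```javascript
// Toy transaction runner; stands in for platform plumbing.
function runTransacted(state, businessLogic) {
  const before = JSON.parse(JSON.stringify(state)); // keep a before-image
  try {
    businessLogic(state);                 // run only the business logic
    return { committed: true, state };    // success: keep the changes
  } catch (e) {
    return { committed: false, state: before }; // failure: roll back
  }
}

const account = { balance: 100 };

// Business logic only: debit 500, refusing an overdraft.
const result = runTransacted(account, (acct) => {
  if (acct.balance < 500) throw new Error("insufficient funds");
  acct.balance -= 500;
});

console.log(result.committed, result.state.balance); // false 100
```

The point is the division of labor: the callback holds the 30 percent that is business rules, while the before-image, the try/catch, and the commit/rollback decision model the 70 percent the platform takes over.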

There are a lot of features that we've added into the system in each of these areas. But the key thing is that programmers can count on all the infrastructure; whether it's a very simple app or a mission-critical one, they can count on those services being there. Today, we're the only company that has anything even close to this: transactional Web capability, mission-critical business-rule processing. We're the only company providing anything even close.

The last piece, it's obvious, is that you need a great integrated development environment. And we made a decision to follow what we did in Office and BackOffice earlier this year when we released, in March, Visual Studio. This is where we combined Visual Basic, Visual C++, and our Visual J++ programming environment, tying those together into one common tool. We also created in this environment the same common debugging across the languages, the same common text editing, and the like.

We allowed any language component to be integrated in, as you're doing your development. And the result of this is that you can build any application, whether it's going to be a very simple gaming application, or whether it's going to be a business-rule, transacted object that's going to run in a very large server.

I'm going to switch gears totally now, since I've been talking about all this technology, and talk about what the products will deliver in the next six months to nine months to a year. Windows 98, which you probably heard Paul talk about, is the name of the next version of Windows 95. We think it's a very exciting upgrade to Windows 95. A key focus of what we've been doing is more reliability. We've spent a great deal of time with the support organizations and with customers, understanding their problems and addressing them. So the quality level is significantly higher.

It's also easier, and faster for application loads, and, of course, it has Internet Explorer 4.0 directly integrated into it. So you have a much richer experience, as we showed this morning. Also, it has an amazing amount of new hardware support: advanced Plug and Play, OnNow, integration with new devices. It'll have DVD support. And things like broadcast TV, all directly integrated into Windows 98.

NT Workstation 5.0 is the premier desktop for business users, a superset of Windows 98. All the features that have been talked about, DVD, OnNow, and the like, will all be in 5.0. Greater reliability, manageability and security, but again with the same issues that exist today, in the sense that it is designed to be secure and robust. That means certain old TSR applications and DOS apps don't run on NT. We see that as less of an issue anymore. It's the same architecture as NT Server, exactly the same. We recommend upgrading to it in the corporate space on any capable system.

The desktop timeline is that we will have the first beta of NT 5.0 at our Professional Developers Conference, which is scheduled in San Diego in the third week of September. We will have a beta of both the Workstation and the Server there. We are about to come up to our second beta of Windows 98. It's moments away from going out, with a somewhat larger beta audience than the last. And we will be shipping this, making it available, in the first quarter of next year. We're getting close enough that we pretty much understand this product and the quality level that we have to achieve, and we think we can predict pretty accurately: the first quarter.

In terms of NT Server, we view it as a revolutionary platform for distributed computing. It has a very rich, comprehensive networking infrastructure. Most all of the things I talked about earlier, quality of service, in terms of being able to schedule bandwidth on the network, the directory, integration of public key and private key systems, all of that is included in NT 5.0 Server.

It is also the foundation for Zero Administration. Because of the integration of the application services in the diagrams that I showed, products such as this are possible, and that’s integrated into NT Server. It’s also a seamless upgrade from the previous version, NT Server 4.0.

In terms of other products, we have an Enterprise Edition coming out, which expands the scale of NT Server, supporting enhanced clustering, failover, and other clustering technologies, as well as 64-bit VLM, for very large memory environments.

In terms of the timeline, NT Server 5.0 beta 1, of course, is coming out at the same time as the Professional Developers Conference. The Enterprise Edition of NT Server 4.0 will ship this quarter. And the Small Business Server, based on 4.0, will also ship this quarter. There will be versions of both the Enterprise Edition and the BackOffice Small Business Server that ship at the same time as NT 5.0. And we’ll know more about its availability date once we go through our first beta of the NT Server environment.

We’ve come a long way. Progressive versions of Windows, and the other operating systems, have driven the industry to where a graphical user interface is expected, client/server is well understood, and what-you-see-is-what-you-get is just standard. Cross-app integration, I think we pioneered that: compound documents, being able to have rich environments, rich components, rich multimedia inside of documents. And we do have some distributed technology in the system.

But where we are today is really just incredibly primitive compared to where we see the future. To some degree, even though this was a hardware issue, you know as well as I do that this software that you’re using, that you’re counting on, isn’t as resilient as you would like. There’s a fundamental problem in the software. And as we move this user experience up to true ease of use and comprehensive distributed apps, this system is going to have to take a whole quantum step ahead in terms of simplicity. My mom lives in Boston, and we talk about once a week. And if it’s Sunday and she says, I have a question for you, I just sink down in the chair, because I know it’s going to be some question about accessing the Internet, or something about the hardware, or something about the software. And maybe you have that same experience with a neighbor or somebody. And even though we’ve made huge steps ahead, it’s still way, way, way too complicated, both in terms of the hardware and the software. It’s just not appliance-like at all.

So we came up with a vision that we’ve been driving toward, beyond Windows 98 and NT 5.0. We have a team working on a new generation. These are two of the things that we think are fundamental. An intuitive, helpful interface: it’s not enough to be just smart. I have a lot of conversations with smart people, and sometimes I don’t know what they’re saying. You want a system to do what you expect it to do. And then, if it doesn’t, to suggest things to you, to be helpful. And you saw some of that in the research in the Office organization, but it’s nowhere near far enough.

We also want a very rich sensory experience. Whether it’s speech or other input devices, you want a very rich environment. You want it to be information rich. You want to have information come to you. It’s nothing but information overload right now. You go out on the Internet, you’ve got information. No, you don’t. You’ve got data. You’ve got data everywhere. The question is, how do you get just the information that you need? Or, if you’re just in your local environment, can you find the document that you just wrote? How hard is it? I mean, that’s a problem that my mom has. You know, she writes something and then says, well, you know, what folder is it in? We’ve just made it much too difficult. So, we want to get out of the data mode into the information mode, and just have the information that you want come to you when you want it.
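To make the folder problem concrete: the shift from "what folder is it in?" to asking for what you want is essentially content-based retrieval. Here is a toy inverted-index sketch of that idea in modern Python (the documents and names are invented for illustration; this is not any actual Windows technology):

```python
def build_index(docs):
    """Map each word to the set of document names containing it."""
    index = {}
    for name, text in docs.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(name)
    return index

def search(index, query):
    """Return the documents matching every word in the query."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

# Hypothetical documents: the user asks for content, not a folder path.
docs = {
    "letter.doc": "dear mom the internet is easy",
    "notes.doc": "windows plumbing and concrete",
}
idx = build_index(docs)
print(search(idx, "internet mom"))  # → {'letter.doc'}
```

The point of the sketch is that the user supplies words they remember, and the system does the locating; no folder hierarchy is involved.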

Everyone is going to be communicating all the time. It’s fundamental. In terms of the design goals that I set up for the team, I said, basically assume the network is broken all the time. And if it works, it’s a miracle. And, guess what, then you have great communication. If you design the system that way, then errors aren’t such a problem. In fact, a key problem underneath that is that we pop up error messages that no one can do anything about. We want all those to just go away. In the communications space, the system should be just like electricity or the telephone: it usually works.
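In today’s terms, "assume the network is broken all the time" translates into designs like transparent retry with backoff, where transient failures are absorbed instead of being surfaced as error dialogs. A minimal illustrative sketch in Python (the function names and failure model are hypothetical, not any Windows API):

```python
import random
import time

def call_with_retry(operation, attempts=5, base_delay=0.1):
    """Invoke a network operation, retrying transient failures
    with exponential backoff instead of surfacing an error."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; only now report failure
            # exponential backoff with jitter before retrying
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Example: an operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network is broken, as assumed")
    return "payload"

print(call_with_retry(flaky_fetch))  # → payload; the caller never sees the failures
```

Designing under the assumption of failure means the two broken attempts are an expected case handled inside the system, not an error dialog for the user.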

We also want it to be just maintenance free. We don’t want any utilities to come up and have to scavenge the disk, figure out what to do about temp space, et cetera, et cetera. The system should be self-healing and auto-configuring.
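One concrete reading of "self-healing": a supervisor loop probes a component’s health and repairs it automatically, so no maintenance utility ever reaches the user. A toy sketch of the pattern (the component, health check, and repair step are all invented for illustration):

```python
import time

class Service:
    """Toy component that can drift into an unhealthy state."""
    def __init__(self):
        self.healthy = True

    def check(self):
        return self.healthy

    def repair(self):
        # e.g. reset state, reclaim temp space -- invisible to the user
        self.healthy = True

def supervise(service, checks=3, interval=0.01):
    """Self-healing loop: probe the component and repair it
    automatically instead of surfacing a maintenance task."""
    repairs = 0
    for _ in range(checks):
        if not service.check():
            service.repair()
            repairs += 1
        time.sleep(interval)
    return repairs

svc = Service()
svc.healthy = False      # simulate a fault
print(supervise(svc))    # → 1: one automatic repair, no user action required
```

The design choice is that recovery is a background responsibility of the system, which is what distinguishes an appliance from a machine the owner must service.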

So, what you’ve seen today is that we’re going to continue to expand in terms of making the system simpler, more information rich, and resilient, and it’s all based on having a very, very rich, solid architecture. There’s no way that we’ll be able to continue adding all the features unless we simplify the system for end users. They want this power, but it’s all about the plumbing and the concrete underneath that makes all this happen.

With that, I’m going to turn it back to Paul.