Speech Transcript – Rick Rashid, PC Futures ’98

Remarks by Rick Rashid, Vice President, Advanced Technology & Research Group, Microsoft Corporation

PC Futures ’98
June 11, 1998, St. Louis

MR. RASHID: First, I have to log in. My talk is definitely going to be a change in pace from what you’ve been hearing, since I’m going to be really talking about things that occur in the future. Now, before I get started, I really do have to set this machine up, because we’ve got enough weird demos and things on here that I had to bring my own with me.

(Pause)

Okay. Two more things. We’ll get started. Don’t worry.

The disadvantage of not having a break when you have to set up a new machine. Got that set up.

Okay, what I’m going to do is really talk a bit about some new research, new areas, some of the changes that I think are going to be happening in the PC world over the next five to ten years. And by way of introducing myself a little bit, what I’m going to do is talk a bit, just for a couple minutes, about who I am and where I come from. A lot of people don’t really know that much about Microsoft Research outside of the academic community, so I just wanted to give you a quick primer on what our organization is.

I was a professor at Carnegie Mellon University for twelve years, and my own research area is operating systems. For those people who actually know about operating systems, the primary thing I was known for there was the Mach operating system, which is now actually being used by Apple, which I think is particularly interesting, and amusing. They’re going to be incorporating code I wrote back in 1985 into the Mac OS, so I think that’s kind of fun.

But what I was recruited to do by Nathan Myhrvold and Bill Gates was to come to Microsoft in 1991 and really found and create the basic research organization. Our research organization was created in 1991 as a very basic research organization. The goal was to pursue strategic technology. We had a very small number of research groups when we first got going. I think at the end of the first year we had only 20 people.

We’ve grown enormously since then. We’re now over 300 researchers working in 27 different research areas. The organization — basically we cover areas all the way from operating systems to statistical physics and discrete mathematics. We’re now at three research locations. Most of our organization, the primary part of it, is in Redmond. We’ve recently started a research lab in Cambridge, England, working with the University of Cambridge. And we have a small group in San Francisco. And we’ve really developed a tremendous international reputation.

We’ve been growing really fast, which is really unusual in terms of basic research organizations in computer science. Most organizations have been cutting back. We’ve been growing. Strictly speaking, we’re slowing down. We only grew by a factor of four from ’94 to ’97. We’re going to be growing by a factor of three from ’97 on. So it’s a little bit of a slowdown. And we’re having a major impact on the products that Microsoft ships. Basically, everything Microsoft does these days, all of its products, have some influence from the research organization, whether it’s actually the product itself, the features of the product or the technology used to create the product.

And it’s really like a university. We’re really more like a Stanford University, Carnegie Mellon or MIT in terms of the organizational model. That’s the structure we’ve put together. It’s a very open environment. If you go to our web pages, you see all the research that we’re doing, see all the names of the researchers that work for us. Nobody reviews people’s publications before they’re published. We have hundreds of people coming through all the time. At any given point in time we have anywhere from, you know, a dozen to — right now we’ve got about 80 — different interns in at the Ph.D. level from all over the world. So it gives you a feeling for what the organization is like.

And we’ve really developed a very strong reputation very quickly, which is, again, very unusual in this sort of basic research academic world.

Last summer, we got voted number seven overall in terms of computer science research organizations. Only Bell Labs and IBM Research were ahead of us. And I think clearly we had the momentum on our side.

The most amusing part of this is that when they asked a question of academics around the country — where would you most like to work if you had a Nobel Prize and could go anyplace that you wanted to go — of course, there isn’t a Nobel Prize for computer science, but the people running the poll didn’t know that. (Laughter.) You know, this is a reporter. In any case, we came out number two, which is great, because we’re hiring a lot of people.

So that gives you a little taste of where I’m coming from. We’re going to be talking about a lot of the research, a lot of the new technologies that are coming out of the research lab during my presentation.

Now, when you think about the future, at different points in time, as we move through the century, people have had different visions of what the computing future might be like. You know, back in 1926, you know, Fritz Lang had this vision of this computing entity that looked a lot like a woman. This is Metropolis. The very sort of utopian world. But often when people talk about the computing future, they seem to have these dystopian visions of what might be happening.

Robots are a common theme. You know, they just had the Lost in Space revival, and that was — Lost in Space had a robot that was basically patterned after Robby the Robot in Forbidden Planet. That’s another vision of what the computing future is like.

If you ever saw the movie Plan 9 from Outer Space, voted the worst movie of all time, this is the vision of the future that that movie had.

(Video/audio clip)

VOICE: Greetings, my friends. We are all interested in the future, for that is where you and I are going to spend the rest of our lives. And remember, my friends, future events such as these will affect you in the future.

(End video/audio clip)

MR. RICK RASHID: Now, that’s a pretty open-ended definition of what the future’s going to be like. Hopefully I’ll be able to give you something a little bit more concrete before we’re done.

Now, you might ask, okay, we had the Plan 9 view of the future, why is this one going to be any different? I actually have some credentials from the future that I can call upon. I don’t know, some of you may be science fiction fans, have seen this book by Greg Bear, a well-known science fiction writer, called “Slant.”
You may not have noticed that in this book there’s a character which has my last name — actually, it’s my last name and Nathan Myhrvold’s first name, so it’s sort of an amalgam of Nathan Myhrvold, who’s our chief technology officer, and myself.

Greg asked us, “Can I use you guys, you know, to pattern this character after?” And we said, “Sure.”
We really weren’t sure how it was going to work out. One thing to keep in mind is that Greg — this particular novel is set in the sort of Queen of Angels universe that he developed. And it’s a sort of a cyberpunk kind of novel. And you really do wonder a little bit about what this character is likely to be doing in the book. Luckily, this character actually doesn’t do anything very bad as far as the other stuff is concerned. Mainly, this character saves the world, which I thought was a good thing. I thanked Greg afterwards for it. So I’ve got this perspective here from at least 2052 to draw upon.

More seriously, when you look at the software today, what you tend to see is a lot of very direct interaction. Today, in some sense, the way we interact with our computers, the way users interact with computers is very much focused on the user telling the computer what to do. The users directly interact with the software. They directly manage the machines themselves. They worry about the system resources of various kinds. They directly handle — at least the developers, and the people that maintain the systems directly handle management of resources, distribution of services and so forth.

This just says I’m supposed to be playing a game right now. There’s a story that goes with that too, actually. Actually, I’m developing a game on the side, so that’s a separate issue.

So there is an issue there — yeah, it’s very direct, very interactive, but the user is both in control and having to do a lot of the things that are necessary.

As we move into the future, it’s really not going to be possible to have that happen anymore. I mean, what you’re going to see is that increasingly the applications will concern themselves with the logic and the semantics of the task, but it’s going to be the underlying system that really solves a lot of the details, and that the users won’t directly manipulate applications so much as they interact with applications through an intelligent agent of some sort. And the reason for that is just the complexity. We’re building systems, we’re providing features over time, that are just more difficult than users can really handle or than programmers can really keep track of. When you ask a programmer today to build a highly fault-tolerant, highly distributed solution to a task, most of them can’t do it. It’s too difficult. There’s not enough support there to make it work. But those are things that we have to do.

There are also changes in the way computers are used, the kinds of things that they’re used for. In the past, we started with this model that computers were primarily these big machines with the guys in white coats, and they were being used for analysis and problem solving. You would hand the guy your card deck or your paper tape punch or whatever it was that you had, he would carry it off to some room where the big machines were, and he’d bring you back the answer, but probably not that great. That was sort of the early uses.

As we moved into the personal computing era, it’s really been more sort of document creation usage.

As we’re looking into the future, and you’re seeing this already when you look at devices like the palm-sized PCs, like the WebTV devices and so forth, computers are now being used for a wide variety of purposes. They’re being used for reading, consuming and entertaining. They’re being used for understanding information. They’re really extracting data and giving it back to you in a new form. And they’re being used to communicate. I talk to my mother in Iowa over a videophone, which is really a computer on two ends. She had a stroke about five years ago. She can’t actually talk, but I set the videophone up for her so that we can actually see each other and she can pantomime, which is basically the way she communicates.

That’s the kind of thing that people are now increasingly doing with computers: really using them as a form of communication and interaction.

And I know you’ve got — or at least I was told you had a demo — (inaudible) — which you can think of as a Web technology that’s really stressing the kinds of entertainment and interaction and data viewing and data visualization access to the system. So I won’t really go over that.

What I am going to do, though, is talk to you a little bit, give you a short little clip to give you a vision of what computing might be like in a few years, probably not two or three years, but sometime out in the future. This is a little concept video that was put together by some of our researchers, looking at how we might be able to change or augment the way people interact with each other and solve problems using computers. I’ll just show this little clip.

(Begin video clip)

WOMAN: Everyone hates meetings. On the good side, meetings are where we make plans, debate strategies and come to decisions. On the other hand, meetings are tough to schedule, they disrupt everyone’s day and they’re usually boring. What we need is a way to attend meetings casually, while also getting work done. One approach is shown in this video. We call it a flow application.

The basic idea is that the computer creates a meeting room with great support technologies. You can casually monitor several meetings at once while working at your own desk, and easily jump into any discussion any time.

Flows are more lightweight and versatile than videoconferencing, and can help an organization respond to the unexpected.

Here’s a manager working with several flow apps at once. In the center of each flow window is a video of the meeting in a virtual environment. Participants are each in their own offices, but they inhabit the meeting room together. Although the imagery looks like live action video, it’s totally synthetic. Each participant’s workstation uses built-in video cameras and microphones to analyze body posture, gaze and expression. This information is sent to the other attendees, where it is used to create individualized views of the discussion. The application uses the rules of cinematography to clarify the give and take of the conversation.

On the left is an automatic running transcript for reference. On the right is a stream of suggested files from the computer, inferred from the topics of conversation and filtered to personal preferences. These can point to press releases, notes, photographs, stock reports, maps and even previous flow discussions.

Flows can be saved, mailed and embedded in documents.

With several flows open at once, a busy person can be in several places at the same time. We call this presence multiplication. It supports efficient, informed decision making.

(End video clip)

MR. RICK RASHID: Now, I’m sure you’ve all wanted to be in five meetings at the same time. Actually, the purpose of this video, or the purpose of this, was really an experiment to say what you might be able to do with technology in the future, how you might be able to improve interaction or add to what people could actually do. This isn’t going to be a product next year. This is a very futuristic view of what you might do in the future, and it may not even be something that you ever want to do in quite that way.

But the interesting part of this is that the idea that the computer can basically have a camera on it, watch what you’re doing, put you in several virtual environments at the same time, monitor what’s happening, create a 3-D version of yourself that fits in a holodeck style room and interacts, seems very futuristic. And yet a lot of the fundamental technology that would allow us to build a system of that kind is actually making a lot of progress in the research lab. There’s a lot of work going on, and you saw the sort of notion that there’s computer graphics creating realistic looking environments and people, so I’ll talk a bit about what’s happening in the research world there.

Computer vision, being able to analyze gesture and — (inaudible). Speech input and output are implied in that, where it’s really able to monitor what you’re saying and keep a transcript of that, but also potentially generate speech on your behalf. The use of natural language, both for analyzing what’s going on and for doing information retrieval. And the notion that you can monitor the activity and basically perform implicit requests for information, and do tasks for users based on what you think they need at a given point in time.

So I’m going to just touch on each of these different areas right now, and give you a feeling for where the state of the art is. So one of the areas that was being referred to was this notion of being able to create these virtual people, these virtual individuals. Now you’ve probably seen 3-D graphics and 3-D games; there are a lot of good 3-D games out right now, but they don’t exactly produce images that look like actual people for the most part. As a matter of fact, even the Hollywood movies for the most part have stayed away from representing actual faces.

One of our research groups, under a researcher named Brian Guenter, has actually been putting together a system that does a very good job of mimicking an individual’s face. I’ll just give you a real short clip to show you what the state of that research is.

(Begin video clip)

MAN: Describes a system for capturing 3-D facial expressions, compressing them and reconstructing them using 3-D computer rendering hardware. The capture process begins with a Cyberware scan of the actress’ head. Fluorescent colored dots are glued onto the actress’ face.

MR. RICK RASHID (interrupting video clip): You can see why we had to pay her to do this.

MAN: The actor is videotaped talking and making facial expressions. Six cameras are used, placed to capture every part of the actor’s face. Stereo matching and triangulation reconstruct the 3-D positions of the colored dots in each frame. These 3-D positions are used as control points to deform the 3-D Cyberware polygon mesh.

The six video streams are combined into a single textured image sequence, and the dots are removed by image processing techniques.

The texture image sequence is —

MR. RICK RASHID (interrupting video clip): So this is a completely artificial image.

MAN: — to create the final animation, which can be viewed from any virtual camera position.

COMPUTER: (Inaudible.)

MR. RICK RASHID (interrupting video clip): You really wonder where they get these phrases from. This is supposed to exercise the parts of the face.

MAN: The texture imaging sequence can easily be gigabytes in size. For many applications, such as animated talking head sequences on CD-ROM games or streaming video displays over the Internet, the data rate required for the uncompressed texture and geometric deformation data is much too high. Because of the structured nature of the data, it is possible to significantly compress it.

(End video clip)

MR. RICK RASHID: So it gives you a feeling — actually now they’ve been able to compress it so that it is something you could literally do over ISDN connections, to produce something of that sort.

And one of the things that you see there is it is a completely artificial face. It’s just a polygon model with textures being mapped onto it. And yet it looks incredibly lifelike.

Now, obviously we had to put this poor actor through a little bit of effort there, the face with the little dots and the — (inaudible) — to get all the information that we needed, but we think we’ll be able to collect this kind of data more easily as time goes on, as we use more computer vision techniques.

So what it’s showing you is that we’re able now much more so than has ever been true in the past to create an image of an actual individual person and make it extremely lifelike and then animate it on the other end.
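To make the stereo matching and triangulation step mentioned in that clip a little more concrete, here is a minimal sketch of how a single marker dot seen by two calibrated cameras can be turned into a 3-D control point. This is just the textbook linear method written in Python with NumPy; the camera matrices and the dot position are invented for the example, and the real capture pipeline is of course far more involved.

```python
# Minimal sketch (not the research group's pipeline): triangulating one marker
# dot seen by two calibrated cameras, via the standard linear (DLT) method.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3-D point from its 2-D projections in two cameras.

    P1, P2 : 3x4 camera projection matrices (assumed known from calibration).
    uv1, uv2 : (u, v) pixel coordinates of the same dot in each image.
    """
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The 3-D point (in homogeneous coordinates) is the null vector of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: an identity camera and one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 2.0, 1.0])           # a "dot" on the face
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]         # its projection in camera 1
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]         # its projection in camera 2
print(triangulate(P1, P2, uv1, uv2))             # ~[0.2, 0.1, 2.0]
```

Repeating this for every dot in every frame yields the moving control points that drive the polygon mesh described in the clip.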

One of the things that we’re doing with this research project now is we’re also looking to take these 3-D — (inaudible) — models of this kind and effectively puppeteer them remotely. This is work that’s going on in conjunction with our computer vision group. And what they’ve been able to do is come up with techniques for monitoring what you do in front of your computer screen, so they can recognize gestures. So, you know, movements of the head, movements of the hand. You can even create sort of stylized gestures. One of the things that they often do for demos is to show, you know, playing instruments, playing games and things, just with your hands, and with your gaze.

And, again, the goal here is to be able to keep track of what an individual is doing for a variety of purposes, for user interface purposes. Potentially it’s technology that could be used for accessibility kinds of settings, where someone may not be able to do anything but move their head, or move their hands in a certain way. But it can also be used for the kind of puppeteering that we saw in that flow video.

So you can see that technologies are beginning to come together to create this sort of vision potentially of what might be happening in the future.

Another area that was touched on in the video was speech recognition, speech generation. Now, we have a substantial amount of work going on in speech recognition, and, in fact, we just released the speech software development kit, version four, to the Internet, out of our group. That’s now available for free from our Web site, so you can go there and get it, speech recognition software that runs in Windows 98 and that runs in Windows 95, and runs in NT.

But we’ve also been doing a lot of work on text to speech generation. Now, I think most of you are probably familiar with the fact that most text to speech generation systems sound pretty artificial, pretty roboty or pretty, you know, sort of garbly in some fashion. What we’ve been doing is trying to pull together this technology to allow text to speech to be done and to sound like a specific individual, so effectively extracting an individual’s voice characteristics, and then using that to drive a text to speech engine.

So what I’m going to do is play a set of short audio clips that are — where you hear the artificial version of the voice that is generated by the text to speech system, and the actual person’s voice that follows up after that, so you can get a sense of how far we’ve come in being able to mimic a human being.

(Begin audio clip)

MAN: (Off mike.) First, prosthetic voice. Second, original recording.

COMPUTER VOICE: The meeting had scarcely begun than it was interrupted.

MR. RICK RASHID: That’s synthetic.

MAN: The meeting had scarcely begun than it was interrupted.

MR. RICK RASHID: That’s the real person.

COMPUTER VOICE: He spoke to various of the members.

MR. RICK RASHID: That’s synthetic.

MAN: He spoke to various of the members.

MR. RICK RASHID: That’s a real person again.

COMPUTER VOICE: However, the aircraft which we have today are tied to large, soft airfields.

MR. RICK RASHID: That’s synthetic.

MAN: However, the aircraft which we have today are tied to large, soft airfields.

MR. RICK RASHID: That’s the real person. He just sounds like a computer. He’s a voice actor that we hired.

COMPUTER VOICE: Summertime supper outside is a natural.

MAN: Summertime supper outside is a natural.

COMPUTER VOICE: The system may break down soon, so save your files frequently.

MR. RICK RASHID: You wonder where they get these things from.

WOMAN: The system may break down soon, so save your files frequently.

COMPUTER VOICE: Right now may not be the best time for business mergers.

WOMAN: Right now may not be the best time for business mergers.

(End audio clips)

MR. RICK RASHID: You do wonder where these speech guys get all this data from. You know, they’ve got their large, soft airfields, and the little cracks about business mergers.

The key thing here is what we’ve been able to do. You can still hear a little bit of warbling in the artificial version of the voice, although we’ve actually been doing a good job in more recent versions of the system to remove that, but you can see how close we can come now to basically extracting a single person’s voice and mimicking it. Currently, again much like the vision — like the face technology, it takes a fair amount of effort on the part of an individual to extract the information from them. They need to speak for four hours with a laryngograph on so that we can see how their Adam’s apple moves up and down. But the key point there is that we’re able now to do a much, much better job than has ever been done before in terms of creating a natural sounding, very fluid voice.

Now, of course, when you think about talking, you know, one of the issues there is the prosody. You know, when someone speaks, they have kind of a lilt to their voice, almost the music of speech. That’s what’s called prosody. You can also think about creating a text to speech and effectively using music itself as the prosody. That then becomes singing. Now, you’ve probably heard the old AT&T — (inaudible). It was really the inspiration for HAL in 2001.

(Begin audio clip)

COMPUTER VOICE: Daisy, Daisy, give me your answer, do. (Inaudible) — upon the seat of a bicycle built for two.

(End audio clip)

MR. RICK RASHID: Now, that was really good for 1962. It was actually a tremendous achievement at the time that the people were able to do that well.

We’ve obviously gone farther. If you take the kind of text to speech system that I just gave you some sound clips from, and use that to drive, basically set it to music, in fact we have an application where you can basically type in words, you know, to a MIDI score, and you get something that sounds a little bit like this.

(Begin audio clip)

VOICES SINGING

(End audio clip)

MR. RICK RASHID: Now, there are several things you can take away from that. First off, Simon and Garfunkel are not threatened, at least by the current state of technology. Maybe at some point they will be, but not right now.

Although, you know, realistically, if you think about it, a couple years ago they sold over a million copies of monks chanting, right, so I think with this technology, you know, there may be some opportunities here. In fact, it’s really funny, when we did some demonstrations of this system at our last company meeting, one of the things that they did is they took a broader collection of songs and let people actually see the application running. There’s this marvelous version of Mark, which is that male voice I talked about before that sounds a little artificial, singing Penny Lane, you know, the Beatles song. And it sounded ever so much like Kermit the Frog singing. And I was really struck by that, and I kind of said, “Well, why does it sound like Kermit the Frog?” And the answer is because Mark has a very nasal quality to his voice, and even though frogs actually don’t have noses, if you think of Kermit the Frog’s voice, you know, that voice actually does have a very nasal quality to it. So you got this nasal Penny Lane going on. There are some marvelous duets that we’ve done between Mark and Melanie singing together, again, all artificially.

So that gives you a feeling for sort of where that technology is going. So clearly we’re moving in the direction of being able to represent people in terms of their physical appearance. We’re moving in the direction of being able to recognize what they say and represent the way they speak or how they talk, and to be able to generate it artificially.

One of the things that we’ve been doing a lot of work on in our research organization is language recognition and language analysis. Now, we first actually took the language research that we’ve been doing — it’s our oldest research group in the research lab — we first began using their research in our product in Office 97, and if you’ve used Word 97, you know it has a number of features in it that came from this general purpose natural language engine that we’ve created. Text critiquing is one aspect of that. Summarization. If you were in Japan, one of the things that our Japanese Word product does is something that’s called word breaking. In Japanese there aren’t spaces between the words; they’re just characters that run together. You need to be able, or at least ideally you’d like to be able, to isolate the words when you click on something. So that’s technology that we’ve incorporated into that market. This is what it looks like in the running product. Basically it’s a way of saying, you know, what are you doing, and giving you some feedback on mistakes that you may have made in your grammar.

What’s really happening there is there’s a general purpose natural language engine that’s doing the kinds of things that you did in grammar school. That’s why it’s called grammar school. Meaning that there’s a system for basically charting out the sentences, identifying where the nouns, the verbs, the adjectives, the adverbs are, recognizing what word senses are being used, and effectively analyzing the sentence, you know, for some sense of the meaning that exists in it.
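Just to make that grammar-school idea concrete, here is a toy sketch in Python of the two most basic steps: tagging words with parts of speech and pulling out simple noun phrases. The tiny lexicon and the chunking rule are invented for illustration and bear no relation to the general purpose engine described above.

```python
# A toy illustration of sentence analysis: tag each word with a part of speech
# from a hand-made lexicon, then chunk determiner/adjective/noun runs into
# noun phrases.  Real engines use full grammars and word-sense information.
LEXICON = {
    "the": "DET", "a": "DET",
    "quick": "ADJ", "brown": "ADJ", "lazy": "ADJ",
    "fox": "NOUN", "dog": "NOUN", "fence": "NOUN",
    "jumps": "VERB", "over": "PREP",
}

def tag(sentence):
    return [(w, LEXICON.get(w.lower(), "UNK")) for w in sentence.split()]

def noun_phrases(tagged):
    """Greedy chunking: optional determiner, any adjectives, then a noun."""
    phrases, current = [], []
    for word, pos in tagged:
        if pos in ("DET", "ADJ"):
            current.append(word)
        elif pos == "NOUN":
            phrases.append(" ".join(current + [word]))
            current = []
        else:
            current = []
    return phrases

tagged = tag("the quick brown fox jumps over the lazy dog")
print(tagged)                 # [('the', 'DET'), ('quick', 'ADJ'), ...]
print(noun_phrases(tagged))   # ['the quick brown fox', 'the lazy dog']
```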

What we’ve been doing in the longer run, though, so where we’re taking this research — that’s work that’s been done. It’s in our products now. It’s continually being improved, and the next version of Office will improve it. But longer term what we’re trying to do is basically build up the ability to read English text or text in other languages — we’re currently working with seven different languages — so it could take text and effectively extract knowledge from it. So we’ve built something called MindNet, which is basically a semantic network of the English language. What we’ve done is we’ve basically taken this natural language processing system and we’ve used it to parse dictionaries and other kinds of source materials. And when you think of a dictionary, there’s a lot of information in a dictionary about the meanings of words and the relationships between the words. What we’re doing is effectively extracting that, creating a semantic network or a database, if you want to think of it that way, and linking those words together so that we have the relationships, the meanings of the words stored there.

And we’re also beginning to store knowledge there.

One of the things we’ve done recently is to use the same system to effectively read Encarta, which is our encyclopedia, and store knowledge there as well.

This is just a very simple little piece of what you will see if you look at the inside of a MindNet. This is basically all the things — not even all of them, just a portion of those things that sort of funnel into the word “bird” or relate to the word “bird.” And you can see this information about quacking, and that quacking is a sound, and it’s a sound produced by a bird, and you know a bird is an animal and all the other things that you might get out of that.

So that’s an example of what we’re trying to do there.
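Here is a small sketch of the kind of structure being described: words connected by labeled relations, which you can then walk to answer “what do we know about X?” The handful of facts below are typed in by hand purely to mirror the bird/quack example above; in the real system they would come from parsing dictionary definitions and other source material.

```python
# A minimal sketch of a semantic network: words linked by labeled relations.
# The facts are hand-entered for illustration, not parsed from a dictionary.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.edges = defaultdict(list)   # word -> [(relation, word), ...]

    def add(self, word, relation, other):
        self.edges[word].append((relation, other))

    def related(self, word):
        return self.edges.get(word, [])

net = SemanticNet()
net.add("quack", "is_a", "sound")
net.add("quack", "produced_by", "duck")
net.add("duck", "is_a", "bird")
net.add("bird", "is_a", "animal")
net.add("bird", "has_part", "wing")

# "What do we know about a quack?"
for relation, other in net.related("quack"):
    print(f"quack --{relation}--> {other}")
```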

In terms of the applications, in terms of understanding knowledge: information retrieval is an obvious case, and we’ve already begun working with the product groups to take this technology and use it for information retrieval. Data mining is another obvious case. There’s a huge amount of data stored in documents, and basically the only way people access that information now is by doing things like word spotting. What we’re hoping to be able to do is extract meaning and do meaning spotting and meaning extraction.

Question answering is my favorite. One of the things that we’re trying to do right now is use this knowledge base that we’ve created to be able to actually answer questions. So when people do information retrieval today, they’re really not, for the most part, doing information retrieval; they’re doing document retrieval. In other words, I go in and I type in some key words, and it comes back with a list of documents related to that. That’s document retrieval.

What you’d really like is to be able to say, who is the President of the United States, and have that come back to you as an answer, as opposed to a document that happens to refer to Bill Clinton.

So that’s something that we’re working on there as well.

Obviously spoken language understanding is a component of this, in addition.

Now, another aspect of that flow video I showed you was this notion of creating virtual places where people can interact with each other. Now, we’re also doing work in that area, and really it’s serving a variety of purposes. Collaboration is really the main goal. Why don’t we just give you a short clip of what we’re doing in this sort of creating-virtual-places collaboration, and the technology that goes underneath that.

(Begin video clip)

MAN: The virtual worlds platform is currently under development by the virtual worlds group within Microsoft Research. This platform facilitates the development of multi-user distributed applications on the Internet. It is being designed as a general purpose system, supporting a wide range of applications, such as communications, collaboration, education, entertainment and commerce. The virtual worlds platform exploits Microsoft’s ActiveX and DirectX technology, allowing great flexibility in the design of user interfaces, and the support of multimedia.

V Worlds uses a client-server architecture. Users enter a world by connecting their client machines to a V Worlds server. Each client machine performs the audio and visual rendering of the world, and handles the user interface. The V Worlds server maintains the persistent world — (off mike) — coordinates changes to the world and then (inaudible).

V Worlds implements distributed object models that represent the entities in the virtual world. These objects can have properties, client-side methods and server-side methods. The V Worlds platform allows developers to design and implement the objects in their applications without having to concern themselves with communication and database details.

The virtual worlds platform is implemented on top of Microsoft’s ActiveX and DirectX technology. A V Worlds ActiveX control can be included in a web page or embedded by any application that supports ActiveX controls. The V Worlds control provides low-level distributed object services. Additional controls perform the audio and visual rendering of virtual worlds.

To facilitate the development of applications, the virtual worlds platform also provides several tools to assist in the creation and editing of objects and object graphics. Various wizards simplify the creation of worlds, rooms, portals and individual objects.

Graphic editors facilitate the composition of graphic scenes and collision detection boundaries.

Objects can provide their own user interfaces to facilitate editing. This example shows how a user can change a painting that is displayed within a virtual world.

Here are some environments that have been created using the virtual worlds platform. This is a sample world created by the virtual worlds group to demonstrate some of the platform’s functionality. It supports various communication options, such as chatting, whispering and graphical gesturing. It also supports the creation of buddy lists and lists of people to be ignored.

As users navigate through the world, they can collect and exchange objects with other users. Accessory objects can be obtained that allow users to decorate their appearance. Throughout the world, objects are available that encourage user interaction and customization of the world. This environment is being developed in collaboration with the Fred Hutchinson Cancer Research Center. It is designed to provide both social and informational exchange for caregivers and people with cancer.

The avatars are photographic and use a custom gesture pane that more accurately identifies moods. The space has a mail room for users to exchange images and notes. Users can also visit an auditorium to view in-world presentations.

This environment demonstrates the combination of virtual worlds and electronic commerce. In this virtual music store, users can not only browse CDs and listen to music, they can get live advice from other users and store personnel. The store integrates with other Microsoft components to perform secure transactions.

The virtual worlds platform is a versatile system that provides many opportunities for creating shared worlds, experimenting with user interface ideas, and performing research on the behaviors of graphic objects and distributed systems.

Microsoft’s virtual worlds group is using this platform as a test bed for a range of applications and projects.

(End video clip)

MR. RICK RASHID: And obviously one of the things they’re testing with it is actually this virtual meeting place technology as well.

That gives you a feeling again for the technology being developed to really experiment with different forms of collaboration, different forms of persistent interaction between individuals. And you saw that one of the things we’ve been doing is working with the Fred Hutchinson Cancer Research Center, and that’s gotten a tremendous amount of very positive response from the people at the Center, in terms of being able to allow people to communicate and share knowledge and information and experiences, you know, between the caregivers and some of the cancer patients.

So I’ll talk now about a lot of the technologies that sort of form the core of that flow video. And in particular I talked a bit about, you know, being able to create these virtual places, put lifelike people in them, be able to create sort of lifelike extensions of themselves in various ways. I mentioned that there is this changing boundary between software and hardware. And really what that says is there’s a need for more and more intelligence, you know, as we change the boundary between, you know, what’s going on in the applications and what the user perceives, or what’s going on in the hardware and what the application perceives. We need to add more and more intelligence. Users aren’t going to be able to predict very much about their environment. It’s more difficult as time goes on already. So we need to be able to fill that gap.

Now, one of the technologies that we talked about in the flow video is this notion of being able to monitor what the user’s doing and effectively, you know, perform tasks for an individual, or perform — (inaudible) — queries of various kinds.

Now, there’s been a number of activities that we’ve been doing that fall into this category. We’ve developed something called the Mirror, which actually became what’s the Intelligent Assistant in Office 97. We’re doing research in — (inaudible) — queries, and I’ll talk to you a little bit about that. We’re also doing work in being able to automate other kinds of tasks for users, and I’ll talk to you about something that we’ve generated called Lookout, which is an aid to Outlook to help you schedule meetings and things of that sort.

All of this depends on the notion of user modeling. You know, in other words, it depends on being able to track what a user is doing, create a model of their behavior and activity, and then be able to effectively categorize that behavior as meaning certain things in certain contexts.

One of the things that we do a lot of research on is monitoring what people actually do. I just threw these things in just because they’re fun to look at. If you just ask what the person looks at when they’re performing a task, here’s an example. The little crosshair on the screen is the eyes of the user being tracked as they’re using a web browser application. So here they’re looking at a Babylon 5 Web site, one of my favorites, and you can see them sort of scanning around, performing various functions, you know, going up, looking for the icons that they need, going over to look for scrolling and so forth. So that’s the kind of data that you get by monitoring users and keeping track of what they’re doing.

If you’ve ever wondered what it is that you do when you’re reading e-mail, here’s an example of a user and their eye gaze as they go through a set of e-mail. And, again, you can see, you know, how their attention moves around the screen, what kinds of things are important to their eye, where they’re looking to perform various kinds of tasks.

And then finally, you know, for those people who really want to know what happens when I’m playing a game when nobody’s watching, here you can see somebody actually doing a very poor job of playing Space Invaders. In fact, you’ll see how poor — yeah, I think they’ve already died at least once. This is a really terrible player.

The point of all this user modeling, monitoring what users are doing and collecting a lot of data about users, is to be able to be predictive about what users’ actual goals are, you know, given the activities that they’re performing. So basically what you do is you go through a phase where you look at people whose tasks you know, you look at what they’re doing, and then you collect those statistics, and then you go back and say, okay, when I see people doing certain things, what is the most likely activity that they’re performing?

We’ve developed something called the Mirror, which became the Intelligent Assistant in Office, which is really the underlying help system in Office, so you can type things like free text queries and it can monitor your activity. So that’s what happened with that technology.

Underneath the — (inaudible) — technology is this modeling of what is actually going on. It’s saying, you know, you’re looking at what is the probability of the concept, given the word used, and then being able to say, all right, when I look at a particular piece of text that the user types in — but they forgot to plug my computer in, I guess. Either that or I’ve not turned it on. So my suggestion for the people in the back room is, if they can find how to make sure that my otherwise plugged in computer actually has power, that would be great, so I can finish my presentation. Did that actually produce a result? We’ll see.

There we go. It did produce results. See, there are people back there. You were probably wondering, are there people back there really listening to what’s going on here.

So here’s an example of where you’re looking at something where we’re saying, you know, I want to insert some text into my chart. You’re really trying to decide, you know, what is the most likely event: is it that I’m talking about formatting a document or modifying a chart? And basically you’re looking at, well, when those kinds of words were used by test subjects to mean a certain thing, you know, what was the probability of that, and then being able to combine the probabilities in an intelligent way.
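As a rough illustration of combining that per-word evidence, here is a toy sketch that scores a typed request against two candidate tasks using word counts from invented training queries and a naive-Bayes-style combination. It is only meant to show the shape of the computation, not the actual model used in the product.

```python
# A toy version of the idea: count how often each word appeared in training
# queries for each task, then combine the per-word evidence naive-Bayes style.
# The two tasks and the training counts below are made up for illustration.
import math

counts = {
    "format document": {"format": 12, "text": 9, "font": 7, "insert": 2, "chart": 1},
    "modify chart":    {"chart": 14, "insert": 6, "axis": 5, "text": 3, "format": 2},
}

def score(query, task):
    words_in_task = counts[task]
    total = sum(words_in_task.values())
    vocab = {w for t in counts.values() for w in t}
    log_p = 0.0
    for word in query.lower().split():
        # Laplace smoothing so unseen words don't zero out the whole product.
        p = (words_in_task.get(word, 0) + 1) / (total + len(vocab))
        log_p += math.log(p)
    return log_p

query = "insert text into my chart"
best = max(counts, key=lambda task: score(query, task))
print(best)   # the most likely intent given the words used: "modify chart"
```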

Now, if you look beyond that, and you look at this notion of how do we do implicit queries, not something where you just type in something and get the result, but where you’re actually able to monitor users’ behavior, monitor what they’re doing and effectively say, ah, this user wants information about this subject, because I can see what they’re doing on the screen. That was talked about in the flows video. Here’s an application that we’ve actually built, where what you can see is that, you know, there’s a user reading through some particular piece of text; there’s a lot of information in the text about natural gas, other kinds of data. You can use the information about what the user is looking at, the kind of information, how they’re dwelling on information, to effectively generate queries that say, what kinds of documents might this person be interested in at the same time, effectively creating links on the fly, if you want to use the web browser analogy.

You may also be able to do things, like here, for example, you’re looking at something about the yen, and it suddenly gives you a lot of information about money, right. You’re also looking at things like the dollar is moving sharply lower and now there’s other kinds of accesses.

So this is the notion of being able to do — (inaudible) — queries, you know, based on information.
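A very small sketch of that idea: take the passage the reader is currently dwelling on, weight its words against a background frequency list, and use the most distinctive terms as a query issued on the user's behalf. The background counts and the passage below are made up, and a real system would also weigh dwell time, position on the screen and much more.

```python
# A sketch of an implicit query: words common in the passage being read but
# rare in general text become the query terms.  All data here is invented.
import re
from collections import Counter

BACKGROUND = Counter({"the": 1000, "is": 600, "and": 500, "of": 450, "a": 400,
                      "to": 380, "in": 350, "as": 300, "against": 150,
                      "sharply": 60, "rise": 50, "lower": 40, "moving": 35,
                      "prices": 30, "natural": 20, "gas": 15, "dollar": 8,
                      "yen": 5})

def implicit_query(passage, top_n=3):
    words = re.findall(r"[a-z]+", passage.lower())
    scores = {}
    for word, count in Counter(words).items():
        # Frequent in the passage, rare in the background -> high score.
        scores[word] = count / (1 + BACKGROUND.get(word, 0))
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

passage = "The dollar is moving sharply lower against the yen as natural gas prices rise."
print(implicit_query(passage))   # ['yen', 'dollar', 'gas'] -> hand to a search engine
```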

Now, one of the kinds of things that you may be doing is monitoring a user in a specific application, and you’re trying to provide support for that. And one of our research teams has built something called Lookout, which is basically an addition to Outlook, just a program that runs on the side, monitors what you’re doing, and lets you schedule events — or it schedules events for you.

So, for example, here I’m opening up a window, a particular message that says, you know, would you like to do lunch tomorrow. What the system has done, Lookout has monitored the fact that I was looking at that particular mail message, and opened up the schedule appointment. In this case it’s all set to be in the past, because I’ve done this demo more than once. And in fact there’s a setting that says ignore things that occurred in the past. I’ve turned that setting off. But you can see it says do you want to do lunch tomorrow, and what it’s done is it’s scheduled a lunch event for me, or at least it’s allowed me to schedule such an event for 12:00 to 1:00.

So we’ll get rid of that.

Now I might open another message saying something like, you know, we need to meet — this is a message to myself, so this is a fake message, but, you know, we need to meet on Thursday to discuss your performance from 1:00 to 2:00. Well, it’s got that information in there.

Now, what this program actually has done is it has monitored, you know, users and their behavior. In this particular case it’s monitored one of the developers — one of the researchers in this group, over the course of a long period of time, and looked at their scheduling events, looked at what they — what kinds of messages they looked at and when they brought up scheduled information. So it’s effectively tried to tune itself to a particular user. In fact, I can turn on a switch here, and over a period of time it will tune itself to my use. So it will look at when I don’t bring up the scheduler, but when I do, what kinds of messages I tend to want to be scheduled, and what kinds of messages I don’t.

You know, here’s something, an example here. This is, you know, sort of a generalized thing about me, and it says, no, really, that’s not a scheduling activity. If I go down in the corner, it says the scheduling probability is only about, you know, 20 percent, which is below its threshold for something that should be scheduled. I could force a schedule here by double clicking and saying view my calendar in May, and it would, in fact, bring up the appropriate calendar for May for me. But it gives you the point here that it can learn what you do, learn your behaviors and try and fit in with your environment. You know, when I say something like, you know, we’re meeting at 3:00 and you can — there are all sorts of ways of saying things that it will pick up. So it’s actually looking at the language content of your mail messages and extracting information from it.

Where’s a good example of that? Here’s a — I think that’s the best example. We were just talking about scheduling something for later in the week, and so it’s pulled up the week’s worth of information to say, you know, what is the most appropriate thing. It will actually do — where’s a good example of that? Interesting, how about getting together in the afternoon, and what it will do is it will actually pull up, find that I’m available that afternoon, find a time that’s available and suggest that as a potential scheduling event.

So that’s an example of the kind of application where you’re basically monitoring what a user is doing. You — (inaudible) — a lot of data about a particular user over a period of time, and you’re effectively creating a model of what that user is likely to do, and then learning how to do it for that user. In fact, one of the things it’s learned is how long to wait, right, before it tries to do a scheduling, so it also keeps track of the amount of time that you normally spend in scheduling activities.
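To give a flavor of the first step such an assistant has to take, here is a rough sketch that spots day and time expressions in a message and turns them into a proposed appointment. This is not the Lookout code; it only handles the two phrasings used in the demo above, and the real system learns many more patterns along with the user's habits.

```python
# A rough sketch: find a day ("tomorrow" or a weekday name) and a time range
# like "from 1:00 to 2:00" in a message, and propose an appointment slot.
import re
from datetime import datetime, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday", "friday",
            "saturday", "sunday"]

def propose_appointment(message, now=None):
    now = now or datetime.now()
    text = message.lower()

    # Pick a day: "tomorrow" or an explicit weekday name.
    if "tomorrow" in text:
        day = now + timedelta(days=1)
    else:
        day = None
        for i, name in enumerate(WEEKDAYS):
            if name in text:
                ahead = (i - now.weekday()) % 7 or 7   # next such weekday
                day = now + timedelta(days=ahead)
                break
    if day is None:
        return None       # below threshold: don't bring up the scheduler

    # Pick a time range; default to a lunch slot if none is mentioned.
    m = re.search(r"(\d{1,2}):(\d{2})\s*to\s*(\d{1,2}):(\d{2})", text)
    start_h, end_h = (int(m.group(1)), int(m.group(3))) if m else (12, 13)
    return day.strftime("%A"), f"{start_h}:00", f"{end_h}:00"

print(propose_appointment("Would you like to do lunch tomorrow?"))
print(propose_appointment("We need to meet on Thursday to discuss your "
                          "performance from 1:00 to 2:00."))
```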

Now you could use this — that type of an approach for doing scheduling, but you can also use a similar approach for doing other kinds of auto-filtering of messages and auto-analysis of messages. One of the things that we’ve been doing, for example, is something called a spam killer. The idea is to be able to monitor what you consider to be spam over some period of time, so that it effectively learns what are things that you don’t want to hear about versus what are things that you do want to hear about, and it will do whatever you want to do with the spam, whether that’s deleting it or putting it into a separate folder.

So here’s an example of spam, right. This particular one seems easy to characterize. Almost any message with that many dollar signs in it has got to be spam. There are a lot of other things to look for. But the notion is you build a detector that automatically determines whether something fits into a specific category or not. It builds itself automatically. It adapts over time. Not just to you, but also to the spam, right, so as you get new kinds of things, as the spammers learn what spam is to your system, and they try to come up with new ways of sending you spam, you’ll note that, you’ll say, no, that’s spam now, and it will learn that over time. So this is something that we’ve been doing. We’ve been training this system with a number of our people, but it can be trained against any individual. And we’ve been getting very, very good results with it.
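One simple way to build a detector of that sort, sketched below, is to keep a per-word weight and nudge the weights whenever the user's own filing of a message disagrees with the current prediction. This is a generic mistake-driven learner shown only to illustrate the idea of a filter that builds itself and adapts; it is not necessarily the technique used in the research system described above.

```python
# A toy adaptive spam detector: word features plus a "many dollar signs" cue,
# with weights updated only when the filter's guess disagrees with the user.
import re
from collections import defaultdict

class AdaptiveSpamFilter:
    def __init__(self, threshold=0.0):
        self.weights = defaultdict(float)   # one weight per feature
        self.threshold = threshold

    def _features(self, message):
        feats = set(re.findall(r"[a-z]+", message.lower()))
        if message.count("$") > 3:          # the "many dollar signs" cue
            feats.add("has_many_dollar_signs")
        return feats

    def score(self, message):
        return sum(self.weights[f] for f in self._features(message))

    def is_spam(self, message):
        return self.score(message) > self.threshold

    def teach(self, message, spam):
        """Call this whenever the user files or deletes a message."""
        if self.is_spam(message) != spam:               # learn only from mistakes
            step = 0.5 if spam else -0.5
            for f in self._features(message):
                self.weights[f] += step

spam_filter = AdaptiveSpamFilter()
spam_filter.teach("MAKE $$$ FAST $$$ guaranteed income $$$$", spam=True)
spam_filter.teach("Notes from Thursday's research review attached", spam=False)
print(spam_filter.is_spam("Guaranteed income! Make $$$$ fast from home"))   # True
```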

Now, beyond spam, you can auto-filter, and this is also very cool. Again, these are applications that we’re running today in our research lab, where you can basically decide that you want to have the system automatically save mail messages in folders for you, based on your placement of mail messages in folders over a period of time. So that’s another example.

Now, we’ve talked about — and that’s the kind of intelligence where it’s monitoring the user, and the user’s behavior as he or she interacts with the underlying system.

Now, there are other things that you can monitor, other things that you can model and provide intelligent support for. One of the things that one of our research groups has been doing is working to build an intelligent physical database design system. So basically what you’re monitoring in this environment isn’t what a user is doing so much as it is how the underlying database system behaves: what are the queries that run against it, what kinds of queries, what kinds of results it brought back. And what you can do is automatically tune the indexes and the parameters in the database to fit a particular application. This is actually work that we’ve been doing with the SQL group, and this will be technology that will actually be going into the SQL product in the fairly near future. But it’s an example of the kinds of things that you can do by putting intelligence in the system and doing a lot of modeling of behavior.

The point of this particular — (inaudible) — is that you can do extremely well with these — (inaudible) — techniques: typically less than 10 percent off of the best possible database design, which is typically better than what most physical database designers actually achieve. And typically we’re much better in a number of categories.
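A toy version of the underlying search problem looks something like this: given a query workload and a crude cost model, try candidate index sets under a storage budget and keep the cheapest. The cost numbers, budget and workload below are all invented; the real tuning work drives its choices from the database's own query statistics and cost estimates.

```python
# Toy index selection: enumerate candidate index sets that fit a storage
# budget and pick the one that makes the (invented) workload cheapest.
from itertools import combinations

# (query name, column it filters on, how often it runs per day)
workload = [("orders_by_customer", "customer_id", 5000),
            ("orders_by_date", "order_date", 1200),
            ("orders_by_status", "status", 300)]

# candidate index -> (column it covers, size in MB)
candidates = {"ix_customer": ("customer_id", 80),
              "ix_date": ("order_date", 60),
              "ix_status": ("status", 40)}
BUDGET_MB = 120
SCAN_COST, SEEK_COST = 100.0, 2.0     # made-up per-query costs

def workload_cost(chosen):
    covered = {candidates[ix][0] for ix in chosen}
    return sum(freq * (SEEK_COST if col in covered else SCAN_COST)
               for _, col, freq in workload)

best = min(
    (subset
     for r in range(len(candidates) + 1)
     for subset in combinations(candidates, r)
     if sum(candidates[ix][1] for ix in subset) <= BUDGET_MB),
    key=workload_cost)
print(best, workload_cost(best))
```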

Now, I’ve been focusing a lot in this — (inaudible) — presentation on users and user interface technology, but in fact a lot of these issues of how to put intelligence into the system, how to make the system effectively do things for users in advance of their needing to know that they wanted to do them, how to adapt to the underlying hardware environment or the underlying usage patterns, those are things which also apply to the system and the underlying system technology. And so we’re doing a lot of work there to really try to push forward: to build systems at what I’m calling planetary scale — you’ll see why in a moment — to build virtual computing environments that don’t depend on hardware any longer, and to build systems that are self-aware.

Now, I’m sure you’ve heard this in earlier presentations, so I’m not telling you anything new. We’re really moving to the point, roughly by the year 2000, where disk storage is going to be basically a penny a megabyte. It’s about three and a half cents a megabyte right now, at retail. What a penny a megabyte means is that a terabyte of storage is only going to cost $10,000. So we’re really getting to the point where there’s a data tidal wave coming at a tremendous rate, producing what some people refer to as the terabyte. I mean, you think, you know, what is a terabyte of information? If you sell 10 billion items a year, which is something like a Wal-Mart, you know, you’ve got at least a terabyte of information if you’ve got 100 bytes for each one. That’s a terabyte. Lots of other kinds of data would fall into that category.
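The arithmetic behind that claim is simple enough to check directly:

```python
# A terabyte is a million megabytes, so the price per terabyte follows
# straight from the price per megabyte quoted in the talk.
for cents_per_mb in (3.5, 1.0):           # today's retail price, and ~year 2000
    dollars_per_tb = cents_per_mb * 1_000_000 / 100
    print(f"{cents_per_mb} cents/MB -> ${dollars_per_tb:,.0f} per terabyte")
```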

Now, one of the things that we’ve been doing is putting together a very huge database to really stress, you know, what you can do, how sort of large you can build these database systems using PC technology. We’ve put together what we believe is the largest PC database in the world. It’s currently about 324 disks, about 3-something terabytes, 2.9 terabytes, and ten gigabytes of RAM. It’s a pretty big system. The goal of this system is basically to be able to provide, on the Internet, for free, access to aerial imagery of the earth’s surface to people that are interested in it. In fact, we’re planning to bring this system online on the Internet fairly soon. I think the current date is sometime in July. I’m not exactly sure, but it’s in the next few months.

We’re working with both the US Geological Survey and the Russian Space Agency. We’ve currently got something like three point something terabytes of data. Here are some of the partners. We’ve been working with Digital, the USGS, Spin II, which is a US company that really works with the Russian Space Agency, and a number of others. You can see some of the kinds of images that are created.

Just to give you a feeling for what we’re talking about here, the earth’s surface is something on the order of 500 trillion square meters in area. We’re currently in possession of imagery for about six percent of the earth’s surface, so that gives you a feeling for sort of where we are. And we continue to add more data to this as time goes on. And it’s a big system, and it’s a big problem to try to put something like this online.
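As a back-of-the-envelope check on those figures, six percent of that surface area works out to a few terabytes of pixels, the same ballpark as the database he describes. The surface area and coverage come from the talk; the 2 meter pixel size and one byte per stored pixel are rough assumptions for the estimate, and compression brings the stored total down further.

```python
# Rough consistency check on the TerraServer numbers (assumptions noted above).
earth_surface_m2 = 500e12          # ~500 trillion square meters, from the talk
coverage = 0.06                    # six percent of the surface, from the talk
meters_per_pixel = 2.0             # assumed resolution for the estimate

pixels = earth_surface_m2 * coverage / meters_per_pixel**2
terabytes = pixels / 1e12          # assume ~1 byte per stored pixel
print(f"{pixels:.2e} pixels, roughly {terabytes:.1f} TB before compression")
```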

I’ve mentioned some of the people that we work with. Let me just quickly show you. I can’t run it from here, so I can’t give you the deluxe tour, but what I can do is give you a feeling for some of the imagery that we’ll be putting online. My house is on it, for example; my back yard is available. The resolution of the images is 1 meter for the USGS data, which is aerial photography; a camera is basically at 10,000 feet flying over the earth. The Russian Space Agency has provided us with data which is 2 meter resolution, from their spy satellite. Here’s the US Capitol building, and you can sort of see roughly what the resolution is there. I’m going to bring up another image here, just to give you an even better idea. That’s a USGS image. Here’s an image of the Vatican, for those of you who are familiar with what the Vatican looks like.

And the particularly interesting part about this image, which I didn’t notice at first, and it gives you a feeling for what 2 meter resolution actually can do for you, is if you look at this bridge — I don’t know if you can see the pointer very well, but if you look at the bridge here, there’s a traffic accident on the bridge. You can actually see that if you’re close enough to the screen. You can see the little — (inaudible) — cars askew, and if you get up close to the screen, you can actually see what look like little people.

So it gives you the feeling for what we’re going to be able to provide there.

Again, the goal here was to produce a multi-terabyte database that would be available, you know, seven by twenty-four on the Internet. This is all built on SQL Server 7.0, so it’s all built on — (inaudible) — technology, you know, running on PC hardware. And it’s very cool. The particular part that I like is just being able to go in and look at, you know, areas that you’re familiar with, and literally in some cases look at your back yard, if your back yard isn’t too covered up with trees. One of the things that you begin to realize, particularly, is that trees do tend to obscure a lot of back yards, if you look at the data. But it’s a lot of fun to look at.

And, again, you can get sort of the view of what’s going on there.

We will be updating it — right now most of our imagery comes from the period of about late 1989 through 1992 or ’93. The Russians actually — this is just another view of the application — and we’ve tied the Encarta database into it, and the Streets Atlas database, so there will be a lot of things there.

One of the things that’s particularly interesting is that although the data we have online today is fairly old, about five years old mostly, the Russians have recently put up a new satellite, and they’re going to be giving us new data as time goes on. And, in fact, we’ve tied the TerraServer into our commerce servers so that both the USGS and the Russian Space Agency can actually sell people imagery that they’re interested in, using the same environment. So it gives you a feeling for what’s going on there.

So that’s big, that’s the, you know, building very, very large systems.

One of the things that we’re also doing within our research group is looking at ways of basically building what I would call virtual computing environments, environments which no longer really depend on the underlying hardware, where applications can be written in a way that doesn’t depend on the characteristics of the computers or the disks or the file systems and the networks that they’re connected to.

Now, you've heard something about Intellimirror, I'm sure, by now, so I'm not going to talk about that. But what Intellimirror is trying to do is sort of move in that direction, to begin to create that kind of a virtual environment by removing some of the dependencies between the software and the hardware. And you've seen some of these kinds of slides, I'm sure.

And I know you've heard something about, you know, our terminal server, you know, the — (inaudible) — and this notion of being able to basically provide a Windows environment in a remote way.

What we're doing on the research side is trying to push the envelope much farther than that, though, and say can we really create environments in which application programs don't even care what computers they're running on, that all the interesting characteristics of the computers can be mimicked, in effect, by the system software, so that you can effectively blow up one machine, have another machine take over transparently, and literally have nothing within the application's environment change. The IP addresses that it thinks it's working with remain the same. The environment's the same. The database looks the same. It's just literally the same kind of thing.
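As a rough illustration of that kind of transparency, here is a small sketch in which an application only ever talks to one stable endpoint while a thin layer fails over between replicas behind it. The class names and the in-process "replicas" are invented for the example; this is not how the research system is implemented.

```python
# Illustrative sketch of the failover idea, not Microsoft's implementation:
# the application sees one stable "virtual" endpoint, and a thin layer retries
# the call against whichever physical replica is currently alive.
class Replica:
    def __init__(self, name):
        self.name, self.alive = name, True

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request!r}"

class VirtualEndpoint:
    """The application only ever holds this object; its 'address' never changes."""
    def __init__(self, replicas):
        self.replicas = replicas

    def call(self, request):
        for replica in self.replicas:        # fail over transparently
            try:
                return replica.handle(request)
            except ConnectionError:
                continue
        raise RuntimeError("no replica available")

primary, backup = Replica("node-A"), Replica("node-B")
service = VirtualEndpoint([primary, backup])
print(service.call("read row 17"))   # served by node-A
primary.alive = False                # "blow up" the first machine
print(service.call("read row 17"))   # same endpoint, now served by node-B
```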

Part of this is this notion of building an infrastructure which has intelligence — (inaudible) — an object infrastructure where the objects are very self-describing. There's a lot of work going on in that area in our product groups now related to COM+. We're pushing the envelope with that in the research group to basically say, let's go even farther; let's make an environment in which objects not only don't care about the systems they're running on, they can't even tell whether they're running in one location or the other, and be able to analyze the object behavior and effectively optimize and tune the code for those objects on the fly.

One of the things that we've been doing is automated distribution, and we've been able to do things like take some of our existing COM applications, like Microsoft Picture It!, which is a very heavily COM-based application, and effectively analyze the communication boundaries between the COM objects in the application automatically from the program binary, then split the application apart into client-side and server-side components, taking into account the most optimal way of placing that code on the network.
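A toy sketch of the placement decision itself might look like the following: given measured (here, assumed) communication volumes between components, with a few components pinned to the client or the server, pick the split that sends the least traffic across the network. The component names and byte counts are made up for illustration; the actual analysis works from instrumented binaries.

```python
# Toy sketch of automatic client/server partitioning. The numbers are assumed,
# not measurements from a real application: given per-pair communication costs
# between components, pick the split that keeps the most chatter off the network.
from itertools import product

components = ["ui", "layout", "imaging", "storage"]
pinned = {"ui": "client", "storage": "server"}   # components that must stay put
comm_bytes = {                                   # bytes exchanged per session
    ("ui", "layout"): 9_000_000,
    ("layout", "imaging"): 4_000_000,
    ("imaging", "storage"): 500_000,
}

free = [c for c in components if c not in pinned]
best = None
for placement in product(["client", "server"], repeat=len(free)):
    side = dict(pinned, **dict(zip(free, placement)))
    # network cost = traffic between components that end up on different sides
    cost = sum(b for (a, c), b in comm_bytes.items() if side[a] != side[c])
    if best is None or cost < best[0]:
        best = (cost, side)

print("cross-network bytes:", best[0])
print("placement:", best[1])
```

The brute-force search here is only workable for a handful of components; the idea it illustrates is that the split is chosen from observed communication patterns rather than by the developer drawing the client/server line by hand.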

Going even farther than that (that's fairly near-term research), what we're looking to do is build something which we've been calling Millennium, just because I want the team to produce the first version by the year 2001. But they're building a — (inaudible) — environment in which we effectively raise the level of abstraction of the underlying system in such a way that there really isn't any longer the notion of a file system that's in a single location, or disk drives, or individual computers or networks. The component builders will no longer have to care about that.

What we've actually done is to build, at least initially, on the underlying architecture components from the product groups, so we're building on NT 5 and COM+. We've effectively built, you know, a cluster virtual machine environment that maintains a single system image across a large number of individual computers, and we do transparent invocation, migration and recovery within that space.
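The flavor of location-transparent invocation can be sketched like this: callers hold a proxy keyed by an object identifier, and a registry resolves that identifier to whichever node currently hosts the object, so the object can migrate between calls without the caller changing anything. The names and the single-process "cluster" below are assumptions for the example, not the Millennium interfaces.

```python
# Minimal sketch of location-transparent invocation (assumed names, not the
# Millennium API): the proxy looks up the object's current node on every call,
# so migration between nodes is invisible to the caller.
class Cluster:
    def __init__(self):
        self.location = {}   # object id -> node name
        self.objects = {}    # (node, object id) -> object

    def place(self, node, oid, obj):
        self.location[oid] = node
        self.objects[(node, oid)] = obj

    def migrate(self, oid, new_node):
        old = self.location[oid]
        self.objects[(new_node, oid)] = self.objects.pop((old, oid))
        self.location[oid] = new_node

class Proxy:
    def __init__(self, cluster, oid):
        self.cluster, self.oid = cluster, oid

    def invoke(self, method, *args):
        node = self.cluster.location[self.oid]           # resolved on every call
        target = self.cluster.objects[(node, self.oid)]
        return f"[{node}] " + getattr(target, method)(*args)

class Counter:
    def __init__(self): self.n = 0
    def add(self, k): self.n += k; return str(self.n)

cluster = Cluster()
cluster.place("node-1", "counter-42", Counter())
ref = Proxy(cluster, "counter-42")
print(ref.invoke("add", 5))            # [node-1] 5
cluster.migrate("counter-42", "node-2")
print(ref.invoke("add", 5))            # [node-2] 10 -- same proxy, object moved
```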

So we're working on this. It is part of our research agenda. And, again, it's the notion of putting a lot more intelligence into the underlying system, into the underlying components, than was there before. So if you think about the things we've talked about up to now, on the one hand we've said, you know, create this environment where people interact with each other, communicate with each other, use applications, and we monitor that to provide a lot of services, to create these virtual working spaces for them to operate in. On the other hand, we can provide a virtual working environment for the code itself to operate in, so developers don't have to be as aware of it.

So these, I think, are a lot of the directions that we’re going.

Now, I started out with a real quote from Plan 9 from Outer Space, which is its own view. Obviously, you know, different people have different views of what's going to happen, you know, as computing becomes ever more automatic, as we put more and more intelligence into it. But I don't know how many of you have seen Dark Star; I do recommend it. Some video stores stock it; a lot of them don't. It's John Carpenter's earliest work. If you have seen it, you'll recognize this little clip, and keep in mind, this is a ship captain talking to a bomb that is about to be dropped on a planet.

(Begin video clip)

MAN: In other words, all that I really know about the outside world is related to me through my electrical connections.

MAN: Exactly.

MAN: Why, that would mean that I really don't know what the outside universe is like at all for certain.

MAN: That's it, that's it.

MAN: Intriguing. I wish I had more time to discuss this matter.

MAN: Well, why don't you have more time?

MAN: Because I must detonate in 75 seconds.

(End video clip)

MR. RICK RASHID: Well, we may not get to the point in the next two or three years where we're having existential debates with our desktop computers. I really don't think that's likely to happen very soon. But I think what you've seen is a lot of technologies that are in the research labs, not just at Microsoft; I certainly highlighted the ones we're doing, because it's easier for me to get the demos and the videos.

But it gives you a feeling for sort of where some of these technologies are going, what opportunities might exist in the future, and the level of intelligence I think we will be able to bring, you know, into the computing environment as time goes on.

And I am very happy to take questions now. So any questions? Just — you know, I don't know what the protocol is for the question-and-answer period. So I assume you would go to the microphone or otherwise communicate with me. Shouting from the back will work, too, although it's hard for me to see.

Yes, a question?

Q Hi. I am Todd Carlson from Northwest Natural. I actually have a question for Rick. (Laughter.)

Q Yeah. Back on all the user attributes — the voice recognition, the image information and everything. Have all your different groups been working together to try and come up with a single person's human object to have all that stored, sort of an extension of the user profile?

And I don't want to get into the existential arguments, but the ownership of that: I mean, if you're able to create a likeness of a person with that person's voice, it seems like that kind of crosses the border between, well, it's on the company's computers, and, I think, that person actually owning it.

Have you thought about that at all, or is that simply so far down the road that we don't have to worry about it yet?

MR. RICK RASHID: Well, it's certainly something that comes up. I mean, I don't think we've done what I would characterize as a careful analysis of where this technology is really going to take you. Clearly one of the things that you will be able to do, not this year or next year or the year after that, but in the not-too-distant future, is mimic an individual very strongly, obviously with that person's agreement; you need a lot of data from the person to be able to do that. I think there will be applications for that. People will be interested in using that.

Who owns that? My guess is it will be the individual. But it's one of these areas where, you know, law will have to catch up with technology, much as it has in other areas, in the future.

In terms of being able to create a unified user profile, again, there is a — (inaudible) — into that, but right now we're not there. I mean, the groups that are doing graphics are interacting with the groups that are doing natural language and speech, but there's still a lot of work to be done. And so we're well away from that.

I think where there is some progress is in being able to come up with more standardized ways of capturing information and keeping track of it and storing it, and we're looking at how we can build a database of information that can be shared across applications. Right now, for example, Office actually monitors the user's behavior in order to provide help and support to that user. As time goes on, it will be much more important that applications can do that.

Currently that information really only is part of that application; it's not shared across other applications. So there are issues about how do you standardize that information, how do you share it across applications, how do you provide common events between applications that relate to what the user is doing? Because a lot of the user's behaviors aren't directed at a single application, they're really directed at the total task, whatever that total task is, which means they're doing several things at the same time that may be related to each other, and you may need to be able to take that into account and feed that information back up to the applications for them to make intelligent choices.

So we are looking at that. I think we’re still well away from doing a really good job of it, and that’s, again, another area of research for us.
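One way to picture the shared-information idea is a common event format that every application publishes into and can subscribe to, so that help and assistance features can see the user's whole task rather than just one window. The event fields and the little event log below are assumptions for illustration, not an actual Office or Windows interface.

```python
# Hedged sketch of the shared user-behavior idea only (fields and bus are
# assumptions): if applications publish user actions in one common format,
# any of them can reason about the user's whole task, not just its own part.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List

@dataclass
class UserEvent:
    app: str      # which application observed the action
    action: str   # e.g. "open", "edit", "print"
    target: str   # what the action applied to
    when: datetime = field(default_factory=datetime.now)

class SharedEventLog:
    def __init__(self):
        self.events: List[UserEvent] = []
        self.subscribers: List[Callable[[UserEvent], None]] = []

    def publish(self, event: UserEvent):
        self.events.append(event)
        for notify in self.subscribers:
            notify(event)

log = SharedEventLog()
# An assistant in one application could notice related work going on in another.
log.subscribers.append(lambda e: print(f"assistant sees: {e.app} {e.action} {e.target}"))
log.publish(UserEvent("Word", "edit", "Q2 report.doc"))
log.publish(UserEvent("Excel", "edit", "Q2 figures.xls"))
```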

MR. KEN GLASS: We have a question in the middle.

Q How many of your researchers are market-oriented, or are they directed by other considerations? One of the marketing mottoes of Microsoft is that you're helping make the world a better place. Now, do you believe the world may become a better place just by introducing more IT, or do you think it's important to look after what mankind wants out of the information system?

MR. RICK RASHID: Well, fundamentally, our goal in our research group is to push the technology forward in our respective research areas. And that's something which is driven by the research community as a whole. We're participants in that community. We publish papers. You know, in each area, whether it's computer vision or graphics or speech, we're out there trying to push the state of the art. And so there isn't a marketing aspect to what we're doing, or a product plan behind what we're doing. We're really trying to move the state of the art forward. We're a basic research lab.

Now, we also do, though, pay a lot of attention to the research that's being done and the good ideas that get generated out of it, both ours and other research organizations'. And we really try to look at what are the best ideas that have really proven themselves that we can move fairly rapidly into our technology. And we're able to do a pretty good job of that.

As an example, there was a research project that was done by one of our researchers in graphics. He gave a paper at SIGGRAPH a couple of years ago, in August, on something called progressive mesh technology, which is a resolution-independent way of representing 3D surfaces. And it was great research. It really has changed the way the 3D graphics people now think about representing 3D surfaces. And yet we were able to take that research, published in August of 1996, and it was available in early beta form in DirectX 5 in, I think, February of the following year, and it's part of our DirectX 5 product today. So when a piece of technology makes sense, we can move it into our products very, very rapidly.
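The underlying idea of a progressive representation is a coarse base shape plus an ordered list of refinement records that can be replayed to any level of detail. That structure can be illustrated with a much simpler 2D polyline; the real progressive mesh work operates on edge collapses and vertex splits of 3D triangle meshes, so the code below is only an analogue of the coarse-to-fine idea, not that algorithm.

```python
# Simplified 2D analogue of a progressive representation: store a coarse base
# shape plus an ordered list of refinement records, and replay as many as the
# desired level of detail needs. (Progressive meshes do this on 3D surfaces.)
def simplify(points):
    """Repeatedly drop the interior vertex that deviates least from its neighbours."""
    points = list(points)
    removals = []                      # removal order; later entries matter more
    while len(points) > 2:
        def error(i):
            (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
            # distance of the middle point from the line through its neighbours
            num = abs((x2 - x0) * (y0 - y1) - (x0 - x1) * (y2 - y0))
            return num / max(((x2 - x0) ** 2 + (y2 - y0) ** 2) ** 0.5, 1e-9)
        i = min(range(1, len(points) - 1), key=error)
        removals.append((i, points.pop(i)))
    return points, removals

def reconstruct(base, removals, detail):
    """Re-insert the 'detail' most important removed vertices, newest first."""
    points = list(base)
    for i, p in reversed(removals[len(removals) - detail:]):
        points.insert(i, p)
    return points

curve = [(0, 0), (1, 0.1), (2, 1.5), (3, 0.2), (4, 0.0), (5, 2.0)]
base, removals = simplify(curve)
print(reconstruct(base, removals, 0))               # coarsest: just the endpoints
print(reconstruct(base, removals, 2))               # intermediate detail
print(reconstruct(base, removals, len(removals)))   # full original shape
```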

In terms of, you know, do I think that adding more technology will make the world better, basically, for myself personally, the answer is yes. I mean, the reason I got excited about computer science was that when I was in college at Stanford and for the very first time was able to program what in those days was an old HP-2116, for those people who are as old as I am, and got the thing to read paper tape and write to a teletype device and do things on its own, it was such a strong feeling, such a rush, this notion that I could basically take my thoughts and effectively animate this device and do something — which at that point wasn't very interesting — but I could easily see how I could do things that would improve people's lives in some fashion.

And that's really what has driven me personally, this notion that I can create things that have never been done before and that really let people do something they haven't been able to do before, to solve problems that they needed to solve but weren't able to in the past.

Some of the things like the TerraServer that we've just done, which will be out fairly soon, I think are going to be a tremendous learning tool. I think it's going to get used very heavily. We've gotten a lot of feedback from people this way as well. It will get a lot of use from schools, people that are trying to learn geography, learn about this kind of information. They will be able to go on the Web, find that kind of data, and immediately be able to solve a problem.

And I look at what my kids are doing. My 14-year-old, he's after me to buy a dual processor Pentium 400 with half a gigabyte of RAM. You're saying to yourself — with Windows NT on it, by the way. He wants Windows NT. And you sort of say, well, why is this 14-year-old interested in this? Well, he does rendering of 3D objects and digital movie-making. And so for him, this is the tool. You know, a few thousand dollars invested in a computer, and he can suddenly make digital movies, animations that are as good as a lot of the stuff that you'd see in a TV commercial. And this is technology that's in the hands of kids, literally kids in this case.

So I think that's what we're able to do now. You know, that's how we're changing the world. We're giving people the ability to do things they could never have done before. I think that's what's driving our research in the future, is can we create technology to allow people to do things they could never have imagined doing.

MR. KEN GLASS: We have time for one final question.

Q Windows 95, 98, NT 4.0, NT 5.0: over the last year, the last 18 months, maybe two years, Microsoft has said the desktop, whether it be business or home, will come together into one OS. It seems that as NT evolves into a robust enterprise networking operating system, perhaps it may not find its way into the home. Or is Microsoft still on track to have one operating system replace Windows 98, or whatever version of Windows is going to be out there, with an NT version, meaning Windows NT for home and for business? Maybe that will solve a couple of the other questions that were asked today.

MR. RICK RASHID: Well, when we talk about Windows NT for the home, we're really talking about the Windows NT kernel technologies being applied as the center of the technology that's available for the Windows operating system in general. There will obviously be different things around it that will make it more attractive and more useful in home versus business environments. So we're really talking about the kernels coming together in the future, not necessarily anything beyond that.

We will continue to strive to meet customer needs. I mean, if we don't, you won't be buying our products, so we have to continue to do that, and meet the different needs of differing constituencies.

Thank you.

MR. KEN GLASS: You're welcome. And this closes out the first PC Futures conference. First of all, thank you, Rick, for the closing keynote. (Applause.) And I want to thank all you hearty souls for making it to the very end. Do remember to close out those evaluation forms; fill them in so we can make improvements and decisions about the future. And we very much appreciate you taking all the time and energy and money to come here and spend time with us, and I hope it was useful. Thank you very much.

(Applause.)
