Future Forum Panel Discussion
Redmond, Wash., Sept. 5, 2001

MR. RICK RASHID: I’ll be acting as the moderator today, and I’ve got the list of questions that people submitted during the day. So, we will work from that.

Let me go ahead and introduce the panelists, and I’ll start on my side with Dan Ling, whose presentation some of you have already seen. Dan is director of the research lab here in Redmond. Just as a point of historical information, I hired Dan into Microsoft Research; I actually tried to hire him the very first day that I accepted my job at Microsoft. Dan was one of my best friends from my days at Stanford. And, of course, he told me no right at the beginning, and it took about six months or so before I was able to actually convince him to come to Microsoft. So, here’s Dan.

Chris Bishop, who is assistant director of our research lab in Cambridge, gave a presentation earlier today as well, so Chris will be able to talk about some of the work that goes on there.

Nathan Myhrvold, whom you haven’t seen earlier today. Nathan is the guy who hired me to start Microsoft Research. Most recently at Microsoft he was our CTO (chief technology officer); at the time he hired me, he headed up something called the Advanced Technology and Business Development Division of Microsoft, and he was responsible for a number of activities within the company over the years. He retired from Microsoft, what, two years ago, a year ago? It just seems like longer. Now he has a company in the Bellevue area that manages his business interests.

Ed Lazowska just recently stepped down as chair of the Computer Science Department at the University of Washington here in Seattle. Ed has been a very key figure in a number of organizations, including the Computing Research Association, and he led the advisory committee for the National Science Foundation’s computing directorate. He’s been an influential member of the academic computer science research community for a long time, and he has also been a member of our technical advisory board from the inception of Microsoft Research.

Next to Ed is Richard Newton, who is Dean of Engineering at the University of California-Berkeley. He is also a current member of our technical advisory board, and Richard has a broad collection of interests in electrical engineering and computer science.

And finally Ya-Qin Zhang, who is head of our Research Lab in Beijing, and before coming to Microsoft Ya-Qin headed up a multimedia group at the Sarnoff Laboratories. He’s a very knowledgeable and experienced researcher.

So, that’s our panel, and I’m going to be asking the questions. It’s kind of a weird arrangement here, because I’m not really sure how to address the panel, and they can’t really see me very well, but we’ll figure out how to make that work.

What I will be doing is drawing on some of the questions that have been asked during the day on the little cards that you guys have filled out.

So the first question, which is really a question for the Microsoft researchers, although there’s a version of it that I think could be applied to the entire panel, is this: think back to 1991. Is there a technological development that Microsoft Research didn’t foresee which shaped today’s world? And, as a follow-up to that, what technologies really surprised you?

And, Nathan, since you haven’t really talked today, I’ll start with you, because you’re the one that really came to me to start Microsoft Research.

MR. MYHRVOLD: The first big surprise actually is that this whole enterprise happened. I mean, obviously I did have huge confidence to start it. That confidence was shaken the first time Rick turned me down, and also the second time Rick turned me down, but eventually we managed to hire Rick.

It’s amazing to see what Microsoft Research has done over the last period of time. To answer the question more directly, of course there’s major technology that no individual lab foresaw; in fact, there’s great technology that almost nobody foresaw. That’s sort of the nature of the research enterprise. I don’t think anybody foresaw how quickly the Internet grew, just as not that many people foresaw how quickly the financial aspects of many of the Internet companies collapsed in the last year. So, that’s certainly a surprise.

We’ve seen tremendous progress in bioinformatics and the human genome. That’s not something Microsoft Research was really focused on, since we’re not oriented toward the life sciences, but I think that’s one of the most exciting aspects of computing going forward.

It’s also surprising that in some areas there hasn’t been more progress. From 1991 to today, the progress in most aspects of artificial intelligence hasn’t been as great as I would have predicted. On many of the classical problems, machine understanding, translation, speech recognition, progress has been made, but whereas 10 years ago you might have said they were five years off, some of those things, you know, they didn’t happen five years ago.

So, there’s my answer.

MR. RASHID: Dan, do you have anything you might want to add from the Microsoft Research perspective?

MR. LING: I think one thing that nobody foresaw was the rapid decline in the cost of magnetic storage. I think everybody thought that magnetic storage densities were tending to flatten out as far as what people could achieve. And I think it’s been really surprising how quickly the prices have gone down, and how much progress has been made even beyond the supposed superparamagnetic limits.

MR. MYHRVOLD: In fact, to follow up on that, one of the things that has almost always been wrong in this area is people projecting that technological progress was going to halt as physical limits were hit. That’s been true for Moore’s Law. In 1991, and around that period, I would make projections that there would be a 500 MIPS processor by the year 2000. And Intel has just announced a 2 gigahertz Pentium 4 that does, depending on how you count, three or four instructions every cycle. And when I would tell people inside Microsoft, even Bill would say, “Come on, what will we use that for? What would you use all that power for?” And I said, “Look, don’t worry about it, someone will figure out a way.”

And both the consumption of that power and the creation of it have just continued to be phenomenal. Nevertheless, you can read an article today that says, you know, Moore’s Law is going to stop in 2003. I read an article like that last week. Some guy wrote an article in 1991 saying it was going to stop in 1994.
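As a quick arithmetic check on those figures, taking the 2 gigahertz clock and the three-or-four instructions per cycle from the remarks above at face value:

```python
# Back-of-the-envelope check of the Moore's Law figures quoted above:
# a 2 GHz processor retiring 3-4 instructions per cycle, versus the
# 500 MIPS projected in 1991 for the year 2000.

clock_hz = 2e9                    # 2 GHz Pentium 4, per the remarks
for ipc in (3, 4):                # "three or four instructions every cycle"
    mips = clock_hz * ipc / 1e6   # millions of instructions per second
    print(f"{ipc} instr/cycle -> {mips:,.0f} MIPS, "
          f"{mips / 500:,.0f}x the 500 MIPS projection")
```

Depending on how you count, that is a factor of 12 to 16 beyond a projection that itself sounded aggressive at the time.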

MR. RASHID: I was going to say, from the academic perspective, Ed, do you have some comments?

MR. LAZOWSKA: I was just going to make a comment on disk storage and bandwidth, because I think the progress on those in the past few years has really caught us all by surprise. Everyone understands Moore’s Law: you double every 18 months. What has happened with disk storage is that suddenly, in the past three years, the cost capacity has started doubling every year. So, at this point in time, our ability to store stuff is growing exponentially, like our ability to compute, but on a faster curve. Lots of computer designs are predicated on us having, in some sense, more processing than storage, and what’s turning things on their ear is the very rapid growth of storage.

Very soon, you’re going to be able to, for example, store the entire digital record of your life, for better or for worse, and an interesting question is, are you going to be able to find anything? You know, we talk about the desktop metaphor; I don’t know if your desktop is like my desktop, but that’s not how you want the digital record of your life organized, as a gigantic mishmash in which you can’t find anything.

We can talk more about this later, but backbone network throughput, as you heard earlier today, is improving very rapidly. In fact, it’s been doubling every nine months or so, so backbone capacity is growing even faster than secondary storage. These exponential changes always really catch us by surprise.
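To make the compounding concrete, here is a minimal back-of-the-envelope sketch using the doubling periods quoted above; the ten-year horizon is an arbitrary choice for illustration:

```python
# Compound growth over a decade for the three doubling periods quoted:
# processing (~18 months), disk cost-capacity (~12 months), and
# backbone bandwidth (~9 months).

doubling_months = {"processing": 18, "storage": 12, "backbone": 9}
years = 10
for name, months in doubling_months.items():
    doublings = years * 12 / months
    print(f"{name:10s}: 2^{doublings:.1f} = {2 ** doublings:,.0f}x in {years} years")
```

Over ten years the same exponential shape gives roughly a 100x gain in processing, a 1,000x gain in storage, and a 10,000x gain in backbone capacity, which is why the curves that double fastest keep overturning design assumptions.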

MR. RASHID: Richard, did you have any comments?

MR. NEWTON: Well, I can add, I think, a consequence to that consequence, which is that as a result of that unpredicted growth of networking and storage and so on, systems have become complex a lot faster, or more complex, than we expected in that same time frame. So the emphasis on things like availability, maintainability, and reliability caught us all by surprise too, I think. And the robustness of very large, complex software systems, that’s an area that Microsoft Research is effectively working on now, obviously, and is ahead of many other groups on, but it’s still a major problem that we haven’t yet resolved.

MR. RASHID: I’ll just throw my two cents in; I’m the moderator, so I can say anything I want. One thing I think is interesting is that you can actually do almost a better job of predicting technology than of predicting what the impact on society is going to be. I think that’s where the web caught us more by surprise: the societal impact, and the speed with which it had an impact on society, rather than the technology. The technology basically ran on the same curve that people thought it was going to run on. And I think that if you look forward 10 years, it’s hard to predict where some of the changes that we believe are going to occur will have their societal impact.

Here’s another question. This is more for our foreign research laboratory guys, but I think it’s an interesting one. The question is, what is the contribution of the Cambridge and Beijing laboratories in terms of your culture and different approaches to problems in research? And, Chris, I’ll pass that one to you as the starting point, what do you see as the sort of different perspective that you guys bring to it?

MR. BISHOP: An interesting question. So, in terms of differences between the labs in different places and so on, I think one thing to emphasize is, research is very much a people driven activity. And one of the main reasons for setting up labs outside of the U.S. is to be able to attract people from a different talent pool.

And so, just one of the key differences between the labs is that we just have a different set of researchers, different individuals, who bring different talents, different skills, a different style of creativity. So, if you posed the same question to the Beijing lab, to the Redmond lab, to the Cambridge lab, you might end up with three rather different solutions, because you’ve got three different sets of people working on it.

I think the other difference, too, is the sort of environment within which we work, the sort of immediate contacts that we have. In the Cambridge lab, we’re very fortunate because we’re in close proximity to Cambridge University, we have very close links with Cambridge University, and we have very good links with a number of key universities throughout Europe. Those interactions, again, bring us into contact with particular talents, particular expertise, and sort of shape the nature of our research.

So, I would sort of emphasize the people aspects of the distinctions between the labs.

MR. RASHID: Ya-Qin?

MR. ZHANG: We share lots of commonality: the best talents, an open and free environment. There are, of course, a number of differences. In Beijing, if you look at the composition of the people, over 20 percent are well-established research leaders from all over the world, but 80 percent are actually quite junior. There’s a lot of potential, a lot of energy, in people who are really just beginning to gain experience.

So, I might point out, what we did was provide lots of direction, including mentorship, and the good news is that that allows researchers at the beginning to become more creative and evolve their own projects. A lot of the direction was top down at first; it’s becoming more of a combination of bottom up and top down.

Otherwise, I think it’s quite similar. There is a lot of collaboration among the three laboratories, actually four laboratories, that really helps us, and helps the other labs a great deal.

Rick, can I add a few sentences to the first question about the things we have not predicted?

MR. RASHID: Sure.

MR. ZHANG: I think one thing the computer community didn’t predict was the web; it was actually invented by a physics lab, not by a computer science lab.

And the second thing, which I agree with Nathan about, is that lots of people back in the ’80s and early ’90s actually predicted that Moore’s Law was going to stop in a few years because of physical limits. In fact, Gordon Moore himself wrote a paper, I think it was titled something like “No Exponential Law Will Last Forever,” which predicted that 0.25 micron would be the limit for lithography, for the optics. And, in fact, that prediction was broken by Intel’s own labs. So, the good news is, we believe that Moore’s Law is going to continue for at least another 10 years, and a lot of the creativity in software can be built on top of Moore’s Law.

On the other hand, software will also become the driving force for the evolution of hardware. I think, as Nathan correctly pointed out, software is a gas: it’s only limited by the collective IQ and the imagination of human beings.

Number three was really the use of broadband. Back when I started work in ’89, my first job was fiber to the home. That was 12 years ago, and I was working for GTE deploying a fiber-to-the-home system. Then two years later, I was working on a system called fiber to the curb. Then two years later, it was fiber to the loop, and then it was hybrid fiber-coax, ADSL, and other things. So in the backbone, the growth has been really, really amazing; the bandwidth, the traffic, just doubles every six to eight months. In the last mile, it just takes time, scale, patience, and also the availability of content to happen.

I’m sorry that I just rewound to the first question.

MR. RASHID: Okay. Does anybody have any comments they want to make on that one?

Here’s an interesting one. Actually, someone just walked up with a closely related question, so I’ll ask them both at the same time. The question is, will any system ever really pass the Turing Test? In other words, when will interaction with a computer be indistinguishable from interaction with a human? And the related question is, do you agree with Ray Kurzweil’s prediction about having human intelligence in a PC by 2025, which I assume means that it passes the Turing Test, either that or it can communicate, and can you elaborate on this topic?

So, basically, both of these are asking, you know, are we going to be able to build computer systems that can mimic humans in some fashion, and when might that actually take place? I know Nathan gave a speech that I attended at an ACM (Association for Computing Machinery) conference where he said he wanted to download his intelligence into a computer by 2047 so that he could be preserved.

MR. MYHRVOLD: It’s not clear that I could pass the Turing Test, of course.

MR. RASHID: But does anybody want to tackle this particular one? Nathan, this is one you’ve talked about before.

MR. MYHRVOLD: There are really two issues here. The first is the continuation of Moore’s Law: will computers continue to get faster for the next 25 years at roughly the rate that they have in the past? There are arguments you can make in both directions. There are experimental devices that would indicate you should be able to take it that far. The technology may not be conventional silicon technology; just this last week there was a release on superconducting buckyballs, carbon buckyballs, a carbon-60 molecule appropriately doped. With carbon nanotubes, you should be able to make both transistors and wires that are fantastically smaller than anything we can do today. And if you project Moore’s Law out, you wouldn’t need these carbon-nanotube-based conductors or chips for 40 years, potentially, so it’s even beyond the 2025 prediction. So if Moore’s Law keeps going at that rate, computers will be so powerful that you can make a very strong argument that they could be as smart as humans.

The second issue is, what is the secret of cognition? How do our brains work? What is the general architectural approach to intelligence? We know that the way we think is not very analogous to the computer. Computers have beaten various chess champions at various times, and the interesting thing is that when a computer does that, usually it’s in a limited context, and the computer is going 500 million cycles per second, something like that, while neurons are firing maybe 50 times a second. It’s clearly a vastly different architectural approach. We don’t know what it is. We could find out tomorrow, we could find out in 2025, it could take until the end of the century. It’s very hard to predict what we’ll be able to do when we figure those things out.

Some people like to create sort of a metaphysical argument that says, no, that’s too hard for us to ever figure out, a machine could never do it. To me, most of the people who say a machine could never do it sound a lot like the argument that the earth was the center of the universe. Not always, but it’s one of those arguments where if you say, God, it’s impossible for a machine to be as smart as us, you’re kind of setting yourself up for a bad prediction, I think.

So I don’t know the answer. I think there’s a reasonable chance Moore’s Law will continue to 2025, or something like Moore’s Law in perhaps a different technology. And I think it’s a reasonable bet, at least a 50/50 bet, that we’ll understand how the architecture of cognition works, how brains work, well enough to understand how to write software that could be that intelligent. So I would say tentatively yes, but it’s not more than a 50/50 bet.

MR. LAZOWSKA: So here’s my claim. I think we will continue to make progress in emulating various characteristics of human behavior, and we will continue to redefine the problem in a way that preserves our specialness, because that’s what we’re like. It used to be that the greatest intellectual activity was playing chess, and we don’t talk too much about that anymore, because computers can beat us at it. Instead we talk about how they use entirely different approaches: geez, it’s just exhaustive enumeration, which of course it isn’t. So we redefined the problem.

There are robots these days that are starting to show reasonable simulations of human emotions. And there are a bunch of lab experiments at MIT (Massachusetts Institute of Technology) and elsewhere, where you can really get people to react much more responsively to a robot that attempts to emote. Give that another five years and we’ll have these debates about whether these mechanical devices are really showing emotion, or whether they’re just simulating emotion. We’ll redefine the problem.

It’s like talking about whether a jet airliner flies or not. There’s some argument by which it doesn’t, because it doesn’t have anything that’s flapping, therefore it’s not flying. But if, like me, you have to get to Washington, D.C., tomorrow morning, it’s a perfectly reasonable simulation for all practical purposes. I think we’ll continue to make enormous progress, and we’ll continue to redefine the problem, because after all we do have this we’re-the-center-of-the-universe notion of things.

MR. RASHID: Richard, you have a comment?

MR. NEWTON: I think, Dan, we’re the two hardware people here on the panel, so I have to chime in on the Moore’s Law argument as a hardware person. By way of background, I direct a research center funded by the Semiconductor Research Corporation whose charter is to build the design tools that will keep up with Moore’s Law; that’s another aspect of the research that I do. We’ve built transistors now, demonstrated them, at channel lengths of less than 50 nanometers, actually 15 nanometers now. Which means we can show on the semiconductor roadmap that we have a lot of runway to go with Moore’s Law, at least for the next 12 years, maybe beyond that.

I think one of the bigger challenges with Moore’s Law is not the technical challenges; it’s actually an economic challenge. Even if we could build an emulation or a simulation of a person, we certainly couldn’t afford to build billions of them, with chips built in fabrication facilities that each cost potentially many tens of billions of dollars. Even with the economies of scale that we would have, it’s not clear that the end of Moore’s Law is a technical end. It could be an economic end.

We may not be able to build chips inexpensively enough, the large ones we’re talking about, to address the very high volume markets that will fuel the economy. So we need to look for alternative technologies. And I think Nathan had it right, and so did Ed: the challenge is an architectural challenge much more than it is a technical challenge. We are working on organic semiconductors today, plastic transistors that we can print using ink-jet printers. We can print sheets of these as inexpensively as we can print the labels for soup cans. We can put billions of them down, and we can integrate displays with them. Not today, but soon.

The challenge is that they break pretty quickly, they wear out, and there are all sorts of technical challenges in terms of how we build circuits out of them. But in terms of emulating the cortex, with its dimensionality, we can certainly get the sorts of densities in those sorts of technologies that would potentially give us the opportunity to do that. Then we have this challenge of, well, gosh, they break a lot. Well, so do cells, so do neurons, so does the human body.

So then we have this programming challenge, this architectural challenge: how do we program, efficiently and effectively, things that break a lot? Now, that also applies to things like the Internet as well, of course. But we have people like David Heckerman and Eric Horvitz here at Microsoft Research working on probabilistic approaches, statistical ways, you heard a lot about those today, for how we might solve specific problems. Right now they’re built on top of deterministic computers, but ultimately we may be able to use those sorts of techniques directly in the materials and the interfaces we build.

So to me, the challenge of emulating the human, as well as the future of Moore’s Law, is ultimately entirely one of architecture: how we approach the problem, how we redefine the problem.

MR. RASHID: Anybody else want to chime in?

MR. BISHOP: I think the point about probabilistic computing is a very important one, because just in the last few years there has been a quiet but very important revolution going on in computing. We traditionally think of computers as finite-state, logical, deterministic machines: we program a fixed set of instructions, and they execute the instructions. There are a lot of things we can do with that; we can build wonderful word processors, spreadsheets, and fantastic software. But there’s a limit to the kinds of problems we can tackle that way.

I think one of the very important developments in making progress towards either artificial intelligence, or something which emulates intelligence in a practical sense, is the move towards probabilistic methods using Bayesian inferential techniques, not only to handle uncertainty, but also to handle learning. So instead of programming a computer to solve a problem directly, we do something very different: we program the computer to be adaptive, we program it to learn from data, to learn from experience, using these probabilistic methods. And by having the computer learn from data, we can have it solve a problem that’s way too difficult to solve by writing the program directly.
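As a minimal sketch of that idea, learning from data instead of coding the answer directly, consider Bayesian inference on the simplest possible problem, estimating a coin’s bias from observed flips; the beta-Bernoulli model used here is a standard textbook illustration, not a method named in the discussion:

```python
# Minimal Bayesian learning sketch: rather than hard-coding the coin's
# bias, start from a prior belief and update it from observed data.
# Beta(a, b) is the conjugate prior for a Bernoulli likelihood, so the
# posterior after h heads and t tails is simply Beta(a + h, b + t).

def update(a, b, observations):
    heads = sum(observations)
    tails = len(observations) - heads
    return a + heads, b + tails

a, b = 1, 1                      # uniform prior: complete uncertainty
data = [1, 1, 0, 1, 1, 1, 0, 1]  # observed flips (1 = heads)
a, b = update(a, b, data)
print(f"posterior mean bias = {a / (a + b):.2f}")  # 0.70 after 6 of 8 heads
```

The program never contains the answer; the answer emerges from the data, and the posterior sharpens automatically as more observations arrive.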

So I think, yes, Moore’s Law is tremendously important; super-fast computers are a necessary condition for progress towards artificial intelligence. But I also think the kind of software that we write is equally critical. The emergence of probabilistic methods as a computing paradigm alongside traditional methods will, I think, be key to making progress towards machine intelligence.

MR. RASHID: Okay. I think we hit that one. The comment I’d make, just to follow up on Ed’s comment, is that it really is all in how you define it. Right? As the father of a two-and-a-half-year-old, I can produce a perfect simulation of my son’s responses to almost any question, which is that if it’s a direct question, the answer is no. You could easily pass his definition of a Turing Test.

I’ve got some more questions here that are actually related to each other. I’m going to throw this one in because it’s partly a follow up to some of the comments that Chris just made, and there are two related questions, and I’ll just read them together.

The first part says: in my personal experience, each release of Windows has been more stable than its predecessors; however, as the operating systems become larger, and applications become larger, stability must be increasingly difficult to maintain or improve. So the question here is, is Microsoft Research doing anything to improve stability and reduce crashing, or to intercept potential crashes? There’s a related question, and I’ll just add it here, which is, how has the process of software development fundamentally changed, or has it fundamentally changed, and what will software development look like five to ten years from today?

Those are sort of related questions. I mean, certainly when I look back on being a graduate student, we were building systems that we thought were pretty large, and they occupied about 20 kilobytes of storage when they were all done. And we were sitting there saying, well, there’s certainly a limit to what people can do. Clearly we’ve blown through those limits, so somewhere along the line we did something to improve software development.

Dan, do you have any comments, or does anybody else want to jump in on this one?

MR. LING: One thing that I think has changed, and has made it easier for people to develop large artifacts, is the ability to reuse very large artifacts and build things on top of them. I think that’s been one of the key powers of the Windows platform: it’s a pretty large and complicated platform, and by providing it to other developers, people can assume all that functionality is available and build on top of it.

Another example that comes to mind is relational databases; that’s another very large component that’s reused over and over again by people as a base on which to build much more powerful things. So even though some of the programming technology may not have changed terribly much (we’ve got better editors, we’ve got faster machines, we don’t need to wait for compilations, but by and large programming hasn’t changed dramatically over the past, say, 20 years), we really have a lot more to build upon. I think that’s very important.

MR. MYHRVOLD: I was a development manager on Windows 2.0; that was the last unsuccessful version of Windows. Yes, you’re right, each one has been better. You should have seen 1.0. These days everyone acts like Windows is this huge dominant thing that has always been there, and believe me, there was a day when we were laughed at for the notion that anyone would want a graphical user interface.

And if you think about the machines we were trying to get it to run on, remember, Windows 2.0 didn’t assume a hard disk; you had to be able to use it on dual-floppy systems. We had a card that came in the box that showed you how you should lay out your piles of floppies on your desk so you could flop them in at the right time. The machines we were using then were 0.1 percent as powerful as the machines you have today, maybe not even that. Just an incredible range.

During this whole period from Windows 2.0 all the way up until recently, Bill Gates used to ask me — I remember 100,000 lines of code was a big thing, and then a million — “We’ll never be able to do a million lines of code.” And I said, oh yes, absolutely, ten years out. And the answer was never that there was some piece of magic. Object-oriented programming was a technique, and various kinds of programming tools have helped. So it’s a little bit like what Richard was saying with design tools and semiconductors. The kinds of things that are routinely done today are vastly larger than was even imaginable ten years ago. The combination of hardware and software has brought you there. But it’s still not good enough.

Not that Windows is a bad thing. But whatever the Windows development process is today, whatever the stability or the features are, it’s certain that two or three releases out they have to do a better job, and that’s going to require a ton of stuff. It’s been a priority for Microsoft Research from the beginning that we spend a lot of our time on programming tools and technologies, ways that we can make the systems less brittle and make our programmers more productive, because we have to meet that incredible challenge.

MR. RASHID: Ed?

MR. LAZOWSKA: So Fred Brooks is the godfather of modern software engineering. Fred has been at the University of North Carolina since the 1960s, and before that he ran the OS/360 project at IBM in the ’50s and early ’60s, and there’s this wonderful paper of his called “No Silver Bullet.” He was absolutely right, and this is just what Nathan and others have said: there is no silver bullet in software engineering. It’s a progression of tools (for example, Amitabh’s group at Microsoft has very intelligently focused on building great tools for a set of point problems that you need to solve if you’re going to develop reliable software) and of management techniques: how do you organize teams of people, and organize the communication between them?

Another great book that Fred wrote, in the early 1970s I guess, was called The Mythical Man-Month, the notion being that you really can’t evaluate the complexity of software by the number of, as we’d say today, person-months you put into it. Sometimes, as you hurl more and more people onto a project, the amount of time it takes starts to grow instead of decrease. There are just plain old management issues involved, and there is no silver bullet. There’s a lot of hard work: building tools, developing management styles.

MR. MYHRVOLD: And a key reason there is no silver bullet is that the problem changes. If, in fact, all we ever wrote was million-line-of-code software, it would be much easier. But the trouble is, the bar keeps changing; the functionality that people want keeps changing. All of these things keep growing. It’s unlike a car, where creating a great automobile has been, for the last 30 years, a relatively fixed target, with some changes in emission control, some changes in safety; the thing a car does now and the thing a car did 30 years ago are really very similar. It’s the same problem. Software is not the same problem at all.

What goes into server software for giant farms of servers that handle millions, or tens of millions, hundreds of millions of customers like Hotmail, that’s a problem that was unprecedented in the whole world 10 years ago. And so because the bar has to keep changing, you’re never going to get so far ahead of it that all of a sudden software is easy.

MR. RASHID: Ed, one more follow up?

MR. LAZOWSKA: One more quick comment about why software is hard. If you’re building an apartment building, the 11th floor has a lot in common with the 10th floor, which has a lot in common with the 9th floor. So in some sense you get a floor right, and then you just keep stacking them up. It’s not quite as easy as that, because if you just keep going up you start having trouble with the toilets and things like that. But to a first approximation, there’s a lot of replication.

Hardware is that way, too. Maybe it’s because we’ve figured out the design abstractions, but you design one bit slice of a register and then you replicate it 64 times. While we have discovered how to do reusable components in software, to a first approximation a 10-million-line piece of software is enormously more difficult than a 5-million-line piece of software. You don’t just double what you’ve already done, sit one on top of the other, and get that extra sophistication. There’s something about the engineering that either we haven’t figured out, or that’s inherently more difficult.

MR. RASHID: Let me move on to the next question. Again, there are a couple of related questions here, so I’ll read them together. The first part is, how would you characterize Microsoft Research in comparison to work being done in universities?

And the related question is, what’s better at a Microsoft Research lab than at the MIT Media Lab, or at other, older basic computer science research labs in the world, meaning better for researchers?

So, they’re sort of related questions. One is more a statement about the research, I think, and the other is about the environment for the researchers, but they are fairly related. Who wants to take this one up? Ya-Qin?

MR. ZHANG: On the one side, we are able to do a lot of fundamental, long-term research, which is really very similar to an academic environment. There are conferences you can go to, there’s impact you can make, and those things are very, very similar.

One of the things that I feel is very special, especially to me, is really the kind of impact you can have on millions of people directly through Microsoft’s products. People who come to work at Microsoft like to work on basic research and have some influence and impact on academia, and then, in the meantime, the research can be incorporated into products and go into everybody’s home. That is really something very special.

I think Microsoft, and especially Microsoft Research, is very unique in that aspect.

MR. MYHRVOLD: I’ll make one comment. That was always my pitch when we started Microsoft Research and we tried to hire people. At the time, in 1991, Microsoft wasn’t at all the company that it is today. Just as the machines were a tiny fraction of the power they are now, Microsoft was a tiny fraction of its current size, known mostly for DOS. Most researchers were used to being in an institution that had been around for 100 years, whether it was a lab like Bell Labs or an institution like a university. And so it was very hard for me to hire the first people at Microsoft. And the pitch that I made is exactly what he said. I said, look, if I want to hire you, you must be successful enough that you’re going to keep writing papers and doing research anyway, and you don’t need me to keep food on your table or any of that.

But if you want to change the world, if you want your ideas to go out and get the leverage of being on 100 million desktops, and change the way people live, change the way people work, and really have an impact, there’s no better place to be. And however many hundreds of people we have now kind of signed up on that basis.

MR. RASHID: Richard, from a university perspective?

MR. NEWTON: Well, the way we recruit people at Berkeley is, we tell them that if they really want to change the world, they really should come to Berkeley.

MR. RASHID: I see a common recruitment theme here.

MR. NEWTON: I think that’s my point. Let me start by saying, certainly from my perspective, and I know it’s true of all of us that are on the TAB [Microsoft Research Technical Advisory Board] and of the many other people that work with Microsoft Research, the quality of the research environment at Microsoft is as good as or better than any university research environment that I know of, for sure. So, it’s not a question of a difference in any measure along those lines.

There are many people in this audience that I know we’ve tried to recruit to Berkeley who have chosen to come to Microsoft Research, and vice versa; Microsoft Research has tried hard at various times to recruit faculty from our various universities. And, frankly, where the faculty end up is probably the right place for them, in terms of the choices that they make, whether they’re researchers at Microsoft or at universities.

But I think what’s important to me has been the openness of the exchange between these two different communities. Frankly, I can espouse the values of the academic community outside of Microsoft Research in a number of ways. There’s a different context that you have: at a large public university like the University of Washington or Berkeley, for example, the context of having all these other disciplines, the biologists, the social scientists, the other people around, creates a different environment. Not better, not worse, but certainly different.

I think the impact that universities have on the world in terms of their research is a different sort of impact. But think about some of the technologies that have been developed in universities: Berkeley UNIX, I’m afraid to say, from Berkeley; things like the RAID approach to disk arrays; RISC computing; many things that you’ve heard about. Even the IEEE floating-point format is credited to a university faculty member, positively or negatively depending on how you look at it.

So, I think the key point from my perspective is that we are all peers, and we truly are. One of the things that I have actually appreciated a lot about Microsoft Research, and I’m sort of the junior member on the advisory board here, is the contribution that Microsoft researchers actively make, and that management encourages, to the university research community. We were talking earlier at the break about some work that’s been going on helping DARPA at the Department of Defense think through how they should invest their money. That discussion was led by Microsoft researchers, with university faculty participating as well; Ed was involved. And it results in investments that benefit us all, and certainly it will benefit the research community by bringing awareness to these key, important technical problems.

So, from my perspective, it’s a two-way street. One of my students is currently on an internship at Microsoft, totally motivated by the environment, the products, the energy there, and the last conversation I had with this student was, when are you coming back to Berkeley to finish your Ph.D.? Obviously there’s always a tension there, but Microsoft staff frankly helped us with that, to make sure that what we do is ultimately the best thing for the students.

So, from my point of view, they are different environments, they’re complementary environments, and they’re certainly both very high quality environments.

MR. LAZOWSKA: Let me make two comments about this. One is, everybody is trying to have impact, and if you’re at a university, whether you’re a teacher or a researcher, your primary form of impact is the students you graduate, at all three levels. That’s the business that people like Rich and I are in, producing students; that’s our capital. And the reason we’re at universities is that we’re good at getting leverage through students. If you’re at a corporate research lab like Microsoft Research, then, while obviously you’re trying to push the field forward, your impact is, can you influence the practice? And at Microsoft you have a better chance of influencing the practice on a larger scale than anywhere else. My view is that Microsoft and the universities are all in the research business, and, as Rich said, we’re in the research business together as extensive collaborators. For Microsoft Research, the multiplicative factor on the direct impact that people feel from their research is 100 million desktops. The multiplicative factor for me is the students I’ve worked with who are out in the workforce, or at universities as faculty members or researchers.

The other comment I wanted to make, and this is a bit off subject, but not too badly, I think, is about the uniqueness of Microsoft Research compared to what I’ve seen in other companies over the past few decades. There was a time 30 years ago when an enormous percentage of the U.S. gross domestic product in computing was IBM and AT&T and Xerox, and each of those companies devoted a certain amount of effort to fundamental research — things that looked out more than 18 months or so.

It’s important to realize that most corporate R&D is D. It’s product engineering, and that’s entirely appropriate. Now, what’s happened over the past 30 years is that the information technology pie has become enormously bigger. There are hundreds, thousands, of companies with enormous market capitalizations, and of all of those emerging companies, only Microsoft, to a first approximation, has invested in a serious way in things more than 18 months out. That’s because of Nathan and Bill’s vision in 1991, when Microsoft was, in some sense, only a billion-dollar-a-year company. There are lots of billion-dollar-a-year companies that are not investing at all in anything that looks out more than 6, 12, or 18 months.

Dell does plenty of R&D, but it’s product engineering. Compaq, product engineering. Oracle, product engineering. Cisco, you know, my quote about Cisco is that their R&D is M&A, mergers and acquisitions. That’s what they do. Cisco is in the business of acquiring companies. If they need to get into wireless, they buy somebody. If they need to get into broadband, they buy somebody. If they need to get into DSL, they buy somebody. Microsoft is almost unique among modern companies in choosing to invest in pushing the forefront of the field. Other companies have the same financial resources that Microsoft had when it made this decision, but they haven’t done it. Microsoft deserves an enormous amount of credit for choosing in 1990 or ’91 to make this investment, and then following through on it.

MR. BISHOP: I actually made the transition from the academic world to Microsoft Research back in July ’97. I had been a research professor in computer science for about five years, and I had done quite a bit of consulting for industry, for industrial research labs. Really just by a happy coincidence, I went to Cambridge to run a program at the Isaac Newton Institute in Cambridge, arriving in July ’97, which was the same time that Microsoft showed up. Roger Needham, the managing director of the lab, and, in fact, David Heckerman turned up one day and said, we’re setting up this new research lab in Cambridge funded by Microsoft, would you like to come and join us?

My first reaction was, I love being an academic; I’m not sure I really want to go and work in an industrial research lab. I enjoy consulting for industry, I enjoy the interactions with industry, but an industrial research lab — I wasn’t really too sure. So I asked lots of questions about the things I was concerned about, and one was academic freedom. As an academic there’s tremendous freedom; I can do what I like. I sometimes characterize academic freedom, a little bit tongue in cheek, like this: I teach on Monday, on Tuesday I write a grant proposal, Wednesday and Thursday I teach a bit more, and so on, and Sunday I’m completely free to do as I please. But as an academic, you do have this tremendous freedom. And my characterization of most industrial labs was that there’s a lot of top-down management, you can’t object, you’re told when to deliver, what to deliver, and so on.

I learned that Microsoft Research really isn’t like that. Microsoft Research management recognizes that if you’ve just hired the smartest person in the world in computer vision, then surely that person must know better than management what the right things are to do research on. So you really should give that person the freedom, and you should also give that person the opportunity and the resources: resources in the sense of time, not hassling them with a lot of unnecessary bureaucracy, and resources also in the sense of being able to go out and hire coworkers and put together critical-mass teams of very, very smart people.

And then really just letting people get on with it. I think that’s one of the reasons why Microsoft Research has been so successful. Nathan and others created that philosophy right at the beginning and then really stuck with it. I think they really haven’t changed it in the last 10 years, and I hope they won’t change it in the next 10 or 20 years either, because I think it really is a very powerful model. It is very different from most industrial so-called research labs.

I mean, one of the things that characterizes that distinction, I think, is that individual researchers get to make the choice about whether to publish their research. A researcher doesn’t go to their manager and get something signed off to say, yes, this can be published; it’s the individual researcher’s decision. And, again, that reflects the empowerment of researchers and the confidence you have in them: if somebody created this new science, created this new technology, they understand better than anybody else whether it should be patented, whether it should be published, or perhaps both, or what exactly should be done with it.

I think the other point, which has been made several times already, but really is so exciting it’s worth emphasizing again, is that as an academic, if I published a paper, you know, ten of my colleagues, 20 of my colleagues, would email me and say, “Hey, Chris, that was a really neat algorithm, and a really neat paper.” I would get a real buzz from that. That’s very exciting. But now I have the opportunity not just to have 10 or 20 people say, “That was a nice piece of research,” but to have 100 million people use my research on a daily basis. And that’s immensely exciting as a researcher. For me, that’s just one of those powerful magnets for working at Microsoft.

I say this from the heart, not because I’m here to say it; I really feel and believe this. As a research scientist, I can’t think of anywhere in the world I’d rather work than Microsoft Research, because it combines the full spectrum from basic research to applications. There’s freedom, there’s resource, there’s opportunity, and we’ve really hired some very, very smart people.

I continue my academic links: I have a research chair at the University of Edinburgh, I’m a fellow of one of the colleges in Cambridge, I have Ph.D. students, and I even do a little bit of teaching in computer science courses in Cambridge. So I’ve sort of kept a foot in both camps. But really, the environment of working within Microsoft Research is, I think, unparalleled, unprecedented. It really is just an ideal environment in which to do great research. I sometimes say the only disadvantage of being a researcher at Microsoft is that the one thing we don’t have is an excuse for not succeeding.

MR. RASHID: Okay. I think we’ve got time for one more question. And I do apologize in advance; there are some great questions that we got that we won’t be able to get to because of time. This one we’ll have to keep short. I want to give everybody a chance to answer, but you’re going to have about a minute and a half each if we’re going to stay on schedule, so we’ll try to go through it quickly. There are two variations of this question, and you can decide which one you want to answer. One is, looking forward, what is the computing breakthrough you’d most like to see achieved, and when do you think it will happen? The related question is, what will a computer look like ten years from now? It’s no fair just saying it’s Moore’s Law, it’s going to be 200,000 gigahertz, and ten terabytes of disk, and a gigabyte of RAM; you’ve got to be more innovative than that if you’re going to answer that part of it. So let’s just go through and get a quick view of what you think the future is.

Dan, do you want to start out, we’ll just go down the line?

MR. LING: One of the interesting things to think about in terms of what a computer will look like is that this notion of the artifact, the box of the computer, is going to disappear. Today you have an entity that you think of as the computer, that collects within it the processor, the memory, the disk drive, the keyboard, the mouse, and the display, and you think, that’s my computer. That’s going to change dramatically. As people think about putting what is essentially a network fabric into the core of the computer and building around that, what you get is a disaggregation. In other words, different portions of what you think of as the logical computer can be located in very different places. The processor might be behind the wall, the display you’re working with might be on your desktop or on another wall, you may be speaking to a microphone that’s mounted somewhere, and so on.

The way you interact with the computer could be very different. But more important than that, it’s no longer a single artifact that you can carry around with you. You’re already starting to see a corollary to that, in the sense that we have lots of devices today that work with the main computer. The device might be your cell phone, it might be your PDA, it might be your laptop, and sometimes you’re hooked up and communicating to a server.

That’s the very beginning of it, where all these devices need to work together to provide the overall service to you, the customer. I think that will just get carried to the next step, where the computer itself will get disaggregated, and you will be interacting with various elements of the computer, which are now located behind the wall, invisible.

MR. RASHID: Chris.

MR. BISHOP: One of the very exciting technologies for me is machine learning. One of the great capabilities of the human brain is to learn in the context of new environments, in new situations. Right now we’re quite good, I think, at building machine learning algorithms for a particular problem, like recognizing handwriting, or translating speech into text, or whatever. But somehow the human brain has an extra capability, which has something to do with more unstructured data, more the case of unsupervised learning: just present some data, and the human can sort of make sense of it and extract meaning and interesting things from that data without very much guidance, or even without any guidance. And that’s something that we don’t yet really understand how to do in the context of machine learning.

I have this notion that some time in the next ten years we’re going to figure out how to do that. We’re going to figure out how to do machine learning in this much more unsupervised fashion that’s much more like the way that the human brain can tackle new problems, and can learn about new skills and new environments. And maybe if we can do that, if we can figure out how to cross that barrier, we get into some kind of positive feedback loop. We could increase the power of machines by just having them learn in a fairly unstructured way, and a fairly hands off way. It’s kind of a dream, but it’s something that in principle could happen over the next ten years or so.
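For a toy illustration of what learning without labels means at its very simplest, here is a k-means clustering sketch that finds groups in unlabeled data; the data and the choice of two clusters are invented for the example, and this is of course nowhere near the general capability being described:

```python
# Tiny unsupervised-learning sketch: k-means finds groups in unlabeled
# 1-D data, with no examples of "correct" answers ever provided.
import random

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

random.seed(1)
data = [1.0, 1.2, 0.8, 1.1, 9.8, 10.2, 10.0, 9.9]
print(kmeans(data, k=2))  # two group centers near 1.0 and 10.0, found unaided
```

The structure here is discovered rather than programmed in, which is a narrow version of the label-free learning being described; the open question raised above is how to get the same effect on arbitrary, messy experience.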

MR. RASHID: Nathan?

MR. MYHRVOLD: The breakthrough I’d most like to see is understanding the computational architecture of cognition. In a way that goes to what you were saying, but it’s even broader: all of the things that we call intelligence, which exist both in us and, to a much lesser degree, in all kinds of other animals. What is that architecture of cognition? That’s the breakthrough I’d like to see most.

In terms of what a computer will look like ten years from now, I agree a little bit with Dan that there will be network fabric built in, and boxes may disappear a little bit. But most of the attributes of the computers that you have today are going to stay, I think, pretty much the same for the thing you think of as a computer, because your eyes are a certain way and your arms are a certain length. You can’t put the display too far away, or you can’t read the darn thing; it can’t be too big, or you can’t carry it. Desktop computers and laptop computers will change a lot, just as they have in the last ten years, but I think they will still be recognizable for what they are, much like a car would be even over a 20-year period. Cars are dictated by the size of our bodies.

What’s more exciting is the penetration of computing into almost everything. I mean, today your car is a capable computer, and your cell phone is really a computer, and your BlackBerry, and your Palm Pilot, and your ten other things. We’re going to see a proliferation of environments where most of the computing that you do isn’t with something that is specifically a computer; it’s with a whole variety of smart appliances.

MR. LAZOWSKA: Taking off from where Nathan left off, I’m going to answer a different question than the one you asked. I think there are going to be enormous breakthroughs on the interface between computing and biology, and on the interface between computing and the learning sciences, and I think they’re related. On the interface between computing and biology: remember, what Watson and Crick discovered is that the genome is a base-four digital code. In some sense that’s an area we’ve mined a lot over the past ten years, but there’s a lot we can learn about what Nathan described as the architecture of cognition, about how Mother Nature computes, which, again as Nathan described, is an entirely different architecture, an entirely different approach to computing than what we’re fabricating today. We’re beginning to make bits of progress, by coupling computer scientists with biological scientists, in understanding how Mother Nature actually computes, and that’s feeding back into future computer architectures. This is stuff that’s at least 20 years off, but there’s the potential for a real revolution there. The interface between computing and biology is really rich.

In terms of the learning sciences, people are beginning to understand much better how we learn, how we use our senses, what parts of our brain are active in different activities. You could imagine, and this is an area where Eric Horvitz at Microsoft Research has been very active, breakthroughs in the learning sciences that actually help us design computers that enhance cognition, that augment cognition in a serious way. Today our user interfaces aren’t designed with any real knowledge of how users learn, how users remember. Remember that human attention and human brain power are fixed, while everything else is growing at an enormous rate. So the question is, how do you design computer systems, taking advantage of the way people learn, in a way that preserves human attention? If you carry this far enough, you could actually imagine computers becoming a serious learning device, which they certainly aren’t today.

MR. RASHID: Richard.

MR. NEWTON: I’m going to assume all of that happens, and it may not be ten years from now, it may be longer, but I’m going to take a different, less grandiose one, building on that idea I mentioned earlier about ultra-low-cost computing. It’s not anymore, I don’t think, I hope not, about what we build; it’s about what we do with it that should be where we focus our attention. And to me the challenge and the opportunity is to make sure everybody on this planet can be connected in some interesting way to take advantage of this information technology that we’re building. So to me, ultra-low-cost electronics that we can essentially give away to people, that is incredibly reliable, that’s easy to use, and that provides the value of all the technologies we’ve just heard about, delivered throughout the planet, should ultimately be our goal.

Dan mentioned that we can build MEMS (microelectromechanical systems) devices, these micro-machines, and how they can be applied in sensing and actuating. Well, if you want to take an extreme point in that same dimension, with 0.1-micron-or-below technology we can build chips that are smaller than a grain of sand, that can compute, and that can communicate with one another over short distances using RF. In fact, the only challenge we have left is how we power them. But if they’re that small, they can be self-powering: they can use MEMS devices to scavenge energy from the environment, from vibration, heat, and light, and power themselves.

Then we can take literally millions of these grains of sand that are actually computing elements, pour them into a special bucket of paint, and paint them on the wall. And what will happen is, once the paint dries, or even before, they’ll wake up, communicate with one another, and form an ad hoc network, and we’ll have a living fabric within our infrastructure. We can paint them on bridges, and they can measure stress and strain; we can paint them inside buildings, and they can measure humidity, heat, and so forth. Totally disposable electronics, ultra low cost, with an entirely different computing paradigm. It’s taking the appliance model but reducing it to millions in a room, as distinct from a few. That, I think, is a great opportunity for us all.
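A toy sketch of the ad hoc network formation described here: randomly scattered nodes that can reach only neighbors within a short radio range, with a simple flood to see how much of the fabric connects. The node count, radio range, and positions are all invented for the illustration:

```python
# Toy model of the "smart paint" idea: randomly scattered nodes that can
# only reach neighbors within a short radio range self-organize into an
# ad hoc network; a breadth-first flood checks how much of it connects.
import random
from collections import deque

random.seed(42)
N, RADIO_RANGE = 200, 0.12   # invented: 200 nodes, range 0.12 in a unit square
nodes = [(random.random(), random.random()) for _ in range(N)]

def neighbors(i):
    xi, yi = nodes[i]
    return [j for j, (xj, yj) in enumerate(nodes)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIO_RANGE ** 2]

# flood outward from node 0, the way a wake-up broadcast would propagate
reached, queue = {0}, deque([0])
while queue:
    for j in neighbors(queue.popleft()):
        if j not in reached:
            reached.add(j)
            queue.append(j)

print(f"{len(reached)} of {N} nodes joined the ad hoc network")
```

The interesting engineering questions, which the sketch ignores, are exactly the ones raised above: power, failure rates, and how to program a fabric whose individual elements break all the time.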

MR. RASHID: Ya-Qin, you get the last word.

MR. ZHANG: They’ve actually said it all, so I’m just going to summarize the points. I think computing, especially personal computing, is moving from primarily a productivity tool to a combination of personal computing, personal communication, and personal storage. The future of computing is becoming more intelligent, more network-connected, and also more personalized. And the advances in nanotechnology, biology, user interfaces, and artificial intelligence are going to completely and profoundly change the future landscape of computing.

Let me cite one of Bill Gates’ sentences; I think it’s from a celebration of the 20th anniversary of the PC: if the PC in the last 20 years has been amazing, in the next 20 years it’s going to be astounding. Thanks.

MR. RASHID: Okay. With that, our panel is done. I want to thank the panelists for all the time that you’ve put in, and to thank everybody who provided questions, including those who provided ones that we weren’t able to get to.
