Microsoft Research: Natural Language Processing Hits High Gear

SEATTLE, May 3, 2000 — For decades people have eagerly anticipated the day when computers can understand human language. Even before the Enterprise crew in Star Trek used the Universal Translator to communicate instantaneously with extraterrestrials, scientists were proclaiming that machine translation — the ability of a machine to convert one language to another — was just around the corner. Now, several generations later, the predictions are the same: President Clinton promised in his State of the Union address last January that researchers will soon deliver devices that can translate languages “as fast as we can speak.” But it appears the technology, at least the kind that can deliver accurate, instantaneous translation, is still…well, just around the corner.

Why is it taking so long? According to many experts, programming computers so that they can process human language is not an easy goal to attain. Long after machines have proven capable of inverting large matrices with speed and grace, they still have not mastered the basics of spoken and written languages. This is because understanding language means, among other things, knowing what concepts a word or phrase represents and knowing how to link those concepts together in a meaningful way. While natural language may be the easiest symbol system for people to learn and use, it has proved to be the hardest for a computer to master.

“It seems so easy to have a computer process language, and yet it isn’t if you consider the machine has no goals, can’t see the context, and doesn’t know why you’re saying what you’re saying,” said Lucy Vanderwende, a computational linguist for the Microsoft Research Natural Language Processing Group, a 30-plus member team of researchers focused on designing and building a computer system that will analyze, understand and generate languages that people use naturally. “Every sentence presents a new challenge to the machine, so our challenge is to get it to recognize these fine distinctions in meaning that we humans can make.”

Despite the challenges, natural language processing, or NLP, is widely regarded as a promising and critically important endeavor in the field of computer research. The general goal for most computational linguists is to imbue the computer with the ability to understand and generate natural language so that eventually people can address their computers through text as though they were addressing another person. The applications that will be possible when NLP capabilities are fully realized are impressive: computers would be able to process natural language, translating languages accurately and in real time, or extracting and summarizing information from a variety of data sources, depending on the users’ requests.

For example, imagine you needed to correspond with a non-English-speaking colleague in Japan and that part of your message had to include data that is available only in Spanish. If NLP researchers succeed in their mission, someday you will be able to query that database in natural language, asking, “What were the profits for the Spanish division of the company last year?” The computer would analyze your query, retrieve and summarize the relevant data, and provide you the answer in English. Then you could compose your email message in English, and your computer would instantly translate it to Japanese before sending it on to your colleague.

While this level of sophisticated language analysis is not yet a reality, researchers have gained a lot of ground in the past several years, advancing the technologies that will make it possible. The Natural Language Processing Group at Microsoft Research is widely recognized as one of the most distinctive projects in the field of NLP research, because it is focused on not just one of these applications but on developing a comprehensive NLP system that can support all of them, as well as ones that have not yet been imagined.

Microsoft Research is demonstrating a portion of the system it is designing this week at the Language Technology Joint Conference in Seattle, an annual gathering of the two largest NLP professional organizations in the country: the Association for Computational Linguistics and the Applied Natural Language Processing Association. Attendees will be able to type into a computer a random sentence of their choice in one of six languages and watch the system parse, or diagram, that sentence. Parsing is the base analysis that makes it possible for the computer to perform such activities as information retrieval, translation and dialogue using natural language.
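To make the idea of parsing concrete, here is a rough sketch, in Python, of what a parse might look like as a data structure a program can walk. The tree, category labels and printing routine are invented for illustration; they are not Microsoft’s parser or its output format.

```python
# A minimal sketch of a parse, or sentence diagram, as a data structure.
# The tree below is hand-built for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str                                   # syntactic category, e.g. "NP", "VP"
    children: List["Node"] = field(default_factory=list)
    word: str = ""                               # surface word, for leaf nodes only

def show(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation, one constituent per line."""
    print("  " * depth + f"{node.label} {node.word}".rstrip())
    for child in node.children:
        show(child, depth + 1)

# Hand-built parse of "The system parses random sentences."
tree = Node("S", [
    Node("NP", [Node("DET", word="The"), Node("NOUN", word="system")]),
    Node("VP", [
        Node("VERB", word="parses"),
        Node("NP", [Node("ADJ", word="random"), Node("NOUN", word="sentences")]),
    ]),
])

show(tree)
```

Printed with indentation, the nested structure is essentially the sentence diagram a student might once have drawn on a blackboard.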

“One of our goals is to get our system to accurately parse the kind of random sentences you might see in email, the kind that would have horrified your high school teacher,” said Bill Dolan, a researcher who’s been working on NLP technologies at Microsoft Research since 1992. “Many NLP applications simply reject input that isn’t what a language teacher would consider grammatical. By allowing people at the conference to type in any sentence they want, we hope to demonstrate the breadth of coverage we’ve been building for multiple languages in our system.”

Microsoft is heavily involved in the Language Technology Joint Conference this year. The company is a major sponsor of the event, several researchers from the NLP group will be presenting their work, and Rick Rashid, senior vice president of Microsoft Research, will be one of the keynote speakers.

“Natural language processing is one of the key technologies of the future,” said Rashid. “Microsoft recognizes this and is investing a significant level of long-term support to advance the field.”

Long-Term Funding and Unique Approach Set NLP Group Apart

Founded in 1991 as one of the original three Microsoft Research (MSR) teams, the Natural Language Processing group is one of the oldest research groups at Microsoft and, counting its productization arm, now numbers more than 130 people. The research team itself has grown from a small group of computational linguists to more than 30 as the scope of research has broadened to include the Chinese, French, German, Japanese and Spanish languages. The productization group, created in 1997 by the NLP group, has more than 100 members focused on applying emerging NLP technologies to Microsoft products.

Unlike speech recognition, which is focused on getting computers to convert acoustic input into words, natural language processing involves getting computers to analyze the meaning of text. To do this, researchers must decide what the internal representation of text input, or a string of words, should look like, and then create algorithms that map a word string into a representation the machine can manipulate in some useful way.
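As a toy illustration of that mapping, and nothing more, the sketch below turns a word string into a list of attribute records a program can manipulate. The lexicon entries and attribute names are invented for this example and are not drawn from MSR’s system.

```python
# A toy mapping from a raw word string to an internal representation:
# each word becomes a record of attributes the program can inspect.

LEXICON = {
    "the":     {"pos": "determiner"},
    "crew":    {"pos": "noun", "number": "singular"},
    "speaks":  {"pos": "verb", "tense": "present"},
    "quickly": {"pos": "adverb"},
}

def analyze(sentence: str) -> list[dict]:
    """Turn a word string into a list of attribute records."""
    records = []
    for word in sentence.lower().rstrip(".").split():
        entry = LEXICON.get(word, {"pos": "unknown"})
        records.append({"word": word, **entry})
    return records

for record in analyze("The crew speaks quickly."):
    print(record)
```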

According to Dolan, the main challenge of this task stems from the highly ambiguous nature of language. The meaning of a word string like “flying planes can be dangerous” may seem simple enough, but to a software program the word “can” could be interpreted as a noun or a verb, and the word “plane” could refer to an airplane, a geometric object or a woodworking tool, depending on its context.
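A small enumeration makes the combinatorics of that ambiguity visible. In the sketch below, the part-of-speech inventory for each word is illustrative only; a real analyzer would consult far richer evidence to choose among the readings.

```python
# Why "flying planes can be dangerous" is hard: several words admit more
# than one part of speech, so an analyzer must weigh every combination.

from itertools import product

POS_TAGS = {
    "flying":    ["adjective", "gerund"],
    "planes":    ["noun"],           # and the noun itself is polysemous:
                                     # aircraft, geometric plane, wood plane
    "can":       ["modal-verb", "noun", "verb"],
    "be":        ["verb"],
    "dangerous": ["adjective"],
}

sentence = "flying planes can be dangerous".split()
readings = list(product(*(POS_TAGS[w] for w in sentence)))

print(f"{len(readings)} part-of-speech assignments to consider:")
for reading in readings:
    print("  " + ", ".join(f"{w}/{t}" for w, t in zip(sentence, reading)))
```

Even this five-word sentence yields six tag sequences before word senses are considered; real sentences multiply such choices at every position.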

Because the NLP group’s charter is to build an NLP system that can support any NLP-related application, whether it’s machine translation, information retrieval and summarization, or natural language dialogue, the project may not reach fruition for some time. One of the things that distinguishes the Natural Language Processing group from nearly every other NLP research group across the country, however, is that funding for the project is viewed by Microsoft as a long-term commitment.

“It’s really wonderful to work in this type of environment,” said Vanderwende, who was one of the original researchers hired by Microsoft in 1991. “Microsoft has always backed us 100 percent, giving us the freedom to focus on the long-term goal, all the while gambling to see if we can make a difference in the field of NLP research.”

Another unique characteristic of the NLP group at Microsoft is that its overall mission is so broad: to create a “built from the ground up” NLP system. Most other NLP groups, particularly those in academia, are populated by researchers working on their own shorter-term projects that are independent of one another. Other groups, particularly corporate research labs, are trying to pull together various disparate systems, such as English-to-Japanese and Turkish-to-English translation, which is very difficult to do because NLP technologies are so diverse. At Microsoft, one researcher might focus on word-level analysis, another might focus on parsing, and yet another will focus on building a semantic network of meanings. While each activity is separate, they represent different levels of analysis for the same system.

“This type of model is very powerful because it allows us to make some very long-term progress. Researchers don’t often get the chance to spend five years working on one system like this,” said Hisami Suzuki, an NLP researcher at Microsoft working on the Japanese language component of the NLP system.

Another benefit of having such a broad focus is that researchers can capitalize on the work done by others in the group. For example, debugging tools that are developed for the system work across all languages. This is because the code used to produce logical forms, which are simplified graph structures that describe the semantic relationships among words in a sentence, is the same for each language. So researchers working on different languages produce parses, or sentence diagrams, for a specific language, but a single piece of code maps the parses across languages into a single logical form. By sharing code, researchers can make progress much more quickly.
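The shared-logical-form idea can be sketched as follows: each language has its own parser, but a single function maps any parse into one graph of semantic relations. The relation labels (Dsub, Dobj), the flattened parse format and the word choices below are assumptions for illustration, not the group’s actual code.

```python
# One mapping function, many languages: language-specific parses are
# reduced to a single shared graph of (head, relation, dependent) triples.

LogicalForm = set[tuple[str, str, str]]

def to_logical_form(parse: dict) -> LogicalForm:
    """Map a language-specific parse into the shared graph structure.

    This single function serves every language; only the parsers differ.
    """
    lf: LogicalForm = set()
    lf.add((parse["verb"], "Dsub", parse["subject"]))   # deep subject
    lf.add((parse["verb"], "Dobj", parse["object"]))    # deep object
    return lf

# Two toy parses of translationally equivalent sentences, one per language.
english_parse = {"subject": "dog",   "verb": "eat",   "object": "bone"}
spanish_parse = {"subject": "perro", "verb": "comer", "object": "hueso"}

print(to_logical_form(english_parse))
print(to_logical_form(spanish_parse))
```

Because both parses land in the same structure, a debugging tool written against the logical form works for every language at once.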

In the case of Suzuki’s work, which involves developing parsing for Japanese, she can build on the base parsing technology developed for English rather than bringing in additional computer scientists to accomplish the same task twice.

The Near-Term Benefits of Long-Term Research

While the NLP group is in the fortunate position of having long-term support from Microsoft, incorporating increasingly sophisticated NLP technologies into Microsoft products along the way is also a top priority. The productization group works with the NLP group in Microsoft Research to ensure the technologies used to develop the system are leveraged and applied to real-world applications.

Text critiquing, information retrieval and database query functions already have been applied to Microsoft products. The grammar-analyzing function, one of the first levels of understanding achieved by the NLP system, has been part of Microsoft Word since 1997. It replaced a grammar checker Microsoft had licensed from another company, which saved Microsoft millions of dollars in royalties, according to Dolan. The NLP group also provided the search engine for Microsoft Encarta in 1999, resulting in more sophisticated search capabilities that allow users to type queries in natural language and get better results than they would with Boolean keyword searches. As the NLP system matures, the sophistication of each of these applications will increase. The next near-term applications of NLP technology are expected to be the addition of grammar checkers and information retrieval for other languages.

Currently the NLP group is heavily focused on its database of logical forms, called MindNet, and the creation of a machine translation application.

MindNet represents an area of research called example-based processing, in which a computer processes input based on something it has encountered before. The MindNet database is created by storing and weighting the semantic graphs produced during the analysis of a document or collection of documents. The NLP system uses this database to find links in meaning between words within a single language or across languages. These stored relationships among words give the system a basis for “understanding,” allowing it to respond to natural language input. Built from an ever-increasing number of logical forms, MindNet contains the contents of several dictionaries and the Microsoft Encarta Encyclopedia to enrich its level of understanding. It currently recognizes approximately 25 types of relations (e.g., subject, location, time) linking different kinds of words, and the NLP group believes it eventually will revolutionize interaction with computers and enhance machine translation.
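In that spirit, here is a toy example-based store: weighted relation triples accumulated from analyzed text and queryable later. The relation names, weights and API below are invented for this sketch; MindNet itself is far larger and richer.

```python
# A toy example-based store: semantic relations mined from analyzed text
# are saved with weights, so the system can later look up what it has
# "seen" about a word. Relation names and weights are illustrative only.

from collections import defaultdict

class RelationStore:
    """Store (head, relation, dependent) triples with accumulated weights."""

    def __init__(self):
        self.weights = defaultdict(float)

    def add(self, head: str, relation: str, dependent: str, weight: float = 1.0):
        self.weights[(head, relation, dependent)] += weight

    def related(self, word: str) -> dict:
        """Return every stored relation that mentions the word."""
        return {triple: w for triple, w in self.weights.items() if word in triple}

store = RelationStore()
# Triples that might be extracted from dictionary definitions of "pen".
store.add("pen", "Hypernym", "instrument")
store.add("pen", "Purpose", "write", weight=2.0)   # attested twice, so heavier
store.add("write", "Means", "ink")

print(store.related("pen"))
```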

“The ultimate goal for MindNet is that it will simplify computing to the extent that users won’t have to bother with mouse clicks, cursors, menu structures and file names,” said Dolan. “They won’t need to know DOS or be familiar with graphical user interfaces. They simply will be able to type in what they want done, and the computer will do it for them.”

The NLP group is also heavily focused on machine translation because it pulls together all the work it has been doing on different languages, and because high-quality automated translation is a top priority for Microsoft. Because it is a global company, Microsoft places high value on any technology that can help non-English speakers use computers more effectively. The work being done by the NLP group to leverage NLP technologies across all languages is expected to open up huge possibilities for these users and help them be more efficient with their work. The group is currently working on five other languages besides English.

According to Suzuki, a lot of the current effort in machine translation is focused on making sure the language-analysis systems being built across languages are coordinated so that they can communicate with one another, an essential element in accurate machine translation.

“This is in some ways the most exciting work going on in the group right now,” said Suzuki. “Trying to make sure we produce representations that work across all the languages helps ensure our system will be appropriate for all languages and not biased toward any language pair, such as English and French or Japanese and English.”
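The pivot idea Suzuki describes can be caricatured in a few lines: analyze input into a language-neutral representation, then generate from that representation into any target language. The concept identifiers and word lists below are invented, and real generation must also handle word order and morphology, which this sketch ignores.

```python
# Translation through a shared, language-neutral pivot: analysis maps
# surface words to concepts; generation realizes concepts in any language.
# Vocabulary is invented; word order and morphology are ignored.

REALIZATIONS = {
    "DOG":  {"en": "dog",  "fr": "chien", "ja": "inu"},
    "EAT":  {"en": "eats", "fr": "mange", "ja": "taberu"},
    "BONE": {"en": "bone", "fr": "os",    "ja": "hone"},
}
ANALYSIS = {word: concept
            for concept, forms in REALIZATIONS.items()
            for word in forms.values()}

def analyze(words: list[str]) -> list[str]:
    """Map surface words to the shared, language-neutral concepts."""
    return [ANALYSIS[w] for w in words]

def generate(concepts: list[str], lang: str) -> str:
    """Realize the shared concepts in any target language."""
    return " ".join(REALIZATIONS[c][lang] for c in concepts)

pivot = analyze(["dog", "eats", "bone"])   # English in
print(generate(pivot, "fr"))               # chien mange os
print(generate(pivot, "ja"))               # inu taberu hone
```

Because every language analyzes into and generates from the same pivot, adding a new language means writing one analyzer and one generator rather than a separate system for every language pair.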

Limitless Possibilities

For now, the group is content to focus on MindNet and machine translation, but as the level of understanding of the NLP system matures, many more real-world applications will be possible.

“If we can pull this off, and I think we can, it will mean that dealing with the computer will become a much more natural process than it has been,” said Dolan. “It will create many more possibilities, other applications that we can’t even envision right now.”
