Can’t Humans and Computers Just Get Along? Microsoft Research Is Trying to Make Sure They Do

SAN JOSE, Calif., April 30, 2007 – Using a finger instead of a stylus on a small touch-screen device. Making Web searches for images more intuitive. Teaching computer skills to non-literate populations. Using a mouse in mid-air as a pointing device.

These innovative technology pursuits are among hundreds of research projects being presented this week at the international Computer/Human Interaction (CHI) 2007 Conference, which goes into full swing today at the San Jose McEnery Convention Center. They also represent a small sample of the work that Microsoft Researchers are doing to enrich people’s lives by making technology more user-friendly.

Microsoft Researcher Patrick Baudisch, for example, believes that the best technologies are the ones we don’t even notice. By Baudisch’s way of thinking, a technology that’s simple, intuitive and unobtrusive is a technology that people will use liberally to enhance their everyday lives. Some of the latest projects that Baudisch has been working on fit this description, including Shift, an intriguing new technology that lets a person accurately operate a stylus-based device, such as a personal digital assistant (PDA) or ultra-mobile PC (UMPC), with his or her fingers.


This week, Shift technology is getting noticed, and in a big way. A research paper discussing Shift won a Best Paper Award at CHI 2007. Titled “Shift: A Technique for Operating Pen-Based Interfaces Using Touch,” the paper was written by Baudisch, a research scientist in human-computer interaction (HCI) in the Adaptive Systems and Interaction Research Group at Microsoft Research, in collaboration with University of Toronto researcher Daniel Vogel during Vogel’s internship at Microsoft Research.

The Shift paper is one of three Microsoft Research submissions being honored at CHI, which is celebrating its 25th year as a forum for the exchange of ideas and information about the HCI discipline. A second paper, titled “What Are You Looking For? An Eye-Tracking Study of Information Usage in Web Search,” earned a “Best of CHI” Honorable Mention for Microsoft Researcher Ed Cutrell, who wrote it in collaboration with Zhiwei Guan of the University of Washington. In addition, a paper titled “Do Life-Logging Technologies Support Memory for the Past? An Experimental Study Using SenseCam” won a “Best of CHI” Honorable Mention for a group of researchers from Microsoft’s Cambridge lab in the United Kingdom, including Abigail Sellen, Andrew Fogg, Steve Hodges, Carsten Rother and Ken Wood, working with Mike Aitken of the University of Cambridge.

The three award-winning papers are among 19 Microsoft contributions to the CHI 2007 Conference, one of the largest totals of any corporate participant. The count encompasses papers and notes spanning an array of research areas, including mobile devices, search, inking, emerging markets and pen-based interfaces. The common thread running through this body of work is a focus on developing user-friendly technology that helps enrich people’s lives.

“We’re extremely proud of our participation at CHI this year,” says Rick Rashid, senior vice president of Microsoft Research. “The impressive number of Microsoft papers accepted for publication, the accolades that our researchers have earned for their work in the HCI space, and the time we commit to chair technical sessions and serve on panel discussions all reflect our desire to improve the computing experience for consumers around the globe.”

Microsoft’s presence at CHI 2007 also includes a booth in the McEnery Convention Center Exhibit Hall and a Wednesday evening reception where recently published HCI resource books by Microsoft authors will be on display. New titles include:

  • “Sketching User Experiences: Getting the Design Right and the Right Design,” by Bill Buxton, a principal researcher at Microsoft Research

  • “Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments,” edited by Victor Kaptelinin, a professor in the Department of Informatics at Umeå University, Sweden, and Mary Czerwinski, a research area manager of the Human-Centered Computing groups and manager of the Visualization and Interaction Research Group at Microsoft Research

  • “The Persona Lifecycle: Keeping People in Mind Throughout Product Design” by John Pruitt, user research manager for the Tablet and Mobile PC Division at Microsoft, and Tamara Adlin, the founder of Seattle consulting firm Adlin Inc.

This year’s CHI experience is the latest manifestation of the active role that Microsoft Research has played in the HCI community for 10 years. Microsoft’s contributions have included everything from reviewing, publishing and presenting papers to collaborating with academia to volunteering on committees and serving on boards. Topping this year’s list are the 19 accepted CHI papers and notes, which reflect a breadth and depth of research into technologies that promise to transform the way people use and interact with computers at home and at work.

“One of the most important reasons people attend the annual CHI conference is to learn something new,” notes Mary Beth Rosson, CHI 2007 Conference co-chair. “CHI 2007 will exceed attendees’ expectations for important perspectives to consider when designing future technologies. Microsoft, along with many other industry leaders and academia, has contributed to the growth of the annual CHI conference for many years.” 

Research Aims to “Reach Beyond”

The theme of CHI 2007, “Reach Beyond,” aligns well with the work being done by Microsoft Researchers. Baudisch, for example, thinks beyond the concept of a PDA as a stylus-based device. He’s observed that many people operate a PDA with their hands instead, especially for short and intermittent interactions. The problem is, soft fleshy fingers and thumbs don’t have the same pinpoint tip as a stylus, so they tend to obscure the on-screen target, inviting errors. Baudisch and Vogel call this problem “occlusion” and invented the Shift technique to address it.

When a user touches the screen surface, Shift makes a copy of the obscured area visible as a callout, along with a pointer that can be guided by finger motion. Once the pointer is in position, the user simply lifts his thumb or finger to select the target.

Extensive testing revealed that this technique overcomes the problem of occlusion and enables people to select small targets reliably, preserving the speed and simplicity of direct touch interaction that they’re accustomed to with larger targets.
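As a rough illustration, the touch-down, move and lift-off sequence described above can be modeled as a small state machine. Everything in this sketch (the class names, the occlusion threshold, the hit-testing rule) is invented for illustration and is not taken from the Shift paper itself.

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    radius: float  # touchable area, in pixels

# Illustrative threshold: escalate to the callout only for targets small
# enough that a fingertip would hide them.
OCCLUSION_THRESHOLD = 12.0

def needs_callout(target: Target) -> bool:
    """Small targets are occluded by the finger, so show the callout."""
    return target.radius < OCCLUSION_THRESHOLD

class ShiftInteraction:
    """Tracks one touch: a callout shows a copy of the occluded area,
    a pointer follows the finger, and lifting the finger commits."""

    def __init__(self):
        self.pointer = None        # (x, y) while the finger is down
        self.callout_visible = False

    def touch_down(self, x, y, target):
        self.pointer = (x, y)
        # Escalate only when the finger would occlude the target.
        self.callout_visible = needs_callout(target)

    def touch_move(self, x, y):
        # The callout pointer is guided by finger motion.
        self.pointer = (x, y)

    def touch_up(self, target):
        """Lifting the finger selects whatever the pointer is over."""
        x, y = self.pointer
        hit = (x - target.x) ** 2 + (y - target.y) ** 2 <= target.radius ** 2
        self.pointer, self.callout_visible = None, False
        return hit
```

In this sketch, direct touch on large targets skips the callout entirely, which mirrors the escalation idea Baudisch describes below: the extra machinery appears only when the simpler interaction would fail.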

“The mantra of my work is what I call ‘escalation,’” explains Baudisch, who will team up with Vogel to present the paper Tuesday afternoon at CHI. “My main assumption is the idea that every user will always use the most available device with the easiest interaction style, then escalate to more complicated hardware and interaction styles only as they’re needed.”

He admits that sounds simple, but it’s not always the way technology designers think. Typically, they focus on how to make devices more powerful. Instead of optimizing power, Baudisch says, we need to optimize the ratio between power and ease of use. He points out that there’s lots of processing power on the computer side and lots of processing power on the human side. The bottleneck is the conduit between them. Consider this, he says: At one time, our experience with computers involved 1MHz machines, while today it involves 3GHz multi-core machines. The only constant in that evolution is the communication with the human.

Baudisch foresees Shift helping people take better advantage of a device that’s small, mobile and increasingly ubiquitous, using movements that are more natural and simple. He respects the fact that even though PDAs are sold as stylus-based devices, users think of them as touch devices. And the user is the ultimate decision-maker.

“When we design these devices, we need to think about how we can allow people to use them in the most convenient way vs. the standard way,” he explains. “HCI is about observing people, and when we do that, we see them using a PDA with their fingers. We as people who make software need to take this into consideration and make technology that works with the way these devices are actually being used.”

Baudisch also had a hand in two other research projects that are being recognized at CHI 2007. The first is a paper titled “earPod: Eyes-Free Menu Selection Using Touch Input and Reactive Audio Feedback,” written with Shengdong Zhao, Pierre Dragicevic, Ravin Balakrishnan and Mark Chignell, four researchers at the University of Toronto. Their touch-based auditory menu technique is viewed as an especially exciting technology for use in mobile device interfaces. The paper is slated for presentation Thursday morning.

Baudisch is also presenting a demo titled “Soap: How to Make a Mouse Work in Mid-Air,” research work done in collaboration with Microsoft Research colleagues Mike Sinclair and Andrew Wilson. Later today at CHI, Baudisch will show how to combine a mouse and mouse pad into “soap,” a device that can be operated in mid-air using one hand. The researchers have used “soap” to control video games, to interact with wall displays and Windows Media Center, and to give slide presentations.

Reaching Beyond Illiteracy

The quest to make technology more user-friendly also inspires Indrani Medhi, a researcher whose paper will be presented Wednesday at CHI 2007. A scientist at Microsoft Research Labs India, Medhi is working to reach beyond illiteracy. Her research paper, titled “Full-Context Videos for First-Time, Non-Literate PC Users,” details Medhi’s efforts to design a text-free user interface that empowers people with little or no formal education — and zero experience with a computer — to achieve a meaningful, useful interaction the first time they have contact with one.

Written in collaboration with fellow Microsoft Researcher Kentaro Toyama, it was the highest-ranked paper accepted this year at alt.chi, a venue for controversial, unusual, challenging or thought-provoking work that otherwise might not be seen at the CHI conference. The paper proposes using full-context video to motivate and aid non-literate, first-time PC users to successfully navigate an application with minimal help. This solution addresses not only the obvious barrier of illiteracy but also the less apparent obstacles to interacting with a computer that poorer populations face, such as lack of awareness of what a PC can deliver, fear and mistrust of the technology and a lack of comprehension about how information relevant to inexperienced users is embedded in the PC.

Studies involving non-literate residents of urban slum communities in Bangalore, India, found that the introduction of full-context video made a dramatic difference in task completion. The usability tests, which focused on a job-search application chosen for its value to trial subjects, showed that completion of a simple job-search task was 100 percent with the use of video, versus six percent for a subject sample without it. The findings suggest significant implications for non-literate populations worldwide, not only for job-search applications but also in other areas that strive to alleviate poverty, such as providing information about government programs and healthcare resources.

Reaching Beyond Tedious Web Image Searching

Imagine searching the Internet for a specific image. You type the word “tiger” into a search engine’s query box, and up pops a random set of results that includes everything from the Siberian variety in a zoo to a certain well-known golfer, with a host of irrelevant offerings in between. Because these images vary so greatly in content and style, browsing the results can be frustrating, tedious and unpredictable.

A team of Microsoft Researchers is tackling this problem of ambiguity with IGroup, a proposed image search engine that presents results in “semantic clusters.” In IGroup, a clustering algorithm uses keyword semantic features to organize Web image search results into navigable groups. For example, IGroup would display the results of the keyword “tiger” as self-contained clusters, with images of “white tiger” and “Tiger Woods” not intermixed as they are in a conventional Web image result view. Plus, to make the search process more user-friendly, IGroup displays results as representative thumbnail images organized under a cluster name.
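IGroup’s actual clustering algorithm is not detailed here. As a loose illustration of the idea of grouping results by keyword features, the naive sketch below buckets result titles by the word adjacent to the query term, which is enough to separate “white tiger” from “Tiger Woods”; the function and rule are invented for this example.

```python
from collections import defaultdict

def cluster_by_phrase(query, titles):
    """Group result titles by a two-word phrase containing the query term,
    e.g. 'white tiger' vs. 'tiger woods' for the query 'tiger'.
    A deliberately naive stand-in for real semantic clustering."""
    q = query.lower()
    clusters = defaultdict(list)
    for title in titles:
        words = title.lower().split()
        label = q  # fallback when the query term is absent or stands alone
        for i, w in enumerate(words):
            if w == q:
                if i > 0:                     # prefer the preceding modifier
                    label = f"{words[i - 1]} {q}"
                elif i + 1 < len(words):      # otherwise use the next word
                    label = f"{q} {words[i + 1]}"
                break
        clusters[label].append(title)
    return dict(clusters)

results = cluster_by_phrase("tiger", [
    "White Tiger at the Zoo",
    "White Tiger cub photos",
    "Tiger Woods wins again",
    "Bengal tiger habitat",
])
# clusters: "white tiger" (2 titles), "tiger woods", "bengal tiger"
```

Each cluster label could then caption a group of representative thumbnails, in the spirit of the navigable result view the paper describes.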

A research paper describing this inventive approach will be presented at CHI 2007 Tuesday afternoon. Titled “IGroup: Presenting Web Image Search Results in Semantic Clusters,” the paper was written by Shuo Wang, a researcher in the Innovation Engineering Group at Microsoft Research Asia in Beijing, China, in collaboration with Feng Jing and Lei Zhang, two colleagues at Microsoft Research Asia, and two Microsoft Research interns: Jibo He, a researcher at Peking University, and Qixing Du, a developer at Tsinghua University.

Wang says he finds the HCI field challenging because computing technologies were first developed for scientific purposes rather than for empowering users in their everyday lives. “With this in mind, designing technologies that are more usable and receptive to user needs requires a deep understanding of users and insight into technologies,” he says. “It was so rewarding to see how efficient and confident IGroup users were during our usability tests compared with current image search engines.”

All research papers accepted by CHI judges are presented at the conference and published in the CHI Conference Proceedings. Conference organizers note that the CHI Proceedings are read and cited worldwide and have a far-reaching impact on the development of HCI theory, method and practice.
