Embracing Real Life in Virtual Earth

REDMOND, Wash., April 5, 2007 – Imagine being able to visit any location in the world in vivid detail from the comfort of your computer – seeing not just the geography and topography already available in mapping programs, but the actual environmental conditions at a point of interest right now. This is just one of the scenarios being explored in academic research enabled by Microsoft’s SensorMap and Virtual Earth Request for Proposal (RFP) programs. Such technology also has the potential, for example, to allow climatologists and other scientists to examine data over the long term to track pollution and climate change.

A screenshot of the SensorMap prototype, which enables a user to query sensors on a map by specifying a geographic region and to filter them by interest, such as temperature, video, traffic and parking.

Microsoft today announced 21 winners of the Virtual Earth and SensorMap RFP programs, with unrestricted funding totaling US$1.1 million. Scenarios such as those described above are among the projects the winning academic researchers are already pursuing. The awards are made for one year, to bolster existing academic research programs or seed entirely new ones.

“Being able to map real-time data happening in the physical world onto a computer will have tremendous societal impact – observing weather patterns, calculating soil erosion, sensing pollution, the applications seem endless,” says Stewart Tansley, a program manager in the External Research & Programs group in Microsoft Research. “The ability to collect massive amounts of real-time data and apply it to the world we live in by visualizing it on a ‘live’ map is very exciting. Any one of the RFP projects has the potential to change the way people live, commute to work or build a structure, as well as empower scientists in their increasingly important environmental research.”

Understanding the vital role that academia plays, the External Research & Programs group at Microsoft collaborates with university researchers around the world, focusing on current real-world issues, cutting-edge research, challenges facing the academic ecosystem, and innovative approaches to education that prepare students for the challenges of the future. Every year, Microsoft Research publishes RFPs in specific subject areas – in 2006 alone, $4 million in software, technical resources and funding was awarded to the most promising academic researchers.

When the Virtual Earth and SensorMap RFPs were first announced, they sparked worldwide interest – during the few weeks they were open for submissions, the Virtual Earth RFP received nearly 80 proposals from 17 countries, and SensorMap received 60 proposals from 13 countries.

SensorMap: Browsing the Physical World in Real Time

Live Search Maps, a geo-centric Web interface based on Virtual Earth technology, is useful for visualizing spatial and geographic data such as locations, neighborhoods, weather, and traffic. Researchers expressed interest in creating custom applications that overlay their own data on top of these browsable maps, such as housing information, crime rates, locations of vehicles and podcasters, and weather. This became possible when the Virtual Earth team published APIs [application programming interfaces] to overlay location data on their maps – but it is a limited solution. Publishing even a single stream of data requires a lot of effort and some programming expertise. Existing applications are mutually incompatible – for example, no single map can show both housing information and crime rates in an area. And existing solutions cannot query live sensors based on keywords or location, or aggregate the results in useful ways.
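The kind of query described here – selecting live sensors inside a geographic region, filtering by type, and aggregating their readings – can be sketched roughly as follows. Every name in this sketch is illustrative; it is not the actual SensorMap or Virtual Earth API.

```python
# Illustrative sketch only: filter live sensors by bounding box and
# sensor type, then aggregate the matching readings.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    lat: float
    lon: float
    kind: str       # e.g. "temperature", "traffic", "video"
    reading: float  # latest value reported by this sensor

def query_sensors(sensors, bbox, kind):
    """Return sensors of the given kind inside (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [s for s in sensors
            if s.kind == kind
            and min_lat <= s.lat <= max_lat
            and min_lon <= s.lon <= max_lon]

sensors = [
    Sensor("campus-roof", 47.653, -122.308, "temperature", 11.5),
    Sensor("bridge-cam", 47.646, -122.304, "video", 0.0),
    Sensor("lakeside", 47.640, -122.300, "temperature", 12.1),
]

seattle_box = (47.60, -122.40, 47.70, -122.25)  # hypothetical region of interest
temps = query_sensors(sensors, seattle_box, "temperature")
avg = sum(s.reading for s in temps) / len(temps)
print(len(temps), round(avg, 2))  # 2 matching sensors, average 11.8
```

A shared platform would run queries like this server-side over many publishers' feeds, which is what makes cross-source aggregation (housing plus crime rates on one map) tractable.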

The SenseWeb project at Microsoft Research aims to address these challenges by providing a common platform and set of tools for data owners to easily publish data and for users to make queries of the live data sources. The SensorMap platform, developed from the SenseWeb project, transparently provides mechanisms to archive and index data, to process queries, and to aggregate and present results on Web interfaces such as Virtual Earth.

“It’s fantastic; the resources this provides us allows us to expand our project and get to the data more effectively and efficiently. Without this RFP we would have had to build the entire project from scratch,” says Matt Welsh, an assistant professor of computer science at Harvard University who is working on CitySense, an NSF-funded urban-scale sensor network testbed that will allow remote users to reprogram nodes, acquire data, and otherwise experiment with various algorithms and protocols.

The CitySense open testbed will consist of more than 100 nodes – each an embedded PC equipped with long-range 802.11 radios and high-fidelity, low-maintenance sensors – mounted in weatherproof cases on streetlights throughout the city of Cambridge, Mass. These high-quality sensors, powered by the streetlights they’re attached to and distributed over a large urban area, can monitor such things as airborne pollutants, wind velocity, humidity, temperature, rainfall and automobile traffic, making it possible to develop detailed models of the impact of pollution down to the specific street and neighborhood level. The data collected by the nodes is then displayed on SensorMap.

“We developed the SensorMap portal as a step toward the vision of creating a World Wide Sensor Web,” says Feng Zhao, a principal researcher who leads the group that developed SensorMap technology at Microsoft. “SensorMap enables data owners to easily publish and share sensor data, and also lets users query and browse live data on a geographic interface such as Virtual Earth.”

With the SensorMap RFP, Zhao says, Microsoft hopes to understand and advance what exactly it takes to support open and diverse sets of sensor data publishers and consumers on a common platform, and to develop shared infrastructure and tools for data publishing, data management and data querying and visualization. And such efforts aren’t restricted to stationary sensors.

Sensors can also be mobile. In fact, such sensors already exist today, in the forms of GPS navigators in cars, microphones and cameras in cell phones, wearable pedometers, heart-rate sensors in watches, and in many other portable form factors.

“What if I could create an avatar of myself that recreates in a virtual world what I’m doing in the physical world, so that my family could access it when I’m away on a business trip and keep up with what I’m doing?” asks Tarek Abdelzaher, an associate professor of computer science at the University of Illinois at Urbana-Champaign. “This scenario is possible through mobile sensors and SensorMap technology.”

Abdelzaher’s goal is to enable the sharing of sensory information in a networked world of mobile sensors while remaining sensitive to data-management issues, protecting privacy and preventing unauthorized access. He proposes to develop two applications: one for ecological science, such as monitoring a bird’s sounds, heartbeat and location from sensors it wears for biological study, and one for the social side – sharing data from wearable activity-monitoring sensors, such as a jacket that can tell, most of the time, what its wearer is doing (a prototype already exists today).

“It’s essentially enabling us to share physical world experiences the way we share pages on the Web,” Abdelzaher says. “The potential is endless; from bringing the outdoors to a school child’s biology lab to bringing people closer together.”

Microsoft’s Zhao agrees. “Just like the Web that let millions of us publish and share news, documents, images, here at Microsoft we think making live data from sensors accessible can dramatically change the way people live and work today,” he says. “With this technology I will know what the trail condition is like before I head out on a run. I will know the weather, the temperature and other microscopic conditions. This is just one small example of how knowing location-specific environmental conditions will be useful in our daily lives.”

3D Mapping and Detailed Aerial Images with Virtual Earth

Virtual Earth is an online mapping service that enables users to search, discover, explore, plan and share information about specific locations. Currently, with its use of traditional road maps, labeled aerial photo views, low-angle high-resolution aerial photos and proximity searching capabilities, Virtual Earth provides unique opportunities for developers to incorporate both location and local search features into their Web applications, which now include 3D maps. But the ability to create even richer functionality depends largely on the availability of imagery assets – an expensive proposition today.

Thus, through programs such as the Virtual Earth RFP, Microsoft is making data assets available to academia, encouraging innovation around information visualization, location-based searching, discovery and sharing.

Frank Dellaert, an associate professor in the College of Computing at Georgia Tech, is working on “City Capture,” a project that aims to let a user navigate around the world in three dimensions. To do this, Dellaert has proposed installing several Microsoft Research-pioneered GigaPixel Sensors – high-resolution, long-focal-length camera lenses mounted on modified telescope pan-tilt rigs – throughout a city to capture its evolution over time.

A screenshot of downtown Seattle, with engineering-precision 3D models generated by Virtual Earth, streaming over the Internet and rendering in real-time.

“The idea is that the sensors will capture a single panorama of very high resolution, which contains several billions of pixels,” Dellaert says. “A normal camera user might take 10 pictures and stitch them together to create a panorama, but now we are talking about hundreds, thousands of pictures being stitched together, taking into account the position of the sun, the movements of the clouds, and other environmental factors that will really give you the sense of ‘being there.'”
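Some back-of-the-envelope arithmetic (my own illustration, not figures from the project) shows why the jump from "10 pictures" to "hundreds, thousands" follows directly from the pixel counts involved:

```python
# Rough estimate of how many ordinary camera frames a gigapixel
# panorama requires, once overlap between neighboring frames is
# accounted for. All numbers below are assumptions for illustration.
import math

panorama_pixels = 4e9   # a hypothetical 4-gigapixel panorama
frame_pixels = 10e6     # one 10-megapixel photo
overlap = 0.3           # assume ~30% overlap along each edge for stitching

effective = frame_pixels * (1 - overlap) ** 2   # unique pixels contributed per frame
frames = math.ceil(panorama_pixels / effective)
print(frames)  # 817 frames
```

Even with generous assumptions, a single panorama at this resolution requires on the order of a thousand exposures, taken over enough time that lighting and cloud movement must be modeled rather than ignored.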

Multiple cameras are necessary, he explains, because one camera is not enough to capture range and distance. At least two are necessary to provide depth perception – and additional cameras provide even greater detail. By taking several pictures every day from each viewpoint, the project will create an extremely rich record of the city’s evolution over time. This data would then be integrated with Virtual Earth. Dellaert calls this the GigaPixel Spotlight, in which the captured high-resolution panoramas are projected onto existing 3D models on Virtual Earth to provide up-to-date or historical high-resolution texture maps. Ideally, the level of detail rendered would parallel the user’s needs and hardware capabilities.
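The reason two cameras suffice for depth is standard stereo triangulation: a point's apparent shift (disparity) between two views is inversely proportional to its distance. A minimal worked example, with illustrative numbers not drawn from the project:

```python
# Depth from a rectified stereo pair:
#     depth = focal_length * baseline / disparity
# where disparity is the pixel offset of the same point between the
# two camera images.
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in meters, given focal length in pixels and camera baseline in meters."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# Example: a facade point seen by two cameras 0.5 m apart, with an
# 8000-pixel focal length, shifts 40 px between the views:
print(stereo_depth(8000, 0.5, 40))  # 100.0 meters
```

Adding more cameras (or more viewpoints per camera) tightens this estimate, since distant points produce tiny disparities that a wider baseline resolves better.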

“Eventually, you would be able to virtually visit every city in the world, at any point in time since the data capturing began,” Dellaert says. And with such rich 3D rendering, a user could in theory even give detailed driving directions to someone half a world away – down to the pothole to avoid a block ahead – without ever having set foot in that city.

The data will also serve as source imagery for the 4D Cities project at Georgia Tech’s College of Computing, an ongoing effort funded by the National Science Foundation (NSF) and Microsoft to create 4D models of urban environments from historical imagery to capture their evolution over time.

Craig Knoblock, a senior project leader at the Information Sciences Institute and a research professor at the University of Southern California, has another vision for Virtual Earth. He wants users to be able to fuse existing online sources – such as street maps, property survey maps, and maps of oil and natural gas fields – with aerial images. By doing so, they could view a precise mapping of what would typically be a fairly ambiguous aerial photo – a cityscape with a few distinct landmarks. However, this kind of mapping is possible only with a fully detailed street map showing all road intersections, which is currently the basis for precise alignment.

Knoblock’s project would align abstract street maps with known road networks and other maps – and integrate them with Virtual Earth, so that users will be able to select individual map ‘layers’ and display them on available aerial images of a particular region – meaning they can be as specific or as general as suits their needs.
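The alignment step can be sketched as a simple least-squares fit: given a handful of road intersections identified both on the abstract map and in the aerial image, solve for an affine transform and use it to warp any map coordinate into image coordinates. This is my own illustration of the idea, not Knoblock's actual method, and the coordinates below are invented.

```python
# Fit an affine map-to-image transform from matched control points,
# then project an arbitrary map coordinate into aerial-image pixels.
import numpy as np

# (x, y) of four intersections on the abstract street map...
map_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
# ...and the same intersections located in the aerial image (pixels).
img_pts = np.array([[100.0, 200.0], [300.0, 200.0], [100.0, 400.0], [300.0, 400.0]])

# Solve img = [x, y, 1] @ A for the 3x2 affine matrix A by least squares.
ones = np.ones((len(map_pts), 1))
M = np.hstack([map_pts, ones])                    # n x 3 design matrix
A, *_ = np.linalg.lstsq(M, img_pts, rcond=None)   # 3 x 2 solution

def map_to_image(x, y):
    """Project a street-map coordinate into aerial-image pixel coordinates."""
    return np.array([x, y, 1.0]) @ A

print(map_to_image(5.0, 5.0))  # center of the square -> [200. 300.]
```

With the transform in hand, each map becomes a selectable layer whose features land in the right place on the imagery, which is what makes per-layer display possible.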

“There are many maps available online, but they are hard to find and difficult to use,” says Knoblock. “Fusing existing maps with the aerial imagery will provide a much richer experience, especially when the user can choose the layers he or she is interested in.” Knoblock adds that eventually, he’d like to create a system to index all the maps available online, not just road maps, but also those that mark fast food restaurants, gas stations, oil fields, coffeehouses, et cetera, so that people can choose any of those layers to fuse with the map they’re viewing.

In addition to the unrestricted grants, each of today’s 21 winners of the SensorMap and Virtual Earth RFPs received additional research support from Microsoft. SensorMap RFP winners will receive access to Microsoft’s SensorMap geographic sensor-data publishing platform, which lets them integrate and publish searchable data through a map interface, and Virtual Earth RFP winners will receive Microsoft’s Web-based geographic imagery platform combined with a special software development kit to explore potential applications of location-based Web searches.

The winning projects are already generating interest – and enthusiasm. “I think the most exciting thing is how this technology is so immediately impactful,” says Evelyne Viegas, a program manager with External Research & Programs. “We are working with academics who are literally mapping new ways for us to live on this planet. The proposals themselves are fascinating, but I cannot wait to see the results of this research.”
