The art of augmentation: Human intelligence and artificial intelligence working together
A Microsoft Distinguished Scientist allays fears about our AI future, but stresses the need for ethics.
Hollywood loves making movies about computers going crazy, robots running riot, and technology taking over. Science fiction blockbusters with apocalyptic twists often top the box office.
But have they ever made you wonder about something more serious? After all, advances in artificial intelligence (AI) are gathering pace. So, what are the chances of machines one day becoming smarter than us?
Hsiao-Wuen Hon shakes his head reassuringly. “When I started working on AI in the early days, I was worried about this. I remember watching Terminator 2,” he says with a smile. “But then I got to know more about AI and all the good it can do. And guess what? I don’t worry about AI in that way anymore.”
Dr. Hon is chief of Microsoft Research Asia (MSRA) – a world-leading center of excellence in computer science. He has been at the forefront of cutting-edge AI research for more than three decades and held the rare title of “Microsoft Distinguished Scientist” before becoming a Corporate Vice President in 2015.
“AI can augment what we do, but it cannot replace us,” he says. “Our minds can create and hypothesize. From there, we can use AI technologies and the cloud to produce new solutions and new realities. But everything starts with us.
“Just imagine if Einstein had had AI to help test out his theories.”
Instead of a movie-like dystopia, Dr. Hon sees a future in which “AI and HI” can work together. That is, artificial intelligence and human intelligence combining to put people – and their ideas – first.
Ultimately, he says, people are responsible for how AI is developed and applied. “Maybe one day, someone will work out how AI writes its own algorithms. But right now, I don’t see that happening.”
Unlike Hollywood, he’s not worried about smart machines going rogue. Instead, he and others are more concerned about the behavior and motives of the people who develop and implement technology.
READ: A look inside “Tools and Weapons: The Promise and the Peril of the Digital Age.”
For AI to move ahead, society has to trust it. And, to gain and keep that trust, scientists, technologists, and policymakers should “not only ask what computers can do but also what computers should do,” he says.
Dr. Hon is one of many experts calling for a framework of ethical safeguards and regulations to ensure the progress of technology stays on a responsible path.
Six ethical principles of trust for AI:
- Fairness: AI systems should treat all people fairly.
- Inclusiveness: AI systems should empower everyone and engage people.
- Reliability & Safety: AI systems should perform reliably and safely.
- Transparency: AI systems should be understandable.
- Privacy & Security: AI systems should be secure and respect privacy.
- Accountability: AI systems should have algorithmic accountability.
Augmenting human activity has long been the aim of scientists at MSRA. And over the past 20 years, they have notched up an impressive list of world-first breakthroughs.
Using the massive computational power of the cloud, its researchers have made great strides in AI-related areas like natural language processing, machine learning, and image recognition.
READ: Cloud and AI have immense potential to transform society
Here are just a few examples. In an era of fake news and bogus content, they have developed technologies that can easily detect manipulated faces generated by known algorithms.
They are also leading the way in machine-based translation among a wide range of languages.
Recently, MSRA announced it had developed an AI system that has mastered the ancient Chinese game of Mahjong, with the wider goal of learning how to help people solve real-life problems and handle uncertainty.
This breakthrough is the latest in an impressive record of producing algorithms that match, or go beyond, “human parity” in performance – that is, they can carry out tasks as well as, or better than, people.
For instance, researchers from its Natural Language Processing (NLP) Group and Microsoft’s Speech Dialog Research Group in the United States developed the first system to exceed human parity in Stanford University’s Conversational Question Answering (CoQA) Challenge. Here, machines are measured on their ability to understand passages of text and answer questions about them, much like in a conversation.
MSRA is also taking its research and applying it directly to real-world challenges. It is partnering with a growing list of companies and organizations to deliver new efficiencies in various industries and sectors and to create new business models.
READ stories about new solutions and technologies at innovationASIA
Many of these solutions are being applied to repetitive work, such as operating machinery, monitoring production lines, scanning images, cataloging information, and sifting through huge amounts of data – the sort of stuff that most of us would find mundane and tiresome to do.
“This frees up workers to do more rewarding tasks, to use their creativity in more productive ways,” he says.
Liberated from this drudgery, Dr. Hon says, people will have the time and energy to tap their imaginations, creativity, and emotions – attributes that he doubts will ever be replicated artificially.
Scientists at MSRA have been working with these partners:
- Investment company China Assets Management Co. is now trialing data-analyzing AI that helps finance professionals make better buy-and-sell decisions for their clients.
- Pharmaceutical maker Pfizer and Peking Union Medical College Hospital are developing a system to help infection specialists more quickly and accurately diagnose types of fungal infections and provide relevant information.
- Shipping company OOCL has an AI model that helps to manage its massive fleet of cargo ships and the containers they carry, resulting in major operational savings.
- Education company Pearson has “Longman English+,” an English language training app that provides Chinese students with a personalized learning experience.
- Insurer China Taiping has jointly developed several AI quantitative investment funds. One of these was named China Insurance Industry-Innovation Insurance Asset Management Product of 2019.
The argument for augmentation sounds like good news for humanity. But will the pessimists be persuaded? Dr. Hon has his doubts, because the fear of technological change has been around for generations.
He cites a Time magazine cover story back in 1950 that asked whether machines would one day supplant people.
“Most people didn’t really know what a computer was back then. Even so, they were talking exactly about what we are talking about now. We can build machines that are stronger than us and faster than us, and there are no concerns. But when it comes to building machines that might become smarter than us, some of us will always worry.”
Suspicion about AI, he says, has been around ever since scientists started working on it. Initially, their aim was to “simulate” human intelligence. But their research soon went in a different direction, and that eventually led to the development of today’s AI models, which are based on knowledge discovery from big data.
Dr. Hon doubts science will ever come close to creating machines that can think for themselves without input from people.
To do that would mean replicating human consciousness. Not only would that be hard – if not impossible – to do, but why would we want to do it?
“If something has consciousness, you cannot control it,” he says. “Why build a machine that will not listen to you or obey you?
“In a movie, it might be interesting to create something that goes out of control and fights us. But in reality, what’s the point?”