Why artificial intelligence needs the human touch

Forget the scare stories: AI is produced by engineers, and its creators can design it in ways that enhance, rather than hinder, people’s lives


“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.”

So wrote computing pioneer Alan Turing in the introduction to his seminal paper on machine intelligence, published in the journal Mind in 1950.

That paper – introducing the ‘imitation game’, Turing’s adversarial test for machine intelligence, in which a human judge must decide whether the unseen conversational partner (what we would now call a chatbot) is a person or a computer – helped spark the field of research that later became widely known as artificial intelligence.

Whilst no researcher has yet built a general-purpose thinking machine – what’s known as artificial general intelligence – that passes Turing’s test, a wide variety of special-purpose AIs have been created to focus on, and solve, very specific problems, such as image and speech recognition, and defeating chess and Go champions.

However, whenever AI hits trouble – such as when prototype autonomous cars cause accidents, robots look set to eliminate jobs, or AI algorithms access personal data without permission – the news media surfaces major concerns about a societal downside to AI.

One trope in such stories is that AI is a hard-to-harness technology, one that could run away from human control at any time. But the truth is far more nuanced. At Microsoft, the aim is to use AI as a tool just like any other – one that engineers use to deliver genuine benefits to people, whether they are at home, or at work in fields as diverse as education, healthcare, aerospace, manufacturing or retail.

“We are trying to teach machines to learn so that they can do things that humans currently do, but in turn they should help people by augmenting their experiences,” says Microsoft CEO Satya Nadella.

This is highly doable. Intelligent machines are not magic; they are products engineered by people using advanced hardware and software tools.

Granted, machine learning means AI systems can learn to undertake tasks they were not explicitly programmed to do – that is one of their major talents – but designers are duty-bound to ensure that, whatever actions their AI can take, the system it is part of stays within acceptable, as-safe-as-possible bounds.

It’s code, but not as we know it

For Nadella, the way AI is used is simply akin to regular software engineering: “We are creating AI for use in our products and we do so using a set of design guidelines. In the same way as we have a set of guidelines for designing user interfaces, for example, we also have a set of guidelines for creating ‘tasteful’ AI, if you like.”

Making software easy to use with a user-friendly front-end might seem to have little in common with building AI based on deep neural networks – the kind that can recognize images, speech, aromas or even cats. But what they share, says Nadella, is a need to be trusted: people won’t use software that loses their work or reveals private data. They need to depend upon it.

“So, our first guideline is to build artificial intelligence that augments human capability, creating AI that generates trust in the technology, because it is designed to preserve security and privacy,” he says.

Still another guideline, he adds, is to combat the common AI industry claim that intelligent algorithms do their own thing in a “black box”, making decisions as they learn about the world in ways that are not subject to critical engineering review.

But that risks building systems that might exhibit not only unexpected but highly unwanted behaviors – so it has to be tackled. “We have to create transparency in that black box and open it up for inspection,” says Nadella.

Branches everywhere

To work out where AI goes from here, however, it is perhaps worth asking how the tech industry got into a situation where something as seemingly deterministic as a computer program can make decisions that its creators struggle to understand.

The answer is that the history of computing is one in which machines have been making ever finer decisions based on the technology of the day – and at a certain point they make so many micro-decisions that they begin to resemble human decision making.

This ability to make decisions is called conditional branching – it’s what separates a computer from a calculator. A calculator merely performs mathematical operations in sequence. But a computer can run a sequence of instructions, its program, and has the ability to choose to run different sets of instructions, or the same instructions again in a loop perhaps, under different conditions.

In other words, the machine can decide that if condition A applies, then it should do B or C – for example, if the machine can’t connect to WiFi (condition A), it can try Bluetooth (B) or the USB port (C).
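
In programming terms, that decision might look like the minimal Python sketch below; the connection checks are illustrative stand-ins invented for this example, not real device code.

```python
# A minimal sketch of conditional branching: the same program takes a
# different path of instructions depending on which condition holds.
# The connection checks are illustrative stand-ins, not real device code.

def wifi_available() -> bool:
    return False   # pretend the WiFi check failed (condition A does not hold)

def bluetooth_available() -> bool:
    return True    # pretend Bluetooth is reachable

def connect() -> str:
    if wifi_available():           # condition A: WiFi works, use it
        return "connected over WiFi"
    if bluetooth_available():      # otherwise try B: Bluetooth
        return "connected over Bluetooth"
    return "connected over USB"    # otherwise fall back to C: USB

print(connect())   # -> connected over Bluetooth
```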

In the nineteenth century, the British mathematician Charles Babbage envisioned – but never built – a mechanical computer with the ability to perform conditional branching. Later, Turing’s theory of computing was harnessed by his colleagues at the Bletchley Park codebreaking centre in 1943 to construct Colossus, the world’s first programmable electronic digital computer, using voltages in electronic vacuum tubes to represent data.

Having moved from those slow valves to faster transistors and then to ever denser, superfast microprocessors, computers are now capable of vast amounts of decision-making. To make them more flexible still, deep neural networks have been developed that can be trained to make decisions without exhaustive programming, for applications such as pattern, speech and face recognition.
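
The shift from explicit programming to training can be illustrated with a toy example – a single artificial neuron (nowhere near the scale of a production network, and invented purely for illustration) that learns the logical AND decision from labelled examples rather than from a hand-written rule.

```python
# Toy illustration: a single artificial neuron learns the AND decision from
# labelled examples instead of being given an explicit if/else rule.
# Deep networks stack huge numbers of such units, but the idea is the same.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0   # the "knowledge" the neuron acquires during training
rate = 0.1             # learning rate

for _ in range(20):                                   # a few passes over the data
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output                       # learn from the mistake
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
```

No line of that program spells out the AND rule; the behaviour emerges from the weights the training loop settles on, which is what “trained, not programmed” means in miniature.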

Just such a deep neural network led Microsoft to a major breakthrough in speech recognition last October, when researchers behind an AI-based transcription system achieved parity with human transcription of conversational speech. The feat paves the way for better speech recognition AI in consumer products like Xbox, Skype Translator and Cortana, the digital assistant in Windows 10.

Prompted by that success, Harry Shum, who heads Microsoft’s Artificial Intelligence and Research group, said he was blown away by the rate of progress deep learning is delivering. “Even five years ago, I wouldn’t have thought we could have achieved this. I just wouldn’t have thought it would be possible,” he said.

It did not end there. That same week, another of Microsoft’s deep neural networks won first place in a computer vision competition, the COCO image segmentation challenge, in which competitors’ systems had to work out where particular objects are in an image and trace their outlines. The task is not hard for humans, because we know pretty much what every object in our world is, but it is tough for computers, because they do not know where the boundaries of objects lie. “That’s the hardest part of the picture to figure out,” says Baining Guo, assistant managing director of Microsoft Research Asia.
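
To make the task concrete: a segmentation system effectively labels every pixel of an image. The toy mask below is invented for illustration (it is not COCO data), but it shows the kind of per-pixel output involved and why the boundary pixels are where the difficulty concentrates.

```python
# Toy illustration (invented data, not the COCO system): segmentation assigns
# a class label to every pixel, so the output is a mask the same shape as the
# image. 0 = background, 1 = object, in this made-up 6x6 mask.

mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

# A pixel lies on an object boundary if any 4-neighbour carries a different
# label - exactly the pixels a segmentation model finds hardest to get right.
rows, cols = len(mask), len(mask[0])
boundary = 0
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc] != mask[r][c]:
                boundary += 1
                break

print("object pixels:", sum(map(sum, mask)), "boundary pixels:", boundary)
```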

The trick with AI now, Microsoft believes, is to coax it into trying to understand what it is seeing and hearing – or indeed smelling and tasting – so that it can answer questions put to it both by speech and through the company’s regiments of intelligent chatbots.

Making AI access ubiquitous is the idea. “In the next phase we should be asking how we can democratize access to AI, rather than worshipping the five or six companies that do a lot of it today. AI will need to be everywhere,” predicts Nadella.

Indeed, the company is well on its way to spreading the fruits of AI through the Bot Framework, as Microsoft calls its chatbot platform, which aims to introduce conversational computing with background AI into text messages, Skype, Slack, Facebook Messenger and email.

Who knows? Maybe one day one of those bots will be able to answer the opening question in Turing’s Mind paper – can machines think? – in the affirmative.