5 takeaways from Brad Smith’s speech at the RISE conference

The growing role of artificial intelligence in shaping technology was a key topic at the RISE technology conference in Hong Kong in July. Speaking at the event, Microsoft President Brad Smith highlighted AI’s increasing ability to perceive the world as humans do, including recognizing images, translating languages and identifying patterns. The capacity for computers to think at a human-like level creates the potential for AI to benefit people and solve problems around the world. But Smith cautioned that AI’s vast promise comes with an equally large challenge: ensuring that AI evolves within an ethical framework that upholds shared values.

Here are five key takeaways from Smith’s talk.

Giving computers an ethical compass

As artificial intelligence enables computers to make decisions that were previously made only by humans, ensuring AI is guided by an ethical framework is critical, Smith said. Microsoft identified six guiding principles for building ethics into its AI systems, starting with fairness and a focus on identifying bias. Even the best facial recognition systems “still do a better job of identifying men than women, and do a better job of identifying the faces of people with a light complexion than with darker skin,” Smith said. “We need to address the risk of bias.”
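To make the bias concern concrete, here is a minimal Python sketch, not Microsoft's own methodology, of the kind of check an evaluation team might run: comparing recognition accuracy across demographic groups. The data and group labels are hypothetical placeholders; real audits use large, carefully balanced benchmark sets and more than one metric.

    # Minimal sketch of a fairness check on face-recognition results.
    # The records and group labels below are hypothetical.
    from collections import defaultdict

    # Each record: (demographic group, was the face correctly identified?)
    results = [
        ("light-skinned men", True), ("light-skinned men", True),
        ("darker-skinned women", True), ("darker-skinned women", False),
        # ... many more evaluation records ...
    ]

    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in results:
        totals[group][0] += int(correct)
        totals[group][1] += 1

    for group, (correct, total) in totals.items():
        print(f"{group}: {correct / total:.1%} accuracy ({total} samples)")

    # A large accuracy gap between groups signals the kind of bias Smith
    # describes, and points to rebalancing training data or adjusting the model.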

AI must be reliable and safe, he said, and AI systems need to be private and secure to protect people’s personal information. This technology must also be inclusive and able to serve people from different countries and of various ages and skill levels, Smith said.

Underpinning those four principles is a need for transparency — sharing information about how AI works — and accountability.

“It’s fundamentally important that we ensure that computers remain accountable to people, and that we ensure that people who design these computers remain accountable to everyone else in society under regulation and law,” Smith said.

Preparing employees for a digital workplace

Over the last 15 years, more jobs have involved tasks that require digital skills — whether that means using a computer, a mobile device or another computing platform at work, Smith said. Digital jobs tend to pay more and can provide a path to prosperity, he said, but workers need the skills to access them. Equipping employees with those abilities is “one of the fundamental challenges and opportunities” for governments, businesses and nonprofits around the world, Smith said.

Creating more opportunities to learn coding is key to building a digitally skilled workforce, Smith said. He mentioned the Hour of Code, an initiative launched by the nonprofit organization Code.org that has provided coding instruction to 100 million people worldwide, and government programs in Hong Kong and Singapore that are teaching coding to young people and adults continuing their education.

“We’re starting to see governments recognize that people are going to need, throughout their lives, to go back to school to learn new skills,” Smith said. “One of the things we have to do as we look to the future is bring the opportunity to code to everyone.”

Photo: Anne Taylor, a member of the Microsoft Accessibility team, which works to make products and services accessible to all customers.

Tapping AI to solve the world’s big problems

Microsoft has long been known for suites of products, Smith said, and the company is now bringing that approach to a new suite of programs, AI for Good. This initiative’s first program, AI for Earth, was started in 2017 and brings advances in computer science to four environmental areas of focus: biodiversity, water, agriculture and climate change.

Under this program, Microsoft is committing $50 million over five years to provide seed grants to nongovernmental organizations, startups and researchers in more than 20 countries, Smith said. The most promising projects will receive additional funding, and Microsoft will use the insights gleaned to build new products and tools. The program is already showing success, Smith said. In Tasmania, AI helped farmers improve their yields by 15 percent while reducing environmental runoff, and in Singapore, AI helped cut electricity consumption in buildings by almost 15 percent.

“We’re finding that AI, indeed, has the potential to help solve some of the world’s most pressing problems,” he said.

Improving accessibility for people with disabilities

Computers can see and hear. They can tell people what’s going on around them. Those abilities position AI to help the more than one billion people worldwide who have disabilities, Smith said.

“One of the things we’ve learned over the last year is that it’s quite possible that AI can do more for people with disabilities than for any other group on the planet,” he said.

Recognizing that potential, Microsoft in May announced AI for Accessibility, a $25 million, five-year initiative focused on using AI to help people with disabilities. The program provides grants of technology, AI expertise and platform-level services to developers, NGOs, inventors and others working on AI-first solutions to improve accessibility. Microsoft is also investing in its own AI-powered solutions, such as real-time speech-to-text transcription and predictive text functionality.
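As a rough illustration of what a platform-level speech-to-text service looks like to a developer, here is a minimal sketch using the Azure Speech SDK for Python. The subscription key and region are placeholders, and this is only a quickstart-style example, not the implementation behind any Microsoft product.

    # Minimal speech-to-text sketch using the Azure Speech SDK for Python.
    # "YOUR_KEY" and "YOUR_REGION" are placeholders for a real subscription.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

    print("Speak into your microphone...")
    result = recognizer.recognize_once()  # captures one utterance from the default mic

    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Transcript:", result.text)
    else:
        print("Speech not recognized:", result.reason)

The same SDK also supports continuous recognition, which is what live captioning scenarios use rather than transcribing one utterance at a time.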

Smith pointed to Seeing AI, a free Microsoft app designed for people who are blind or have low vision, as an example of the company’s efforts. This app, which provides narration to describe a person’s surroundings, identify currency and even gauge emotions on people’s faces, has been used over four million times since being launched a year ago.
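Seeing AI’s internals are not public, but the underlying capability of generating a natural-language description of a scene is available through cloud vision services. Below is a minimal sketch using the Azure Computer Vision client library for Python; the endpoint, key and image URL are placeholders, and this is not how the app itself is built.

    # Minimal image-description sketch using the Azure Computer Vision SDK.
    # Endpoint, key and image URL are placeholders, not Seeing AI's own code.
    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from msrest.authentication import CognitiveServicesCredentials

    client = ComputerVisionClient(
        "https://YOUR_RESOURCE.cognitiveservices.azure.com/",
        CognitiveServicesCredentials("YOUR_KEY"),
    )

    # Ask the service to describe the scene in natural language.
    analysis = client.describe_image("https://example.com/street-scene.jpg")
    for caption in analysis.captions:
        print(f"{caption.text} (confidence {caption.confidence:.0%})")

Each caption comes back with a confidence score, which a narration layer can use to decide how to phrase the description it reads aloud.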

“AI is absolutely a game-changer for people with disabilities,” Smith said.

Governing AI: a Hippocratic Oath for coders?

For AI to fulfill its potential to serve humanity, it must adhere to “timeless values,” Smith said. But defining those values in a diverse world is challenging, he acknowledged. AI is “posing for computers every ethical question that has existed for people,” he said, and requires an approach that takes into account a broad range of philosophies and ethical traditions.

University students and professors have been seeking to create a Hippocratic Oath for AI, Smith said, similar to the pledge doctors take to uphold specific ethical standards. Smith said a broader global conversation about the ethics of AI is needed, and ultimately, a new legal framework.

“We’re going to have to develop these ethical principles, and we’re going to have to work through the details that sometimes will be difficult,” he said. “Because the ultimate question is whether we want to live in a future of artificial intelligence where only ethical people create ethical AI, or whether we want to live in a world where, at least to some degree, ethical AI is required and assured for all of us.

“There’s only one way to do that, and that is with a new generation of laws.”

Lead image credit: S3studio/Getty Images

Follow Brad Smith on Twitter and LinkedIn.