By Antony Cook, Associate General Counsel, Corporate External and Legal Affairs, Microsoft Asia. This article was originally posted on LinkedIn.
Technology has brought us closer than ever before, providing opportunities to share knowledge and create value on a scale unprecedented in human history. It has fundamentally changed the way we consume information, shop and interact with our family and friends.
The increasing adoption of Artificial Intelligence (AI) is changing the game even further. There is no doubt that, as with the great technological advances of the past, AI will bring about vast changes, some of which are hard to imagine today. In his book The Future Computed: Artificial Intelligence and its Role in Society, Brad Smith envisages a world where, within the next 20 years, personal digital assistants will be trained to anticipate our needs, help manage our schedules, prepare us for meetings, assist as we plan our social lives, reply to and route communications, and drive cars.
Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways. As with previous significant technological advances, we’ll need to be thoughtful about how we address the societal issues that these changes bring about. Most important is ensuring that AI is developed responsibly, so that people trust it – and that it is not fed with human biases and prejudices.
Consumer Acceptance of AI
A recent Microsoft and IDC Asia/Pacific study, Understanding Consumer Trust in Digital Services in Asia Pacific, revealed that overall, consumers are optimistic (49%) about the future of AI, and most (75%) had a positive outlook toward AI in the workplace and its impact on their own jobs.
The study revealed that, in general, consumers are comfortable with AI making suggestions and recommendations for life’s more mundane activities, like commuting to work, choosing entertainment or making regular payments. However, consumers want to retain control over higher-stakes decisions, like hiring staff or making investments.
Consumers expect organizations to harness AI in a way that will benefit them, not put them at a disadvantage. And expectations are highest in industries like financial services, healthcare and education where duty of care is paramount.
Building Trust in AI
As AI begins to augment human understanding and decision-making in fields like education, healthcare, transportation, agriculture, energy and manufacturing, it will raise new questions. How can we best ensure that AI is safe and reliable? How can we trust AI and attain its benefits while protecting privacy? And, most importantly, who will set the policies that govern AI?
The people building AI systems are already required to comply with a broad range of laws. But when it comes to regulating AI, forming policies and ensuring its ethical implementation, consumers in the study strongly believed that government (43%) and technology companies (35%) should be the driving forces.
Trust, and its role in underwriting all forms of commercial exchange, has been recognized in economics since the time of its founder, Adam Smith. Without trust, we can’t buy groceries with cash at the local supermarket, let alone trade cryptocurrency online via a brokerage in Slovakia.
As the role of AI continues to grow, business leaders, policymakers, researchers, academics and representatives of nongovernmental groups must work together to ensure that AI-based technologies are designed and deployed in a manner that will earn the trust of the people who use them and the individuals whose data is being collected.
Ultimately, AI is the tool that will help us build all other tools, and trust is its foundation. That trust must be earned – and sustained.