By THOMAS DAEMEN
It was great to join StartupAUS and hundreds of entrepreneurs and policymakers at the recent National Policy Hack in Brisbane. Our team, which brought together start-ups, larger companies and policymakers, spent the day exploring fascinating ethical questions around the development and use of Artificial Intelligence. Although we all came to the hackathon with different perspectives, one thing was clear at the outset – everyone recognised both the remarkable potential and the ethical challenges associated with the growing use of AI.
We are already seeing how AI can help doctors reduce medical mistakes, farmers improve yields, teachers customise instruction and researchers unlock solutions to protect our planet. The technology will also automate many mundane and repetitive tasks, freeing people to devote their time and energy to more pleasurable and creative endeavours. Microsoft’s approach to AI is to create a platform that is open and interoperable, and we encourage experimentation by everyone.
But there is a caveat to all this progress. The people who build AI systems must of course comply with existing laws, while also grappling with a wide array of novel legal and ethical questions about how the technology will affect society. As AI begins to augment human understanding and decision-making in fields like education, healthcare, transport, agriculture, energy and manufacturing, how can we ensure it treats everyone fairly? How do we make AI safe and reliable? What responsibility do organisations have to protect privacy? And should decisions made with the help of AI systems be fully transparent and accountable?
Fairness
It is essential that AI-based technologies are designed and deployed in a way that earns the trust of the people who use them – and particularly the individuals whose data is being collected. Central to this is ensuring that AI systems and their associated algorithms treat people fairly.
Ideally, when an AI system recommends a course of action – for example, a patient’s medical treatment or a bank’s decision on a home loan application – it should make the same recommendation for everyone in an identical situation. Computers, in theory, are not subject to the biases that influence human decision-making. Yet AI systems are designed by humans, who have biases of their own, and are only as good as the data fed into them. Even when one algorithm generates another, the original was written by humans. An insight or prediction is not more honest simply because it was generated by a bot.
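To make the fairness point concrete, here is one minimal sketch of the kind of check a development team might run: comparing approval rates for two groups of applicants in a model’s output. The data, the group labels and the 0.8 threshold (borrowed from a common rule of thumb) are all invented for illustration; this is a starting point for an audit, not a prescribed method.

```python
# Illustrative sketch only: a minimal demographic-parity check on
# hypothetical loan-approval decisions. All data and the 0.8 threshold
# are assumptions made for this example.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0 or 1)."""
    return sum(decisions) / len(decisions)

def parity_ratio(decisions_a, decisions_b):
    """Ratio of the lower group's approval rate to the higher one's.
    A value near 1.0 suggests similar treatment; well below 1.0 is a
    signal to investigate the model and its training data."""
    rate_a = approval_rate(decisions_a)
    rate_b = approval_rate(decisions_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for two demographic groups (1 = approved).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = parity_ratio(group_a, group_b)
print(f"Parity ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold
    print("Disparity detected: audit the model and its training data.")
```

A check like this cannot prove a system is fair, but a failing ratio is exactly the kind of early warning that should send engineers back to examine the data the model was trained on.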
Transparency
When AI systems inform decisions that affect people’s lives, many in society will demand the right to understand how those decisions are made. This is particularly important when such decisions have public policy implications. For example, are patients entitled to know whether an AI algorithm has prioritised or rejected treatment for their condition? The best way to engender trust is to provide explanations that include contextual information about how an AI system works and interacts with data. When such transparency exists, anyone with an interest can evaluate the decision-making process itself.
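As a rough illustration of what such an explanation could look like, the sketch below breaks a single decision from a hypothetical linear scoring model into per-feature contributions. The weights, applicant features and approval threshold are all invented for this example; real systems would demand far richer explanations than this.

```python
# Illustrative sketch only: explaining one decision from a hypothetical
# linear credit-scoring model by listing each feature's contribution.
# Weights, features and the decision threshold are invented examples.

weights = {"income": 0.4, "years_employed": 0.3, "existing_debt": -0.5}
threshold = 1.0  # assumed approval cut-off

applicant = {"income": 3.2, "years_employed": 1.5, "existing_debt": 2.0}

# Each feature's contribution to the final score.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Score {score:.2f} vs threshold {threshold} -> "
      f"{'approved' if score >= threshold else 'declined'}")
for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {value:+.2f}")
```

Even this toy example shows the value of transparency: a declined applicant can see that existing debt, not income, drove the outcome, and a regulator can inspect the same breakdown.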
Above all, should society call on governments to require a human role when AI systems are used to decide issues of profound social importance? While these questions can sound like highly academic debating points at global conferences, they pose very real, practical challenges for those creating AI systems with the power to change people’s lives.
Reliability and security
As with any technology, trust in AI systems depends on whether they can be operated reliably, safely and consistently – not only under normal circumstances, but also in unexpected conditions or when under attack. It is also vital that these systems have appropriate privacy and security protections. Simply put, people will not share personal data – which is essential for AI to help inform decisions – unless they are confident their privacy is protected and their data is secure.
Two pieces of data privacy legislation that took effect in 2018 create new opportunities for Australian organisations to proactively treat and manage their data as an asset. The Notifiable Data Breaches scheme, established under the Privacy Amendment (Notifiable Data Breaches) Act 2017, commenced in Australia on 22 February 2018. It was followed by Europe’s General Data Protection Regulation (GDPR) in May. Every organisation developing or using AI systems should heed these new regimes and acknowledge the critical balance that must be struck on these vital issues.
Accountability
Finally, as with other technologies and products, the people who design and deploy AI systems must be accountable for how their systems operate. To establish accountability norms for AI, governments and businesses should draw on experience in other fields, including healthcare and privacy. Those responsible for AI systems should adopt best practices and periodically check that those practices are being followed and are working effectively.
At Microsoft, we have established an AI and Ethics in Engineering and Research Committee. Comprising senior leaders from across our engineering, research, consulting and legal functions, the group was created to identify, study and recommend policies, procedures, and best practices regarding the influence of AI on people and society. We hope it will help pave the way toward a common framework of principles to guide a new generation of AI-enabled systems – ensuring they are used ethically and responsibly for the benefit of the world.