Seven things to know about Responsible AI

Natasha Crampton, Chief Responsible AI Officer, Microsoft

Artificial intelligence is rapidly transforming our world. Whether it’s ChatGPT or the new Bing, our recently announced AI-powered search experience, there has been a lot of excitement about the potential benefits.  

But with all the excitement, naturally there are questions, concerns, and curiosity about this latest development in tech, particularly when it comes to ensuring that AI is used responsibly and ethically. Microsoft’s Chief Responsible AI Officer, Natasha Crampton, was in the UK to meet with policymakers, civil society members, and the tech community to hear perspectives about what matters to them when it comes to AI, and to share more about Microsoft’s approach.  

We spoke with Natasha to understand how her team is working to ensure that a responsible approach to AI development and deployment is at the heart of this step change in how we use technology. Here are seven key insights Natasha shared with us. 

1. Microsoft has a dedicated Office of Responsible AI

“We’ve been hard at work on these issues since 2017, when we established our research-led Aether committee (Aether is an acronym for AI, Ethics and Effects in Engineering and Research). It was here we really started to go deeper on what these issues really mean for the world. From this, we adopted a set of principles in 2018 to guide our work.  

The Office of Responsible AI was then established in 2019 to ensure we had a comprehensive approach to Responsible AI, much like we do for Privacy, Accessibility, and Security. Since then, we’ve been sharpening our practice, spending a lot of time figuring out what a principle such as accountability actually means in practice.  

We’re then able to give engineering teams concrete guidance on how to fulfil those principles, and we share what we have learned with our customers, as well as broader society.”  

2. Responsibility is a key part of AI design — not an afterthought 

“In the summer of 2022, we received an exciting new model from OpenAI. Straightaway we assembled a group of testers and had people probe the raw model to understand what its capabilities and its limitations were.  

The insights generated from this research helped Microsoft think about what the right mitigations will be when we combine this model with the power of web search. It also helped OpenAI, who are constantly developing their model, to try to bake more safety into them. 

We built new testing pipelines where we thought about the potential harms of the model in a web search context. We then developed systematic approaches to measurement so we could better understand what some of main challenges we could have with this type of technology — one example being what is known as ‘hallucination’, where the model may make up facts that are not actually true.  

By November we’d figured out how we can measure them and then better mitigate them over time. We designed this product with Responsible AI controls at its core, so they’re an inherent part of the product. I’m proud of the way in which the whole responsible AI ecosystem came together to work on it.” 

3. Microsoft is working to ground responses in search results  

“Hallucinations are a well-known issue with large language models generally. The main way Microsoft can address them in the Bing product is to ensure the output of the model is grounded in search results.  

This means that the response provided to a user’s query is centred on high-ranking content from the web, and we provide links to websites so that users can learn more.  

Bing ranks web search content by heavily weighting features such as relevance, quality and credibility, and freshness. We consider grounded responses to be responses from the new Bing, in which claims are supported by information contained in input sources, such as web search results from the query, Bing’s knowledge base of fact-checked information, and, for the chat experience, recent conversational history from a given chat. Ungrounded responses are those in which a claim is not grounded in those input sources.  

We knew there would be new challenges that would emerge when we invited a small group of users to try the new Bing, so we designed the release strategy to be an incremental one so we could learn from early users. We’re grateful for those learnings, as it helps us make the product stronger. Through this process we have put new mitigations in place, and we are continuing to evolve our approach.” 

 4. Microsoft’s Responsible AI Standard is intended for use by everyone

“In June 2022, we decided to publish the Responsible AI standard. We don’t normally publish our internal standards to the general public, but we believe it is important to share what we’ve learned in this context, and help our customers and partners navigate through what can sometimes be new terrain for them, as much as it is for us.  

When we build tools within Microsoft to help us identify and measure and mitigate responsible AI challenges, we bake those tools into our Azure machine learning (ML) development platform so our customers can also use them for their own benefit. 

For some of our new products built on OpenAI, we’ve developed a safety system so that our customers can take advantage of our innovation and our learnings as opposed to having to build all this tech for themselves from scratch. We want to ensure our customers and partners are empowered to make responsible deployment decisions.” 

5. Diverse teams and viewpoints are key to ensuring Responsible AI

“Working on Responsible AI is incredibly multidisciplinary, and I love that. I work with researchers, such as the team at Microsoft UK’s Research Lab in Cambridge, engineers and policy makers. It’s crucial that we have diverse perspectives applied to our work for us to be able to move forward in a responsible way. 

By working with a huge range of people across Microsoft, we harness the full strength of our Responsible AI ecosystem in building these products. It’s been a joy to get our cross-functional teams to a point where we really understand each other’s language. It took time to get to there, but now we can strive toward advancing our shared goals together.  

But it can’t just be people at Microsoft making all the decisions in building this technology. We want to hear outside perspectives on what we’re doing, and how we could do things differently. Whether it’s through user research or ongoing dialogues with civil society groups, it’s essential we’re bringing the everyday experiences of different people into our work.  It’s something we must always be committed to because we can’t build technology that serves the world unless we have open dialogue with the people who are using it and feeling the impacts of it in their lives.” 

6. AI is technology built by humans, for humans

“At Microsoft, our mission is to empower every person and every organisation on the planet to achieve more. That means we make sure we’re building technology by humans, for humans. We should really look at this technology as a tool to amplify human potential, not as a substitute.  

On a personal level, AI helps me grapple with vast amounts of information. One of my jobs is to track all regulatory AI developments and help Microsoft develop positions. Being able to use technology to help me summarise large numbers of policy documents quickly enables me to ask follow-up questions to the right people.”

7. We’re currently on the frontiers — but Responsible AI is a forever job

“One of the exciting things about this cutting-edge technology is that we’re really on the frontiers. Naturally there are a range of issues in development that we are dealing with for the very first time, but we’re building on six years of responsible AI work.  

There are still a lot of research questions where we know the right questions to ask, but we don’t necessarily have the right answers in all cases. We will need to continually look around those corners, ask the hard questions, and over time we’ll be able to build up patterns and answers. 

What makes our Responsible AI ecosystem at Microsoft so strong is that we do combine the best of research, policy, and engineering. It’s this three-pronged approach that helps us look around corners and anticipate what’s coming next. It’s an exciting time in technology and I’m very proud of the work my team is doing to bring this next generation of AI tools and services to the world in a responsible way.”  

Ethical AI integration: 3 tips to get started 

You’ve seen the technology, you’re keen to try it out – but how do you ensure responsible AI is a part of your strategy? Here are Natasha’s top three tips: 

  1. Think deeply about your use case. Ask yourself, what are the benefits you are trying to secure? What are the potential harms you are trying to avoid? An Impact Assessment can be a very helpful step in developing your early product design.  
  2. Assemble a diverse team to help test your product prior to release and on an ongoing basis. Techniques like red-teaming can help push the boundaries of your systems and see how effective your protections are. 
  3. Be committed to ongoing learning and improvementAn incremental release strategy helps you learn and adapt quickly. Make sure you have strong feedback channels and resources for continual improvement. Leverage resources that reflect best practices wherever possible. 

Find out more: There are a host of resources, including tools, guides and assessment templates, on Microsoft’s Responsible AI principle hub to help you navigate AI integration ethically.  

Tags: , ,

Related Posts