Illustration of a boat in water

Making it easier for companies to build and ship AI people can trust 

by Vanessa Ho

Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves. Leaders worry about the risk of AI generating incorrect or harmful information, leaking sensitive data, being hijacked by attackers or violating privacy laws — and they’re sometimes ill-equipped to handle the risks.  

“Organizations care about safety and security along with quality and performance of their AI applications,” says Sarah Bird, chief product officer of Responsible AI at Microsoft. “But many of them don’t understand what they need to do to make their AI trustworthy, or they don’t have the tools to do it.”  

To bridge the gap, Microsoft provides tools and services that help developers build and ship trustworthy AI systems, or AI built with security, safety and privacy in mind. The tools have helped many organizations launch technologies in complex and heavily regulated environments, from an AI assistant that summarizes patient medical records to an AI chatbot that gives customers tax guidance.  

The approach is also helping developers work more efficiently, says Mehrnoosh Sameki, a Responsible AI principal product manager at Microsoft. 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.

“It’s very easy to get to the first version of a generative AI application, but people slow down drastically before it goes live because they’re scared it might expose them to risk, or they don’t know if they’re complying with regulations and requirements,” she says. “These tools expedite deployment and give peace of mind as you go through testing and safeguarding your application.”  

The tools are part of a holistic method that Microsoft provides for building AI responsibly, honed by expertise in identifying, measuring, managing and monitoring risk in its own products — and making sure each step is done. When generative AI first emerged, the company assembled experts in security, safety, fairness and other areas to identify foundational risks and share documentation, something it still does today as technology changes. It then developed a thorough approach for mitigating risk and tools for putting it into practice.  

The approach reflects the work of an AI Red Team that identifies emerging risks like hallucinations and prompt attacks, researchers who study deepfakes, measurement experts who developed a system for evaluating AI, and engineers who build and refine safety guardrails. Tools include the open source framework PyRIT for red teams to identify risks, automated evaluations in Azure AI Foundry for continuously measuring and monitoring risks, and Azure AI Content Safety for detecting and blocking harmful inputs and outputs.  

Microsoft also publishes best practices for choosing the right model for an application, writing system messages and designing user experiences as part of building a robust AI safety system.  

“We use a defense-in-depth approach with many layers protecting against different types of risks, and we’re giving people all the pieces to do this work themselves,” Bird says. 

For the tax-preparation company that built a guidance chatbot, the capability to correct AI hallucinations was particularly important for providing accurate information, says Sameki. The company also made its chatbot more secure, safe and private with filters that block prompt attacks, harmful content and personally identifiable information.  

She says the health care organization that created the summarization assistant was especially interested in tools for improving accuracy and creating a custom filter to make sure the summaries didn’t omit key information.  

“A lot of our tools help as debugging tools so they could understand how to improve their application,” Sameki says. “Both companies were able to deploy faster and with a lot more confidence.”  

Microsoft is also helping organizations improve their AI governance, a system of tracking and sharing important details about the development, deployment and operation of an application or model. Available in private preview in Azure AI Foundry, AI reports will give organizations a unified platform for collaborating, complying with a growing number of AI regulations and documenting evaluation insights, potential risks and mitigations.

“It’s hard to know that all the pieces are working if you don’t have the right governance in place,” says Bird. “We’re making sure that Microsoft’s AI systems are compliant, and we’re sharing best practices, tools and technologies that help customers with their compliance journey.”  

The work is part of Microsoft’s goal to help people do more with AI and share learnings that make the work easier for everyone.  

“Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same,” Bird says. 

Learn more about Microsoft’s Responsible AI work.

Lead illustration by Makeshift Studios / Rocio Galarza. Story published on January 22, 2025