By Brad Smith, Vice Chair & President; Natasha Crampton, Chief Responsible AI Officer
The following is the foreword to the inaugural edition of our annual Responsible AI Transparency Report. The full report is available at this link.
We believe we have an obligation to share our responsible AI practices with the public, and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.
In 2016, our Chairman and CEO, Satya Nadella, set us on a clear course to adopt a principled and human-centered approach to our investments in artificial intelligence (AI). Since then, we have been hard at work building products that align with our values. As we design, build, and release AI products, six values – transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security – remain our foundation and guide our work every day.
To advance our transparency practices, in July 2023, we committed to publishing an annual report on our responsible AI program, taking a step that reached beyond the White House Voluntary Commitments that we and other leading AI companies agreed to. This is our inaugural report delivering on that commitment, and we are pleased to publish it on the heels of our first year of bringing generative AI products and experiences to creators, non-profits, governments, and enterprises around the world.
As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve. We’ve been innovating in responsible AI for eight years, and as we evolve our program, we learn from our past to continually improve. We take seriously our responsibility not only to secure our own knowledge but also to contribute to the growing corpus of public knowledge, to expand access to resources, and to promote transparency in AI across the public, private, and non-profit sectors.
In this inaugural annual report, we provide insight into how we build applications that use generative AI; how we make decisions about and oversee the deployment of those applications; how we support our customers as they build their own generative AI applications; and how we learn, evolve, and grow as a responsible AI community. First, we provide insights into our development process, exploring how we map, measure, and manage generative AI risks. Next, we offer case studies to illustrate how we apply our policies and processes to generative AI releases. We also share details about how we empower our customers to build their own AI applications responsibly. Last, we highlight how the growth of our responsible AI community, our efforts to democratize the benefits of AI, and our work to facilitate AI research benefit society at large.
There is no finish line for responsible AI. And while this report doesn’t have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices. We invite the public, private organizations, non-profits, and governing bodies to use this first transparency report to accelerate the incredible momentum in responsible AI we’re already seeing around the world.