Microsoft Switzerland joins Pilot Gen AI Redteaming Network by ETH and EDA


The Swiss Call for Trust & Transparency has today launched a Pilot Gen AI Redteaming Network. The network unites all stakeholders – tech companies and public research institutions alike – to work collectively on disclosing, replicating, and mitigating the most urgent safety issues of generative AI systems. As of mid-January 2024, 12 major tech companies have committed to joining forces with the network, thereby significantly advancing AI safety.

Large language models (LLMs) are natural language processing programs that use artificial neural networks to generate written responses. They pose risks that are not yet fully explored, including (1) potentially biased or inaccurate content, depending on the data they were trained on; (2) vulnerability to abuse, such as being used to create custom malware; (3) potential legal issues, such as copyright violation; and (4) potential behavioral issues, such as providing harmful advice. These issues raise ethical challenges that need to be addressed to make LLMs beneficial and equitable and to enable their wider adoption.

Tech companies working on LLMs are making marked efforts to assess and manage risks. After all, they, too, are invested in gaining the public’s trust and driving forward wide, safe adoption of their products. However, these attempts are individual to each company and are therefore fragmented. Another issue is that users cannot always ascertain whether an AI system has been tested or verified.

“Securing AI systems is a team sport,” says Catrin Hinkel, CEO of Microsoft Switzerland. “At Microsoft, we firmly believe that when you create powerful technologies, you also must ensure the technology is developed and used responsibly. We are committed to a practice of responsible AI by design, guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.”

Exploring and disclosing threats for the benefit of all users

This is where the Swiss Call for Trust and Transparency in AI comes in, a joint initiative of the Swiss Foreign Ministry and the ETH AI Center. As a cornerstone of their work, the initiators today launched, at AI House Davos, a Risk Exploration and Mitigation Network that will investigate generative AI systems from both the attacker’s and the defender’s perspective and share its findings with all participants.

The academic lead is shared between Florian Tramèr, Professor at ETH Zurich, where he leads the Secure and Private AI Lab, and associated faculty member of the ETH AI Center; and Carmela Troncoso, Associate Professor at the Security and Privacy Engineering Laboratory of EPFL. The coordination of the efforts is overseen by Alexander Ilic, Executive Director of the ETH AI Center.

As of mid-January 2024, 12 major tech companies have committed to taking part in the network. Those are: Aleph Alpha, appliedAI Institute for Europe, AWS, Cohere, Hugging Face, IBM, Microsoft, Roche, SAP, Swisscom, The Global Fund, and Zurich Insurance Group.

“Safe and responsible AI is a must for a data-driven and scaled organization like Zurich Insurance Group,” says Ericson Chan, Group Chief Information & Digital Officer at Zurich Insurance Group. “It is critical for us to work together, from Gen AI model training to inferencing, so we continue to inspire Digital Trust at the dawn of this hyper-innovation era.”

Towards effective testing and regulation

As a fully transparent system, this red-teaming network will allow all stakeholders – including tech companies and public research institutions – to work collectively on disclosing and mitigating the most urgent issues. Those efforts will also aid regulators in developing effective, standardized AI testing.

“Looking at AI systems with the mindset of a bad actor tells us not just how to secure AI but also how to make the digital space as a whole more resilient,” said Sebastian Hallensleben, Co-Chair for AI Risk & Accountability at OECD ONE.AI, during the launch event today in Davos.

Companies in the network will share scenarios and threat models for their AI models with researchers, so that the models can be tested and attacked to reveal potential vulnerabilities. Results will be shared first within the group, so that the participants can work on mitigations before the findings are disclosed publicly. All results will be fed into a database of attack vectors and mitigation strategies, thereby fostering collaboration and knowledge-sharing among all stakeholders.

“AI is crucial for the world, that’s clear – the technology’s benefits have been paramount. But we as a society cannot overlook potential risks. They need to be mitigated from the get-go, and for that we need openness and transparency. We at IBM welcome the creation of the Risk Exploration and Mitigation Network for Generative AI, which will be a great complement to other initiatives, such as the recently launched AI Alliance, to ensure AI is developed and deployed safely,” said Alessandro Curioni, IBM Research VP of Europe and Africa and Director of IBM Research Europe – Zurich.

Related Posts

Microsoft’s Work Trend Index 2024: Swiss Knowledge Workers Outpace Global AI Adoption Trend

Microsoft’s Annual Work Trend Index highlights a significant embrace of generative AI among Swiss knowledge workers, with the vast majority incorporating it into their regular workflow. Additionally, a substantial number of Swiss managers prioritize AI proficiency over experience when hiring new talent. However, over half express concern about their organization’s top leadership not having a clear strategy or vision for AI integration.


Providing further transparency on our responsible AI efforts

We believe we have an obligation to share our responsible AI practices with the public, and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.