- The “Economic Impact of Generative AI” report from Access Partnership, ELSAM, and Microsoft reveals what the economic opportunity of generative AI means for industry and workforce readiness in Indonesia. The report in English can be accessed here.
- Microsoft shared a five-point blueprint that governments can consider for AI policy, law, and regulation. The blueprint can be accessed here.
- Microsoft announced a new Copilot Copyright Commitment to strengthen intellectual property support for commercial customers using the commercial Microsoft Copilot and Bing Chat Enterprise services and their content output, as long as customers follow the product settings.
- Microsoft released Azure AI Content Safety, a new service that helps customers detect and filter harmful user-generated and AI-generated content in their apps and services.
Jakarta, 30 October 2023 – “The Economic Impact of Generative AI: The Future of Work in Indonesia” report released by Access Partnership in collaboration with ELSAM and supported by Microsoft reveals that the use of Generative AI to complement work activities could help unlock USD 243.5 billion in production capacity across the Indonesian economy. This is equivalent to 18% of Indonesia’s GDP in 2022.
Dharma Simorangkir, President Director of Microsoft Indonesia, said, “The new generation of AI, Generative AI, helps us interact with data in new ways, from summarizing text and detecting anomalies to recognizing images. Its natural language interface allows us to interact with this technology using everyday language, and its ability as a reasoning engine helps us identify patterns and draw insights much faster. The combination of these two capabilities allows every person and organization to have their own copilot; sparking creativity, accelerating discovery, and increasing efficiency. When utilized responsibly, all of this will have a positive impact on the economy.”
The positive impact of Generative AI is substantial, and organizations of all sizes and industries, as well as individuals in Indonesia, have started to integrate this technology into their business operations and daily lives: for example, to personalize customer service, support learning about new technologies, or generate new ideas.
“These examples show how AI can help people focus on the essential elements of their tasks, not replace them. After all, AI can only work with data provided by humans, and it is developed to improve human competence,” Dharma continued.
New opportunities are still on the horizon. To realize them, the same report details three aspects that require our attention: (1) improving access and usage, (2) managing risk, and (3) encouraging innovation, all with responsibility as the main foundation.
Improving Access and Usage
Improving AI access and usage requires adequate infrastructure and a skilled workforce. Generative AI’s natural language and reasoning engine capabilities can also democratize AI, making the technology easier for individuals to use. In practice, new skills still need to be mastered, such as prompting, analytical evaluation, and problem-solving. At the same time, AI regulations that govern the responsible development and use of AI play an important role in maximizing the benefits and positive impact of the technology.
“In a democratic society, one of our fundamental principles is that no one is above the law. That’s why we feel it’s appropriate for regulators and policymakers to increase oversight, and to consider new laws and regulations. We will continue to participate actively by sharing our experiences and insights on responsible AI practices. We have also released a whitepaper titled Governing AI: A Blueprint for the Future, which seeks to answer the question of how we need to manage AI,” said Ajar Edi, Director of Government Affairs, Microsoft Indonesia & Brunei Darussalam.
Managing Risk

Efforts to unlock opportunities and mitigate risks are not limited to increasing access or designing comprehensive regulations; they also require coordinated efforts to formulate responsible AI practices, in both development and use. Such a formulation can also become part of a company’s strategy, or of an individual’s principles for using AI.
“When we at Microsoft adopted six AI ethical principles in 2018, we noted that one principle, accountability, is the foundation for all the others: fairness, reliability and safety, privacy and security, inclusiveness, and transparency. This is a fundamental need to ensure that machines remain effectively supervised by humans, and that the people who design and operate them remain accountable to everyone else. In short, we must always ensure that AI remains under human control. This should be a top priority for tech companies and governments alike,” Ajar continued.
To help create an overall responsible AI ecosystem, Microsoft has released the Microsoft Responsible AI Standard version 2 and the Microsoft Responsible AI Impact Assessment Report to the public, the result of years of experience, learning, and feedback that Microsoft has received.
Encouraging Innovation

The last aspect is finding the right balance between protection and innovation. As the development of AI policy and regulatory frameworks continues, questions and concerns remain about the use of generative AI technologies in realizing new opportunities. Close collaboration between the government and the private sector is therefore needed to foster an innovative environment.
To drive such innovation, Microsoft has announced three AI Customer Commitments for enterprise customers, with the Copilot Copyright Commitment as one of their extensions. The Copilot Copyright Commitment strengthens intellectual property indemnification support for commercial Copilot services: if a third party sues a commercial customer for copyright infringement over the use of Microsoft Copilot or its output, Microsoft will defend the customer and pay any damages or settlement costs resulting from the lawsuit, as long as the customer has used the guardrails and content filters built into Microsoft products.
These guardrails and content filters can be found, for example, in Azure AI Content Safety, which has been generally available since 17 October 2023. This new service helps detect and filter harmful user-generated and AI-generated content in customers’ apps and services. Content Safety includes text and image detection to find offensive, risky, or unwanted content, such as profanity, adult content, gore, violence, hate speech, and more.
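For developers, the detection-and-filter flow described above can be sketched in Python. This is a minimal illustration, not an official sample: the client and request types (`ContentSafetyClient`, `AnalyzeTextOptions`) follow the azure-ai-contentsafety SDK, while the `should_block` helper, its severity threshold, and the endpoint/key placeholders are assumptions for the sketch.

```python
# Minimal moderation sketch around Azure AI Content Safety (illustrative).
# The service returns a severity score per harm category (e.g. Hate,
# SelfHarm, Sexual, Violence); 0 means safe, higher values mean more harmful.

def should_block(severities: dict[str, int], threshold: int = 2) -> bool:
    """Block content if any harm category reaches the chosen threshold.

    The threshold is an application-level assumption, tuned per scenario.
    """
    return any(sev >= threshold for sev in severities.values())

def analyze_text_severities(endpoint: str, key: str, text: str) -> dict[str, int]:
    """Call the text-analysis API and map each category to its severity."""
    # Imported here so the filtering logic above stays stdlib-only;
    # requires `pip install azure-ai-contentsafety`.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return {c.category: c.severity for c in result.categories_analysis}
```

In use, an application would feed the severities returned by `analyze_text_severities` into `should_block` and reject or flag the content accordingly, keeping the moderation decision (and its threshold) under the application's control rather than hard-coded in the service call.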