
Building trust in the digital world

As we celebrate Safer Internet Day on 5 February, let us take a closer look at some of the key trust and cybersecurity concerns around technology, and at how businesses, governments and other stakeholders can work together to create a more trusted and secure digital world for everyone.

Powerful technologies, such as cloud computing and artificial intelligence (AI), are driving rapid and seismic changes in our world – industries are being reinvented, jobs being redefined, and economies being transformed.

But this technology-driven transformation also raises important questions about how technology is developed and used, and about its impact on society. People are asking whether they can trust the technology they use, and whether they can trust the organizations designing, developing and deploying it.

To address some of these pressing questions, and to discuss how we can foster greater trust in the digital age, Mary Jo Schrade, Assistant General Counsel and Regional Director, Digital Crimes Unit (DCU), Microsoft Asia, engaged with three technology thought leaders – Jared Ragland, Antony Cook and Dr. Biplab Sikdar – in a panel discussion at Microsoft’s Digital Trust Asia event.

Here is an extract of the key areas of discussion:

On Building Policies to Foster Trust

Jared Ragland: What we see throughout Asia Pacific and the world is the need for good foundational rules that governments can establish to support the expectations of both the private sector and consumers.

There are two areas we focus on heavily, and both are foundational for building trust. The first is cybersecurity – it is important that governments have rules and systems in place so that participants in the ecosystem share expectations about how to respond to emerging threats. The other is personal data protection, which ensures that the services and technologies consumers use are adequately protected, and that their personal information is used in ways they expect.

Getting these two foundational pillars wrong has real consequences, and we have seen enough examples to prove it.

Antony Cook: One of the risks in the current policy environment is that regulations are being set before we understand what a technology can do. If policymakers do not thoroughly understand how specific technologies work and what their implications are, they will struggle to set appropriate regulations.

Therefore, it is important to have open dialogue between policymakers and technology leaders about new technologies and how they should be regulated. For example, we have called for thoughtful government regulation of facial recognition technology because we believe developing norms and a regulatory framework around acceptable uses requires the participation of both the public and private sectors.

On Securing the Digital World

Dr. Biplab Sikdar: People are more aware of cybersecurity now than ever. Cybersecurity courses are definitely well-subscribed, and I see a change in the mindset of students, and the public in general, on security issues. Our programmers and new computer science graduates know much more about security now, and security is no longer an afterthought.

However, in hyper-competitive environments such as the start-up scene, companies are often focused on how quickly they can get to market. They may prioritize go-to-market strategy or technological advancement, with the result that security becomes an afterthought. That is what has happened with many Internet of Things (IoT) companies.

Antony Cook: The proliferation of connected devices and cloud-based services has opened new avenues of attack for cybercriminals and other malicious actors. Over 1 billion people were victims of cybercrime last year. Protecting our customers and the wider community is a responsibility we all must take seriously. Technology has an important role to play, but technology alone is not enough. That is why we need to take a broad view of security that goes beyond technology to include industry and policy partnerships.

On Building Trust in AI

Jared Ragland: This issue really underscores the importance of bringing diversity, and people trained in subjects like the humanities, into the AI development process. We need to look at how people who are well-versed in topics such as ethics, as opposed to engineers who are driven to solve problems, can help address the issues we face around AI. Clearly, there is a need for a more diverse educational pipeline.

Dr. Biplab Sikdar: As an academic, what is top of mind for me is fairness. Any AI system being rolled out must be fair in every respect. For example, if an AI system is screening resumes and deciding whom to interview for a job, we need to ensure it is fair. Biases should not creep into the algorithm from the way we train it or from the data we feed it.

Antony Cook: The approach we have taken at Microsoft is to think about how we want to develop AI responsibly, and to be transparent and clear with our customers and stakeholders about the principles we will apply to its development. So we have developed a set of six principles that we believe should govern the development of AI: systems should be fair; reliable and safe; private and secure; inclusive; transparent; and accountable.

The other part that I think is critical to the development of AI is the set of discussions that must take place among civil society, governments and industry stakeholders.

There are different points of view, and some in the tech industry have advocated that self-regulation is the way to go. One thing Microsoft has been clear about is that regulation is appropriate and needed, because the development and use of AI will necessarily raise issues that affect more than just an organization’s own interests. While Microsoft and other technology companies have a responsibility to self-regulate, the interests of society and governments also need to be considered as we contemplate the right approach. As such, the development of AI should be a broader debate that involves all the appropriate stakeholders.

At the end of the day, we understand the value of partnership, of backing words with actions, of being transparent, and of engaging in dialogue and exchanging knowledge with others. At Microsoft, our approach begins with a simple acknowledgement: in this time of rapid change and growth in AI, we cannot move ahead without considering the impact of this new technology on individuals, businesses and society.