AI Explained

Hear about the latest topics in AI and our unique approach

AI basics

  • Artificial intelligence (AI): Artificial intelligence is the ability of a computer system to deal with ambiguity by making predictions from previously gathered data and learning from errors in those predictions, so that it can generate newer, more accurate predictions about how to behave in the future.
  • Machine Learning (ML): Machine learning is the process of using mathematical models of data to help a computer learn without direct instruction. It’s considered a subset of artificial intelligence (AI). Machine learning uses algorithms to identify patterns within data, and those patterns are then used to create a data model that can make predictions. With increased data and experience, the results of machine learning are more accurate.
  • Machine learning techniques:
    • Supervised learning: Works with datasets that have labels or structure; the data acts as a teacher that “trains” the model, improving its ability to make a prediction or decision.
    • Unsupervised learning: Works with datasets that have no labels or structure, finding patterns and relationships by grouping the data into clusters.
    • Reinforcement learning: A model learns by trial and error through a feedback loop, adjusting its behavior based on the feedback its actions receive in order to improve future outcomes. (A short code sketch contrasting supervised and unsupervised learning follows this list.)
  • Transfer Learning: In machine learning, it is desirable to be able to transfer knowledge learned on some “source” task to downstream “target” tasks. This is known as transfer learning, a simple and efficient way to obtain performant machine learning models, especially when there is little training data or compute available for solving the target task. (A minimal code sketch of this pattern appears after this list.)
  • Deep learning / neural networks: Deep learning is an advanced type of machine learning that uses networks of algorithms inspired by the structure of the brain, known as neural networks. A deep neural network has nested neural nodes, and each question that it answers leads to a set of related questions. Deep learning typically requires a large dataset to train on; training sets for deep learning are sometimes made up of millions of data points. After a deep neural network has been trained on these large datasets, it can handle more ambiguity than a shallow network. That makes it useful for applications like image recognition, where AI needs to find the edges of a shape before it can identify what’s in the image. (A minimal network sketch appears after this list.)
  • Transformer model: A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data and tracking relationships within it, like word order in a sentence. It is used primarily in natural language processing (NLP) and computer vision (CV) but can be applied to other scenarios, including fraud detection or helping to design new medicines. Transformer-based language models have driven rapid progress in NLP in recent years, fueled by computation at scale, large datasets, and advanced algorithms and software to train these models. Language models with large numbers of parameters, more data, and more training time acquire a richer, more nuanced understanding of language. As a result, they generalize well as effective zero- or few-shot learners, with high accuracy on many NLP tasks and datasets. Researchers at Microsoft have been at the forefront of deep learning transformer-based models, including Turing for rich language understanding and Florence for visual recognition. (A minimal sketch of the self-attention computation appears after this list.)
  • Few-shot/zero-shot learning: Deep neural networks, including pre-trained large language models like Turing-NLG and GPT-3, require thousands of labeled training examples to obtain state-of-the-art performance on downstream tasks and applications. Such a large number of labeled examples is difficult and expensive to acquire in practice, especially when scaling these models to hundreds of different languages and thousands of different tasks and domains. Microsoft has researched few-shot and zero-shot learning techniques that obtain state-of-the-art performance while using very few or no labels for the target task. (A short prompting example after this list illustrates the difference.)
  • Generative AI: Generative AI refers to a category of AI that uses systems called neural networks to analyze data, find patterns, and use those patterns to generate new output, such as text, photos, video, code, data, and more. Examples of how Microsoft is using generative AI include GitHub Copilot, the new Bing and Bing Chat experience, new Viva Sales and Dynamics 365 experiences, and DALL-E in Designer and Bing Image Creator. We are also making this technology available to customers through the Azure OpenAI Service.
  • ChatGPT: ChatGPT was built by OpenAI and is fine-tuned from a model in the GPT-3.5 series, which OpenAI finished training in early 2022. ChatGPT was trained on an Azure AI supercomputing infrastructure and uses a transformer-based neural network architecture, which is pre-trained on a large dataset of text and fine-tuned on conversational interactions. When a user inputs a statement or question, ChatGPT generates a response by analyzing the input and considering the context of the conversation. ChatGPT can also be fine-tuned for specific tasks such as answering questions or providing customer service. With ChatGPT available in preview in Azure OpenAI Service, developers can integrate custom AI-powered experiences directly into their own applications, including enhancing existing bots to handle unexpected questions, recapping call center conversations to enable faster customer support resolutions, creating new ad copy with personalized offers, automating claims processing, and more, all backed by the unique supercomputing and enterprise capabilities of Azure. We’re commonly asked if we incorporate ChatGPT into our own products. We do not have any direct integrations with ChatGPT, but we do integrate GPT models. For example, we announced new versions of the Bing search engine, the Edge browser, and a new chat experience, brought together and powered by AI, to serve as your copilot for the web. This experience is powered by a new, next-generation OpenAI large language model that is more powerful than ChatGPT and customized for search.
  • Copilot: A copilot is an application that uses modern AI/LLMs like GPT-4 to assist people with complex (cognitive) tasks. We first introduced the concept of a copilot nearly two years ago with GitHub Copilot, an AI pair programmer that assists developers with writing code. We believe the copilot represents both a new paradigm in AI-powered software and a profound shift in the way that software is built – from imagining new product scenarios, to the user experience, the architecture, the services that it uses, how to think about safety and security, and more.
  • Plugin: A critical component of this vision is the copilot’s ability to interact with other software through extensibility tools called plugins. Plugins are tools first introduced for ChatGPT and more recently for Bing that augment the capabilities of AI systems, enabling them to interact with APIs from other software and services to retrieve real-time information, incorporate company and other business data, perform new types of computations, and act on the user’s behalf.
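
To make the difference between supervised and unsupervised learning concrete, here is a minimal sketch in Python using scikit-learn. The tiny dataset, the choice of logistic regression and k-means, and the hyperparameters are illustrative assumptions, not a description of any Microsoft system.

```python
# Minimal contrast between supervised and unsupervised learning with scikit-learn.
# The data and model choices are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: labeled examples act as the "teacher" for a prediction task.
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])                      # labels provided with the data
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[2.5], [10.5]]))            # small values -> class 0, large -> class 1

# Unsupervised: no labels; the algorithm groups the data into clusters on its own.
clustering = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clustering.labels_)                             # cluster assignments found from structure alone
```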
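
The transfer learning pattern can be sketched in a few lines: a model pre-trained on a “source” task is reused, its backbone is frozen, and only a small new head is trained for the “target” task. This sketch assumes PyTorch and torchvision; the ResNet-18 backbone, the 10-class target task, and the dummy batch are placeholders.

```python
# Sketch of transfer learning: reuse a pretrained "source" model, train a new head
# for the "target" task. Assumes PyTorch + torchvision; the 10-class head is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # knowledge from the source task

for param in model.parameters():                 # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)   # new head for the target task (10 classes assumed)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # only the new head is updated
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch:
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```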
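
As a rough picture of the “nested” layers described in the deep learning bullet, the sketch below stacks a few fully connected layers into a small network. The layer sizes, the 10-class output, and the random batch are placeholders; a real image-recognition model would be far larger.

```python
# Minimal deep neural network: stacked layers of simple units, each building on the last.
# Layer sizes and the random input batch are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # first hidden layer learns simple features
    nn.Linear(128, 64), nn.ReLU(),    # deeper layers combine them into more abstract ones
    nn.Linear(64, 10),                # output layer, e.g. scores for 10 image classes
)

batch = torch.randn(32, 784)          # e.g. 32 flattened 28x28 images
scores = model(batch)
print(scores.shape)                   # torch.Size([32, 10]): one score per class per image
```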
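
Self-attention, the mechanism named in the transformer bullet, can be written down compactly. The NumPy sketch below shows scaled dot-product attention over a single toy sequence; the dimensions, random inputs, and projection matrices are placeholders rather than a trained model.

```python
# Minimal scaled dot-product self-attention, the core computation in a transformer layer.
# Shapes and random inputs are placeholders for illustration.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ V                                # each output mixes all tokens, weighted by relevance

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                               # e.g. a four-token sentence
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (4, 8): one contextualized vector per token
```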
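
The difference between zero-shot and few-shot learning is easiest to see with prompting: instead of gathering thousands of labeled examples, a handful of labeled examples (or none at all) are placed directly in the text sent to a large language model. The reviews and labels below are invented for illustration, and no specific model or API is assumed.

```python
# Zero-shot vs. few-shot prompting: the task description (and optionally a few labeled
# examples) goes straight into the prompt, rather than into a labeled training set.

zero_shot_prompt = (
    "Classify the sentiment of the review as Positive or Negative.\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)

few_shot_prompt = (
    "Review: I loved every minute of it.\nSentiment: Positive\n\n"
    "Review: The screen cracked on day one.\nSentiment: Negative\n\n"
    "Review: The battery dies within an hour.\nSentiment:"
)

print(few_shot_prompt)  # this text would be sent to a language model's completion endpoint
```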

Large AI Models

Researchers at Microsoft and elsewhere are making progress on developing large AI systems that can process information in more sophisticated ways. The recent explosion of training data, coupled with large amounts of fast compute facilitated by the cloud, has enabled new applications of neural network architectures that have allowed us to train large AI models that can accomplish a wide variety of tasks at an unprecedented scale. We’ve used our supercomputer infrastructure to train state-of-the-art AI models, including Turing for rich language understanding, Z-Code and Z-Code++ for translation and summarization across hundreds of languages, and Florence for visual recognition. OpenAI also used Microsoft infrastructure to train GPT, DALL-E and Codex.

However, these models are only valuable if they are accessible and cost-effective for others to use and build on top of, so we’re working to make AI technology more efficient, in both training and application. We’ve made advances with DeepSpeed for training efficiency and with ONNX Runtime, which provides high-performance inference support for large Transformer-based models, helping to optimize cost and latency.
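
As a rough illustration of the inference side, this is what running an exported model with ONNX Runtime’s Python API typically looks like; the model file, execution provider, and input shape are placeholders and not tied to any particular Microsoft model.

```python
# Sketch of high-performance inference with ONNX Runtime.
# "model.onnx" and the (1, 128) input shape are placeholders for an exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name                  # input name defined at export time
dummy_input = np.random.randn(1, 128).astype(np.float32)   # placeholder batch

outputs = session.run(None, {input_name: dummy_input})     # None -> return all model outputs
print(outputs[0].shape)
```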

It’s our goal to responsibly advance cutting-edge AI research and democratize these AI models as a new technology platform. Microsoft is already using these models in a broad range of scenarios across our services like Bing, Office, Dynamics 365, Power Platform, GitHub and LinkedIn. Now, we’re making these transformational capabilities, built on cutting-edge advancements in AI, available for organizations to build upon and customize through Azure AI.

Our approach to deployment/development

We’re optimistic about AI’s potential to foster innovation, create economic progress, and accelerate productivity, satisfaction, and growth. Microsoft has been working for years to advance the field of AI and to publicly guide how these technologies are created and used on our platforms in responsible and ethical ways. It’s our goal to democratize breakthroughs in AI, including large language models, in a responsible way, to help people and organizations be more productive and go on to solve the most pressing problems our society faces today. Last year we shared an update on Microsoft’s Responsible AI Standard, a framework to guide how we build AI systems. We’re committed to sharing what we have learned, inviting feedback from others, and continuing to contribute to the broader discussion about building better norms and practices around AI. We also believe in intentional and iterative deployment, which means we’re committed to taking the time to understand potential harms, working to mitigate them, and monitoring on an ongoing basis to ensure our standards continue to be met. For example, we took a measured approach to releasing Microsoft Designer, Bing Image Creator, and Azure OpenAI Service, which allows us to gather feedback, apply learnings, and improve the experience before expanding further.


Our work with OpenAI

Microsoft and OpenAI have partnered closely since 2019 to accelerate breakthroughs in AI, forming our partnership around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform.

Through our initial investment and collaboration, Microsoft and OpenAI pushed the frontier of cloud supercomputing technology, announcing our first top-5 supercomputer in 2020 and subsequently constructing multiple AI supercomputing systems at massive scale. OpenAI has used this infrastructure to train its breakthrough models, which are now deployed in Azure to power category-defining AI products like GitHub Copilot, DALL·E 2 and ChatGPT. As OpenAI’s exclusive cloud provider, Azure powers all OpenAI workloads across research, products and API services. We’ve also increased our investments in the development and deployment of specialized supercomputing systems to accelerate OpenAI’s groundbreaking independent AI research and have continued to build out Azure’s leading AI infrastructure to help customers build and deploy their AI applications on a global scale.

We deploy OpenAI’s models across our consumer and enterprise products, including GitHub Copilot, DALL·E 2 in Microsoft Designer and Bing Image Creator, and PowerApps Ideas, and through Azure OpenAI Service, which empowers organizations and developers to build cutting-edge AI applications through direct access to OpenAI models, backed by Azure’s trusted, enterprise-grade capabilities and AI-optimized infrastructure and tools.

These innovations have captured imaginations and introduced large-scale AI as a powerful technology platform that we believe will create transformative impact at the magnitude of the personal computer, the internet, mobile devices and the cloud.

Underpinning all of our efforts is Microsoft and OpenAI’s shared commitment to building AI systems and products that are trustworthy and safe. OpenAI’s leading research on AI Alignment and Microsoft’s Responsible AI Standard not only establish a leading and advancing framework for the safe deployment of our own AI technologies, but will also help guide the industry toward more responsible outcomes.

We’ll continue to work with OpenAI to explore solutions that harness the power of AI and advanced natural language generation, and we’re excited for future collaboration together.

Future of work

We believe that AI is going to be the ultimate amplifier. It will augment the work that people do by freeing up time for more creativity, imagination, and human ingenuity – leading to not only an increase in productivity, but satisfaction. We’re just scratching the surface on the power of these large language models. Building on the success of GitHub Copilot, we envision a world where everyone, no matter their profession, can have a copilot for everything they do.

Microsoft is focused on responsibly creating AI that enables people to achieve greater productivity, growth, and satisfaction in the work they do. When people are freed from repetitive or tedious tasks, they can tap into their human ingenuity to focus on more strategic or creative tasks.

As AI systems evolve, we expect the nature of some jobs will change and that new jobs will be created. These shifts are similar to the changes we’ve seen with other major technological advances, such as the invention of the printing press, the telephone, or the internet.

We expect this shift will require new ways of thinking about skills and training to ensure that workers are prepared for the future and that there is enough talent available for critical jobs.

To help people get the training and skills they need to thrive in today’s economy and prepare for the future, Microsoft is focusing on three areas:

  • Preparing today’s students for tomorrow’s jobs
  • Helping today’s workers gain the skills they need to participate in the digital economy
  • Working with nonprofits, civic organizations and government leaders to help more people access digital skills.

While we can’t say with certainty how the job market will be impacted by AI, we are committed to developing this technology responsibly with an understanding of its impact on society.

AI and education

We fundamentally believe that AI will not only amplify what people can do, but also inspire curiosity and creativity to explore new applications. As with any paradigm shift in technology or the advent of new tools, we’ll need to have conversations about how best to incorporate these technologies into areas like education and how to help students use these tools critically. As adoption of this technology increases, we are committed to ensuring its benefits are shared equitably across society, institutions, and organizations.

AI offers significant opportunities in education, such as the prospect of a personal digital tutor for every child, helping democratize access to high-quality education. It can help children advance critical thinking and creative expression, and it can assist teachers in developing creative new ways to engage children as it frees them from administrative and repetitive tasks. As with any new advancement, teachers will have to modify current practices to mitigate risks and ensure that children are able to realize the benefits of AI technology. We will have to help children learn new skills to engage with new technology, helping them develop the ability to ask the right questions and use AI to communicate in more efficient and effective ways.
