Microsoft is bringing Microsoft 365 Copilot to more customers and releasing new research that shows how AI will change the way we work
REDMOND, Wash. — May 9, 2023 — Earlier this year, Microsoft Corp. introduced Microsoft 365 Copilot, which will bring powerful new generative AI capabilities to apps millions of people use every day, such as Microsoft Word, Excel, PowerPoint, Outlook and Microsoft Teams.
On Tuesday, the company announced it is expanding access to the Microsoft 365 Copilot preview and introducing new features. The company also released new data and insights from its 2023 Work Trend Index report: “Will AI Fix Work?”
The data shows that the pace of work has accelerated faster than humans can keep up, and it’s impacting innovation. Next-generation AI will lift the weight of work. Organizations that move first to embrace AI will break the cycle — increasing creativity and productivity for everyone.
“This new generation of AI will remove the drudgery of work and unleash creativity,” said Satya Nadella, chairman and CEO, Microsoft. “There’s an enormous opportunity for AI-powered tools to help alleviate digital debt, build AI aptitude and empower employees.”
The report shares three key insights for business leaders as they look to understand and responsibly adopt AI for their organization:
Digital debt is costing us innovation: We’re all carrying digital debt: the volume of data, emails and chats has outpaced our ability to process it all, and every minute spent managing that debt is a minute not spent on creative work. There is an opportunity to make our existing communications more productive. Sixty-four percent of employees say they don’t have enough time and energy to get their work done, and those employees are 3.5x more likely to say they struggle with being innovative or thinking strategically. Of time spent in Microsoft 365, the average person spends 57% communicating and only 43% creating.
There’s a new AI-employee alliance: For employees, the promise of relief outweighs job loss fears, and managers are looking to empower employees with AI, not replace them. Forty-nine percent of people say they’re worried AI will replace their jobs, but even more — 70% — would delegate as much work as possible to AI in order to lessen their workloads. In fact, leaders are 2x more likely to say that AI would be most valuable in their workplace by boosting productivity rather than cutting headcount.
Every employee needs AI aptitude: Every employee, not just AI experts, will need new core competencies such as prompt engineering in their day-to-day work. Eighty-two percent of leaders anticipate employees will need new skills in the AI era, and as of March 2023, U.S. job postings on LinkedIn mentioning GPT have increased by 79% year over year. This new, in-demand and AI-centric skillset will have ripple effects across everything from resumes to job postings.
“The pace and volume of work have increased exponentially and are outpacing humans’ ability to keep up,” said Jared Spataro, CVP, Modern Work and Business Applications. “In a world where creativity is the new productivity, digital debt is more than an inconvenience — it’s a threat to innovation. Next-generation AI will lift the weight of work and free us all to focus on the work that matters.”
To empower businesses in the AI era, Microsoft is introducing the Microsoft 365 Copilot Early Access Program with an initial wave of 600 enterprise customers worldwide in an invitation-only paid preview program. In addition, the following new capabilities will be added to Microsoft 365 Copilot and Microsoft Viva:
Copilot in Whiteboard will make Microsoft Teams meetings and brainstorms more creative and effective. Using natural language, you can ask Copilot to generate ideas, organize ideas into themes, create designs that bring ideas to life, and summarize Whiteboard content.
By integrating DALL-E, OpenAI’s image generator, into Copilot in PowerPoint, users will be able to ask Copilot to create custom images to support their content.
Copilot in Outlook will offer coaching tips and suggestions on clarity, sentiment and tone to help users write more effective emails and communicate more confidently.
Copilot in OneNote will use prompts to draft plans, generate ideas, create lists and organize information to help customers find what they need easily.
Copilot in Viva Learning will use a natural language chat interface to help users create a personalized learning journey including designing upskilling paths, discovering relevant learning resources and scheduling time for assigned trainings.
To help every customer get AI-ready, Microsoft is also introducing the Semantic Index for Copilot, a new capability we’re starting to roll out to all Microsoft 365 E3 and E5 customers.
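Microsoft has not published implementation details of the Semantic Index, but semantic indexing in general can be pictured as an embedding store queried by vector similarity rather than keyword match. The sketch below is purely illustrative: the bag-of-words "embedding" and cosine scoring stand in for the learned embedding model a real system would use.

```python
# Toy semantic index: embed documents as vectors, answer queries by
# cosine similarity. The bag-of-words "embedding" is illustrative only;
# a production index would use a learned embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticIndex:
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, doc: str) -> None:
        self.docs.append((doc, embed(doc)))

    def search(self, query: str, k: int = 1) -> list[str]:
        # Rank stored documents by similarity to the query vector.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]
```

The point of the structure is that a query like "sales figures northwest" retrieves the sales document even though the wording differs from the stored text; with learned embeddings, that retrieval also works across synonyms and paraphrases.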
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications, (425) 638-7777, [email protected]
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.
REDMOND, Wash., and VERONA, Wis. — April 17, 2023 — Microsoft Corp. and Epic on Monday announced they are expanding their long-standing strategic collaboration to develop and integrate generative AI into healthcare, combining the scale and power of Azure OpenAI Service1 with Epic’s industry-leading electronic health record (EHR) software. The partnership already includes enabling organizations to run Epic environments on the Microsoft Azure cloud platform.
This co-innovation is focused on delivering a comprehensive array of generative AI-powered solutions integrated with Epic’s EHR to increase productivity, enhance patient care and improve financial integrity of health systems globally. One of the initial solutions is already underway, with UC San Diego Health, UW Health in Madison, Wisconsin, and Stanford Health Care among the first organizations starting to deploy enhancements to automatically draft message responses.
“A good use of technology simplifies things related to workforce and workflow,” said Chero Goswami, chief information officer at UW Health. “Integrating generative AI into some of our daily workflows will increase productivity for many of our providers, allowing them to focus on the clinical duties that truly require their attention.”
Another solution will bring natural language queries and interactive data analysis to SlicerDicer, Epic’s self-service reporting tool, helping clinical leaders explore data in a conversational and intuitive way.
“Our exploration of OpenAI’s GPT-4 has shown the potential to increase the power and accessibility of self-service reporting through SlicerDicer, making it easier for healthcare organizations to identify operational improvements, including ways to reduce costs and to find answers to questions locally and in a broader context,” said Seth Hain, senior vice president of research and development at Epic.
Leading industry experts have highlighted the urgent need for health systems and hospitals to address intense pressures on costs and margins. Approximately half of U.S. hospitals finished 2022 with negative margins as widespread workforce shortages and increased labor expenses, as well as supply disruptions and inflationary effects, caused expenses to meaningfully outpace revenue increases.2 Industry participants recognize that achieving long-term financial sustainability through increased productivity and technological efficiency is a mission-critical strategic priority.3
“The urgent and critical challenges facing healthcare systems and their providers demand a comprehensive approach combining Azure OpenAI Service with Epic’s industry-leading technology,” said Eric Boyd, corporate vice president, AI Platform, Microsoft. “Our expanded partnership builds on a long history of collaboration between Microsoft, Nuance and Epic, including our work to help healthcare organizations migrate their Epic environments to Azure. Together we can help providers deliver significant clinical and business outcomes leveraging the power of the Microsoft Cloud and Epic.”
When creating technologies that can change the world, Microsoft believes organizations need to ensure that the technology is used responsibly. Microsoft is committed to creating responsible AI by design that is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft is putting those principles into practice across the company to develop and deploy AI that will have a positive impact on society, taking a cross-company approach through cutting-edge research, best-of-breed engineering systems, and excellence in policy and governance.
Visit the Microsoft, Nuance and Epic booths at the 2023 HIMSS Global Health Conference in Chicago to learn more about new and enhanced AI-powered solutions and areas of shared innovation.
About Epic
Epic develops software to help people get well, help people stay well, and help future generations be healthier. Visit www.epic.com/about.
1 Azure and Azure OpenAI Service, including any of their component technologies, are intended for general-purpose use and are not intended or made available: (1) as a medical device; (2) for the diagnosis of disease or other conditions, or in the cure, mitigation, treatment or prevention of a disease; or (3) as a substitute for the professional clinical advice, opinion, or judgment of a treating healthcare professional. Azure and Azure OpenAI Service have not been evaluated by the U.S. FDA or a similar regulatory agency as a medical device, and users of Azure and Azure OpenAI Service are responsible for ensuring the regulatory compliance of their use or of any solution they build using Azure and Azure OpenAI Service.
Trained across security and networking disciplines and armed with trillions of data signals, Security Copilot dramatically increases the reach, speed and effectiveness of any security team
REDMOND, Wash. — March 28, 2023 — Microsoft Corp. on Tuesday announced it is bringing the next generation of AI to cybersecurity with the launch of Microsoft Security Copilot, giving defenders a much-needed tool to quickly detect and respond to threats and better understand the threat landscape overall. Security Copilot will combine Microsoft’s vast threat intelligence footprint with industry-leading expertise to augment the work of security professionals through an easy-to-use AI assistant.
“Today the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers,” said Vasu Jakkal, corporate vice president, Microsoft Security. “With Security Copilot, we are shifting the balance of power in our favor. Security Copilot is the first and only generative AI security product enabling defenders to move at the speed and scale of AI.”
Security Copilot is designed to work seamlessly with security teams, empowering defenders to see what is happening in their environment, learn from existing intelligence, correlate threat activity, and make more informed, efficient decisions at machine speed.
Simplifying complexity and accelerating responses
In a world where there are 1,287 password attacks per second, fragmented tools and infrastructure have not been enough to stop attackers. And although attacks have increased 67% over the past five years, the security industry has not been able to hire enough security professionals to keep pace. Defenders are left overwhelmed, searching for well-disguised attacks within an impossibly large and ever-expanding volume of network traffic and other signals.
Security Copilot will simplify complexity and amplify the capabilities of security teams by summarizing and making sense of threat intelligence, helping defenders see through the noise of web traffic and identify malicious activity.
It will also help security teams catch what others miss by correlating and summarizing data on attacks, prioritizing incidents, and recommending the best course of action to remediate diverse threats in a timely manner.
Continually learning to augment the expertise of security teams
Security Copilot will also continually learn and improve to help ensure that security teams are operating with the latest knowledge of attackers, their tactics, techniques and procedures. The product will provide ongoing access to the most advanced OpenAI models to support demanding security tasks and applications. Its visibility into threats is powered by both the customer organization’s security data and Microsoft’s vast threat analysis footprint.
These capabilities can empower security teams of any size with the skills and abilities of much larger organizations. In addition, Security Copilot helps address skills shortages in cybersecurity by bridging knowledge gaps and enhancing workflows, threat actor profiles and incident reporting across teams.
“Advancing the state of security requires both people and technology — human ingenuity paired with the most advanced tools that help apply human expertise at speed and scale,” said Charlie Bell, executive vice president, Microsoft Security. “With Security Copilot we are building a future where every defender is empowered with the tools and technologies necessary to make the world a safer place.”
Built on the Microsoft platform and industry-leading threat intelligence
Microsoft is uniquely qualified to help customers explore and adapt AI to boost their cybersecurity defenses. Microsoft Security is actively tracking more than 50 ransomware gangs as well as more than 250 unique nation-state and cybercriminal organizations, and receives 65 trillion threat signals every day. Microsoft technology blocks more than 25 billion brute-forced password theft attempts every year, and more than 8,000 security professionals at Microsoft analyze more security signals than almost any other company — on average, Microsoft’s Security Operations Center analysts draw on over 100 different data sources.
Acquisitions like RiskIQ and Miburo give Microsoft a breadth of signal and depth of intelligence on threat actors that no one else has. Security Copilot also integrates natively with a growing list of Microsoft Security products, such as Microsoft Sentinel and Microsoft Defender, to help customers create an end-to-end experience across their entire security program.
Harnessing the power of AI, Microsoft 365 Copilot turns your words into the most powerful productivity tool on the planet
REDMOND, Wash. — March 16, 2023 — On Thursday, Microsoft Corp. announced it is bringing the power of next-generation AI to its workplace productivity tools with Microsoft 365 Copilot. Currently in testing with select commercial customers, Copilot combines the power of large language models (LLMs) with business data and the Microsoft 365 apps to unleash creativity, unlock productivity and uplevel skills.
Customers will experience Microsoft 365 Copilot in two ways
Microsoft 365 Copilot isn’t just a better way of doing the same things. It’s an entirely new way of working. Copilot will work alongside Microsoft 365 customers in two ways:
First, it is embedded in the Microsoft 365 apps people use every day — Word, Excel, PowerPoint, Outlook, Teams and more.
Second, the company also announced an entirely new experience: Business Chat. Business Chat works across the LLM, the Microsoft 365 apps, and a customer’s calendar, emails, chats, documents, meetings and contacts to do things that people weren’t able to do before. With natural language prompts like “tell my team how we updated the product strategy,” Business Chat will generate a status update based on the morning’s meetings, emails and chat threads.
With Copilot, the customer is always in control. Customers decide what to keep, modify or discard. With these new tools, people can be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams.
“Copilot combines the power of large language models with your data and apps to turn your words into the most powerful productivity tool on the planet,” said Jared Spataro, corporate vice president, Modern Work and Business Applications, Microsoft. “By grounding in your business content and context, Copilot delivers results that are relevant and actionable. It’s enterprise-ready, built on Microsoft’s comprehensive approach to security, compliance, privacy and responsible AI. Copilot marks a new era of computing that will fundamentally transform the way we work.”
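The press materials do not describe the mechanics, but "grounding in your business content and context" generally means retrieving relevant documents and placing them into the model's prompt so the answer draws on that content rather than only on the model's training data. A minimal, hypothetical sketch of that prompt assembly (not Microsoft's actual implementation):

```python
def build_grounded_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved
    business content (emails, meeting notes, files) rather than the
    model's general training data. Purely illustrative."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(retrieved_docs)
    )
    return (
        "Answer the question using only the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

In a pipeline like this, a retrieval step (for example, a semantic index over the tenant's data) selects `retrieved_docs`, and the assembled string is what actually gets sent to the language model.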
A whole new way to work
Microsoft 365 Copilot is seamlessly integrated in the apps people use every day, helping them stay in the flow of work and freeing them to focus more on the task at hand and less on the busy work.
Copilot in Word writes, edits, summarizes and creates right alongside people as they work.
Copilot in PowerPoint enables the creation process by turning ideas into a designed presentation through natural language commands.
Copilot in Excel helps unlock insights, identify trends or create professional-looking data visualizations in a fraction of the time.
Copilot in Outlook can help synthesize and manage the inbox to allow more time to be spent on actually communicating.
Copilot in Teams makes meetings more productive with real-time summaries and action items directly in the context of the conversation.
Copilot in Power Platform will help developers of all skill levels accelerate and streamline development with low-code tools with the introduction of two new capabilities within Power Apps and Power Virtual Agents.
Business Chat brings together data from across documents, presentations, email, calendar, notes and contacts to help summarize chats, write emails, find key dates or even write a plan based on other project files.
Microsoft has built Microsoft 365 Copilot on its existing commitments to data security and privacy in the enterprise, grounded in its AI principles, its Responsible AI Standard and decades of research. Copilot’s large language models are not trained on customer content or on individual prompts. Microsoft 365 Copilot will deliver enterprise-ready AI at scale.
Earlier this month Microsoft announced Dynamics 365 Copilot as the world’s first AI Copilot in both CRM and ERP, bringing next-generation AI to every line of business. And now, Copilot is coming to all Microsoft productivity apps — Word, Excel, PowerPoint, Outlook, Teams, Microsoft Viva, Power Platform and more. Specifics on pricing and licensing will be shared soon. Microsoft 365 Copilot is already being tested with a small group of customers to get the critical feedback required to improve these models as they scale.
“I think with some confidence I can say that 2023 is going to be the most exciting year that the AI community has ever had,” writes Kevin Scott, chief technology officer at Microsoft, in a Q&A on the company’s AI blog. He acknowledges that he also thought 2022 was the most exciting year for AI, but he believes that the pace of innovation is only increasing. This is particularly true with generative AI, which doesn’t simply analyze large data sets but is a tool people can use to create entirely new works. We can already see its promise in systems like GPT-3, which can do anything from helping copyedit and summarize text to providing inspiration, and DALL-E 2, which can create useful and arresting works of art based on text inputs. Here are some of Scott’s predictions about how AI will change the way we work and play.
1. It Will Unleash Our Creativity
As generative AI becomes more popular and accessible, more people will be able to use the technology for creative expression, whether it’s helping them produce sophisticated artworks or write moving poetry. In his blog post, Scott describes how new AI tools are democratizing access to design. “An AI system such as DALL-E 2 doesn’t turn ordinary people into professional artists, but it gives a ton of people a visual vocabulary that they didn’t have before—a new superpower they didn’t think they would ever have.” DALL-E 2 already shows up in tools like Microsoft Designer, but there’s exciting potential for it to help many more people unleash creative ideas in ways that were once only available to trained professionals.
2. It Will Make Coding Much More Accessible
Generative AI innovations like GitHub Copilot, an AI pair programmer built using OpenAI’s Codex AI system, can translate natural human language into programming code, essentially turning our practical intentions into complex pieces of software. Among Copilot users, 40 percent of the code in some popular programming languages is being generated by Copilot, a figure that is set to increase. In a recent talk at the Fortune Brainstorm AI conference, Scott pointed to the example of people noodling around with the capabilities of ChatGPT (which is powered by GPT-3.5) to hint at the future potential. “It really opens up the aperture of who can actually use AI now,” he says. “We’ll need new sorts of specialties in the future, but you don’t need to have a PhD in computer science anymore to build an AI application, which I think is really, really exciting.”
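As a hypothetical illustration of this natural-language-to-code translation, a plain-English comment like the first line below is the kind of prompt such a system completes into a working function (the function here is hand-written for illustration, not actual Copilot output):

```python
# Return the n most common words in a text, ignoring case.
from collections import Counter

def most_common_words(text: str, n: int) -> list[tuple[str, int]]:
    words = text.lower().split()
    return Counter(words).most_common(n)

# most_common_words("The cat and the dog", 1) -> [("the", 2)]
```

The intent is expressed once in English; the details (case folding, counting, ranking) are filled in by the model, which is what lowers the barrier for people without formal programming training.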
3. It Will Become Our Copilot in Other Ways Too
In an essay for Wired UK, Scott sketches a scenario in which AI helps us do our jobs better. Like coding assistance with GitHub Copilot, industries from construction to healthcare, technology to law, could potentially benefit from a form of AI assistance. “The applications are potentially endless, limited only by one’s ability to imagine scenarios in which productivity-assisting software could be applied to complex cognitive work, whether that be editing videos, writing scripts, designing new molecules for medicines, or creating manufacturing recipes from 3D models,” he writes. While there’s concern about how AI will impact human jobs, Scott describes in his post how, with thoughtful application, these AI tools have the potential to augment and amplify human capability, enabling people to spend less time on repetitive tasks. These models will also “democratize access to AI,” he writes, so “you’ll have a more diverse group of people being able to participate in the creation of technology.”
4. It Will Unlock Faster Iteration
Generative AI may significantly reduce the legwork of the creative process by helping designers iterate on product concepts and helping writers generate first drafts of press releases, essays, and scripts, assisting with graphic design–heavy posters, video edits, and more. Scott notes in his Wired essay that it has the potential to “allow knowledge workers to spend their time on higher order cognitive tasks, and effectively transforming how a great many of us interact with technology to get things done.” The upshot is an acceleration of the iterative cycle, as human beings tweak and refine the AI’s work in a virtuous, back-and-forth collaborative process. We will become adept at developing techniques to edit and modify generated images, text, drawings, and even molecules or proteins to be used in new medicines, creating better results more quickly through careful collaboration with AI.
5. It Will Make Work More Enjoyable
In the AI blog post, Scott observes that AI tools for programmers have the potential to vastly improve the overall work experience. “People now have new and interesting and fundamentally more effective tools than they’ve had before,” he notes. “This is exactly what we’re seeing with the experiences developers are having with Copilot; they are reporting that Copilot helps them stay in the flow and keeps their minds sharper during what used to be boring and repetitive tasks.” This also extends to low-code and no-code tools in products like Power Platform that are opening new potential across job functions, roles, and processes. “We did a study that found using no-code or low-code tools led to more than an 80 percent positive impact on work satisfaction, overall workload, and morale.”
Generative AI has the capacity to profoundly alter the working practices of a range of vocations, giving rise to new professions and transforming established ones. With ethical and thoughtful deployment, it is a tool that could help precipitate a revolution in creativity—one that enables everyone to better express their humanity.
For example, I’ve been playing around with an experimental system I built for myself using GPT-3 designed to help me write a science fiction book, which is something that I’ve wanted to do since I was a teenager. I have notebooks full of synopses I’ve created for theoretical books, describing what the books are about and the universes where they take place. With this experimental tool, I have been able to get the logjam broken. When I wrote a book the old-fashioned way, if I got 2,000 words out of a day, I’d feel really good about myself. With this tool, I’ve had days where I can write 6,000 words in a day, which for me feels like a lot. It feels like a qualitatively more energizing process than what I was doing before.
This is the “copilot for everything” dream—that you would have a copilot that could sit alongside you as you’re doing any kind of cognitive work, helping you not just get more done, but also enhancing your creativity in new and exciting ways.
This increase in productivity is clearly a boost to your satisfaction. Why do these tools bring more joy to work?
All of us use tools to do our work. Some of us really enjoy acquiring the tools and mastering them and figuring out how to deploy them in a super effective way to do the thing that we’re trying to do. I think that is part of what’s going on here. In many cases, people now have new and interesting and fundamentally more effective tools than they’ve had before. We did a study that found using no-code or low-code tools led to a more than 80% positive impact on work satisfaction, overall workload and morale among users. Especially for tools that are in their relatively early stages, that’s just a huge benefit to see.
For some workers, it’s literally enhancing that core flow that you get into when you’re doing the work; it speeds you up. It’s like having a better set of running shoes to go run a race or marathon. This is exactly what we’re seeing with the experiences developers are having with Copilot; they are reporting that Copilot helps them stay in the flow and keeps their minds sharper during what used to be boring and repetitive tasks. And when AI tools can help to eliminate drudgery from a job, something that is super repetitive or annoying or that was getting in their way of getting to the thing that they really enjoy, it unsurprisingly improves satisfaction.
Personally, these tools let me be in flow state longer than I was before. The enemy of creative flow is distraction and getting stuck. I get to a point where I don’t know quite how to solve the next thing, or the next thing is, like, “I’ve got to go look this thing up. I’ve got to context switch out of what I was doing to go solve the subproblem.” These tools increasingly solve the subproblem for me so that I stay in the flow.
In addition to GitHub Copilot and DALL∙E 2, AI is showing up in Microsoft products and services in other ways. How is next-generation AI improving current products such as Teams and Word?
This is the big untold story of AI. To date, most of AI’s benefits are spread across 1,000 different things where you may not even fully appreciate how much of the product experience that you’re getting is coming from a machine learned system.
For example, we’re sitting here in this Teams call on video and, in the system, there are all these parameters that were learned by a machine learning algorithm. There are jitter buffers for the audio system to smooth out the communication. The blur behind you on your screen is a machine learning algorithm at work. There are more than a dozen machine learning systems that make this experience more delightful for the both of us. And that is certainly true across Microsoft.
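A jitter buffer of the kind Scott mentions can be sketched simply: audio packets arrive out of order over the network, are held briefly, and are released for playout in sequence order. This is an illustrative toy, not Teams' implementation:

```python
import heapq

class JitterBuffer:
    """Minimal playout jitter buffer sketch. Packets may arrive out of
    order; we hold a few and release them in sequence-number order."""

    def __init__(self, depth: int = 3) -> None:
        self.depth = depth  # packets held back to absorb network jitter
        self.heap: list[tuple[int, bytes]] = []

    def push(self, seq: int, payload: bytes) -> None:
        # Arrival order doesn't matter; the heap keeps sequence order.
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self) -> list[bytes]:
        # Release everything beyond the holding depth, lowest seq first.
        out = []
        while len(self.heap) > self.depth:
            out.append(heapq.heappop(self.heap)[1])
        return out
```

Holding a few packets trades a small fixed delay for smooth, in-order playout, which is the core trade-off real-time audio systems tune, often with machine-learned parameters as Scott describes.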
We’ve gone from machine learning in a few places to literally 1,000 machine learning things spread across different products, everything from how your Outlook email client works, your predictive text in Word, your Bing search experience, to what your feed looks like in Xbox Cloud Gaming and LinkedIn. There’s AI all over the place making these products better.
One of the big things that has changed in the past two years is it used to be the case that you would have a model that was specialized to each one of these tasks that we have across all our products. Now you have a single model that gets used in lots of places because they’re broadly useful. Being able to invest in these models that become more powerful with scale—and then having all the things built on top of the model benefit simultaneously from improvements that you’re making—is tremendous.
Microsoft’s AI research and development continues through initiatives such as AI4Science and AI for Good. What excites you most about this area of AI?
The models in these basic science applications have the same scaling properties as large language models. You build a model, you get it into some self-supervised mode where it’s learning from a simulation or it’s learning from its own ability to observe a particular domain, and then the model that you get out of it lets you dramatically change the performance of an application—whether you’re doing a computational fluid dynamics simulation or you’re doing molecular dynamics for drug design.
There’s immense opportunity there. This means better medicines, it means maybe we can find the catalyst we don’t have yet to fix our carbon emission problem, it means across the board accelerating how scientists and other folks with big ideas can work to try to solve society’s biggest challenges.
How have breakthroughs in computing techniques and hardware contributed to the advances in AI?
The fundamental thing underlying almost all of the recent progress we’ve seen in AI is how critical scale has proven to be. It turns out that models trained on more data with more compute power just have a much richer and more generalized set of capabilities. If we want to keep driving this progress further—and to be clear, right now we don’t see any end to the benefits of increased scale—we need to optimize and scale up our compute power as much as we possibly can.
We announced our first Azure AI supercomputer two years ago, and at our Build developer conference this year I shared that we now have multiple supercomputing systems that we’re pretty sure are the largest and most powerful AI supercomputers in the world today. We and OpenAI use this infrastructure to train nearly all of our state-of-the-art large models, whether that’s our Turing, Z-code and Florence models at Microsoft or the GPT, DALL∙E and Codex models at OpenAI. And we just recently announced a collaboration with NVIDIA to build a supercomputer powered by Azure infrastructure combined with NVIDIA GPUs.
Some of this progress has just been via brute force compute scale with bigger and bigger clusters of GPUs. But maybe even a bigger breakthrough is the layer of software that optimizes how models and data are distributed across these giant systems, both to train the models and then to serve them to customers. If we’re going to put forth these large models as platforms that people can create with, they can’t only be accessible to the tiny number of tech companies in the world with enough resources to build giant supercomputers.
So, we’ve invested a ton in software like DeepSpeed to boost training efficiency, and the ONNX Runtime for inference. They optimize for cost and latency and generally help us make bigger AI models more accessible and valuable for people. I’m super proud of the teams we have working on these technologies because Microsoft is really leading the industry here, and we’re open sourcing all of it so others can keep improving.
These advances are all playing out amid an ongoing concern that AI is going to impact jobs. How do you think about the issue of AI and jobs?
We live in a time of extraordinary complexity and historic macroeconomic change, and as we look out 5, 10 years into the future, even to just achieve a net neutral balance for the whole world, we’re going to need new forms of productivity for all of us to be able to continue enjoying progress. We want to be building these AI tools as platforms that lots of people can use to build businesses and solve problems. We believe that these platforms democratize access to AI to far more people. With them, you’ll get a richer set of problems solved and you’ll have a more diverse group of people being able to participate in the creation of technology.
With the previous instantiation of AI, you needed a huge amount of expertise just to get started. Now you can call Azure Cognitive Services, you can call the Azure OpenAI Service and build complicated products on top of these things without necessarily having to be so expert at AI that you’ve got to be able to train your own large model from scratch.
Microsoft is proud to be a platinum sponsor of the 36th annual conference on Neural Information Processing Systems (NeurIPS), which is widely regarded as the world’s most prestigious research conference on artificial intelligence and machine learning.
Microsoft has a strong presence at NeurIPS again this year, with more than 150 of our researchers participating in the conference and 122 of our research papers accepted. Our researchers are also taking part in 10 workshops, four competitions and a tutorial.
In one of the workshops, AI for Science: Progress and Promises, a panel of leading researchers will discuss how artificial intelligence and machine learning have the potential to advance scientific discovery. The panel will include two Microsoft researchers: Max Welling, Vice President and Distinguished Scientist, Microsoft Research AI4Science, who will serve as moderator, and Peter Lee, Corporate Vice President, Microsoft Research and Incubations.
Of the 122 Microsoft research papers accepted for the conference, seven have been selected for oral presentations during the virtual NeurIPS experience the week of December 4th. The oral presentations provide a deeper dive into each of the featured research topics.
In addition, two other Microsoft research papers received Outstanding Paper Awards for NeurIPS 2022. One of those papers, Gradient Estimation with Discrete Stein Operators, explains how researchers developed a gradient estimator that achieves substantially lower variance than state-of-the-art estimators with the same number of function evaluations, which has the potential to improve problem solving in machine learning. In the other paper, A Neural Corpus Indexer for Document Retrieval, researchers demonstrate that an end-to-end deep neural network that unifies training and indexing stages can significantly improve the recall performance of traditional document retrieval methods.
Below we have provided the titles, authors and abstracts for all seven of the Microsoft research papers chosen for oral presentations at NeurIPS, with links to additional information for those who want to explore the topics more fully:
Uni[MASK]: Unified Inference in Sequential Decision Problems
Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, Sam Devlin
Abstract: Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the UniMASK framework, which provides a unified way to specify models which can be trained on many different sequential decision-making tasks. We show that a single UniMASK model is often capable of carrying out many tasks with performance similar to or better than single-task models. Additionally, after fine-tuning, our UniMASK models consistently outperform comparable single-task models.
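As an illustration (not the paper's code), these tasks can be written as visibility masks over a trajectory's state and action tokens; the mask layouts below are simplified assumptions for a bidirectional masked model:

```python
import numpy as np

# A trajectory of T timesteps, each contributing a state token and an
# action token. 1 = token visible to the model, 0 = masked (to predict).
T = 4

def make_mask(task: str) -> np.ndarray:
    """Return a (T, 2) visibility mask over (state, action) token pairs."""
    mask = np.ones((T, 2), dtype=int)
    if task == "behavior_cloning":
        # All states visible; the model must predict every action.
        mask[:, 1] = 0
    elif task == "waypoint":
        # Only the first and last states visible; infer the path between.
        mask[:] = 0
        mask[0, 0] = 1
        mask[-1, 0] = 1
    elif task == "random":
        # BERT-style random masking over the whole trajectory.
        mask = np.random.default_rng(0).integers(0, 2, size=(T, 2))
    return mask

for task in ["behavior_cloning", "waypoint", "random"]:
    print(task, make_mask(task).ravel().tolist())
```

A single model trained across such maskings is then, in the paper's framing, trained on all of these tasks at once.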
K-LITE: Learning Transferable Visual Models with External Knowledge
Abstract: The new generation of state-of-the-art computer vision systems is trained from natural language supervision, ranging from simple object category names to descriptive captions. This form of supervision ensures high generality and usability of the learned visual models, based on the broad concept coverage achieved through a large-scale data collection process. Alternatively, we argue that learning with external knowledge about images is a promising way to leverage a much more structured source of supervision while offering sample efficiency.
In this paper, we propose K-LITE (Knowledge-augmented Language-Image Training and Evaluation), a simple strategy to leverage external knowledge for building transferable visual systems: In training, it enriches entities in natural language with WordNet and Wiktionary knowledge, leading to an efficient and scalable approach to learning image representations that uses knowledge about the visual concepts; In evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts (or describe new ones) to enable zero-shot and few-shot transfer of the pre-trained models. We study the performance of K-LITE on two important computer vision problems, image classification and object detection, benchmarking on 20 and 13 different existing datasets, respectively. The proposed knowledge-augmented models show significant improvement in transfer learning performance over existing methods. Our code is released at https://github.com/microsoft/klite.
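A minimal sketch of the knowledge-augmentation idea, with a hand-written stand-in for the WordNet/Wiktionary lookup and an assumed prompt template (both hypothetical, not the released K-LITE code):

```python
# Hypothetical mini knowledge base standing in for WordNet/Wiktionary lookups.
KNOWLEDGE = {
    "kingfisher": "a small bright-colored bird that dives for fish",
    "ramekin": "a small dish in which food can be baked and served",
}

def augment_prompt(class_name: str) -> str:
    """Enrich a class-name prompt with external knowledge, K-LITE style.

    Falls back to the plain prompt when no knowledge entry exists.
    """
    definition = KNOWLEDGE.get(class_name)
    prompt = f"a photo of a {class_name}"
    if definition:
        prompt += f", which is {definition}"
    return prompt

print(augment_prompt("kingfisher"))
```

The same augmentation is applied at evaluation time, so rare or unseen concepts can be referenced through their definitions rather than their names alone.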
Extreme Compression for Pre-trained Transformers Made Simple and Efficient
Abstract: Extreme compression, particularly ultra-low bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices. However, to preserve the accuracy for such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage expensive knowledge distillation with extensive hyperparameter tuning. Also, they oftentimes focus less on smaller transformer models that have already been heavily compressed via knowledge distillation and lack a systematic study to show the effectiveness of their methods.
In this paper, we perform a comprehensive, systematic study to measure the impact of many key hyperparameters and training strategies from previous works. As a result, we find that previous baselines for ultra-low bit precision quantization are significantly under-trained. Based on our study, we propose a simple yet effective compression pipeline for extreme compression.
Our simplified pipeline demonstrates that:
(1) we can skip the pre-training knowledge distillation to obtain a 5-layer BERT while achieving better performance than previous state-of-the-art methods, like TinyBERT;
(2) extreme quantization plus layer reduction is able to reduce the model size by 50x, resulting in new state-of-the-art results on GLUE tasks.
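For intuition, a generic ternary quantizer in the style of the Ternary Weight Networks heuristic can be sketched as follows; the 0.7 threshold and per-tensor scale are that heuristic's conventions, not this paper's full pipeline, which adds layer reduction and distillation on top:

```python
import numpy as np

def ternarize(w: np.ndarray) -> np.ndarray:
    """Ternarize weights to {-alpha, 0, +alpha} (TWN-style heuristic)."""
    delta = 0.7 * np.abs(w).mean()          # pruning threshold
    mask = np.abs(w) > delta                # weights that stay nonzero
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
wq = ternarize(w)
print(np.unique(np.round(wq, 6)))           # at most three distinct values
```

Each ternary weight needs roughly 1.6 bits instead of 32, which is where the large model-size reductions come from.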
On the Complexity of Adversarial Decision Making
Dylan J Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan
Abstract: A central problem in online learning and decision making—from bandits to reinforcement learning—is to understand what modeling assumptions lead to sample-efficient learning guarantees. We consider a general adversarial decision-making framework that encompasses (structured) bandit problems with adversarial rewards and reinforcement learning problems with adversarial dynamics. Our main result is to show—via new upper and lower bounds—that the Decision-Estimation Coefficient, a complexity measure introduced by Foster et al. in the stochastic counterpart to our setting, is necessary and sufficient to obtain low regret for adversarial decision making. However, compared to the stochastic setting, one must apply the Decision-Estimation Coefficient to the convex hull of the class of models (or, hypotheses) under consideration. This establishes that the price of accommodating adversarial rewards or dynamics is governed by the behavior of the model class under convexification, and recovers a number of existing results, both positive and negative. En route to obtaining these guarantees, we provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures, including the Information Ratio of Russo and Van Roy and the Exploration-by-Optimization objective of Lattimore and György.
Maximum Class Separation as Inductive Bias in One Matrix
Tejaswi Kasarla, Gertjan J. Burghouts, Max van Spengler, Elise van der Pol, Rita Cucchiara, Pascal Mettes
Abstract: Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differential optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly.
This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed-form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and code to reproduce the experiments are available on GitHub.
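The closed-form construction can be sketched with the standard regular-simplex recursion, which yields k unit vectors in R^(k-1) with pairwise cosine -1/(k-1); this is a minimal reading of the idea, not the authors' released code:

```python
import numpy as np

def max_separation_matrix(k: int) -> np.ndarray:
    """k maximally separated unit vectors in R^(k-1) (regular simplex).

    Built recursively: one vector points along the first axis; the rest
    are a scaled (k-1)-class simplex tilted to cosine -1/(k-1) against it.
    """
    if k == 2:
        return np.array([[1.0], [-1.0]])
    base = max_separation_matrix(k - 1)               # (k-1, k-2)
    first = np.concatenate(([1.0], np.zeros(k - 2)))
    scale = np.sqrt(1.0 - 1.0 / (k - 1) ** 2)
    rest = np.hstack([np.full((k - 1, 1), -1.0 / (k - 1)), scale * base])
    return np.vstack([first, rest])

P = max_separation_matrix(5)                          # (5, 4), fixed, not learned
# As in the paper's idea: map (k-1)-dim network outputs to k class logits.
features = np.random.default_rng(0).normal(size=(3, 4))
logits = features @ P.T                               # (3, 5), then softmax
print(np.round(P @ P.T, 3))
```

Because P is fixed and computed before training, the separation costs one extra matrix multiplication and no additional parameters.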
Censored Quantile Regression Neural Networks for Distribution-Free Survival Analysis
Abstract: This paper considers doing quantile regression on censored data using neural networks (NNs). This adds to the survival analysis toolkit by allowing direct prediction of the target variable, along with a distribution-free characterization of uncertainty, using a flexible function approximator. We begin by showing how an algorithm popular in linear models can be applied to NNs. However, the resulting procedure is inefficient, requiring sequential optimization of an individual NN at each desired quantile. Our major contribution is a novel algorithm that simultaneously optimizes a grid of quantiles output by a single NN. To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximization, and secondly that it exhibits a desirable ‘self-correcting’ property. Experimentally, the algorithm produces quantiles that are better calibrated than existing methods on 10 out of 12 real datasets.
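For context, the uncensored building block, the pinball (quantile) loss evaluated over a grid of quantile levels, can be sketched below; handling censoring, the paper's actual contribution, is omitted here:

```python
import numpy as np

def pinball_loss(y: np.ndarray, preds: np.ndarray, quantiles: np.ndarray) -> float:
    """Mean pinball loss for a grid of quantiles.

    y: (n,) targets; preds: (n, m), one column per quantile level;
    quantiles: (m,) levels in (0, 1). Censoring is NOT handled here.
    """
    diff = y[:, None] - preds                          # (n, m)
    loss = np.maximum(quantiles * diff, (quantiles - 1) * diff)
    return float(loss.mean())

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
qs = np.array([0.1, 0.5, 0.9])
# The true N(0,1) quantiles should beat an all-zeros grid on the tails.
good = np.tile([-1.2816, 0.0, 1.2816], (1000, 1))
naive = np.zeros((1000, 3))
print(pinball_loss(y, good, qs), pinball_loss(y, naive, qs))
```

A single network that outputs all m columns at once can be trained against this one loss, which is the efficiency the paper's algorithm builds on.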
Abstract: Motivated by the recent empirical successes of deep generative models, we study the computational complexity of the following unsupervised learning problem. For an unknown neural network \(F:\mathbb{R}^d\to\mathbb{R}^{d'}\), let \(D\) be the distribution over \(\mathbb{R}^{d'}\) given by pushing the standard Gaussian \(\mathcal{N}(0,\textrm{Id}_d)\) through \(F\). Given i.i.d. samples from \(D\), the goal is to output any distribution close to \(D\) in statistical distance.
We show under the statistical query (SQ) model that no polynomial-time algorithm can solve this problem even when the output coordinates of \(F\) are one-hidden-layer ReLU networks with \(\log(d)\) neurons. Previously, the best lower bounds for this problem simply followed from lower bounds for supervised learning and required at least two hidden layers and \(\mathrm{poly}(d)\) neurons [Daniely-Vardi ’21, Chen-Gollakota-Klivans-Meka ’22].
The key ingredient in our proof is an ODE-based construction of a compactly supported, piecewise-linear function \(f\) with polynomially-bounded slopes such that the pushforward of \(\mathcal{N}(0,1)\) under \(f\) matches all low-degree moments of \(\mathcal{N}(0,1)\).
For instance, TaylorMade Golf Company turned to Microsoft Syntex for a comprehensive document management system to organize and secure emails, attachments and other documents for intellectual property and patent filings. At the time, company lawyers manually managed this content, spending hours filing and moving documents to be shared and processed later.
With Microsoft Syntex, these documents are automatically classified, tagged and filtered in a way that’s more secure and makes them easy to find through search instead of needing to dig through a traditional file and folder system. TaylorMade is also exploring ways to use Microsoft Syntex to automatically process orders, receipts and other transactional documents for the accounts payable and finance teams.
Other customers are using Microsoft Syntex for contract management and assembly, noted Teper. While every contract may have unique elements, they are constructed with common clauses around financial terms, change control, timeline and so forth. Rather than write those common clauses from scratch each time, people can use Syntex to assemble them from various documents and then introduce changes.
“They need AI and machine learning to spot, ‘Hey, this paragraph is very different from our standard terms. This could use some extra oversight,’” he said.
“If you’re trying to read a 100-page contract and look for the thing that’s significantly changed, that’s a lot of work versus the AI helping with that,” he added. “And then there’s the workflow around those contracts: Who approves them? Where are they stored? How do you find them later on? There’s a big part of this that’s metadata.”
When DALL∙E 2 gets personal
The availability of DALL∙E 2 in Azure OpenAI Service has sparked a series of explorations at RTL Deutschland, Germany’s largest privately held cross-media company, about how to generate personalized images based on customers’ interests. For example, in RTL’s data, research and AI competence center, data scientists are testing various strategies to enhance the user experience with generative imagery.
RTL Deutschland’s streaming service RTL+ is expanding to offer on-demand access to millions of videos, music albums, podcasts, audiobooks and e-magazines. The platform relies heavily on images to grab people’s attention, said Marc Egger, senior vice president of data products and technology for the RTL data team.
“Even if you have the perfect recommendation, you still don’t know whether the user will click on it because the user is using visual cues to decide whether he or she is interested in consuming something. So artwork is really important, and you have to have the right artwork for the right person,” he said.
Imagine a romcom movie about a professional soccer player who gets transferred to Paris and falls in love with a French sportswriter. A sports fan might be more inclined to check out the movie if there’s an image of a soccer game. Someone who loves romance novels or travel might be more interested in an image of the couple kissing under the Eiffel Tower.
Combining the power of DALL∙E 2 and metadata about what kind of content a user has interacted with in the past offers the potential to offer personalized imagery on a previously inconceivable scale, Egger said.
“If you have millions of users and millions of assets, you have the problem that you simply can’t scale it – the workforce doesn’t exist,” he said. “You would never have enough graphic designers to create all the personalized images you want. So, this is an enabling technology for doing things you would not otherwise be able to do.”
Egger’s team is also considering how to use DALL∙E 2 in Azure OpenAI Service to create visuals for content that currently lacks imagery, such as podcast episodes and scenes in audiobooks. For instance, metadata from a podcast episode could be used to generate a unique image to accompany it, rather than repeating the same generic podcast image over and over.
RTL Deutschland, Germany’s largest privately held cross-media company, is exploring how to use DALL∙E 2 in Azure OpenAI Service to engage people browsing its streaming service RTL+. One idea is to use DALL∙E 2 to generate unique images to illustrate individual podcast episodes, rather than relying on the same podcast cover art.
Along similar lines, a person who is listening to an audiobook on their phone would typically look at the same book cover art for each chapter. DALL∙E 2 could be used to generate a unique image to accompany each scene in each chapter.
Using DALL∙E 2 through Azure OpenAI Service, Egger added, provides access to other Azure services and tools in one place, which allows his team to work efficiently and seamlessly. “As with all other software-as-a-service products, we can be sure that if we need massive amounts of imagery created by DALL∙E, we are not worried about having it online.”
The appropriate and responsible use of DALL∙E 2
No AI technology has elicited as much excitement as systems such as DALL∙E 2 that can generate images from natural language descriptions, according to Sarah Bird, a Microsoft principal group project manager for Azure AI.
“People love images, and for someone like me who is not visually artistic at all, I’m able to make something much more beautiful than I would ever be able to using other visual tools,” she said of DALL∙E 2. “It’s giving humans a new tool to express themselves creatively and communicate in compelling and fun and engaging ways.”
Her team focuses on the development of tools and techniques that guide people toward the appropriate and responsible use of AI tools such as DALL∙E 2 in Azure AI and that limit their use in ways that could cause harm.
To help prevent DALL∙E 2 from delivering inappropriate outputs in Azure OpenAI Service, OpenAI removed the most explicit sexual and violent content from the dataset used to train the model, and Azure AI deployed filters to reject prompts that violate content policy.
In addition, the team has integrated techniques that prevent DALL∙E 2 from creating images of celebrities, as well as of objects that are commonly used to try to trick the system into generating sexual or violent content. On the output side, the team has added models that remove AI-generated images that appear to contain adult content, gore or other types of inappropriate content.
– On World Sight Day, Haleon and Microsoft are launching a joint effort to make health products more accessible for people who are blind or have low vision, with artificial intelligence (AI) technology that narrates product labels.
– New enhancements in the free Microsoft Seeing AI app will help advance inclusivity and improve accessibility.
– Across the U.S. and U.K., consumers will hear important label information for over 1,500 everyday consumer health products such as Sensodyne, Centrum, Emergen-C, ChapStick and Aquafresh.
Microsoft Seeing AI app scans Sensodyne barcode for health information.
REDMOND, Wash. — Oct. 12, 2022 — On Wednesday, Haleon, a global leader in consumer health, and Microsoft Corp. announced a new collaboration to make everyday health more accessible for people who are blind, have low vision or have difficulty reading product labels due to low literacy. Together, the companies are expanding functionality in the Microsoft Seeing AI app to provide consumers with more detailed labelling information for over 1,500 Haleon products across the U.S. and U.K. Seeing AI is a free mobile app designed to help people who have trouble seeing by narrating the world around them.
With today’s launch on World Sight Day, people will hear packaging information through Seeing AI by scanning the barcode of Haleon products. This will provide an audio read-out of important information, such as product name, ingredients and usage instructions. Through Seeing AI’s enhanced functionality, Haleon will help empower people to care for their own health independently by listening to label information narrated through the Seeing AI application.
Haleon’s inaugural Health Inclusivity Index, which sets a new global standard for measuring health inclusivity, makes clear that to improve health inclusivity, individuals and communities need to be provided with the power and the tools to truly take their health into their own hands. Haleon, driven by its purpose to deliver better everyday health with humanity, is committed to helping make healthcare more achievable, inclusive and sustainable. The Seeing AI collaboration with Microsoft is one of Haleon’s first new initiatives to champion health inclusivity. The Microsoft Seeing AI app can be a benefit to:
The 1 million Americans who live with blindness (CDC, “Fast Facts of Common Eye Disorders,” https://www.cdc.gov/visionhealth/basics/ced/fastfacts.htm)
The 12 million U.S. adults over the age of 40 who are blind or have low vision (CDC, “Fast Facts of Common Eye Disorders,” https://www.cdc.gov/visionhealth/basics/ced/fastfacts.htm)
The Seeing AI app was developed by a team of Microsoft engineers spearheaded by project lead and engineering manager Saqib Shaikh, who lost his sight at the age of seven and was driven to develop the app by his passion for using technology to improve people’s lives.
Saqib Shaikh, engineering manager at Microsoft, said: “I’m really excited to see the launch of this enhanced product recognition functionality, developed in collaboration with Haleon. Seeing AI’s intelligent barcode scanner plays audio cues to help you find the barcode, and now the information displayed for Haleon products is coming straight from the manufacturer, providing richer information including usage instructions and ingredients. This can be invaluable for someone who cannot read the label, leading to greater independence.”
Katie Williams, U.S. chief marketing officer at Haleon said, “We believe everyone should have access to self-care products, services and the information needed to make informed, proactive choices about their health needs. Haleon initiated this collaboration with Microsoft via its Seeing AI app to make consumer health more accessible, achievable and inclusive. We are proud to help make better everyday health more in reach for the blind and those with low vision.”
The Seeing AI app is free to download from the Apple App Store and will be available on Android in the future. To use Seeing AI on Haleon’s products, users should hold their phone camera over the packaging barcode. The app will read out the product name and all text on the package. Users can skip ahead or move back to the relevant section they want to listen to, for example, which flavor or how to use the product. The Haleon barcode functionality will launch today in the U.S. and U.K. first, with plans to expand globally and add additional languages in the future.
About Haleon U.S.
Haleon (NYSE: HLN) is a leading global consumer health company with a portfolio of brands trusted daily by millions of people. In the United States, the organization employs more than 4,700 people who are united by Haleon’s powerful purpose to deliver better everyday health with humanity. Haleon’s products span five categories: Oral Health, Pain Relief, Respiratory Health, Digestive Health, and Wellness. Built on scientific expertise, innovation, and deep human understanding, Haleon’s brands include Abreva, Advil, Benefiber, Centrum, ChapStick®, Emergen-C, Excedrin, Flonase, Gas-X, Natean, Nexium, Nicorette, Parodontax, Polident, Preparation H, Pronamel, Sensodyne, Robitussin, Theraflu, TUMS, Voltaren, and more. For more information on Haleon and its brands, please visit www.haleon.com or contact [email protected].
About the Haleon Health Inclusivity Index
Today’s announcement closely follows the launch of the Health Inclusivity Index, developed by Economist Impact and supported by Haleon. The world-first global study of 40 countries measures how successful countries are in using policy to remove the personal, social, cultural, and political barriers which could otherwise prevent people and communities from achieving good physical and mental health. The number of countries assessed in the study will grow to over 80 over the next two years as part of a new three-year partnership between Haleon and Economist Impact. The report has been commissioned by Haleon as part of its commitment to making better everyday health more achievable, inclusive and sustainable, with the company aiming to create more opportunities for people to be included in everyday health, reaching 50 million people a year by 2025.
About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.
FARMINGTON, Wash. – The gently rolling hills here in eastern Washington have long grown rich harvests of wheat, barley and lentils.
Fifth-generation farmer Andrew Nelson is adding a new bumper crop to that bounty: Data.
He gathers it from sensors in the soil, drones in the sky and satellites in space. They feed Nelson information about his farm at distinct points, every day, all year long — temperature variations, soil moisture and nutrient levels, plant health and more.
Nelson in turn feeds that data into Project FarmVibes, a new suite of farm-focused technologies from Microsoft Research. Starting today, Microsoft will open source these tools so researchers and data scientists — and the rare farmer like Nelson, who is also a software engineer — can build upon them to turn agricultural data into action that can help boost yields and cut costs.
The first open-source release is FarmVibes.AI. It is a sample set of algorithms aimed at inspiring the research and data science community to advance data-driven agriculture. Nelson is using this AI-powered toolkit to help guide decisions at every phase of farming, from before seeds go into the ground until well after harvest.
FarmVibes.AI algorithms, which run on Microsoft Azure, predict the ideal amounts of fertilizer and herbicide Nelson should use and where to apply them; forecast temperatures and wind speeds across his fields, informing when and where he plants and sprays; determine the ideal depth to plant seeds based on soil moisture; and tell him how different crops and practices can keep carbon sequestered in his soil.
Andrew Nelson studies a FarmVibes.AI image identifying grass weeds in one of his fields. It was created from multispectral drone imagery and will inform Nelson’s treatment decisions later this fall. (Photo: Dan DeLong for Microsoft)
“Project FarmVibes is allowing us to build the farm of the future,” said Nelson, who has partnered with Microsoft Research to turn his 7,500 acres into a proving ground for Project FarmVibes. “We’re showcasing the impact technology and AI can have in agriculture. For me, Project FarmVibes is saving a lot in time, it’s saving a lot in costs and it’s helping us control any issues we have on the farm.”
The new tools sprouted from Microsoft’s work with large customers like Land O’Lakes and Bayer to integrate and analyze data. Project FarmVibes reflects more recent research in precision and sustainable agriculture.
By open sourcing its latest research tools, Microsoft wants to spread them far beyond Washington to help tackle the world’s urgent food problem, said Ranveer Chandra, managing director of Research for Industry.
By 2050, we’ll need to roughly double global food production to feed the planet, Chandra said. But as climate change accelerates, water levels drop and arable lands vanish, doing that sustainably will be a huge challenge.
“We believe one of the most promising approaches to address this problem is data-driven agriculture,” he said.