AI helping vulnerable communities in India better understand heat wave dangers

As heat waves grow more frequent and intense in India and around the world, researchers say they are having a disproportionate effect on some of the world’s poorest communities.

In India, that harsher impact is being felt in the nation’s slums, which researchers say can be as much as 6°C (10.8°F) warmer than other parts of the city.

“In the slums, it is so difficult to step out and find shade on a hot summer day,” Anshu Sharma said. “It is so congested. The houses are often made of tin sheets, which heat up much faster compared to other materials.”

Sharma is the co-founder of SEEDS, a New Delhi-based disaster response and preparedness non-profit organization, which has measured the temperature disparity between the slums and other parts of the city.

Since 2017, SEEDS has been working with communities most vulnerable to heat waves, to help people come up with solutions to beat the heat. And now, with the support of Microsoft’s AI for Humanitarian Action grant, SEEDS has developed an artificial intelligence (AI) model to predict the impact of multiple hazards like cyclones, earthquakes or heat waves in any given area.

The model, called Sunny Lives, has generated heat wave risk information for around 125,000 people living in slums in New Delhi, India’s sprawling capital, and Nagpur, a central Indian city susceptible to intense heat waves.

Here’s the story of how SEEDS married cutting-edge technology, shoe-leather surveys, and collaborative project management to raise awareness about what is often an invisible enemy.

An invisible enemy

A congested urban slum with tin-roofed shacks in Delhi, underneath a flyover.
Most makeshift construction materials used in urban slums trap more heat, thus increasing the indoor temperature. The roofs are often made of tin sheets and houses are crammed together without windows or ventilation. (Photo: Amit Verma for Microsoft)

In humans, heat stress is known to cause higher blood pressure, extreme fatigue, and sleeping troubles. The risk of heat stress is highest outdoors, between noon and 4 p.m. But, alarmingly, in some cases staying indoors might be more dangerous.

Quite simply, if you don’t live in a house made from the right kind of materials, it could be hotter indoors than it is outdoors. Sharma shared some numbers to drive home the point.

“Suppose the outdoor temperature is about 38°C (100.4°F),” he said. “If you’re in a tin shack in a slum, the indoor temperature can be as high as 45°C (113°F). And it’s the older people, and young children, who spend the day indoors, that suffer.”

Central to the problem is the fact that most makeshift construction materials used in urban slums trap more heat, thus increasing the indoor temperature. The roofs are often made of tin sheets and houses are crammed together without windows or ventilation.

A study recently published in Nature examined indoor temperature variations across different housing types in five low-income locations in South Asia. One of its key findings concerned the monthly temperature variation in tin-roofed houses in a village in the western Indian state of Maharashtra: in May and June, indoor temperatures ran a good three to four degrees Celsius (5.4°F to 7.2°F) higher than outdoor temperatures.

That’s been an especially concerning scenario this year.

Blazing summers in the Indian subcontinent are considered the norm, but even by the region’s own standards, the heat this year was intense and widespread. In mid-May, the India Meteorological Department reported record high temperatures between 45°C (113°F) and 50°C (122°F) in several parts of the country.

Experts say such intense heat waves are likely to continue. According to a study published in the journal Weather and Climate Extremes last year, India saw more than twice as many heat waves between 2000 and 2019 as it did between 1980 and 1999.

“In the future, these kinds of heat waves are going to be normal,” Professor Petteri Taalas, the secretary general of the World Meteorological Organization (WMO), said in a recent report.

AI-equipped drones study dolphins on the edge of extinction

Small in size and with a distinctive, rounded dorsal fin, Māui dolphins are one of the rarest and most threatened dolphins in the sea, with a known population of just 54. Decades of fishing practices, such as gillnetting off the west coast of New Zealand in the South Pacific, have pushed this sub-species to near extinction.

Maui dolphins swimming.
Photo courtesy of MAUI63.

Now scientists and conservationists are using a combination of drones, AI and cloud technologies to learn more about these rare marine mammals. They say the solution can also be applied to study other species fighting for survival in the world’s oceans.

The effort is part of a growing trend toward using AI and other technologies to more effectively collect and analyze data for environmental conservation. For example, Microsoft AI for Earth’s partner, Conservation Metrics, combines machine learning, remote sensing and scientific expertise to increase the scale and effectiveness of wildlife surveys. NatureServe, another partner organization, leverages Esri ArcGIS tools and Microsoft cloud computing to generate high-resolution habitat maps for imperiled species.

The scientists and conservationists with the not-for-profit group MAUI63 are using AI and other tools to support the conservation of the Māui dolphins, named after the Polynesian demigod, Māui.

MAUI63 team pose with the drone.
From left, Willy Wang, MAUI63 co-founder, Hayley Nessia, pilot, Pete Carscallen, pilot and Tane van der Boon, MAUI63 co-founder, pose after a survey flight. Photo courtesy of MAUI63.

Māui dolphins are an important part of the ecological and spiritual fabric of Aotearoa — the Māori name for New Zealand. They inhabit the waters off the west coast of the country’s North Island — also known as Te Ika-a-Māui, which translates to “the Fish of Māui.”

Weighing 50 kilograms and measuring up to 1.7 meters when fully grown, Māui dolphins are one of the smallest members of the marine dolphin family and among the most elusive. They have white, grey and black markings and black rounded dorsal fins. Unlike human facial features, the markings don’t vary between animals, meaning individuals can’t be identified with the naked eye. Conventional ways of monitoring and studying these fast-moving animals at sea have proved problematic and costly. Researchers admit relatively little is known about their behavior, particularly in winter when weather conditions deteriorate.

Now, MAUI63 believes it has a solution: an AI-powered drone that can efficiently find, track and identify dolphins. The aim of their work, according to co-founder and marine biologist, Professor Rochelle Constantine, is to “give certainty to our uncertainty.”

“Currently everything we know about them is from summer. We know virtually nothing about them in winter,” she says.

Constantine, together with technology and innovation specialist Tane van der Boon and drone enthusiast Willy Wang, formed MAUI63 in 2018. At the time, the Māui dolphin population was estimated at 63 individuals. That figure has since dropped to 54.

Over drinks at a pub, van der Boon, who is the group’s CEO, and Wang came up with the idea of leveraging drones, machine learning and cloud computing to study the dolphins. “I was getting interested in computer learning — I really saw how teaching computers to see is quite an amazing thing. All the things that we could start to solve and do really intrigued me,” he says.

The Māui dolphins’ rounded fins differ from the more pointed-shaped fins of other dolphins. That meant existing computer vision models were not fit for identifying Māui dolphins. So, van der Boon spent “a couple of months of nights and weekends” teaching himself how to build a model. He then painstakingly tagged Māui dolphin images from internet footage to train it to identify them.

Close up of the Maui dolphin rounded fin.
Māui dolphins, including young calf, swim off the coast of Hamiltons Gap in Auckland, New Zealand. Photo courtesy of University of Auckland, Oregon State and the Department of Conservation.

It was the first challenge of many. Four years of development, testing and fundraising followed. The team also had to gain specialist qualifications to fly their 4.5 meter-wingspan drone out to sea. They spotted their first Māui dolphins earlier this year.

“It was pretty exciting. We were sitting in the van, the drone was 16 kilometers down the coast, and we could see the AI detecting dolphins as we were doing circles around them,” van der Boon says.

Development has been helped along by funding under New Zealand’s Cloud and AI Country plan, which includes funding for projects with sustainable societal impact, as well as support from Microsoft Philanthropies ANZ. The solution combines an 8K ultra high-definition still camera and a full HD gimbal camera with an object detection model for spotting dolphins, and an open-source algorithm originally developed for facial recognition. Hosted on Microsoft Azure, it gathers data that will be used to identify individual animals by the shape and size of their dorsal fins and any scratches and marks on them.
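
To make the pipeline concrete, here is a purely illustrative sketch of a frame-by-frame detection loop over drone footage, of the kind such a system might run. It is not MAUI63’s actual code: the model file, input size and video path are hypothetical placeholders, and the fin-identification step is only indicated in a comment.

```python
# Illustrative only: run a generic object detection model over drone video,
# frame by frame. "dolphin_detector.onnx" and "survey_flight.mp4" are
# hypothetical placeholders, not MAUI63 assets.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("dolphin_detector.onnx")
input_name = session.get_inputs()[0].name

video = cv2.VideoCapture("survey_flight.mp4")
while True:
    ok, frame = video.read()
    if not ok:
        break

    # Resize and normalize to the detector's assumed 640x640 RGB input (NCHW).
    blob = cv2.resize(frame, (640, 640)).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]

    detections = session.run(None, {input_name: blob})[0]
    # Each detected dolphin's dorsal-fin crop would then be passed to a
    # separate re-identification model to match it against known individuals.

video.release()
```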

MAUI63 is also developing an app called Sea Spotter, funded by Microsoft, which uses Azure Functions to allow people to upload photos of Māui sightings and use an AI algorithm to learn which individual they saw. Being able to pinpoint the Māui dolphin’s habitat is crucial for understanding how to protect them against threats, according to the conservationists.

Constantine says the risk of Māui dolphins being caught as bycatch in the nets of fishing boats is now “extremely low” thanks to a marine sanctuary that was put in place around their known habitat in 2008 and expanded in 2020. Nonetheless, they may stray outside these protected areas. That is why MAUI63 is working on an integration project with fishing companies to ultimately notify their crews of sightings made by the drone in real time.

Three Maui dolphins shown underwater and tagged from drone footage.
MAUI63 uses an object detection computer vision model to spot dolphins from the drone footage that was collected as a part of a survey. Photo courtesy of MAUI63.

Another threat is toxoplasmosis, a disease caused by a parasite that lives in cat feces. It enters the marine food chain through runoff from the land, causing stillbirths and death in marine mammals. “If you understand where dolphins are on a regular basis, you can start to look at the areas where toxoplasmosis might be entering the water and maybe something can be done about that,” says van der Boon.

MAUI63’s aim is to provide scientifically robust information to conservation decision-makers. “We’re just trying to collect the data and make it available to anyone who needs it. We’re not here to make decisions on how they should or shouldn’t be protected. That’s key to us because everyone has quite different views on it,” says van der Boon. At this stage, he says, it is far from certain that MAUI63’s work will help prevent extinction, but what everyone can agree on is that it is worth trying.

Māui dolphins hold a special significance for many indigenous Māori — they are considered to be kaitiaki (guardians) that helped guide the waka (canoes) of their ancestors when they first came to Aotearoa hundreds of years ago.

Environmental scientist Dr. Aroha Spinks says protecting them is essential to increasing the mauri, or life force, of the environment. “From a Māori point of view — which is also backed up by science — the health of the environment affects the health and wellbeing of the people,” she says.

MAUI63 plans to make its learnings and technology available to people working with other marine species, such as a potential project in Antarctica with the European Union Environmental Council. Constantine hopes the high-tech approach will be as game changing for other researchers as it has been for her. “It makes such a huge difference to my world and the conversations I can have, and the information we can give to governments and the public about how to make conservation decisions that really matter.”

Top image: MAUI63 uses a combination of drones, AI and cloud technologies to learn more about Maui dolphins. Video courtesy of MAUI63.

Microsoft launches Project AirSim, an end-to-end platform to accelerate autonomous flight

Project AirSim uses the power of Azure to generate massive amounts of data for training AI models on exactly which actions to take at each phase of flight, from takeoff to cruising to landing. It will also offer libraries of simulated 3D environments representing diverse urban and rural landscapes as well as a suite of sophisticated pretrained AI models to help accelerate autonomy in aerial infrastructure inspection, last-mile delivery and urban air mobility.

Project AirSim is available today in limited preview. Interested customers can contact the Project AirSim team to learn more.

It arrives as advances in AI, computing and sensor technology are beginning to transform how we move people and goods, said Gurdeep Pall, Microsoft corporate vice president for Business Incubations in Technology & Research. And this isn’t just happening in remote areas home to wind farms; with urban density on the rise, gridlocked roads and highways simply can’t cut it as the quickest way to get from Point A to Point B. Instead, businesses will look to the skies and autonomous aircraft.

“Autonomous systems will transform many industries and enable many aerial scenarios, from the last-mile delivery of goods in congested cities to the inspection of downed power lines from 1,000 miles away,” Pall said. “But first we must safely train these systems in a realistic, virtualized world. Project AirSim is a critical tool that lets us bridge the world of bits and the world of atoms, and it shows the power of the industrial metaverse – the virtual worlds where businesses will build, test and hone solutions and then bring them into the real world.”

Accelerating aerial autonomy

High-fidelity simulation was at the heart of AirSim, an earlier open-source project from Microsoft Research that is being retired but inspired today’s launch. AirSim was a popular research tool, but it required deep expertise in coding and machine learning.
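
For a sense of what that hands-on scripting looked like, here is a minimal sketch using the open-source AirSim Python client (pip install airsim), not Project AirSim’s new interface. It assumes a simulator is already running with a default multirotor; camera names and coordinates depend on the simulation settings.

```python
import airsim

# Connect to a locally running AirSim simulation (assumed default multirotor).
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Fly a short leg: take off, then move to a waypoint.
# AirSim uses NED coordinates, so a negative z means "up".
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, 3).join()

# Capture a camera frame that a perception model could consume.
responses = client.simGetImages([
    airsim.ImageRequest("front_center", airsim.ImageType.Scene, False, True)
])
print(len(responses[0].image_data_uint8), "bytes of image data")

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```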

Now, Microsoft has transformed that open-source tool into an end-to-end platform that allows Advanced Aerial Mobility (AAM) customers to more easily test and train AI-powered aircraft in simulated 3D environments.

“Everyone talks about AI, but very few companies are capable of building it at scale,” said Balinder Malhi, engineering lead for Project AirSim. “We created Project AirSim with the key capabilities we believe will help democratize and accelerate aerial autonomy – namely, the ability to accurately simulate the real world, capture and process massive amounts of data and encode autonomy without the need for deep expertise in AI.”

With Project AirSim, developers will be able to access pretrained AI building blocks, including advanced models for detecting and avoiding obstacles and executing precision landings. These out-of-the-box capabilities eliminate the need for deep machine learning expertise, helping expand the universe of people who can start training autonomous aircraft, Malhi said.

A simulated drone flying next to a wind turbine.
North Dakota-based Airtonomy uses Project AirSim to train autonomous aerial vehicles that inspect critical infrastructure like wind turbines. Photo courtesy of Airtonomy.

Airtonomy, which participated in an early access program for Project AirSim, used it to help customers launch remote inspections of critical infrastructure quickly and safely, without the time, expense and risk of sending a crew to remote locations – and without deep technical backgrounds.

“We create autonomous capture routines for the frontline worker – people who don’t use drones and robots on a regular basis but need them to act like any other tool within their service vehicle,” Riedy said. With Airtonomy, not only does the drone inspect the asset automatically, but the captured data is also contextualized at the moment of capture. These features can be extended to any asset in any industry, enabling novel and automated end-to-end workflows.

“It’s amazing to see those responsible for our nation’s infrastructure use these tools literally with a push of a button and have a digital representation at their fingertips for things like outages, disaster response or route maintenance. Project AirSim is transforming how robotics and AI can be used in an applied fashion,” he said.

Using data from Bing Maps and other providers, Project AirSim customers will also be able to create millions of detailed 3D environments and access a library of specific locations, like New York City or London, or generic spaces, like an airport.

Microsoft is also working closely with industry partners to extend accurate simulation to weather, physics and – crucially – the sensors an autonomous machine uses to “see” the world. A collaboration with Ansys leverages the company’s high-fidelity, physics-based sensor simulations to provide customers with rich ground-truth information for autonomous vehicles. Meanwhile, Microsoft and MathWorks are working together so customers can bring their own physics models to the Project AirSim platform using Simulink.

As simulated flights occur, huge volumes of data get generated. Developers capture all that data and use it to train AI models through various machine learning methods.

Gathering this data is impossible to do in the real world, where you can’t afford to make millions of mistakes, said Matt Holvey, director of intelligent systems at Bell, which also participated in Project AirSim’s early access program. Often, you can’t afford to make one.

Given that, Bell is turning to Project AirSim to hone the ability of its drones to land autonomously. It’s a tough problem. What if the landing pad is covered in snow, leaves or standing water? Will the aircraft be able to recognize it? What if the rotor blades kick up dust, obscuring the vehicle’s vision? Project AirSim let Bell train its AI model on thousands of “what if” scenarios in a matter of minutes, helping it practice and perfect a critical maneuver before attempting it in the real world.

The future of autonomous flight

The emerging world of advanced aerial mobility will launch a diverse cast of vehicles into the skies, from hobbyist drones to sophisticated eVTOL (electric vertical take-off and landing) aircraft carrying passengers. And the potential use cases are almost limitless, Microsoft says: inspecting power lines and ports, ferrying packages and people in crowded cities, operating deep inside cramped mines or high above farmland.

Four simulated drones fly through forest settings.
In Project AirSim’s high-fidelity environments, AI models learn through trial and error exactly which actions to take at each phase of flight. Image from Microsoft.

But technology alone won’t usher in the world of autonomous flight. The industry must also chart a pathway through the world’s airspace monitoring systems and regulatory environments. The Project AirSim team is actively engaged with standards bodies, civil aviation and regulatory agencies in shaping the necessary standards and means of compliance to accelerate this industry.

Microsoft also plans to work with global civil aviation regulators on how Project AirSim might help with the certification of safe autonomous systems, Pall said, potentially creating scenarios inside the simulator that an autonomous vehicle must successfully navigate: in one case blinding rain, in another strong winds, in another a loss of GPS connectivity. If the vehicle can still get from Point A to Point B every time, Pall said, that could be an important step toward certification.

Online math tutoring service uses AI to help boost students’ skills and confidence

Like many students around the world, Eithne, 14, in Chorley, United Kingdom, was struggling to keep up in math at school after more than a year of COVID-19 related disruptions. In June 2021, her parents signed her up for a summer program offered by Eedi, an online math tutoring service.

“Just dealing with lockdown, she hadn’t had enough of a really good background,” said her mother, Arianna. “She missed most of the Year 7 Maths, then Year 8. So, we thought, ‘Let’s give it a go, let’s see where she needs a bit of help.’”

Newly enrolled students on Eedi are asked to take a dynamic quiz of 10 multiple choice diagnostic questions that the service uses to learn where students struggle most in math. This information allows the service to place students on a learning pathway to overcome those specific obstacles, or misconceptions.

“We ask them a question based roughly on their age group and then we say, ‘Well, what’s the next best question to ask them based on their previous answer?’” explained Iris Hulls, the head of operations at Eedi. “We learn as much about them as possible to predict either growth or comfort topics for them.”

The dynamic quiz is powered by AI developed by researchers at the Microsoft Research Lab in Cambridge, United Kingdom, who specialize in machine learning algorithms that help people make decisions.

The AI uses each answer to predict the probability the student will correctly answer each of thousands of other possible next questions and then weighs those probabilities to decide what question to ask next to pinpoint knowledge gaps.
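
One simple way to picture that selection step: after each answer, the model produces a predicted probability of a correct response for every remaining question, and the quiz asks the one whose outcome it is least sure about. The sketch below is an illustrative maximum-uncertainty heuristic, not Microsoft’s or Eedi’s actual model; the question IDs and probabilities are made up.

```python
import numpy as np

def entropy(p: float) -> float:
    """Shannon entropy (in bits) of a Bernoulli outcome with probability p."""
    p = float(np.clip(p, 1e-9, 1 - 1e-9))
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_best_question(correct_probs: dict) -> str:
    """Pick the unanswered question whose outcome is most uncertain.

    correct_probs maps question_id -> predicted probability of a correct
    answer, refreshed by the model after each response the student gives.
    """
    return max(correct_probs, key=lambda q: entropy(correct_probs[q]))

# Hypothetical predictions after a student's latest answer.
predictions = {"fractions_07": 0.92, "negative_numbers_12": 0.48, "ratio_03": 0.71}
print(next_best_question(predictions))  # -> "negative_numbers_12"
```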

The information gleaned from the quiz is akin to what a teacher might learn from a one-on-one conversation with a student, explained Cheng Zhang, a Microsoft principal researcher at the lab who led the development of the machine learning model that powers Eedi’s dynamic quiz.

“If the student doesn’t know 3 times 7, we may want to ask 1 plus 1,” Zhang said. “We want to adapt the quiz based on the previous answer.”

Once students’ misconceptions are identified, the Eedi platform slots students onto a learning pathway that helps them overcome their misconceptions and do better in math at school.

Eithne was slotted onto a pathway that included a review of topics covered in Year 8 and prepared her for success in Year 9, including geometry.

“It’s very good for finding your weaknesses and your strengths and being able to understand why you’re maybe not as good in this one area,” Eithne said. “You’re able to realize, ‘I’ve been doing this wrong for ages.’”

A girl sits at a desk with a laptop interacting with an online math quiz
Eithne, 14, in Chorley, United Kingdom, gained confidence in math through lessons on Eedi, an online tutoring service that uses AI developed by Microsoft. Photo by Jonathan Banks.

Good questions, good data

The success of Microsoft’s next-best-question model hinges on the data used to train it, noted Zhang. In Eedi’s case, these are thousands of vetted, high-quality diagnostic questions developed specifically to help teachers identify student misconceptions about math topics.

“Our technology is just an enhancer that makes this high-quality data give more insights,” Zhang said.

Diagnostic questions are well-thought-through multiple choice questions that have one correct answer and three wrong answers, with each wrong answer designed to reveal a specific misconception.

“Maths lends itself quite well to this kind of multiple-choice assessment because more often than not there’s a right answer and these wrong answers; it’s much less subjective than some of the humanities subjects,” said Craig Barton, an Eedi co-founder and the company’s director of education.

Barton latched on to the power of diagnostic questions when, as a math teacher, he attended a training course on formative assessment and learned that well-formulated wrong answers can provide insight into why a student is struggling.

“In the past, it was always kids got things right, which is fine, or they got things wrong and then I had to start doing detective work to figure out where they were going wrong,” he said. “That’s okay if you work one-to-one, but if you’ve got 30 kids in a class, that’s potentially quite time consuming.”

Good diagnostic questions, Barton said, must be clear and unambiguous, check for one thing, be answerable in 20 seconds, link each wrong answer to a misconception and ensure that a student is unable to answer it correctly while having a key misconception.

“This notion that the kids can’t get it right whilst having a key misconception is the hardest one to factor in, but it’s probably the most important,” he said.

For example, consider the question: “Which of the following is a multiple of 6? – A: 20, B: 62, C: 24, or D: 26.”

According to Barton, on the surface this is a decent question. That’s because students could think a “multiple” means the “6” is the first number (B) or last number (D), or the student could have difficulty with their multiplication tables and select A. The correct answer is C: 24.

“But the major flaw in this question is if you don’t know the difference between a factor and a multiple, you could get this question right, whereas experience will tell us that the biggest misconception students have with multiples is they mix them up with factors,” he said.

A better question to ask, then, is, “Which of these is a multiple of 15? – A: 1, B: 5, C: 60 or D: 55.” That’s because the possible answers include factors and multiples. The correct answer is C: 60. A student who confuses factors with multiples might instead pick A: 1 or B: 5, and a student who needs work on multiplication might pick D: 55.

“When you write these things, you’ve really got to think, ‘What are all the different ways kids can go wrong and how am I going to capture those in three wrong answers?’” Barton explained.

Screenshot of an online math quiz asking for the mean of five numbers with four choices for answers
In this diagnostic question, the correct answer is “B: 4.” Students who answer “A: 20” took only the first step to find the mean, totaling the numbers. “C: 3” represents confusion between the concepts of median and mean. “D: 2” is a mix-up of the concepts of mode and mean.
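
A diagnostic question can be thought of as a small data structure that pairs each distractor with the misconception it reveals. The sketch below is illustrative only, not Eedi’s schema; the numbers are chosen so the distractors line up with the answer pattern described in the caption above.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class DiagnosticQuestion:
    prompt: str
    correct: str
    distractors: Dict[str, str]  # wrong answer -> the misconception it reveals

# 2, 2, 3, 6, 7: the sum is 20, the mean is 4, the median is 3, the mode is 2.
question = DiagnosticQuestion(
    prompt="What is the mean of 2, 2, 3, 6, 7?",
    correct="4",
    distractors={
        "20": "Totaled the numbers but did not divide by how many there are",
        "3": "Confused the median with the mean",
        "2": "Confused the mode with the mean",
    },
)

def diagnose(q: DiagnosticQuestion, answer: str) -> str:
    """Return the misconception a wrong answer reveals, or note a correct one."""
    if answer == q.correct:
        return "Correct"
    return q.distractors.get(answer, "Unrecognized answer")

print(diagnose(question, "3"))  # -> Confused the median with the mean
```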

Teacher tools to online tutor

After the workshop, Barton went home and wrote about 50 diagnostic questions and tested them out on students in his class. They worked.

Barton is also a math book author and podcaster with thousands of followers on social media. He used his influence to spread the word on diagnostic questions and collaborated with Eedi co-founder Simon Woodhead to build an online database with thousands of diagnostic questions for teachers to access for their lesson planning.

“Then I thought, ‘Wait a minute, we could do something a bit better than this,’” Barton said. “’Imagine if the kids could answer the questions online and we could capture that data and then, before you know it, we’ve got insights into specific areas where students struggle.’”

The website exploded in popularity and attracted investors as well as the attention of Hulls, who along with colleagues was exploring options to use data to scale and make the benefits of math tutoring accessible to more families. The team formed Eedi. An advisor introduced them to Zhang and her team’s research on the next-best-question algorithm, which aims to accelerate decision making by gathering and analyzing relevant personal information.

At the time, the Microsoft researchers were working on healthcare scenarios, using AI to help doctors more efficiently make decisions about what tests to order to diagnose patient ailments.

For example, if a patient walks into an emergency room with a hurt arm, the doctor will ask a series of questions leading up to an X-ray, such as “How did you hurt your arm?” and, “Can you move your fingers?” instead of, “Do you have a cold?” because the answer will reveal relevant information for this patient’s treatment. The next-best-question algorithm automates this information gathering process.

The advisor thought the model would work well with Eedi’s dataset of diagnostic questions, automating the collection of information a tutor could glean from a one-on-one conversation with a student.

“We were aware that we had collected a lot of data. We wanted to do smarter stuff with our data; we wanted to be able to predict what misconceptions students might have before they even answer questions,” said Woodhead, who is Eedi’s chief data scientist.

The Eedi team worked with the Microsoft researchers to train the model on their diagnostic questions to efficiently pinpoint where students need the most support in math.

The model works without collecting any personal identifying information from the students, Woodhead noted.

“It doesn’t need to know a name. It doesn’t need to know an email address. It’s looking at patterns,” he said.

From this information, the system can pinpoint the best lessons for students to take on Eedi. Without that guidance, students tend to rely on strategies they’re already using at school, which isn’t the right starting point for the majority of students who are looking for a private tutor, according to Hulls.

“It really helps direct the children and their families at home to know where to start,” she said.

How AI is helping create a more inclusive TV experience in Japan for those who are deaf or hard of hearing

Around the world, there is increasing demand for subtitles. In the United Kingdom, for instance, the BBC reports that while subtitles are primarily intended to serve viewers with hearing loss, they are used by a wide range of people: around 10 percent of broadcast viewers use subtitles regularly, rising to 35 percent for some online content, and the majority of these viewers are not hard of hearing.

Similar trends are being recorded around the world for television, social media and other channels that provide video content.  

It is estimated that over 360,000 people in Japan are Deaf or Hard of Hearing; 70,000 of them use sign language as their primary form of communication, while the rest prefer written Japanese as their primary way of accessing content. Additionally, with nearly 30 percent of people in Japan aged 65 or older, the Japan Hearing Aid Industry Association estimates that 14.2 million people have a hearing disability.

Major Japanese broadcasters provide subtitles for a majority of their programs, a process that requires dedicated staff and specialized equipment valued at tens of millions of Japanese yen. “Over 100 local TV channels in Japan face barriers in providing subtitles for live programs due to the high cost of equipment and limitations of personnel,” said Muneya Ichise from SI-com. These local stations are highly important to the communities they serve, with local news programs conveying significant updates about the area and its people.

To address this accessibility need, SI-com and its parent company, ISCEC Japan, have been piloting innovative, cost-efficient ways of introducing subtitles to live broadcasts with local TV stations since 2018. Their solution, AI Mimi, pairs human input with Microsoft Azure Cognitive Services, and the hybrid format makes subtitling both faster and more accurate. ISCEC is also able to compensate for the local shortage of people who input subtitles by drawing on its own specialized personnel. AI Mimi has also been introduced at Okinawa University, and the innovation was recognized with a Microsoft AI for Accessibility grant.
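
For a sense of the speech-to-text half of that pairing, here is a minimal sketch of continuous recognition with the Azure Speech SDK (pip install azure-cognitiveservices-speech). The key, region and language are placeholders, and AI Mimi’s human-correction and on-screen subtitle rendering sit outside the SDK, so they appear only as comments.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; a real deployment would load these from configuration.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="japaneast")
speech_config.speech_recognition_language = "ja-JP"

# Uses the default microphone as the audio source.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

def on_recognized(evt):
    # Each finalized phrase could be routed to a human corrector and then
    # rendered as a subtitle line on the broadcast screen.
    print(evt.result.text)

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
input("Recognizing speech; press Enter to stop...\n")
recognizer.stop_continuous_recognition()
```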

Based on extensive testing and user feedback, which centered on the need for bigger fonts and better on-screen display of subtitles, SI-com created a layout with more than 10 lines of subtitles on the right side of the TV screen, moving away from the more common format of just two lines at the bottom. In December 2021, the team demoed the technology in a live broadcast for the first time, partnering with a local TV channel in Nagasaki.

Two presenters in a live TV program with subtitles provided real time on the right side using a combination of AI and human input.
TV screenshot of demo with local TV channel in Nagasaki

Microsoft’s framework for building AI systems responsibly

Responsible AI graphic

Today we are sharing publicly Microsoft’s Responsible AI Standard, a framework to guide how we build AI systems. It is an important step in our journey to develop better, more trustworthy AI. We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI. 

Guiding product development towards more responsible outcomes
AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.    

The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.  

The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle. Finally, the Standard maps available tools and practices to specific requirements so that Microsoft’s teams implementing it have resources to help them succeed.  

Core components of Microsoft’s Responsible AI Standard graphic
The core components of Microsoft’s Responsible AI Standard

The need for this type of practical guidance is growing. AI is becoming more and more a part of our lives, and yet, our laws are lagging behind. They have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we need to work towards ensuring AI systems are responsible by design. 

Refining our policy and learning from our product experiences
Over the course of a year, a multidisciplinary group of researchers, engineers, and policy experts crafted the second version of our Responsible AI Standard. It builds on our previous responsible AI efforts, including the first version of the Standard that launched internally in the fall of 2019, as well as the latest research and some important lessons learned from our own product experiences.   

Fairness in Speech-to-Text Technology  

The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems. In March 2020, an academic study revealed that speech-to-text technology across the tech sector produced error rates for members of some Black and African American communities that were nearly double those for white users. We stepped back, considered the study’s findings, and learned that our pre-release testing had not accounted satisfactorily for the rich diversity of speech across people with different backgrounds and from different regions. After the study was published, we engaged an expert sociolinguist to help us better understand this diversity and sought to expand our data collection efforts to narrow the performance gap in our speech-to-text technology. In the process, we found that we needed to grapple with challenging questions about how best to collect data from communities in a way that engages them appropriately and respectfully. We also learned the value of bringing experts into the process early, including to better understand factors that might account for variations in system performance.  

The Responsible AI Standard records the pattern we followed to improve our speech-to-text technology. As we continue to roll out the Standard across the company, we expect the Fairness Goals and Requirements identified in it will help us get ahead of potential fairness harms. 

Appropriate Use Controls for Custom Neural Voice and Facial Recognition 

Azure AI’s Custom Neural Voice is another innovative Microsoft speech technology that enables the creation of a synthetic voice that sounds nearly identical to the original source. AT&T has brought this technology to life with an award-winning in-store Bugs Bunny experience, and Progressive has brought Flo’s voice to online customer interactions, among uses by many other customers. This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners. 

Our review of this technology through our Responsible AI program, including the Sensitive Uses review process required by the Responsible AI Standard, led us to adopt a layered control framework: we restricted customer access to the service, ensured acceptable use cases were proactively defined and communicated through a Transparency Note and Code of Conduct, and established technical guardrails to help ensure the active participation of the speaker when creating a synthetic voice. Through these and other controls, we helped protect against misuse, while maintaining beneficial uses of the technology.  

Building upon what we learned from Custom Neural Voice, we will apply similar controls to our facial recognition services. After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing the use cases to pre-defined acceptable ones, and leveraging technical controls engineered into the services. 

Fit for Purpose and Azure Face Capabilities 

Finally, we recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve. As part of our work to align our Azure Face service to the requirements of the Responsible AI Standard, we are also retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.  

Taking emotional states as an example, we have decided we will not provide open-ended API access to technology that can scan people’s faces and purport to infer their emotional states based on their facial expressions or movements. Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of “emotions,” the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability. We also decided that we need to carefully analyze all AI systems that purport to infer people’s emotional states, whether the systems use facial analysis or any other AI technology. The Fit for Purpose Goal and Requirements in the Responsible AI Standard now help us to make system-specific validity assessments upfront, and our Sensitive Uses process helps us provide nuanced guidance for high-impact use cases, grounded in science. 

These real-world challenges informed the development of Microsoft’s Responsible AI Standard and demonstrate its impact on the way we design, develop, and deploy AI systems.  

For those wanting to dig into our approach further, we have also made available some key resources that support the Responsible AI Standard: our Impact Assessment template and guide, and a collection of Transparency Notes. Impact Assessments have proven valuable at Microsoft to ensure teams explore the impact of their AI system – including its stakeholders, intended benefits, and potential harms – in depth at the earliest design stages. Transparency Notes are a new form of documentation in which we disclose to our customers the capabilities and limitations of our core building block technologies, so they have the knowledge necessary to make responsible deployment choices. 

Core principles graphic
The Responsible AI Standard is grounded in our core principles

A multidisciplinary, iterative journey
Our updated Responsible AI Standard reflects hundreds of inputs across Microsoft technologies, professions, and geographies. It is a significant step forward for our practice of responsible AI because it is much more actionable and concrete: it sets out practical approaches for identifying, measuring, and mitigating harms ahead of time, and requires teams to adopt controls to secure beneficial uses and guard against misuse. You can learn more about the development of the Standard in this video.   

While our Standard is an important step in Microsoft’s responsible AI journey, it is just one step. As we make progress with implementation, we expect to encounter challenges that require us to pause, reflect, and adjust. Our Standard will remain a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company.  

There is a rich and active global dialog about how to create principled and actionable norms to ensure organizations develop and deploy AI responsibly. We have benefited from this discussion and will continue to contribute to it. We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another. Together, we need to answer open research questions, close measurement gaps, and design new practices, patterns, resources, and tools.  

Better, more equitable futures will require new guardrails for AI. Microsoft’s Responsible AI Standard is one contribution toward this goal, and we are engaging in the hard and necessary implementation work across the company. We’re committed to being open, honest, and transparent in our efforts to make meaningful progress. 

Microsoft announces Viva Sales, redefining the seller experience and enhancing productivity

Viva Sales works with any seller’s CRM to automate data entry and brings AI-powered intelligence to sellers in Microsoft 365 and Microsoft Teams

REDMOND, Wash. — June 16, 2022 — On Thursday, Microsoft Corp. announced Microsoft Viva Sales, a new seller experience application. Viva Sales enriches any CRM system with customer engagement data from Microsoft 365 and Microsoft Teams, and leverages AI to provide personalized recommendations and insights for sellers to be more connected with their customers. This helps sellers more seamlessly personalize their customer engagements toward faster deal closure.

Employees are demanding more from their employers in today’s hybrid world — from the tools they use, to the hours and locations they work. This is especially true for salespeople. Viva Sales enables sellers to capture insights from across Microsoft 365 and Teams, eliminate manual data entry, and receive AI-driven recommendations and reminders — all the while staying in the flow of work. Viva Sales streamlines the seller experience by surfacing the insights with the right context within tools salespeople already use, saving sellers time and providing the organization with a more complete view of the customer.

“The future of selling isn’t a new system. It’s bringing the information sellers need at the right time, with the right context, into the tools they know, so their work experience can be streamlined,” said Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft. “Empowering sellers to spend more time with their customers has been our goal — and we’ve done that by reimagining the selling experience with Viva Sales.”

“Sellers rely on digital collaboration and productivity tools to connect with customers and close deals, but a lot of the insights they uncover with these tools don’t make it into the CRM,” said Paul Greenberg, founder and managing principal, The 56 Group. “Microsoft is taking on this challenge by offering a solution that complements the CRM. Viva Sales automates the busy work, captures critical information about the customer and helps sellers get the job done.”

Reimagining the seller experience

Viva Sales builds on Microsoft Viva, which was launched last year. Viva provides an integrated employee experience platform that brings together communications, knowledge, learning, goals and insights to empower every person and team to be their best from anywhere. Viva Sales delivers the first role-based Viva application designed specifically for sellers:

  • Viva Sales provides tools for sellers to do their jobs, while providing the insights that sales leadership needs. As sellers are working, they can tag customers in Outlook, Teams or Office applications like Excel, and Viva Sales will automatically capture it as a customer record, layered with all relevant data about the customer. Being able to automatically capture this level of customer engagement data was not available previously. This data can easily be shared with team members while collaborating in Office and Teams without retyping or looking it up in a CRM.
  • Powered by data and AI, Viva Sales recommends next steps to progress a customer through the sales funnel, prioritizes work and next steps, and enables sellers to access full history and customer interaction materials. Real-time customer insights provide a deeper understanding of where each customer is in their purchase journey, and how to guide the relationship.
  • Viva Sales also provides AI-driven recommendations to enable sellers to enhance their customer engagement — optimizing follow-through with next best steps, actionable reminders, and recommendations to accelerate and close more sales.  Viva Sales is using Context IQ, announced last fall, to ensure relevant content is connected across Microsoft apps and services — like Dynamics 365 and Microsoft 365 — so sellers save time and stay in the flow of work.

With Viva Sales, we are creating a new category of application addressing the selling experience. Microsoft is uniquely positioned to offer this type of application within the productivity and collaboration apps employees are already using.

To learn more, visit the Official Microsoft Blog.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications, (425) 638-7777, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

P&G and Microsoft co-innovate to build the future of digital manufacturing

Fork lift driver moves a pallet of Bounty paper towels in a warehouse
P&G and Microsoft announce collaboration to build the future of digital manufacturing. Photo courtesy of P&G

Microsoft technology empowers scalability for consumer products leader

CINCINNATI and REDMOND, Wash. — June 8, 2022 — On Wednesday, The Procter & Gamble Company (NYSE: PG) (P&G) and Microsoft Corp. announced a new multiyear collaboration that will leverage the Microsoft Cloud to help create the future of digital manufacturing at P&G.

The two companies will co-innovate to accelerate and expand P&G’s digital manufacturing platform and leverage the Industrial Internet of Things (IIoT) to bring products to consumers faster, increase customer satisfaction and improve productivity to reduce costs.

“Together with Microsoft, P&G intends to make manufacturing smarter by enabling scalable predictive quality, predictive maintenance, controlled release, touchless operations and manufacturing sustainability optimization — which has not been done at this scale in the manufacturing space to date,” said P&G CIO Vittorio Cretella. “At P&G, data and technology are at the heart of our business strategy and are helping create superior consumer experiences. This first-of-its-kind co-innovation agreement will digitize and integrate data to increase quality, efficiency and sustainable use of resources to help deliver those superior experiences.”

With Microsoft Azure as the foundation, the new collaboration marks the first time that P&G can digitize and integrate data from more than 100 manufacturing sites around the world and enhance its AI, machine learning and edge computing services for real-time visibility. This will enable P&G employees to analyze production data and leverage artificial intelligence to immediately make decisions that drive improvement and exponential impact. Accessing this level of data, at scale, is rare within the consumer goods industry.

P&G selected Microsoft as its preferred cloud provider to build the future of digital manufacturing based on a four-year history of successfully working together on data and AI. The new collaborative effort will:

  • Allow for better utilization of data, AI capabilities and digital twins technology.
  • Optimize manufacturing environmental sustainability efforts.
  • Increase workforce efficiency and productivity.

“We are excited to help P&G accelerate its digital manufacturing platform using Microsoft Azure, AI and IIoT to accommodate volatility in the consumer products industry with innovative, agile solutions that can easily scale based on market conditions,” said Judson Althoff, Microsoft’s chief commercial officer. “Our partnership will further P&G’s growth and business transformation through digital technology that seamlessly connects people, assets, workflow and business processes that promote resiliency.”

Empowering technicians and advancing operations with IIoT

P&G is already innovating and using Azure IoT Hub and IoT Edge to help manufacturing technicians analyze insights with greater speed and efficiency, creating improvements in the production of its baby care and paper products with pilot projects happening in Egypt, India, Japan and the United States.

Diapers and data: Quality control and process improvements

P&G is making advancements in its diaper manufacturing process to reduce manufacturing downtime, minimize scrap and lower maintenance expenses by automatically detecting and resolving the largest causes of line stops and rework using machine learning. The production of diapers involves assembling many layers of material at high speed with great precision to ensure optimal absorbency, superior leak protection and outstanding comfort. The new IIoT platform uses machine telemetry and high-speed analytics to continuously monitor production lines to provide early detection and prevention of potential issues in the material flow. This improves cycle time, reduces rework losses and ensures quality, while simultaneously improving operator productivity.
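
As an illustration of the plumbing such a platform relies on, and not P&G’s actual implementation, the sketch below shows how a line sensor might publish telemetry to Azure IoT Hub with the azure-iot-device Python SDK. The connection string, device ID and payload fields are placeholders.

```python
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; real deployments keep this in secure config.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=line-sensor-01;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

try:
    while True:
        # Hypothetical readings from a converting line.
        payload = {
            "line_speed_m_per_min": 250.0,
            "web_tension_newtons": 41.7,
            "timestamp": time.time(),
        }
        message = Message(json.dumps(payload))
        message.content_type = "application/json"
        client.send_message(message)  # downstream analytics watch for anomalies
        time.sleep(1)
finally:
    client.shutdown()
```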

Pioneering paper towels

In a pilot with Microsoft, P&G can now better predict finished paper towel sheet lengths, improving the ability to deliver the right amount of product to the consumer. With the new IIoT platform, P&G can collect data from sensors on the manufacturing line and use technologies like advanced algorithms, machine learning and predictive analytics so it can improve manufacturing efficiencies.

Increasing sustainability and predicting equipment failure

To optimize manufacturing sustainability, P&G will use Microsoft’s machine learning and data storage platforms to improve energy utilization across its paper machines in Family Care. With the efficiency and speed of cloud computing, P&G teams can analyze large volumes of holistic data sets and pinpoint energy efficiency and machine maintenance opportunities across the manufacturing process. The Azure platform will allow P&G to easily integrate event summary data — such as production runs, downtime, changeovers and more — along with historical data.

Co-innovation with a new Digital Enablement Office and incubator

To accelerate technology integration and support pilot programs, Microsoft and P&G have co-created a Digital Enablement Office (DEO) staffed by experts from both organizations. They will jointly deploy the Azure platform, and the DEO also intends to serve as an incubator to create high-priority business scenarios in the areas of product manufacturing and packaging processes that can be implemented across P&G.

About Procter & Gamble

P&G serves consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ambi Pur®, Ariel®, Bounty®, Charmin®, Crest®, Dawn®, Downy®, Fairy®, Febreze®, Gain®, Gillette®, Head & Shoulders®, Lenor®, Olay®, Oral-B®, Pampers®, Pantene®, SK-II®, Tide®, Vicks®, and Whisper®. The P&G community includes operations in approximately 70 countries worldwide. Please visit http://www.pg.com for the latest news and information about P&G and its brands. For other P&G news, visit us at www.pg.com/new.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Libby Coulton, P&G Media Relations, [email protected], and Rick Cohen, H&K for P&G, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

Studying whether AI can drive innovation in personal assistant devices and sign language

Advancing tech innovation and combating the data desert that exists related to sign language have been areas of focus for the AI for Accessibility program. Toward those goals, in 2019 the team hosted a sign language workshop, soliciting applications from top researchers in the field. Abraham Glasser, a Ph.D. student in Computing and Information Sciences and a native American Sign Language (ASL) signer, supervised by Professor Matt Huenerfauth, was awarded a three-year grant. His work focuses on a very pragmatic need and opportunity: driving inclusion by concentrating on and improving common interactions with home-based smart assistants for people who use sign language as a primary form of communication.

Since then, faculty and students in the Golisano College of Computing and Information Sciences at Rochester Institute of Technology (RIT) have conducted the work at the Center for Accessibility and Inclusion Research (CAIR). CAIR publishes research on computing accessibility, and it includes many Deaf and Hard of Hearing (DHH) students who work bilingually in English and American Sign Language.

To begin this research, the team investigated how DHH users would prefer to interact with their personal assistant devices, be it a smart speaker or another type of device in the household that responds to spoken commands. Traditionally, these devices have used voice-based interaction, and as the technology has evolved, newer models have added cameras and display screens. Currently, none of the devices on the market understand commands in ASL or other sign languages, so introducing that capability is an important future development that would address an untapped customer base and drive inclusion. Abraham explored simulated scenarios in which, through the camera on the device, the technology would watch a user signing, process the request, and display the result on the device’s screen.
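To make that envisioned interaction concrete, the sketch below lays out the camera-to-screen flow as placeholder functions. Every function and type here is hypothetical, since no shipping device recognizes ASL commands today; the example only illustrates the pipeline the study simulated.

```python
# Hypothetical sketch of the camera-to-screen flow the study simulated.
# No shipping smart-home device recognizes ASL today, so every function here
# is a placeholder used only to make the envisioned pipeline concrete.
from dataclasses import dataclass

@dataclass
class SignedRequest:
    gloss: str          # e.g. "WEATHER TOMORROW" in ASL gloss
    confidence: float

def recognize_signing(video_frames) -> SignedRequest:
    # Placeholder for a future sign-language recognition model running on
    # the device's camera feed.
    return SignedRequest(gloss="WEATHER TOMORROW", confidence=0.9)

def fulfill(request: SignedRequest) -> str:
    # Placeholder for the assistant's usual intent handling and answer lookup.
    return "Tomorrow: sunny, high of 31°C"

def respond_on_screen(text: str) -> None:
    # The study assumed visual output (text or ASL video) on the device screen.
    print(text)

respond_on_screen(fulfill(recognize_signing(video_frames=[])))
```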

Some prior research had focused on the phases of interacting with a personal assistant device, but little of it included DHH users. Examples of available research included studies of device activation, including the challenge of waking up a device, and of device output modalities in the form of videos, ASL avatars and English captions. The call to action from a research perspective was to collect more data, the key bottleneck for sign language technologies.

To pave the way for technological advancements, it was critical to understand how DHH users would like the interaction with these devices to look and what types of commands they would like to issue. Abraham and the team built a Wizard-of-Oz videoconferencing setup. A “wizard” ASL interpreter had a home personal assistant device in the room with them and joined the call without being seen on camera. The device’s screen and output were visible in the call’s video window, and each participant was guided by a research moderator. As the Deaf participants signed to the personal home device, they did not know that the ASL interpreter was voicing the commands in spoken English. A team of annotators then watched the recordings, identified key segments of the videos, and transcribed each command into English and ASL gloss.
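A hypothetical sketch of the kind of record such an annotation pass might produce is shown below; the field names and example values are assumptions for illustration, not the study’s actual schema.

```python
# Hypothetical sketch of an annotated-command record; field names and values
# are illustrative, not the study's actual annotation schema.
from dataclasses import dataclass

@dataclass
class AnnotatedCommand:
    participant_id: str
    start_sec: float        # segment boundaries within the session recording
    end_sec: float
    wake_up_sign: str       # e.g. "HEY" or fingerspelled "A-L-E-X-A"
    asl_gloss: str          # transcription of the signed command
    english: str            # spoken-English equivalent voiced by the interpreter

example = AnnotatedCommand("P07", 112.4, 118.9, "HEY",
                           "WEATHER TOMORROW", "What's the weather tomorrow?")
print(example)
```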

Abraham was able to identify new ways that users would interact with the device, such as “wake-up” commands, which were not captured in previous research.

Six photographs of video screenshots of ASL signers who are looking into the video camera while they are in various home settings. The individuals shown in the video are young adults of a variety of demographic backgrounds, and each person is producing an ASL sign.
Screenshots of various “wake up” signs produced by participants during the study conducted remotely by researchers from the Rochester Institute of Technology.  Participants were interacting with a personal assistant device, using American Sign Language (ASL) commands which were translated by an unseen ASL interpreter, and they spontaneously used a variety of ASL signs to activate the personal assistant device before giving each command.  The signs here include examples labeled as: (a) HELLO, (b) HEY, (c) HI, (d) CURIOUS, (e) DO-DO, and (f) A-L-E-X-A.

How one of the world’s largest wind companies is using AI to capture more energy

In 1898, Hans Søren Hansen arrived in Lem, Denmark, a small farming town about 160 miles from Copenhagen. The 22-year-old was eager to make his way in business and bought a blacksmith shop. In time, he became known to those in the area for his innovative spirit.

Hansen’s business went on to change with the times, shifting into building steel window frames. Future generations carried on Hansen’s openness to change, moving into hydraulic cranes and ultimately, in 1987, becoming Vestas Wind Systems, one of the largest wind turbine manufacturers in the world.

That tenacity to adapt and succeed has continued to define Vestas, which is now looking to optimize wind energy efficiency for customers who use its turbines in 85 countries.

Working on a proof of concept with Microsoft and Microsoft partner minds.ai, Vestas successfully used artificial intelligence (AI) and high-performance computing to generate more energy from wind turbines by optimizing what is known as wake steering.

That potential energy increase is important. But just as important, Vestas says, was how quickly the proof of concept was developed, in a matter of months, and what that speed could mean for putting wake steering into practice. The company is not the first to study the technique, but the expedited results set its effort apart.

Sven Jesper Knudsen, Vestas Chief Specialist and modeling and analytics module design owner.

“This is a theoretical exercise that has been living in the research community for years,” says Sven Jesper Knudsen, Vestas chief specialist and modeling and analytics module design owner. “And there have been some demonstrations by both our competitors and also some wind farm owners. We wanted to see if we could try to shorten the development cycle.

“Time to market is essential to the whole wind industry to meet aggressive targets that we all have,” Knudsen says.

Wind energy, like solar, is a clean alternative to fossil fuels for generating electricity. Both are of growing importance as the world looks to decrease the use of coal, gas and crude oil in order to reduce carbon emissions and meet climate change goals.

Wind power also is one of the fastest-growing renewable energy technologies, according to the International Energy Agency (IEA), an organization that works with governments and industry to help them shape and secure a sustainable energy future.

By 2050, two-thirds of the world’s total energy supply is expected to come from wind, solar, bioenergy, geothermal and hydro energy, with wind power projected to increase 11-fold, the agency said in a report last year, Net Zero by 2050: A Roadmap for the Global Energy Sector.

“In the net zero pathway, global energy demand in 2050 is around 8% smaller than today, but it serves an economy more than twice as big and a population with 2 billion more people,” the IEA says in the report.

Wind energy has many advantages. But one challenge is that the amount of energy harnessed can change daily based on wind conditions. Finding ways to capture more of the available wind energy is important to Vestas, which is why the company launched what it described last year as the “Grand Challenge.”

A woman works in Vestas’ blades factory in Nakskov, in south Denmark. (Photo courtesy of Vestas)

Wind turbines cast a wake, or “shadow effect,” that can slow other turbines located downstream, Knudsen says. Some of that energy can be recaptured using wake steering: turning an upstream turbine’s rotor slightly away from the oncoming wind so that its wake is deflected away from the turbines behind it.

“The idea is that you control that shadow effect away from downstream turbines and you then channel more wind energy to these downstream turbines,” he says.
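The trade-off can be shown with a deliberately simplified toy calculation, which is not Vestas’ model or physics: the upstream turbine gives up a little power by yawing out of the wind, while the downstream turbine gains more because the wake is steered aside. All coefficients below are illustrative assumptions.

```python
# A deliberately simplified toy model of the wake-steering trade-off, not
# Vestas' physics: yawing the upstream turbine costs it some power but
# deflects its wake, so the downstream turbine recovers more. All numbers
# here are illustrative assumptions.
import math

def farm_power(yaw_deg: float) -> float:
    yaw = math.radians(yaw_deg)
    upstream = math.cos(yaw) ** 1.88               # cosine-loss approximation
    # Toy assumption: the downstream wake deficit shrinks linearly as the
    # wake is steered aside, bottoming out at zero.
    wake_deficit = max(0.0, 0.30 - 0.012 * abs(yaw_deg))
    downstream = 1.0 - wake_deficit
    return upstream + downstream                   # in units of one turbine's free-stream power

best = max(range(0, 31), key=farm_power)
print(f"Toy optimum: {best} deg yaw, "
      f"farm power {farm_power(best):.3f} vs {farm_power(0):.3f} with no steering")
```

In Vestas’ actual work, this kind of search is carried out by reinforcement learning over far richer simulations, as described below.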

To accomplish this, Vestas used Microsoft Azure high-performance computing, Azure Machine Learning and help from Microsoft partner minds.ai, which used DeepSim, its reinforcement learning-based controller design platform.

Reinforcement learning is a type of machine learning in which AI agents interact with and learn from their environment in real time, largely by trial and error. An agent tries out different actions in either a real or simulated world and receives a reward, say, higher points, when an action achieves a desired result.
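As a minimal, hypothetical sketch of that trial-and-error loop, and not DeepSim or Vestas’ controller, the example below uses a simple epsilon-greedy strategy to learn which yaw offset yields the most simulated farm power; the reward function and all numbers are made up.

```python
# Minimal sketch of the trial-and-error idea, not DeepSim or Vestas' controller:
# an epsilon-greedy agent repeatedly picks a yaw offset, observes simulated farm
# power as its reward, and keeps a running estimate of each action's value.
# The reward function below is a made-up stand-in for a wind-farm simulator.
import random

actions = [0, 5, 10, 15, 20, 25]          # candidate yaw offsets in degrees
value = {a: 0.0 for a in actions}          # running reward estimate per action
count = {a: 0 for a in actions}

def simulated_reward(yaw: float) -> float:
    # Hypothetical simulator: farm power peaks near 20 degrees, plus noise.
    return 1.8 - 0.0015 * (yaw - 20) ** 2 + random.gauss(0, 0.02)

for _ in range(2000):
    # Mostly exploit the best-known action, occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    reward = simulated_reward(action)
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]   # incremental mean

print("Learned best yaw offset:", max(actions, key=lambda a: value[a]), "degrees")
```

DeepSim and Azure Machine Learning operate on far richer simulations and controllers, but the learn-from-reward loop is the same basic idea.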

Vestas’ use of Azure high-performance computing also meant getting results faster.