
Responsible machine learning capabilities build trust in AI systems, developers say

Anyone who runs a business knows that one of the hardest things to do is accuse a customer of malfeasance. That’s why, before members of Scandinavian Airlines’ fraud detection unit accuse a customer of attempting to scam the carrier’s loyalty points program, the detectives need confidence that their case is solid.

“It would hurt us even more if we accidentally managed to say that something is fraud, but it isn’t,” said Daniel Engberg, head of data analytics and artificial intelligence for SAS, which is headquartered in Stockholm, Sweden.

The airline is currently flying a reduced schedule with limited in-flight services to help slow the spread of COVID-19, the disease caused by the novel coronavirus. Before the restrictions, SAS handled more than 800 departures per day and 30 million passengers per year. Maintaining the integrity of the EuroBonus loyalty program is paramount as the airline waits for regular operations to resume, noted Engberg.

EuroBonus scammers, he explained, try to gain as many points as quickly as possible to either book reward travel for themselves or to sell. When fraud occurs, legitimate customers lose an opportunity to claim seats reserved for the loyalty program and SAS loses out on important business revenue.

Today, a large portion of leads on EuroBonus fraud come from an AI system that Engberg and his team built with Microsoft Azure Machine Learning, a service for building, training and deploying machine learning models that are easy to understand, protect and control.

The SAS AI system processes streams of real-time flight, transaction, award claims and other data through a machine learning model with thousands of parameters to find patterns of suspicious behavior.

To understand the model predictions, and thus chase leads and build their cases, the fraud detection unit relies on an Azure Machine Learning capability called interpretability, powered by the InterpretML toolkit. This capability explains what parameters were most important in any given case. For example, it could point to parameters that suggest a scam of pooling points from ghost accounts to book flights.
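InterpretML offers a range of explanation methods whose APIs are not reproduced here; as a sketch of the underlying idea, the pure-Python example below scores each input of a toy fraud model by permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The model logic and feature names are hypothetical, invented for illustration.

```python
import random

# Hypothetical fraud features for illustration:
# (points earned per day, linked accounts, customer tenure in years)
def fraud_model(row):
    points_per_day, linked_accounts, tenure_years = row
    return 1 if points_per_day > 500 and linked_accounts > 3 else 0

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=30, seed=0):
    """Importance of feature i = mean accuracy drop when column i is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for i in range(len(X[0])):
        drop = 0.0
        for _ in range(n_repeats):
            col = [row[i] for row in X]
            rng.shuffle(col)
            X_perm = [row[:i] + (v,) + row[i + 1:] for row, v in zip(X, col)]
            drop += base - accuracy(model, X_perm, y)
        importances.append(drop / n_repeats)
    return importances

X = [(900, 5, 1), (100, 1, 10), (700, 4, 2), (50, 0, 8)]
y = [1, 0, 1, 0]
importances = permutation_importance(fraud_model, X, y)
# Tenure (feature 2) never affects this model, so its importance is zero,
# while the two features the model actually uses score above zero.
```

A real explainer reports the same kind of ranking over a fraud model's thousands of parameters, which is what lets an investigator see, say, that pooled ghost accounts drove a particular flag.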

Model interpretability helps take the mystery out of machine learning, which in turn can build confidence and trust in model predictions, noted Engberg.

“If we build the trust in these models, people start using them and then we can actually start reaping the benefits that the machine learning promised us,” he said. “It’s not about explainability for explainability’s sake. It’s being able to provide both our customers and our own employees with insights into what these models are doing and how they are taking positions for us.”

A graphic for Azure Machine Learning, three sections for Understand, Control, and Protect
Graphic courtesy of Microsoft.

Understand, protect and control your machine learning solution

Over the past several years, machine learning has moved out of research labs and into the mainstream, and has transformed from a niche discipline for data scientists with Ph.D.s to one where all developers are expected to be able to participate, noted Eric Boyd, corporate vice president of Microsoft Azure AI in Redmond, Washington.

Microsoft built Azure Machine Learning to enable developers across the spectrum of data science expertise to build and deploy AI systems. Today, noted Boyd, all developers are increasingly asked to build AI systems that are easy to explain and that comply with non-discrimination and privacy regulations.

“It is very challenging to have a good sense of, ‘Hey, have I really assessed whether my model is behaving fairly?’ or ‘Do I really understand why this particular model is predicting the way it is?’” he said.

To navigate these hurdles, Microsoft today announced innovations in responsible machine learning that can help developers understand, protect and control their models throughout the machine learning lifecycle. These capabilities can be accessed through Azure Machine Learning and are also available in open source on GitHub.

The ability to understand model behavior includes the interpretability capabilities powered by the InterpretML toolkit that SAS uses to detect fraud in the EuroBonus loyalty program.

In addition, Microsoft said the Fairlearn toolkit, which includes capabilities to assess and improve the fairness of AI systems, will be integrated with Azure Machine Learning in June.

Microsoft also announced that WhiteNoise, a toolkit for differential privacy, is now available to developers to experiment with in open source on GitHub and can also be accessed through Azure Machine Learning. The differential privacy capabilities were developed in collaboration with researchers at the Harvard Institute for Quantitative Social Science and School of Engineering.

Differential privacy techniques make it possible to derive insights from private data while providing statistical assurances that private information such as names or dates of birth can be protected.

For example, differential privacy could enable a group of hospitals to collaborate on building a better predictive model on the efficacy of cancer treatments while at the same time helping to adhere to legal requirements to protect the privacy of hospital information and helping to ensure that no individual patient data leaks out from the model.
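The WhiteNoise toolkit's own API is not shown here, but the mechanism at the heart of differential privacy can be sketched in a few lines: answer a counting query with Laplace noise calibrated to the query's sensitivity and a privacy budget epsilon. The patient records below are invented for illustration.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count: the true count plus Laplace noise.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical patient records: (age, responded_to_treatment)
records = [(54, True), (61, False), (47, True), (70, True), (58, False)]
rng = random.Random(0)
noisy = dp_count(records, lambda r: r[1], epsilon=1.0, rng=rng)
# 'noisy' is close to the true count of 3, but no single record can be
# inferred from it; a smaller epsilon means more noise and stronger privacy.
```

In aggregate the noise averages out, which is why collaborating hospitals could still learn accurate treatment statistics while any individual patient's presence in the data stays masked.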

Azure Machine Learning also has built-in controls that enable developers to track and automate the entire process of building, training and deploying a model. This capability, known as machine learning operations, or MLOps, provides an audit trail to help organizations meet regulatory and compliance requirements.

“MLOps is really thinking around the operational, repeatable side of machine learning,” said Boyd. “How do I keep track of all the different experiments that I have run, the parameters that were set with them, the datasets that were used in creating them. And then I can use that to recreate those same things.”
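Azure Machine Learning provides this tracking as a managed service; the sketch below, with invented field names, shows the minimal bookkeeping MLOps implies. Each run records its parameters, a fingerprint of the exact dataset, and the resulting metrics, forming an audit trail from which a run can be reproduced.

```python
import hashlib
import json
import time

class RunTracker:
    """A minimal audit trail: one record per training run."""

    def __init__(self):
        self.runs = []

    @staticmethod
    def fingerprint(dataset):
        """Stable hash of the training data, tying a run to its exact inputs."""
        blob = json.dumps(dataset, sort_keys=True).encode("utf-8")
        return hashlib.sha256(blob).hexdigest()[:12]

    def log_run(self, params, dataset, metrics):
        record = {
            "run_id": len(self.runs),
            "timestamp": time.time(),
            "params": dict(params),
            "dataset_sha": self.fingerprint(dataset),
            "metrics": dict(metrics),
        }
        self.runs.append(record)
        return record

tracker = RunTracker()
data = [[1.0, 0], [2.0, 1], [3.0, 1]]
run = tracker.log_run({"learning_rate": 0.1, "epochs": 20}, data, {"accuracy": 0.92})
# Re-running with the same data yields the same fingerprint, so an auditor
# can verify exactly which dataset produced a deployed model.
```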

Sarah Bird standing in front of a city skyline with a river in the background
Sarah Bird, Microsoft’s responsible AI lead for Azure AI based in New York City, helps create tools that make responsible machine learning accessible to all developers. Photo courtesy of Sarah Bird.

Contextual bandits and responsibility

In the mid-2010s, Sarah Bird and her colleagues at Microsoft’s research lab in New York were working on a machine learning technology called contextual bandits, which learns through exploration experiments how to perform specific tasks better and better over time.

For example, if a visitor to a news website clicks on a story about cats, the contextual bandit learns to present the visitor more stories about cats. To keep learning, the bandit runs experiments, such as showing the visitor stories about the Jacksonville Jaguars, a sports team, or the hit musical “Cats.” Whichever story the visitor clicks becomes another data point that leads to greater personalization.
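Production contextual bandits use more sophisticated algorithms than this, but a minimal epsilon-greedy sketch, with invented story categories, shows the explore/exploit loop described above: mostly show the category with the best estimated click rate for a reader profile, but occasionally experiment with the others.

```python
import random

class EpsilonGreedyBandit:
    """Per-context epsilon-greedy: mostly exploit the best-known story
    category for a reader profile, but explore occasionally to keep learning."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {}   # (context, arm) -> times shown
        self.values = {}   # (context, arm) -> estimated click rate

    def choose(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)   # explore: try a random category
        return max(self.arms,                   # exploit: best estimate so far
                   key=lambda a: self.values.get((context, a), 0.0))

    def update(self, context, arm, reward):
        key = (context, arm)
        n = self.counts.get(key, 0) + 1
        v = self.values.get(key, 0.0)
        self.counts[key] = n
        self.values[key] = v + (reward - v) / n  # incremental mean of rewards

# Simulate a reader who clicks cat stories 80% of the time, others 10%.
bandit = EpsilonGreedyBandit(["cats", "sports", "musicals"], epsilon=0.1, seed=1)
clicks = random.Random(2)
for _ in range(2000):
    arm = bandit.choose("cat_lover")
    reward = 1 if clicks.random() < (0.8 if arm == "cats" else 0.1) else 0
    bandit.update("cat_lover", arm, reward)
```

After a couple of thousand rounds the bandit's click-rate estimate for cat stories dominates, so it serves them almost exclusively while still probing the alternatives.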

“When it works, it is amazing, you get personalization lifts that you’ve never seen before,” said Bird, who now leads responsible AI efforts for Azure AI. “We started talking to customers and working with our sales team to see who wants to pilot this novel research tech.”

The sales leads gave Bird pause. As potential customers floated ideas about using contextual bandits to optimize the job interview process and insurance claim adjudications, she realized that many people lacked an understanding of how contextual bandits work.

“I started saying, ‘Is it even ethical to do experimentation in those scenarios?’” Bird recalled.

The question led to discussions with colleagues in the Fairness, Accountability, Transparency and Ethics in AI research group, or FATE, and a research collaboration on the history of experimental ethics and the implications for reinforcement learning, the type of machine learning behind contextual bandits.

“The technology is good enough that we are using it for real use cases, and if we are using it for real use cases that affect people’s lives, then we better make sure that it is fair and we better make sure that it is safe,” said Bird, who now focuses full time on the creation of tools that make responsible machine learning accessible to all developers.

Huskies, wolves and scammers

Within a few years, ethical AI research had exploded around the world. Model fairness and interpretability were hot topics at major industry gatherings and responsible machine learning tools were being described in the academic literature.

In 2016, for example, Marco Tulio Ribeiro, now a senior researcher at Microsoft’s research lab in Redmond, published an academic conference paper presenting a technique to explain the predictions of any classifier, such as computer vision models trained to distinguish objects in photos.

To demonstrate the technique, he deliberately trained a classifier to predict “wolf” if a photo had a snowy background and “husky” if there was no snow. He then ran the model on photos of wolves mostly in snowy backgrounds and huskies mostly without snow and showed the results to machine learning experts with two questions: Do you trust the model? How is it making predictions?

A collage of images of wolves and huskies that a machine learning model tried to decipher
Microsoft senior researcher Marco Tulio Ribeiro found that many machine learning experts trusted this model that predicts whether an image is of a wolf or husky. Then he gave them the model explanation, which shows the predictions are based on whether there is snow in the background. “Even experts are likely to be fooled by a bad model,” he said. Graphic courtesy of Microsoft. Photos via Getty.

Many of the machine learning experts said they trusted the model and offered theories on why it was predicting wolves or huskies, such as that wolves have pointier teeth, noted Ribeiro. Fewer than half mentioned the background as a potential factor, and almost no one zeroed in on the snow.

“Then I showed them the explanations, and after seeing the explanations, of course everyone basically got it and said, ‘Oh, it is just looking at the background,’” he said. “This is a proof-of-concept; even experts are likely to be fooled by a bad model.”
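Ribeiro's published technique (LIME) fits a local surrogate model around each prediction; the simpler occlusion sketch below, with invented features, captures the same idea: knock each input out in turn and see which ones change the prediction. Applied to a deliberately flawed classifier like the one in his demonstration, it immediately points at the background.

```python
# A deliberately flawed "classifier" that, like the demo model in the paper,
# keys on the background rather than the animal. Features are hypothetical.
def flawed_classifier(image):
    """Predict 'wolf' whenever the background is bright (snowy), else 'husky'."""
    return "wolf" if image["background_brightness"] > 0.8 else "husky"

def occlusion_explanation(model, image, baseline=0.0):
    """Zero out each input feature in turn and record which ones flip the
    prediction -- those are the features the model actually relies on."""
    original = model(image)
    influential = []
    for feature in image:
        occluded = {**image, feature: baseline}
        if model(occluded) != original:
            influential.append(feature)
    return original, influential

photo = {"animal_ear_shape": 0.7, "animal_snout_length": 0.6,
         "background_brightness": 0.95}
prediction, reasons = occlusion_explanation(flawed_classifier, photo)
# prediction == "wolf"; reasons == ["background_brightness"] -- the
# explanation exposes that the animal's features played no role at all.
```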

A refined version of Ribeiro’s explanation technique is one of several capabilities available to all developers through the interpretability toolkit on Azure Machine Learning, the same toolkit that SAS’s fraud detection unit uses to build cases against scammers in the EuroBonus loyalty program.

Other AI solutions that SAS is creating with Azure Machine Learning include one for ticket sales forecasting and a system that optimizes fresh food stocking for in-flight purchases. The fresh food solution reduced food waste by more than 60% before fresh food sales were halted as part of global efforts to slow the spread of COVID-19.

Engberg and his data analytics and artificial intelligence team continue to build, train and test machine learning models, including further experimentation with the Azure Machine Learning capabilities for interpretability and fairness.

“The more we go into things affecting our customers or us as individuals, I think these concepts of fairness, explainable AI, responsible AI, will be even more important,” said Engberg.

Assessing and mitigating unfairness

Bird’s colleagues in FATE pioneered many of the capabilities in the Fairlearn toolkit. The capabilities allow developers to examine model performance across groups of people such as those based on gender, skin tone, age and other characteristics.

“It could be you have a great idea of what fairness means in an application and because these models are so complex, you might not even notice that it doesn’t work as well for one group of people as another group,” explained Bird. “Fairlearn is allowing you to find those issues.”

Eric Boyd stands with arms folded in a white background
Eric Boyd, Microsoft corporate vice president of Azure AI in Redmond, Wash., said innovations in responsible machine learning can help developers build AI systems that are easy to explain and comply with non-discrimination and privacy regulations. Photo courtesy of Microsoft.

EY, a global leader in assurance, tax, transaction and advisory services, piloted fairness capabilities in the Fairlearn toolkit on a machine learning model the firm built for automated lending decisions.

The model was trained on mortgage adjudication data from banks, including transaction and payment history and credit bureau information. This type of data is generally used to assess a client’s capability and willingness to pay back a loan. But it also raises regulatory and legal concerns, as well as the potential for unfairness toward applicants of specific demographics.

EY used Fairlearn to evaluate the fairness of model outputs with regard to biological sex. The toolkit, which surfaces results on a visual and interactive dashboard, revealed a 15.3 percentage point difference in positive loan decisions between male and female applicants.
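Fairlearn's metrics API is not reproduced here, but the disparity it surfaces can be computed directly: compare selection (approval) rates across groups. The pure-Python sketch below, on invented loan decisions, reports a gap analogous to the 15.3-point difference EY found.

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns each group's approval rate plus the gap between the extremes,
    the selection-rate difference a fairness dashboard would highlight."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical loan decisions: (applicant group, loan approved)
decisions = [("male", True)] * 60 + [("male", False)] * 40 \
          + [("female", True)] * 45 + [("female", False)] * 55
rates, gap = selection_rates(decisions)
# rates are 0.60 for males vs. 0.45 for females, a gap of about 15
# percentage points; mitigation then means retraining or reweighting
# until this gap shrinks without giving up much overall accuracy.
```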

The Fairlearn toolkit allowed the modeling team at EY to quickly develop and train multiple remediated models and to visualize the common trade-off between fairness and model accuracy. The team ultimately landed on a final model that preserved overall accuracy while reducing the difference between males and females to 0.43 percentage points.

The ability for any developer to assess and mitigate unfairness in their models is becoming essential across the financial industry, noted Boyd.

“Increasingly we’re seeing regulators looking closely at these models,” he said. “Being able to document and demonstrate that they followed the leading practices and have worked very hard to improve the fairness of the datasets are essential to being able to continue to operate.”

Responsible machine learning

Bird believes machine learning is changing the world for the better, but she said all developers need the tools and resources to build models in ways that put responsibility front and center.

Consider, for example, a research collaboration within the medical community to compile COVID-19 patient datasets to build a machine learning model that predicts who is at high risk of serious complications from the novel coronavirus.

Before such a model is deployed, she said, the developers need to make sure they understand how it makes decisions in order to explain the process to doctors and patients. The developers will also want to assess fairness, ensuring the model captures the known elevated risks to males, for example.

“I don’t want a model that never predicts that men are high risk, that would be terrible,” said Bird. “Then, obviously, I want to make sure that the model is not revealing the data of the people it was trained on, so you need to use differential privacy for that.”

Top image: An SAS AI-powered fraud detection tool processes streams of real-time flight information along with transaction, award claims and other data through a machine learning model to find patterns of suspicious behavior. An Azure Machine Learning capability called interpretability explains what model parameters were most important in any given case of suspected fraud. Photo courtesy of SAS.


John Roach writes about Microsoft research and innovation. Follow him on Twitter.


FedEx and Microsoft join forces to transform commerce

Companies to collaborate to reinvent the end-to-end commerce experience globally; FedEx Surround announced today as first solution

MEMPHIS, Tenn., and REDMOND, Wash. — May 18, 2020 — FedEx Corp. (NYSE: FDX) and Microsoft Corp. (Nasdaq: MSFT) on Monday announced a new multiyear collaboration to help transform commerce by combining the global digital and logistics network of FedEx with the power of Microsoft’s intelligent cloud. Together, FedEx and Microsoft aim to create opportunities for their customers through multiple joint offerings powered by Azure and Dynamics 365 that will use data and analytics solutions to reinvent the most critical aspects of the commerce experience and enable businesses to better compete in today’s increasingly digital landscape.

“FedEx has been reimagining the supply chain since our first day of operation, and we are taking it to a new level with today’s announcement,” said Frederick W. Smith, chairman and CEO, FedEx. “Together with Microsoft, we will combine the immense power of technology with the vast scale of our infrastructure to help revolutionize commerce and create a network for what’s next for our customers.”

This new strategic alliance represents a long-term commitment between the companies to fuel innovation, collaborate on product development and share subject-matter expertise. Today, FedEx networks link more than 99% of the world’s gross domestic product across 220 countries and territories, and Microsoft Azure is trusted by more than 95% of Fortune 500 companies. By combining the breadth and scale of the FedEx network and the Microsoft cloud, businesses will have an unprecedented level of control and insight into the global movement of goods.

“Now more than ever, organizations are counting on an efficient and capable supply chain to remain competitive and open for business,” said Satya Nadella, CEO, Microsoft.  “Together with FedEx, we will apply the power of Azure, Dynamics 365 and their AI capabilities to this urgent need, building new commerce experiences that transform logistics for our mutual customers around the world.”

Announcing FedEx Surround — driving digital illumination of the physical world

FedEx Surround, the first solution resulting from the FedEx and Microsoft collaboration, allows any business to enhance visibility into its supply chain by leveraging data to provide near-real-time analytics into shipment tracking, which will drive more precise logistics and inventory management. While most carriers are reactive to global logistics dynamics with limited visibility options built into a package’s journey, FedEx Surround will provide near-real-time insights — down to the granular level of ZIP code, for example — to shine a digital light on the progress and movement of physical inventory.

The benefits offered by FedEx Surround will extend to any business with a supply chain and particularly those that depend on highly time-sensitive deliveries. For example, a hospital may urgently need a package to help save a life, or a part may need rapid transport to a manufacturing facility to avoid an operational shutdown. In each instance, the near-real-time data insights provided by FedEx Surround offer a significant advantage to not only the organizations using the platform but also the people they serve.

FedEx Surround can also collect multiple data points gathered through the enhanced scanning and proprietary IoT technology of FedEx and analyze them using Microsoft’s broad suite of AI, machine learning and analytics solutions. This will provide participating businesses with not only enhanced visibility of a package’s location during its journey, but also knowledge of global commerce conditions and external challenges in near-real-time, such as severe weather or natural disasters, mechanical delays, clearance issues, and incorrect addresses. This unprecedented level of data-driven insight will give FedEx Surround customers the opportunity to intervene early and act to avoid logistical slowdowns before they occur to reduce friction and costs. And with every package that ships, FedEx Surround will analyze past trends to identify future opportunities for streamlined shipping, creating a stronger and more resilient commercial ecosystem.

Information regarding FedEx Surround availability will be shared beginning this summer, and customer access will be rolled out in the months ahead.

FedEx and Microsoft expect to announce additional solutions as part of their collaboration in the coming months, which will leverage even more Microsoft technologies, including Dynamics 365. This includes reimagining commerce experiences for businesses to offer consumers more integrated ways to shop, and faster and more efficient deliveries.

Those interested in more details, including visual assets, should visit www.fedex.com/transform.

About FedEx Corp.

FedEx Corp. (NYSE: FDX) provides customers and businesses worldwide with a broad portfolio of transportation, e-commerce and business services. With annual revenues of $70 billion, the company offers integrated business solutions through operating companies competing collectively and managed collaboratively, under the respected FedEx brand. Consistently ranked among the world’s most admired and trusted employers, FedEx inspires its more than 475,000 team members to remain focused on safety, the highest ethical and professional standards and the needs of their customers and communities. To learn more about how FedEx connects people and possibilities around the world, please visit about.fedex.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

FedEx Media Relations, (901) 434-8100, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

Forward-Looking Statements

Certain statements in this press release may be considered forward-looking statements, such as statements relating to FedEx and Microsoft managements’ views with respect to future events and financial performance. Forward-looking statements include those preceded by, followed by or that include the words “will,” “may,” “could,” “would,” “should,” “believes,” “expects,” “anticipates,” “plans,” “estimates,” “targets,” “projects,” “intends” or similar expressions. Such forward-looking statements are subject to risks, uncertainties and other factors which could cause actual results to differ materially from historical experience or from future results expressed or implied by such forward-looking statements. Potential risks and uncertainties include, but are not limited to, the rate of future e-commerce growth and our ability to successfully compete in the e-commerce market, our ability to successfully introduce e-commerce solutions in the expected time frame, customer acceptance of the FedEx Surround platform and other factors which can be found in FedEx Corp.’s and Microsoft’s respective press releases and filings with the Securities and Exchange Commission. Any forward-looking statement speaks only as of the date on which it is made. FedEx and Microsoft do not undertake or assume any obligation to update or revise any forward-looking statement, whether as a result of new information, future events or otherwise.


Join Microsoft and industry leaders for an AI virtual summit on June 2

Now more than ever, artificial intelligence is proving its value in helping businesses across sectors address and adjust to massive change, disruption and unpredictability.  

On June 2, Microsoft will host an AI virtual summit to bring together industry leaders and experts to discuss the benefits of putting AI into action. 

During the event, you’ll hear how business leaders are using AI to respond to and recover from recent challenges, as well as examples of game-changing solutions to problems spanning business, government, education and other areas.  

Mitra Azizirad, Corporate Vice President of AI & Innovation at Microsoft, will kick off the event with a keynote on the value of creating an AI-powered organization. 

Register here to attend Imagining the Future: Microsoft AI Virtual Summit, where you’ll learn: 

  • What it takes to become an AI-powered organization; 
  • The power of AI for manufacturing and supply chains; 
  • How AI is advancing health and medical services; 
  • Microsoft’s approach to responsible, inclusive, trusted AI; and 
  • How enterprises can focus on immediate requirements while creating their own AI futures. 

View a full agenda or register for the event. You can also join Microsoft on LinkedIn on June 2 at 9 am PT to view the livestream of Mitra Azizirad’s keynote. 



Leading businesses reveal the power of combining human ingenuity with AI

Businesses of all sizes are experiencing exceptional disruption and change as they grapple with strategies to stabilize and return to growth. In this new environment, human ingenuity, innovation and adaptability will be critically important.

As a result of COVID-19, businesses’ digital transformation is accelerating more rapidly than ever before. As Satya Nadella, Microsoft CEO, recently observed during our earnings announcement, “We’ve seen two years’ worth of digital transformation in two months.”

However, as people move to distributed working and companies move essential workloads to the cloud, what isn’t instantly apparent is the growing role artificial intelligence (AI) is playing at the heart of digital transformation. For some organizations, its use had already accelerated. Others are looking at bringing forward the benefits AI can deliver.

AI is helping us discover, learn, ideate and make decisions. It’s making business operations more efficient, enhancing product and service development, and enabling new customer experiences. In industries like health care, it’s helping improve patient outcomes and save lives.

Before the pandemic, most of our customers were addressing a similar challenge: How do they ensure their people have the right skills and mindset to thrive in a world where AI is driving real business impact?

Co-workers at Mott MacDonald
Mott MacDonald personal assistant Laura Smith talks with civil apprentice Shey Sewell.

To learn more, Microsoft carried out a major international research project focusing on AI and skills, and discovered that the most successful organizations are as focused on developing the skills of their people as they are on deploying new AI-powered technology.

The business environment has changed significantly over the past few months, but the insights we gathered from our research, and the roadmap they provide, still hold true.

The core finding: “In this new environment, companies will need ingenuity to help restart their businesses,” says Azeem Azhar, AI industry expert and founder of the Exponential View. “In the context of new rules for engaging with customers, partners and stakeholders, innovative and adaptable organizations will do better. This research identifies that AI-leading firms are better positioned because they are investing more in a wide range of skills and are more focused on how new technology can augment their workers.”


How one Microsoft mom inspired health care companies to embrace the life-saving potential of AI

Family of four
The Mulholland family at home: Kyle, Conor, Melissa and Emma. (Photo by Scott Eklund/Red Box Pictures)

Conor, now 5, is already benefiting from AI in his life. After undergoing 12 surgeries by the age of 2 to deal with the effects caused by PUV and related issues, he was diagnosed with autism at age 3 and has had difficulty speaking. He is working with speech therapists and is also benefiting from an app called Helpicto that uses AI to convert spoken words into a series of images.

Created by French company Equadex, the app, too, came from the heart – several of the company’s employees have personal experiences with autism. Equadex created Helpicto using Azure Cognitive Services and the Microsoft Azure cloud platform.

“It’s very hard to have a child not be able to speak or communicate with you,” Mulholland says. “He was 4 when he first said, ‘Mama.’ When he did, it was amazing to hear it.”

Conor also learned how to say “Dada” and “Amma,” or Emma, for his 7-year-old sister, who dotes on him.

“It’s really sweet to see how caring she is and how much she wants him to be successful,” Mulholland says. “She helps him practice on words. I can see her being in a field of study someday that’s very focused on helping others, whether it’s as a teacher or doctor, or something along those lines. Her life will be forever changed because of having a brother like him.”

Mulholland says she is “humbled” by all the support she has had – from her husband, Kyle, an accountant who stays at home with Conor, to Microsoft for giving her a platform to tell her story, to the companies that want to hear and embrace it.

“I always encourage people, ‘Don’t pigeonhole yourself, think of ways that you can really harness technology to drive greater good, because sometimes those solutions are right in front of you,’” she says. “And imagine how great of a world we could live in if we had more stories like this.”

Top photo: Melissa Mulholland with son Conor. (Photo by Scott Eklund/Red Box Pictures)


NBA announces new multiyear partnership with Microsoft to redefine and personalize the fan experience

Microsoft becomes an Official Technology Partner for the NBA; together the companies will create a direct-to-consumer platform that delivers new fan engagement experiences and enhanced streaming capabilities powered by Microsoft Azure and its AI capabilities.

Logos for the NBA and Microsoft alongside a basketball

NEW YORK — April 16, 2020 — The National Basketball Association (NBA) and Microsoft Corp. on Thursday announced a new multiyear alliance, which will transform the way fans experience the NBA. As part of this collaboration, Microsoft will become the Official Artificial Intelligence Partner and an Official Cloud and Laptop Partner for the NBA, Women’s National Basketball Association (WNBA), NBA G League, and USA Basketball beginning with the 2020-21 NBA season.

Microsoft and NBA Digital — co-managed by the NBA and Turner Sports — will create a new, innovative, direct-to-consumer platform on Microsoft Azure that will use machine learning and artificial intelligence to deliver next-generation, personalized game broadcasts and other content offerings as well as integrate the NBA’s various products and services from across its business. The platform will reimagine how fans engage with the NBA from their devices by customizing and localizing experiences for the NBA’s global fanbase, which includes the 1.8 billion social media followers across all league, team and player accounts.

Beyond delivering live and on-demand game broadcasts through Microsoft Azure, the NBA’s vast array of data sources and extensive historical video archive will be surfaced to fans through state-of-the-art machine learning, cognitive search and advanced data analytics solutions. This will create a more personalized fan experience that tailors the content to the preferences of the fan, rewards participation, and provides more insights and analysis than ever. Additionally, this platform will enable the NBA to uncover unique insights and add new dimensions to the game for fans, coaches and broadcasters. The companies will also explore additional ways technology can be used to enhance the NBA’s business and game operations.

As part of the partnership, Microsoft will become the entitlement partner of the NBA Draft Combine beginning next season and an associate partner of future marquee events, including NBA All-Star, MGM Resorts NBA Summer League and WNBA All-Star.

“We are thrilled to serve as the official AI partner of the NBA,” said Satya Nadella, CEO, Microsoft. “Together, we’ll bring fans closer to the game and players they love with new personalized experiences powered by Microsoft Azure.”

“This partnership with Microsoft will help us redefine the way our fans experience NBA basketball,” said Adam Silver, NBA commissioner. “Our goal, working with Microsoft, is to create customized content that allows fans — whether they are in an NBA arena or watching from anywhere around the world — to immerse themselves in all aspects of the game and engage directly with our teams and players.”

About the NBA

The NBA is a global sports and media business built around four professional sports leagues: the National Basketball Association, the Women’s National Basketball Association, the NBA G League and the NBA 2K League. The NBA has established a major international presence with games and programming in 215 countries and territories in 47 languages, and merchandise for sale in more than 100,000 stores in 100 countries on six continents. NBA rosters at the start of the 2019-20 season featured 108 international players from 38 countries and territories. NBA Digital’s assets include NBA TV, NBA.com, the NBA App and NBA League Pass. The NBA has created one of the largest social media communities in the world, with 1.8 billion likes and followers globally across all league, team, and player platforms. Through NBA Cares, the league addresses important social issues by working with internationally recognized youth-serving organizations that support education, youth and family development, and health-related causes.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Chase Kressel, NBA, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

Autonomous systems 101: Q&A about building intelligent control systems

The AI for Business and Technology blog is always looking for ways to help our readers understand how their businesses can benefit from the latest in artificial intelligence and technology. Today, we are talking with Microsoft Senior Applied AI Engineer Kingsuk Maitra. Kingsuk has a PhD in electrical engineering and now leads customer success engagements for autonomous systems at Microsoft.

Blog: Let’s start by figuring out what it is we’re talking about. What exactly is an autonomous system?

Kingsuk: Well, the basic idea of automation is to do repetitive tasks without human involvement, using an established pattern that is somewhat predictable. An autonomous system, on the other hand, is way more than automation because the system is also making informed and intuitive decisions with a substantive amount of knowledge and know-how.

If you expose an autonomous system to uncharted territory, it can make a recommendation to inform decision making, whereas strict automation wouldn’t be able to do anything without explicit human intervention. This essentially frees up human resources and ingenuity for making much more informed decisions. And it also gives you a lot more leverage and liberty and latitude when it comes to ensuring quality and preventing human errors.

Blog: How does an autonomous system learn to make these recommendations?

Kingsuk: Well, artificial intelligence at a very basic level allows a machine to learn from existing experience and existing data. Traditional machine learning is predicated on the availability of large quantities of data. But in real-life scenarios where autonomous systems are critical to day-to-day operations, such as industrial control systems, the environment is often uncertain and data is sparse. It’s noisy, unstructured and messy, and there’s no easy way to collect a lot of data and methodically label it. So what you can do is model the environment where an autonomous system is supposed to make an impact, and then let the autonomous system explore that simulated environment while being supervised by an operator.

That’s the Microsoft approach, which incorporates machine teaching and reinforcement learning. Years of expertise and experience from a seasoned human operator in a particular vertical can be incorporated into the knowledge base through machine teaching, and that is layered on top of the inputs and signals from the low-fidelity simulator. The autonomous system learns by testing out various actions and being rewarded as it takes the correct action, which is reinforcement learning.
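As a rough illustration of that reinforcement-learning loop, the toy sketch below trains an agent to hold a simulated process at a setpoint by rewarding actions that move toward it. The environment, state space and reward here are all made up for demonstration; they are not part of any Microsoft product or simulator.

```python
import random

# Toy reinforcement-learning sketch: an agent explores a simulated
# 1-D "temperature" controller and is rewarded for approaching a setpoint.
ACTIONS = [-1, 0, 1]           # cool, hold, heat
TARGET = 5                     # desired setpoint in the toy simulator
STATES = range(11)             # discretized readings 0..10

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # Q-table

def step(state, action):
    """Simulated environment: apply action, return next state and reward."""
    nxt = min(max(state + action, 0), 10)
    reward = -abs(nxt - TARGET)  # closer to the setpoint is better
    return nxt, reward

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for episode in range(500):
    state = random.choice(list(STATES))
    for _ in range(20):
        # epsilon-greedy exploration: mostly exploit, sometimes try actions
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # standard Q-learning update
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy pushes the state toward TARGET from either extreme.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy[0], policy[10])  # → 1 -1 (heat when too low, cool when too high)
```

Machine teaching layers on top of a loop like this: the operator's domain knowledge shapes the states, rewards and curriculum rather than the agent learning everything from scratch.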

Workers in hard hats observe equipment in a factory environment
Intelligent control systems can help machinery and processes adapt to dynamic environments in real time.

Blog: What kinds of industries could use autonomous systems?

Kingsuk: This type of solution is scalable across multiple verticals, be it manufacturing, industrial automation, energy and many others. These verticals all have their own specialized simulators, each backed by hundreds of years of research and billions of dollars in development that have made them very mature disciplines. So Microsoft’s point of view is that we are not offering a black-box solution that is going to go in and disrupt everything they have known for all that time. What we are saying is that we use AI to augment the human learning that already exists in those spaces, offering one solution that can scale. We are not replacing anything, just adding to it.

And not only is this a way to find new solutions to existing problems, but it also offers the opportunity to solve problems that were previously thought unsolvable.

Blog: What’s one example of autonomous systems being applied?

Kingsuk: One great example of this is new product introduction, or NPI, which is a complex problem. Most of the time, a new product has a long wish list of properties it needs to have, and the way it often works is a kind of educated guesswork. There might be 50 to 200 parameters, and a person uses heuristics and trial and error, working through each parameter sequentially, and it takes several months in a best-case scenario.

With machine teaching and autonomous systems, you can optimize all those parameters simultaneously and in parallel, and the whole process takes just weeks.
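The sequential-versus-parallel contrast can be sketched in a few lines. The `quality` function below is a made-up stand-in for the kind of process simulator an NPI team would actually evaluate against; because each candidate setting is scored independently, all the trials can run at once.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: score many full candidate parameter settings at once
# instead of tuning ~50 parameters one at a time. The quality function and
# its "ideal" setting are illustrative, not a real product simulator.
random.seed(42)
N_PARAMS = 50
IDEAL = [0.7] * N_PARAMS  # unknown best setting, fixed here for the demo

def quality(candidate):
    # Higher is better; a real NPI workflow would run a physics simulator.
    return -sum((c, t) == () or (c - t) ** 2 for c, t in zip(candidate, IDEAL))

def quality(candidate):
    # Higher is better; a real NPI workflow would run a physics simulator.
    return -sum((c - t) ** 2 for c, t in zip(candidate, IDEAL))

# Sample 200 complete candidate settings up front.
candidates = [[random.random() for _ in range(N_PARAMS)] for _ in range(200)]

# The candidates are independent, so they can be scored in parallel --
# this is what collapses months of sequential trial and error into weeks.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(quality, candidates))

best = candidates[scores.index(max(scores))]
```

In a real deployment the parallel trials would feed a learning loop (as in the reinforcement-learning approach described earlier) rather than simple random sampling.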

Not only does this save time, but it also reduces waste, which is better for the environment, and it allows the product to get to market more quickly, while it is actually in demand. The market can change so quickly that something that was needed months earlier may no longer be needed.

We at Microsoft are also using this technology internally for power and efficiency optimization of our buildings, which will not only save money but will help us move toward our sustainability and carbon neutrality goals.

New AI tools help writers be more clear, concise and inclusive in Office and across the web

Putting people at the center of AI design

Mira Lane, the lead of Microsoft’s Ethics & Society team, is charged with ensuring that the principles articulated at the highest levels of the company to guide the responsible use of AI find their way into the heads of researchers conducting user testing and the hands of engineers writing code. It starts with asking the right questions, she said.

Her team of philosophers, engineers, security experts, designers and trainers works closely with product teams to consider what data or models should be used, who might be directly or indirectly affected by a new technology, what kinds of people should be interviewed to identify unintentional harms and how those insights can be folded into product design.

“The thing that we’re trying to do is help people design technology in a really intentional way, so you really understand what the effects of the tech are and can look around the corner to how it might be used or misused,” Lane said.

For teams that incorporate AI into productivity tools, one of the most important principles is to keep people at the center of the process.

“We bring a lot of focus to making sure the experiences we’re delivering are actually valuable,” said Penny Collisson, principal design research manager for Office. “We have lots of conversations with customers where we never even mention AI. We’re talking about understanding the expressed or latent needs or pain points that people have and then we go back and try to think about how AI could fit in.”

Microsoft has developed 18 best practices that researchers and product designers use to guide their work. But a lot of that work involves listening to people with different levels of tech adoption, socioeconomic backgrounds, geography, physical abilities or attitudes about AI and privacy.

If you talk to people with learning disabilities, for instance, some have a fear of starting with a blank page. That insight helped guide improved dictation offerings in Word for the web, which makes it easier to create content with one’s voice and use speech-to-text to get thoughts down on paper.

Creating good user experiences with AI is more complicated than asking people for feedback on whether they prefer one type of control over another, or which interface is easier to navigate, said Jon Friedman, Microsoft corporate vice president for design and research.

“The kicker and power of AI is that everyone’s experience is unique. Before, we were designing for the mean because solutions were closer to one size fits all. And now we are designing each thing to be a special size to fit each individual,” Friedman said.

“So making sure we’re talking to a much broader set of people and hearing everyone’s voice is really important to give people what they truly need,” he said.

In one example, Microsoft designers and engineers who were interested in building a better screen reader for people who are blind or with low vision built a relationship with the Washington State School for the Blind and began interviewing and observing how those students consume information and approach tasks for the day. That work led to Play My Emails in Outlook mobile, which turned out to also be useful for anyone who wants a jump on their day but can’t safely look at a screen while commuting or cooking breakfast for kids.

Through interviews and equipment that simulated the experience of having macular degeneration, the design team began to understand the massive cognitive load that’s required to listen for pertinent information among a sea of extraneous details like dates and time stamps and even punctuation marks that screen readers include as they scan from left to right.

“It was like listening for a needle in a haystack, and the fatigue level was really high,” Friedman said.

So the team used AI to offer the most important information upfront and in a much more conversational way. Having Cortana, Microsoft 365’s personal productivity assistant, tell you that someone sent you an email in the past hour about scheduling a meeting this afternoon is more useful than knowing the precise time stamp, Friedman said.

Play My Emails also provides summary information like how many unread emails are in your inbox and how long it would take to listen to them. That helps people decide, as they rush to get out of the house, whether they have enough commute time or brain space to focus on the task.

“We started on this path because we thought inclusive design was an important philosophy that we needed to start living and breathing in product,” Friedman said. “But the team quickly realized that there’s a lot of instances where people are situationally blind or looking at screens when it’s not safe, and that’s when they realized this is something that could be useful for people in a lot of different contexts.”

Using AI, people who are blind are able to find familiar faces in a room

Paralympics in Brazil

Project Tokyo was born out of a challenge, in early 2016, from senior leaders at Microsoft to create AI systems that would go beyond completing tasks such as fetching sports scores and weather forecasts or identifying objects. Morrison said creating tools for people who are blind and with low vision was a natural fit for the project, because people with disabilities are often early adopters of new technology.

“It is not about saying, ‘Let’s build something for blind people,’” Morrison said. “We are working with blind people to help us imagine the future, and that future is about new experiences with AI.”

Morrison and her colleague Ed Cutrell, a senior principal researcher at Microsoft’s research lab in Redmond, Washington, were tapped to lead the project. Both have expertise in designing technologies with people who are blind or with low vision and decided to begin by trying to understand how an agent technology could augment, or extend, the capabilities of these users.

To start, they followed a group of athletes and spectators with varying levels of vision on a trip from the United Kingdom to the 2016 Paralympic Games in Rio de Janeiro, Brazil, observing how they interacted with other people as they navigated airports, attended sporting venues and went sightseeing, among other activities. A key learning, noted Cutrell, was how an enriched understanding of social context could help people who are blind or with low vision make sense of their environment.

“We, as humans, have this very, very nuanced and elaborate sense of social understanding of how to interact with people – getting a sense of who is in the room, what are they doing, what is their relationship to me, how do I understand if they are relevant for me or not,” he said. “And for blind people a lot of the cues that we take for granted just go away.”

This understanding spurred a series of workshops with the blind and low vision community that were focused on potential technologies that could provide such an experience. Peter Bosher, an audio engineer in his mid-50s who has been blind most of his life and worked with the Project Tokyo team, said the concept of a technology that provided information about the people around him resonated immediately.

“Whenever I am in a situation with more than two or three people, especially if I don’t know some of them, it becomes exponentially more difficult to deal with because people use more and more eye contact and body language to signal that they want to talk to such-and-such a person, that they want to speak now,” he said. “It is really very difficult as a blind person.”

Microsoft researcher Ed Cutrell in his office with HoloLens devices sitting in front of him on his desk
Ed Cutrell, a senior principal researcher with Microsoft’s research organization in Redmond, Washington, is a co-leader of Project Tokyo. On his desk are several modified Microsoft HoloLenses that project researchers are using to help people who are blind and with low vision learn who is where in their social environment. Photo by Dan DeLong.

A modified HoloLens

Once the Project Tokyo researchers understood the type of AI experience they wanted to create, they set out to build the enabling technology. They started with the original Microsoft HoloLens, a mixed reality headset that projects holograms into the real world that users can manipulate.

“HoloLens gives us a ton of what we need to build a real time AI agent that can communicate the social environment,” said Grayson during a demonstration of the technology at Microsoft’s research lab in Cambridge.

For example, the device has an array of grayscale cameras that provide a near 180-degree view of the environment and a high-resolution color camera for high-accuracy facial recognition. In addition, the speakers above the user’s ears allow for spatialized audio – the creation of sounds that seem to be coming from specific locations around the user.

Machine learning experts on the Project Tokyo team then developed computer vision algorithms that provide varying levels of information about who is where in the user’s environment. The models run on graphics processing units, known as GPUs, that are housed in the black chest that Grayson carted off to Regan’s house for the user testing with Theo.

One model, for example, detects the pose of people in the environment, which provides a sense of where and how far away people are from the user. Another analyzes the stream of photos from the high-resolution camera to recognize people and determine if they have opted to make their names known to the system. All this information is relayed to the user through audio cues.

For example, if the device detects a person one meter away on the user’s left side, the system will play a click that sounds like it is coming from one meter away on the left. If the system recognizes the person’s face, it will play a bump sound, and if that person is also known to the system, it will announce their name.

When the user only hears a click but wants to know who the person is, a second layer of sound that resembles an elastic band stretching guides the user’s gaze toward the person’s face. When the lens’ central camera connects with the person’s nose, the user hears a high-pitched click and, if the person is known to the system, their name.
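The layered cue protocol described above can be sketched schematically as follows. The class, field and cue names are hypothetical stand-ins; the actual Project Tokyo system runs live vision models on the modified HoloLens and spatializes these sounds at the detected person's position.

```python
from dataclasses import dataclass
from typing import Optional

# Schematic sketch of the audio-cue logic: a detected person yields a click,
# a recognized face yields a bump, and an opted-in, identified person is
# announced by name. Names and fields here are illustrative assumptions.

@dataclass
class Detection:
    distance_m: float           # how far away the person is
    angle_deg: float            # bearing relative to the user (negative = left)
    face_visible: bool          # did face recognition fire?
    name: Optional[str] = None  # set only if the person opted in and is known

def audio_cue(d: Detection) -> str:
    """Return the cue to spatialize at (d.distance_m, d.angle_deg)."""
    if d.face_visible and d.name:
        return f"say:{d.name}"  # known, opted-in person: announce their name
    if d.face_visible:
        return "bump"           # face seen but not identified to the user
    return "click"              # person detected, face not yet seen

# Someone one meter away on the user's left, face not yet seen:
print(audio_cue(Detection(distance_m=1.0, angle_deg=-90, face_visible=False)))  # → click
# A recognized, opted-in person at the same position:
print(audio_cue(Detection(1.0, -90, True, "Cecily")))  # → say:Cecily
```

The "elastic band" gaze-guidance layer would sit on top of this, modulating a second sound as the headset's central camera sweeps toward the detected face.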

“I particularly like the thing that gives you the angle of gaze because I’m never really sure what is the sensible angle for your head to be at,” said Bosher, who worked with the Project Tokyo team on the audio experience early in the design process and returned to the Cambridge lab to discuss his experience and check out the latest iteration. “That would be a great tool for learning body language.”

Peter Bosher interacts with Microsoft researchers while wearing a HoloLens, sitting at a table in a research lab, with a Microsoft Surface Book computer in the foreground
Peter Bosher, middle, an audio engineer who is blind who worked with the Project Tokyo team early in the design process, checks out the latest iteration of the system at Microsoft’s research lab in Cambridge, UK, with researchers Martin Grayson, left, and Cecily Morrison, right. Photo by Jonathan Banks.

Prototyping with adults

As the Project Tokyo team has developed and evolved the technology, the researchers routinely invite adults who are blind or with low vision to test the system and provide feedback. To facilitate more direct social interaction, for example, the team removed the lenses from the front of the HoloLens.

Several users expressed a desire to unobtrusively get the information collected by the system without constantly turning their heads, which felt socially awkward. The feedback prompted the Project Tokyo team to work on features that help users quickly learn who is around them by, for example, asking for an overview and getting a spatial readout of all the names of people who have given permission to be recognized by the system.

Another experimental feature alerts the user with a spatialized chime when someone is looking at them, because people with typical vision often establish eye contact to initiate a conversation. Unlike the bump, however, the chime is not followed by a name.

“We already use the name when you look at somebody,” Grayson explained to Emily, a tester in her 20s who has low vision and visited the Cambridge lab to learn about the most recent features. “But also, by not giving the name, it might draw your attention to turn to somebody who is trying to get your attention. And by turning to them, you find out their name.”

“I totally agree with that. That is how sighted people react. They capture someone out of the corner of their eye, or you get that sense, and go, ‘Cecily,’” Emily said.

The modified HoloLens the researchers showed to Emily also included an LED strip affixed above the band of cameras. A white light tracks the person closest to the user and turns green when the person has been identified to the user. The feature lets communication partners or bystanders know they’ve been seen, making it more natural to initiate a conversation.

The LED strip also provides people an opportunity to move out of the device’s field of view and not be seen, if they so choose. “When you know you are about to be seen, you can also decide not to be seen,” noted Morrison. “If you know when you are being seen, you know when you are not being seen.”

Boosting the global fight against mosquito-borne diseases

Australia’s Far North Queensland has been declared essentially dengue-free for the first time in about a century, following an intensive release campaign. Other targeted efforts are making strong progress in Asia and South America, where authorities have long tried to wipe out mosquito populations with insecticides.

“Our Wolbachia method is natural and self-sustaining,” Green says. “As a large-scale public health intervention, we believe this is a cost-effective approach. The evidence so far is that it can sustain itself in local populations for up to seven years. And we expect it will continue.”

How machine learning and AI will help take the fight global 

The Program’s data science partner, Gramener, is developing the machine learning behind the AI model. It will tap the Program’s existing release-point records as well as many other datasets on human population densities, land use, industrial sites, weather and other variables. Satellite imagery will be a big part of mapping out large urban areas with strategic and granular accuracy.

The aim is to be able to pinpoint multiple impactful release points within blocks as small as 100 square meters.
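A simplified sketch of what scoring candidate blocks might look like is below. The feature names, weights and simple logistic form are illustrative assumptions, not Gramener's actual model, which learns from the Program's release-point records and many more datasets.

```python
import math

# Hypothetical block-scoring sketch: turn a few features of a candidate
# 100 m^2 block into a probability-like score, then rank blocks by it.

def release_score(pop_density, past_dengue_rate, standing_water_index):
    """Score (0..1) that a block is a high-impact release point."""
    # Made-up weights; a real model would learn these from labeled data.
    z = (0.002 * pop_density
         + 3.0 * past_dengue_rate
         + 1.5 * standing_water_index
         - 4.0)
    return 1 / (1 + math.exp(-z))  # logistic squashing to (0, 1)

# Score a toy grid of blocks and rank the most impactful first.
grid = [
    {"id": "A1", "pop": 1200, "dengue": 0.4, "water": 0.8},
    {"id": "A2", "pop": 300,  "dengue": 0.1, "water": 0.2},
    {"id": "B1", "pop": 2500, "dengue": 0.7, "water": 0.5},
]
ranked = sorted(
    grid,
    key=lambda b: release_score(b["pop"], b["dengue"], b["water"]),
    reverse=True,
)
print([b["id"] for b in ranked])  # → ['B1', 'A1', 'A2']
```

Mapping such scores over every block in a city, or a country, is what Joppa describes below as abstracting the data "down to a single estimate of probability."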

READ MORE: Gramener is building AI tools to help biodiversity conservation

“We want to target the areas where our intervention is needed most,” Green says. “We will be able to release Wolbachia mosquitoes where they will have the most effect with analysis at a countrywide scale instead of at a neighborhood scale. Our ambition is to be able to look at a whole country and run the model over all its urban areas and let it give an unprecedented snapshot of where we can have the most impact.”

Joppa says machine learning and AI are potent tools for not-for-profits that want to tackle big challenges but have limited resources.

“The World Mosquito Program started with the objective of figuring out how to attack a problem. In this case, they worked out how to neutralize the disease-carrying ability of mosquitoes. Then they worked out where they needed to release these mosquitoes.

“They started collecting tons of data. It then became a really messy data problem as they tried to compare a bunch of different data sets to work out where they could be most efficient.

“Ultimately, this is where machine learning comes in. It allows you to take all of that data, abstract it down to a single estimate of probability and map it out. It is cost-effective, and it is super scalable. Instead of figuring out data visualization and analytics for one particular area, you can now do it for an entire city, for an entire country, for the entire world.

“That is because the data sets they are using are globally generalizable. One model that works here can work everywhere.”

Top imagery: Wolbachia mosquitoes are released in rural Fiji.