post

Artificial intelligence takes on ocean trash

Inspiration sometimes arrives in strange ways. Here is the story of how a dirty disposable diaper led to the development of an artificial intelligence (AI) solution to help rid the world’s coasts of massive amounts of waste and garbage.

It starts in 2005: Camden Howitt is surfing off Puerto Escondido on Mexico’s wild west coast when, suddenly, a floating diaper smacks him in the face. He paddles back to shore in disgust, only to stumble upon a discarded toilet seat lying on the sand.

When Howitt returns home to New Zealand half a world away, his heart sinks at how much trash is also washing up on the country’s geographically isolated, once-pristine 15,000-kilometer (9,300-mile) coastline.

Many people might merely shrug at such a seemingly intractable global problem and see it as just too hard to fix. But not Howitt. His mission became clear: He would dedicate his life to protecting paradise.

Camden Howitt, Co-Founder of Sustainable Coastlines.

Co-Founder Sam Judd came up with the idea of forming a non-profit organization while surfing in the Galápagos Islands in 2008. A year later, the two created Sustainable Coastlines in New Zealand to educate, motivate, and empower individuals and communities to clean up and restore their coastal environments and waterways.

It was the start of an obsession, and one that has just attracted a grant from AI for Earth: Microsoft’s US$50 million, five-year commitment to put AI in the hands of those working to protect our planet across four key areas – agriculture, biodiversity, climate change, and water.

Microsoft President, Brad Smith (middle), during his visit to New Zealand with Camden Howitt (right) and Sustainable Coastlines development lead Dr Sandy Britain (left).

“This kind of initiative is exactly what our planet needs – something simple, but effective, that can easily be adopted at grass-roots level to make a difference, empowering every community to keep their environment clean and make the world a better place for future generations,” Microsoft President Brad Smith said on a visit to New Zealand in March.

At Lyall Bay, near the capital Wellington, Smith braved a blustery day to see the organization’s litter-busting technology in action. He helped collect garbage from the beach, then logged and categorized it in Sustainable Coastlines’ uniquely comprehensive database.

Since it started, Sustainable Coastlines and its growing legions of volunteers have removed enough trash from shorelines around New Zealand and the Pacific to fill the equivalent of nearly 45 shipping containers. They have picked up tens of millions of individual items, 77% of which are single-use plastic.

It’s an impressive achievement, but ocean garbage is a worsening global scourge that knows no boundaries. Howitt’s vision now is “to combine my deep love for the outdoors with a passion for designing systemic tools for large-scale change.” To get there, Sustainable Coastlines has teamed up with Microsoft and its innovative technology partner, Enlighten Designs.

To find out more, I recently visited Sustainable Coastlines’ headquarters in the nation’s most populous city, Auckland.

New Zealand Prime Minister, Jacinda Ardern (middle), at the opening of The Flagship Education Centre, with Sam Judd (left) and Camden Howitt (right).

Howitt looks more or less how you might expect a passionate, ocean-loving environmentalist to look. His beard is long and rugged, his tan is deep, and his determination is strong. Before long he is proudly showing me around the building, The Flagship Education Center, which was opened by New Zealand Prime Minister Jacinda Ardern in October last year.

His organization is determined to be sustainable in practice as well as in name. The building captures and recycles its own water. Membrane roofing both insulates and breaks down airborne pollutants into non-toxic by-products. All gray and black water is treated and composted on site. Its offices are powered by state-of-the-art solar panels and batteries that contribute excess power to the city’s standard electricity grid.

Later, Howitt opens up about the scale of the environmental challenges and myths confronting his homeland of long beaches and hundreds of islands at the western edge of the South Pacific. Over the years, a carefully constructed clean-and-green brand has made foreign tourism a massive money-spinner for New Zealand’s economy. And, many Kiwis honestly regard themselves as “tidy” citizens.

Yet the World Bank ranks New Zealand as the planet’s tenth-largest per capita producer of urban waste, well ahead of the United States at 19th. “That’s a top ten that no one wants to be in,” Howitt says. “As New Zealand’s population rockets and we consume like there’s no tomorrow, we could easily rise in that ranking.”

He hopes new technologies and solutions can help reverse this disturbing trend.

Enlighten Designs has built a platform that employs intelligent digital storytelling and visualization tools powered by Microsoft’s Cognitive Services suite. And, together with Microsoft, it is also developing a national litter database that will not only track the impact of clean-up efforts on waste but also generate accurate, scientifically valid data and insights.

post

Collaborate with others and keep track of to-dos with new AI features in Word

Focus is a simple but powerful thing. When you’re in your flow, your creativity takes over, and your work is effortless. When you’re faced with distractions and interruptions, progress is slow and painful. And nowhere is that truer than when writing.

Microsoft Word has long been the standard for creating professional-quality documents. Technologies like Editor—Word’s AI-powered writing assistant—make it an indispensable tool for the written word. But at some point in the writing process, you’ll need some information you don’t have at your fingertips, even with the best tools. When this happens, you likely do what research tells us many Word users do: leave a placeholder in your document and come back to it later to stay in your flow.

Today, we’re starting to roll out new capabilities to Word that help users create and fill in these placeholders without leaving the flow of their work. For example, type TODO: finish this section or <<insert closing here>> and Word recognizes and tracks each one as a to-do. When you come back to the document, you’ll see a list of your remaining to-dos, and you can click each one to navigate back to the right spot.
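
Word’s own detection logic isn’t public, but the kind of pattern matching involved is easy to picture. Here is a minimal, purely illustrative Python sketch that finds the two placeholder styles mentioned above in a block of text:

    import re

    # Two illustrative placeholder styles: "TODO: ..." notes and
    # "<<...>>" insertion markers. (Word's real detector is not public.)
    PLACEHOLDER = re.compile(r"TODO:\s*(?P<note>[^\n]+)|<<(?P<marker>[^>]+)>>")

    def find_todos(text):
        """Return (offset, label) pairs so each to-do can be navigated to."""
        return [(m.start(), (m.group("note") or m.group("marker")).strip())
                for m in PLACEHOLDER.finditer(text)]

    doc = "Intro is done. TODO: finish this section\n...\n<<insert closing here>>"
    for offset, label in find_todos(doc):
        print(offset, label)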

Animated screenshot of a Word document open using the AI-powered To-Do feature.

Once you’ve created your to-dos, Word can also help you complete them. If you need help from a friend or coworker, just @mention them within a placeholder. Word sends them a notification with a “deep link” to the relevant place in the document. Soon, they’ll be able to reply to the notification with their contributions, and those contributions will be inserted directly into the document—making it easy to complete the task with an email from any device.

Over time, Office will use AI to help fill in many of these placeholders. In the next few months, Word will use Microsoft Search to suggest content for a to-do like <<insert chart of quarterly sales figures>>. You will be able to pick from the results and insert content from another document with a single click.

These capabilities are available today for Word on the Mac for Office Insiders (Fast) as a preview. We’ll roll these features out to all Office 365 subscribers soon for Word for Windows, the Mac, and the web.

Get started as an Office for Mac Insider

Office Insider for Mac has two speeds: Insider Fast and Insider Slow. To get access to this and other new feature releases, you’ll need an Office 365 subscription. To select a speed, open Microsoft AutoUpdate by choosing Check for Updates on the Help menu.

As always, we would love to hear from you. Please send us your thoughts on UserVoice, or visit us on Twitter or Facebook. You can also let us know how you like the new features by clicking the smiley face icon in the upper-right corner of Word.

post

New updates to Azure AI boost business productivity, expand dev capabilities

As companies increasingly look to transform their businesses with AI, we continue to improve Azure AI to make it easy for developers and data scientists to build, deploy, manage, and secure AI directly in their applications, with a focus on the following solution areas:

  1. Leveraging machine learning to build and train predictive models that improve business productivity with Azure Machine Learning.
  2. Applying an AI-powered search experience and indexing technologies that quickly find information and glean insights with Azure Search.
  3. Building applications that integrate pre-built and custom AI capabilities like vision, speech, language, search, and knowledge to deliver more engaging and personalized experiences with our Azure Cognitive Services and Azure Bot Service.

Today, we’re pleased to share several updates to Azure Cognitive Services that continue to make Azure the best place to build AI. We’re introducing a preview of the new Anomaly Detector Service which uses AI to identify problems so companies can minimize loss and customer impact. We are also announcing the general availability of Custom Vision to more accurately identify objects in images. 

From speech recognition, translation, and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Anomaly detection as an AI service

Anomaly Detector is a new Cognitive Service that lets you detect unusual patterns or rare events in your data that could signal problems like credit card fraud.

Today, over 200 teams across Azure and other core Microsoft products rely on Anomaly Detector to boost the reliability of their systems by detecting irregularities in real time and accelerating troubleshooting. Through a single API, developers can easily embed anomaly detection capabilities into their applications to ensure high data accuracy, and automatically surface incidents as soon as they happen.
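
To make the “single API” concrete, here is a minimal sketch of calling the Anomaly Detector preview REST API with Python’s requests library. The endpoint region, subscription key, and data are placeholders; substitute values from your own Azure resource:

    import requests

    # Placeholders: use your own resource's region and key.
    URL = ("https://westus2.api.cognitive.microsoft.com/"
           "anomalydetector/v1.0/timeseries/entire/detect")
    HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>",
               "Content-Type": "application/json"}

    # The service expects a time series (at least 12 points) plus its granularity.
    values = [10, 11, 10, 12, 11, 10, 11, 95, 10, 11, 12, 10, 11]
    series = [{"timestamp": f"2019-03-{day:02d}T00:00:00Z", "value": v}
              for day, v in enumerate(values, start=1)]

    result = requests.post(URL, headers=HEADERS,
                           json={"series": series, "granularity": "daily"}).json()

    # "isAnomaly" holds one boolean per input point; the spike on day 8 should flag.
    for point, flagged in zip(series, result["isAnomaly"]):
        if flagged:
            print("Anomaly at", point["timestamp"], "value:", point["value"])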

Common use case scenarios include identifying business incidents and text errors, monitoring IoT device traffic, detecting fraud, responding to changing markets, and more. For instance, content providers can use Anomaly Detector to automatically scan video performance data specific to a customer’s KPIs, helping to identify problems in an instant. Alternatively, video streaming platforms can apply Anomaly Detector across millions of video data sets to track metrics. A missed second in video performance can translate to significant revenue loss for content providers that monetize their platforms.

Custom Vision: automated machine learning for images

With the general availability of Custom Vision, organizations can also transform their business operations by quickly and accurately identifying objects in images.

Powered by machine learning, Custom Vision makes it easy and fast for developers to build, deploy, and improve custom image classifiers that recognize content in imagery. Developers can train their own classifier to recognize what matters most in their scenarios, or export these custom classifiers to run offline and in real time on iOS (in CoreML), Android (in TensorFlow), and many other edge devices.
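
As a rough illustration of how a trained classifier is consumed, the sketch below sends an image to the Custom Vision 3.0 prediction REST API (the same 3.0 API surface mentioned in the improvements below). The endpoint region, project ID, published iteration name, and key are placeholders for values from your own Custom Vision resource:

    import requests

    # Placeholders: endpoint region, project ID, published iteration name,
    # and key all come from your own Custom Vision resource.
    URL = ("https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/"
           "Prediction/<project-id>/classify/iterations/<iteration-name>/image")
    HEADERS = {"Prediction-Key": "<your-prediction-key>",
               "Content-Type": "application/octet-stream"}

    # Send raw image bytes; the service returns one score per trained tag.
    with open("logo_photo.jpg", "rb") as image:
        response = requests.post(URL, headers=HEADERS, data=image.read())

    for prediction in response.json()["predictions"]:
        print(f"{prediction['tagName']}: {prediction['probability']:.2%}")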

Today, Custom Vision can be used for a variety of business scenarios. Minsur, the largest tin mine in the western hemisphere, located in Peru, applies Custom Vision to support sustainable mining: by detecting treatment foam levels, it helps ensure that water used in the mineral extraction process is properly treated for reuse in agriculture and livestock. The company used a combination of Custom Vision and Azure video analytics to replace a highly manual process, so employees can focus on more strategic projects within the operation.

Screenshot of the Custom Vision platform, where you can train the model to detect unique objects in an image, such as your brand’s logo.

Starting today, Custom Vision delivers the following improvements:

  • High quality models – Custom Vision features advanced training with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget and Custom Vision will experimentally identify the best training and augmentation settings.
  • Iterate with ease – Custom Vision makes it simple for developers to integrate computer vision capabilities into applications with the 3.0 REST APIs and SDKs. The end-to-end pipeline is designed to support iterative improvement, so you can quickly train a model, prototype it in real-world conditions, and use the resulting data to improve it, which gets models to production quality faster.
  • Train in the cloud, run anywhere – The exported models are optimized for the constraints of a mobile device, providing incredible throughput while still maintaining high accuracy. Now, you can also export classifiers to run on the ARM architecture for Raspberry Pi 3 and the Vision AI Dev Kit.

For more information, visit the Custom Vision Service Release Notes.

Get started today

Today’s milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development, and industry leading security and compliance for protecting customers’ data.

To get started building vision and search intelligent apps, please visit the Cognitive Services site.

post

What’s new with Seeing AI

Saqib Shaikh holds his camera phone in front of his face with Seeing AI open on the screen

By Saqib Shaikh, Software Engineering Manager and Project Lead for Seeing AI

Seeing AI provides people who are blind or have low vision an easier way to understand the world around them through the cameras on their smartphones. Whether in a room, on a street, in a mall or an office – people are using the app to independently accomplish daily tasks like never before. Seeing AI helps users read printed text in books, restaurant menus, street signs and handwritten notes, as well as identify banknotes and products via their barcode. Leveraging on-device facial-recognition technology, the app can even describe the physical appearance of people and predict their mood.

Today, we are announcing new Seeing AI features for the enthusiastic community of users who share their experiences with the app, recommend new capabilities and suggest improvements for its functionalities. Inspired by this rich feedback, here are the updates rolling out to Seeing AI to enhance the user’s experience:

  • Explore photos by touch: Leveraging the Custom Vision Service in tandem with the Computer Vision API, this new feature enables users to tap an image on a touch screen to hear a description of the objects within it and the spatial relationships between them. Users can explore photos of their surroundings taken on the Scene channel, family photos stored in their photo browser, and even images shared on social media by summoning the options menu while in other apps. (A rough sketch of the underlying hit-testing idea follows this list.)
  • Native iPad support: For the first time we’re releasing iPad support, to provide a better Seeing AI experience that accounts for the larger display requirements. iPad support is particularly important to individuals using Seeing AI in academic or other professional settings where they are unable to use a cellular device.
  • Channel improvements: Users can now customize the order in which channels are shown, enabling easier access to favorite features. We’ve also made it easier to access the face recognition function while on the Person channel, by relocating the feature directly on the main screen. Additionally, when analyzing photos from other apps, the app will now provide audio cues that indicate Seeing AI is processing the image.
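
Seeing AI’s actual pipeline is not public, but the heart of “explore photos by touch” can be pictured as a hit test of the tapped point against object bounding boxes like those the Computer Vision API returns for object detection. Everything below, including the detection results, is invented for illustration:

    # Illustrative detections in image-pixel coordinates, shaped like the
    # "objects" results from the Computer Vision API (visualFeatures=Objects).
    detected = [
        {"object": "person", "rectangle": {"x": 40, "y": 20, "w": 120, "h": 300}},
        {"object": "dog", "rectangle": {"x": 210, "y": 180, "w": 140, "h": 130}},
    ]

    def describe_tap(x, y, objects):
        """Name the object whose bounding box contains the tapped point."""
        for item in objects:
            r = item["rectangle"]
            if r["x"] <= x <= r["x"] + r["w"] and r["y"] <= y <= r["y"] + r["h"]:
                return item["object"]
        return "no object here"

    print(describe_tap(260, 240, detected))  # -> dog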

Since the app’s launch in 2017, Seeing AI has leveraged AI technology and inclusive design to help people with more than 10 million tasks. If you haven’t tried Seeing AI yet, download it for free on the App Store. If you have, please share your thoughts, feedback or questions with us at seeingai@microsoft.com, or through the Disability Answer Desk and Accessibility User Voice Forum.

post

Seattle Times: ‘Even one cigarette’ in pregnancy can raise risk of babies’ death, Seattle Children’s and Microsoft find

It’s no surprise that smoking during pregnancy is unhealthy for the fetus — just as it’s unhealthy for the person smoking. But the powerful combination of medical research and data science has given new insights into the risks involved, specifically when it comes to babies suddenly dying in their sleep.

The risk of Sudden Unexpected Infant Death (SUID) increases with every cigarette smoked during pregnancy, according to a joint study by Seattle Children’s Research Institute and Microsoft data scientists.

Further, while smoking less or quitting during pregnancy can help significantly, a risk of SUID exists even if a person stops smoking right before becoming pregnant, the team demonstrated.

“Any amount of smoking, even one cigarette, can double your risk,” said Tatiana Anderson, a post-doctoral research fellow at Children’s who worked on the study, which was published Monday in the journal Pediatrics.

Anderson and the rest of the team estimate that smoking during pregnancy is responsible for 800 of the approximately 3,700 SUID deaths in the United States every year. That’s 22 percent of all SUID cases.

The team analyzed vast data sets from the Centers for Disease Control and Prevention (CDC) that included every baby born in the United States from 2007 to 2011. In that time span, more than 20 million babies were born and 19,127 died of SUID, which includes Sudden Infant Death Syndrome (SIDS).

The study found that the risk of SUID doubles if a person goes from not smoking to smoking just one cigarette daily throughout pregnancy. At a pack a day (20 cigarettes), the risk is tripled compared to nonsmokers. The odds plateau from there.

The chance of SUID decreases when women quit smoking or smoke less: Women who tapered their smoking by the third trimester showed a 12 percent decreased risk. Quitting altogether by the third trimester lowered the risk of SUID by 23 percent.

The biggest predictor of SUID risk was the average number of cigarettes smoked daily throughout the three trimesters of pregnancy, rather than smoking more or less at any particular point.

“Thus, a woman who smoked 20 cigarettes per day in the first trimester and reduced to 10 cigarettes per day in subsequent trimesters had a similarly reduced SUID risk as a woman who averaged 13 cigarettes per day in each trimester,” the study states. In other words, it is the average that matters: (20 + 10 + 10) / 3 is roughly 13 cigarettes per day.

Having such precise data about the effects of smoking before and during pregnancy better arms health-care providers to speak with their patients, Anderson said.

“Doctors need to have frank discussions with patients,” she said. “Every cigarette you can eliminate on a daily basis will reduce your risk of SUID.”

Microsoft data scientists teamed up with the Children’s researchers after John Kahan, who heads up customer data and analytics for Microsoft, lost his son Aaron to SIDS in 2003. After Aaron’s death, days after he came home from the hospital, Kahan started the Aaron Matthew SIDS Research Guild. In 2016, he climbed Mount Kilimanjaro to raise money for SIDS research.

When he returned from Africa, he found out his team at Microsoft had been working with the available data on infant deaths. Their goal was to use algorithms to analyze the data and help come up with a way to save babies like Aaron from SUID.

Juan Lavista, a member of Kahan’s team at that time, is now the senior director of data science at the AI For Good research lab, which is part of an initiative called AI for Humanitarian Action, launched by Microsoft president Brad Smith. The idea behind the initiative is to use artificial intelligence to tackle some of the world’s most difficult problems, and it has allowed Lavista to work on things like the SUID study full time instead of cramming it in around his day job.

Data scientists can use computing power to work with huge data sets to help solve confounding issues like SUID, climate change and immigration, Lavista said.

“There are many problems the world has that, we believe, we can make a difference with AI,” he said.

The collaboration has been exciting for Anderson, the Children’s research fellow. She says this unusual partnership between the medical world and the technology sector has applications in many different fields.

“I think it is really exciting because it is a concept that absolutely can be used to ask questions outside of SIDS,” Anderson said. “Everybody is there because they want to make a difference. It is very much a collaborative effort.”

The scientists at Microsoft and Children’s aren’t stopping with the publication of this study. Lavista said they are delving into other questions surrounding SUID, such as the impact of prenatal care, how the age of an infant relates to sudden death and an examination of what SUID looks like in all 50 states.

post

Is drought on the horizon? Researchers turn to AI in a bid to improve forecasts

As winter drags on, some people wonder whether to pack shorts for a late-March escape to Florida, while others eye April temperature trends in anticipation of sowing crops. Water managers in the western U.S. check for the possibility of early-spring storms to top off mountain snowpack that is crucial for irrigation, hydropower and salmon in the summer months.

Unfortunately, forecasts for this timeframe — roughly two to six weeks out — are a crapshoot, noted Lester Mackey, a statistical machine learning researcher at Microsoft’s New England research lab in Cambridge, Massachusetts. Mackey is bringing his expertise in artificial intelligence to the table in a bid to increase the odds of accurate and reliable forecasts.

“The subseasonal regime is where forecasts could use the most help,” he said.

Mackey knew little about weather and climate forecasting until Judah Cohen reached out to him. Cohen, a climatologist at Atmospheric and Environmental Research, a Verisk business in Lexington, Massachusetts, that consults on climate risk, wanted help using machine learning techniques to tease out repeating weather and climate patterns from mountains of historical data as a way to improve subseasonal and seasonal forecast models.

In a competition sponsored by the U.S. Bureau of Reclamation, the preliminary machine-learning-based forecast models that Mackey, Cohen and their colleagues developed outperformed the standard models used by U.S. government agencies to generate subseasonal forecasts of temperature and precipitation two to four weeks and four to six weeks out.

Mackey’s team recently secured funding from Microsoft’s AI for Earth initiative to improve and refine its technique with an eye toward advancing the technology for the social good.

“Lester is working on this because it is a hard problem in machine learning, not because it is a hard problem in weather forecasting,” noted Lucas Joppa, Microsoft’s chief environmental officer who runs the AI for Earth program, as he explained why his group is helping fund the research. “It just so happens that the techniques he is interested in exploring have huge applicability in weather forecasting, which happens to have huge applicability in broader societal and economic domains.”

Fields being irrigated on the edge of the desert in the Cuyama Valley. Photo by Getty Images.

AI on the brain

Mackey said current weather models perform well up to about seven days in advance, and climate forecast models get more reliable as the time horizon extends from seasons to decades. Subseasonal forecasts are a middle ground, relying on a mix of variables that impact short-term weather such as daily temperature and wind and seasonal factors such as the state of El Niño and the extent of sea ice in the Arctic.

Cohen contacted Mackey out of a belief that machine learning, the arm of AI that encompasses recognizing patterns in statistical data to make predictions, could help improve his method of generating subseasonal forecasts by gleaning insights from troves of historical weather and climate data.

“I am basically doing something like machine learned pattern recognition in my head,” explained Cohen, noting that weather patterns repeat throughout the seasons and from year to year and that therefore pattern recognition can and should inform longer-term forecasts. “I thought maybe I can improve on what I am doing in my head with some of the machine learning techniques that are out there.”

Using patterns in historical weather data to predict the future was standard practice in weather and climate forecast generation until the 1980s. That’s when physical models of how the atmosphere and oceans evolve began to dominate the industry. These models have grown in popularity and sophistication with the exponential rise in computing power.

“Today, all of the major climate centers employ massive supercomputers to simulate the atmosphere and oceans,” said Mackey. “The forecasts have improved substantially over time, but they make relatively little use of historical data. Instead, they ingest today’s weather conditions and then push forward their differential equations.”

A combine harvester moving on a snow-covered field. Photo by Getty Images.

Forecast competition

As Mackey and Cohen were discussing a research collaboration, Cohen received notice of a competition sponsored by the U.S. Bureau of Reclamation to improve subseasonal forecasts of temperature and precipitation in the western U.S. The government agency is interested in improved subseasonal forecasts to better prepare water managers for shifts in hydrologic regimes, including the onset of drought and wet weather extremes.

“I said, ‘Hey, what do you think about trying to enter this competition as a way to motivate us, to make some progress,’” recalled Cohen.

Mackey, who was an assistant professor of statistics at Stanford University in California prior to joining Microsoft’s research organization and remains an adjunct professor at the university, invited two graduate students to participate on the project. “None of us had experience doing work in this area and we thought this would be a great way to get our feet wet,” he said.

Over the course of the 13-month competition, the researchers experimented with two types of machine learning approaches. One combed through a kitchen sink of data containing everything from historical temperature and precipitation records to data on sea ice concentration and the state of El Niño as well as an ensemble of physical forecast models. The other approach focused only on historical data for temperature when forecasting temperature or precipitation when forecasting precipitation.

“We were making forecasts every two weeks and between those forecasts we were acquiring new data, processing it, building some of the infrastructure for testing out new methods, developing methods and evaluating them,” Mackey explained. “And then every two weeks we had to stop what we were doing and just make a forecast and repeat.”

Toward the end of the competition, Mackey’s team discovered that an ensemble of both machine learning approaches performed better than either alone.
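
As a toy illustration of that finding, the sketch below averages the forecasts of two very different learners on synthetic data. The team’s actual models, features, and evaluation are far richer; every value here is made up:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge

    # Synthetic stand-ins for climate predictors and a target (e.g., temperature).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)
    X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

    model_a = Ridge().fit(X_train, y_train)                               # linear view
    model_b = RandomForestRegressor(random_state=0).fit(X_train, y_train) # nonlinear view

    # The ensemble simply averages the two forecasts; often this beats either alone.
    preds = {"ridge": model_a.predict(X_test),
             "forest": model_b.predict(X_test)}
    preds["ensemble"] = (preds["ridge"] + preds["forest"]) / 2

    for name, p in preds.items():
        print(f"{name}: RMSE {np.sqrt(np.mean((p - y_test) ** 2)):.3f}")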

Final results of the competition were announced today. Mackey, Cohen and their colleagues captured first place in forecasting average temperature three to four weeks in advance and second place in forecasting total precipitation five to six weeks out.

A flooded river under a walking bridge. Photo by Getty Images.

Forecast for the future

After the competition, the collaborators combined their ensemble of machine learning approaches with the standard models used by U.S. government agencies to generate subseasonal forecasts and found that the combined models improved the accuracy of the operational forecast by between 37 and 53 percent for temperature and 128 and 154 percent for precipitation. These results are reported in a paper the team posted on arXiv.org.

“I think we will continue to see these types of approaches be further refined and increase in the breadth of their use within the field of forecasting,” said Kenneth Nowak, water availability research coordinator with the U.S. Bureau of Reclamation, who organized the forecast rodeo. He added that government agencies will “look for opportunities to leverage” machine learning in future generations of operational forecast models.

Microsoft’s AI for Earth program is providing funding to Mackey and colleagues to hire an intern to expand and refine their machine learning based forecasting technique. The collaborators also hope that other machine learning researchers will be drawn to the challenge of cracking the code to accurate and reliable subseasonal forecasts. To encourage these efforts, they have made available to the public the dataset they created to train their models.

Cohen, who kicked off the collaboration with Mackey out of a curiosity about the potential impact of AI on subseasonal to seasonal climate forecasts, said, “I see the benefit of machine learning, absolutely. This is not the end; more like the beginning. There is a lot more that we can do to increase its applicability.”

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

post

18 best practices for human-centered AI design

By Mihaela Vorvoreanu, Saleema Amershi, and Penny Collisson

Today we’re excited to share a set of Guidelines for Human-AI Interaction. These 18 guidelines can help you design AI systems and features that are more human-centered. Based on more than two decades of thinking and research, they have been validated through a rigorous study published in CHI 2019.

Why do we need guidelines for human-AI interaction?

While classic interaction guidelines hold for AI systems, attributes of AI services, including their accuracy, failure modalities, and understandability, raise new challenges and opportunities. Consistency, for example, is a classic design guideline that advocates for predictable behaviors and minimizing unexpected changes. AI components, however, can be inconsistent because they may learn and adapt over time.

We need updated guidance on designing interactions with AI services that provide meaningful experiences, keeping the user in control and respecting users’ values, goals, and attention.

Why these guidelines?

AI-focused design guidance is blooming across UX conferences, the tech press, and within individual design teams. That’s exciting, but it can be hard to know where to start. We wanted to help with that, so…

  • We didn’t just make these up! They come from more than 20 years of work. We read numerous research papers, magazine articles, and blog posts. We synthesized a great deal of knowledge acquired across the design community into a set of guidelines that apply to a wide range of AI products, are specific, and are observable at the UI level.
  • We validated the guidelines through rigorous research. We tested the guidelines through three rounds of validation with UX and HCI experts. Based on their feedback, we iterated the guidelines until experts confirmed that they were clear and specific.

Let’s dive into the guidelines!

The guidelines are grouped into four categories that indicate when during a user’s interactions they apply: upon initial engagement with the system, during interaction, when the AI service guesses wrong, and over time.

Initially

1. Make clear what the system can do.

2. Make clear how well the system can do what it can do.

The guidelines in the first group are about setting expectations: What are the AI’s capabilities? What level of quality or error can a user expect? Over-promising can hurt perceptions of the AI service.

PowerPoint’s QuickStarter illustrates Guideline 1, Make clear what the system can do. QuickStarter is a feature that helps you build an outline. Notice how QuickStarter provides explanatory text and suggested topics that help you understand the feature’s capabilities.

During Interaction

3. Time services based on context.

4. Show contextually relevant information.

5. Match relevant social norms.

6. Mitigate social biases.

This subset of guidelines is about context. Whether it’s the larger social and cultural context or the local context of a user’s setting, current task, and attention, AI systems should take context into consideration.

AI systems make inferences about people and their needs, and those depend on context. When AI systems take proactive action, it’s important for them to behave in socially acceptable ways. To apply Guidelines 5 and 6 effectively, ensure your team has enough diversity to cover each other’s blind spots.

Acronyms in Word highlights Guideline 4, Show contextually relevant information. It displays the meanings of acronyms used in your own work environment that appear in the currently open document.

When Wrong

7. Support efficient invocation.

8. Support efficient dismissal.

9. Support efficient correction.

10. Scope services when in doubt.

11. Make clear why the system did what it did.

Most AI services have some rate of failure. The guidelines in this group recommend how an AI system should behave when it is wrong or uncertain, which will inevitably happen.

The system might not trigger when expected, or might trigger at the wrong time, so it should be easy to invoke (Guideline 7) and dismiss (Guideline 8). When the system is wrong, it should be easy to correct it (Guideline 9), and when it is uncertain, Guideline 10 suggests building in techniques for helping the user complete the task on their own. For example, the AI system can gracefully fade out, or ask the user for clarification.

Auto Alt Text automatically generates alt text for photographs by using intelligent services in the cloud. It illustrates Guideline 9, Support efficient correction, because automatic descriptions can be easily modified by clicking the Alt Text button in the ribbon.

Over Time

12. Remember recent interactions.

13. Learn from user behavior.

14. Update and adapt cautiously.

15. Encourage granular feedback.

16. Convey the consequences of user actions.

17. Provide global controls.

18. Notify users about changes.

The guidelines in this group remind us that AI systems are like getting a new puppy: they are long-term investments and need careful planning so they can learn and improve over time. Learning (Guideline 13) also means that AI systems change over time. Changes need to be managed cautiously so the system doesn’t become unpredictable (Guideline 14). You can help users manage inherent inconsistencies in system behavior by notifying them about changes (Guideline 18).

Ideas in Excel empowers users to understand their data through high-level visual summaries, trends, and patterns. It encourages granular feedback (Guideline 15) on each suggestion by asking, “Is this helpful?”

What’s next?

If you’d like some more ideas, stay tuned for another post on this work, where we share some of the ways we’ve been using the guidelines at Microsoft. We’d love to hear about your experiences with the guidelines. Please share them in the comments.

Authors

Mihaela Vorvoreanu is a program manager working on human-AI interaction at Microsoft Research.

Saleema Amershi is a researcher working on human-AI interaction at Microsoft Research AI.

Penny Marsh Collisson is a user research manager working on AI in Office.

With thanks to our team who developed The Guidelines for Human-AI Interaction: Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz.

Thanks also to Ruth Kikin-Gil for her thoughtful collaboration, and for curating examples for this post.

post

New research: Leaders at successful companies embracing AI

The race is on
The survey results show that high-growth companies are not only more than twice as likely to actively use AI as lower-growth companies, but they also have bigger plans on a much shorter timeframe. Of the double-digit-growth companies surveyed, almost all (94%) intend to use AI for decision-making within the next three years, and more than half plan to do so over the next 12 months. In comparison, the majority of low-growth companies are only looking to invest in decision-making AI in the next three to five years.

“What’s striking about the research is the difference between double-digit growth companies and those with lower growth,” says Susan Etlinger, industry analyst with the Altimeter Group. “Double-digit growth companies are further along in their AI deployments, but also see a greater urgency in using more AI. They are looking at a one to three year timeframe – often really focused on the coming year. Lower growth companies are looking at more of a 5-year timeframe. What this says to me is that the more you know, the higher your sense of urgency is.”

Crucially, it’s not too late for those companies and leaders who are further behind in their AI journeys to start now, to increase their chances of remaining competitive.

Start small, learn fast and scale
The research findings show that leaders successfully use AI to invest more time in people, while it helps them create and execute new strategies. In addition, we have seen how leaders value AI’s ability to help them grow their own skills.

Evidence that the fastest-growing companies have invested – and will continue to invest – in AI also highlights why businesses, inspired by their leadership, should progress on their AI journeys sooner rather than later, before they risk losing their competitive edge to more progressive companies.

In Microsoft EMEA President Michel van der Bel’s words: “Start small, but start with intention. This will help teams build trust, learn from feedback and build confidence. In a nutshell, this is what will help get your AI journey off to a strong start.” Progress today, and reap the benefits tomorrow, both for yourself as a leader and for your company as a whole.

For more information on progressing your AI journey, please feel free to visit our AI business resources.

post

New AI and IoT solution frees skilled fish farming workers in Japan to focus on more demanding tasks

Japan’s labor-intensive fish farming sector has taken a major step toward automation with the adoption of an artificial intelligence (AI) and Internet of Things (IoT) solution that frees up highly skilled workers from a crucial, but highly time-consuming, task.

The breakthrough follows half a year of field tests at Kindai University, which plays a significant role in the national production of red sea bream – a fish known in Japanese as “Madai” that is prized by sushi and sashimi lovers both at home and abroad.

The university’s Aquaculture Research Institute hatches red sea bream and raises them to a juvenile stage, known as fingerlings. Every year, it sells around 12 million fingerlings to fish farms that grow them to adult size for the market. To meet rising demand for the delicacy, Kindai’s workers must hand sort as many as 250,000 fingerlings a day.

Fingerlings in a holding pen.

Japan’s aging demographics and other factors have made the recruitment of experienced sorters difficult, particularly when so much repetitive work is required. To counter this problem, the university approached its long-term partner company, Toyota Tsusho, which in turn brought in Microsoft Japan to help come up with ways of automating a number of processes. The aim is to relieve workers of manually intensive duties so that they can focus their valuable skills on more demanding tasks.

This latest innovation centers on software that automatically regulates the flow of water through pumps that transfer fingerlings from their pens and onto conveyor belts for sorting. IoT and AI tools continuously monitor and adjust the flows.

Now automated … Fingerlings being pumped from their pens.

“There are three processes involved in sorting fingerlings,” explains Naoki Taniguchi, who manages the Institute’s Larval Rearing Division and is Deputy General Manager of the Aquaculture Technology and Production Center. “Firstly, we gather the fingerlings near the mouth of the pump, which sucks them up with seawater from the fish pens without injuring them. Secondly, we transfer them to the conveyor belts; to do this, we must constantly adjust the pump’s water flow. Lastly, we sort them by removing fingerlings that are too small or defective from the conveyor belts.”

Naoki Taniguchi of the Institute’s Larval Rearing Division, Aquaculture Technology and Production Center

Taniguchi said adjusting the water flow from a pump is crucial.

“If the flow is too fast, too many fingerlings will make it onto the conveyor belts, and our sorting teams won’t be able to keep up, and some fish that should be removed will be missed. If the flow is too slow, too few fingerlings will be sorted, and production will fall short. Until now, it’s been a process entrusted only to a few operators with sufficient experience.”

The new automated transfer system was created with Microsoft Azure Machine Learning Studio, using image analysis technology that recognizes the changing ratio of fish shapes to vacant areas within a pump’s pipes. From this, the system learned how expert human operators adjust flows optimally.
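
The production system’s internals aren’t described in detail, but the control loop can be sketched at a high level: estimate how densely fingerlings fill the pipe from each camera frame, then nudge the pump flow toward a target density. Every name and number below is invented for illustration:

    # Invented numbers for illustration only; not Kindai's production system.
    TARGET_DENSITY = 0.35   # desired ratio of fish shapes to vacant pipe area
    GAIN = 0.5              # proportional gain for flow corrections

    def estimate_density(frame):
        """Stand-in for the trained image model's output, a ratio in [0, 1]."""
        raise NotImplementedError

    def adjust_flow(current_flow, frame):
        # Too dense: sorters can't keep up, so slow the pump.
        # Too sparse: production falls short, so speed it up.
        error = TARGET_DENSITY - estimate_density(frame)
        return max(0.0, current_flow + GAIN * error)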

Field tests started last year, and within six months the system achieved the same flow control results as an operator.

Taniguchi said employees, who often used to spend their whole working day just adjusting water flows, are now able to devote their time to applying their rich experience in streamlining other fish farming processes. They will also be able to pass on their technical knowledge to a new generation of aquaculture specialists.

Sorting fingerlings on conveyor belts.

He hopes greater automation will make jobs in the sector more attractive to younger workers looking to build careers.

“Japan’s fishing industry employs about 150,000 people. But 80 percent of them are more than 40 years old. It is vital that we attract young people to the industry,” he said. “This automatic transfer system is just the beginning of our journey. Ultimately, we aim to automate the sorting process itself as well.”

READ MORE: AI and fish farming: High-tech help for a sushi and sashimi favorite in Japan

post

New research: Emotion and Cognition in the Age of AI

Given the accelerating pace of change around the globe, the worlds of school and work are undergoing massive transformations. New technologies such as artificial intelligence are empowering today’s students to address big challenges that motivate them, such as reversing climate change and slowing the spread of disease. At the same time, collaboration tools, mixed reality and social media are bringing them closer to one another than ever before.

To successfully navigate these changes and to leverage the opportunities ahead of them, we need to prepare students with the diverse skills they will need in the future.

To better understand how to prepare today’s kindergartners to thrive in work and life, last year we released research about the Class of 2030.

Our findings highlighted two core themes: student-centric approaches such as personalized learning, and the growing importance of social-emotional skills.

Social-emotional skills such as collaboration, empathy and creativity have long been essential, but our research revealed they have become newly important to employers and educators alike. Social-emotional skills are also necessary for well-being, which is a key predictor of academic and employment success.

So this year we decided to dig deeper, to better understand what educators and schools worldwide are doing to enhance students’ skills and well-being and to understand how technology can help. We worked with the Economist Intelligence Unit (EIU) to survey more than 760 educators in 15 countries. From Mexico to Sweden and from Indonesia to Canada, we listened to the voices of educators. We also interviewed leading experts and reviewed 90 pieces of research.

What we learned is that educators around the globe are placing a high priority on student well-being and they are actively seeking ways to nurture it in their classrooms, across the school environment, and in their communities.

According to the survey, 80 percent of educators believe that well-being is critical for academic success, for developing foundational literacies and for cultivating strong communication skills, and 70 percent of educators say well-being has grown in importance for K-12 students during their careers.

At the same time, school systems have not moved as quickly as educators to prioritize well-being. Only 53 percent of educators said their schools have a formal policy in place to support students’ well-being. Individual educators can do great things in their own classrooms. But to impact well-being at scale, systemic approaches are needed.

We identified some common barriers that educators encounter in trying to help improve their students’ well-being:

  • 64 percent of educators said they lack the resources or time to support students’ well-being
  • 71 percent of respondents said changes to enhance student well-being need to be driven by school leaders

And, we asked educators what technologies they find most beneficial in overcoming these barriers. Three areas stood out:

  • 58 percent mentioned immersive experiences that allow students to explore scenarios from the perspective of others, which show strong promise for promoting social-emotional skills, particularly empathy
  • 49 percent cited tools that foster collaboration among students
  • 46 percent of educators favor tools that help collect and analyze data about students’ emotional states

In addition, technology provides the critical scale to take any of these approaches beyond a single classroom.

To help identify best practices, we took a close look at schools where teachers report their students enjoy higher-than-average well-being. We found several common threads. These leader schools are more likely to:

  • Have a formal plan to promote well-being
  • Measure and monitor well-being as well as academic achievement
  • Support inclusive classroom practices that amplify student voice
  • Engage purposefully with the community
  • Take a whole-school approach to professional learning

A complete summary of our research results will be released in March. In the meantime, we invite you to join our free webinar series on Teaching Happiness, starting February 25, 2019, for a broader exploration of the skills that empower students to lead happy and fulfilling lives.

We are excited to be on this journey together with all of you, to learn from you, and to contribute our insights and our technologies to help every student on the planet achieve more.