
Breaking Bard: Using AI to unlock Shakespeare’s greatest works

Spoiler alert: At the end of Romeo and Juliet, they both die.

OK, as spoilers go, it’s not big. Most people have read the play, watched one of the famous films or sat through countless school lessons devoted to William Shakespeare and his work. They know it doesn’t end well for Verona’s most famous couple.

In fact, the challenge is finding something no one knows about the world-famous play, now more than 400 years old. That’s where artificial intelligence can help.

Phil Harvey, a Cloud Solution Architect at Microsoft in the UK, used the company’s Text Analytics API on 19 of The Bard’s plays. The API, which is available to anyone as part of Microsoft’s Azure Cognitive Services, can be used to identify sentiment and topics in text, as well as pick out key phrases and entities. This API is one of several Natural Language Processing (NLP) tools available on Azure.

By creating a series of colourful, Power BI graphs (below) showing how negative (red) or positive (green) the language used by The Bard’s characters was, he hoped to shine a new light on some of the greatest pieces of literature, as well as make them more accessible to people who worry the plays are too complex to easily understand.

Harvey said: “People can see entire plotlines just by looking at my graphs on language sentiment. Because visual examples are much easier to absorb, it makes Shakespeare and his plays more accessible. Reading language from the 16th and 17th centuries can be challenging, so this is a quick way of showing them what Shakespeare is trying to do.

“It’s a great example of data giving us new things to know and new ways of knowing it; it’s a fundamental change to how we process the world around us. We can now pick up Shakespeare, turn it into a data set and process it with algorithms in a new way to learn something I didn’t know before.”

What Harvey’s graphs reveal is that Romeo struggles with more extreme emotions than Juliet. Love has a much greater effect on him, challenging stereotypes of the time that women – the fairer sex – were more prone to the highs and lows of relationships.

“It’s interesting to see that the male lead is the one with more extreme emotions,” Harvey added. “The longest lines, both positive and negative, are spoken by him. Juliet is steadier; she is positive and negative but not extreme in what she says. Romeo is a fellow of more extreme emotion, he’s bouncing around all over the place.

“Macbeth is also interesting because there are these two peaks of emotion, and Shakespeare uses the wives at these points to turn the story. I also looked at Helena and Hermia in A Midsummer Night’s Dream, because they have a crossed-over love story. They are both positive at the start but then they find out something and it gets negative towards the end.”

Statue of William Shakespeare.

The project required AI working alongside humans to truly understand and fully appreciate Shakespeare’s plays

His Shakespeare graphs are the final step in a long process. After downloading a text file of The Bard’s plays from the internet, Harvey had to process the data to prepare it for Microsoft’s AI algorithms. He removed all the stage directions, keeping the act and scene numbers, the characters’ names and what they said. He then uploaded the text to the Microsoft Cognitive Services API, a set of tools that can be used in apps, websites and bots to see, hear, speak, understand and interpret users through natural methods of communication.

The Text Analytics API is pre-trained with an extensive body of text with sentiment associations. The model uses a combination of techniques during text analysis, including text processing, part-of-speech analysis, word placement and word associations.

After scanning the Shakespeare plays, Microsoft’s NLP tool gave the lines of dialogue a score between zero and one – scores close to one indicated a positive sentiment, and scores close to zero indicated a negative sentiment.
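As a rough illustration of the kind of call involved (not Harvey’s actual pipeline), the sketch below sends one line of dialogue to a Text Analytics sentiment endpoint and prints the 0–1 score. The region, API version, key and response shape are assumptions based on the v2.1-era API, which returned a single score per document.

```python
# Hedged sketch: scoring one line of dialogue with the Text Analytics sentiment
# endpoint. The region, API version and key are placeholders, and the response
# shape assumes the v2.1-era API, which returned a single 0-1 score per document.
import requests

ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment"
API_KEY = "<your-text-analytics-key>"

payload = {
    "documents": [
        {
            "id": "1",
            "language": "en",
            "text": "For never was a story of more woe than this of Juliet and her Romeo.",
        }
    ]
}

response = requests.post(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
)
response.raise_for_status()

for doc in response.json()["documents"]:
    # Scores near 1 suggest positive sentiment; scores near 0 suggest negative.
    print(f"Line {doc['id']}: sentiment score {doc['score']:.2f}")
```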

However, before you start imagining a world in which only robots read books before telling humans the gist of what happened, Harvey discovered some unexpected challenges with his test.

While the AI system worked well for Shakespeare plays that contained straightforward plots and dialogue, it struggled to determine whether more nuanced speech was positive or negative. The algorithm couldn’t work out whether Hamlet’s mad ravings were real or imagined, or whether characters were being deceptive or telling the truth. That meant the AI labelled events as positive when they were negative, and vice-versa. The AI believed The Comedy of Errors was a tragedy because of the physical, slapstick moments in the play.


Harvey realised that the parts of the plays that dealt with what truly makes us unique as humans – joking, elation, lying, double meanings, subterfuge, sarcasm – could only be noticed and interpreted by human readers. His project required AI working alongside humans to truly understand and fully appreciate Shakespeare.

Harvey insists that his experiments with Shakespeare’s plays are just a starting point but that the same combination of AI and humans can eventually be extended to companies and their staff, too.

“Take the example of customers phoning their energy company,” he said. “With Microsoft’s NLP tools, you could see if conversations that happen after 5pm are more negative than those that happen at 9am, and deploy staff accordingly. You could also see if a call centre worker turns conversations negative, even if they start out positive, and work with that person to ensure that doesn’t happen in the future.

“It can help companies engage with data in a different way and assist them with everyday tasks.”

Harvey also said journalists could use the tool to see how readers are responding to their articles, or social media experts would get an idea of how consumers viewed their brand.

For now, Harvey is concentrating on the Classics and is turning his attention to Charles Dickens, if he can persuade the V&A in London to let him study some of their manuscripts.

“In the V&A manuscripts, you can see where Dickens has crossed out words. I would love to train a custom vision model on that to get a page by page view of his changes. I could then look at a published copy of the text and see which parts of the book he worked on most; maybe that part went well but he had trouble with this bit. Dickens’s work was serialised in newspapers, so we might be able to deduce whether he was receiving feedback from editors that we didn’t know about. I think that’s amazing.”



Machine teaching: How people’s expertise makes AI even more powerful

Most people wouldn’t think to teach five-year-olds how to hit a baseball by handing them a bat and ball, telling them to toss the objects into the air in a zillion different combinations and hoping they figure out how the two things connect.

And yet, this is in some ways how we approach machine learning today — by showing machines a lot of data and expecting them to learn associations or find patterns on their own.

For many of the most common applications of AI technologies today, such as simple text or image recognition, this works extremely well.

But as the desire to use AI for more scenarios has grown, Microsoft scientists and product developers have pioneered a complementary approach called machine teaching. This relies on people’s expertise to break a problem into easier tasks and give machine learning models important clues about how to find a solution faster. It’s like teaching a child to hit a home run by first putting the ball on the tee, then tossing an underhand pitch and eventually moving on to fastballs.

“This feels very natural and intuitive when we talk about this in human terms but when we switch to machine learning, everybody’s mindset, whether they realize it or not, is ‘let’s just throw fastballs at the system,’” said Mark Hammond, Microsoft general manager for Business AI. “Machine teaching is a set of tools that helps you stop doing that.”

Machine teaching seeks to gain knowledge from people rather than extracting knowledge from data alone. A person who understands the task at hand — whether how to decide which department in a company should receive an incoming email or how to automatically position wind turbines to generate more energy — would first decompose that problem into smaller parts. Then they would provide a limited number of examples, or the equivalent of lesson plans, to help the machine learning algorithms solve it.

In supervised learning scenarios, machine teaching is particularly useful when little or no labeled training data exists for the machine learning algorithms because an industry or company’s needs are so specific.


In difficult and ambiguous reinforcement learning scenarios — where algorithms have trouble figuring out which of millions of possible actions it should take to master tasks in the physical world — machine teaching can dramatically shortcut the time it takes an intelligent agent to find the solution.

It’s also part of a larger goal to enable a broader swath of people to use AI in more sophisticated ways. Machine teaching allows developers or subject matter experts with little AI expertise, such as lawyers, accountants, engineers, nurses or forklift operators, to impart important abstract concepts to an intelligent system, which then performs the machine learning mechanics in the background.

Microsoft researchers began exploring machine teaching principles nearly a decade ago, and those concepts are now working their way into products that help companies build everything from intelligent customer service bots to autonomous systems.

“Even the smartest AI will struggle by itself to learn how to do some of the deeply complex tasks that are common in the real world. So you need an approach like this, with people guiding AI systems to learn the things that we already know,” said Gurdeep Pall, Microsoft corporate vice president for Business AI. “Taking this turnkey AI and having non-experts use it to do much more complex tasks is really the sweet spot for machine teaching.”

Today, if we are trying to teach a machine learning algorithm to learn what a table is, we could easily find a dataset with pictures of tables, chairs and lamps that have been meticulously labeled. After exposing the algorithm to countless labeled examples, it learns to recognize a table’s characteristics.

But if you had to teach a person how to recognize a table, you’d probably start by explaining that it has four legs and a flat top. If you saw the person also putting chairs in that category, you’d further explain that a chair has a back and a table doesn’t. These abstractions and feedback loops are key to how people learn, and they can also augment traditional approaches to machine learning.

“If you can teach something to another person, you should be able to teach it to a machine using language that is very close to how humans learn,” said Patrice Simard, a Microsoft distinguished engineer who pioneered the company’s machine teaching work for Microsoft Research. This month, his team moves to the Experiences and Devices group to continue this work and further integrate machine teaching with conversational AI offerings.

Machine teaching researchers Patrice Simard, Alicia Edelman Pelton and Riham Mansour sit in their Microsoft research office
Microsoft researchers Patrice Simard, Alicia Edelman Pelton and Riham Mansour (left to right) are working to infuse machine teaching into Microsoft products. Photo by Dan DeLong for Microsoft.

Millions of potential AI users

Simard first started thinking about a new paradigm for building AI systems when he noticed that nearly all the papers at machine learning conferences focused on improving the performance of algorithms on carefully curated benchmarks. But in the real world, he realized, teaching is an equally or arguably more important component to learning, especially for simple tasks where limited data is available.

If you wanted to teach an AI system how to pick the best car but only had a few examples that were labeled “good” and “bad,” it might infer from that limited information that a defining characteristic of a good car is that the fourth number of its license plate is a “2.” But pointing the AI system to the same characteristics that you would tell your teenager to consider — gas mileage, safety ratings, crash test results, price — enables the algorithms to recognize good and bad cars correctly, despite the limited availability of labeled examples.

In supervised learning scenarios, machine teaching improves models by identifying these high-level meaningful features. As in programming, the art of machine teaching also involves the decomposition of tasks into simpler tasks. If the necessary features do not exist, they can be created using sub-models that use lower level features and are simple enough to be learned from a few examples. If the system consistently makes the same mistake, errors can be eliminated by adding features or examples.
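As a loose sketch of the idea rather than any Microsoft tool, the example below encodes a teacher’s high-level concepts as explicit features so a simple classifier can learn from only a handful of examples. The keywords, example utterances and labels are all hypothetical.

```python
# Loose sketch of machine teaching ideas, not any Microsoft product: a teacher
# names high-level concepts as features so a simple model can learn from only
# a few labeled examples. Keywords, examples and labels are hypothetical.
import re
from sklearn.linear_model import LogisticRegression

BILLING_WORDS = {"invoice", "charge", "charged", "refund", "payment", "bill"}
PLAN_WORDS = {"upgrade", "data", "contract", "plan"}

def teacher_features(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    # Each feature is a concept the teacher named, not a raw word count.
    return [
        float(bool(words & BILLING_WORDS)),  # mentions a billing concept
        float(bool(words & PLAN_WORDS)),     # mentions a service-plan concept
        float("?" in text),                  # phrased as a question
    ]

examples = [
    ("Why was I charged twice on my last invoice?", "billing"),
    ("I need a refund for this payment", "billing"),
    ("Can I upgrade my data plan?", "service_plan"),
    ("What contract options do you offer?", "service_plan"),
]

X = [teacher_features(text) for text, _ in examples]
y = [label for _, label in examples]

model = LogisticRegression().fit(X, y)
print(model.predict([teacher_features("Please explain this charge on my bill")]))
```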

One of the first Microsoft products to employ machine teaching concepts is Language Understanding, a tool in Azure Cognitive Services that identifies intent and key concepts from short text. It’s been used by companies ranging from UPS and Progressive Insurance to Telefonica to develop intelligent customer service bots.

“To know whether a customer has a question about billing or a service plan, you don’t have to give us every example of the question. You can provide four or five, along with the features and the keywords that are important in that domain, and Language Understanding takes care of the machinery in the background,” said Riham Mansour, principal software engineering manager responsible for Language Understanding.

Microsoft researchers are exploring how to apply machine teaching concepts to more complicated problems, like classifying longer documents, email and even images. They’re also working to make the teaching process more intuitive, such as suggesting to users which features might be important to solving the task.

Imagine a company wants to use AI to scan through all its documents and emails from the last year to find out how many quotes were sent out and how many of those resulted in a sale, said Alicia Edelman Pelton, principal program manager for the Microsoft Machine Teaching Group.

As a first step, the system has to know how to identify a quote from a contract or an invoice. Oftentimes, no labeled training data exists for that kind of task, particularly if each salesperson in the company handles it a little differently.

If the system used traditional machine learning techniques, the company would need to outsource that process, sending thousands of sample documents and detailed instructions so an army of people could attempt to label them correctly — a process that can take months of back and forth to eliminate errors and find all the relevant examples. The company would also need a machine learning expert, who would be in high demand, to build the machine learning model. And if new salespeople started using formats that the system wasn’t trained on, the model would get confused and stop working well.

By contrast, Pelton said, Microsoft’s machine teaching approach would use a person inside the company to identify the defining features and structures commonly found in a quote: something sent from a salesperson, an external customer’s name, words like “quotation” or “delivery date,” “product,” “quantity,” or “payment terms.”

It would translate that person’s expertise into language that a machine can understand and use a machine learning algorithm that’s been preselected to perform that task. That can help customers build customized AI solutions in a fraction of the time using the expertise that already exists within their organization, Pelton said.

Pelton noted that there are countless people in the world “who understand their businesses and can describe the important concepts — a lawyer who says, ‘oh, I know what a contract looks like and I know what a summons looks like and I can give you the clues to tell the difference.’”

Microsoft CVP Gurdeep Pall talks in front of a presentation on a TV monitor
Microsoft Corporate Vice President for Business AI Gurdeep Pall talks at a recent conference about autonomous systems solutions that employ machine teaching. Photo by Dan DeLong for Microsoft.

Making hard problems truly solvable

More than a decade ago, Hammond was working as a systems programmer in a Yale neuroscience lab and noticed how scientists used a step-by-step approach to train animals to perform tasks for their studies. He had a similar epiphany about borrowing those lessons to teach machines.

That ultimately led him to found Bonsai, which was acquired by Microsoft last year. It combines machine teaching with deep reinforcement learning and simulation to help companies develop “brains” that run autonomous systems in applications ranging from robotics and manufacturing to energy and building management. The platform uses a programming language called Inkling to help developers and even subject matter experts decompose problems and write AI programs.

Deep reinforcement learning, a branch of AI in which algorithms learn by trial and error based on a system of rewards, has successfully outperformed people in video games. But those models have struggled to master more complicated real-world industrial tasks, Hammond said.

Adding a machine teaching layer — or infusing an organization’s unique subject matter expertise directly into a deep reinforcement learning model — can dramatically reduce the time it takes to find solutions to these deeply complex real-world problems, Hammond said.

For instance, imagine a manufacturing company wants to train an AI agent to autonomously calibrate a critical piece of equipment that can be thrown out of whack as temperature or humidity fluctuates or after it’s been in use for some time. A person would use the Inkling language to create a “lesson plan” that outlines relevant information to perform the task and to monitor whether the system is performing well.

Armed with that information from its machine teaching component, the Bonsai system would select the best reinforcement learning model and create an AI “brain” to reduce expensive downtime by autonomously calibrating the equipment. It would test different actions in a simulated environment and be rewarded or penalized depending on how quickly and precisely it performs the calibration.
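The toy function below only illustrates the kind of reward shaping described here. It is plain Python rather than Bonsai’s Inkling language, and the thresholds and weights are invented purely to show the idea of rewarding fast, precise calibration in simulation.

```python
# Toy illustration of reward shaping for an autonomous calibration agent.
# This is plain Python, not Bonsai's Inkling language or API; the thresholds
# and weights are invented to show "reward fast, precise calibration".
def calibration_reward(error_before, error_after, steps_taken, max_steps=100):
    """Reward actions that shrink calibration error quickly."""
    improvement = error_before - error_after      # positive if the error shrank
    time_penalty = steps_taken / max_steps        # slower episodes cost more
    reward = improvement - 0.1 * time_penalty
    if error_after < 0.01:                        # within tolerance: bonus reward
        reward += 1.0
    return reward

# Example: an action that halves the error on step 20 of a 100-step episode.
print(calibration_reward(error_before=0.2, error_after=0.1, steps_taken=20))
```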

Telling that AI brain what’s important to focus on at the outset can short circuit a lot of fruitless and time-consuming exploration as it tries to learn in simulation what does and doesn’t work, Hammond said.

“The reason machine teaching proves critical is because if you just use reinforcement learning naively and don’t give it any information on how to solve the problem, it’s going to explore randomly and will maybe hopefully — but frequently not ever — hit on a solution that works,” Hammond said. “It makes problems truly solvable whereas without machine teaching they aren’t.”


 Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.


This Slovak startup is using AI and drones to help preserve natural water cycles

Working on his model for more than three decades, Rain for Climate founder Michal Kravčík has gathered scientists, ecologists, hydrologists, entrepreneurs and government agencies around him to create a plan to restore climate stability:

“The research we’ve been working on for years has shown that climate change is not just about high greenhouse gas production, but especially about desertification – the planet’s drying out. According to the analyses, we have about five years to act, otherwise the ecosystem will be irreversibly damaged,” says Vlado Zaujec, CEO and co-founder of Rain for Climate.


When AI meets nature
Rain for Climate’s unique solution involves gathering territorial data with drones, which can provide perspective and information faster and more accurately than standard ground-level analysis. This data can then be used to create a personalised report for each customer, based on the needs of their land, providing bespoke technical solutions from a catalogue of over 5,000 different possible measures and actions.

As a Microsoft AI for Earth grant winner, the company has been given free Azure credits to help power and develop an AI solution to more accurately analyse drone data, at a faster rate. Currently in an internal testing phase, the solution also makes use of machine learning, allowing the AI to also improve with time, teaching itself to spot patterns and make connections across its ever-growing data pool.

“We get a lot of data from the drones, which we can quickly analyse thanks to artificial intelligence and machine learning, made possible by our Microsoft grant,” says Vlado Zaujec. “Based on the evaluation, experts can then prepare a water retention project, where they select different technical solutions from a catalogue of more than 5000 measures. The type of solution, size, location and materials used reflect the uniqueness of each territory.”

As with many environmental projects, Rain for Climate’s founder sees the work as a long-distance endeavour. The AI-powered solution the company is building is one step towards its ultimate goal: restoring water to its natural balance in affected areas as quickly and efficiently as possible. Thanks to technology and companies like Rain for Climate, we can look forward to more innovative solutions that help conserve our planet.


Artificial intelligence takes on ocean trash

Inspiration sometimes arrives in strange ways. Here is the story of how a dirty disposable diaper led to the development of an artificial intelligence (AI) solution to help rid the world’s coasts of massive amounts of waste and garbage.

It starts in 2005: Camden Howitt is surfing off Puerto Escondido on Mexico’s wild west coast when, suddenly, a floating diaper smacks him in the face. He paddles back to shore in disgust, only to stumble upon a discarded toilet seat lying on the sand.

When Howitt returns home to New Zealand half a world away, his heart sinks when he sees how much trash and other refuse is also washing up on its geographically isolated, and once pristine, 15,000-kilometer (9,300-mile) coastline.

Many people might merely shrug at such a seemingly intractable global problem and see it as just too hard to fix. But not Howitt. His mission became clear: He would dedicate his life to protecting paradise.

Camden Howitt, Co-Founder of Sustainable Coastlines.

Co-Founder Sam Judd came up with the idea of forming a non-profit organization while surfing in the Galápagos Islands in 2008. A year later, the two created Sustainable Coastlines in New Zealand to educate, motivate, and empower individuals and communities to clean up and restore their coastal environments and waterways.

It was the start of an obsession, and one that has just attracted a grant from AI for Earth: Microsoft’s US$50 million, five-year commitment to put AI in the hands of those working to protect our planet across four key areas – agriculture, biodiversity, climate change, and water.

Microsoft President, Brad Smith (middle), during his visit to New Zealand with Camden Howitt (right) and Sustainable Coastlines development lead Dr Sandy Britain (left).

“This kind of initiative is exactly what our planet needs – something simple, but effective, that can easily be adopted at grass-roots level to make a difference, empowering every community to keep their environment clean and make the world a better place for future generations,” Microsoft President Brad Smith said on a visit to New Zealand in March.

At Lyall Bay, near the capital Wellington, Smith braved a blustery day to see the organization’s litter-busting technology in action. He helped collect garbage from the beach, then logged and categorized it in the Sustainable Coastlines’ uniquely comprehensive database.

Since it started, Sustainable Coastlines and its growing legions of volunteers have removed enough trash from shorelines around New Zealand and the Pacific to fill the equivalent of nearly 45 shipping containers. They have picked up tens of millions of individual items, 77% of which are single-use plastic.

It’s an impressive achievement, but the problem of ocean garbage is getting worse and is a global scourge that has no boundaries. Howitt’s vision now is “to combine my deep love for the outdoors with a passion for designing systemic tools for large-scale change.” To get there, Sustainable Coastlines has teamed up with Microsoft and its innovative technology partner, Enlighten Designs.

To find out more, I recently visited Sustainable Coastlines’ headquarters in the nation’s most populous city, Auckland.

New Zealand Prime Minister, Jacinda Ardern (middle), at the opening of The Flagship Education Centre, with Sam Judd (left) and Camden Howitt (right).

Howitt looks more or less how you might expect a passionate, ocean-loving environmentalist to look. His beard is long and rugged, his tan is deep, and his determination is strong. Before long he is proudly showing me around the building, The Flagship Education Center, which was opened by New Zealand Prime Minister Jacinda Ardern in October last year.

His organization is determined to be sustainable in practice as well as in name. The building captures and recycles its own water. Membrane roofing both insulates and breaks down airborne pollutants into non-toxic by-products. All gray and black water is treated and composted on site. Its offices are powered by state-of-the-art solar panels and batteries that contribute excess power to the city’s standard electricity grid.

Later, Howitt opens up about the scale of the environmental challenges and myths confronting his homeland of long beaches and hundreds of islands at the western edge of the South Pacific. Over the years, a carefully constructed clean-and-green brand has made foreign tourism a massive money-spinner for New Zealand’s economy. And, many Kiwis honestly regard themselves as “tidy” citizens.

Yet the World Bank ranks New Zealand as the planet’s tenth-largest per capita producer of urban waste, well ahead of the United States at 19th. “That’s a top ten that no one wants to be in,” Howitt says. “As New Zealand’s population rockets and we consume like there’s no tomorrow, we could easily rise in that ranking.”

He hopes new technologies and solutions can help reverse this disturbing trend.

Enlighten Designs has built a platform that employs intelligent digital storytelling and visualization tools as part of Microsoft’s Cognitive Services suite. And, together with Microsoft, it is also developing a national litter database that will not only track the impact of clean-up efforts on waste but also generate accurate, scientifically valid data and insights.


Collaborate with others and keep track of to-dos with new AI features in Word

Focus is a simple but powerful thing. When you’re in your flow, your creativity takes over, and your work is effortless. When you’re faced with distractions and interruptions, progress is slow and painful. And nowhere is that truer than when writing.

Microsoft Word has long been the standard for creating professional-quality documents. Technologies like Editor—Word’s AI-powered writing assistant—make it an indispensable tool for the written word. But at some point in the writing process, you’ll need some information you don’t have at your fingertips, even with the best tools. When this happens, you likely do what research tells us many Word users do: leave a placeholder in your document and come back to it later to stay in your flow.

Today, we’re starting to roll out new capabilities to Word that help users create and fill in these placeholders without leaving the flow of their work. For example, type TODO: finish this section or <<insert closing here>> and Word recognizes and tracks them as to-dos. When you come back to the document, you’ll see a list of your remaining to-dos, and you can click each one to navigate back to the right spot.
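As a rough illustration of the pattern matching involved (not Word’s actual implementation), a few lines of Python can spot both styles of placeholder in a draft and list them as to-dos:

```python
# Rough illustration only, not Word's implementation: spotting "TODO: ..." notes
# and "<<...>>" placeholders in a draft and listing them as to-dos.
import re

PLACEHOLDER = re.compile(r"TODO:\s*(?P<todo>.+)|<<(?P<insert>[^>]+)>>", re.IGNORECASE)

draft = """The quarterly numbers improved across every region. TODO: finish this section
Thanks again for reading. <<insert closing here>>"""

for match in PLACEHOLDER.finditer(draft):
    text = match.group("todo") or match.group("insert")
    print(f"to-do at offset {match.start()}: {text.strip()}")
```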

Animated screenshot of a Word document open using the AI-powered To-Do feature.

Once you’ve created your to-dos, Word can also help you complete them. If you need help from a friend or coworker, just @mention them within a placeholder. Word sends them a notification with a “deep link” to the relevant place in the document. Soon, they’ll be able to reply to the notification with their contributions, and those contributions will be inserted directly into the document—making it easy to complete the task with an email from any device.

Over time, Office will use AI to help fill in many of these placeholders. In the next few months, Word will use Microsoft Search to suggest content for a to-do like <<insert chart of quarterly sales figures>>. You will be able to pick from the results and insert content from another document with a single click.

These capabilities are available today for Word on the Mac for Office Insiders (Fast) as a preview. We’ll roll these features out to all Office 365 subscribers soon for Word for Windows, the Mac, and the web.

Get started as an Office for Mac Insider

Office Insider for Mac has two speeds: Insider Fast and Insider Slow. To get access to this and other new feature releases, you’ll need a subscription to Office 365. To select a speed, open Microsoft AutoUpdate by selecting Check for Updates on the Help menu.

As always, we would love to hear from you. Please send us your thoughts at UserVoice or visit us on Twitter or Facebook. You can also let us know how you like the new features by clicking the smiley face icon in the upper-right corner of Word.


New updates to Azure AI boost business productivity, expand dev capabilities

As companies increasingly look to transform their businesses with AI, we continue to add improvements to Azure AI to make it easy for developers and data scientists to deploy, manage, and secure AI functions directly into their applications with a focus on the following solution areas:

  1. Leveraging machine learning to build and train predictive models that improve business productivity with Azure Machine Learning.
  2. Applying an AI-powered search experience and indexing technologies that quickly find information and glean insights with Azure Search.
  3. Building applications that integrate pre-built and custom AI capabilities like vision, speech, language, search, and knowledge to deliver more engaging and personalized experiences with our Azure Cognitive Services and Azure Bot Service.

Today, we’re pleased to share several updates to Azure Cognitive Services that continue to make Azure the best place to build AI. We’re introducing a preview of the new Anomaly Detector Service which uses AI to identify problems so companies can minimize loss and customer impact. We are also announcing the general availability of Custom Vision to more accurately identify objects in images. 

From speech recognition, translation and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Anomaly detection as an AI service

Anomaly Detector is a new Cognitive Service that lets you detect unusual patterns or rare events in your data that could translate to identifying problems like credit card fraud.

Today, over 200 teams across Azure and other core Microsoft products rely on Anomaly Detector to boost the reliability of their systems by detecting irregularities in real-time and accelerating troubleshooting. Through a single API, developers can easily embed anomaly detection capabilities into their applications to ensure high data accuracy, and automatically surface incidents as soon as they happen.

Common use case scenarios include identifying business incidents and text errors, monitoring IoT device traffic, detecting fraud, responding to changing markets, and more. For instance, content providers can use Anomaly Detector to automatically scan video performance data specific to a customer’s KPIs, helping to identify problems in an instant. Alternatively, video streaming platforms can apply Anomaly Detector across millions of video data sets to track metrics. A missed second in video performance can translate to significant revenue loss for content providers that monetize on their platform.
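As a hedged sketch of what calling the service might look like from Python: the endpoint path, request body and response field below are assumptions about the preview REST API and may differ, and the key and region are placeholders.

```python
# Hedged sketch of calling the Anomaly Detector preview API on a daily series.
# The endpoint path, request body and "isAnomaly" response field are assumptions
# about the preview REST API and may differ; the key and region are placeholders.
from datetime import datetime, timedelta
import requests

ENDPOINT = ("https://westus2.api.cognitive.microsoft.com/"
            "anomalydetector/v1.0/timeseries/entire/detect")
API_KEY = "<your-anomaly-detector-key>"

start = datetime(2019, 3, 1)
values = [32.0, 31.5, 33.1, 32.8, 31.9, 33.4, 32.2, 98.7, 32.4, 31.7, 33.0, 32.6]
body = {
    "granularity": "daily",
    "series": [
        {"timestamp": (start + timedelta(days=i)).strftime("%Y-%m-%dT%H:%M:%SZ"),
         "value": v}
        for i, v in enumerate(values)
    ],
}

resp = requests.post(ENDPOINT,
                     headers={"Ocp-Apim-Subscription-Key": API_KEY},
                     json=body)
resp.raise_for_status()

for point, anomalous in zip(body["series"], resp.json().get("isAnomaly", [])):
    if anomalous:
        print("anomaly at", point["timestamp"], "value", point["value"])
```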

Custom Vision: automated machine learning for images

With the general availability of Custom Vision, organizations can also transform their business operations by quickly and accurately identifying objects in images.

Powered by machine learning, Custom Vision makes it easy and fast for developers to build, deploy, and improve custom image classifiers to quickly recognize content in imagery. Developers can train their own classifier to recognize what matters most in their scenarios, or export these custom classifiers to run them offline and in real time on iOS (in CoreML), Android (in TensorFlow), and many other devices on the edge. The exported models are optimized for the constraints of a mobile device providing incredible throughput while still maintaining high accuracy.
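For the offline case, the sketch below shows the general shape of running an exported classifier on-device. It assumes a TensorFlow Lite export named model.tflite with an accompanying labels.txt file; the actual export format, input types and preprocessing steps vary, so treat it as illustrative only.

```python
# Hedged sketch of running an exported image classifier on-device. It assumes a
# TensorFlow Lite export ("model.tflite" plus "labels.txt"); actual export
# formats, input dtypes and preprocessing vary, so treat this as illustrative.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

labels = [line.strip() for line in open("labels.txt")]

# Resize the image to the model's expected input size and add a batch dimension.
_, height, width, _ = input_details["shape"]
image = Image.open("sample_frame.jpg").convert("RGB").resize((width, height))
batch = np.expand_dims(np.array(image, dtype=input_details["dtype"]), axis=0)

interpreter.set_tensor(input_details["index"], batch)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]

best = int(np.argmax(scores))
print(f"predicted: {labels[best]} (score {float(scores[best]):.2f})")
```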

Today, Custom Vision can be used for a variety of business scenarios. Minsur, the largest tin mine in the western hemisphere, located in Peru, applies Custom Vision to detect treatment foam levels, helping to create a sustainable mining practice by ensuring that water used in the mineral extraction process is properly treated for reuse in agriculture and livestock. The company used a combination of Custom Vision and Azure video analytics to replace a highly manual process, so employees can focus on more strategic projects within the operation.


Screenshot of the Custom Vision platform, where you can train the model to detect unique objects in an image, such as your brand’s logo.

Starting today, Custom Vision delivers the following improvements:

  • High quality models – Custom Vision features advanced training with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget and Custom Vision will experimentally identify the best training and augmentation settings.
  • Iterate with ease – Custom Vision makes it simple for developers to integrate computer vision capabilities into applications with 3.0 REST APIs and SDKs. The end-to-end pipeline is designed to support the iterative improvement of models, so you can quickly train a model, prototype in real-world conditions, and use the resulting data to improve the model, getting models to production quality faster.
  • Train in the cloud, run anywhere – The exported models are optimized for the constraints of a mobile device, providing incredible throughput while still maintaining high accuracy. Now, you can also export classifiers to support the ARM architecture for Raspberry Pi 3 and the Vision AI Dev Kit.

For more information, visit the Custom Vision Service Release Notes.

Get started today

Today’s milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development, and industry leading security and compliance for protecting customers’ data.

To get started building vision and search intelligent apps, please visit the Cognitive Services site.


What’s new with Seeing AI

Saqib Shaikh holds his camera phone in front of his face with Seeing AI open on the screen

By Saqib Shaikh, Software Engineering Manager and Project Lead for Seeing AI

Seeing AI provides people who are blind or with low vision an easier way to understand the world around them through the cameras on their smartphones. Whether in a room, on a street, in a mall or an office – people are using the app to independently accomplish daily tasks like never before. Seeing AI helps users read printed text in books, restaurant menus, street signs and handwritten notes, as well as identify banknotes and products via their barcode. Leveraging on-device facial-recognition technology, the app can even describe the physical appearance of people and predict their mood.

Today, we are announcing new Seeing AI features for the enthusiastic community of users who share their experiences with the app, recommend new capabilities and suggest improvements for its functionalities. Inspired by this rich feedback, here are the updates rolling out to Seeing AI to enhance the user’s experience:

  • Explore photos by touch: Leveraging the Custom Vision Service in tandem with the Computer Vision API, this new feature enables users to tap their finger to an image on a touch-screen to hear a description of objects within an image and the spatial relationship between them. Users can explore photos of their surroundings taken on the Scene channel, family photos stored in their photo browser, and even images shared on social media by summoning the options menu while in other apps.
  • Native iPad support: For the first time we’re releasing iPad support, to provide a better Seeing AI experience that accounts for the larger display requirements. iPad support is particularly important to individuals using Seeing AI in academic or other professional settings where they are unable to use a cellular device.
  • Channel improvements: Users can now customize the order in which channels are shown, enabling easier access to favorite features. We’ve also made it easier to access the face recognition function while on the Person channel, by relocating the feature directly on the main screen. Additionally, when analyzing photos from other apps, the app will now provide audio cues that indicate Seeing AI is processing the image.

Since the app’s launch in 2017, Seeing AI has leveraged AI technology and inclusive design to help people with more than 10 million tasks. If you haven’t tried Seeing AI yet, download it for free on the App Store. If you have, please share your thoughts, feedback or questions with us at [email protected], or through the Disability Answer Desk and Accessibility User Voice Forum.


Seattle Times: ‘Even one cigarette’ in pregnancy can raise risk of babies’ death, Seattle Children’s and Microsoft find

It’s no surprise that smoking during pregnancy is unhealthy for the fetus — just as it’s unhealthy for the person smoking. But the powerful combination of medical research and data science has given new insights into the risks involved, specifically when it comes to babies suddenly dying in their sleep.

The risk of Sudden Unexpected Infant Death (SUID) increases with every cigarette smoked during pregnancy, according to a joint study by Seattle Children’s Research Institute and Microsoft data scientists.

Further, while smoking less or quitting during pregnancy can help significantly, a risk of SUID exists even if a person stops smoking right before becoming pregnant, the team demonstrated.

“Any amount of smoking, even one cigarette, can double your risk,” said Tatiana Anderson, a post-doctoral research fellow at Children’s who worked on the study, which was published Monday in the journal Pediatrics.


Anderson and the rest of the team estimate that smoking during pregnancy is responsible for 800 of the approximately 3,700 SUID deaths in the United States every year. That’s 22 percent of all SUID cases.

The team analyzed vast data sets from the Centers for Disease Control and Prevention (CDC) that included every baby born in the United States from 2007 to 2011. In that time span, more than 20 million babies were born and 19,127 died of SUID, which includes Sudden Infant Death Syndrome (SIDS).

The study found that the risk of SUID doubles if a person goes from not smoking to smoking just one cigarette daily throughout pregnancy. At a pack a day (20 cigarettes), the risk is tripled compared to nonsmokers. The odds plateau from there.

The chance of SUID decreases when women quit smoking or smoke less: Women who tapered their smoking by the third trimester showed a 12 percent decreased risk. Quitting altogether by the third trimester lowered the risk of SUID by 23 percent.

The biggest predictor of SUID risk was the average number of cigarettes smoked daily throughout the three trimesters of pregnancy, rather than smoking more or less at any particular point.

“Thus, a woman who smoked 20 cigarettes per day in the first trimester and reduced to 10 cigarettes per day in subsequent trimesters had a similarly reduced SUID risk as a woman who averaged 13 cigarettes per day in each trimester,” the study states.

Having such precise data about the effects of smoking before and during pregnancy better arms health-care providers to speak with their patients, Anderson said.

“Doctors need to have frank discussions with patients,” she said. “Every cigarette you can eliminate on a daily basis will reduce your risk of SUID.”

Microsoft data scientists teamed up with the Children’s researchers after John Kahan, who heads up customer data and analytics for Microsoft, lost his son Aaron to SIDS in 2003. After Aaron’s death, days after he came home from the hospital, Kahan started the Aaron Matthew SIDS Research Guild. In 2016, he climbed Mount Kilimanjaro to raise money for SIDS research.

When he returned from Africa, he found out his team at Microsoft had been working with the available data on infant deaths. Their goal was to use algorithms to analyze the data and help come up with a way to save babies like Aaron from SUID.

Juan Lavista, a member of Kahan’s team at that time, is now the senior director of data science at the AI For Good research lab, which is part of an initiative called AI for Humanitarian Action, launched by Microsoft president Brad Smith. The idea behind the initiative is to use artificial intelligence to tackle some of the world’s most difficult problems, and it has allowed Lavista to work on things like the SUID study full time instead of cramming it in around his day job.

Data scientists can use computing power to work with huge data sets to help solve confounding issues like SUID, climate change and immigration, Lavista said.

“There are many problems the world has that, we believe, we can make a difference with AI,” he said.

The collaboration has been exciting for Anderson, the Children’s research fellow. She says this unusual partnership between the medical world and the technology sector has applications in many different fields.

“I think it is really exciting because it is a concept that absolutely can be used to ask questions outside of SIDS,” Anderson said. “Everybody is there because they want to make a difference. It is very much a collaborative effort.”

The scientists at Microsoft and Children’s aren’t stopping with the publication of this study. Lavista said they are delving into other questions surrounding SUID, such as the impact of prenatal care, how the age of an infant relates to sudden death and an examination of what SUID looks like in all 50 states.


Is drought on the horizon? Researchers turn to AI in a bid to improve forecasts

As winter drags on, some people wonder whether to pack shorts for a late-March escape to Florida, while others eye April temperature trends in anticipation of sowing crops. Water managers in the western U.S. check for the possibility of early-spring storms to top off mountain snowpack that is crucial for irrigation, hydropower and salmon in the summer months.

Unfortunately, forecasts for this timeframe — roughly two to six weeks out — are a crapshoot, noted Lester Mackey, a statistical machine learning researcher at Microsoft’s New England research lab in Cambridge, Massachusetts. Mackey is bringing his expertise in artificial intelligence to the table in a bid to increase the odds of accurate and reliable forecasts.

“The subseasonal regime is where forecasts could use the most help,” he said.

Mackey knew little about weather and climate forecasting until Judah Cohen, a climatologist at Atmospheric and Environmental Research, a Verisk business that consults about climate risk in Lexington, Massachusetts, reached out to him for help using machine learning techniques to tease out repeating weather and climate patterns from mountains of historical data as a way to improve subseasonal and seasonal forecast models.

The preliminary machine learning based forecast models that Mackey, Cohen and their colleagues developed outperformed the standard models used by U.S. government agencies to generate subseasonal forecasts of temperature and precipitation two to four weeks out and four to six weeks out in a competition sponsored by the U.S. Bureau of Reclamation.

Mackey’s team recently secured funding from Microsoft’s AI for Earth initiative to improve and refine its technique with an eye toward advancing the technology for the social good.

“Lester is working on this because it is a hard problem in machine learning, not because it is a hard problem in weather forecasting,” noted Lucas Joppa, Microsoft’s chief environmental officer who runs the AI for Earth program, as he explained why his group is helping fund the research. “It just so happens that the techniques he is interested in exploring have huge applicability in weather forecasting, which happens to have huge applicability in broader societal and economic domains.”

Fields being irrigated on the edge of the desert in the Cuyama Valley. Photo by Getty Images.

AI on the brain

Mackey said current weather models perform well up to about seven days in advance, and climate forecast models get more reliable as the time horizon extends from seasons to decades. Subseasonal forecasts are a middle ground, relying on a mix of variables that impact short-term weather such as daily temperature and wind and seasonal factors such as the state of El Niño and the extent of sea ice in the Arctic.

Cohen contacted Mackey out of a belief that machine learning, the arm of AI that encompasses recognizing patterns in statistical data to make predictions, could help improve his method of generating subseasonal forecasts by gleaning insights from troves of historical weather and climate data.

“I am basically doing something like machine learned pattern recognition in my head,” explained Cohen, noting that weather patterns repeat throughout the seasons and from year to year and that therefore pattern recognition can and should inform longer-term forecasts. “I thought maybe I can improve on what I am doing in my head with some of the machine learning techniques that are out there.”

Using patterns in historical weather data to predict the future was standard practice in weather and climate forecast generation until the 1980s. That’s when physical models of how the atmosphere and oceans evolve began to dominate the industry. These models have grown in popularity and sophistication with the exponential rise in computing power.

“Today, all of the major climate centers employ massive supercomputers to simulate the atmosphere and oceans,” said Mackey. “The forecasts have improved substantially over time, but they make relatively little use of historical data. Instead, they ingest today’s weather conditions and then push forward their differential equations.”

A combine harvester moving on a snow-covered field. Photo by Getty Images.

Forecast competition

As Mackey and Cohen were discussing a research collaboration, Cohen received notice of a competition sponsored by the U.S. Bureau of Reclamation to improve subseasonal forecasts of temperature and precipitation in the western U.S. The government agency is interested in improved subseasonal forecasts to better prepare water managers for shifts in hydrologic regimes, including the onset of drought and wet weather extremes.

“I said, ‘Hey, what do you think about trying to enter this competition as a way to motivate us, to make some progress,’” recalled Cohen.

Mackey, who was an assistant professor of statistics at Stanford University in California prior to joining Microsoft’s research organization and remains an adjunct professor at the university, invited two graduate students to participate on the project. “None of us had experience doing work in this area and we thought this would be a great way to get our feet wet,” he said.

Over the course of the 13-month competition, the researchers experimented with two types of machine learning approaches. One combed through a kitchen sink of data containing everything from historical temperature and precipitation records to data on sea ice concentration and the state of El Niño as well as an ensemble of physical forecast models. The other approach focused only on historical data for temperature when forecasting temperature or precipitation when forecasting precipitation.

“We were making forecasts every two weeks and between those forecasts we were acquiring new data, processing it, building some of the infrastructure for testing out new methods, developing methods and evaluating them,” Mackey explained. “And then every two weeks we had to stop what we were doing and just make a forecast and repeat.”

Toward the end of the competition, Mackey’s team discovered that an ensemble of both machine learning approaches performed better than either alone.
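As a bare-bones illustration of that final step (not the team’s actual method), an ensemble can be as simple as a weighted average of the two forecasts, with the weight tuned on held-out forecast periods. The grids and weight below are made up.

```python
# Bare-bones illustration of ensembling two forecasts, not the team's actual
# method: a weighted average of two gridded predictions. The weight is arbitrary
# here; in practice it would be tuned on held-out forecast periods.
import numpy as np

def ensemble_forecast(model_a, model_b, weight_a=0.5):
    """Weighted average of two gridded temperature (or precipitation) forecasts."""
    return weight_a * model_a + (1.0 - weight_a) * model_b

# Two toy 2x3 grids of forecast temperature anomalies (degrees C).
kitchen_sink_model = np.array([[0.4, 0.1, -0.2], [0.3, 0.0, -0.1]])
history_only_model = np.array([[0.2, 0.2, -0.4], [0.1, -0.1, -0.3]])

print(ensemble_forecast(kitchen_sink_model, history_only_model, weight_a=0.6))
```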

Final results of the competition were announced today. Mackey, Cohen and their colleagues captured first place in forecasting average temperature three to four weeks in advance and second place in forecasting total precipitation five and six weeks out.

A flooded river under a walking bridge. Photo by Getty Images.

Forecast for the future

After the competition, the collaborators combined their ensemble of machine learning approaches with the standard models used by U.S. government agencies to generate subseasonal forecasts and found that the combined models improved the accuracy of the operational forecast by between 37 and 53 percent for temperature and 128 and 154 percent for precipitation. These results are reported in a paper the team posted on arXiv.org.

“I think we will continue to see these types of approaches be further refined and increase in the breadth of their use within the field of forecasting,” said Kenneth Nowak, water availability research coordinator with the U.S. Bureau of Reclamation, who organized the forecast rodeo. He added that government agencies will “look for opportunities to leverage” machine learning in future generations of operational forecast models.

Microsoft’s AI for Earth program is providing funding to Mackey and colleagues to hire an intern to expand and refine their machine learning based forecasting technique. The collaborators also hope that other machine learning researchers will be drawn to the challenge of cracking the code to accurate and reliable subseasonal forecasts. To encourage these efforts, they have made available to the public the dataset they created to train their models.

Cohen, who kicked off the collaboration with Mackey out of a curiosity about the potential impact of AI on subseasonal to seasonal climate forecasts, said, “I see the benefit of machine learning, absolutely. This is not the end; more like the beginning. There is a lot more that we can do to increase its applicability.”


John Roach writes about Microsoft research and innovation. Follow him on Twitter.


18 best practices for human-centered AI design


By Mihaela Vorvoreanu, Saleema Amershi, and Penny Collisson

Today we’re excited to share a set of Guidelines for Human-AI Interaction. These 18 guidelines can help you design AI systems and features that are more human-centered. Based on more than two decades of thinking and research, they have been validated through a rigorous study published in CHI 2019.

Why do we need guidelines for human-AI interaction?

While classic interaction guidelines still hold for AI systems, attributes of AI services, including their accuracy, failure modalities, and understandability, raise new challenges and opportunities. Consistency, for example, is a classic design guideline that advocates for predictable behaviors and minimizing unexpected changes. AI components, however, can be inconsistent because they may learn and adapt over time.

We need updated guidance on designing interactions with AI services that provide meaningful experiences, keeping the user in control and respecting users’ values, goals, and attention.

Why these guidelines?

AI-focused design guidance is blooming across UX conferences, the tech press, and within individual design teams. That’s exciting, but it can be hard to know where to start. We wanted to help with that, so…

  • We didn’t just make these up! They come from more than 20 years of work. We read numerous research papers, magazine articles, and blog posts. We synthesized a great deal of knowledge acquired across the design community into a set of guidelines that apply to a wide range of AI products, are specific, and are observable at the UI level.
  • We validated the guidelines through rigorous research. We tested the guidelines through three rounds of validation with UX and HCI experts. Based on their feedback, we iterated the guidelines until experts confirmed that they were clear and specific.

Let’s dive into the guidelines!

The guidelines are grouped into four categories that indicate when during a user’s interactions they apply: upon initial engagement with the system, during interaction, when the AI service guesses wrong, and over time.

Initially

1. Make clear what the system can do.

2. Make clear how well the system can do what it can do.

The guidelines in the first group are about setting expectations: What are the AI’s capabilities? What level of quality or error can a user expect? Over-promising can hurt perceptions of the AI service.

PowerPoint’s QuickStarter illustrates Guideline 1, Make clear what the system can do. QuickStarter is a feature that helps you build an outline. Notice how QuickStarter provides explanatory text and suggested topics that help you understand the feature’s capabilities.

During Interaction

3. Time services based on context.

4. Show contextually relevant information.

5. Match relevant social norms.

6. Mitigate social biases.

This subset of guidelines is about context. Whether it’s the larger social and cultural context or the local context of a user’s setting, current task, and attention, AI systems should take context into consideration.

AI systems make inferences about people and their needs, and those depend on context. When AI systems take proactive action, it’s important for them to behave in socially acceptable ways. To apply Guidelines 5 and 6 effectively, ensure your team has enough diversity to cover each other’s blind spots.

Acronyms in Word highlights Guideline 4, Show contextually relevant information. It displays the meaning of abbreviations employed in your own work environment relative to the current open document.

When Wrong

7. Support efficient invocation.

8. Support efficient dismissal.

9. Support efficient correction.

10. Scope services when in doubt.

11. Make clear why the system did what it did.

Most AI services have some rate of failure. The guidelines in this group recommend how an AI system should behave when it is wrong or uncertain, which will inevitably happen.

The system might not trigger when expected, or might trigger at the wrong time, so it should be easy to invoke (Guideline 7) and dismiss (Guideline 8). When the system is wrong, it should be easy to correct it (Guideline 9), and when it is uncertain, Guideline 10 suggests building in techniques for helping the user complete the task on their own. For example, the AI system can gracefully fade out, or ask the user for clarification.

Auto Alt Text automatically generates alt text for photographs by using intelligent services in the cloud. It illustrates Guideline 9, Support efficient correction, because automatic descriptions can be easily modified by clicking the Alt Text button in the ribbon.

Over Time

12. Remember recent interactions.

13. Learn from user behavior.

14. Update and adapt cautiously.

15. Encourage granular feedback.

16. Convey the consequences of user actions.

17. Provide global controls.

18. Notify users about changes.

The guidelines in this group remind us that AI systems are like getting a new puppy: they are long-term investments and need careful planning so they can learn and improve over time. Learning (Guideline 13) also means that AI systems change over time. Changes need to be managed cautiously so the system doesn’t become unpredictable (Guideline 14). You can help users manage inherent inconsistencies in system behavior by notifying them about changes (Guideline 18).

Ideas in Excel empowers users to understand their data through high-level visual summaries, trends, and patterns. It encourages granular feedback (Guideline 15) on each suggestion by asking, “Is this helpful?”

What’s next?

If you’d like some more ideas, stay tuned for another post on this work, where we’ll share some of the ways we’ve been putting the guidelines to use at Microsoft. We’d love to hear about your experiences with the guidelines. Please share them in the comments.


Authors

Mihaela Vorvoreanu is a program manager working on human-AI interaction at Microsoft Research.

Saleema Amershi is a researcher working on human-AI interaction at Microsoft Research AI.

Penny Marsh Collisson is a user research manager working on AI in Office.

With thanks to our team who developed The Guidelines for Human-AI Interaction: Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz.

Thanks also to Ruth Kikin-Gil for her thoughtful collaboration, and for curating examples for this post.