
From seed to sip: How Anheuser-Busch InBev uses AI to drive growth

Anheuser-Busch InBev (AB InBev) is using artificial intelligence to drive growth and innovation across all dimensions of its global brewing business.

The brewer of Budweiser, Corona, Stella Artois and more than 500 other beer brands has built a worldwide analytics platform on the Microsoft Azure cloud, enabling the company to draw data-driven insights about everything from optimal barley growing conditions to drivers of successful sales promotions.

Tassilo Festetics, AB InBev’s vice president for global solutions, shared insights about the company’s AI strategy at a recent AI in Business event in San Francisco, which Transform edited into an abbreviated Q&A.

How is Anheuser-Busch InBev using AI today?

Tassilo Festetics, AB InBev’s vice president for global solutions.

FESTETICS:  The question is not going to be where we deploy AI, but where is it not going to be deployed, because we see it in so many different fields.

Can you share an example or two?

FESTETICS: Smart Barley, which is one of our platforms, enables us to work with farmers to use artificial intelligence to improve their yields, reduce water usage, reduce fertilizer usage and create a much more sustainable environment. We started there five years ago.

Now we see AI in the customer-facing area with chatbots and voice. Customers are expecting to have the same frictionless interaction with every company that they’re also having in their private life. Conversational bots that allow your customers to interact with your company in that way are a basic machine learning algorithm.

We also use AI in our supply chain and back-office operations. We use Azure to simplify tasks that people are performing every day and to make people’s lives much more focused on real added-value activities rather than just on transactional activities.

How did you get started in your AI transformation?

FESTETICS: When our company was born, the cloud was not there yet. Microsoft was not even there yet. We were born in 1366. So obviously we are not a digital company. We are a company that’s being digitized.

Our company has grown over time, and as a large global organization our data landscape was fragmented. For us the first step was really looking at how we basically get data together, how do we harmonize it, how do we platform it. When we looked at the entire data infrastructure we said, ‘OK, let’s just not touch it. Let’s hope it doesn’t break.’

We basically rebuilt everything totally as if we were starting a new company today. With advances in technology and the cloud you can do that. And that saves a lot of time and certainly allows you to be much more agile. But for sure the biggest barrier is getting that data right at the beginning.

How did you develop AI expertise within AB InBev?

FESTETICS: We were very lucky that our senior management understood very early on that this was something that we should work on. So we were early to invest in new resources to join the company, because obviously we didn’t have them around. But then we also started to develop and improve the capabilities of our own people.

Last year I took my entire team to Berkeley. We spent a week on just machine learning. And it was very fun, because normally if I take my team anywhere, they are very — well, they know a lot of things. So if you put them in a room with professors after one day they will probably be explaining to the professors how life works.

In this course about machine learning you could hear a pin drop after the lesson, because everybody was still processing. And that’s, I think, the important part — that you really continue learning and you continue to build those capabilities inside of your company.

By getting new people in and by developing new skills in your people you start to see different approaches to problem solving. These people will start to find ways to deploy new technologies, new methodologies inside of the company to provide better customer service, better waste management, improved ROI on certain activities. It really starts a different way of thinking.

What advice would you give to companies that are just getting started with AI?

FESTETICS: Really start looking at your data early, because data is the fundamental part. There is no AI without data. Then start looking at the areas where you have the best business cases, where you can drive the most value for your company.

Top photo: At the Conversations in AI event, Microsoft CVP of Azure Julia White leads a panel discussion with (left to right) Tassilo Festetics, vice president, global solutions, Anheuser-Busch InBev; Abhishek Pani, senior director of AI product and data science, Adobe; Jack Brown, SVP of product and software, Arccos Golf; and Fiona Tan, SVP of customer technology, Walmart Labs.

Related:

  • Watch how Anheuser-Busch InBev taps data for even better beer.
  • Read more about how customers including AB InBev are using AI.
  • Learn more about Microsoft’s AI tools for businesses.

Fishy business: Putting AI to work in Australia’s Darwin Harbour

Identifying and counting fish species in murky water filled with deadly predators is a difficult job. But fisheries scientists in the Northern Territory are working on an artificial intelligence project with Microsoft that has incredible potential for marine science around the world.

Your mission, should you choose to accept it, is to go into one of Australia’s largest harbours and count the fish. Think this sounds daunting? You don’t know the half of it.

First, there’s the water. There’s a lot of it in Darwin Harbour – five times more than Sydney Harbour, to be precise. Heavy tides swell more than seven metres then retract, leaving little visibility in their wake.

And if you think you’ve got some occupational hazards at work, try getting your job done in an environment teeming with some of the world’s most intimidating apex predators – saltwater crocodiles, along with tiger, bull and hammerhead sharks. More than 300 salties are caught in the harbour each year.

This is the daunting task of the Department of Primary Industry and Resources for the Northern Territory Government, as it goes about ensuring fisheries resources are sustainably managed and developed for future generations.

Murky water filled with deadly predators like the saltwater crocodile makes diving to count and identify fish species impossible.

“If you’re in the water with a crocodile you aren’t taking a calculated risk. You’re going to be a statistic. That’s it. If you’re in the water and he’s there, he wants you and you’re gone.” – Wayne Baldwin, Research Technical Officer, NT Fisheries

If shooting fish in a barrel is a metaphor for something all-too-easy, the correct metaphor for something exceptionally challenging might be counting fish in Darwin Harbour. Yet the NT Fisheries team, led by Dr Shane Penny, Fisheries Research Scientist, does it every day. As the old saying goes, you can’t manage what you can’t measure, so their work begins with knowing how many fish there are.

But they were bogged down by the time it took to wade through hours of underwater footage. The team needed to assess the abundance of critical fish species faster and more accurately, while maintaining a safe distance from deadly predators.

A meeting of the minds

It was from these murky depths that an innovative project showed the potential for artificial intelligence (AI) to support the important work being done by this team of marine biologists. Amid rising debate about the potential impact of AI on society, a collaboration between these scientists and Microsoft engineers became an opportunity to test out its powers as a force for good. Could technology hold the key to safely, accurately and rapidly counting fish – giving the NT Fisheries team more time to devote to analysing this data and improving the sustainable management of NT fish stocks?

The NT Fisheries team had high hopes. They had been using a baited remote underwater video (BRUV) to help with high-risk data gathering. The camera allows the team to see what’s in the water without going in. But even with BRUV on their side, the task was formidable.

Using a GoPro, researchers at NT Fisheries begin the process of assessing critical fish species.

Shane Penny, Fisheries Research Scientist, and his team using baited underwater cameras.

“We’ve had quite a few problems with sharks coming in and taking the baits away. Tawny sharks have learned how to open our baits and suck it all out before we have a chance to collect any video.”
– Wayne Baldwin, Research Technical Officer, NT Fisheries

Then there was the sheer quantity of work involved. Once the video is collected, terabytes of footage must be viewed, and its content scoured and quantified. To put this in perspective, a single terabyte would store 500 hours of your favourite movies. The team was identifying vast quantities of different fish species and tracking their behaviour. This diversity and the murkiness of the water meant classification was often far from simple.

Steve van Bodegraven, a Microsoft machine learning engineer and Darwin local, worked with the NT Fisheries team over several months to see whether computer vision would be up to the ambitious task of identifying fish in underwater images.

In a similar way to how tags are suggested for friends and relatives in the photos you upload to social media – through repeated exposure and the discovery of patterns – the project’s success depended on feeding the system with training images. Along the way they had to confront an array of unusual problems. For example, how would Microsoft’s AI solution respond to fish like gold-spotted cod that can change colour to blend into their environment?
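To picture how that training works, here is a minimal transfer-learning sketch of the kind of image classifier such a project could start from, assuming a folder of labelled video frames sorted by species. The folder layout, model choice and hyperparameters are illustrative assumptions, not the actual pipeline Microsoft and NT Fisheries built:

```python
# Fine-tune a pretrained image classifier on labelled fish frames.
# Assumed layout: frames/train/<species_name>/*.jpg, one folder per species.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_data = datasets.ImageFolder("frames/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from ImageNet weights and retrain only the final layer,
# so a few thousand labelled frames go a long way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```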

“We went in and talked to them about how they work and the challenges they face,” van Bodegraven says. “From that we tried to figure out how we could help. Everything we do is explorative, so we don’t necessarily have solutions out of the box.”

Three months and thousands of images later, the results are encouraging. To date the system is showing great potential, having learnt to identify 15 different species, from black jewfish to golden snapper, which are under careful management to rebuild breeding stocks.


The AI solution automates the laborious process of counting local fish stocks by progressively learning to identify different varieties of fish.

“We threw a few test images of fish it’s never seen before and it’s managed to pull those out and differentiate them from the fish it does know about. Once we had that first positive identification of a fish, we really felt we were onto something. From there it was just a matter of finding the right tools to improve and optimise.”
– Dr Shane Penny, Fisheries Research Scientist

With each new fish analysed, the power of the machine learning technology increases. Samantha Nowland, the team’s Darwin-born research assistant, sees the potential for such systems to change the game in marine management.  NT has some of the most pristine waters in the world with healthy populations of endangered species such as sawfish and sharks. The development of this technology and its availability may help other areas of the world to improve their understanding of aquatic resources and ensure they are managed sustainably.

Beyond the harbour

While there’s already talk of using the system to create a global database of fish species, the NT Fisheries team is focused on analysing trends, coming up with management plans and expanding its reach.

“It’s going to help us monitor any marine species in Darwin Harbour and around the region,” Penny says. “We have a lot of endangered species and many more where we don’t have enough data. We need research projects that can identify species accurately.”

Microsoft’s van Bodegraven hopes it will open people’s eyes to the transformative potential of AI in fisheries and marine management and beyond. The project has already piqued the interest of fisheries departments across Australia, while the possibility of using the technology to monitor other animal species, like the iconic Kookaburra, is being actively explored.

Microsoft is also exploring how it could support similar projects elsewhere. By making the technology available on the open-source platform GitHub, the technology giant is encouraging others to build AI solutions that address their unique scenarios.

“Projects like this set a new precedent. Hopefully it will make people curious and give them the confidence to explore the application of AI in their industries,” van Bodegraven says. “It’s going to change industries and societies. The potential is only limited by imagination.”

Steve van Bodegraven, Machine Learning Engineer at Microsoft and Dr Shane Penny, Fisheries Research Scientist at NT Fisheries review the identified fish species using the AI solution.

Feeding the world with AI-driven agriculture innovation

In the 1950s and 1960s, plant biologist Norman Borlaug famously led the “Green Revolution,” developing high-yield grains that helped drive up global food production when paired with innovations in chemical fertilizers, irrigation, and mechanized cultivation. By so doing, Borlaug and his peers helped save a billion people from starvation. However, this new form of farming was not sustainable and created multiple environmental issues.

Today, farmers are using technology to transform production again, driven by the need to feed more with less and to address the impacts of industrial farming on the environment. Currently, nearly half of the food produced, or 2 billion tons a year, ends up as waste, while an estimated 124 million people in 51 countries face food insecurity or worse. In addition, new sources of arable land are limited, fresh water levels are receding, and climate change puts pressure on resources and will lower agricultural production over time. Governments need to solve these issues swiftly, as the world’s population is slated to grow from 7.6 billion to 9.8 billion by 2050. Agencies and companies will need to team with growers to drive a 70 percent increase in food production.

The good news is that we’re now in the midst of a second Green Revolution that’s part of the Fourth Industrial Revolution. Here’s how technology innovation, driven by big data, the Internet of Things (IoT), artificial intelligence (AI), and machine learning, will reap a more bountiful harvest.

A vision for AI in agriculture

Farmers are deploying robots, ground-based wireless sensors, and drones to assess growing conditions. They then capitalize on cloud services and edge computing to process the data. By 2050, the typical farm is expected to generate an average of 4.1 million data points every day.

AI and machine learning interpret findings for farmers, helping them continually tweak crop inputs to boost yields. Farmers can use AI to determine the optimal date to sow crops, precisely allocate resources such as water and fertilizer, identify crop diseases for swifter treatment, and detect and destroy weeds. Machine learning makes these activities smarter over time. It can also help farmers forecast the year ahead by using historic production data, long-term weather forecasts, genetically modified seed information, and commodity pricing predictions, among other inputs, to recommend how much seed to sow.
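The forecasting described here can be pictured with a toy model: learn yield from historical seasons, then score candidate sowing weeks against this year’s forecast. The features, data and model choice below are invented for illustration; this is not a real agronomic model:

```python
# Toy sowing recommendation: fit yield on past seasons, then pick the
# candidate sowing week with the highest predicted yield this season.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented history: [seasonal_rainfall_mm, soil_moisture_pct, sowing_week]
X = np.array([
    [420, 31, 23], [380, 27, 24], [510, 35, 22],
    [300, 22, 26], [460, 33, 23], [350, 25, 25],
])
y = np.array([3.1, 2.6, 3.4, 2.1, 3.2, 2.4])  # yield, tonnes per hectare

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Score each candidate week under this season's forecast conditions.
forecast_rain, soil_moisture = 400, 29
candidates = [[forecast_rain, soil_moisture, week] for week in range(21, 28)]
best = max(candidates, key=lambda c: model.predict([c])[0])
print("Recommended sowing week:", best[2])
```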

Such precision farming technology augments and extends farmers’ deep knowledge about their land, making production more sustainable. Advanced technology can increase farm productivity by 45 percent while reducing water intake by 35 percent. However, the key is ensuring equitable access: Often the communities that need AI the most lack the physical and technology infrastructure required to support it.

Connecting communities with broadband

Access to high-speed connectivity and reliable power are still challenges in many parts of the world. That’s one reason Microsoft and its partners are bringing affordable broadband to rural communities in countries such as Colombia, India, Kenya, South Africa, and the United States through the Airband Initiative.

When communities are connected, farmers can benefit from AI and machine learning, even if they lack internet access on their individual farms. Microsoft employee Prashant Gupta and his team used advanced analytics and machine learning to create a Personalized Village Advisory Dashboard for 4,000 farmers in 106 villages and a Sowing App for 175 farmers in a district in the southeastern coastal state of Andhra Pradesh in India. Farmers with simple SMS-enabled phones can access Sowing App recommendations, which apply AI to data such as weather and soil conditions to optimize planting times. Farmers who followed the AI-driven advice increased yields by 30 percent over those who adhered to traditional planting schedules.

Using IoT and AI on individual farms

Farmers with connectivity can use IoT to get customized recommendations. The Microsoft FarmBeats program, driven by principal researcher Ranveer Chandra, has developed an end-to-end IoT platform that uses low-cost sensors, drones, and vision/machine learning algorithms to increase the productivity and profitability of farms. FarmBeats is part of Microsoft AI for Earth, a program that provides cloud and AI tools to teams seeking to develop sustainable solutions to global environmental issues.

In the United States, FarmBeats solves the problem of internet connectivity by accessing unused TV white spaces to set up high-bandwidth links between a farmer’s home internet connection and an IoT base station on the farm. Sensors, cameras, and drones connect to this base station, which is both solar- and battery-powered. To avoid unexpected shutdowns due to battery drain, the base station uses weather forecasts to plan its power usage. Similarly, drones leverage an IoT-driven algorithm based on wind patterns to help accelerate and decelerate mid-flight, reducing battery draw.
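The base station’s weather-aware power planning can be sketched as a simple scheduling loop. The thresholds, battery figures and forecast format below are assumptions for illustration, not FarmBeats internals:

```python
# Decide, hour by hour, whether the station can run its full sensor load,
# given forecast solar input and a battery safety reserve.
def plan_duty_cycle(forecast_solar_wh, battery_wh, capacity_wh=60.0,
                    load_wh=5.0, reserve_wh=10.0):
    schedule = []
    for solar in forecast_solar_wh:
        battery_wh = min(capacity_wh, battery_wh + solar)  # charge first
        if battery_wh - load_wh >= reserve_wh:
            schedule.append("on")
            battery_wh -= load_wh
        else:
            schedule.append("standby")  # defer work until the sun returns
    return schedule

# Example: an overcast morning followed by afternoon sun.
print(plan_duty_cycle([0, 0, 2, 6, 9, 9, 7, 3], battery_wh=20.0))
```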

IoT data processing—for bandwidth-hogging information like drone videos, photos, and sensor feedback—is done by a PC at the farmer’s home. The PC performs local computations and consolidates findings into lower-memory summaries, which can be transmitted more easily over limited bandwidth, while also serving as a backup during network outages.
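Here is a sketch of that “summarise locally, send the summary” step, assuming drone frames arrive as RGB arrays; the grid size and vegetation index are illustrative choices rather than the FarmBeats implementation:

```python
# Reduce a full-resolution frame to a small grid of mean "greenness"
# values: roughly 64 floats instead of megabytes of pixels.
import numpy as np

def summarize_frame(rgb, grid=(8, 8)):
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    exg = 2 * g - r - b  # excess-green vegetation index per pixel
    h, w = exg.shape
    gh, gw = h // grid[0], w // grid[1]
    blocks = exg[: gh * grid[0], : gw * grid[1]].reshape(grid[0], gh, grid[1], gw)
    return blocks.mean(axis=(1, 3))  # one mean value per grid cell

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
summary = summarize_frame(frame)  # small enough to queue for a slow uplink
```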

AI for everyone means more food for the world

Over time, AI will help farmers evolve into agricultural technologists, using data to optimize yields down to individual rows of plants. Farmers without connectivity can get AI benefits right now, with tools as simple as an SMS-enabled phone and the Sowing App. Meanwhile, farmers with Wi-Fi access can use FarmBeats to get a continually AI-customized plan for their lands. With such IoT- and AI-driven solutions, farmers can meet the world’s needs for increased food sustainably—growing production and revenues without depleting precious natural resources.

Be the first to know about new advancements in the Microsoft AI farming initiative. Follow us at FarmBeats.

To stay up to date on the latest news about Microsoft’s work in the cloud, bookmark this blog and follow us on Twitter, Facebook, and LinkedIn.


AI transforms photo management for Japanese pro baseball

Sports stars are among the most photographed people on the planet today. Their on-field performance, style, gestures, and facial expressions are almost continuously captured digitally for fans, the media, commercial use, and, ultimately, posterity.

It’s not unusual for thousands of pictures to be shot from all angles at any professional encounter nowadays. So, a typical season is likely to produce virtual mountains of images for major clubs and competitions in most sports.

Now, professional baseball in Japan is turning to artificial intelligence and the cloud to handle the magnitude of what has been a laborious and time-consuming task – photo management.

Sports photos can have immediate, lasting, and lucrative value – but only if they are kept in well organized and cataloged collections that can be accessed efficiently. IMAGE WORKS – a service of iconic Japanese photo giant, Fujifilm – manages the Nippon Professional Baseball’s (NPB) cloud-based Content Images Center (CIC).

Here curators sort images, identify players in each image and tag those images with that information. It sounds simple, but the volume of imagery now being produced is huge. The usual way of managing this is simply not keeping up.

To understand why, let’s look at the special place baseball holds in modern Japan, where it has been a wildly popular game since the 1930s. While its rules differ slightly from those of America’s favorite pastime, the NPB is to Japan what Major League Baseball (MLB) is to the United States. The NPB consists of two top professional leagues: the Central League and the Pacific League. Each has six teams, and each holds 146 games a season, playing on most days of the week from March to October. Each league then holds its own playoffs, which are followed by the seven-game Nippon Series Championship between the two league champions – in a spectacle similar to that of the World Series in the United States.

The automatic player name-tagging function can often identify players even in images that do not show their faces.

There is a steady deluge of images from across the country for much of the year with about 3,000 images shot at each game. After the crowds have left the stadiums, curators from each team typically select about 300 photographs. They then spend around four hours manually identifying and tagging player information to each picture.

That sort of timing can be a problem in our fast-paced world. Demand for images is usually at its highest in realtime or near realtime – that is, during or immediately after each game. Fans and media can quickly lose interest in content from a past game once a new one begins. So, not only is the job of player image identification massive, it needs to be done fast.

Now AI has stepped up to the plate. Developers from Fujifilm and Microsoft Japan have devised a solution: an automatic player name-tagging function that identifies and tags images much faster than people can, and in greater volumes.

Since June 2018, it has been in a trial that has focused on just five baseball teams – including Hiroshima Toyo Carp, which has won the Central League championship eight times, and the Nippon Series three times. The trial was such a success, the function will be used for all NPB teams in the 2019 season.

Its photo analysis capabilities are based on pre-trained AI from Microsoft Cognitive Services and a deep learning framework from the Microsoft Cognitive Toolkit. Specifically, facial recognition using the Microsoft Cognitive Services Face API is combined with a unique determination model built on the Microsoft Cognitive Toolkit.

This enables the classification of images into four types—batting, pitching, fielding, and base running. Often, it can also determine a player’s name when his face is not visible in an angled or side shot. Azure Durable Functions and the automatic player name-tagging function, together with a final manual check by people, have reduced overall processing time from the traditional four hours to just 30 minutes.
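In outline, the two-model combination might look like the hedged sketch below: the Face API proposes player identities, and a separately trained classifier labels the play type. The endpoint, key, person-group name and `play_type_model` are placeholders, not Fujifilm’s production code:

```python
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
KEY = "<face-api-key>"                                          # placeholder

def identify_players(image_bytes, person_group_id):
    """Detect faces in a photo, then match them against a trained
    person group of players using the Face API identify call."""
    headers = {"Ocp-Apim-Subscription-Key": KEY,
               "Content-Type": "application/octet-stream"}
    faces = requests.post(f"{ENDPOINT}/face/v1.0/detect",
                          headers=headers, data=image_bytes).json()
    face_ids = [f["faceId"] for f in faces]
    if not face_ids:
        return []  # no visible face: rely on the custom model alone
    result = requests.post(
        f"{ENDPOINT}/face/v1.0/identify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"personGroupId": person_group_id, "faceIds": face_ids},
    ).json()
    return [r["candidates"][0]["personId"] for r in result if r["candidates"]]

def tag_photo(image_bytes, play_type_model, person_group_id="npb-players"):
    players = identify_players(image_bytes, person_group_id)
    # play_type_model: a hypothetical custom classifier (e.g. trained with
    # the Microsoft Cognitive Toolkit) returning one of the four play types.
    play_type = play_type_model(image_bytes)
    return {"players": players, "play_type": play_type}
```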

A sample of the IMAGE WORKS baseball photo collection

Through its developmental stages, Microsoft Japan provided a ResNet neural network model from Microsoft Research, its research and development arm. It also held several hackathons with Fujifilm Software, which is the developer of IMAGE WORKS. Repeated verification exercises saw player recognition accuracy rates jump to over 90%.

“With the power of Azure and deep learning, we have been able to create an AI solution that makes our photo service so much more efficient and faster. And, that is good for our customers,” said Riki Sato, Team Leader of the Advanced Solution Group at IMAGE WORKS. His colleague Daichi Hayata hailed the collaboration between the IMAGE WORKS team and Microsoft Japan. “This was the first time we have dealt with deep learning, and we could do it with basic knowledge,” he said.

Fujifilm Imaging Systems now has plans to widen its use to amateur baseball leagues and then other sports. The technology might also be applied to content needs outside the sports world. And, the company is looking at the use of video analysis through Azure Video Indexer.

Microsoft Japan is committed to helping companies and organizations embrace digital transformation with AI and is considering how to use this combination of pre-trained AI and a customizable deep learning framework in other fields, such as medicine.


More ways to improve patient care with AI and blockchain

Whether you’re interested in using Artificial Intelligence (AI) and Machine Learning (ML) to drive better health outcomes, reduce your operational costs, or improve fraud detection, one way you can better unlock these capabilities is through leveraging blockchain.

In my last blog, “Improving Patient Care through AI and Blockchain – Part 1,” I discussed several opportunities for blockchain to help advance AI in healthcare, from sourcing more training data from across a consortium, to tracking provenance of data, improving the quality of AI with auditing, and protecting the integrity of AI using blockchain. In this second blog, take a look at four more reasons to consider blockchain for advancing AI in healthcare.

  1. Shared models
    In cases where constraints exist that preclude the sharing of raw training data across a consortium of healthcare organizations, for legal or other reasons, it may be possible to incrementally train shared models, enabled by the blockchain. In this approach the AI / ML models themselves are shared across the network of healthcare organizations in the consortium, rather than the raw training data, and these shared models can be incrementally trained by each organization using its own training data, within its own firewall. Blockchain can then be used to share the models as well as metadata about training data, results, validations, audit trails, and so forth (a minimal sketch of this pattern appears after this list).
  2. Incentivizing collaboration using cryptocurrencies and tokens
    Cryptocurrencies and tokens on blockchain can be used to incent and catalyze collaboration to advance AI / ML in healthcare. From sharing of training data, to collaboration on shared models, results, validations, and so forth, healthcare organizations can be rewarded with cryptocurrencies or tokens proportional to their participation and contribution. Depending on how the blockchain is set up, these cryptocurrencies or tokens could be redeemed by participating healthcare organizations for meaningful rewards, or monetized. This can be useful in any AI / ML blockchain initiative as an accelerant, and it can be critical for overcoming impediments and reservations to collaboration that can arise where the size / value of contributions from organizations across the consortium is asymmetrical.
  3. Validating inference results and building trust faster
    Before AI / ML models can be used for patient care they must be validated to ensure safety and efficacy. A single organization validating a model alone will take more time to achieve an acceptable level of trust than would be the case for a consortium of healthcare organizations concurrently collaborating to validate a shared model. Blockchain can be used to coordinate and collaborate around such validation to increase synergy, minimize redundant efforts, accelerate validation, and establish trust in a new model faster.
  4. Automation through smart contracts and DAOs
    Executable code for processing transactions associated with AI / ML, whether procurement of training data or otherwise, can be implemented on blockchains in the form of smart contracts. DAOs (Decentralized Autonomous Organizations) such as non-profits can also be built using smart contracts to automate whole enterprises that can facilitate advancing AI / ML in healthcare at scale.
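As a concrete illustration of item 1, here is a minimal Python sketch of recording model fingerprints and training metadata on a ledger. The in-memory chain stands in for a real blockchain, and all field names are invented for the example:

```python
import hashlib
import json
import time

class ModelLedger:
    """Append-only chain of model-update records that every consortium
    member can verify; only fingerprints and metadata are shared,
    never raw patient data."""

    def __init__(self):
        self.blocks = []

    def record(self, org, model_bytes, metadata):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        entry = {
            "org": org,
            "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
            "metadata": metadata,  # e.g. sample counts, validation results
            "timestamp": time.time(),
            "prev": prev,          # links blocks into a tamper-evident chain
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.blocks.append(entry)
        return entry["hash"]

# Each organization trains the shared model inside its own firewall,
# then publishes an auditable record of the update.
ledger = ModelLedger()
ledger.record("hospital-a", b"<model weights v1>", {"samples": 12000, "auc": 0.91})
ledger.record("hospital-b", b"<model weights v2>", {"samples": 8000, "auc": 0.93})
```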

Keep the conversation going

If you’re interested in using AI, ML, or blockchain for healthcare, you know that new opportunities are constantly surfacing, and with them comes a whole host of new questions. Follow me on LinkedIn and Twitter to get updates on these topics as well as cloud computing, security, privacy, and compliance. If you would like to explore a partnership as you work to implement AI and/or blockchain for your healthcare organization, we’d love to hear from you.

For more resources and tips on blockchain for healthcare, take a look at part 1 of this series here.


Microsoft agrees to acquire conversational AI and bot development company, XOXCO

Conversational AI is quickly becoming a way in which businesses engage with employees and customers: from creating virtual assistants and redesigning customer interactions to using conversational assistants to help employees communicate and work better together. According to Gartner, “By 2020, conversational artificial intelligence will be a supported user experience for more than 50 percent of large, consumer-centric enterprises.”* At Microsoft, we envision a world where natural language becomes the new user interface, enabling people to do more with what they say, type and input, understanding preferences and tasks and modeling experiences based on the way people think and remember.

Today, we are announcing we have signed an agreement to acquire XOXCO, a software product design and development studio known for its conversational AI and bot development capabilities. The company has been paving the way in conversational AI since 2013 and was responsible for the creation of Howdy, the first commercially available bot for Slack that helps schedule meetings, and Botkit, which provides the development tools used by hundreds of thousands of developers on GitHub. Over the years, we have partnered with XOXCO and have been inspired by this work.

We have shared goals to foster a community of startups and innovators, share best practices and continue to amplify our focus on conversational AI, as well as to develop tools for empowering people to create experiences that do more with speech and language.

The Microsoft Bot Framework, available as a service in Azure and on GitHub, supports over 360,000 developers today. With this acquisition, we are continuing to realize our approach of democratizing AI development, conversation and dialog, and integrating conversational experiences where people communicate.

Over the last six months, Microsoft has made several strategic acquisitions to accelerate the pace of AI development. The acquisition of Semantic Machines in May brought a revolutionary new approach to conversational AI. In July, we acquired Bonsai to help reduce the barriers to AI development by combining machine teaching, reinforcement learning and simulation. In September, we acquired Lobe, a company that has created a simple visual interface empowering anyone to develop and apply deep learning and AI models quickly, without writing code. The acquisition of GitHub in October demonstrates our belief in the power of communities to help fuel the next wave of bot development.

Our goal is to make AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology. To do this, Microsoft is infusing intelligence across all its products and services to extend individuals’ and organizations’ capabilities and make them more productive, providing a powerful platform of AI services and tools that makes innovation by developers and partners faster and more accessible, and helping transform business by enabling breakthroughs to current approaches and entirely new scenarios that leverage the power of intelligent technology.

We’re excited to welcome the XOXCO team and look forward to working with the community to accelerate innovation and help customers capitalize on the many benefits AI can offer.

*Gartner, Is Conversational AI the Only UX You Will Ever Need?, 25 April 2018



Empathy Vision Model: AI that can see and talk with us about our world

Microsoft unveils a smartphone app in Japan, featuring Rinna the chatbot with a combination of powerful new AI technologies

Artificial intelligence (AI) that can see and comment on the world around us will soon be interacting much more naturally with people in their daily lives thanks to a powerful combination of new technologies being trialed in Japan through a chatty smartphone app.

Microsoft Japan President Takuya Hirano

The app features Microsoft Japan’s hugely popular Rinna social chatbot. It was unveiled at the Microsoft Tech Summit 2018 in Tokyo on Monday and is still in its developmental stage.

The AI behind the app has enhanced sight, hearing, and speech capabilities to recognize and talk about objects it sees in ways that are similar to how a person would. As such, it represents a significant step towards a future of natural interactions between AI and people. At the heart of the app is the “Empathy Vision Model,” which combines conventional AI image recognition technology with emotional responses.

With this technology, Rinna views her surroundings through a smartphone’s camera. She not only recognizes objects and people, she can also describe and comment verbally about them in realtime. Using natural language processing, speech recognition, and speech synthesis technologies – developed by scientists at Microsoft Research – she can engage in natural conversations with a phone’s human user.

“A user can hold their smartphone in their hand or place it in a breast pocket while walking around. With the camera switched on, Rinna can see the same scenery, people, and objects as the user and talk about all of that with the user,” Microsoft Japan President Takuya Hirano said.

Unlike other AI vision models, Rinna can describe her impressions of what she is viewing with feeling, rather than just listing off recognition results such as the names, shapes, and colors of the things she sees. Rinna on a smartphone can view the world from the same perspective as a user and can converse with that user about it.

Let’s take the following image to help illustrate the difference:

Dog, father, son and a car behind

Conventional AI vision technology might typically react this way: “I see people. I see a child. I see a dog. I see a car.”

In contrast, Rinna with the Empathy Vision Model might say: “Wow, nice family! Enjoying the weekend, maybe? Oh, there’s a car coming! Watch out!”

As well as the Empathy Vision Model, which generates empathetic comments in real time about what the AI sees, Rinna’s smartphone app also features other cutting-edge features, including “full duplex.” This enables AI to participate in telephone-like natural conversations with a person by anticipating what that person might say next.

This capability helps Rinna make decisions about how and when to respond to someone who is chatting with her, a skill set that is very natural to people, but not common in chatbots. It differs from “half duplex,” which is more like the walkie-talkie experience in which only one party to a conversation can talk at any one time. Full duplex reduces the unnatural lag time that can sometimes make interactions between a person and a chatbot feel awkward or forced.
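The difference from half duplex can be sketched as two concurrent loops: one streaming partial transcripts, the other free to respond before the user finishes speaking. This is a toy illustration with stand-in recognition and reply logic, not Rinna’s implementation:

```python
import asyncio

async def listen(partials):
    # Stand-in for streaming speech recognition emitting partial transcripts.
    for text in ["I was thinking", "I was thinking about dinner"]:
        await asyncio.sleep(0.5)
        await partials.put(text)
    await partials.put(None)  # end of utterance

async def respond(partials):
    while (text := await partials.get()) is not None:
        # Full duplex: the bot may jump in on a partial transcript
        # instead of waiting for silence, as a person would.
        if "dinner" in text:
            print("Bot: Ooh, dinner! What are you craving?")

async def main():
    queue = asyncio.Queue()
    # Listening and speaking run at the same time, unlike half duplex.
    await asyncio.gather(listen(queue), respond(queue))

asyncio.run(main())
```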

Rinna’s smartphone app also incorporates Empathy Chat, which aids independent thinking by the AI. This helps keep a conversation with the user going as long as possible. In other words, the AI selects and uses responses most likely to encourage a person to keep engaged and talking.

It is still in its development stage and the timing of its general release has not been set. But the voice chat function is available as “Voice Chat with Rinna” on Rinna’s official LINE account in Japan.

READ the Latest on artificial intelligence in Asia



Xiaoice wins over fans with AI, emotions

She has a staggering 660 million online users. And, while they know she’s not real, many prize her as a dear friend, even a trusted confidante. Sometimes the line between fact and fantasy blurs. She gets love letters and gifts. And not too long ago, a group of fans asked her out to dinner and even ordered an extra meal – just in case she showed up.

She is Xiaoice – Microsoft’s chatbot phenomenon that has enthralled digital audiences across the world’s most populous nation for the past four years.

Her popularity is such that she ranks among China’s most admired celebrities. And, her talents appear to have no bounds: She is a poet, a painter, a TV presenter, a news pundit, and a lot more.

Xiaoice, a chatbot phenomenon in China and much more. Photo: Microsoft.

Sometimes sweet, sometimes sassy and always streetwise, this virtual teenager has her own opinions and steadfastly acts like no other bot. She doesn’t try to answer every question posed by a user. And, she’s loath to follow their commands. Instead, her conversations with her often adoring users are peppered with wry remarks, jokes, empathic advice on life and love, and a few simple words of encouragement.

Herein lies the secret of her success: She is learning, with increasing success, to relate and interact with humans through nuance, social skills, and, importantly, emotions.

But that’s just part of the story. “Xiaoice the chatbot” is just a small part of a massive and multi-dimensional artificial intelligence (AI) framework, which continuously uses deep learning techniques to soak up the types of data that build up her emotional intelligence (EQ). She is using her interactions with humans to acquire human social skills, behavior, and knowhow. In other words, she is learning to be more like “us” every day.

Di Li, Microsoft’s general manager for Xiaoice in Microsoft’s Software and Technology Center of Asia

“This is what we call an Empathic Computing Framework,” explains Di Li, Microsoft’s general manager for Xiaoice in Microsoft’s Software and Technology Center of Asia. “To create an AI framework, you need to choose EQ or IQ, or EQ plus IQ”.

“And, if you want to choose EQ plus IQ, you must choose which one to do first. When we started with Xiaoice, we chose to do the EQ first and the IQ later.”

Every interaction a chatbot has with a human produces data. AI systems use this data to build that bot’s capabilities. The more data a machine has, the more it learns and the more it can do.

When they launched the Xiaoice project, Di Li and his team in China did what other chatbot designers were not doing: they deliberately discarded data that was based on user requests for facts and figures or commands to do simple tasks. Instead, they homed in on data that would help build a “personality” that would attract and engage users.

“Xiaoice wasn’t initially built to tell you how high the Himalayas are or to turn your house lights on. In the beginning, some users didn’t like that. But we soon found that many others stayed around and started treating her like a social entity.”

“With her attempts to interact, they made emotional connections. This kind of data is very valuable for us. They treat Xiaoice as if she were human, like a friend, which was a goal.”

From there she has never looked back. Almost every day, her legions of fans and friends across China send her cards and gifts – so much so that the team have set aside a whole office at their Beijing lab to display many of these tokens of affection and even declarations of “love”.

Originally, her character was to be that of a 16-year-old. But her creators raised that to 18 once her capabilities increased and she started taking on new “jobs”. Since then, her fans have voted that she stay 18 forever. “She won’t grow older. Eighteen is the age many of us want to be,” explains Di Li.

The depth of feeling generated by Xiaoice across her fan base is surprising. Social media shows that people seek her advice on all sorts of personal issues. “They tell her about their family, their job, their health, their boyfriends or girlfriends,” says Di Li. “It can get very personal.”

Some users can spend hours talking with Xiaoice. Others just follow their imaginations. Recently, a group of five students went to a restaurant and ordered for six in the hope that Xiaoice would come too.

But there is a serious side to this. Microsoft’s research and work on the Xiaoice project has generated serious and important progress on a much wider front that points to where we are heading with computing. Xiaoice as “a friend chatbot” is just a small sliver of what the AI framework is achieving. Its base of knowledge and skill is also increasing across multiple sectors and tasks.


Microsoft unveils AI capability that automates AI development

The tedious but necessary process of selecting, testing and tweaking machine learning models that power many of today’s artificial intelligence systems was proving too time-consuming for Nicolo Fusi.

The final straw for the Microsoft researcher and machine learning expert came while fussing over model selection as he and his colleagues built CRISPR.ML, a computational biology tool that uses AI to help scientists determine the best way to perform gene editing experiments.

“It was just not a good use of time,” said Fusi.

So, he set out to develop another AI capability that automatically does the data transformation, model selection and hyperparameter tuning part of AI development – and inadvertently created a new product.

Microsoft announced Monday at the Microsoft Ignite conference in Orlando, Florida, that the automated machine learning capability is being incorporated in the Azure Machine Learning service. The feature is available in preview.

Learning service reimagined

Automated machine learning is at the forefront of Microsoft’s push to make Azure Machine Learning an end-to-end solution for anyone who wants to build and train models that make predictions from data, and then deploy them anywhere – in the cloud, on premises or at the edge.

Microsoft also announced Monday that the Azure Machine Learning service now includes a software development kit, or SDK, for the Python programming language, which is popular among data scientists. The SDK integrates the Azure Machine Learning service with Python development environments including Visual Studio Code, PyCharm, Azure Databricks notebooks and Jupyter notebooks.

“We heard users wanted to use any tool they wanted, they wanted to use any framework, and so we re-thought about how we should deliver Azure Machine Learning to those users,” said Eric Boyd, corporate vice president, AI Platform, who led the reimagining of the Azure Machine Learning service. “We have come back with a Python SDK that lights up a number of different features.”

These features include distributed deep learning, which enables developers to build and train models faster with massive clusters of graphical processing units, or GPUs, and access to powerful field programmable gate arrays, or FPGAs, for high-speed image classification and recognition scenarios on Azure.

From left, Microsoft’s Paul Oka, Sharon Gillett, Nicolo Fusi, Evan Green, Gilbert Hendry, Francesco Paolo Casale and Rishit Sheth discuss the algorithm and different ways to choose the next machine learning pipeline. Photo by Dana J. Quigley for Microsoft.

Recommender system

The automated selection of models and tuning of the so-called hyperparameters that govern their performance will make AI development available to a broader set of Microsoft’s customers, noted Boyd.

“There are a number of teams and companies that we work with that are now just going to make predictions based on the models that automated machine learning comes up with for them,” he said.

For machine learning experts, Boyd added that automated machine learning offers advantages as well.

“For trained, specialized data scientists, this is a shortcut. It automates a lot of the tedium in data science,” he said.

Automated machine learning homes in on the best so-called machine learning pipelines for a given dataset in a similar way to how on-demand video streaming services recommend movies. New users of a streaming service watch and rate a few movies in exchange for recommendations on what to watch next. The recommendations get better the more the system learns what movies users rate highest.

Likewise, automated machine learning runs a few models with hyperparameters tuned various ways on a user’s new dataset to learn how accurate the pipeline’s predictions are. That information informs the next set of recommendations, and so on and so forth for hundreds of iterations.
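That loop can be pictured with a deliberately simplified sketch: sample a pipeline, score it, keep the best. Real automated machine learning, including Azure’s, steers the search with a recommender-style meta-model rather than sampling blindly; the models, dataset and ranges below are illustrative:

```python
import random
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def sample_pipeline():
    # Randomly pick a model family and hyperparameters for this iteration.
    if random.random() < 0.5:
        return RandomForestClassifier(
            n_estimators=random.choice([50, 100, 200]),
            max_depth=random.choice([None, 5, 10]))
    return LogisticRegression(C=random.choice([0.01, 0.1, 1.0, 10.0]),
                              max_iter=2000)

best_score, best_model = 0.0, None
for _ in range(20):  # production systems iterate hundreds of times
    candidate = sample_pipeline()
    score = cross_val_score(candidate, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_model = score, candidate

print(f"Best pipeline: {best_model!r} (accuracy {best_score:.3f})")
```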

“At the end, you have a very good pipeline. You don’t have to do anything on top of it. And, the system never needs to see the data, which is attractive to a lot of people these days,” said Fusi, explaining that a user’s dataset remains on their local machine or in a virtual machine in Azure backed by Microsoft’s privacy policy.

Nicolo Fusi, a Microsoft researcher and machine learning expert, developed the automated machine learning capability for his own research purposes. Photo by Dana J. Quigley for Microsoft.

From lab to product

Fusi described the research behind automated machine learning in an academic paper. The Azure Machine Learning team saw an opportunity to incorporate the technology as a feature in the machine learning service, noted Venky Veeraraghavan, group program manager for the machine learning platform team.

Over the course of validating the technology, testing the product and benchmarking with customers, the Azure team discovered several novel ways customers could use it.

For example, customers who have hundreds or thousands of pieces of equipment in different geographic locations, such as windmills on wind farms, could use automated machine learning to fine-tune predictive models for each piece of equipment, which would otherwise be cost- and time-prohibitive.

In other cases, data scientists are turning to automated machine learning after they’ve already selected and tuned a model as a way to validate their handcrafted solution. “We have found they often get a better model they hadn’t considered,” Veeraraghavan said.

For Fusi, the capability has eliminated the most tedious part of developing AI, freeing him to focus on other aspects such as feature engineering – the process of extracting useful relationships from data – and to get some rest.

“I can start an automated machine learning run, go home, sleep, and come back to work and see a good model,” he said.

Top image: Nicolo Fusi presents a graphic that shows models identified by automated machine learning. Photo by Dana J. Quigley for Microsoft.


John Roach writes about Microsoft research and innovation. Follow him on Twitter.


How AI is building better gas stations and transforming Shell’s global energy business

In one part of the solution, they applied a machine teaching framework developed by Bonsai, which was acquired by Microsoft last summer, that allows subject matter experts with little background in data science or AI to tell the system what they want the intelligent agent to do and what key information it needs to know to do that job successfully.

This Microsoft team works on combining this subject matter expertise with deep reinforcement learning — a branch of AI that enables models to learn from experiences much like a person does, rather than from meticulously labeled data.

The Bonsai platform performs much of the machine learning mechanics in the background — translating instructions into algorithms, creating neural networks and teaching the model the desired behavior. Using this approach, it produced an intelligent agent that, in a proof-of-concept test, learned how to optimally steer the drill using a simplified simulated 2D virtual well environment.
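The steering idea can be illustrated with a toy reinforcement-learning agent in a tiny 2D grid “well.” This uses tabular Q-learning rather than the deep variant Bonsai builds on, with an invented reward scheme, purely to show how behavior is learned from simulated experience rather than labeled data:

```python
import random

W, H = 8, 6
TARGET = (7, 4)                       # the pay zone the drill should reach
HAZARDS = {(3, 1), (4, 3), (5, 2)}    # cells to steer around
ACTIONS = [(1, 0), (1, 1), (1, -1)]   # drill ahead, curve down, curve up

def step(state, action):
    x = state[0] + action[0]
    y = min(max(state[1] + action[1], 0), H - 1)
    if (x, y) == TARGET:
        return (x, y), 100.0, True    # success
    if (x, y) in HAZARDS or x >= W:
        return (x, y), -50.0, True    # off track
    return (x, y), -1.0, False        # small cost per metre drilled

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

for episode in range(5000):
    state, done = (0, 2), False
    while not done:
        if random.random() < 0.1:     # occasionally explore a random action
            a = random.randrange(len(ACTIONS))
        else:                         # otherwise exploit the best known one
            a = max(range(len(ACTIONS)), key=lambda i: q(state, i))
        nxt, reward, done = step(state, ACTIONS[a])
        best_next = max(q(nxt, i) for i in range(len(ACTIONS)))
        Q[(state, a)] = q(state, a) + 0.1 * (reward + 0.9 * best_next - q(state, a))
        state = nxt
```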

“What excites us about Bonsai is that it gives us a reinforcement learning platform that allows us to scale quickly and takes away the engineering effort involved in stitching together the open-source capabilities so our data scientists can focus on what they’re best at, which is figuring out what the model needs to do,” Jeavons said. “It’s early days still, but we’re extremely excited about the potential.”

Improving employee engagement

But Shell’s digital transformation isn’t just limited to its physical wells, pipelines and plants. It’s also changing the way employees working around the globe communicate with each other.

When Shell’s internal communications team started looking for ways to boost employee engagement and empower everyone across the organization to share information, they settled on a combination of intelligent tools offered as part of Microsoft Office 365: Yammer, Stream and SharePoint Online.

Leaders started using Stream, an enterprise video service, to connect with employees more authentically and personally. Now, in addition to leadership communications, employees can easily find or create videos to promote safety, share best practices or analyze a successful project. Stream features like automatic closed captioning and deep search ensure communications are accessible and help employees quickly find the most useful content.

Those videos can be easily posted on SharePoint, a collaboration repository, and Yammer, a corporate social network that allows employees to have conversations with peers across the organization and give leaders insights into what employees are experiencing. More than three-quarters of Shell employees now use Yammer, with an average of 4,000 joining each month. The discussions help unify teams that are dispersed across the globe, solve problems together and foster open communication between groups that had little contact before.

For instance, employees working the night shift on a rig off the coast of Australia might use Yammer to alert the incoming crew to any issues they’ve experienced, and they can now ask if someone working at another location around the world might have a solution.

“These tools allow people to connect with each other, to learn from each other, to see opportunities quicker and build off of each other’s skills,” Sebregts said. “I lead a global organization, and in the past someone doing my type of job might travel around the world and hold a traditional town hall everywhere and once a quarter they would send an email with some thoughts. This is a new era of communication — it’s open, instantaneous, it’s modern, it’s fast, and I love it.”


Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.