
Building responsible and trustworthy conversational AI

From financial robo-advisors to virtual health assistants, enterprises across every industry are leveraging virtual assistants to create outstanding customer experiences and help employees maximize their time. As artificial intelligence technology continues to advance, virtual assistants will handle more and more mundane and repetitive tasks, freeing people to devote more time and energy to more productive and creative endeavors.

But like any technology, conversational AI can pose a significant risk when it’s developed and deployed improperly or irresponsibly, especially when it’s used to help people navigate information related to employment, finances, physical health, and mental well-being. For enterprises and society to realize the full potential of conversational AI, we believe bots need to be designed to act responsibly and earn user trust.

Last year, to help businesses meet this challenge, we shared 10 guidelines for building responsible conversational AI. Today, we’d like to illustrate how we’ve applied these guidelines in our own organization and share new resources that can help developers in any industry do the same.

Responsible bot guidelines

In November 2018, Lili Cheng, corporate vice president of Microsoft AI and Research, announced guidelines designed to help organizations develop bots that build trust in their services and their brands. We created these bot guidelines based on our own experiences, our research on responsible AI and by listening to our customers and partners. The guidelines are just that — guidelines. They represent the things that we found useful to think about from the very beginning of the design process. They encourage companies and organizations to think about how their bot will interact with people and how to mitigate potential risks. Ultimately, the guidelines are all about trust, because if people don’t trust the technology, they aren’t going to use it.

Designing bots with these guidelines in mind  

The bot guidelines have already started to play a central role in our own internal development processes. For example, our marketing team leveraged the guidelines while creating an AI-based lead qualification assistant that emails potential customers to determine their interest in Microsoft products and solutions. The assistant uses natural language processing to interact with customers, ensuring they receive the information they need or are directed to the Microsoft employee who can best help them. To provide a useful example, we’ve highlighted the ways in which our marketing team has approached three of the guidelines below.

  • Articulate the purpose of your bot and take special care if your bot will support consequential use cases.

Since the assistant would be customer-facing, the marketing team recognized the importance of completely thinking through every aspect of how the bot would work. Before building the lead qualification assistant, they created a vision and scope document that outlined the bot’s expected tasks, technical considerations, expected benefits and end goals in terms of business performance. By outlining these details early in the design process, the team was able to focus on developing and refining only necessary capabilities and deploy the bot sooner. Creating this document also helped them identify and design for edge cases that the bot was likely to encounter and establish a set of effective reliability metrics.

  • Ensure a seamless hand-off to a person where the person-bot exchange leads to interactions that exceed the bot’s competence.

While considering these edge use cases, the marketing team identified a couple of scenarios in which a handoff to a person would be required. First, if the assistant can’t determine the customer’s intent (for example, the response is too complex or lengthy), then the assistant will flag the conversation for a person. The person can then direct the assistant to the next best course of action or respond directly to the customer. The person also can use key phrases from the conversation to train the assistant to respond to similar situations in the future.

Second, the customer may ask something for which the assistant doesn’t have a pre-programmed response. For example, a student may request information about our products and solutions but not be interested in making a purchase. The assistant would flag the conversation instead of forwarding it to sales. A person can then reply through the assistant to help the student learn more.
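
To make the two scenarios above concrete, here is a minimal sketch of a confidence-threshold routing policy. It is purely illustrative: the intent names, threshold and function are our own assumptions, not the marketing team’s implementation.

```python
# Illustrative sketch of a human-handoff policy for a lead-qualification bot.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class IntentResult:
    intent: str        # best-matching intent, e.g. "pricing_question"
    confidence: float  # model confidence in [0, 1]

HANDOFF_THRESHOLD = 0.6  # below this, a person reviews the message
SUPPORTED_INTENTS = {"pricing_question", "product_info", "request_demo"}

def route(message: str, result: IntentResult) -> str:
    """Decide whether the bot replies or a person takes over."""
    if result.confidence < HANDOFF_THRESHOLD:
        return "flag_for_human"  # intent unclear: response too complex or lengthy
    if result.intent not in SUPPORTED_INTENTS:
        return "flag_for_human"  # no pre-programmed response exists
    return "bot_replies"

# A long, ambiguous customer reply gets routed to a person.
print(route("Well, it depends on several things...", IntentResult("unknown", 0.31)))
```

Either way, the flagged conversation also becomes training material: a person can label the key phrases that confused the assistant so it handles similar situations in the future.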

  • Ensure your bot is reliable

To help ensure the bot is performing as designed, the marketing team reviews a set of reliability metrics (such as the accuracy of determining intent or conversation bounce rate) through a regularly updated dashboard. As the team updates and improves the bot, it can closely analyze the impact of each change on the bot’s reliability and make adjustments as necessary.
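
As a rough illustration of what such a dashboard might compute, the sketch below derives the two example metrics named above from a toy conversation log. The field names and the one-turn definition of a “bounce” are assumptions, not the team’s actual definitions.

```python
# Hypothetical sketch: computing two reliability metrics from a conversation log.
conversations = [
    {"predicted_intent": "pricing", "true_intent": "pricing", "turns": 4},
    {"predicted_intent": "demo",    "true_intent": "pricing", "turns": 1},
    {"predicted_intent": "info",    "true_intent": "info",    "turns": 3},
]

# Accuracy of determining intent, measured against human-reviewed labels.
intent_accuracy = sum(
    c["predicted_intent"] == c["true_intent"] for c in conversations
) / len(conversations)

# Treat a conversation the customer abandons after one turn as a bounce.
bounce_rate = sum(c["turns"] <= 1 for c in conversations) / len(conversations)

print(f"intent accuracy: {intent_accuracy:.0%}, bounce rate: {bounce_rate:.0%}")
```

Tracking these numbers across releases is what lets the team tie each bot update to a measurable change in reliability.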

Helping developers put the guidelines into practice

We have taken lessons learned from experiences like this one and important work from our industry-leading researchers to create actionable and comprehensive learning resources for developers.

As part of our free, online AI School, our Conversational AI learning path enables developers to start building sophisticated conversational AI agents using services such as natural language understanding or speech translation. We have recently added another module, Responsible Conversational AI, to this learning path. It covers how developers can design deeply intelligent bots and also ensure they are built in a responsible and trustworthy manner. In this learning path, developers can explore topics such as bot reliability, accessibility, security and consequential use cases and learn how to mitigate concerns that often arise with conversational AI. We have also created a Conversational AI lab in which a sample bot guides developers through a responsible conversational AI experience and explains its behavior at each point of the experience.

Learn more

We encourage you to share the AI lab and the Responsible Conversational AI learning module with technical decision-makers in your organization.

You can also go to our new AI Business School to learn more about how Microsoft has integrated AI throughout our business and how your organization can do the same.


Microsoft and General Assembly launch partnership to close the global AI skills gap

Partnership will upskill and reskill 15,000 workers over the next three years and create industry-recognized credentials for AI skills

REDMOND, Wash. — May 17, 2019 — Microsoft Corp. and global education provider General Assembly (GA) on Friday announced a partnership to close skills gaps in the rapidly growing fields of artificial intelligence (AI), cloud and data engineering, machine learning, data science, and more. This initiative will create standards and credentials for AI skills, upskill and reskill 15,000 workers by 2022, and create a pool of AI talent for the global workforce.

Technologies like AI are creating demand for new worker skills and competencies: According to the World Economic Forum, up to 133 million new roles could be created by 2022 as a result of the new division of labor between humans, machines and algorithms. To address this challenge, Microsoft and GA will power 2,000 job transitions for workers into AI and machine learning roles in year one and will train an additional 13,000 workers with AI-related skills across sectors in the next three years.

“Artificial intelligence is driving the greatest disruption to our global economy since industrialization, and Microsoft is an amazing partner as we develop solutions to empower companies and workers to meet that disruption head on,” said Jake Schwartz, CEO and co-founder of GA. “At its core, GA has always been laser-focused on connecting what companies need to the skills that workers obtain, and we are excited to team up with Microsoft to tackle the AI skills gap.”

The joint program will focus on three core areas: setting the standards for artificial intelligence skills, developing scalable AI training solutions for companies, and creating a sustainable talent pool of workers with AI skills.

  • To create clear and consistent standards for AI skills, Microsoft will be the founding member of GA’s AI Standards Board, and will be joined by other industry-leading companies at the forefront of AI disruption. Over the next six months, the Standards Board will define skills standards, develop assessments, design a career framework, and build an industry-recognized credential for AI skills. Learn more about GA’s Standards Boards here.
  • As businesses adopt AI and machine learning cross-functionally, business leaders and technologists alike must understand AI concepts and master AI tools. Today, Microsoft supports businesses in aerospace, manufacturing and other sectors with Azure, but many workers are not yet ready to leverage its full capabilities. The collaboration will focus on accelerating the workforce training needs of Microsoft’s customers so that more teams have the foundational skills needed to work with AI.
  • To ensure that businesses can meet ever-growing AI talent needs, GA and Microsoft will establish an AI Talent Network to source candidates for hire and project-based work. GA will leverage its existing network of 22 campuses and the broader Adecco ecosystem to create a repeatable talent pipeline for the AI Talent Network.

“As a technology company committed to driving innovation, we have a responsibility to help workers access the AI training they need to ensure they thrive in the workplace of today and tomorrow,” said Jean-Philippe Courtois, executive vice president and president of Global Sales, Marketing and Operations at Microsoft. “We are thrilled to combine our industry and technical expertise with General Assembly to help close the skills gap and ensure businesses can maximize their potential in our AI-driven economy.”

About General Assembly

General Assembly (GA), an Adecco Group company, closes skills gaps for individuals and companies. Offering training and assessments in software engineering, data science, digital marketing, and more, GA is building clear career pathways for people, and sustainable, diverse talent pipelines for employers. To learn more visit https://generalassemb.ly.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com 

Tess VandenDolder, BerlinRosen for General Assembly, (646) 755-6142, tess.vandendolder@berlinrosen.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.


Sony and Microsoft to explore strategic partnership

Companies to collaborate on new cloud-based solutions for gaming experiences and AI solutions

Kenichiro Yoshida, President and CEO, Sony Corporation (left), and Satya Nadella, CEO, Microsoft

TOKYO and REDMOND, Wash. — May 16, 2019 — Sony Corporation (Sony) and Microsoft Corp. (Microsoft) announced on Thursday that the two companies will partner on new innovations to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions.

Under the memorandum of understanding signed by the parties, the two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services. In addition, the two companies will explore the use of current Microsoft Azure datacenter-based solutions for Sony’s game and content-streaming services. By working together, the companies aim to deliver more enhanced entertainment experiences for their worldwide customers. These efforts will also include building better development platforms for the content creator community.

As part of the memorandum of understanding, Sony and Microsoft will also explore collaboration in the areas of semiconductors and AI. For semiconductors, this includes potential joint development of new intelligent image sensor solutions. By integrating Sony’s cutting-edge image sensors with Microsoft’s Azure AI technology in a hybrid manner across cloud and edge, as well as solutions that leverage Sony’s semiconductors and Microsoft cloud technology, the companies aim to provide enhanced capabilities for enterprise customers. In terms of AI, the parties will explore incorporation of Microsoft’s advanced AI platform and tools in Sony consumer products, to provide highly intuitive and user-friendly AI experiences.

“Sony is a creative entertainment company with a solid foundation of technology. We collaborate closely with a multitude of content creators that capture the imagination of people around the world, and through our cutting-edge technology, we provide the tools to bring their dreams and vision to reality,” said Kenichiro Yoshida, president and CEO of Sony. “PlayStation® itself came about through the integration of creativity and technology. Our mission is to seamlessly evolve this platform as one that continues to deliver the best and most immersive entertainment experiences, together with a cloud environment that ensures the best possible experience, anytime, anywhere. For many years, Microsoft has been a key business partner for us, though of course the two companies have also been competing in some areas. I believe that our joint development of future cloud solutions will contribute greatly to the advancement of interactive content. Additionally, I hope that in the areas of semiconductors and AI, leveraging each company’s cutting-edge technology in a mutually complementary way will lead to the creation of new value for society.”

“Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation,” said Satya Nadella, CEO of Microsoft. “Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers.”

Going forward, the two companies will share additional information when available.

About Sony Corporation

Sony Corporation is a creative entertainment company with a solid foundation of technology. From game and network services to music, pictures, electronics, semiconductors and financial services — Sony’s purpose is to fill the world with emotion through the power of creativity and technology. For more information, visit: http://www.sony.net/.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Sony Corporation, Corporate Communications & CSR Department, Sony.Pressroom@sony.co.jp

 

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

 


Ever-changing music shaped by skies above NYC hotel

Barwick composed five movements within an overall soundscape that reflect the constantly changing nature of the sky throughout the day, each with its own background of bass, synthesizer and vocal lines that weave in and out. For each “event,” identified by Microsoft AI, she then created six synthesized and six vocal sounds for the generative audio program to choose from – for example, 60 different musical options a day for every time an airplane passes above. The sounds are an expression of Barwick’s emotions in response to each stimulus.

“I didn’t want it to be too literal,” she says. “I could have made it sound ‘raindroppy,’ but it’s more about the attitude of the event. An airplane is a lot different than the moon, so it has more of a metallic sound than a warm sun sound or a quiet ‘moony’ kind of feeling. I wanted people who listen to it to be curious and wonder what that sound meant, what’s going across the sky right now.”

Barwick has never been afraid of technology, even if she didn’t have access to it. She recorded her first album in 2007 using a guitar pedal to form vocal loops on a cassette tape. “I didn’t even have a computer then,” she remembers. “I took my bag of tapes in somewhere to get mastered to produce the CD.”

Now she relies on technology to compose, record and perform her multilayered, ambient music. She uses effects on everything, including her voice. There’s no such thing as an unplugged Julianna Barwick set. Still, she says, “Before I was approached to do this project, the only thing I knew about artificial intelligence was from the movies. I’d never seen an application of it in my daily life.”

So as she began exploring sounds, Barwick grappled not only with what AI was and could do, but also with what her role would be in comparison to its. Who was the actual composer – she or the program? Was AI a partner or a tool?

“I contemplated how the project would play out in my absence and realized that I can make all the sounds, but I’m not going to be there to detect all the events — you have to rely on the AI to do that,” Barwick says. “And that’s such an important part of the score; it’s almost like it’s a 50-50 deal. And that’s what makes this project interesting. It almost brings in another collaborator, and the possibilities are endless. It’s opened up a new world of thinking and approaching future compositions and scores.”

A camera sends live images to a Microsoft Azure computer vision tool, which assigns tags such as “clouds” or “sun.” Those are fed into the system that technologists programmed after analyzing Barwick’s compositions and distilling them into an algorithm, which then chooses which tracks to play together.
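
As a rough sketch of the generative selection step described in the caption, the code below maps a sky event tag to a pool of twelve sounds (six synthesized and six vocal, as in the piece) and draws from it at random. The tag names and file names are invented for illustration; this is not the installation’s actual program.

```python
# Illustrative sketch: a sky "event" tag, as returned by an image-tagging
# service, maps to a pool of pre-recorded sounds to layer into the mix.
import random

SOUND_POOLS = {
    "airplane": [f"airplane_synth_{i}.wav" for i in range(6)]
              + [f"airplane_vocal_{i}.wav" for i in range(6)],
    "clouds":   [f"clouds_synth_{i}.wav" for i in range(6)]
              + [f"clouds_vocal_{i}.wav" for i in range(6)],
    "sun":      [f"sun_synth_{i}.wav" for i in range(6)]
              + [f"sun_vocal_{i}.wav" for i in range(6)],
}

def tracks_for_event(tag, n=2):
    """Choose which of the event's twelve sounds to play this time."""
    pool = SOUND_POOLS.get(tag, [])
    return random.sample(pool, k=min(n, len(pool)))

print(tracks_for_event("airplane"))
```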

How AI is helping kids bridge language gaps

How did you learn to talk?

Probably something like this: Your infant brain, a hotbed of neurological activity, picked up on your parents’ speech tones and facial expressions. You started to mimic their sounds, interpret their emotions and distinguish relatives from strangers. And one day, about a year into life, you pointed and started saying a few meaningful words with slobbery glee.

But many children, particularly those diagnosed with autism spectrum disorder, acquire language in different ways. Worldwide, one in 160 children is diagnosed with ASD. In the United States, it is one in 59 children — and approximately 40 percent of this group is non-verbal.


Learning from superheroes and puppies

Lois Jean Brady and Matthew Guggemos, co-founders of Bay Area-based iTherapy and both speech pathologists and autism specialists, are tackling the growing prevalence of autism-related speech challenges with InnerVoice, an artificial intelligence-powered app whose customizable avatars simulate social cues. The app animates avatars of superheroes, puppies, stuffed animals and people to help young children who have difficulties with language and expression pair words with meanings and practice conversation.

iTherapy received a Microsoft AI for Accessibility grant in 2018. The program provides grants as well as technology and expertise to individuals, groups and companies passionate about creating tools that make the world more inclusive. iTherapy is using the grant to integrate the Azure AI platform to enhance its generated speech, image recognition and facial animation.

A young boy at the iTherapy clinic uses the InnerVoice chat bot to describe his photo of a teddy bear.
A five-year-old student uses Zyrobotics to learn to read at Rancho Santa Gertrudes Elementary.

“I think for sure that the AI component was the missing link,” says Guggemos of the app. “How do you use words, and what do words mean? What does a symbol represent? How do you use AI to develop problems that require language to solve?”

How a hippo helps teach speech 

AI is also proving an exciting development in speech and language improvement for Zyrobotics, an Atlanta-based educational technology company that was the first beneficiary of the AI for Accessibility program in 2018. Zyrobotics is using Azure Machine Learning to help its ReadAble Storiez educational tool interpret when a student needs assistance.


ReadAble Storiez uses an avatar of a hippo to help students with learning disabilities such as dyslexia and other challenges such as stuttering, pauses and heavy accents.

Ayanna Howard, the company’s founder and a professor in robotics, was first motivated to create ReadAble Storiez while watching a teacher use Zyrobotics’ Counting Zoo app with a child. The teacher turned to her and said, “Can you have this app do more than just read with him? I think it’s fantastic that it helps improve his math – could it also help him improve his reading?”

Howard also found teachers mentioning the challenges of dyslexia in the classroom. “I was like, ‘Oh, what happens if you have a reading disability?’ I then learned that signs of dyslexia in children aren’t picked up until much later, typically when schools start standardized testing. I realized we needed an intervention much earlier and that we could do that with Counting Zoo.”

Learning models that don’t take individualized challenges into account, or don’t address the speech patterns of kids, “tend to fail,” Howard says. ReadAble Storiez employs a custom speech model and a sophisticated “tutor” to convert speech to text and measure accuracy, fluency and the child’s reading improvement.
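
The sketch below illustrates the kind of scoring such a tutor might perform: aligning the recognized words with the target sentence to estimate accuracy, and timing the reading to estimate fluency in words per minute. It is an invented illustration, not Zyrobotics’ custom speech model.

```python
# Hypothetical scoring step: compare the child's recognized speech with the
# target sentence to measure accuracy and fluency.
from difflib import SequenceMatcher

def reading_scores(target, recognized, seconds):
    t, r = target.lower().split(), recognized.lower().split()
    matched = sum(size for _, _, size in
                  SequenceMatcher(None, t, r).get_matching_blocks())
    accuracy = matched / len(t)        # fraction of target words read correctly
    fluency = len(r) / (seconds / 60)  # words per minute
    return accuracy, fluency

acc, wpm = reading_scores("the happy hippo reads a story",
                          "the happy hippo reads story", seconds=6.0)
print(f"accuracy {acc:.0%}, fluency {wpm:.0f} wpm")
```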

‘It blew my mind!’

Howard is pleased with the program’s early success. “While they were reading a book, kids were correcting themselves,” she says. “As a technologist, you say your stuff works, but I’m sitting there with the kids and I’m blown away, ‘It really does work!’ It’s thrilling to see that what works in the lab actually works in the real world, in the child’s environment. The [avatar] would provide feedback, and a child would be like, ‘I didn’t say a word right. Can I try again?’ It blew my mind. That was the affirmation. Our solution was on track and on target.”

Brady, who came up with the idea for InnerVoice after studying and writing a book on apps for people with autism, reflects on the impact it has made. She cites an example of working with a student who is non-verbal and used the app to communicate with an avatar of himself.

“He would take a picture of an apple, and an avatar would read it as ‘apple,’ and then he would write it down, ‘apple.’ Until then, I hadn’t even thought of that strategy.”

A mother uses InnerVoice to work on communication skills with her young daughter.

Brady and Guggemos imagine the benefits of AI-assisted communication beyond their target audience. They are working with people with dementia, head injuries and strokes. “Many communication apps just talk for you,” Brady adds. “Ours spans many aspects of communication for everybody — even English-language learners. Why wouldn’t I try that? It provides a model. There’s a coffee cup on the table, take a picture of it. How do you say that?”

Howard dreams of Zyrobotics helping to close the gap between mainstream learners and students with learning disabilities. To start, this fall Zyrobotics will introduce ReadAble Storiez to classrooms in the Los Nietos, California, school district, where rates of learning disabilities are high. The company will also apply AI to its suite of STEM Storiez, a series of nine interactive and inclusive books that help children ages 3 to 7 engage with science, technology, engineering and math.

The AI for Accessibility program has been instrumental in getting Zyrobotics off the ground with ReadAble Storiez. “If we hadn’t gotten the grant, we’d be in phase zero,” Howard says. “We run on grants to ensure we provide access to learning technologies for all students. We need to be out there for kids that need us.”

The grant gave Brady and Guggemos the technology to take InnerVoice to the next level. “Our kids need this technology,” Brady says. “It’s not a luxury. We want to keep adding the best stuff. Microsoft really propelled us forward in that arena.”

Top image: A young boy at the iTherapy clinic uses the InnerVoice chat bot to describe his photo of a teddy bear.



SK Telecom and Microsoft sign MOU for comprehensive cooperation in cutting-edge ICT

The two companies agreed to combine their strengths to jointly promote IoT business, AI technologies and services, media and entertainment services, and new ways of working

Park Jung Ho, CEO of SK Telecom (left), and Satya Nadella, CEO of Microsoft (right), at a recent meeting.

SEOUL, KOREA – May 13, 2019 – SK Telecom (NYSE:SKM) and Microsoft Corp. signed a memorandum of understanding (MOU) on May 7 for comprehensive cooperation in leading-edge ICT, including 5G, artificial intelligence (AI) and cloud.

Under the MOU, SK Telecom and Microsoft will combine their technological capabilities in areas such as 5G, AI and cloud to jointly promote Internet of Things (IoT) business including smart factory; AI technologies and services; media and entertainment services; and new ways of working for ICT companies under the SK Group umbrella.

To promote smart factory IoT business operations, the two companies established a business strategic partnership in February 2019 to launch Microsoft Azure with Metatron, SK Telecom’s self-developed big data solution. SK Telecom and Microsoft will work together to further upgrade the service and implement joint marketing activities.

By putting together the capabilities of SK Telecom’s AI platform NUGU with Microsoft’s Cortana digital assistant, the two companies will work together to offer new AI-powered products and services, including consumer solutions such as smart speakers and other offerings for the enterprise.

Moreover, the two companies will work together to create a new level of customer experience in the field of media and entertainment.

SK Telecom will adopt Microsoft 365, the company’s intelligent and secure solution to empower employees, to create a modern workplace and promote a new way of working among employees. Eventually, SK Telecom will expand Microsoft 365 to other ICT companies under the SK Group umbrella. In addition, the two companies will provide new value to customers by combining Microsoft’s modern workplace devices and solutions, such as Surface and Office 365, with SK Telecom’s unique products and services.

“SK Telecom is pleased to join hands with Microsoft as collaboration with global leading companies like Microsoft is essential to gain leadership in the 5G market, where competition is already fierce,” said Park Jung-ho, President and CEO of SK Telecom. “SK Telecom will work closely with Microsoft to create an unprecedented value by combining the strengths and capabilities of the two companies.”

“Through the strategic partnership with SK Telecom, we will play a key role in shaping the future and accelerating the digital transformation of the telecommunications industry with our world-class network and technology,” said Jason Zander, executive vice president, Azure, Microsoft. “This will be a deep and multifaceted partnership that strengthens the power of cloud and AI to deliver innovative new services to customers.”

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

About SK Telecom

SK Telecom is the largest mobile operator in Korea, with nearly 50 percent of the market share. A pioneer across every generation of mobile networks, the company commercialized its fifth-generation (5G) network on December 1, 2018, and announced the first 5G smartphone subscribers on April 3, 2019. With its world-leading 5G, SK Telecom is set to realize the Age of Hyper-Innovation by transforming the way customers work, live and play.

Building on its strength in mobile services, the company is also creating unprecedented value in diverse ICT-related markets including media, security and commerce.

For more information, press only:

SK Telecom Public Relations/Media Contact, skt_press@sk.com or sktelecom@bm.com

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com


A Microsoft employee’s personal and global impact on rare disease

Adult hugging child

The summer of 2009 was a nightmare for my family. Our 3-month-old son Sergio was ill, and doctors could not diagnose his condition. Our family went on a long journey to find an answer. Two years after his symptoms began, in 2011, we learned that he had Dravet Syndrome.

Neurologists were working to find a diagnosis, and Sergio was being treated while they searched. At nine months of age, he was given a drug that was contraindicated for what ultimately proved to be his condition. Soon after receiving it, he started to have dozens of seizures per day. We stopped counting the seizures and wondered why, in the age of computers, neurologists didn’t use computers and data to improve the diagnosis process.

This event changed my life. I founded the Dravet Syndrome Foundation in Spain in 2011 after learning of Sergio’s condition, and later, Foundation 29. I spent seven years trying to find a cure. Unfortunately, finding a cure for a rare disease is very challenging. Instead, using resources available to me as a Microsoft employee, and with foundation volunteers, I created a diagnostic program for children with Dravet Syndrome, called Dx29. To date, it has provided diagnoses for more than 700 patients worldwide. It is now available for clinicians to use free of charge.

The Long Wait for a Rare Disease Diagnosis

Patients wait an average of 4.8 years for a rare disease diagnosis. In the meantime, they contend with the risk of medical errors and severe side effects. Patients visit an average of 7.3 specialists, and 40% report that a delayed diagnosis had a significant or very marked impact on their condition. It is estimated that 6-8% of the world’s population is affected by a rare disease, meaning that improvements in diagnosis procedures could impact 460-620 million people.

The Need for Clinical Data Integration

The conventional diagnosis process is not designed for the complex biology behind rare diseases. It usually starts with a clinical consultation. A physician requests a genomic test, sending along the biological samples and the symptoms (phenotypes) already identified. The sequencing is performed and bioinformaticians (often manually) analyze the large amount of data produced. To carry out this complex analysis, they use the symptoms identified by the physician to guide their search.

The physician’s and the bioinformatician’s data are not integrated, so these professionals are disconnected. Those conducting the gene filtering have partial phenotypic information but are unable to collect more data, because only physicians have full access to patients and their records. The genetic report the physician receives would likely be different if more patient information were available during gene filtering, and clinical decisions based on the genetic data could change as well. How can bioinformaticians check whether a given gene variant in the patient is producing a concrete phenotype? How does patient information get into the hands of bioinformaticians? There is an information gap.

Satya Nadella Empowers Employees to Help

At the 2017 Microsoft employee hackathon in Spain, one of my best friends, and Microsoft colleague, Sacha Arozarena, suggested we create a bot to diagnose patients with rare diseases. After just three days of intensive work, our prototype was able to suggest symptoms and navigate the user to a potential diagnosis. It was still a proof of concept, but we won the Spanish hackathon. The most important achievement of this work was the connections the prototype created for us.

That same month, I heard Satya Nadella discuss his son’s medical condition while he presented at Microsoft Ready, so I sent him an email asking for help. He replied within five minutes, connecting me with Microsoft’s Research team. Through this connection, I learned about Microsoft’s efforts in several areas:  a new Genomics team, a team working on medical natural language processing, and the company’s investments and efforts towards bringing artificial intelligence to health science.

Using the Cloud and AI to Speed Diagnosis

One year ago, colleagues and I founded Foundation 29, a non-profit organization with the mission to improve the lives of patients with rare diseases through faster, better diagnosis. The foundation is developing solutions to facilitate diagnosis, with the intention of distributing them to every physician in the world. Dx29 is the name of this effort. The goal is to reimagine and democratize diagnosis.

The tool we developed uses Artificial Intelligence (AI) to close the information gap. The gene filtering of most routine low-level cases can be automated with AI, allowing bioinformaticians and specialists to focus on the most challenging cases where human intervention is required. Physicians can drive automatic genetic analysis simply by identifying symptoms in the tool. The physician’s role comes back to the center of the process, focusing on the patient and doing symptom identification and differential diagnosis.

Dx29 does not make a diagnosis, but enhances the physicians’ skills. It gives physicians a tool that augments their capabilities by hiding the complexity of genomics and allowing them to focus on clinical diagnosis, something they are already experts on.

The process starts by performing automatic symptom identification and codification from medical records. It then allows physicians to navigate the complexity of gene identification by simply selecting identified symptoms in the tool. In the final step, once enough symptoms have been matched with the genetic information, Dx29 presents a ranked list of potential conditions for the physician to evaluate further and decide how to proceed. The foundation ran its first medical tests last December, with promising results. Our goal is to make the tool available to the medical community this spring and to find a business model that secures the continuity of the project.
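
As a purely illustrative sketch of that final ranking step, the code below scores candidate conditions by how many of the patient’s identified symptoms each one explains. The condition names, phenotype sets and scoring rule are invented; Dx29’s actual analysis also weighs the genetic findings.

```python
# Invented illustration of ranking candidate conditions by symptom overlap.
PATIENT_SYMPTOMS = {"seizures", "developmental delay", "ataxia"}

CONDITION_PHENOTYPES = {
    "Condition A": {"seizures", "developmental delay", "ataxia", "hypotonia"},
    "Condition B": {"seizures", "hearing loss"},
    "Condition C": {"ataxia"},
}

def rank_conditions(symptoms, catalog):
    scored = []
    for condition, phenotypes in catalog.items():
        explained = len(symptoms & phenotypes) / len(symptoms)
        scored.append((explained, condition))
    return sorted(scored, reverse=True)  # best-supported condition first

for score, condition in rank_conditions(PATIENT_SYMPTOMS, CONDITION_PHENOTYPES):
    print(f"{condition}: {score:.0%} of symptoms explained")
```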

Thanks to Microsoft and Global Organizations

Dx29 is possible because of help from Microsoft and its employees. It is impossible to list all the Microsoft employees who joined forces to collaborate on this initiative. Foundation 29, and Dx29 in particular, is honored by the privilege of working with Microsoft software engineers, product groups and consultants. Architects from Microsoft Services, data engineers, data scientists and the legal department provided us with advice on privacy and data protection.

I am proud to work for a company that empowers employees to achieve more. A lack of diagnosis is not only a stressful situation for patients and families, but also for healthcare professionals. Without a diagnosis, an appropriate treatment is not possible. With all the help received so far, Foundation 29 aims to empower physicians with the right tool to provide an accurate diagnosis.

I would like to thank the following organizations for their contributions to the development of the Dx29 tool and its pilot:

  • Centro de Investigación Biomédica en red de Enfermedades Raras (CIBERER), Madrid, Spain
  • Hospital La Paz de Madrid, Madrid, Spain
  • NIMGenetics, Madrid, Spain,
  • Idibell, Barcelona, Spain
  • The Global Commission on Rare Diseases

Rare disease patients are exceptions in clinical routines, but they drive the medical community towards precision medicine. Precision medicine should not be limited to exceptional cases, but spread to all patients, improving the standard of care for all.

The clock is ticking. I know I won’t be able to find a cure for Sergio, and he will have to live with Dravet Syndrome all his life. But the possibility of creating a tool to speed up and improve the diagnosis process for other children is a strong motivation for me, my family and the community around us. Helping others is sometimes the only way to heal your own wounds.

Learn more about Dx29, the Global Commission and how AI can support diagnosis and read more from the Microsoft In Health Blog on the Global Commission.


Breaking Bard: Using AI to unlock Shakespeare’s greatest works

Spoiler alert: At the end of Romeo and Juliet, they both die.

OK, as spoilers go, it’s not big. Most people have read the play, watched one of the famous films or sat through countless school lessons devoted to William Shakespeare and his work. They know it doesn’t end well for Verona’s most famous couple.

In fact, the challenge is finding something no one knows about the world-famous, 400-year-old play. That’s where artificial intelligence can help.

Phil Harvey, a Cloud Solution Architect at Microsoft in the UK, used the company’s Text Analytics API on 19 of The Bard’s plays. The API, which is available to anyone as part of Microsoft’s Azure Cognitive Services, can be used to identify sentiment and topics in text, as well as pick out key phrases and entities. This API is one of several Natural Language Processing (NLP) tools available on Azure.

By creating a series of colourful, Power BI graphs (below) showing how negative (red) or positive (green) the language used by The Bard’s characters was, he hoped to shine a new light on some of the greatest pieces of literature, as well as make them more accessible to people who worry the plays are too complex to easily understand.

Harvey said: “People can see entire plotlines just by looking at my graphs on language sentiment. Because visual examples are much easier to absorb, it makes Shakespeare and his plays more accessible. Reading language from the 16th and 17th centuries can be challenging, so this is a quick way of showing them what Shakespeare is trying to do.

“It’s a great example of data giving us new things to know and new ways of knowing it; it’s a fundamental change to how we process the world around us. We can now pick up Shakespeare, turn it into a data set and process it with algorithms in a new way to learn something I didn’t know before.”

What Harvey’s graphs reveal is that Romeo struggles with more extreme emotions than Juliet. Love has a much greater effect on him, challenging stereotypes of the time that women – the fairer sex – were more prone to the highs and lows of relationships.

“It’s interesting to see that the male lead is the one with more extreme emotions,” Harvey added. “The longest lines, both positive and negative, are spoken by him. Juliet is steadier; she is positive and negative but not extreme in what she says. Romeo is a fellow of more extreme emotion, he’s bouncing around all over the place.

“Macbeth is also interesting because there are these two peaks of emotion, and Shakespeare uses the wives at these points to turn the story. I also looked at Helena and Hermia in A Midsummer Night’s Dream, because they have a crossed-over love story. They are both positive at the start but then they find out something and it gets negative towards the end.”

statue of William Shakespeare

The project required AI working alongside humans to truly understand and fully appreciate Shakespeare’s plays

His Shakespeare graphs are the final step in a long process. After downloading a text file of The Bard’s plays from the internet, Harvey had to process the data to prepare it for Microsoft’s AI algorithms. He removed all the stage directions, keeping the act and scene numbers, the characters’ names and what they said. He then uploaded the text to the Microsoft Cognitive Services API, a set of tools that can be used in apps, websites and bots to see, hear, speak, understand and interpret users through natural methods of communication.
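
A minimal sketch of that clean-up step might look like the following, assuming stage directions appear in square brackets (conventions vary between source texts):

```python
# Hypothetical clean-up: drop stage directions, keep act/scene markers,
# speaker names and dialogue. Assumes bracketed stage directions.
import re

raw = """ACT I. SCENE I.
ROMEO. [Aside] Is she a Capulet?
JULIET. My only love sprung from my only hate!
[Exeunt]"""

lines = []
for line in raw.splitlines():
    line = re.sub(r"\[.*?\]", "", line).strip()  # remove bracketed directions
    if line:  # skip lines that were nothing but a direction
        lines.append(line)

print("\n".join(lines))
```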

The Text Analytics API is pre-trained with an extensive body of text with sentiment associations. The model uses a combination of techniques during text analysis, including text processing, part-of-speech analysis, word placement and word associations.

After scanning the Shakespeare plays, Microsoft’s NLP tool gave the lines of dialogue a score between zero and one – scores close to one indicated a positive sentiment, and scores close to zero indicated a negative sentiment.
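
For readers who want to try something similar, a minimal request against the Text Analytics sentiment endpoint looks roughly like the sketch below (a v2.1-style call; the endpoint region and subscription key are placeholders you would supply yourself):

```python
# Rough sketch of scoring dialogue with the Text Analytics sentiment API.
import requests

ENDPOINT = ("https://<your-region>.api.cognitive.microsoft.com"
            "/text/analytics/v2.1/sentiment")
KEY = "<your-subscription-key>"

documents = {"documents": [
    {"id": "1", "language": "en",
     "text": "But soft, what light through yonder window breaks?"},
    {"id": "2", "language": "en",
     "text": "A plague o' both your houses!"},
]}

response = requests.post(ENDPOINT, json=documents,
                         headers={"Ocp-Apim-Subscription-Key": KEY})
for doc in response.json()["documents"]:
    # Scores near 1 indicate positive sentiment; near 0, negative.
    print(doc["id"], doc["score"])
```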

However, before you start imagining a world in which only robots read books before telling humans the gist of what happened, Harvey discovered some unexpected challenges with his test.

While the AI system worked well for Shakespeare plays that contained straightforward plots and dialogue, it struggled to determine whether more nuanced speech was positive or negative. The algorithm couldn’t work out whether Hamlet’s mad ravings were real or imagined, or whether characters were being deceptive or telling the truth. That meant the AI labelled events as positive when they were negative, and vice versa. The AI believed The Comedy of Errors was a tragedy because of the physical, slapstick moments in the play.


Harvey realised that the parts of the plays that dealt with what truly makes us unique as humans – joking, elation, lying, double meanings, subterfuge, sarcasm – could only be noticed and interpreted by human readers. His project required AI working alongside humans to truly understand and fully appreciate Shakespeare.

Harvey insists that his experiments with Shakespeare’s plays are just a starting point but that the same combination of AI and humans can eventually be extended to companies and their staff, too.

“Take the example of customers phoning their energy company,” he said. “With Microsoft’s NLP tools, you could see if conversations that happen after 5pm are more negative than those that happen at 9am, and deploy staff accordingly. You could also see if a call centre worker turns conversations negative, even if they start out positive, and work with that person to ensure that doesn’t happen in the future.

“It can help companies engage with data in a different way and assist them with everyday tasks.”

Harvey also said journalists could use the tool to see how readers are responding to their articles, and social media experts could get an idea of how consumers view their brand.

For now, Harvey is concentrating on the Classics and is turning his attention to Charles Dickens, if he can persuade the V&A in London to let him study some of their manuscripts.

“In the V&A manuscripts, you can see where Dickens has crossed out words. I would love to train a custom vision model on that to get a page by page view of his changes. I could then look at a published copy of the text and see which parts of the book he worked on most; maybe that part went well but he had trouble with this bit. Dickens’s work was serialised in newspapers, so we might be able to deduce whether he was receiving feedback from editors that we didn’t know about. I think that’s amazing.”



Machine teaching: How people’s expertise makes AI even more powerful

Most people wouldn’t think to teach five-year-olds how to hit a baseball by handing them a bat and ball, telling them to toss the objects into the air in a zillion different combinations and hoping they figure out how the two things connect.

And yet, this is in some ways how we approach machine learning today — by showing machines a lot of data and expecting them to learn associations or find patterns on their own.

For many of the most common applications of AI technologies today, such as simple text or image recognition, this works extremely well.

But as the desire to use AI for more scenarios has grown, Microsoft scientists and product developers have pioneered a complementary approach called machine teaching. This relies on people’s expertise to break a problem into easier tasks and give machine learning models important clues about how to find a solution faster. It’s like teaching a child to hit a home run by first putting the ball on the tee, then tossing an underhand pitch and eventually moving on to fastballs.

“This feels very natural and intuitive when we talk about this in human terms but when we switch to machine learning, everybody’s mindset, whether they realize it or not, is ‘let’s just throw fastballs at the system,’” said Mark Hammond, Microsoft general manager for Business AI. “Machine teaching is a set of tools that helps you stop doing that.”

Machine teaching seeks to gain knowledge from people rather than extracting knowledge from data alone. A person who understands the task at hand — whether how to decide which department in a company should receive an incoming email or how to automatically position wind turbines to generate more energy — would first decompose that problem into smaller parts. Then they would provide a limited number of examples, or the equivalent of lesson plans, to help the machine learning algorithms solve it.

In supervised learning scenarios, machine teaching is particularly useful when little or no labeled training data exists for the machine learning algorithms because an industry or company’s needs are so specific.

YouTube Video

In difficult and ambiguous reinforcement learning scenarios — where algorithms have trouble figuring out which of millions of possible actions they should take to master tasks in the physical world — machine teaching can dramatically shortcut the time it takes an intelligent agent to find the solution.

It’s also part of a larger goal to enable a broader swath of people to use AI in more sophisticated ways. Machine teaching allows developers or subject matter experts with little AI expertise, such as lawyers, accountants, engineers, nurses or forklift operators, to impart important abstract concepts to an intelligent system, which then performs the machine learning mechanics in the background.

Microsoft researchers began exploring machine teaching principles nearly a decade ago, and those concepts are now working their way into products that help companies build everything from intelligent customer service bots to autonomous systems.

“Even the smartest AI will struggle by itself to learn how to do some of the deeply complex tasks that are common in the real world. So you need an approach like this, with people guiding AI systems to learn the things that we already know,” said Gurdeep Pall, Microsoft corporate vice president for Business AI. “Taking this turnkey AI and having non-experts use it to do much more complex tasks is really the sweet spot for machine teaching.”

Today, if we are trying to teach a machine learning algorithm to learn what a table is, we could easily find a dataset with pictures of tables, chairs and lamps that have been meticulously labeled. After exposing the algorithm to countless labeled examples, it learns to recognize a table’s characteristics.

But if you had to teach a person how to recognize a table, you’d probably start by explaining that it has four legs and a flat top. If you saw the person also putting chairs in that category, you’d further explain that a chair has a back and a table doesn’t. These abstractions and feedback loops are key to how people learn, and they can also augment traditional approaches to machine learning.

“If you can teach something to another person, you should be able to teach it to a machine using language that is very close to how humans learn,” said Patrice Simard, Microsoft distinguished engineer who pioneered the company’s machine teaching work for Microsoft Research. This month, his team moves to the Experiences and Devices group to continue this work and further integrate machine teaching with conversational AI offerings.

Microsoft researchers Patrice Simard, Alicia Edelman Pelton and Riham Mansour (left to right) are working to infuse machine teaching into Microsoft products. Photo by Dan DeLong for Microsoft.

Millions of potential AI users

Simard first started thinking about a new paradigm for building AI systems when he noticed that nearly all the papers at machine learning conferences focused on improving the performance of algorithms on carefully curated benchmarks. But in the real world, he realized, teaching is an equally or arguably more important component to learning, especially for simple tasks where limited data is available.

If you wanted to teach an AI system how to pick the best car but only had a few examples that were labeled “good” and “bad,” it might infer from that limited information that a defining characteristic of a good car is that the fourth number of its license plate is a “2.” But pointing the AI system to the same characteristics that you would tell your teenager to consider — gas mileage, safety ratings, crash test results, price — enables the algorithms to recognize good and bad cars correctly, despite the limited availability of labeled examples.
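
A toy version of that car example, sketched below with invented data, shows the idea: a simple model trained on the meaningful features a person would name (gas mileage, safety rating, price) can generalize from just a handful of labeled cars.

```python
# Invented data: a handful of labeled cars described by meaningful features.
from sklearn.tree import DecisionTreeClassifier

# features: [miles per gallon, safety rating (1-5), price in $1000s]
X = [[35, 5, 22], [40, 4, 25], [12, 2, 60], [15, 1, 18], [38, 5, 30]]
y = ["good", "good", "bad", "bad", "good"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[33, 4, 24]]))  # an unseen car, judged on the same features
```

Had the feature been something spurious, like a license-plate digit, no amount of cleverness in the algorithm could rescue such a small training set.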

In supervised learning scenarios, machine teaching improves models by identifying these high-level meaningful features. As in programming, the art of machine teaching also involves the decomposition of tasks into simpler tasks. If the necessary features do not exist, they can be created using sub-models that use lower level features and are simple enough to be learned from a few examples. If the system consistently makes the same mistake, errors can be eliminated by adding features or examples.

One of the first Microsoft products to employ machine teaching concepts is Language Understanding, a tool in Azure Cognitive Services that identifies intent and key concepts from short text. It’s been used by companies ranging from UPS and Progressive Insurance to Telefonica to develop intelligent customer service bots.

“To know whether a customer has a question about billing or a service plan, you don’t have to give us every example of the question. You can provide four or five, along with the features and the keywords that are important in that domain, and Language Understanding takes care of the machinery in the background,” said Riham Mansour, principal software engineering manager responsible for Language Understanding.
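
The sketch below illustrates the principle Mansour describes, a handful of example utterances per intent plus human-supplied keywords, using a deliberately simple scoring scheme. It is not the Language Understanding API, only an illustration of why so few examples can suffice.

```python
# Toy intent classifier: a few examples per intent plus domain keywords.
EXAMPLES = {
    "billing":      ["why is my bill so high", "i was charged twice",
                     "question about my invoice", "update my payment method"],
    "service_plan": ["change my plan", "what plans do you offer",
                     "upgrade my subscription", "cancel my service plan"],
}
KEYWORDS = {"billing": {"bill", "charged", "invoice", "payment"},
            "service_plan": {"plan", "plans", "subscription", "cancel"}}

def classify(utterance):
    words = set(utterance.lower().split())
    def score(intent):
        keyword_hits = len(words & KEYWORDS[intent])
        example_hits = sum(len(words & set(e.split())) for e in EXAMPLES[intent])
        return 2 * keyword_hits + example_hits  # keywords weigh more than examples
    return max(EXAMPLES, key=score)

print(classify("I have a question about my last invoice"))  # -> billing
```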

Microsoft researchers are exploring how to apply machine teaching concepts to more complicated problems, like classifying longer documents, email and even images. They’re also working to make the teaching process more intuitive, such as suggesting to users which features might be important to solving the task.

Imagine a company wants to use AI to scan through all its documents and emails from the last year to find out how many quotes were sent out and how many of those resulted in a sale, said Alicia Edelman Pelton, principal program manager for the Microsoft Machine Teaching Group.

As a first step, the system has to know how to identify a quote from a contract or an invoice. Oftentimes, no labeled training data exists for that kind of task, particularly if each salesperson in the company handles it a little differently.

If the system was using traditional machine learning techniques, the company would need to outsource that process, sending thousands of sample documents and detailed instructions so an army of people can attempt to label them correctly — a process that can take months of back and forth to eliminate error and find all the relevant examples. They’ll also need a machine learning expert, who will be in high demand, to build the machine learning model. And if new salespeople start using different formats that the system wasn’t trained on, the model gets confused and stops working well.

By contrast, Pelton said, Microsoft’s machine teaching approach would use a person inside the company to identify the defining features and structures commonly found in a quote: something sent from a salesperson, an external customer’s name, words like “quotation” or “delivery date,” “product,” “quantity,” or “payment terms.”

It would translate that person’s expertise into language that a machine can understand and use a machine learning algorithm that’s been preselected to perform that task. That can help customers build customized AI solutions in a fraction of the time using the expertise that already exists within their organization, Pelton said.

Pelton noted that there are countless people in the world “who understand their businesses and can describe the important concepts — a lawyer who says, ‘oh, I know what a contract looks like and I know what a summons looks like and I can give you the clues to tell the difference.’”

Microsoft Corporate Vice President for Business AI Gurdeep Pall talks at a recent conference about autonomous systems solutions that employ machine teaching. Photo by Dan DeLong for Microsoft.

Making hard problems truly solvable

More than a decade ago, Hammond was working as a systems programmer in a Yale neuroscience lab and noticed how scientists used a step-by-step approach to train animals to perform tasks for their studies. He had a similar epiphany about borrowing those lessons to teach machines.

That ultimately led him to found Bonsai, which was acquired by Microsoft last year. It combines machine teaching with deep reinforcement learning and simulation to help companies develop “brains” that run autonomous systems in applications ranging from robotics and manufacturing to energy and building management. The platform uses a programming language called Inkling to help developers and even subject matter experts decompose problems and write AI programs.

Deep reinforcement learning, a branch of AI in which algorithms learn by trial and error based on a system of rewards, has successfully outperformed people in video games. But those models have struggled to master more complicated real-world industrial tasks, Hammond said.

Adding a machine teaching layer — or infusing an organization’s unique subject matter expertise directly into a deep reinforcement learning model — can dramatically reduce the time it takes to find solutions to these deeply complex real-world problems, Hammond said.

For instance, imagine a manufacturing company wants to train an AI agent to autonomously calibrate a critical piece of equipment that can be thrown out of whack as temperature or humidity fluctuates or after it’s been in use for some time. A person would use the Inkling language to create a “lesson plan” that outlines relevant information to perform the task and to monitor whether the system is performing well.

Armed with that information from its machine teaching component, the Bonsai system would select the best reinforcement learning model and create an AI “brain” to reduce expensive downtime by autonomously calibrating the equipment. It would test different actions in a simulated environment and be rewarded or penalized depending on how quickly and precisely it performs the calibration.
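
A toy version of that training loop, sketched below under our own assumptions (it is not the Bonsai platform or Inkling), shows how the reward combines speed and precision, and how a machine-teaching hint seeds the agent with a sensible starting policy instead of random exploration.

```python
# Toy calibration simulation: the agent nudges a drifting setting toward a
# target and is rewarded for finishing quickly and precisely.
import random

def run_episode(policy, steps=50):
    setting, target, reward = 0.0, 1.0, 0.0
    for _ in range(steps):
        setting += policy(target - setting)  # the agent's calibration action
        setting += random.gauss(0, 0.02)     # drift from temperature/humidity
        error = abs(target - setting)
        reward += -error - 0.01              # penalize error and elapsed time
        if error < 0.01:
            reward += 10.0                   # bonus for a precise calibration
            break
    return reward

# A taught hint ("close a fraction of the remaining error each step") is a
# far better starting point than acting randomly.
print(run_episode(lambda err: 0.5 * err))
```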

Telling that AI brain what’s important to focus on at the outset can short-circuit a lot of fruitless and time-consuming exploration as it tries to learn in simulation what does and doesn’t work, Hammond said.

“The reason machine teaching proves critical is because if you just use reinforcement learning naively and don’t give it any information on how to solve the problem, it’s going to explore randomly and will maybe hopefully — but frequently not ever — hit on a solution that works,” Hammond said. “It makes problems truly solvable whereas without machine teaching they aren’t.”


 Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.


This Slovak startup is using AI and drones to help preserve natural water cycles

Working on his model for more than three decades, Rain for Climate founder Michal Kravčík has gathered scientists, ecologists, hydrologists, entrepreneurs and government agencies around him to create a plan to restore climate stability:

“The research we’ve been working on for years has shown that climate change is not just about high greenhouse gas production, but especially about desertification – the planet’s drying out. According to the analyses, we have about five years to act, otherwise the ecosystem will be irreversibly damaged,” says Vlado Zaujec, CEO and co-founder of Rain for Climate.


When AI meets nature
Rain for Climate’s unique solution involves gathering territorial data with drones, which can provide perspective and information faster and more accurately than standard ground-level analysis. This data can then be used to create a personalised report for each customer, based on the needs of their land, providing bespoke technical solutions from a catalogue of over 5,000 different possible measures and actions.

As a Microsoft AI for Earth grant winner, the company has been given free Azure credits to help power and develop an AI solution that analyses drone data more accurately and at a faster rate. Currently in an internal testing phase, the solution also makes use of machine learning, allowing the AI to improve over time, teaching itself to spot patterns and make connections across its ever-growing data pool.
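
As one hypothetical example of the kind of analysis drone data supports (not necessarily Rain for Climate’s own method), the sketch below computes a standard vegetation index per map tile to flag drying ground that may need water-retention measures:

```python
# Hypothetical analysis: a normalized difference vegetation index (NDVI)
# computed from near-infrared and red reflectance; low values suggest dry,
# bare ground. Data and threshold are invented.
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

# Fake 2x2 grid of tiles from a drone survey.
nir = np.array([[0.8, 0.7], [0.3, 0.2]])
red = np.array([[0.2, 0.3], [0.4, 0.5]])

dry_tiles = ndvi(nir, red) < 0.2  # threshold is illustrative
print(dry_tiles)
```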

“We get a lot of data from the drones, which we can quickly analyse thanks to artificial intelligence and machine learning, made possible by our Microsoft grant,” says Vlado Zaujec. “Based on the evaluation, experts can then prepare a water retention project, where they select different technical solutions from a catalogue of more than 5,000 measures. The type of solution, size, location and materials used reflect the uniqueness of each territory.”

As with all environmental projects, Rain for Climate’s founder sees its goal as a long-term endeavour. The AI-powered solution the company is working on is one step in its journey towards its ultimate goal: restoring water to its natural balance in affected areas as quickly and as efficiently as possible. Thanks to technology and innovative companies like Rain for Climate, we look forward to seeing more solutions that will help conserve our planet.