Posted on Leave a comment

AI for Earth: Helping save the planet with data science

Wee Hyong Tok is a data scientist. He has a passion for numbers, a faith in technology – and a mission that might make a superhero think twice.

“I want to save the Earth,” he says matter-of-factly. “That seems like a very bold statement. But, I strongly believe that artificial intelligence (AI) can play an important role in monitoring the health of our planet.”

Singapore-born and educated, Wee Hyong has been a data guy and techie all his working life – first in academia, and later with Microsoft in China and the United States where he helped create ground-breaking products in the cloud.

For more than a year now, he has been leading an elite global research team for AI for Earth – a five-year, US$50 million Microsoft initiative that supports, and partners with, environmental groups and researchers. They are tackling some of the world’s most intractable problems by marshaling the immense power of AI, machine learning (ML), and the cloud.

Wee Hyong Tok, Principal Data Science Manager, AI & Research.

In a recent interview during a quick visit back to Singapore, Wee Hyong summed up the challenge: We live on planet Earth, and yet we know very little about it.

We have limited time to learn how to conserve its resources. Fresh water supplies are being dangerously overexploited. Land is being exhausted and degraded to produce more food for more people in ever-growing cities. Thousands of species are fading fast into extinction as their habitats disappear in a whirl of industrialization and a haze of pollution. The oceans are choking on plastics and the carbon-charged climate is changing. Precious things that are vital to our existence are under threat and, if lost, might never come back.

I strongly believe that AI can play an important role in monitoring the health of our planet.

When we hear such things, most of us tend to shrug helplessly. Such problems just seem too big, too hard, and too scary to fix. But Wee Hyong and his colleagues at AI for Earth and Microsoft Research are convinced that solutions can come in our time – if data, technology, and imagination are put to work.

“I am an optimist,” he says before describing the technical complexities surrounding his team’s quest. “We can learn how to leverage AI to solve some of the sustainability challenges facing humanity today.”

Asia’s elusive and endangered Snow Leopard. Photo: Peter Bolliger.

Boiled down, AI for Earth aims to create sustainable solutions across four areas that are key to the health of the planet and the future of humankind: agriculture, water, biodiversity, and climate change.

Wee Hyong proudly points to some early breakthroughs. The Farm Beats project is pioneering new data-driven agriculture to guide farmers in India and the United States on where and when to plant crops for the greatest yield.

Equally impressive are the strides being made in land cover mapping – traditionally a time-consuming, expensive tool that is essential for environmental management and precision conservation. Recently, the entire United States was mapped by machine-learning algorithms that processed nearly 200 million aerial images in just over 10 minutes. Done the usual way, such a project would have taken many months and cost a fortune. Deployed globally and locally, this new way of mapping could revolutionize how we mitigate the effects of urbanization, pollution, deforestation, and even natural disasters.

Endangered species are also being given new hope. Traditionally, analysts pore over thousands of images taken from satellites, drones or camera traps in the wild to study the range, populations, and behaviors of animals otherwise rarely seen by humans. It’s laborious work that takes time, skill, and concentration. “Try spotting a herd of zebra on the African savannah from a satellite image,” Wee Hyong says. “It’s not easy.”

High resolution imagery of zebra on the African savannah. Photo: Courtesy of Save The Elephants

Now computers can take on this role thanks to deep learning techniques that enable them to make sense of the thousands of pixels in an image. This frees up scientists’ expert time to do and study more. It’s already adding invaluable knowledge about elusive snow leopards in Kyrgyzstan in Central Asia and dwindling elephant populations in Congo in Africa, where AI is also being used in the fight against the twin scourges of poaching and the ivory trade.
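To make that workflow concrete, here is a minimal sketch of the bookkeeping that follows automated image classification: once a model tags each camera-trap image, the analyst’s manual tally reduces to simple aggregation. The tuples, species names and confidence threshold below are invented for illustration; real pipelines differ by project.

```python
# Hypothetical post-classification step: a vision model has already emitted
# (image_id, species, confidence) tuples; we tally sightings above a threshold.
from collections import Counter

def species_counts(detections, min_confidence=0.8):
    """Count detections per species, ignoring low-confidence predictions."""
    counts = Counter()
    for image_id, species, conf in detections:
        if conf >= min_confidence:
            counts[species] += 1
    return counts

detections = [
    ("img_001", "snow_leopard", 0.93),
    ("img_002", "snow_leopard", 0.88),
    ("img_003", "ibex", 0.95),
    ("img_004", "snow_leopard", 0.42),  # below threshold, likely noise
]
print(species_counts(detections))  # Counter({'snow_leopard': 2, 'ibex': 1})
```

The threshold trades recall for precision: lowering it catches more animals but re-introduces the false positives a human would otherwise have to screen out.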

Project Premonition uses insects as de facto “field biologists.” The project uses AI to analyze the blood that mosquitoes take from animals across an ecosystem to glean valuable data. To achieve this, AI for Earth is developing drones that autonomously locate mosquito hotspots, robotic traps to collect specimens, and cloud-scale genomics and machine learning algorithms to identify each animal bitten.

The rise of the intelligent cloud and the ability to deploy machine learning models to the intelligent edge are enabling exciting new possibilities to study and save wildlife, from the remotest corners of the Earth to suburban backyards.

African bush elephants with Mount Kilimanjaro in the background. Picture: Courtesy of Save the Elephants

It goes beyond just technology, right? They want to tell their kids they are trying to save the Earth.

Pursuing research is worthy in itself, but real value comes when a solution is launched into action in the real world. It is here that Wee Hyong’s motivation shines through: He wants to leave the world in better shape for his two children – and for all children in the world.

The same goes for his team of data scientists and software engineers who left exciting and satisfying roles in commercial product development to join AI for Earth.

“Every single person who came for a job interview said they wanted to be able to tell their kids and families that they were serving a higher purpose. It goes beyond just technology, right? It goes beyond just new deep learning techniques and approaches, or whatever. They want to tell their kids they are trying to save the Earth.”


Breakthrough ideas wanted: Join the Microsoft AI Idea Challenge for cool prizes, exposure

This post is authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft.

All of us have creative ideas – ideas that can improve our lives and the lives of thousands, perhaps even millions of others. But how often do we act on turning those ideas into a reality? Most of the time, we do not believe in our ideas strongly enough to pursue them. Other times we feel like we lack a platform to build out our idea or showcase it. Most good ideas don’t go beyond those initial creative thoughts in our head.

If you’re a professional working in the field of artificial intelligence (AI), or an aspiring AI developer or just someone who is passionate about AI and machine learning, Microsoft is excited to offer you an opportunity to transform your most creative ideas into reality. Join the Microsoft AI Idea Challenge Contest today for a chance to win exciting prizes and get your project featured in Microsoft’s AI.lab showcase. Check out the rules, terms and conditions of the contest and then dive right in!

The Challenge

The Microsoft AI Idea Challenge is seeking breakthrough AI solutions from developers, data scientists, professionals and students, preferably built on the Microsoft AI platform and services. The challenge gives you a platform to freely share AI models and applications, so they are reusable and easily accessible. Submitted ideas are judged on the parameters shown in the figure below: 50% of the weight goes to the originality of your idea, 20% to the feasibility of your solution, and 30% to the complexity (i.e., level of sophistication) of your implementation.
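The weighting works out as a simple weighted average. A toy sketch, assuming judges rate each criterion on a hypothetical 0–100 scale (the scale is invented; only the 50/20/30 split comes from the contest description):

```python
def idea_score(originality, feasibility, complexity):
    """Combine judging criteria with the stated weights:
    50% originality, 20% feasibility, 30% complexity.
    Inputs are assumed 0-100 judge ratings (illustrative scale only)."""
    return 0.5 * originality + 0.2 * feasibility + 0.3 * complexity

# A highly original but moderately complex idea still scores well:
print(idea_score(80, 90, 70))  # 40.0 + 18.0 + 21.0 = 79.0
```

Note how the split rewards originality most: a 10-point gain in originality moves the total as much as a 25-point gain in feasibility.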

The Microsoft AI Challenge is accepting submissions between now and October 12th, 2018.

To qualify for the competition, individuals or teams are required to submit a working AI model, test dataset, a demo app and a demo video that can be a maximum of three minutes long. We encourage you to register early and upload your projects soon, so that you can begin to plan and build out your solution and turn in the rest of your materials on time. We are looking for solutions across the whole spectrum of use cases – to be inspired, take a look at some of the examples at AI.lab.

Prizes

The winners of the first three places in the contest will respectively receive a Surface Book 2, a DJI Drone, and an Xbox One X.

We hope that’s motivation to get you started today – good luck!

Tara


Can sound help save a dwindling elephant population? Scientists using AI think so.

Scientists with the Elephant Listening Project estimate that Africa’s population of forest elephants has dropped from roughly 100,000 animals in 2011 to fewer than 40,000 animals today. But those numbers are largely based on indirect evidence: ivory seizures, signs of poaching and labor-intensive surveys that are too expensive to be done regularly.

The Elephant Listening Project has spent more than three decades researching how elephants use low-frequency rumbling sounds to communicate with one another. More recently, those scientists began to use acoustic sensors at research sites to inform population estimates and, ultimately, to track and protect forest elephants across their ranges in Central and West Africa.
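As a flavor of the signal processing such acoustic monitoring involves, here is a toy low-frequency detector built on the Goertzel algorithm (a single-bin DFT). The sample rate, frequencies and synthetic signals are invented for illustration; real elephant-call detection uses far more sophisticated models over long spectrogram windows.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Spectral power at one frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * target_hz / sample_rate)  # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic check: energy at 20 Hz should flag the "rumble", not the "bird call".
sr = 1000  # samples per second (illustrative)
t = [i / sr for i in range(sr)]  # one second of audio
rumble = [math.sin(2 * math.pi * 20 * x) for x in t]   # low-frequency tone
bird = [math.sin(2 * math.pi * 300 * x) for x in t]    # high-frequency tone
print(goertzel_power(rumble, sr, 20) > goertzel_power(bird, sr, 20))  # True
```

A detector like this only answers “is there energy near 20 Hz?”; distinguishing elephant rumbles from wind, trucks or thunder is exactly where machine learning earns its keep.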

If scientists find, for example, that at specific times of year elephants are using clearings in an unprotected logging concession to access scarce minerals or find mates, scientists can work with the loggers to schedule their work to minimize disturbance and reduce conflicts.

But there has been a bottleneck in getting data out of these remote African forests and analyzing information quickly, says Peter Wrege, a senior research associate at Cornell who directs the Elephant Listening Project.

“Right now, when we come out of the field with our data, the managers of these protected areas are asking right away, ‘What have you found? Are there fewer elephants? Is there a crisis we need to address immediately?’ And sometimes it takes me months and months before I can give them an answer,” says Wrege.

Conservation Metrics began collaborating with the Elephant Listening Project in 2017 to help boost that efficiency. Its machine learning algorithms have been able to identify elephant calls more accurately and will hopefully begin to shortcut the need for human review. But the volume of data from the acoustic monitors, shown in the spectrogram below, is taxing the company’s local servers and computational capacity.

Microsoft’s AI for Earth program has given a two-year grant to Conservation Metrics to build a cloud-based workflow in Microsoft Azure for analyzing and processing wildlife metrics. It has also donated Azure computing resources to the Elephant Listening Project to reduce its data-processing costs for the project. The computational power of Azure will speed processing time dramatically, says Matthew McKown, the CEO of Conservation Metrics. The platform also offers new opportunities for clients to upload and interact with their data directly.

It currently takes about three weeks for computers to process a few months of sound data from this landscape-scale study, says McKown. Once the Azure migration is complete later this year, that same job may take a single day.

“It’s a huge improvement. We’re really interested in speeding up that loop between having equipment monitoring things out in the field and going through this magic process to convert those signals into information you can send into the field where someone can take action,” says McKown. “Right now, that process can take a really long time.”


Why Microsoft and National Geographic Society are working together to advance conservation science with computer science

Camper sitting at creek with night sky above
Photograph by Devlin Gandy/National Geographic

By Dr. Jonathan Baillie, chief scientist at National Geographic Society, and Dr. Lucas Joppa, chief environmental scientist at Microsoft

Yesterday, Microsoft and National Geographic Society announced a new joint grant program that will equip explorers working on the most challenging environmental issues of the 21st century with the most advanced technologies available today. “AI for Earth Innovation” grants will fund new solutions that leverage AI and cloud technologies to monitor, model and manage Earth’s natural resources. Application forms are available today to anyone working at the intersection of computer science and environmental science, especially in the areas of agriculture, biodiversity, climate change and water.

As scientists who have spent our entire careers focused on conservation, we’ve come to believe that increased adoption of technology, including AI, is critical to making the progress needed – and at the pace needed – to protect our planet. From producing the foundational estimates of how rapidly species are going extinct to determining the effectiveness of our current conservation efforts, we realized that progress was slow or impossible without deploying scalable technology solutions.

There have been some notable success stories – including those we featured in a book we jointly published on the role of protected areas in conservation. But they are, frustratingly, the exception to the rule.

Now, in our roles as chief scientists at global organizations (for science and exploration and innovative technology, respectively), we hope to solve the root cause of that frustration. That is the goal of this partnership, and why Microsoft and National Geographic Society are bringing $1 million and access to our experts and technology together in this new partnership.

While different, both organizations are focused on pushing the boundaries of science and exploration for the benefit of the planet. National Geographic is synonymous with science and exploration. For 130 years, the organization has opened minds to the natural world, challenged perceptions of what is possible and set the stage for future exploration. For more than 35 years, Microsoft, too, has explored and pushed forward the boundaries of what technology can do, and what it can do for people and the world.

Our organizations have a unique combination of expertise in conservation and computer science, capacity building and public engagement, providing incredible potential to drive fundamental change. We will work together to empower people everywhere to respond to some of the most challenging environmental issues of the 21st century.

We realize that to some, it may seem counterintuitive to try to protect the planet with technology. It’s true that past industrial revolutions and technology development have directly contributed to our current climate crisis. Certainly, we recognize it’s not a panacea. But we’re fundamentally optimistic, because over the course of human history, every solution to a major societal challenge has been the result of human ingenuity and new technologies. It’s been the combination of scientific exploration and technological advances that has fueled new discoveries and led to major breakthroughs in our understanding of the planet and life on Earth. It’s as true today as it was when National Geographic Explorer Bob Ballard discovered new forms of life at the bottom of the ocean using then-cutting-edge underwater remotely operated vehicle (ROV) technology.

Lately, innovation in technology has far outpaced anything imaginable before, but scientific knowledge isn’t keeping pace. We have often imagined a future where that is no longer the case, and our individual organizations have worked tirelessly to change this, too.

By partnering, we’re ready to move from imagining to enabling. With AI and the cloud, researchers can stay focused on new discoveries, rather than data collection and sorting. Their findings can more easily be shared with other researchers around the world, creating new economies of scale that accelerate and improve the state of conservation science in near-real time.

While there are only a handful of grants, the program is structured to provide exponential impact. By ensuring that all models supported through this grant follow an open source approach and are publicly available, we will allow environmental researchers and innovators around the globe to take advantage of these new innovations immediately and directly in their own vital work.

For the health of our planet and our future, we all need to get back to nature with the help of technology. Microsoft and National Geographic are ready to put our tools and skills to work for researchers working to make that more sustainable future a reality. Come join us!



Microsoft and National Geographic form AI for Earth Innovation Grant partnership

New grant offering will support research and scientific discovery with AI technologies to advance agriculture, biodiversity conservation, climate change and water

REDMOND, Wash., and WASHINGTON, D.C. — July 16, 2018 — On Monday, Microsoft Corp. and National Geographic announced a new partnership to advance scientific exploration and research on critical environmental challenges with the power of artificial intelligence (AI). The newly created $1 million AI for Earth Innovation Grant program will provide award recipients with financial support, access to Microsoft cloud and AI tools, inclusion in the National Geographic Explorer community, and affiliation with National Geographic Labs, an initiative launched by National Geographic to accelerate transformative change and exponential solutions to the world’s biggest challenges by harnessing data, technology and innovation. Individuals and organizations working at the intersection of environmental science and computer science can apply today at https://www.nationalgeographic.org/grants/grant-opportunities/ai-earth-innovation/.

“National Geographic is synonymous with science and exploration, and in Microsoft we found a partner that is well-positioned to accelerate the pace of scientific research and new solutions to protect our natural world,” said Jonathan Baillie, chief scientist and executive vice president, science and exploration at the National Geographic Society. “With today’s announcement, we will enable outstanding explorers seeking solutions for a sustainable future with the cloud and AI technologies that can quickly improve the speed, scope and scale of their work as well as support National Geographic Labs’ activities around technology and innovation for a planet in balance.”

“Microsoft is constantly exploring the boundaries of what technology can do, and what it can do for people and the world,” said Lucas Joppa, chief environmental scientist at Microsoft. “We believe that humans and computers, working together through AI, can change the way that society monitors, models and manages Earth’s natural systems. We believe this because we’ve seen it — we’re constantly amazed by the advances our AI for Earth collaborators have made over the past months. Scaling this through National Geographic’s global network will create a whole new generation of explorers who use AI to create a more sustainable future for the planet and everyone on it.”

The $1 million AI for Earth Innovation Grant program will provide financial support to between five and 15 novel projects that use AI to advance conservation research toward a more sustainable future. The grants will support the creation and deployment of open-sourced trained models and algorithms that will be made broadly available to other environmental researchers, which offers greater potential to provide exponential impact.

Qualifying applications will focus on one or more of the core areas: agriculture, biodiversity conservation, climate change and water. Applications are open as of today and must be submitted by Oct. 8, 2018. Recipients will be announced in December 2018. Those who want more information and to apply can visit https://www.nationalgeographic.org/grants/grant-opportunities/ai-earth-innovation/.

About the National Geographic Society

The National Geographic Society is a leading nonprofit that invests in bold people and transformative ideas in the fields of exploration, scientific research, storytelling and education. The Society aspires to create a community of change, advancing key insights about the planet and probing some of the most pressing scientific questions of our time, all while ensuring that the next generation is armed with geographic knowledge and global understanding. Its goal is measurable impact: furthering exploration and educating people around the world to inspire solutions for the greater good. For more information, visit www.nationalgeographic.org.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777,

[email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.


TextWorld, an open-source project for generating text-based games, can train and test AI agents

Today, fresh out of the Microsoft Research Montreal lab, comes an open-source project called TextWorld. TextWorld is an extensible Python framework for generating text-based games. Reinforcement learning researchers can use TextWorld to train and test AI agents in skills such as language understanding, affordance extraction, memory and planning, exploration and more. Researchers can study these in the context of generalization and transfer learning. TextWorld further runs existing text-based games, like the legendary Zork, for evaluating how well AI agents perform in complex, human-designed settings.

Figure 1 – Enter the world of TextWorld. Get the code at aka.ms/textworld.

Text-based games – also known as interactive fiction or adventure games – are games in which the play environment and the player’s interactions with it are represented solely or primarily via text. As players move through the game world, they observe textual descriptions of their surroundings (typically divided into discrete ‘rooms’), what objects are nearby, and any other pertinent information. Players issue text commands to an interpreter to manipulate objects, other characters in the game, or themselves. After each command, the game usually provides some feedback to inform players how that command altered the game environment, if at all. A typical text-based game poses a series of puzzles to solve, treasures to collect, and locations to reach. Goals and waypoints may be specified explicitly or may have to be inferred from cues.
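The observe/command/feedback loop described above can be sketched in a few lines. This is not the TextWorld API – the game, rooms and commands below are invented – but it shows the interaction pattern an agent must master: issuing the right textual commands in the right order.

```python
# Toy text-game illustrating the observe -> command -> feedback loop.
# (Invented example; TextWorld's actual interface differs.)
class TinyTextGame:
    def __init__(self):
        self.location = "kitchen"
        self.inventory = []
        self.won = False

    def observe(self):
        if self.location == "kitchen":
            return "You are in the kitchen. A brass key lies on the table."
        return "You are in the hallway. A locked door blocks the way east."

    def step(self, command):
        if command == "take key" and self.location == "kitchen":
            self.inventory.append("key")
            return "You pick up the brass key."
        if command == "go east" and self.location == "kitchen":
            self.location = "hallway"
            return "You walk east into the hallway."
        if command == "unlock door" and self.location == "hallway" \
                and "key" in self.inventory:
            self.won = True
            return "The door swings open. You win!"
        return "Nothing happens."

game = TinyTextGame()
for cmd in ["take key", "go east", "unlock door"]:  # correct order matters
    print(game.observe(), "->", cmd, "->", game.step(cmd))
print(game.won)  # True
```

Swapping the first two commands would leave the agent key-less at the locked door, which is precisely the ordering problem a reinforcement-learning agent must discover from feedback alone.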

Figure 2 – An example game from TextWorld with a house-based theme.

Text-based games couple the freedom to explore a defined space with the restrictions of a parser and game world designed to respond positively to a relatively small set of textual commands. An agent that can competently navigate a text-based game needs to be able to not only generate coherent textual commands but must also generate the right commands in the right order, with little to no mistakes in between. Text-based games encourage experimentation, and successful playthroughs involve multiple game losses and in-game “deaths.” Close observation, creative interpretation of the text the game provides, and a generous supply of common sense are also integral to winning text-based games. The relatively simple obstacles present in a TextWorld game serve as an introduction to the basic challenges posed by human-authored text-based games. In TextWorld, an agent needs to learn how to observe, experiment, fail and learn from failure.

TextWorld has two main components: a game generator and a game engine. The game generator converts high-level game specifications, such as number of rooms, number of objects, game length, and winning conditions, into an executable game source code in the Inform 7 language. The game engine is a simple inference machine that ensures that each step of the generated game is valid by using simple algorithms such as one-step forward and backward chaining.
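The validity checking the engine performs can be illustrated with a toy one-step chaining rule: an action is applicable only if its preconditions hold in the current state, and applying it updates the state by its effects. The action names and predicates below are invented (and have nothing to do with Inform 7 syntax); this merely sketches the precondition/effect bookkeeping.

```python
# Toy precondition/effect model of one-step forward chaining.
# rules: action -> (preconditions, facts added, facts removed)
def valid_step(state, action, rules):
    pre, _add, _remove = rules[action]
    return pre <= state  # every precondition must hold

def apply_step(state, action, rules):
    pre, add, remove = rules[action]
    assert valid_step(state, action, rules), f"invalid action: {action}"
    return (state - remove) | add

rules = {
    "open_chest": ({"has_key", "at_chest"}, {"chest_open"}, {"has_key"}),
    "take_coin":  ({"chest_open"}, {"has_coin"}, set()),
}

state = {"has_key", "at_chest"}
assert not valid_step(state, "take_coin", rules)  # chest still closed
state = apply_step(state, "open_chest", rules)
state = apply_step(state, "take_coin", rules)
print("has_coin" in state)  # True
```

Chaining such checks forward from the start state (or backward from the winning condition) is what lets a generator guarantee that every generated game is actually solvable.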

Figure 3 – An overview of the TextWorld architecture.

“One reason I’m excited about TextWorld is the way it combines reinforcement learning with natural language,” said Geoff Gordon, Principal Research Manager at Microsoft Research Montreal. “These two technologies are both really important, but they don’t fit together that well yet. TextWorld will push researchers to make them work in combination.” Gordon pointed out that reinforcement learning has had a number of high-profile successes recently (like Go or Ms. Pac-Man), but in all of these cases the agent has fairly simple observations and actions (for example, screen images and joystick positions in Ms. Pac-Man). In TextWorld, the agent has to both read and produce natural language, which has an entirely different and, in many cases, more complicated structure.

“I’m excited to see how researchers deal with this added complexity,” said Gordon.

Microsoft Research Montreal specializes in state-of-the-art research in machine reading comprehension, dialogue, reinforcement learning, and FATE (Fairness, Accountability, Transparency, and Ethics in AI). The lab was founded in 2015 as Maluuba and acquired by Microsoft in 2017. For more information, check out Microsoft Research Montreal.

This release of TextWorld is a beta and we are encouraging as much feedback as possible on the framework from fellow researchers across the world. You can send your feedback and questions to [email protected]. Also, for more information and to get the code, check out TextWorld, and our related publications TextWorld: A Learning Environment for Text-based Games and Counting to Explore and Generalize in Text-based Games. Thank you!


Facial recognition technology: The need for public regulation and corporate responsibility

All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.

Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses. In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act.

We’ve set out below steps that we are taking, and recommendations we have for government regulation.

First, some context

Facial recognition technology has been advancing rapidly over the past decade. If you’ve ever seen a suggestion on Facebook or another social media platform to tag a face with a suggested name, you’ve seen facial recognition at work. A wide variety of tech companies, Microsoft included, have used this technology over the past several years to turn the time-consuming work of cataloging photos into something both instantaneous and useful.

So, what is changing now? In part it’s the ability of computer vision to get better and faster in recognizing people’s faces. In part this improvement reflects better cameras, sensors and machine learning capabilities. It also reflects the advent of larger and larger datasets as more images of people are stored online. This improvement also reflects the ability to use the cloud to connect all this data and facial recognition technology with live cameras that capture images of people’s faces and seek to identify them – in more places and in real time.

Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad. At an elementary level, you might use it to catalog and search your photos, but that’s just the beginning. Some uses are already improving security for computer users, like recognizing your face instead of requiring a password to access many Windows laptops or iPhones, and in the future a device like an automated teller machine.

Some emerging uses are both positive and potentially even profound. Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

But other potential applications are more sobering. Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

Perhaps as much as any advance, facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?

The issues become even more complicated when we add the fact that facial recognition is advancing quickly but remains far from perfect. As reported widely in recent months, biases have been found in the performance of several fielded face recognition technologies. The technologies worked more accurately for white men than for white women and were more accurate in identifying persons with lighter complexions than people of color. Researchers across the tech sector are working overtime to address these challenges and significant progress is being made. But as important research has demonstrated, deficiencies remain. The relative immaturity of the technology is making the broader public questions even more pressing.
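The bias findings described above come from disaggregated evaluation: instead of one overall accuracy number, researchers compute accuracy separately for each demographic group and compare. A minimal sketch of that audit step, with made-up records (real audits use curated benchmark datasets and many more metrics, such as false-match rates):

```python
# Disaggregated accuracy audit: compute accuracy per demographic group.
# records: (group_label, prediction_was_correct) pairs -- invented data.
def accuracy_by_group(records):
    totals, correct = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
print(accuracy_by_group(records))  # {'group_a': 0.666..., 'group_b': 0.333...}
```

A single aggregate score over these six records would read 50% and hide the gap entirely, which is why per-group reporting has become standard practice in fairness evaluations.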

Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically has some rate of error even when it operates in an unbiased way. And the issues relating to facial recognition go well beyond questions of bias, raising critical questions about our fundamental freedoms.

Politics meets Silicon Valley

In recent weeks, the politics of the United States have become more intertwined with these technology developments on the West Coast. One week in the middle of June put the issues raised by facial recognition technology in bold relief for me and other company leaders at Microsoft. As the country was transfixed by the controversy surrounding the separation of immigrant children from their families at the southern border, a tweet about a marketing blog Microsoft published in January quickly blew up on social media and sparked vigorous debate. The blog had discussed a contract with the U.S. Immigration and Customs Enforcement, or ICE, and said that Microsoft had passed a high security threshold; it included a sentence about the potential for ICE to use facial recognition.

We’ve since confirmed that the contract in question isn’t being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected. The work under the contract instead is supporting legacy email, calendar, messaging and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world. Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.

The ensuing discussion has illuminated broader questions that are rippling across the tech sector. These questions are not unique to Microsoft. They surfaced earlier this year at Google and other tech companies. In recent weeks, a group of Amazon employees has objected to its contract with ICE, while reiterating concerns raised by the American Civil Liberties Union (ACLU) about law enforcement use of facial recognition technology. And Salesforce employees have raised the same issues related to immigration authorities and these agencies’ use of their products. Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology.

These issues are not going to go away. They reflect the rapidly expanding capabilities of new technologies that increasingly will define the decade ahead. Facial recognition is the technology of the moment, but it’s apparent that other new technologies will raise similar issues in the future. This makes it even more important that we use this moment to get the direction right.

The need for government regulation

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

Such an approach is also likely to be far more effective in meeting public goals. After all, even if one or several tech companies alter their practices, problems will remain if others do not. The competitive dynamics between American tech companies – let alone between companies from different countries – will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

That’s why Microsoft called for national privacy legislation for the United States in 2005 and why we’ve supported the General Data Protection Regulation in the European Union. Consumers will have more confidence in the way companies use their sensitive personal information if there are clear rules of the road for everyone to follow. While the new issues relating to facial recognition go beyond privacy, we believe the analogy is apt.

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime. Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.

So what issues should be addressed through government regulation? That’s one of the most important initial questions to address. As a starting point, we believe governments should consider the following issues, among others:

  • Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
  • Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
  • What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
  • Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
  • Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
  • Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
  • Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
  • Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

This list, which is by no means exhaustive, illustrates the breadth and importance of the issues involved.

Another important initial question is how governments should go about addressing these questions. In the United States, this is a national issue that requires national leadership by our elected representatives. This means leadership by Congress. While some question whether members of Congress have sufficient expertise on technology issues, at Microsoft we believe Congress can address these issues effectively. The key is for lawmakers to use the right mechanisms to gather expert advice to inform their decision making.

On numerous occasions, Congress has appointed bipartisan expert commissions to assess complicated issues and submit recommendations for potential legislative action. As the Congressional Research Service (CRS) noted last year, these commissions are “formal groups established to provide independent advice; make recommendations for changes in public policy; study or investigate a particular problem, issue, or event; or perform a duty.” Congress’ use of the bipartisan “9/11 Commission” played a critical role in assessing that national tragedy. Congress has created 28 such commissions over the past decade, assessing issues ranging from protecting children in disasters to the future of the army.

We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch.

Issues relating to facial recognition go well beyond the borders of the United States. The questions listed above – and no doubt others – will become important public policy issues around the world, requiring active engagement by governments, academics, tech companies and civil society internationally. Given the global nature of the technology itself, there likely will also be a growing need for interaction and even coordination between national regulators across borders.

Tech sector responsibilities

The need for government leadership does not absolve technology companies of our own ethical responsibilities. Given the importance and breadth of facial recognition issues, we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values. We need to recognize that many of these issues are new and no one has all the answers. We still have work to do to identify all the questions. In short, we all have a lot to learn. Nonetheless, some initial conclusions are clear.

First, it’s incumbent upon those of us in the tech sector to continue the important work needed to reduce the risk of bias in facial recognition technology. No one benefits from the deployment of immature facial recognition technology that has greater error rates for women and people of color. That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company.

As we pursue this work, we recognize the importance of collaborating with the academic community and other companies, including in groups such as the Partnership for AI. And we appreciate the importance not only of creating data sets that reflect the diversity of the world, but also of ensuring that we have a diverse and well-trained workforce with the capabilities needed to be effective in reducing the risk of bias. This requires ongoing and urgent work by Microsoft and other tech companies to promote greater diversity and inclusion in our workforce and to invest in a broader and more diverse pipeline of talent for the future. We’re focused on making progress in these areas, but we recognize that we have much more work to do.

Second, and more broadly, we recognize the need to take a principled and transparent approach in the development and application of facial recognition technology. We are undertaking work to assess and develop additional principles to govern our facial recognition work. We’ve used a similar approach in other instances, including trust principles we adopted in 2015 for our cloud services, supported in part by transparency centers and other facilities around the world to enable the inspection of our source code and other data. Similarly, earlier this year we published an overall set of ethical principles we are using in the development of all our AI capabilities.

As we move forward, we’re committed to establishing a transparent set of principles for facial recognition technology that we will share with the public. In part this will build on our broader commitment to design our products and operate our services consistent with the UN’s Guiding Principles on Business and Human Rights. These were adopted in 2011 and have emerged as the global standard for ensuring corporate respect for human rights. We periodically conduct Human Rights Impact Assessments (HRIAs) of our products and services, and we’re currently pursuing this work with respect to our AI technologies.

We’ll pursue this work in part based on the expertise and input of our employees, but we also recognize the importance of active external listening and engagement. We’ll therefore also sit down with and listen to a variety of external stakeholders, including customers, academics and human rights and privacy groups that are focusing on the specific issues involved in facial recognition. This work will take up to a few months, but we’re committed to completing it expeditiously.

We recognize that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology. The use of infrastructure and off-the-shelf capabilities by third parties is more difficult for a company to regulate than the use of a complete service or the work of a firm’s own consultants, which can readily be managed more tightly. While nuanced, these distinctions will need consideration.

Third, in the meantime we recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology. Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. “Move fast and break things” became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.

For this reason, based in part on input from the Aether Committee, we’re moving more deliberately with our facial recognition consulting and contracting work. This has led us to turn down some customer requests for deployments of this service where we’ve concluded that there are greater human rights risks. As we’re developing more permanent principles, we will continue to monitor the potential uses of our facial recognition technologies with a view to assessing and avoiding human rights abuses.

In a similar vein, we’re committed to sharing more information with customers who are contemplating the potential deployment of facial recognition technology. We will continue work to provide customers and others with information that will help them understand more deeply both the current capabilities and limitations of facial recognition technology, how these features can and should be used, and the risks of improper uses.

Fourth, we’re committed to participating in a full and responsible manner in public policy deliberations relating to facial recognition. Government officials, civil liberties organizations and the broader public can only appreciate the full implications of new technical trends if those of us who create this technology do a good job of sharing information with them. Especially given our urging of governments to act, it’s incumbent on us to step forward to share this information. As we do so, we’re committed to serving as a voice for the ethical use of facial recognition and other new technologies, both in the United States and around the world.

We recognize that there may be additional responsibilities that companies in the tech sector ought to assume. We provide the foregoing list not with the sense that it is necessarily complete, but in the hope that it can provide a good start in helping to move forward.

Some concluding thoughts

Finally, as we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple. A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.

Even at a time of increasingly polarized politics, we have faith in our fundamental democratic institutions and values. We have elected representatives in Congress who have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part.



Medicine Man: How AI is bringing humanity back into healthcare

Ochsner Health is one example of an organisation using AI to revolutionise healthcare. Its system accurately tracks patients who are at risk of cardiac arrest and can detect when their condition is declining. This allows them to be admitted into intensive care hours earlier than they otherwise would have been, giving them potentially life-saving care before their condition deteriorates to the point where treatment would be less effective.

Project InnerEye, in use at Addenbrooke’s Hospital in Cambridge, is another solution that uses machine learning and computer vision to analyse radiological images. Designed to identify tumours, it improves the delivery of treatments such as radiotherapy by precisely distinguishing between cancerous and healthy tissue. It can also monitor disease progression during chemotherapy more closely, so that treatment can be adjusted in line with how patients respond.

These AI solutions allow medical professionals to improve patient care and shorten admission times, thanks to their improved precision. This, in turn, reduces financial and staffing strain, improving the healthcare experience in the areas where this technology is being used.

This is supported by data from the World Health Organization (WHO), which shows that between 30 and 50 percent of cancer deaths could be avoided through prevention, early detection and treatment. With cancer costing the global economy an estimated $1.16 trillion a year, the impact of technology such as AI is game-changing.

In the UK alone, for example, there are only 4.7 radiologists per 100,000 population, and this number will need to almost double by 2022 to meet demand. Because of this shortage, the NHS spent nearly £88 million in 2016 paying for backlogs of radiology scans to be reported – the same amount could have paid for over 1,000 full-time consultants.

“We are drowning in data in hospitals,” Kos states. “We don’t have enough human brainpower to deal with it all in a timely manner – which in healthcare, is vital.”

Using technology such as AI can therefore substantially decrease strain on healthcare systems, while simultaneously improving patient care and reducing costs, allowing doctors to spend their time on more complex medical diagnoses or, indeed, on connecting with patients.

The human factor
Introducing AI to healthcare isn’t removing the humanity from medicine. On the contrary, it’s increasing it.

A study in the Annals of Internal Medicine found that doctors spend nearly twice as much time doing administrative work (49 percent) as they do with their patients (27 percent). In other words, doctors are spending more time crunching through data, sifting through and updating records, and analysing scans, than they are speaking to their patients.

In a profession where people are dealing with often traumatic, life-changing developments, this personal, human touch is vital for the emotional well-being of patients and their loved ones. By using tools such as AI to free up more of their time, healthcare professionals can focus more on patient interaction, offering reassurance, providing guidance, and answering more questions.

Culture, and the challenges of change
Motivated by the lack of technology during his critical care period, Kos spent eight years crusading to introduce electronic medical record systems into hospitals. But nothing improved.

“We digitized, but we digitized all of the mistakes too. Then it dawned on me – digitization is important, but it’s not transformation.”

Without the support of cloud storage, or the data-analysis power of AI and machine learning, the full potential of these digitized records wasn’t even close to being realised. Only years later, once cloud technology was accepted on a wider scale and collaborative tools such as Skype and real-time document editing in the cloud were established, could this initial digitization move on to the next level.

Research has shown that an organisation with the most advanced technology still won’t be as effective if it lacks the right company culture. Employees must be willing to embrace their new tools, while leaders must encourage a culture of learning. Only then can the new tools be as effective as possible.

In the world of medicine, however, adopting the right culture for technological change can prove to be a challenge.

“Healthcare professionals are rather inward-looking,” says Kos. “Doctors listen to doctors. It’s a very top-down, hierarchical environment. You could have the best technology in the world, but if the culture isn’t ready to embrace it with a willingness to learn, it’s just not going to work.”


Microsoft to acquire Bonsai in move to build ‘brains’ for autonomous systems

Bonsai’s team members. Photo courtesy of Bonsai.

With AI’s meteoric rise, autonomous systems have been projected to grow to more than 800 million in operation by 2025. However, while long envisioned in science fiction, truly intelligent autonomous systems remain elusive, a holy grail. The reality today is that training autonomous systems to function amid the many unforeseen situations of the real world is very hard and requires deep expertise in AI, which essentially makes it unscalable.

To achieve this inflection point in AI’s growth, traditional machine learning methodologies aren’t enough. Bringing intelligence to autonomous systems at scale will require a unique combination of the new practice of machine teaching, advances in deep reinforcement learning, and leveraging simulation for training. Microsoft has been on a path to make this a reality through continued AI research breakthroughs; the development of the powerful Azure AI platform of tools, services and infrastructure; advances in deep learning, including our acquisition of Maluuba; and the impressive efficiencies we’ve achieved in simulation-based training with Microsoft Research’s AirSim tool. With software developers at the center of digital transformation, our pending acquisition of GitHub further underscores just how imperative it is that we empower developers to break through and lead this next wave of innovation.

Today we are excited to take another major step forward in our vision to make it easier for developers and subject matter experts to build the “brains” (machine learning models) for autonomous systems of all kinds, with the signing of an agreement to acquire Bonsai. Based in Berkeley, California, and an M12 portfolio company, Bonsai has developed a novel approach using machine teaching that abstracts away the low-level mechanics of machine learning, so that subject matter experts, regardless of AI aptitude, can specify and train autonomous systems to accomplish tasks. The actual training takes place inside a simulated environment.
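Bonsai’s actual platform uses deep reinforcement learning and its own machine-teaching tooling, none of which is reproduced here. Purely as an illustrative, hypothetical sketch of what “training inside a simulated environment” means, here is a tabular Q-learning loop run against a toy simulator:

```python
import random

class ChainSim:
    """Toy deterministic 'simulator': move a cursor from position 0 to GOAL."""
    GOAL = 5

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.GOAL, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.GOAL
        reward = 1.0 if done else -0.01  # small step cost encourages progress
        return self.pos, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning against the simulator (epsilon-greedy exploration)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(ChainSim.GOAL + 1) for a in (0, 1)}
    sim = ChainSim()
    for _ in range(episodes):
        s, done = sim.reset(), False
        while not done:
            if rng.random() < eps:                      # explore
                a = rng.choice((0, 1))
            else:                                       # exploit current estimates
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done = sim.step(a)
            target = r if done else r + gamma * max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

def greedy_rollout(q, max_steps=20):
    """Run the learned policy; returns (reached_goal, steps_taken)."""
    sim = ChainSim()
    s, steps, done = sim.reset(), 0, False
    while not done and steps < max_steps:
        a = max((0, 1), key=lambda act: q[(s, act)])
        s, _, done = sim.step(a)
        steps += 1
    return done, steps
```

The key design point the paragraph describes is that all trial and error happens in the simulator, where failures are free, and only the trained policy is deployed.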

The company is building a general-purpose, deep reinforcement learning platform especially suited for enterprises leveraging industrial control systems such as robotics, energy, HVAC, manufacturing and autonomous systems in general. This includes unique machine-teaching innovations, automated model generation and management, a host of APIs and SDKs for simulator integration, as well as pre-built support for leading simulations all packaged in one end-to-end platform.

Bonsai’s platform, combined with the rich simulation tools and reinforcement learning work in Microsoft Research, becomes the simplest and richest AI toolchain for building any kind of autonomous system for control and calibration tasks. This toolchain will compose with Azure Machine Learning running on the Azure Cloud with GPUs and Brainwave, and models built with it will be deployed and managed in Azure IoT, giving Microsoft an end-to-end solution for building, operating and enhancing “brains” for autonomous systems.

What I find exciting is that Bonsai has achieved some remarkable breakthroughs with their approach that will have a profound impact on AI development. Last fall, they established a new reinforcement learning benchmark for programming industrial control systems. Using a robotics task to demonstrate the achievement, the platform successfully trained a simulated robotic arm to grasp and stack blocks on top of one another by breaking down the task into simpler sub-concepts. Their novel technique performed 45 times faster than a comparable approach from Google’s DeepMind. Then, earlier this year, they extended deep reinforcement learning’s capabilities beyond traditional game play, where it’s often demonstrated, to real-world applications. Using Bonsai’s AI Platform and machine teaching, subject matter experts from Siemens, with no AI expertise, trained an AI model to autocalibrate a Computer Numerical Control machine 30 times faster than the traditional approach. This represented a huge milestone in industrial AI, and the implications when considered across the broader sector are just staggering.

To realize this vision of making AI more accessible and valuable for all, we have to remove the barriers to development, empowering every developer, regardless of machine learning expertise, to be an AI developer. Bonsai has made tremendous progress here and Microsoft remains committed to furthering this work. We already deliver the most comprehensive collection of AI tools and services that make it easier for any developer to code and integrate pre-built and custom AI capabilities into applications and extend to any scenario. There are over a million developers using our pre-built Microsoft Cognitive Services, a collection of intelligent APIs that enable developers to easily leverage high-quality vision, speech, language, search and knowledge technologies in their apps with a few lines of code. And last fall, we led a combined industry push to foster a more open AI ecosystem, bringing AI advances to all developers, on any platform, using any language through the introduction of the Open Neural Network Exchange (ONNX) format and Gluon open source interface for deep learning.

We’re confident this unique marriage of research, novel approach and technology will go a long way toward removing barriers and accelerating the current state of AI development. We look forward to having Bonsai and their team join us to help realize this collective vision.



AI powers Windows 10 April 2018 Update rollout

Artificial Intelligence (AI) continues to be a key area of investment for Microsoft, and we’re pleased to announce that for the first time we’ve leveraged AI at scale to greatly improve the quality and reliability of the Windows 10 April 2018 Update rollout.  Our AI approach intelligently selects devices that our feedback data indicate would have a great update experience and offers the April 2018 Update to these devices first.  As our rollout progresses, we continuously collect update experience data and retrain our models to learn which devices will have a positive update experience, and where we may need to wait until we have higher confidence in a great experience.  Our overall rollout objective is for a safe and reliable update, which means we only go as fast as is safe.
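Microsoft hasn’t published the internals of this rollout model; purely as an illustrative sketch (the feature names are hypothetical), the gating idea described above, train on outcomes from devices already updated, then offer the update only where predicted success is high, could look like this:

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_update_model(history, epochs=200, lr=0.5):
    """Fit a tiny logistic model: device features -> P(positive update experience).
    `history` holds (feature_vector, outcome) pairs from devices already updated;
    retraining on fresh pairs is how the rollout 'learns' as it progresses."""
    n = len(history[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in history:
            p = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log-loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def offer_update(w, b, device_features, threshold=0.8):
    """Offer the update only where predicted success clears the confidence bar."""
    p = _sigmoid(sum(wi * xi for wi, xi in zip(w, device_features)) + b)
    return p >= threshold

# Hypothetical telemetry signals: (driver_ok, app_compat_ok, disk_space_ok)
history = [
    ((1, 1, 1), 1), ((1, 1, 0), 1), ((1, 0, 1), 1),
    ((0, 0, 1), 0), ((0, 1, 0), 0), ((0, 0, 0), 0),
]
w, b = train_update_model(history)
```

The high threshold reflects the stated objective: devices the model is unsure about simply wait for a later wave, when more feedback data has accumulated.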

Early returns are very positive: With over 250 million machines on the April 2018 Update, we are seeing higher satisfaction numbers, fewer known issues, and lower support call volumes compared to previous Windows 10 releases.

Our AI/machine learning approach started with a pilot program during the Windows 10 Fall Creators Update rollout. We studied the characteristics of devices that data indicated had a great update experience and trained our model to spot and target those devices. In our limited trial during the Fall Creators Update rollout, we consistently saw a higher rate of positive update experiences for devices identified using the AI model, with fewer rollbacks, uninstalls, reliability issues, and negative user feedback. For the April 2018 Update rollout, we substantially expanded this approach, developing a robust machine learning model that teaches the system to identify the best target devices based on our extensive listening systems.

AI means both safe AND fast

Our AI approach has enabled us to quickly spot issues during deployment of a feature update, and as a result has also allowed us to go faster responsibly.  In fact, the April 2018 Update is officially the fastest version of Windows 10 to reach 250 million devices, achieving that mark in less than half the time it took the Fall Creators Update!

When our AI model, feedback or telemetry data indicate that there may be an issue, we quickly adjust and prevent affected devices from being offered the update until we thoroughly investigate.  Once issues are resolved we proceed again with confidence.  This allows us to throttle the update rollout to customers without them needing to take any action.

In cases where devices already offered the update may see issues, we communicate via our customer service forums to let our customers know what is occurring and what actions we are taking. A recent example was a black screen/reboot issue we detected within 24 hours of its first appearance. We immediately blocked all PCs that could be impacted by this issue from being updated, and communicated to customers within 24 hours, including an initial workaround. In the next 24 hours, in cooperation with Avast, Microsoft identified an element of the Avast Behavior Shield that conflicted with the April 2018 Update. Avast immediately released a fix to prevent this issue from recurring, enabling us to continue to safely roll out the April 2018 Update to those devices.
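The block-and-release mechanism described above can be sketched as a simple hold gate; the signature names below are hypothetical, and the real system is of course far more elaborate:

```python
class RolloutGate:
    """Sketch of a 'safeguard hold': stop offering the update to device
    cohorts matching a known issue signature until the issue is resolved."""

    def __init__(self):
        self.holds = set()

    def add_hold(self, signature):
        """Block devices matching this signature (e.g. a conflicting product)."""
        self.holds.add(signature)

    def resolve(self, signature):
        """Lift the hold once a fix ships for the conflicting component."""
        self.holds.discard(signature)

    def eligible(self, device_signatures):
        """A device is offered the update only if none of its signatures are held."""
        return self.holds.isdisjoint(device_signatures)
```

In the Avast example, a hold on the conflicting component would be added the day the issue was detected and lifted as soon as the vendor’s fix shipped, without affected customers needing to take any action.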

Windows 10 continually improving quality

We are also seeing quality improvements in Windows 10, which is approaching 700 million monthly active devices.  Early data show the quality of the April 2018 Update exceeding that of earlier versions of Windows 10 in both reliability and performance.  Of course, this work is never done, and we continue to work with our hardware and software partners to drive further performance and reliability improvements in Windows 10.  Improvements in the April 2018 Update include:

  • 20% reduction in system stability issues
  • 20% total reduction in operating system and driver stability issues, in collaboration with our hardware partners, on over 400k ecosystem drivers
  • Faster updates by reducing the amount of time your device is offline updating by up to 63%
    (Fun Fact: telemetry shows a U.S.-based PC updated from the Fall Creators Update to the April 2018 update in just over three minutes!)
  • Edge launch times improved by up to 40-50% (post-logon window)

Our internal customer support teams are seeing a continued reduction in call and online support requests for Windows 10 with the April 2018 Update.  Our OEMs also continue to experience reductions in monthly customer support volumes with this update.

More devices, declining customer support volume:

Graph showing Windows 10 users and customer support contacts

Windows 10 April 2018 Update (1803) is now fully available

Based on the update quality and reliability we are seeing through our AI approach, we are now expanding the release broadly to make the April 2018 Update (version 1803) fully available for all compatible devices running Windows 10 worldwide. Full availability is the final phase of our rollout process. You don’t have to do anything to get the update; it will roll out to your device automatically through Windows Update.

Enterprise customers can also follow the same targeted approach with the Semi-Annual Channel and fully deploy Windows 10, version 1803 when ready.  IT administrators can decide when to deploy broadly once they have validated that the apps, devices, and infrastructure in their organization work well with this release.  For an overview of the update, see What’s new for IT pros in Windows 10, version 1803.  Additionally, you can leverage Windows Analytics tools to plan, test, and deploy Windows 10 in your organization and to accelerate your Windows 10 migration. You can learn more about Windows Analytics tools and the Semi-Annual Channel releases.

An up-to-date device is the most secure device

The ability to roll out an updated version of Windows 10 safely, at massive scale and velocity, using AI allows us to ensure that the greatest number of customer devices have the latest security, technology, and features in the shortest time.  As I’ve noted many times, we’re always actively listening. If you have any feedback, please share it with us via the Feedback Hub app.