
What’s new with Seeing AI

Saqib Shaikh holds his camera phone in front of his face with Seeing AI open on the screen

By Saqib Shaikh, Software Engineering Manager and Project Lead for Seeing AI

Seeing AI provides people who are blind or have low vision an easier way to understand the world around them through the cameras on their smartphones. Whether in a room, on a street, in a mall or an office – people are using the app to independently accomplish daily tasks like never before. Seeing AI helps users read printed text in books, restaurant menus, street signs and handwritten notes, as well as identify banknotes and products via their barcodes. Leveraging on-device facial-recognition technology, the app can even describe the physical appearance of people and predict their mood.

Today, we are announcing new Seeing AI features for the enthusiastic community of users who share their experiences with the app, recommend new capabilities and suggest improvements for its functionalities. Inspired by this rich feedback, here are the updates rolling out to Seeing AI to enhance the user’s experience:

  • Explore photos by touch: Leveraging the Custom Vision Service in tandem with the Computer Vision API, this new feature lets users tap an image on a touchscreen to hear a description of the objects within it and the spatial relationships between them. Users can explore photos of their surroundings taken on the Scene channel, family photos stored in their photo browser, and even images shared on social media by summoning the options menu while in other apps.
  • Native iPad support: For the first time, we’re releasing iPad support to provide a better Seeing AI experience that accounts for the larger display. iPad support is particularly important for individuals using Seeing AI in academic or other professional settings where they are unable to use a cellular device.
  • Channel improvements: Users can now customize the order in which channels are shown, enabling easier access to favorite features. We’ve also made it easier to access the face recognition function while on the Person channel, by relocating the feature directly to the main screen. Additionally, when analyzing photos from other apps, the app will now provide audio cues that indicate Seeing AI is processing the image.
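The explore-by-touch feature itself is built on Microsoft's vision services, but the underlying idea (find the tapped object, then describe neighbors relative to it) can be sketched with plain geometry. The object names and bounding boxes below are hypothetical detections for illustration, not output from the actual API:

```python
# Sketch: report which detected object lies under a tap, and where another
# object sits relative to it. Detections here are invented examples in the
# form (label, (left, top, width, height)) in pixels.

def object_at(tap_x, tap_y, detections):
    """Return the label of the first bounding box containing the tap point."""
    for name, (left, top, width, height) in detections:
        if left <= tap_x <= left + width and top <= tap_y <= top + height:
            return name
    return None

def relation(box_a, box_b):
    """Describe box_b's position relative to box_a, using box centers."""
    ax, ay = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx, by = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    horizontal = "left of" if bx < ax else "right of"
    vertical = "above" if by < ay else "below"
    # Report the axis with the larger separation between centers.
    return horizontal if abs(bx - ax) >= abs(by - ay) else vertical

detections = [
    ("dog", (10, 120, 80, 60)),
    ("chair", (150, 100, 70, 90)),
]

tapped = object_at(40, 150, detections)  # finger lands inside the dog's box
print(tapped)
print(f"chair is {relation(detections[0][1], detections[1][1])} the dog")
```

A production version would feed real detections from a vision service into the same kind of spatial logic, and voice the result through a screen reader.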

Since the app’s launch in 2017, Seeing AI has leveraged AI technology and inclusive design to help people with more than 10 million tasks. If you haven’t tried Seeing AI yet, download it for free on the App Store. If you have, please share your thoughts, feedback or questions with us at seeingai@microsoft.com, or through the Disability Answer Desk and Accessibility User Voice Forum.


Seattle Times: ‘Even one cigarette’ in pregnancy can raise risk of babies’ death, Seattle Children’s and Microsoft find

It’s no surprise that smoking during pregnancy is unhealthy for the fetus — just as it’s unhealthy for the person smoking. But the powerful combination of medical research and data science has given new insights into the risks involved, specifically when it comes to babies suddenly dying in their sleep.

The risk of Sudden Unexpected Infant Death (SUID) increases with every cigarette smoked during pregnancy, according to a joint study by Seattle Children’s Research Institute and Microsoft data scientists.

Further, while smoking less or quitting during pregnancy can help significantly, a risk of SUID exists even if a person stops smoking right before becoming pregnant, the team demonstrated.

“Any amount of smoking, even one cigarette, can double your risk,” said Tatiana Anderson, a post-doctoral research fellow at Children’s who worked on the study, which was published Monday in the journal Pediatrics.


Anderson and the rest of the team estimate that smoking during pregnancy is responsible for 800 of the approximately 3,700 SUID deaths in the United States every year. That’s 22 percent of all SUID cases.

The team analyzed vast data sets from the Centers for Disease Control and Prevention (CDC) that included every baby born in the United States from 2007 to 2011. In that time span, more than 20 million babies were born and 19,127 died of SUID, which includes Sudden Infant Death Syndrome (SIDS).

The study found that the risk of SUID doubles if a person goes from not smoking to smoking just one cigarette daily throughout pregnancy. At a pack a day (20 cigarettes), the risk is tripled compared to nonsmokers. The odds plateau from there.

The chance of SUID decreases when women quit smoking or smoke less: Women who tapered their smoking by the third trimester showed a 12 percent decreased risk. Quitting altogether by the third trimester lowered the risk of SUID by 23 percent.

The biggest predictor of SUID risk was the average number of cigarettes smoked daily throughout the three trimesters of pregnancy, rather than smoking more or less at any particular point.

“Thus, a woman who smoked 20 cigarettes per day in the first trimester and reduced to 10 cigarettes per day in subsequent trimesters had a similarly reduced SUID risk as a woman who averaged 13 cigarettes per day in each trimester,” the study states.
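The arithmetic behind that comparison is simply the mean daily cigarette count across the three trimesters; a minimal sketch:

```python
# The study's strongest predictor of SUID risk: the average number of
# cigarettes smoked daily across the three trimesters.

def average_daily_cigarettes(trimester_counts):
    """Mean daily cigarettes over the trimesters given per-trimester counts."""
    return sum(trimester_counts) / len(trimester_counts)

tapering = average_daily_cigarettes([20, 10, 10])  # cut down after trimester 1
steady = average_daily_cigarettes([13, 13, 13])    # constant throughout

print(round(tapering, 1))  # about 13.3, close to the steady smoker's 13
print(round(steady, 1))
```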

Having such precise data about the effects of smoking before and during pregnancy better arms health-care providers to speak with their patients, Anderson said.

“Doctors need to have frank discussions with patients,” she said. “Every cigarette you can eliminate on a daily basis will reduce your risk of SUID.”

Microsoft data scientists teamed up with the Children’s researchers after John Kahan, who heads up customer data and analytics for Microsoft, lost his son Aaron to SIDS in 2003. After Aaron’s death, days after he came home from the hospital, Kahan started the Aaron Matthew SIDS Research Guild. In 2016, he climbed Mount Kilimanjaro to raise money for SIDS research.

When he returned from Africa, he found out his team at Microsoft had been working with the available data on infant deaths. Their goal was to use algorithms to analyze the data and help come up with a way to save babies like Aaron from SUID.

Juan Lavista, a member of Kahan’s team at that time, is now the senior director of data science at the AI for Good research lab, which is part of an initiative called AI for Humanitarian Action, launched by Microsoft president Brad Smith. The idea behind the initiative is to use artificial intelligence to tackle some of the world’s most difficult problems, and it has allowed Lavista to work on things like the SUID study full time instead of cramming it in around his day job.

Data scientists can use computing power to work with huge data sets to help solve confounding issues like SUID, climate change and immigration, Lavista said.

“There are many problems the world has that, we believe, we can make a difference with AI,” he said.

The collaboration has been exciting for Anderson, the Children’s research fellow. She says this unusual partnership between the medical world and the technology sector has applications in many different fields.

“I think it is really exciting because it is a concept that absolutely can be used to ask questions outside of SIDS,” Anderson said. “Everybody is there because they want to make a difference. It is very much a collaborative effort.”

The scientists at Microsoft and Children’s aren’t stopping with the publication of this study. Lavista said they are delving into other questions surrounding SUID, such as the impact of prenatal care, how the age of an infant relates to sudden death and an examination of what SUID looks like in all 50 states.


Is drought on the horizon? Researchers turn to AI in a bid to improve forecasts

As winter drags on, some people wonder whether to pack shorts for a late-March escape to Florida, while others eye April temperature trends in anticipation of sowing crops. Water managers in the western U.S. check for the possibility of early-spring storms to top off mountain snowpack that is crucial for irrigation, hydropower and salmon in the summer months.

Unfortunately, forecasts for this timeframe — roughly two to six weeks out — are a crapshoot, noted Lester Mackey, a statistical machine learning researcher at Microsoft’s New England research lab in Cambridge, Massachusetts. Mackey is bringing his expertise in artificial intelligence to the table in a bid to increase the odds of accurate and reliable forecasts.

“The subseasonal regime is where forecasts could use the most help,” he said.

Mackey knew little about weather and climate forecasting until Judah Cohen, a climatologist at Atmospheric and Environmental Research, a Verisk business that consults about climate risk in Lexington, Massachusetts, reached out to him for help using machine learning techniques to tease out repeating weather and climate patterns from mountains of historical data as a way to improve subseasonal and seasonal forecast models.

The preliminary machine learning based forecast models that Mackey, Cohen and their colleagues developed outperformed the standard models used by U.S. government agencies to generate subseasonal forecasts of temperature and precipitation two to four weeks out and four to six weeks out in a competition sponsored by the U.S. Bureau of Reclamation.

Mackey’s team recently secured funding from Microsoft’s AI for Earth initiative to improve and refine its technique with an eye toward advancing the technology for the social good.

“Lester is working on this because it is a hard problem in machine learning, not because it is a hard problem in weather forecasting,” noted Lucas Joppa, Microsoft’s chief environmental officer who runs the AI for Earth program, as he explained why his group is helping fund the research. “It just so happens that the techniques he is interested in exploring have huge applicability in weather forecasting, which happens to have huge applicability in broader societal and economic domains.”

Fields being irrigated on the edge of the desert in the Cuyama Valley. Photo by Getty Images.

AI on the brain

Mackey said current weather models perform well up to about seven days in advance, and climate forecast models get more reliable as the time horizon extends from seasons to decades. Subseasonal forecasts are a middle ground, relying on a mix of variables that impact short-term weather such as daily temperature and wind and seasonal factors such as the state of El Niño and the extent of sea ice in the Arctic.

Cohen contacted Mackey out of a belief that machine learning, the branch of AI that recognizes patterns in statistical data to make predictions, could help improve his method of generating subseasonal forecasts by gleaning insights from troves of historical weather and climate data.

“I am basically doing something like machine learned pattern recognition in my head,” explained Cohen, noting that weather patterns repeat throughout the seasons and from year to year and that therefore pattern recognition can and should inform longer-term forecasts. “I thought maybe I can improve on what I am doing in my head with some of the machine learning techniques that are out there.”

Using patterns in historical weather data to predict the future was standard practice in weather and climate forecast generation until the 1980s. That’s when physical models of how the atmosphere and oceans evolve began to dominate the industry. These models have grown in popularity and sophistication with the exponential rise in computing power.

“Today, all of the major climate centers employ massive supercomputers to simulate the atmosphere and oceans,” said Mackey. “The forecasts have improved substantially over time, but they make relatively little use of historical data. Instead, they ingest today’s weather conditions and then push forward their differential equations.”

A combine harvester moving on a snow-covered field. Photo by Getty Images.

Forecast competition

As Mackey and Cohen were discussing a research collaboration, Cohen received notice of a competition sponsored by the U.S. Bureau of Reclamation to improve subseasonal forecasts of temperature and precipitation in the western U.S. The government agency is interested in improved subseasonal forecasts to better prepare water managers for shifts in hydrologic regimes, including the onset of drought and wet weather extremes.

“I said, ‘Hey, what do you think about trying to enter this competition as a way to motivate us, to make some progress,’” recalled Cohen.

Mackey, who was an assistant professor of statistics at Stanford University in California prior to joining Microsoft’s research organization and remains an adjunct professor at the university, invited two graduate students to participate in the project. “None of us had experience doing work in this area and we thought this would be a great way to get our feet wet,” he said.

Over the course of the 13-month competition, the researchers experimented with two types of machine learning approaches. One combed through a kitchen sink of data containing everything from historical temperature and precipitation records to data on sea ice concentration and the state of El Niño as well as an ensemble of physical forecast models. The other approach focused only on historical data for temperature when forecasting temperature or precipitation when forecasting precipitation.

“We were making forecasts every two weeks and between those forecasts we were acquiring new data, processing it, building some of the infrastructure for testing out new methods, developing methods and evaluating them,” Mackey explained. “And then every two weeks we had to stop what we were doing and just make a forecast and repeat.”

Toward the end of the competition, Mackey’s team discovered that an ensemble of both machine learning approaches performed better than either alone.
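The team's actual ensembling method is described in their paper; purely as a toy illustration of why combining two imperfect forecasters can beat either alone, consider made-up temperature forecasts whose errors partly cancel when averaged:

```python
# Toy illustration of ensembling two forecast methods: when their errors are
# not perfectly correlated, averaging the predictions cancels some error.
# All numbers below are invented for illustration.

def rmse(pred, truth):
    """Root-mean-square error of a list of predictions against the truth."""
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

truth = [10.0, 12.0, 11.0, 13.0]
kitchen_sink = [11.0, 11.0, 12.0, 12.0]    # stand-in for the many-variable model
history_only = [9.0, 13.0, 10.0, 13.5]     # stand-in for the historical model
ensemble = [(a + b) / 2 for a, b in zip(kitchen_sink, history_only)]

print(f"kitchen sink: {rmse(kitchen_sink, truth):.3f}")
print(f"history only: {rmse(history_only, truth):.3f}")
print(f"ensemble:     {rmse(ensemble, truth):.3f}")  # lowest of the three
```

The same intuition is why the team found the ensemble of the two machine learning approaches stronger than either approach on its own.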

Final results of the competition were announced today. Mackey, Cohen and their colleagues captured first place in forecasting average temperature three to four weeks in advance and second place in forecasting total precipitation five and six weeks out.

A flooded river under a walking bridge. Photo by Getty Images.

Forecast for the future

After the competition, the collaborators combined their ensemble of machine learning approaches with the standard models used by U.S. government agencies to generate subseasonal forecasts and found that the combined models improved the accuracy of the operational forecast by between 37 and 53 percent for temperature and by between 128 and 154 percent for precipitation. These results are reported in a paper the team posted on arXiv.org.

“I think we will continue to see these types of approaches be further refined and increase in the breadth of their use within the field of forecasting,” said Kenneth Nowak, water availability research coordinator with the U.S. Bureau of Reclamation, who organized the forecast rodeo. He added that government agencies will “look for opportunities to leverage” machine learning in future generations of operational forecast models.

Microsoft’s AI for Earth program is providing funding to Mackey and colleagues to hire an intern to expand and refine their machine learning based forecasting technique. The collaborators also hope that other machine learning researchers will be drawn to the challenge of cracking the code to accurate and reliable subseasonal forecasts. To encourage these efforts, they have made available to the public the dataset they created to train their models.

Cohen, who kicked off the collaboration with Mackey out of a curiosity about the potential impact of AI on subseasonal to seasonal climate forecasts, said, “I see the benefit of machine learning, absolutely. This is not the end; more like the beginning. There is a lot more that we can do to increase its applicability.”

John Roach writes about Microsoft research and innovation. Follow him on Twitter.


18 best practices for human-centered AI design


By Mihaela Vorvoreanu, Saleema Amershi, and Penny Collisson

Today we’re excited to share a set of Guidelines for Human-AI Interaction. These 18 guidelines can help you design AI systems and features that are more human-centered. Based on more than two decades of thinking and research, they have been validated through a rigorous study published in CHI 2019.

Why do we need guidelines for human-AI interaction?

While classic interaction guidelines hold with AI systems, attributes of AI services, including their accuracy, failure modalities, and understandability raise new challenges and opportunities. Consistency, for example, is a classic design guideline that advocates for predictable behaviors and minimizing unexpected changes. AI components, however, can be inconsistent because they may learn and adapt over time.

We need updated guidance on designing interactions with AI services that provide meaningful experiences, keeping the user in control and respecting users’ values, goals, and attention.

Why these guidelines?

AI-focused design guidance is blooming across UX conferences, the tech press, and within individual design teams. That’s exciting, but it can be hard to know where to start. We wanted to help with that, so…

  • We didn’t just make these up! They come from more than 20 years of work. We read numerous research papers, magazine articles, and blog posts. We synthesized a great deal of knowledge acquired across the design community into a set of guidelines that apply to a wide range of AI products, are specific, and are observable at the UI level.
  • We validated the guidelines through rigorous research. We tested the guidelines through three rounds of validation with UX and HCI experts. Based on their feedback, we iterated the guidelines until experts confirmed that they were clear and specific.

Let’s dive into the guidelines!

The guidelines are grouped into four categories that indicate when during a user’s interactions they apply: upon initial engagement with the system, during interaction, when the AI service guesses wrong, and over time.

Initially

1. Make clear what the system can do.

2. Make clear how well the system can do what it can do.

The guidelines in the first group are about setting expectations: What are the AI’s capabilities? What level of quality or error can a user expect? Over-promising can hurt perceptions of the AI service.

PowerPoint’s QuickStarter illustrates Guideline 1, Make clear what the system can do. QuickStarter is a feature that helps you build an outline. Notice how QuickStarter provides explanatory text and suggested topics that help you understand the feature’s capabilities.

During Interaction

3. Time services based on context.

4. Show contextually relevant information.

5. Match relevant social norms.

6. Mitigate social biases.

This subset of guidelines is about context. Whether it’s the larger social and cultural context or the local context of a user’s setting, current task, and attention, AI systems should take context into consideration.

AI systems make inferences about people and their needs, and those depend on context. When AI systems take proactive action, it’s important for them to behave in socially acceptable ways. To apply Guidelines 5 and 6 effectively, ensure your team has enough diversity to cover each other’s blind spots.

Acronyms in Word highlights Guideline 4, Show contextually relevant information. It displays the meaning of abbreviations employed in your own work environment relative to the current open document.

When Wrong

7. Support efficient invocation.

8. Support efficient dismissal.

9. Support efficient correction.

10. Scope services when in doubt.

11. Make clear why the system did what it did.

Most AI services have some rate of failure. The guidelines in this group recommend how an AI system should behave when it is wrong or uncertain, which will inevitably happen.

The system might not trigger when expected, or might trigger at the wrong time, so it should be easy to invoke (Guideline 7) and dismiss (Guideline 8). When the system is wrong, it should be easy to correct it (Guideline 9), and when it is uncertain, Guideline 10 suggests building in techniques for helping the user complete the task on their own. For example, the AI system can gracefully fade out, or ask the user for clarification.
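None of this is code from a Microsoft product; purely as a hypothetical sketch, the behavior described for Guidelines 8 through 10 might look like the following in an assistant that attaches a confidence score to each suggestion. The thresholds are invented:

```python
# Hypothetical sketch of Guidelines 8-10: act when confident, ask for
# clarification when uncertain, and fade out gracefully otherwise.
# Threshold values are invented for illustration.

HIGH_CONFIDENCE = 0.85
LOW_CONFIDENCE = 0.50

def respond(suggestion, confidence):
    if confidence >= HIGH_CONFIDENCE:
        # Offer the suggestion, keeping it trivial to dismiss (Guideline 8)
        # and easy to correct in place (Guideline 9).
        return f"suggest: {suggestion} (press Esc to dismiss, or edit it)"
    if confidence >= LOW_CONFIDENCE:
        # Scope the service when in doubt (Guideline 10): ask, don't act.
        return f"ask: did you mean '{suggestion}'?"
    # Too uncertain: take no action and let the user complete the task.
    return "no action"

print(respond("March 3, 2 pm", 0.92))
print(respond("March 3, 2 pm", 0.60))
print(respond("March 3, 2 pm", 0.20))
```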

Auto Alt Text automatically generates alt text for photographs by using intelligent services in the cloud. It illustrates Guideline 9, Support efficient correction, because automatic descriptions can be easily modified by clicking the Alt Text button in the ribbon.

Over Time

12. Remember recent interactions.

13. Learn from user behavior.

14. Update and adapt cautiously.

15. Encourage granular feedback.

16. Convey the consequences of user actions.

17. Provide global controls.

18. Notify users about changes.

The guidelines in this group remind us that AI systems are like getting a new puppy: they are long-term investments and need careful planning so they can learn and improve over time. Learning (Guideline 13) also means that AI systems change over time. Changes need to be managed cautiously so the system doesn’t become unpredictable (Guideline 14). You can help users manage inherent inconsistencies in system behavior by notifying them about changes (Guideline 18).

Ideas in Excel empowers users to understand their data through high-level visual summaries, trends, and patterns. It encourages granular feedback (Guideline 15) on each suggestion by asking, “Is this helpful?”

What’s next?

If you’d like some more ideas, stay tuned for another post on this work, where we share some of the ways teams at Microsoft have been applying the guidelines. We’d love to hear about your experiences with them. Please share them in the comments.

Authors

Mihaela Vorvoreanu is a program manager working on human-AI interaction at Microsoft Research.

Saleema Amershi is a researcher working on human-AI interaction at Microsoft Research AI.

Penny Marsh Collisson is a user research manager working on AI in Office.

With thanks to our team who developed The Guidelines for Human-AI Interaction: Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz.

Thanks also to Ruth Kikin-Gil for her thoughtful collaboration, and for curating examples for this post.


New research: Leaders at successful companies embracing AI

The race is on
The survey results show that high-growth companies are not only more than twice as likely to actively use AI as lower-growth companies, but they also have bigger plans on a much shorter timeframe. Of the double-digit growth companies surveyed, almost all (94%) intend to invest in AI for decision-making within the next three years, and more than half plan to do so over the next 12 months. In comparison, the majority of low-growth companies are only looking to invest in decision-making AI in the next three to five years.

“What’s striking about the research is the difference between double-digit growth companies and those with lower growth,” says Susan Etlinger, Industry analyst with the Altimeter Group. “Double-digit growth companies are further along in their AI deployments, but also see a greater urgency in using more AI. They are looking at a one to three year timeframe – often really focused on the coming year. Lower growth companies are looking at more of a 5-year timeframe. What this says to me is that the more you know, the higher your sense of urgency is.”

Crucially, it’s not too late for those companies and leaders who are further behind in their AI journeys to start now, to increase their chances of remaining competitive.

Start small, learn fast and scale
The research findings show that leaders successfully use AI to invest more time in people, while it helps them create and execute new strategies. In addition, we have seen how leaders value AI’s ability to help them grow their own skills.

Evidence that the fastest-growing companies have invested – and will continue to invest – in AI also highlights the importance of businesses, inspired by their leadership, progressing on their AI journey sooner rather than later, before they risk losing their competitive edge to more progressive companies.

In Microsoft EMEA President Michel van der Bel’s words: “Start small, but start with intention. This will help teams build trust, learn from feedback and build confidence. In a nutshell, this is what will help get your AI journey off to a strong start.” Progress today, and reap the benefits for both yourself as a leader, and your company as a whole, tomorrow.

For more information on progressing your AI journey, please feel free to visit our AI business resources.



New AI and IoT solution frees skilled fish farming workers in Japan to focus on more demanding tasks

Japan’s labor-intensive fish farming sector has taken a major step toward automation with the adoption of an artificial intelligence (AI) and Internet of Things (IoT) solution that frees up highly skilled workers from a crucial, but highly time-consuming, task.

The breakthrough follows half a year of field tests at Kindai University, which plays a significant role in the national production of red sea bream – a fish known in Japanese as “Madai” that is prized by sushi and sashimi lovers both at home and abroad.

The university’s Aquaculture Research Institute hatches red sea bream and raises them to a juvenile stage, known as fingerlings. Every year, it sells around 12 million fingerlings to fish farms that grow them to adult size for the market. To meet rising demand for the delicacy, Kindai’s workers must hand sort as many as 250,000 fingerlings a day.

Fingerlings in a holding pen.

Japan’s aging demographics and other factors have made the recruitment of experienced sorters difficult, particularly when so much repetitive work is required. To counter this problem, the university approached its long-term partner company, Toyota Tsusho, which in turn brought in Microsoft Japan to help come up with ways of automating a number of processes. The aim is to relieve workers of manually intensive duties so that they can focus their valuable skills on more demanding tasks.

This latest innovation centers on software that automatically regulates the flow of water through pumps that transfer fingerlings from their pens and onto conveyor belts for sorting. IoT and AI tools continuously monitor and adjust the flows.

Now automated … Fingerlings being pumped from their pens.

“There are three processes involved in sorting fingerlings,” explains Naoki Taniguchi, who manages the Institute’s Larval Rearing Division and is Deputy General Manager of the Aquaculture Technology and Production Center. “Firstly, we gather the fingerlings near the mouth of the pump that sucks them along with seawater from the fish pens without injuring them. Secondly, we must constantly adjust the pump’s water flow to the conveyor belts. Lastly, we sort them by removing fingerlings that are too small or defective from the conveyor belts.”

Naoki Taniguchi of the Institute’s Larval Rearing Division, Aquaculture Technology and Production Center

Taniguchi said adjusting the water flow from a pump is crucial.

“If the flow is too fast, too many fingerlings will make it onto the conveyor belts, and our sorting teams won’t be able to keep up, and some fish that should be removed will be missed. If the flow is too slow, too few fingerlings will be sorted, and production will fall short. Until now, it’s been a process entrusted only to a few operators with sufficient experience.”

The new automated transfer system was created with Microsoft Azure Machine Learning Studio, using image-analysis technology that recognizes the changing ratio of fish shapes to vacant areas within a pump’s pipes. From this, the system learned how expert human operators adjust flows optimally.

Field tests started last year, and within six months the system achieved the same flow control results as an operator.
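The production system learned its policy in Azure Machine Learning Studio; as a much-simplified, hypothetical sketch of the loop it automates, a feedback rule driven by the observed fish-density ratio might look like this. The target, tolerance, and step size below are invented for illustration:

```python
# Toy feedback controller mimicking the automated pump: keep the fraction of
# the pipe image occupied by fish near a target so the sorting team can keep
# up. All constants here are invented for illustration.

TARGET_RATIO = 0.40   # desired fraction of the pipe image occupied by fish
TOLERANCE = 0.05
STEP = 0.1            # how much to nudge the pump per adjustment

def adjust_flow(current_flow, fish_ratio):
    """Return a new pump flow given the fish-density ratio from image analysis."""
    if fish_ratio > TARGET_RATIO + TOLERANCE:
        return max(0.0, current_flow - STEP)   # too many fish: slow the pump
    if fish_ratio < TARGET_RATIO - TOLERANCE:
        return current_flow + STEP             # too few fish: speed it up
    return current_flow                        # within tolerance: hold steady

flow = 1.0
for observed in [0.55, 0.50, 0.42, 0.30]:      # ratios from successive images
    flow = adjust_flow(flow, observed)
    print(f"ratio={observed:.2f} -> flow={flow:.1f}")
```

The real system replaces the hand-set rule with behavior learned from images of expert operators' adjustments, but the control loop it closes is of this shape.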

Taniguchi said employees, who often used to spend their whole working day just adjusting water flows, are now able to devote their time to applying their rich experience in streamlining other fish farming processes. They will also be able to pass on their technical knowledge to a new generation of aquaculture specialists.

Sorting fingerlings on conveyor belts.

He hopes greater automation will make jobs in the sector more attractive to younger workers looking to build careers.

“Japan’s fishing industry employs about 150,000 people. But 80 percent of them are more than 40 years old. It is vital that we attract young people to the industry,” he said. “This automatic transfer system is just the beginning of our journey. Ultimately, we aim to automate the sorting process itself as well.”



New research: Emotion and Cognition in the Age of AI

Given the accelerating pace of change around the globe, the worlds of school and work are undergoing massive transformations. New technologies such as artificial intelligence are empowering today’s students to address big challenges that motivate them, such as reversing climate change and slowing the spread of disease. At the same time, collaboration tools, mixed reality and social media are bringing them closer to one another than ever before.

To successfully navigate these changes and to leverage the opportunities ahead of them, we need to prepare students with the diverse skills they will need in the future.

To better understand how to prepare today’s kindergartners to thrive in work and life, last year we released research about the Class of 2030.

Our findings highlighted two core themes: Student-centric approaches such as personalized learning and the growing importance of social-emotional skills.

Social-emotional skills such as collaboration, empathy and creativity have long been essential, but our research revealed they have become newly important to employers and educators alike. Social-emotional skills are also necessary for well-being, which is a key predictor of academic and employment success.

So this year we decided to dig deeper, to better understand what educators and schools worldwide are doing to enhance students’ skills and well-being and to understand how technology can help. We worked with the Economist Intelligence Unit (EIU) to survey more than 760 educators in 15 countries. From Mexico to Sweden and from Indonesia to Canada, we listened to the voices of educators. We also interviewed leading experts and reviewed 90 pieces of research.

What we learned is that educators around the globe are placing a high priority on student well-being and they are actively seeking ways to nurture it in their classrooms, across the school environment, and in their communities.

According to the survey, 80 percent of educators believe that well-being is critical for academic success, for developing foundational literacies and for cultivating strong communication skills, and 70 percent of educators say well-being has grown in importance for K-12 students during their careers.

At the same time, school systems have not moved as quickly as educators to prioritize well-being. Only 53 percent of educators said their schools have a formal policy in place to support students’ well-being. Individual educators can do great things in their own classrooms. But to impact well-being at scale, systemic approaches are needed.

We identified some common barriers that educators encounter in trying to help improve their students’ well-being:

  • 64 percent of educators said they lack the resources or time to support students’ well-being
  • 71 percent of respondents said changes to enhance student well-being need to be driven by school leaders

And, we asked educators what technologies they find most beneficial in overcoming these barriers. Three areas stood out:

  • 58 percent mentioned immersive experiences that allow students to explore scenarios from the perspective of others, which show strong promise for promoting social-emotional skills, particularly empathy
  • 49 percent cited tools that foster collaboration among students
  • 46 percent of educators favor tools that help collect and analyze data about students’ emotional states

In addition, technology provides the critical scale to take any of these approaches beyond a single classroom.

To help identify best practices, we took a close look at schools where teachers report their students enjoy higher-than-average well-being. We found several common threads. These leader schools are more likely to:

  • Have a formal plan to promote well-being
  • Measure and monitor well-being as well as academic achievement
  • Support inclusive classroom practices that amplify student voice
  • Engage purposefully with the community
  • Take a whole-school approach to professional learning

A complete summary of our research results will be released in March. In the meantime, we invite you to join our free webinar series on Teaching Happiness, starting February 25, 2019, for a broader exploration of the skills that empower students to lead happy and fulfilling lives.

We are excited to be on this journey together with all of you, to learn from you, and to contribute our insights and our technologies to help every student on the planet achieve more.

post

How Europe’s clinicians and patients are using data and AI to fight cancer

Fabian Bolin was just 28 years old when he found out he had leukemia. A promising actor, he felt as if the diagnosis had suddenly taken control of his future away from him, and that nothing could help him regain it.

His experience is all too common.

Each year, there are an estimated 3.7 million new cases of cancer and 1.9 million deaths from the disease in Europe. According to the World Health Organization, despite making up only one eighth of the total global population, Europe bears a quarter of the world’s cancer cases. In fact, cancer is the second leading cause of death across the region behind cardiovascular disease.

While Europe is home to some of the best and most established healthcare systems in the world, cancer remains a formidable opponent. Today, leading healthcare providers and organizations are using technologies such as artificial intelligence (AI) to engage and support patients, empower doctors and accelerate research, moving us one step closer to managing and conquering the disease.

Giving power back to the patient
When Fabian was first diagnosed, he felt powerless and began sharing his experiences on social media. The response was so great that he helped launch WarOnCancer, a social network for cancer patients and relatives.

Group shot of people smiling while wearing WarOnCancer T-shirts

The original platform, a blogging community of 150 members representing 40 types of cancer, highlighted that most cancer patients suffer from low self-esteem and depression. With this insight, WarOnCancer is working with six partners in the pharmaceutical and broader life-science industry to develop and test a new mobile app, which aims to become a global social network for cancer patients.

Scheduled to launch in 2019, the app will allow members to share their data and track how the industry uses that data in research. With the power of Microsoft Azure, WarOnCancer can analyze this data to detect the drawbacks and benefits experienced by different groups of patients depending on where, and how, they are treated.

“During my treatment and interactions with specialists, I was astounded to learn that almost half of clinical trials in oncology are delayed because it’s hard to find patients who meet the right criteria for that particular trial,” said Fabian. “Despite the vast majority of patients willing to share their data for clinical trials, many don’t know these are even taking place or aren’t properly informed how their data will be used. This disconnect can literally be the difference between finding a life-saving treatment or not.”

“The long-term goal is to build a ‘matchmaking’ service for clinical trials and patients. This will increase the number of successful clinical trials, accelerate the pharmaceutical R&D process, tailor treatment schedules and medication around a cancer patient’s needs, and ultimately save lives,” says Sebastian Hermelin, co-founder and head of WarOnCancer’s industry partnerships.
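The “matchmaking” idea Hermelin describes boils down to filtering open trials by a patient’s consent and basic eligibility criteria. Here is a minimal sketch in Python; the `Patient` and `Trial` types and their fields are entirely hypothetical, since the post does not describe WarOnCancer’s actual data model:

```python
from dataclasses import dataclass


@dataclass
class Patient:
    cancer_type: str
    age: int
    consents_to_data_sharing: bool


@dataclass
class Trial:
    cancer_type: str
    min_age: int
    max_age: int


def eligible(patient: Patient, trial: Trial) -> bool:
    """A patient matches a trial only if they consent to data sharing
    and meet the trial's diagnosis and age criteria."""
    return (patient.consents_to_data_sharing
            and patient.cancer_type == trial.cancer_type
            and trial.min_age <= patient.age <= trial.max_age)


def match_trials(patient: Patient, trials: list[Trial]) -> list[Trial]:
    """Return every open trial the patient is eligible for."""
    return [t for t in trials if eligible(patient, t)]
```

A real service would of course match on far richer criteria (biomarkers, treatment history, location), but the core operation is this kind of consent-aware filter.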

Helping doctors deliver early detection and increase precision and accuracy

The benefits of early cancer detection are clear. Not only does it result in a higher survival rate, but it helps minimize treatment side effects. While the process varies in every country, standard breast cancer screening typically occurs every two years and involves the mammography of women within a certain age bracket.

However, the effectiveness of mammography decreases dramatically when examining ‘dense’ breasts with a higher percentage of fibroglandular tissue. To address this challenge, the Veneto Institute of Oncology (IOV) is using a new breast density assessment tool from Volpara that has the potential to help millions of people. Going beyond the limits of a traditional mammogram, the cloud-based solution assesses images of a patient’s breast tissue, homing in on its density.

“Since dense breast tissue and lesions both appear white on X-rays, it is difficult to detect cancer in women with dense breasts. Moreover, it has been proven that women with dense breasts have a higher risk of developing breast cancer compared to women with low breast density,” says Gisella Gennaro, medical physicist at the Veneto Institute of Oncology. “But now, through advanced image analysis, we can automatically and objectively assess women’s breast density, use it to estimate their risk of developing breast cancer, and provide them with personalized imaging protocols, such as using ultrasound when breast density hinders cancer detection.”
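To make the measurement Gennaro describes concrete: volumetric breast density can be expressed as the fibroglandular fraction of total breast volume, then mapped to a coarse grade that drives protocol decisions. The sketch below is a deliberate simplification with illustrative thresholds; it is not Volpara’s proprietary algorithm:

```python
def percent_density(fibroglandular_cm3: float, breast_cm3: float) -> float:
    """Volumetric breast density: fibroglandular tissue as a
    percentage of total breast volume."""
    if breast_cm3 <= 0:
        raise ValueError("breast volume must be positive")
    return 100.0 * fibroglandular_cm3 / breast_cm3


def density_category(pct: float) -> str:
    """Map percent density to a coarse grade.
    Thresholds here are illustrative only."""
    if pct < 3.5:
        return "a (almost entirely fatty)"
    elif pct < 7.5:
        return "b (scattered fibroglandular density)"
    elif pct < 15.5:
        return "c (heterogeneously dense)"
    return "d (extremely dense)"
```

In a workflow like IOV’s, a patient landing in a high-density grade could be flagged for supplemental imaging such as ultrasound.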

“Without advanced image computing, it would be impossible to get such fast and accurate analysis. Over the next five years we plan to examine more than 10,000 women, and we expect to see an increase in cancer detection rates, a decrease in interval cancers and sustainable screening costs. It’s truly a step forward toward precision medicine,” says Francesca Caumo, director of the Breast Radiology Department at the Veneto Institute of Oncology.

Back in Stockholm, Fabian and his team are tireless in their mission to improve the lives of everyone affected by cancer. It has been almost four years since his initial diagnosis, and the journey to date has been nothing short of courageous. Alongside first-rate treatment and family support, data has proved a quiet but powerful helping hand.

Whether it’s researchers, clinicians or patients, together with cloud computing and AI, humanity’s war on cancer has never been fiercer.

For more information on how data and AI are helping clinicians, researchers and patients make healthcare more efficient, click here.

post

Podcast series explores how AI can help solve society’s toughest challenges


A podcast series sponsored by Microsoft on how artificial intelligence is helping people solve previously intractable societal challenges launches Monday, Feb. 4, on This Week in Machine Learning and AI. The six-episode “AI For the Benefit of Society with Microsoft” series highlights how AI breakthroughs are advancing work in environmental sustainability, precision medicine, accessibility and life-saving humanitarian assistance.

Hosted by Sam Charrington, the podcast episodes cover technologies and people using AI to pinpoint communities that are at risk of famine before it strikes, help children with autism get additional communication tools, fight climate change through sustainable forest management and develop chatbots to efficiently connect refugees with legal services. They also explore cross-cutting themes around AI and ethics, including how to account for bias in data, ensure new technologies work for the broadest range of users and build a culture of responsible innovation.

Episodes will be available on the following dates at the This Week in Machine Learning and AI website and on Spotify, iTunes and Google Play.

  • Feb. 4: AI for Humanitarian Action (podcast, transcript)
    With Justin Spelhaug, Microsoft general manager for Technology for Social Impact
  • Feb. 6: AI for Accessibility
    With Wendy Chisholm, Microsoft principal accessibility architect, and AI for Accessibility grantee InnerVoice
  • Feb. 8: AI for Earth
    With Lucas Joppa, Microsoft chief environmental officer, and AI for Earth grantee SilviaTerra
  • Feb. 18: AI for Healthcare
    With Peter Lee, corporate vice president, Microsoft Healthcare
  • Feb. 20: Human-Centered Design
    With Mira Lane, Microsoft partner director–ethics and society
  • Feb. 22: Fairness in Machine Learning
    With Hanna Wallach, principal researcher at Microsoft Research


Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.

post

AI & IoT Insider Labs: Helping transform smallholder farming in Kenya

This blog post was authored by Peter Cooper, Senior Product Manager, Microsoft IoT.

From smart factories and smart cities to virtual personal assistants and self-driving cars, artificial intelligence (AI) and the Internet of Things (IoT) are transforming how people around the world live, work, and play.

But fundamentally changing the ways people, devices, and data interact is not simple or easy work. Microsoft’s AI & IoT Insider Labs was created to help all types of organizations accelerate their digital transformation. Member organizations around the world get access to support both technology development and product commercialization, for everything from hardware design to manufacturing to building applications and turning data into insights using machine learning.

Here’s how AI & IoT Insider Labs is helping one partner, SunCulture, leverage new technology to provide solar-powered water pumping and irrigation systems for smallholder farmers in Kenya.

Affordable irrigation for all


Kenyan smallholdings face some of the most challenging growing conditions in the world. Ninety-seven percent rely on natural rainfall to support their crops and livestock, and the families that depend on them. But just 17 percent of the country’s farmland is suitable for rainfed agriculture. Electricity is unavailable in most places and diesel power is often financially out of reach, so farmers spend hours every day pumping and transporting water. This limits them to low-value crops like maize and small yields, all because they lack the resources to irrigate their crops. Additionally, irrigation technologies have an important role to play in reducing the impact agriculture has on the earth’s freshwater resources, especially in Africa.

SunCulture, a 2017 Airband Grant Fund winner, believed sustainable technology could make irrigation affordable enough that even the poorest farmers could use it without further aggravating water shortages. The company set out to build an IoT platform to support a pay-as-you-grow payment model that would make solar-powered precision irrigation financially accessible for smallholders across Kenya.

How SunCulture’s solution works

SunCulture’s RainMaker2 pump combines the energy efficiency of solar power with the effectiveness of precision irrigation, making it cheaper and easier for farmers to grow high-quality fruits and vegetables. Using the energy of the sun, the SunCulture system pulls water from any source—lake, stream, well, etc.—and pumps it directly to the farm with sprinklers and drip irrigation.

This cutting-edge solution combines ClimateSmart™ solar and lithium-ion energy storage technology with cloud-based remote monitoring and optimization software developed with support from AI & IoT Insider Labs. It’s a powerful platform that makes it simple and cheap to deploy off-grid energy and connected solutions.

Farmers get the information they need to make good irrigation decisions at scale, without the cost of sending agronomy experts into the field. How? SunCulture processes a steady stream of sensor data, including soil moisture, pump efficiency and solar battery storage, in Microsoft Azure’s cloud environment. This sensor data is combined with readings from SunCulture’s network of 2,000 hyperlocal weather stations, and Azure machine learning tools turn it into simple, real-time precision irrigation recommendations delivered directly to farmers via text message (SMS).
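To illustrate the data flow, here is a toy rule that turns sensor and weather readings into an SMS-ready recommendation. Every threshold, and the pump calibration factor, is an assumption for illustration; the post does not disclose SunCulture’s actual model:

```python
def irrigation_advice(soil_moisture_pct: float,
                      rain_forecast_mm: float,
                      battery_pct: float,
                      target_moisture_pct: float = 35.0) -> str:
    """Produce a short SMS-style irrigation recommendation from
    one cycle of sensor and weather readings (illustrative rules)."""
    if battery_pct < 20:
        # Not enough stored solar power to run the pump safely.
        return "Battery low: wait for sun before pumping."
    if rain_forecast_mm >= 5:
        # The hyperlocal forecast will do the watering for free.
        return "Rain expected: skip irrigation today."
    if soil_moisture_pct < target_moisture_pct:
        deficit = target_moisture_pct - soil_moisture_pct
        minutes = round(deficit * 2)  # assumed pump calibration factor
        return f"Irrigate for about {minutes} minutes."
    return "Soil moisture OK: no irrigation needed."
```

A production system would learn these thresholds per field from historical data rather than hard-code them, but the shape of the output, one short actionable sentence, is what makes SMS delivery work on any handset.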


The platform also enables real-time locking and unlocking of devices that makes the pay-as-you-grow model feasible. The platform is smart enough to shut off pumps automatically when power levels are getting low on a cloudy day, or when optimal irrigation thresholds are reached.
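The locking and automatic shutoff behavior described above amounts to a small decision function evaluated each control cycle. A sketch, with invented thresholds and parameter names rather than SunCulture’s real firmware logic:

```python
def pump_should_run(payment_current: bool,
                    battery_pct: float,
                    soil_moisture_pct: float,
                    min_battery_pct: float = 15.0,
                    target_moisture_pct: float = 35.0) -> bool:
    """Decide whether the pump may run this cycle.
    The pump locks when pay-as-you-grow payments lapse, shuts off
    when stored solar power runs low (e.g. on a cloudy day), and
    stops once the optimal irrigation threshold is reached."""
    if not payment_current:
        return False  # remote pay-as-you-grow lock
    if battery_pct < min_battery_pct:
        return False  # protect the battery when power is low
    if soil_moisture_pct >= target_moisture_pct:
        return False  # irrigation target already met
    return True
```

Putting the check in the cloud rather than on the device is what lets SunCulture lock and unlock pumps in real time as payments come in.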

How farmers are benefiting from SunCulture

SunCulture’s pay-as-you-grow revenue model allows farmers to make small, monthly payments until they own their precision sensor-based irrigation system outright, empowering even the region’s poorest smallholder farmers to take control of their environment.

On average, SunCulture customers enjoy a 300 percent increase in crop yields and a 10x increase in annual income. Farmers with livestock double their milk yield, earning an extra $3.50/day in income from milk alone. The 17 hours per week they used to spend moving water manually is now directed to better tending their crops and livestock. At a price point of $1.25/day for the RainMaker2 with ClimateSmart™, a farmer’s investment is recouped quickly, and profit starts flowing from increased agricultural productivity.
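Using only the figures quoted above, a quick back-of-the-envelope check shows why the investment is recouped so quickly: the extra milk income alone exceeds the daily system cost.

```python
daily_cost = 1.25         # RainMaker2 with ClimateSmart, per day
extra_milk_income = 3.50  # additional daily income from doubled milk yield

# For a farmer with livestock, milk alone covers the payment
# and leaves a surplus every day, before counting crop gains.
net_daily = extra_milk_income - daily_cost
print(f"Net gain from milk alone: ${net_daily:.2f}/day")
```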

Download SunCulture’s case study to learn more.