
What’s new in Microsoft 365 accessibility: Intelligent tools to support neurodiversity in the hybrid workplace

As we observe World Autism Month this April, I want to highlight several improvements across Microsoft 365 that help users focus on their work and support effective reading and writing. These are useful capabilities for everyone, but are especially important for those with cognitive differences like autism, ADHD, dyslexia, and dyspraxia – a group that, by some estimates, makes up as much as 40% of the population. 

Having been diagnosed with dyslexia early in life, this is more than an abstract statistic to me. When I was younger, I struggled with tasks such as reading and writing, which made everything else more challenging. I often found myself gravitating away from these activities, filling my time with skiing and playing music instead.

As much as I tried to escape it, I eventually realized that reading and writing were going to be part of my life and I needed to develop tools to navigate. When it came to reading, one of the hacks I leveraged was to identify the words based on the number of characters, allowing me to do a quick process of elimination to narrow in on the word.    

As the Office apps have expanded their accessibility features, I have found myself becoming a power user of features like Immersive Reader, Editor, and Thesaurus, allowing me to stop relying on my own internal hacks and leverage technology to overcome these challenges. This has completely transformed how I get things done and provided a new form of confidence that I never thought I would find.  

And those tools are continuing to improve and expand. Let’s take a closer look at how recent features in Microsoft 365 can support focus and productivity for neurodiverse and neurotypical workers alike.  


A community and research approach to detecting and predicting seizures with the help of AI

Epilepsy is a chronic noncommunicable disease of the brain affecting 50 million people worldwide, making it one of the most common neurological diseases globally, according to the WHO. 

With proper diagnosis and treatment, 70 percent of people living with epilepsy could live seizure free, making access to appropriate care and detection of the utmost importance.  

Seizures can create challenges for the independence and day-to-day lives of people living with epilepsy. They can also lead to driving collisions, with 0.2 percent of traffic accidents linked to a form of seizure. A team at the University of Sydney, led by Dr. Omid Kavehei, set out to answer an important question: “Can we improve the accuracy of seizure detection in epilepsy, and can we predict a future seizure?”

According to the law in New South Wales, Australia – home to the University of Sydney – people with epilepsy must be seizure free for at least 12 months to drive. This seizure-free declaration is often based on a conversation between a patient and their clinician and on the patient's own reports, with the clinician certifying that the patient has been seizure free for a set period of time. Given it's not uncommon for patients not to remember seizures, or not to have a family member or caretaker with them, the certification process can lead to inaccurate outcomes. The researchers saw an opportunity to challenge the status quo and help clinicians make data-driven decisions.  

Despite a great deal of research and development in the last few decades, the statistics of epilepsy remain almost unchanged. For example, about 35 percent of diagnosed or confirmed epilepsy patients still face a long and difficult journey that ends in the epilepsy treatment gap, where no available or suitable treatment exists for them. “The percentage has not changed over time; it's the same in 2022 as it was in the 1990s. The current methods are clearly not working if we have not been able to improve that statistic despite some major advances in understanding the underlying disease and the brain,” Omid shares.  

The aim became predicting seizures with a good level of accuracy, leading to further investigations in the field. Should a seizure be predicted before it occurs? Would the administration of an anti-epileptic drug that otherwise might not have been helpful trigger a different outcome? A whole sphere of ideas was taking shape as they undertook the research. Their project was awarded an AI for Accessibility grant to investigate seizure prevention and prediction with the help of AI. 

The first hurdle was access to data. As Omid explains, “We need democratic access to data. No data, no research.” By partnering with the Royal Prince Alfred Hospital, the researchers were able to access this data, with proper patient consent. “Data should belong to the patients and we should be considered as custodians of it. Equally, research should be facilitated and most patients have no problem with that concept. If we limit access to data inequitably, we limit the opportunity for the next breakthroughs, which can often come from an unexpected source and person. This is doubly important if the data in question was produced using a form of publicly funded research or development. Exclusivity of that data beyond a certain point is neither understandable nor fair to the patients who volunteer so that data becomes available.” 

For successful seizure prediction, the team at the University of Sydney needed to extract specific biomarkers from this data – specifically, biomarkers that indicate abnormalities in brain activity. The rate and density of those incidents would represent the onset of a seizure. Existing documentation shows the brain senses when a seizure is about to happen and works to stop it. From a practical standpoint, clinicians and researchers should be able to look for repetitive and consistent patterns in brain data before a seizure. In developing their AI model, they partnered with the head of neurology at the Royal Prince Alfred Hospital, as well as additional neurologists and epileptologists, to test the performance of the model and understand if it led to any conclusive results.  
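The idea of turning raw brain activity into an event rate can be illustrated with a deliberately simplified sketch. The snippet below is a hypothetical illustration, not the team's actual method: it flags large-amplitude excursions in a single synthetic EEG channel and reports how often such events occur per analysis window – the kind of rate-and-density signal described above. The function name, threshold, and data are all invented for illustration.

```python
import numpy as np

def spike_event_rate(signal, fs, z_thresh=4.0, window_s=10.0):
    """Flag samples that deviate strongly from baseline and report the
    number of such events per analysis window.

    signal : 1-D array of EEG samples (single channel)
    fs     : sampling rate in Hz
    """
    z = (signal - signal.mean()) / signal.std()
    above = np.abs(z) > z_thresh
    # Collapse runs of consecutive supra-threshold samples into single events.
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    window = int(window_s * fs)
    n_windows = len(signal) // window
    rates = [
        np.sum((onsets >= i * window) & (onsets < (i + 1) * window))
        for i in range(n_windows)
    ]
    # A rising trend in event rate/density is the kind of pattern a
    # prediction model would monitor.
    return np.array(rates)

rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, 2560)   # 10 s of synthetic noise at 256 Hz
eeg[1000:1005] += 12.0         # inject one artificial spike-like event
print(spike_event_rate(eeg, fs=256, window_s=5.0))
```

In a real pipeline the thresholding step would be replaced by a learned detector, but the windowed event-rate framing stays the same.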

In developing the AI model, the problem they overwhelmingly faced was the retrospective nature of the research, which created issues in training the algorithm on the same dataset as the one they wished to test on. They switched to a prospective approach, which assumes no knowledge of when a seizure is happening and no data available in advance. A forecasting system starts out less accurate, but combined with a detection system it improves over time and, more importantly, becomes patient specific. Two papers were published on the research: Continental generalization of an AI system for clinical seizure recognition and A multimodal AI system for out-of-distribution generalization of seizure detection. 
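The retrospective-versus-prospective distinction comes down to how evaluation data is chosen. A minimal sketch of the idea, with hypothetical data and function names rather than the published method: a prospective-style evaluation only ever tests on samples recorded strictly after everything the model was trained on, instead of randomly shuffling the same recordings into train and test sets.

```python
import numpy as np

def chronological_split(timestamps, features, labels, train_frac=0.7):
    """Split samples by recording time: the earliest train_frac of samples
    form the training set; the rest (all strictly later) form the test set."""
    order = np.argsort(timestamps)
    cut = int(len(order) * train_frac)
    train_idx, test_idx = order[:cut], order[cut:]
    # Every test sample was recorded after every training sample,
    # mimicking a prospective evaluation.
    assert timestamps[train_idx].max() <= timestamps[test_idx].min()
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])

t = np.array([5.0, 1.0, 3.0, 2.0, 4.0])   # recording times (hours)
X = np.arange(10).reshape(5, 2)           # toy feature vectors
y = np.array([0, 0, 1, 0, 1])
X_tr, y_tr, X_te, y_te = chronological_split(t, X, y, train_frac=0.6)
print(len(X_tr), len(X_te))   # 3 training samples, 2 test samples
```

A random shuffle would let the model peek at data recorded around the seizures it is later tested on, which is exactly the retrospective pitfall the team moved away from.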

Current data collection relies on a high number of electrodes connected to the patient. The next dilemma for this body of research is to understand whether they can reduce the number of electrodes and compensate through the AI model. “We wish for seizure prediction to continue to improve. The research we are doing is impacting real people; it's not just a theoretical puzzle to solve. We are in touch with people from all over the world, and their stories are profound and real – from a patient who needs supervised assistance going up or down stairs, to another who fears holding their infant because of potential accidents during a seizure. What we wish to give back to the community is independence,” states Omid. 

Enabling increased independence for those with epilepsy is top of mind and heart for Francesca Fedeli and Roberto D’Angelo, co-founders of FightTheStroke Foundation, who set out to help their son, Mario. At the 2019 Microsoft Hackathon, they developed MirrorHR, an epilepsy research app that is available in 13 languages and used daily by hundreds of families across 31 countries. 


UN International Day of Persons with Disabilities: sharing our learnings to create a more accessible world

A More Accessible and Inclusive World

Organizations across the globe are increasingly recognizing people with disabilities as talent and setting them up for success through accessible technology.

United Kingdom: GlaxoSmithKline, a multi-national pharmaceutical company, relies on Teams accessibility features such as background blur and live captions to help employees with remote collaboration.

“I have definitely seen an increase in engagement by using Microsoft Teams for everyone, but especially for people with disabilities.” Tracy Lee Mitchelson, Project Lead, Disability Confidence at GSK.

France: Sodexo, a provider of quality of life services, is helping employees work independently through building digital accessibility skills using Microsoft 365 as its communications and collaboration platform.

“Some of the work that’s happened with our front-line workers, helping them to understand the Microsoft tools that were available, has really helped with their digital accessibility.” Megan Horsburgh, Head of D&I, Sodexo UK & Ireland.

India: By embracing Microsoft Teams, v-shesh, a workforce services provider, is delivering engaging lessons in the cloud and equipping people with disabilities to be part of the digital-first economy.

“With Teams, we are not only creating an inclusive environment but also equipping our learners to be ready for the future of work.” P Rajasekharan, Co-founder at v-shesh.

United States: Fannie Mae, a provider of mortgage services, is seeing Microsoft 365 play a crucial part in digital transformation and in building an inclusive workplace that helps all employees reach their potential.

“We use the accessibility tools in Microsoft 365 as part of our drive to approach everything from an inclusive mindset. When you do that, not only do you problem-solve for the needs of your customers, you drive innovation as a natural by-product.”  Cassie Hong, Product Designer at Fannie Mae.


How fading eyesight led Kenny Singh to embrace tech and become a champion for accessibility

“There’s a lot of granularity in the way I can ‘read’ a screen. I right-click an image and say, ‘OCR it,’” he explains. “Screen readers struggled in the past, but as AI evolves, things are constantly improving. If I am presenting with PowerPoint, I click a button, and it starts transcribing my words. It does a live translation. I can be German if you like!”

Over the horizon, he hopes OCR technology will open doors to even more inclusion. “OCR could be built into Office apps so that text in images with their format can be easily read, copied and edited by a screen reader. OCR technology could also make screen-sharing accessible to screen readers.” He would also like video conferencing technology, such as Microsoft Teams, to be able to recognize and describe to him the facial expressions and emotions of others.

Singh has just been promoted into a new product manager role with responsibility for the performance of Microsoft cybersecurity and compliance products and services. Working closely with industry, academia, and government, he is helping to create a more secure Australia.

“Microsoft Security is a holistic security platform. We have security deeply ‘baked’ into the platform: identity security, endpoint security – including mobile phones, laptops, operational technology, and Internet of Things devices, data security, and security of your apps. All your machines, whether on-premises or in the cloud, are safe – that’s data and emails. We invest more than $1 billion into cybersecurity on a yearly basis globally.”


In the coming years, he would like to move into even more senior roles. “It really gets me going to generate shareholder value, to give people autonomy, mastery and a strong connection with their work.”

He is also mindful of his leadership style. “I have a very strong personal will that’s balanced with humility, empathy, and a growth mindset.”

Man on an exercise machine in a gym
Singh started weight training four years ago.

He also enjoys a work-life balance built around his extended family. He met his partner, Tina, in 1999. They got married in 2006 and now live in suburban Melbourne. Tina is a business transformation leader at the national postal service, Australia Post. “She is also my chauffeur,” he says with a wicked chuckle. Both have parents and other family members living close-by.

About four years ago, he went through another personal transformation: He lost weight, got into shape and started powerlifting.

“I had lost all the fat I wanted to lose, but I then wanted to gain muscle. I started to enjoy powerlifting a lot and seeing my strength increase. It was the measurable performance that really drew me to it,” he says.

“I absolutely love powerlifting from a tenacity perspective. We have very vibrant and full days at Microsoft. At night, I’m thinking, ‘I don’t wanna do this,’ but I always feel great afterward.”

Singh works hard at making progress: whether he is chalking up a new personal best in his home gym or exceeding a business target in the office. His 10th anniversary with Microsoft is coming up soon, and looking back, his nervous first trip to Seattle feels like a lifetime ago.

He has grown professionally, and the attitudes of others toward issues of accessibility have evolved. There is a lot more support, he says. “Now at every event I attend, there’s thought given to captioning. There’s thought given to how to make content more accessible to everyone.

A closeup of a man lifting weights.
“I absolutely love powerlifting from a tenacity perspective.”

“If someone joins (Microsoft) today and they have special needs at work, we have a Special Accommodation Fund. The cost doesn’t even come out of your manager’s fund. There’s no burden on your immediate chain of command; there’s just a box asking what special assistance is required. It takes the trepidation out of it.”

He very much believes in the idea that everyone benefits when designers design with people with disabilities in mind. He champions digital technology as an “intense enabler” for inclusion and accessibility.

“At a people, process and technology level, Microsoft has made a lot of investments that have moved the dial significantly forward. And we’ll keep on pushing for more accessibility for everyone.”

TOP IMAGE: Kenny Singh lifting weights in his garage gym. All images by Penny Stephens. 


Shrinking the ‘data desert’: making AI systems more inclusive of people with disabilities

“We are in a data desert,” said Mary Bellard, principal innovation architect lead at Microsoft who also oversees the AI for Accessibility program. “There’s a lot of passion and energy around doing really cool things with AI and people with disabilities, but we don’t have enough data.”

“It’s like we have the car and the car is packed and ready to go, but there’s no gas in it. We don’t have enough data to power these ideas.”

To begin to shrink that data desert, Microsoft researchers have been working for the past year and a half to investigate and suggest ways to make AI systems more inclusive of people with disabilities. The company is also funding and collaborating with AI for Accessibility grantees to create or use more representative training datasets, such as ORBIT and the Microsoft Ability Initiative with University of Texas at Austin researchers.

Mary Bellard sits on outdoor steps
Mary Bellard, principal innovation architect lead at Microsoft who oversees the AI for Accessibility program. Photo provided by Bellard.

Today, Team Gleason announced it is partnering with Microsoft on Project Insight, which will create an open dataset of facial imagery of people living with ALS to help advance innovation in computer vision and train those AI models more inclusively.

It’s an industry-wide problem that won’t be solved by one project or organization alone, Microsoft says. But new collaborations are beginning to address the issue.

A research roadmap on AI Fairness and Disability published by Microsoft Research and a workshop on Disability, Bias and AI hosted last year with the AI Now Institute at New York University found a host of potential areas in which mainstream AI algorithms that aren’t trained on inclusive data either don’t work well for people with disabilities or can actively harm them.

If a self-driving car’s pedestrian detection algorithms haven’t been shown examples of people who use wheelchairs or whose posture or gait is different due to advanced age, for example, they may not correctly identify those people as objects to avoid or estimate how much longer they need to safely cross a street, researchers noted.

AI models used in hiring processes that try to read personalities or interpret sentiment from potential job candidates can misread cues and screen out qualified candidates with autism or who emote differently. Algorithms that read handwriting may not be able to cope with examples from people who have Parkinson’s disease or tremors. Gesture recognition systems may be confused by people with amputated limbs or different body shapes.

It’s fairly common for some people with disabilities to be early adopters of intelligent technologies, yet they’ve often not been adequately represented in the data that informs how those systems work, researchers say.

“When technologies are so desired by a community, they’re often willing to tolerate a higher rate of errors,” said Meredith Ringel Morris, senior principal researcher who manages the Microsoft Research Ability Team. “So imperfect AI systems still have value, but they could provide so much more and work so much better if they were trained on more inclusive data.”

‘Pushing the state of the art’

Danna Gurari, an AI for Accessibility grantee and assistant professor at the University of Texas at Austin, had that goal in mind when she began developing the VizWiz datasets. They include tens of thousands of photographs and questions submitted by people who are blind or have low vision to an app originally developed by researchers at Carnegie Mellon University.

The questions run the gamut: What is the expiration date on this milk? What does this shirt say? Do my fingertips look blue? Do these clouds look stormy? Do the charcoal briquettes in this grill look ready? What does the picture on this birthday card look like?

The app originally crowdsourced answers from people across the internet, but Gurari wondered if she could use the data to improve how computer vision algorithms interpret photos taken by people who are blind.

Danna Gurari stands outside
AI for Accessibility grantee Danna Gurari, assistant professor at the University of Texas at Austin who developed the VizWiz dataset and directs the School of Information’s Image and Video Computing Group.

Many of those questions require reading text, such as determining how much of an over-the-counter medicine is safe to take. Computer vision research has often treated that as a separate problem, for example, from recognizing objects or trying to interpret low-quality photos. But successfully describing real-world photos requires an integrated approach, Gurari said.

Moreover, computer vision algorithms typically learn from large image datasets of pictures downloaded from the internet. Most are taken by sighted people and reflect the photographer’s interest, with items that are centered and in focus.

But an algorithm that’s only been trained on perfect images is likely to perform poorly in describing what’s in a photo taken by a person who is blind; it may be blurry, off center or backlit. And sometimes the thing that person wants to know hinges on a detail that a person who is sighted might not think to label, such as whether a shirt is clean or dirty.

“Often it’s not obvious what is meaningful to people, and that’s why it’s so important not just to design for — but design these technologies with — people who are in the blind and low vision community,” said Gurari, who also directs the School of Information’s Image and Video Computing Group at the University of Texas at Austin.

Her team undertook the massive task of cleaning up the original VizWiz dataset to make it usable for training machine learning algorithms — removing inappropriate images, sourcing new labels, scrubbing personal information and even translating audio questions into text to remove the possibility that someone’s voice could be recognized.

Working with Microsoft funding and researchers, Gurari’s team has developed a new public dataset to train, validate and test image captioning algorithms. It includes more than 39,000 images taken by blind and low vision participants and five possible captions for each. Her team is also working on algorithms that can recognize right off the bat when an image someone has submitted is too blurry, obscured or poorly lit and suggest how to try again.
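For the blur-screening idea, one common classical heuristic – not necessarily what Gurari's team uses – is the variance of the image's Laplacian: sharp images have strong edge responses and high variance, while blurry or featureless ones do not. A minimal sketch with synthetic images; the threshold value is purely illustrative:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian response of a grayscale image.
    Low variance -> few sharp edges -> likely blurry."""
    img = gray.astype(float)
    # Discrete 4-neighbour Laplacian, computed on interior pixels only.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4 * img[1:-1, 1:-1])
    return lap.var()

def looks_blurry(gray, threshold=50.0):
    # threshold is an invented value; in practice it is tuned per dataset
    return laplacian_variance(gray) < threshold

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, (64, 64))   # high-frequency "texture"
blurry = np.full((64, 64), 128)          # flat, featureless image
print(looks_blurry(sharp), looks_blurry(blurry))  # → False True
```

A learned model can of course go further – distinguishing blur from occlusion or poor lighting and suggesting how to retake the shot – but a cheap check like this shows why an obviously featureless submission can be flagged before any captioning is attempted.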

Earlier this year, Microsoft sponsored an open challenge to other industry and academic researchers to test their image captioning algorithms on the VizWiz dataset. In one common evaluation metric, the top performing algorithm posted a 33% improvement over the prior state of the art.

“This is really pushing the state of the art in captioning for the blind community forward,” said Seeing AI lead engineer Shaikh, who is working with AI for Accessibility grantees and their datasets to develop potential improvements for the app.


It’s National Disability Employment Awareness Month: Meet people making a difference

This October marks 75 years of National Disability Employment Awareness Month in the U.S. – with increasing access and opportunity as this year’s theme.

In today’s workplace, it has never been more important to include everyone, and accessibility is the vehicle to inclusion. It is a responsibility and an opportunity. Microsoft is passionate about creating products that help people with disabilities unlock their full potential at work, school and in daily life. Designing with and for people with disabilities leads to innovation for everyone. As Microsoft chief accessibility officer Jenny Lay-Flurrie says, “A diverse and talented workforce brings new perspectives that help advance our ability to delight all of our customers.”

This month, Microsoft celebrates those talented and diverse teams, and shares some of their stories.

Angela Mills uses the Seeing AI app to confirm the location of a meeting room.

Angela Mills, Director of Program Management, Game Developer Experiences

Angela leads a team on the PlayFab game developer platform. Her colleagues knew she used a screen reader, but it was only 20 years after joining Microsoft that she began to tell people about her visual disability. In 2018, Microsoft released Seeing AI, a mobile app that describes nearby people, text and objects for users with low vision. It meant she could find meeting rooms and choose her lunch without help. She says, “Every person with a disability has honed skills to work around the limitations that their disability brings. I cannot imagine having been more successful in my career if I didn’t have the disability.”

[Subscribe to Microsoft On the Issues for more on the topics that matter most.]

Anne Taylor, Director of Supportability, Accessibility

At 7, Anne told her family in Thailand she wanted to live and work in the United States. A scholarship helped further her dream, and she eventually joined Microsoft as an agent of change. Anne, who is blind, works with engineering teams to ensure products are designed with disabilities in mind. She says, “I want to encourage, inspire and motivate teams to think outside the box and innovate with accessibility design as an essential component to any product or service.”

A quote from Craig Cincotta

Craig Cincotta, Senior Director, Communications

In 2013, while director of communications for Xbox, Craig took two months’ leave to treat debilitating panic attacks with cognitive behavioral therapy, meditation and medication. He opened up to his manager about having obsessive-compulsive disorder and generalized anxiety, and the move allowed him to be his authentic self. He says, “Any time you have a more inclusive environment, you’re able to see fresher ideas, broaden your perspective and get the best version of people.”

Dona Sarkar, Principal Cloud Advocate

Dona had already been at Microsoft for a decade when she was diagnosed with dyslexia, which means she can find it challenging to read charts, graphs and metrics reports at work. But she kept the diagnosis to herself and managed, until she heard about a dyslexic boy who improved his reading with Microsoft Learning Tools. In 2018, she started to talk about her disability and encourage other leaders to do the same “to make a far safer space for employees to open up about their disabilities.”

[Read more: Understanding accessibility through ABCs]

Heather Dowdy, Senior Program Manager, AI & Accessibility

Heather was just six months old when she started learning sign language – to communicate with her parents who had both lost their hearing as toddlers. “My life has given me a special lens for people marginalized by the intersection of race, gender, class and disability,” she says. She trained as an electrical engineer and joined Microsoft in 2016 to develop strategies and drive change to make the internet accessible for everyone.

A quote and picture of Jenny Lay-Flurrie

Jenny Lay-Flurrie, Chief Accessibility Officer

Measles and ear infections in childhood left Jenny with hearing loss, something she tried to hide until her 30s, before she slowly began to accept and celebrate her disability. But then came an embolism, which has left her with long-term damage to her leg and needing canes to walk. “It happened in the space of 90 minutes. The learning was immense,” she says. “There are things we need to do better. This experience has been a good reminder of why we need people with disabilities to be in the process of product design.”

Jessica Rafuse, Director of Strategic Partnerships and Policy, Accessibility

An employment attorney, Jessica joined Microsoft in 2016: “I really wanted to be a part of what they were doing for people with disabilities.” Her role involves going out into the community and asking experts for their perspectives. “I love that idea that the things I do day in, day out can help someone get a job someday.”

[Read more: ‘We are at a crossroads’ – How Microsoft’s Accessibility team is making an impact that will be felt for generations]

A picture and quote from Joey Chemis

Joey Chemis, Data and Applied Scientist

Joey came to work at Microsoft through the company’s Autism Hiring Program that started in 2015. Unemployment rates for those with autism are estimated at 70% to 90%. Joey had advanced skills in math but was finding it difficult to get interviews. The hiring process allows people with autism to “show their true colors and abilities,” he says.

Swetha Machanavajhala, Founder, Hearing AI

Swetha was born with profound hearing loss, so her role, using data and machine learning to enable people who are deaf or hard of hearing to better understand the world around them, is a personal mission. Inspired when her carbon monoxide alarm rang for two weeks without her noticing, Swetha founded the Hearing AI research project. This interface aims to visualize the surroundings of people with hearing loss, translating sounds such as alarms and volume changes into visual cues and written materials into speech in real time.

For more on Accessibility, visit On the Issues: Accessibility. And follow @MSFTIssues on Twitter.


Despite COVID-19, tech provides new way to upskill people with disabilities

Using Microsoft Teams has enabled a remote work environment for v-shesh and taken its inclusivity work to a higher level.

“We have been able to simulate our work environment. We have mixed groups in different employability programs, using Teams and Microsoft Office,” he says. “We’ve also conducted interviews (for candidates) on Teams.”

This shift in digital training has also helped v-shesh onboard job seekers during the pandemic safely. Deepthi Ganesh, who is in the final year of her undergraduate degree in computer application, came on board in August. The 21-year-old is a person with autism. She signed up for courses in Microsoft Office, communications, technology, and life skills to prepare herself for the workforce once she graduates from a college in Mumbai.

“I am very fond of computers. I want to do a good job in a good company,” Deepthi says. “It is very interesting training. I can share my link from the calendar. I can download assignments from Microsoft Teams to the desktop and also (share my) assignment response. I like the audio and video (features). I can raise my hand and share my screen.”


Microsoft’s accessibility guideposts, from A to Z

A is for Autism at Work. A Microsoft employee with autism draws graphs on a whiteboard.

A is for the Autism @ Work Playbook. This resource was developed for employers who are interested in beginning or expanding their inclusive hiring journey. You can download it here.

B is for Braille. A woman who is blind teaches a student of the Carroll Kids program

B is for Louis Braille, the 12-year-old boy who invented a way for people who are blind to read. Watch Microsoft President Brad Smith explore how Braille’s spirit is still alive today with the innovators focusing on accessibility.

C is for Captions. Man wearing a hoodie reads captions on his computer screen.

C is for captions and subtitles, supporting people with disabilities to follow along in meetings and PowerPoint presentations. Live captioning is one of the accessibility features the Microsoft Disability Answer Desk can help set up.

D stands for Disability Answer Desk. A woman who is blind uses a braille keyboard with a Surface device.

D is for the Disability Answer Desk Playbook. Click here to find out Microsoft’s top learnings on setting up a Disability Answer Desk.

E stands for ease of access to low-vision tools.

E is for ease of access, and how you can make Windows 10 work better for you. Whether it’s increasing font size or adjusting the color contrast, there is a range of ease-of-access settings you can personalize.

F is for font size. Screen showing how to make your font size bigger in settings.

F is for font size, which can be adjusted for readability across a range of our products. It’s one of a number of tweaks available to help people who are blind or have low vision.

Top left: Dr Omid Kavehei showing a non-surgical device that would provide advance warning of a seizure for people living with epilepsy. Top right: G is for Grantees. Bottom left: A teacher showing Counting Zoo, an immersive eReader, to a child. Bottom right: A person using the Seeing AI app on their smartphone.

G is for grantees. Microsoft funds projects and research around the world that use AI-powered technology to help make the world more inclusive. Check out some of our AI for Accessibility projects here.

H is for hiring. Fathi Mohamed from the Supported Employment Program waves from the Microsoft Connector bus.

H is for hiring, inclusive hiring and how Microsoft ensures opportunities for everyone through employment programs focused on the untapped talent of people with disabilities.

I is for inclusive design. Various doodles of work life including cars, buildings and avatars.

I is for inclusive design, and ensuring accessibility and inclusion are at the core of products. It’s about drawing on the full range of human diversity, and reflecting different perspectives in what we create.

J is for Jenny Lay-Flurrie. Jenny smiles.

J is for Jenny Lay-Flurrie, our Chief Accessibility Officer. She unites us all as accessibility advocates, making sure accessibility and inclusion are implemented throughout the company’s culture and within the product development process.

K is for keyboard-only users. Overhead view of two students using assistive technology to learn programming at vocational school for the blind.

K is for keyboard-only users, and ensuring a straightforward experience for people who don’t use a computer mouse. Microsoft products include a range of options and shortcuts to customize your keyboard and make navigation quick and easy.

L is for learning tools. A young boy sitting at a table using a computer tablet.

L is for learning tools. Our free features enable students to improve reading, writing and comprehension, whatever their level. These include Immersive Reader, designed to help people with learning disabilities build confidence and ability.

M is for Moovit. A Moovit user waiting for their train.

M is for Moovit, the urban mobility app that has been optimized for accessibility. Now, accessible routes can be plotted around unfamiliar cities, and people who are blind or with low vision can use screen readers to navigate.

N is for Narrator. Avatar of a woman wearing headphones using Narrator on her computer.

N is for Narrator, the free screen reader built within Windows 10. Our new and improved screen reader has a comprehensive walkthrough guide, so you can start using Narrator on apps, for browsing the web and more.

O is for Outlook Accessibility. Bernardo Villarreal, a man who has low vision, looks closely at a laptop screen as he reads text in big font.

O is for Outlook Accessibility. It is crucial to be inclusive through daily email, and Outlook makes accessibility easy with features like the Accessibility Checker and Alt Text. Find out how you can create accessible email content here.

P is for PowerPoint. Person works on their PowerPoint presentation in a café.

P is for PowerPoint and AI-powered automatic live captions. Real-time captions and subtitles mean everyone can follow and participate in presentations, and they may be particularly useful for people who are deaf or hard-of-hearing, as well as non-native speakers.

Q is for questions.

Q is for questions. Our Disability Answer Desk provides support on all our assistive technology and allows customers to give direct feedback to drive greater accessibility across our products and services.

R is for Research Enable Group. Members of the Research Enable Group work on their projects. A man and woman look at a laptop; other members adjust a drum set and measure sound.

R is for Research Enable Group and the ongoing work on new accessibility products. These include the Hands-Free Sound Machine, which allows people to create compositions with their eyes, and eye-controlled wheelchairs.

S is for Seeing AI. A person using the Seeing AI app to read a mailed letter.

S is for Seeing AI, a free app that narrates the world around you. Available in multiple languages, it describes everything from text and products, to people, scenes and currency.

T is for Text Alternatives. Pratyush Nalam, a man who uses a wheelchair, types on a laptop.

T is for text alternatives: descriptions of images on websites and applications that screen readers can relay to customers. They need to accurately describe what is being shown, without too much information.
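To see the principle in practice, a simple script can flag images that ship without a text alternative. Below is an illustrative sketch using Python’s standard-library HTML parser (the function and class names are our own, and real audits would use a dedicated accessibility testing tool rather than a snippet like this):

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that have no alt attribute at all.

    Note: an empty alt="" is valid for purely decorative images,
    so only a completely missing attribute is flagged here.
    """

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            self.missing.append(attrs.get("src", "(unknown source)"))

def find_images_missing_alt(html: str) -> list:
    """Return the src of every image in the markup lacking an alt attribute."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.missing
```

A check like this only catches absent descriptions; judging whether an alt text accurately describes the image still requires a person.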

U is for user interface automation. Image shows an introduction to UI Automation.

U is for user interface automation (UIA), allowing assistive technologies to let customers know everything they need to about your UI. This means everyone can access the full functionality and enjoy a high-quality experience.

Top right: V is for vision. Cory Joseph, a man who is blind, types on a braille keyboard while also working through a mobile phone. Bottom left: A tech worker with visual impairment uses assistive technology while visiting the Microsoft office in Singapore. Bottom right: Anne Taylor, a woman who is blind, works on a Surface device with a braille keyboard sitting on the side.

V is for vision. Here’s our best practice guide for interviewing candidates who are blind or with low vision.

W is for webinar. Chris Schlechty, a man who uses a wheelchair, sits in front of his desk and shows his straw device that he uses to control his computer.

W is for webinar. We’ve launched a series of accessibility webinars for customers and businesses who want to learn more about accessibility features such as Narrator and Magnifier. Check out the demos from our engineering teams here.

X is for Xbox adaptive controller. Photo of the Xbox adaptive controller and its PDP one-handed joystick.

X is for the Xbox Adaptive Controller, a groundbreaking controller that connects devices to help make gaming more accessible and inclusive so that everyone can play.

Y is for you: Avatars of a female with red hair, freckles and glasses wearing white earrings and white shirt, a male with short black hair and purple shirt, a female with black hair, orange beanie, yellow headphones, and black shirt, and a female with blue hair and bangs, yellow sunglasses, pink earrings, and white shirt.

Y is for you … the person at the center of it all. We build with inclusion in mind and work with the direct involvement of the disability community – “Nothing about us without us.”

Z is for zero. Image of a large zero.

Z is … hoping that the number of people who feel like they don’t have access to free assistive technology tools to complete everyday tasks is zero. Today, only one in 10 people has access to assistive products.

For more on Microsoft accessibility tools, visit AI for accessibility. And follow @MSFTIssues on Twitter.  

Posted on Leave a comment

How to create clarity in your company’s accessibility journey

We’re often asked what the secret is to our approach to accessibility and inclusion at Microsoft. The simple answer is, we manage it like a business. As with any business, it needs to be actively managed and measured to ensure it continues to grow and stay healthy. To help bring other organizations along, I’d like to dive a bit deeper into our model in the hope that you can take what we’ve learned, adapt it for your company’s purposes, and grow accessibility in a sustainable way that’s integrated into your culture.

Accessibility is a journey. Back in 2016, we realized we needed to evolve our approach to accessibility, so we set out to rebuild our company-wide accessibility program in a more systematic way that allowed us to measure progress and set targets. We started by studying best practices in maturity models—the Carnegie Mellon Capability Maturity Model and the Level Access Digital Accessibility Maturity Model. The Microsoft Accessibility Leadership Team discussed the wisdom from these models and then considered the dimensions and criteria that would work for Microsoft. This led to the development of the Accessibility Evolution Model (AEM), which we have been using and improving for over four years. This model has enabled us to understand year-over-year growth, by division and function, and track progress: in short, to manage accessibility like a business.

The model defines how developed an organization is in addressing a business problem like accessibility and includes a set of structured levels that describe how well the behaviors, practices, and processes of an organization can reliably and sustainably produce desired outcomes.

A graphic illustrating the different types of accessibility evolution models.

The AEM comprises eight overarching dimensions by which we assess our accessibility journey: People & Culture; Vision, Strategy and Engagement; Investments; Standards; Training, Support & Tools; Procurement; Product Development Lifecycle; and Sales, Marketing & Communications. We realize that each organization has its own pace and starting point, and you may have your own criteria, but the starting point for building a culture of accessibility and disability inclusion is People.

At Microsoft, we approach inclusion in everything we do and consider disability a strength. The more you take this approach, the more your culture will improve and evolve. It really starts by hiring people with disabilities and empowering that talent as an integral part of your organization. The mission of Microsoft is to empower every person and every organization to achieve more, so inclusion of people with disabilities is deeply connected to the core of our company goals. By empowering talent with disabilities, we gain expertise that enables us to build products and services that reflect the diverse needs of our global customers. Our focus on inclusive hiring programs, including the Autism Hiring Program, is at the core of building a long-term successful organization.

Additionally, it’s important that every employee is trained on accessibility and understands why and how to be inclusive using accessibility. We created an ‘Accessibility in Action Badge’ for our employees: 90 minutes of virtual online content that shines a spotlight on how technology can empower everyone. After receiving a lot of positive feedback, we created a similar program for other employers, nonprofits, and consumers to take alongside our employees, which launched in May this year. Invest your time to complete the accessibility fundamentals learning path today on Microsoft Learn and consider how similar training could benefit your organization.

The AEM has helped us to understand the maturity of our Product Development Lifecycle. As a technology company, this is imperative, but the same principles apply to every company regardless of what your ‘product’ is. This dimension focuses on how to plan, design, code, build & deploy, test, and receive feedback, to ensure requirements are built into the development lifecycle and that accessibility is a part of the normal engineering processes. While much of this is baked into our formal processes, we are also aware of the need to keep fostering innovation and creativity within technology. It is important to set up repeatable processes while also providing flexibility within engineering teams, particularly in design phases.

Looking at the five levels within the development lifecycle: at the lowest level, teams operate in a reactive mode, learning accessibility, testing, and finding and remediating bugs. As we grow in maturity, we begin to see proactive activities, such as spec reviews with an inclusive lens. This reduces reactive bug fixes and allows us to “shift left,” which means moving the accessibility lens and actions upstream into the conceptual and design phases and, most importantly, listening to user needs. A great and timely example of this is accessibility features in Microsoft Teams, including custom backgrounds and live captions. These features stemmed from one of our engineers in the Deaf community who saw an opportunity to improve the product, resulting in features that are now widely used for collaboration. In fact, we saw a 30x increase in captions usage from February to April this year as more and more people leveraged the feature to collaborate globally as a result of the pandemic.

Remember, there is no one-size-fits-all approach to creating a sustainable culture of accessibility. You’ll need to tailor your program to fit the dynamics of your organization so that it’s embedded into your DNA and, most importantly, part of how you run your business. You’ll also need to evolve the model as you mature and learn which factors and dimensions fuel growth. We have a long way to go on our own journey, but our hope is that by sharing more about it and how the Microsoft AEM has helped us mature accessibility at Microsoft, we provide you with a guide and operational model to learn from and accelerate your journey, no matter your starting point.

There is so much more that we want to share that can’t be covered in the span of this blog so we encourage you to check out the resources we’ve made available on Microsoft.com/Accessibility.

Posted on Leave a comment

New AI for Accessibility grantee projects aim to drive mental health research, data insights and innovations

Through our work in the Microsoft AI for Accessibility program, we have learned there are big gaps in mental health services around the globe. In some countries, there may only be one mental health professional per 100,000 people. When paired with the reality that 1 in 5 people have a mental health condition, we are asking how technology can and should be involved. In February, we shared our call for project proposals that aim to accelerate mental health research, data insights, and innovations using AI, and today we want to highlight the projects we’re supporting.

Adaptive text-messaging with Mental Health America

Of the 89% of people who screened positive for major depression through Mental Health America’s online survey last year, 79% do not want to pursue psychotherapy or medications, yet 50% want access to digital tools. There are thousands of mental health apps available, but even the top 30 apps see 97% of people stop using them in the first two weeks. Northwestern University and University of Toronto, in partnership with Mental Health America, are developing an adaptive text-messaging service to help people better manage their mental health. Text messages can reach everyone with bite-sized information and prompts, and to increase people’s engagement, machine learning algorithms will be used to discover how to customize the content and timing of messages. By combining clinical psychology, human computer interaction, and machine learning, they hope to design interventions that are easily delivered, effective, and engaging.
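The adaptive element of such a service can be pictured as a simple multi-armed bandit: try different send times, observe whether the person engages, and gradually favor the timing that works best for them. The sketch below is purely illustrative (the class, its parameters, and the engagement reward model are our own assumptions, not the project’s actual algorithm):

```python
import random

class TimingBandit:
    """Epsilon-greedy choice among candidate send times (hours of the day)."""

    def __init__(self, send_times, epsilon=0.1, seed=None):
        self.send_times = list(send_times)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {t: 0 for t in self.send_times}     # messages sent at each time
        self.rewards = {t: 0.0 for t in self.send_times}  # engagements observed

    def _mean_reward(self, t):
        return self.rewards[t] / self.counts[t] if self.counts[t] else 0.0

    def choose(self):
        # Explore occasionally; otherwise exploit the best-performing send time so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.send_times)
        return max(self.send_times, key=self._mean_reward)

    def update(self, send_time, engaged):
        # engaged is 1 if the user interacted with the message, 0 otherwise.
        self.counts[send_time] += 1
        self.rewards[send_time] += engaged
```

A production system would of course personalize on far richer signals than timing alone, but the core loop — act, observe engagement, adjust — is the same idea the researchers describe.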

Understanding empathy in text-based peer support

While understanding the time and frequency for when a person wants to engage with mental health prompts is important, the tone and language of messages plays a unique role. The University of Washington, in partnership with TalkLife and Supportiv, is building a natural language model to understand empathy in text-based peer support. By adapting measurements of empathy developed in clinical therapy settings, they intend to create an annotation rubric for the millions of messages in their dataset, train models to recognize aspects of empathy, then offer suggestions to make responses more empathetic. This project will help researchers better understand the language of empathy within a mental health scenario and offer an opportunity to explore how to improve human connection among the peer support volunteer community and those seeking help.
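In spirit, the pipeline resembles ordinary supervised text analysis: annotate messages against an empathy rubric, score responses, and suggest more empathetic alternatives. As a toy illustration only — this keyword heuristic and its cue list are hypothetical stand-ins, not the project’s trained language model:

```python
# Cue phrases a hypothetical annotation rubric might associate with empathy.
EMPATHY_CUES = (
    "that sounds", "i understand", "i'm sorry", "you must feel",
    "i hear you", "that must be", "thank you for sharing",
)

def empathy_score(message: str) -> float:
    """Fraction of rubric cues present in the message (0.0 to 1.0)."""
    text = message.lower()
    hits = sum(1 for cue in EMPATHY_CUES if cue in text)
    return hits / len(EMPATHY_CUES)

def more_empathetic(original: str, suggestion: str) -> bool:
    """True if a suggested rewrite scores higher than the original reply."""
    return empathy_score(suggestion) > empathy_score(original)
```

A trained model would capture far subtler signals than surface phrases, which is precisely why the researchers are annotating real peer-support messages rather than relying on keyword lists like this one.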

Woman working on her laptop

Deliver more impactful services in India

In addition to online peer support forums, crisis lines may be one of the only ways people can get mental health support in some parts of the world. People calling for support can experience long wait times, dropped calls, or be matched with volunteers who don’t share the same sociocultural background or lived experiences. Georgia Tech, working with Befrienders India, is developing a dashboard and models intended to match crisis line callers with volunteers based on demographic and sociocultural characteristics, the needs and issues of the callers, and the lived experiences of the volunteers. The team aims to explore if AI can not only improve the efficiency and staffing of the crisis line system, but more importantly, help deliver more impactful services for people.

Photo collage of Dr. Munmun De Choudhury, Principal Investigator; Dr. Neha Kumar, Co-Principal Investigator; Sachin Pendse, PhD student at Georgia Tech

More to learn, more work to do

Digital tools and peer services are not replacements for mental health professionals, but our grantees will be investigating how to customize technology support for language and cultural background, help us communicate with each other with more empathy, and deliver nudges at times and in ways that are the most meaningful. We are grateful for the passion and commitment to inclusion each of our grantees has demonstrated, and we look forward to sharing more as this research progresses.

If you are interested in applying for the Microsoft AI for Accessibility program, our next grant application deadline is July 30, 2020, and it is an open call for any project idea related to disability, accessibility, and AI. We will then have two additional focused award rounds: one on Smart Cities and Transportation with a deadline of December 15, 2020, and a second on Education with a deadline of March 12, 2021. Check out our application FAQs for more details on the program.