Enabling Accessibility Through Technology

What technologies should we be exploring that will support every learner, including those who can’t see or hear or speak?

I have a friend who has been blind since birth. He loves to learn, and spends a lot of time listening to videos, podcasts, online courses, eBooks, and other audio resources. He spends a great deal of his waking day interacting with technology: he speaks to his voice-enabled devices, and opts for audio versions of anything and everything that interests him. Yes, he interacts with people, but it always astounds me how audio and voice-assisted technology have changed his life.

That got me thinking more deeply about accessibility. More specifically, I questioned how technology companies are addressing accessibility issues these days. Luckily for me, accessibility was a hot topic at both Microsoft’s Build 2019 developer conference and Google’s I/O 2019 conference.


At Build 2019, Microsoft shared progress on AI for Accessibility, an initiative launched in 2018 that committed $25 million over five years to institutions and individuals developing artificial intelligence (AI) technology designed to support people with disabilities. Those who apply and are selected have access to Microsoft engineers, the Azure AI Platform, and additional funds to cover related costs.

Exploring AI for Accessibility further, I found that InnerVoice has been funded by the initiative to add functionality to its existing platform. InnerVoice is an app ($49.99) that teaches social communication skills, and the funding enabled it to add Visual Language—a novel feature that relies on Azure AI to teach language and literacy skills. From the InnerVoice site:

“The camera displays what you’re looking at. Take a picture and watch InnerVoice’s AI system label your picture with text and describe it with speech—allowing users to see the relationships shared among the environment, speech, language, and text.”


Noteworthy from Google’s I/O conference are these three initiatives (still in the research phase):

  1. Project Euphoria: Assisting people with speech impairments
  2. Live Relay: Assisting people who are deaf or hard of hearing
  3. Project Diva: Providing independence and autonomy via Google Assistant

Project Euphoria is part of Google’s AI for Social Good program. Using AI, Google hopes to improve voice recognition so that it can consistently recognize and transcribe impaired speech. According to the Google blog:

“To do this, Google software turns the recorded voice samples into a spectrogram, or a visual representation of the sound. The computer then uses common transcribed spectrograms to ‘train’ the system to better recognize this less common type of speech. Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with Lou Gehrig’s Disease (ALS), but we believe our research can be applied to larger groups of people and to different speech impairments.”
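The spectrogram step the blog describes is standard signal processing. As a rough illustration (not Google's actual pipeline), here is how a recorded waveform can be turned into a spectrogram in Python with SciPy; the synthetic tone below stands in for a real voice sample:

```python
import numpy as np
from scipy import signal

# One second of a synthetic "voice" clip: a 440 Hz tone plus noise,
# standing in for a recorded speech sample.
sample_rate = 16000  # Hz, a common rate for speech audio
t = np.linspace(0, 1, sample_rate, endpoint=False)
voice = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sample_rate)

# Convert the waveform into a spectrogram:
# rows are frequency bins, columns are moments in time.
freqs, times, spec = signal.spectrogram(voice, fs=sample_rate)

print(spec.shape)  # (frequency bins, time frames)
```

Each column of the resulting array is a snapshot of which frequencies are present at one moment; a model trained on many transcribed spectrograms can learn to map those visual patterns back to words, including less common speech patterns.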

The second initiative, Live Relay, enables people who are deaf or hard of hearing to have their phone listen and speak for them, using its speech recognition and text-to-speech functionalities. The technology carries a conversation by converting incoming speech into text in real time, and then converting the user’s typed responses back into a spoken voice.
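Conceptually, each turn of a Live Relay conversation is a pair of conversions. The Python sketch below simulates that loop; the function names and the "[spoken]" marker are my own placeholders, not Google's implementation:

```python
# A toy simulation of one Live Relay conversational turn.
# speech_to_text / text_to_speech are hypothetical stand-ins for the
# phone's on-device recognition and synthesis; "[spoken]" marks audio.

def speech_to_text(audio):
    """Stand-in for on-device speech recognition."""
    return audio.replace("[spoken] ", "")

def text_to_speech(text):
    """Stand-in for on-device speech synthesis."""
    return "[spoken] " + text

def relay_turn(incoming_audio, typed_reply):
    """Convert the caller's speech to readable text, and the user's
    typed reply to a spoken voice for the caller."""
    caller_text = speech_to_text(incoming_audio)
    reply_audio = text_to_speech(typed_reply)
    return caller_text, reply_audio

text, audio = relay_turn("[spoken] Hi, is this Sam?", "Yes, speaking!")
print(text)   # the user reads this
print(audio)  # the caller hears this
```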

I see Live Relay as useful to all users, and I can easily imagine integrating this functionality into our training solutions for all learners.

The third initiative from Google is Project Diva, which enables users to input commands to Google Assistant without using their voices. This is achieved through the use of external devices—such as switches or buttons. Project Diva also looks for ways to pair body movements, gestures, facial expressions, and gross motor movements with Google Assistant commands.

You can actually build one of these devices on your own by following the instructions on the Hackster site: https://www.hackster.io/98661/diva-diversely-assisted-aed139
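At its core, a Diva-style device is a mapping from physical inputs to Assistant commands. The Python sketch below illustrates that idea; the switch names, command strings, and `handle_switch` helper are all hypothetical examples, not part of any Google API:

```python
# Hypothetical mapping from external switch inputs to Assistant commands.
# The switch names and command strings here are illustrative only.
SWITCH_COMMANDS = {
    "big_red_button": "play music on the living room speaker",
    "lever_left": "turn on the lights",
    "lever_right": "turn off the lights",
}

def handle_switch(switch_id):
    """Translate a switch press into its spoken-command equivalent."""
    command = SWITCH_COMMANDS.get(switch_id)
    if command is None:
        return None  # unrecognized switch: do nothing
    # In a real build, this is where the command would be sent on to
    # Google Assistant rather than returned as a string.
    return f"OK Google, {command}"

print(handle_switch("big_red_button"))
```

The value of this pattern is that the trigger is decoupled from the command: any input a user can physically operate can stand in for a spoken request.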


Apple is doing interesting work, as well, with VoiceOver, a gesture-based screen reader that describes everything happening on the device screen. If you are an iPhone user, I recommend that you activate it and experience this type of interaction with your device. It may have you thinking differently about how you can get your learners to interact with technology.


As I explore this landscape, I am looking for ways to provide access to all of my learners. I am continuously reminded of the little (and big) things that get in the way of learner access and success, and consider how technology might play a role in removing those obstacles.

I promise to return to this topic and share insights into the apps and tools we use on a daily basis. For now, I encourage you to consider how you are meeting the needs of your learners who can’t see or hear or speak. What technologies should we be exploring that will support every learner? How can we leverage the research that is being done in this space in support of learning and development?


Phylise Banner is a learning experience design consultant with more than 25 years of vision, action, and leadership experience in transformational learning and development approaches. A pioneer in online learning, she is an Adobe Education Leader, Certified Learning Environment Architect, STC Fellow, performance storyteller, avid angler, and private pilot.

