Microsoft partners with Team Gleason to create a computer vision dataset for ALS




Microsoft and Team Gleason, the non-profit organization founded by former NFL player Steve Gleason, today launched Project Insight to create an open dataset of facial images of people with amyotrophic lateral sclerosis (ALS). The organizations hope to foster innovation in computer vision and expand the potential of connectivity and communication for people with accessibility challenges.

Microsoft and Team Gleason say that existing machine learning datasets do not represent the diversity of people with ALS, a condition that affects about 30,000 people in the U.S. The disease and its treatment can also interfere with eye tracking; for example, the drugs that control excess saliva can cause dry eyes.

Project Insight will investigate how to use data and artificial intelligence with the front-facing cameras already present in many assistive devices to predict where a person is looking on a screen. Team Gleason will work with Microsoft’s Health Next Enable team to collect images of people with ALS looking at their computer screens so the teams can train AI models more inclusively. (Microsoft’s Health Next team, located within its Health AI division, focuses on artificial intelligence and cloud-based services to improve health outcomes.) Participants will complete a brief medical history questionnaire and be prompted through an app to submit images of themselves using their computers.
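As a rough illustration of the kind of model such a dataset could support, the sketch below frames gaze prediction as supervised regression from a front-camera face crop to a normalized on-screen (x, y) target. The architecture, input size, and training step are assumptions for illustration, not the models Microsoft or Team Gleason are building.

```python
# Minimal sketch: regressing on-screen gaze coordinates from a webcam face crop.
# The architecture, input size, and training setup are illustrative assumptions.
import torch
import torch.nn as nn

class GazeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two outputs: normalized (x, y) position of the gaze target on screen.
        self.head = nn.Linear(64, 2)

    def forward(self, face_crop):          # face_crop: (batch, 3, 96, 96)
        feats = self.features(face_crop).flatten(1)
        return self.head(feats)            # (batch, 2) predicted screen coords

model = GazeRegressor()
dummy_faces = torch.rand(8, 3, 96, 96)     # stand-in for labeled dataset images
dummy_targets = torch.rand(8, 2)           # where each participant was looking
loss = nn.functional.mse_loss(model(dummy_faces), dummy_targets)
loss.backward()                            # one illustrative training step
```

The point of a broader, more representative dataset is that the same training procedure can then work for faces, eye conditions, and head positions that under-representative data would cause a model to miss.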

“The progression of ALS can be as diverse as the individuals themselves,” said Blair Casey, Team Gleason’s director of impact. “Therefore, access to computers and communication devices should not be a one-size-fits-all solution. We will collect as much information as possible from 100 people living with ALS so that we can develop tools for everyone to use effectively.”

Microsoft and Team Gleason estimate that the project will collect and share 5 TB of anonymized data with researchers on data science platforms such as Kaggle and GitHub.

“There is a significant lack of disability-specific data that is needed to pursue these innovative and complex opportunities within the disability experience,” said Mary Bellard, senior architect for accessibility at Microsoft. “This is not about disability and accessibility improving AI. It’s about how AI improves accessibility and … is better suited to the experience of disability.”

Project Insight follows Microsoft’s GazeSpeak, an accessibility app for people with ALS that runs on a smartphone and uses artificial intelligence to convert eye movements into speech so that a conversation partner can understand what is being said in real time. In testing, GazeSpeak proved much faster than the low-tech boards that display letters in groups, which people with ALS have historically used. Specifically, the app helped users complete sentences in 78 seconds on average, compared with 123 seconds using the boards.

Microsoft is not alone in leveraging artificial intelligence to address the challenges associated with ALS. London-based engineer Julius Sweetland created OptiKey, free software that uses standard eye-tracking hardware to follow a user’s eye movements. An on-screen keyboard lets users select letters with their gaze, with automatic word suggestions appearing much as they would on an iOS or Android smartphone.
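Gaze keyboards of this kind are commonly implemented with dwell selection: a key is typed once the user’s gaze rests on it for a threshold amount of time. The sketch below is a generic illustration of that idea; the threshold and data format are assumptions, not OptiKey’s actual implementation.

```python
# Sketch of dwell-based key selection, the common pattern behind gaze keyboards.
# The dwell threshold and sample format are illustrative assumptions.
DWELL_SECONDS = 0.8          # how long the gaze must rest on a key to select it

def select_keys(gaze_samples):
    """gaze_samples: iterable of (timestamp, key_under_gaze or None)."""
    typed = []
    current_key, dwell_start = None, None
    for ts, key in gaze_samples:
        if key != current_key:
            current_key, dwell_start = key, ts       # gaze moved to a new key
        elif key is not None and ts - dwell_start >= DWELL_SECONDS:
            typed.append(key)                         # dwell threshold reached
            dwell_start = ts                          # reset for repeated letters
    return typed

# Example: the gaze rests on "H" and then "I" long enough to type both.
samples = [(0.0, "H"), (0.5, "H"), (0.9, "H"), (1.0, "I"), (1.5, "I"), (1.9, "I")]
print(select_keys(samples))   # ['H', 'I']
```

A word-suggestion list can then be driven from the partially typed string, so users select whole words rather than every letter.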

In August, Google AI researchers working with the ALS Therapy Development Institute shared details about Project Euphonia, a speech-to-text transcription service that can dramatically improve the quality of speech synthesis and generation for people with speech impairments. To develop the model, Google solicited data from people with ALS and used phoneme errors (mistakes involving the perceptually distinct units of sound in a language) to reduce word error rates.
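For reference, word error rate is conventionally the Levenshtein edit distance between the reference transcript and the system’s output, divided by the number of reference tokens; running the same calculation over phoneme sequences gives a phoneme error rate. The function below is a textbook illustration of that metric, not Project Euphonia’s evaluation code.

```python
# Generic word/phoneme error rate via Levenshtein edit distance (illustrative only).
def error_rate(reference, hypothesis):
    """reference, hypothesis: lists of words (for WER) or phonemes (for PER)."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = minimum edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[n][m] / max(n, 1)

print(error_rate("the cat sat".split(), "the cat sat down".split()))  # 0.33...
```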
