
Software that enables deaf and hard-of-hearing individuals to use sign language over a mobile phone has been developed by researchers at the University of Washington.
The engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle.
This is the first time two-way, real-time video communication has been demonstrated over cell phones in the United States. Since the team posted a video of the working prototype on YouTube, deaf people around the country have been writing in daily.
'A lot of people are excited about this,' said principal investigator Eve Riskin, a UW professor of electrical engineering.
For mobile communication, deaf people currently rely on text messages.
'But the point is you want to be able to communicate in your native language,' Riskin said. 'For deaf people that's American Sign Language.'
Video is much better than text messaging because it's faster and conveys emotion better, said Jessica DeWitt, a UW undergraduate in psychology who is deaf and a collaborator on the MobileASL project.
She said a large part of her communication is facial expressions, which the video phones transmit.
The team tried different ways to get comprehensible sign language onto low-resolution video. They discovered that the most important part of the image to transmit in high resolution is the area around the face. This is not surprising: eye-tracking studies have shown that viewers of sign language spend most of their time looking at the signer's face.
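The article does not describe the team's implementation, but the idea of spending more bits on the face region can be sketched with off-the-shelf tools. Below is a minimal Python sketch using OpenCV's stock face detector; the block size and quantizer offsets are illustrative assumptions, not MobileASL's actual values:

```python
import cv2
import numpy as np

# Standard frontal-face Haar cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def quality_map(frame_bgr, block=16):
    """Return a per-16x16-block quantizer offset: negative (finer
    quantization, more bits) near the face, positive elsewhere.
    The offset values here are illustrative, not MobileASL's."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    qp = np.full((h // block, w // block), 4, dtype=np.int8)  # coarse default
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.3, 5):
        # Mark the blocks covering the detected face for finer quantization.
        qp[y // block:(y + fh) // block + 1,
           x // block:(x + fw) // block + 1] = -6
    return qp
```

An encoder that accepts per-block quantizer offsets could then use this map to keep the face sharp while letting the rest of the frame degrade.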
The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit. Future versions will incorporate custom tools to get better quality.
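For a sense of what 'staying within the data transmission limit' looks like with a standard compression tool, here is a hedged sketch: the article names neither the encoder nor the limit, so the ffmpeg invocation, the 30 kbps budget, and the file names below are all assumptions for illustration.

```python
import subprocess

# Assumed data budget: low-bandwidth cellular links of that era carried
# on the order of tens of kilobits per second.
BITRATE = "30k"

# Cap both average and peak H.264 bitrate so the stream never exceeds
# the link's budget ("signing.mp4" is a placeholder input).
subprocess.run([
    "ffmpeg", "-i", "signing.mp4",
    "-c:v", "libx264",
    "-b:v", BITRATE,      # target average bitrate
    "-maxrate", BITRATE,  # hard ceiling on instantaneous rate
    "-bufsize", "60k",    # small buffer keeps the rate steady
    "-an",                # drop audio; the channel is video-only
    "signing_lowrate.mp4",
], check=True)
```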
The team developed a scheme that transmits the person's face and hands in high resolution and the background in lower resolution. Now they are working on a feature that detects when people are moving their hands, so the phone can save battery and processing power while the user is not signing.
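The article does not say how that feature decides whether someone is signing; one simple approach, sketched below under assumed thresholds, is frame differencing: when consecutive frames barely change, the phone can drop to a low frame rate, or stop encoding entirely, until motion resumes.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 4.0  # mean absolute pixel difference; illustrative value

def looks_like_signing(prev_gray, curr_gray):
    """Crude activity test: noticeable frame-to-frame change is treated
    as signing. The real MobileASL classifier is not described here."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    return float(np.mean(diff)) > MOTION_THRESHOLD

cap = cv2.VideoCapture(0)  # front-facing camera
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if looks_like_signing(prev, curr):
        pass  # encode and transmit this frame at full quality
    else:
        pass  # idle: skip or heavily downsample to save battery
    prev = curr
cap.release()
```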
Source: ANI