Most of us have not heard of machine translation for sign language, but it’s an exciting technological development that helps people who are deaf or hard of hearing communicate with the hearing world, most of which does not know sign language. Machine translation systems for sign language make this possible by automatically converting signs into text or speech, and speech or text back into sign language, without the need for a human interpreter.
Machine translation for sign language has been around since the 1970s, but it has been difficult for developers to perfect the technology because sign language constructs differ greatly from those of spoken languages.
Here are a few machine translation inventions from the past and some more recent developments on the market.
The Sign Language Glove
The History of the Translating Glove
In 1988, James Kramer and Larry Leifer, researchers at Stanford University, invented the first “talking glove” to improve communication between deaf and hearing individuals. The glove translated sign language into text or speech as the person signed.
Then, in 2001, a high school student named Ryan Patterson created a next-generation sign language glove that had sensors on each finger. As the person wearing the glove signed, the movements were translated into text on a screen. It wasn’t perfect: the glove could only translate individual letters from the American Manual Alphabet, but it still received plenty of attention and praise.
Similar types of gloves were invented all over the world, but none of them could translate accurately enough to be sold publicly.
Most recently, in 2016, two undergraduate students, Thomas Pryor and Navid Azodi, invented gloves that transmit sign data over Bluetooth to a computer, which translates the signs into text or speech played through a speaker. The gloves, called SignAloud Gloves, received national attention.
The Controversy Surrounding Sign Language Gloves
Despite these advances in the technology, the deaf community and linguists alike have not responded well to the sign-language gloves invented to date. Inventors often fail to consult the deaf community about its needs; instead, products have been built around what the hearing world prefers. Deaf individuals are expected to wear sign-language gloves so that hearing people can understand them more easily, but these tools do nothing for communication in the opposite direction: the gloves don’t translate what the hearing person is saying into sign language.
In the future, developers, designers and engineers need to collaborate with the deaf community to understand their needs and desires when it comes to machine translation. After all, these technologies should help deaf users as much as they help hearing speakers.
Luckily, there have been some exciting new developments in machine translation for sign language. A few innovative companies are creating products that are beneficial for hearing speakers and deaf signers alike.
Here are two companies that have invented notable products that are being recognised worldwide:
SignAll
SignAll 1.0 is the first product in the world to allow for real-time communication between deaf signers and hearing speakers through automated American Sign Language (ASL) translation technology.
Deaf and hearing individuals each communicate in their own language through an on-screen chat dialogue that combines ASL and spoken language. How does it work? The system has two monitors, one for the deaf user and one for the hearing speaker. The deaf individual wears a pair of gloves and signs in front of cameras; their signs are translated into text that the hearing individual can read. The hearing person’s spoken response is then converted into text by an automatic speech recognition system, so the deaf individual can read the reply.
The SignAll system can be used in business and education settings. The technology increases accessibility for deaf employees in workplaces and allows companies to offer better customer service for deaf customers. It can also help friends and families communicate with one another.
KinTrans
KinTrans is a startup based in Dallas that’s currently developing machine translation software that transforms sign language into spoken dialogue.
This advanced technology works by using a 3D motion-sensing video camera to observe a person signing with their hands and body. The system then translates their signs and both speaks them aloud and displays them on a connected digital screen.
In the real world, envision a deaf individual walking into a store, standing in front of the device and signing. The machine translates what they are saying, and the hearing person can type a reply that is signed by an animated avatar on the screen. What makes this technology valuable is that it serves both the deaf community and the hearing world.
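KinTrans hasn’t published its models or APIs, but the round trip described above (camera observes signing, the system speaks and displays the translation, and the typed reply is rendered by a signing avatar) can be illustrated with a toy sketch. Everything here is hypothetical: the feature names, the lookup table standing in for a trained classifier, and the print statements standing in for text-to-speech and avatar rendering.

```python
# Toy vocabulary: a real system distinguishes thousands of signs
# using a trained model over 3D motion-capture features.
SIGN_VOCABULARY = {
    ("flat-hand", "chest-out"): "thank you",
    ("point", "self"): "I",
}

def classify_gesture(motion_features):
    """Map motion-capture features to a sign (toy lookup, not a model)."""
    return SIGN_VOCABULARY.get(tuple(motion_features), "[unknown sign]")

def sign_to_output(motion_features):
    """Translate a gesture, then 'speak' it and show it on screen."""
    text = classify_gesture(motion_features)
    print(f"[speaker] {text}")   # stands in for text-to-speech
    print(f"[screen]  {text}")   # shown on the connected display
    return text

def reply_via_avatar(typed_text):
    """The hearing person's typed reply, rendered by a signing avatar."""
    print(f"[avatar signs] {typed_text}")
    return typed_text

# One store interaction: the deaf customer signs, the clerk types back.
sign_to_output(["flat-hand", "chest-out"])
reply_via_avatar("You're welcome!")
```

The sketch shows why the design helps both parties: the deaf signer never has to write, and the hearing clerk never has to sign.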
KinTrans technology is designed for use in malls, airports, hotels and hospitals. It is currently being tested in governmental service-related areas and at a bank in Dubai. The system can distinguish thousands of signs in American and Arabic Sign Language with 98% accuracy. Future versions of the technology will include Portuguese Sign Language and Indo-Pakistani Sign Language.
Machine translation for sign language has seen many improvements since it was first developed in the 1970s. Thankfully, recent advancements have paid more attention to the needs of deaf signers in addition to those of the hearing world.
As the technology continues to advance, artificial intelligence will more commonly aid interactions between deaf individuals and people who don’t know sign language. As accuracy and consistency improve, more deaf users will come to trust these tools, gaining greater independence in their daily lives and an easier way to communicate with others. The technology can also transform how businesses connect with deaf customers and improve their buying experiences.