With a twist or shake of your wrist, your smartphone can interpret motion to take a picture, turn on a light, and more. Last year, George Mason University computer science professors Parth Pathak and Huzefa Rangwala were brainstorming how similar technology could help society in even greater ways. Their idea? To automatically translate sign language into text or speech.
“There are some products that can do gesture recognition, but they’re very preliminary. And it’s very different from ASL [American Sign Language], which is not just a few gestures—it’s thousands of words,” said Pathak, principal investigator on the Summer Team Impact Project funded by Mason’s Office of Student Scholarship, Creative Activities, and Research (OSCAR).
This summer, nine Mason undergraduates joined in the research that could help make the technology a reality.
“The goal would be to deliver a readable message to a device so that it’s bridging the gap between ASL users and non-users,” said therapeutic recreation senior Riley Wilkerson, “an easier, more effective, and more personal way of communicating.”
Three teams of students are experimenting with different sensors: a wireless radar, a camera, and an inertial measurement unit (a wearable motion sensor of the kind used in smartphones and Fitbits). Each sensor offers certain opportunities but also challenges, including privacy and ease of use, said Pathak, who is guiding the students on the project along with Mason computer science professor Jana Kosecka; Linda Mason, director of Mason’s Helen A. Kellar Institute for Human disAbilities; and graduate student Panneer Selvam Santhalingham.
On each team, a student familiar with ASL signs in front of a sensor that collects data about the motion or the environment. Computer science and engineering students refine the data to find patterns and write machine learning algorithms—code that allows the computer to recognize the signs.
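The pipeline the students describe (sensor readings, feature extraction, then a learned classifier) can be sketched in a few lines. The toy below is purely illustrative, not the team’s actual system: the synthetic accelerometer signals, the summary-statistic features, and the scikit-learn random forest are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_sign_samples(n_per_sign, n_signs=3, window=50):
    """Synthetic 3-axis accelerometer windows; each 'sign' is a distinct
    motion amplitude plus noise (illustrative stand-in for real sensor data)."""
    X, y = [], []
    t = np.linspace(0, 1, window)
    for sign in range(n_signs):
        for _ in range(n_per_sign):
            motion = (sign + 1) * np.sin(2 * np.pi * t)
            sample = np.stack([motion + 0.1 * rng.standard_normal(window)
                               for _ in range(3)])
            X.append(sample)
            y.append(sign)
    return np.array(X), np.array(y)

def extract_features(X):
    """Per-axis summary statistics: mean, std, min, max."""
    return np.concatenate(
        [X.mean(axis=2), X.std(axis=2), X.min(axis=2), X.max(axis=2)], axis=1)

X, y = make_sign_samples(n_per_sign=40)   # 120 windows, 3 "signs"
F = extract_features(X)                   # shape (120, 12)
F_tr, F_te, y_tr, y_te = train_test_split(F, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(F_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(F_te))
print(f"held-out accuracy: {acc:.2f}")
```

Real sensor data is far messier than these clean sinusoids, which is why the students spend much of their time refining the data before any model sees it.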
So far, the undergraduates have “taught” their machines to recognize about 20 signs, with accuracy rates ranging from 70 to 97 percent. The fluctuations in accuracy stem from the machine learning process itself, said senior computer science major Yuanqi Du.
Diverse training data helps the computer recognize the signs more accurately, Du said. In initial trials with a single student, accuracy rates were higher. When a new ASL user was introduced, accuracy dropped; once the new user’s data was included in the training, accuracy rates rose again, Du said.
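Du’s observation can be reproduced in a toy experiment (again an assumed sketch, not the project’s code): a classifier trained on one synthetic “signer” does worse on a second signer whose feature distribution is shifted, and recovers once part of the new signer’s data joins the training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def signer_data(n, n_signs=3, offset=0.0):
    """Synthetic per-sign feature vectors; `offset` models one signer's
    personal style shifting the features (illustrative only)."""
    X = rng.standard_normal((n, 4)) * 0.3
    y = rng.integers(0, n_signs, n)
    X[:, 0] += y * 2.0 + offset   # class signal plus signer-specific shift
    return X, y

Xa, ya = signer_data(300, offset=0.0)   # signer A (training)
Xb, yb = signer_data(300, offset=1.0)   # signer B (new, unseen style)

# Train on signer A only; test on the unseen signer B.
clf = LogisticRegression(max_iter=1000).fit(Xa, ya)
acc_new = accuracy_score(yb, clf.predict(Xb))

# Retrain with half of signer B's data included; test on B's other half.
Xmix = np.vstack([Xa, Xb[:150]])
ymix = np.concatenate([ya, yb[:150]])
clf2 = LogisticRegression(max_iter=1000).fit(Xmix, ymix)
acc_mixed = accuracy_score(yb[150:], clf2.predict(Xb[150:]))

print(f"new signer: {acc_new:.2f}, after adding their data: {acc_mixed:.2f}")
```

The same logic is why the team plans to collect data from many diverse users: every signing style added to training makes the model less surprised by the next one.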
As the multi-year project continues, Pathak said the team plans to increase the number of signs the computer can recognize using data from many diverse users. They will also scale it to interpret full sentences and to pick up other gestures used in ASL, such as body tilts and micro-expressions like raising an eyebrow, he said.
“Being able to communicate instantly would hopefully remove issues [the ASL community experiences],” said Frederick Olson, a senior IT major whose parents, he said, are both deaf. That includes asking a question at a store, socializing, communicating easily with doctors during appointments, and landing better job opportunities. The technology could be life-changing, he said.
It could also be applied beyond the deaf community, helping people with autism or developmental and learning disabilities for whom communicating with spoken words is challenging, Wilkerson said.
“It could be applicable to other industries and disciplines in the future [that will work with similar technology], too,” said junior computer science major Sai Gurrapu.
And the project pushes student learning to the next level, Pathak said.
“They’re not given a fixed task here—they’re given a problem and they have to find a solution,” Pathak said.
“This project is one of a variety of opportunities [Mason] has presented to me that goes beyond just taking 15 credits each semester,” Wilkerson said. “You can only learn so much in a classroom—you have to apply it.”