Inspiration

COVID-19 poses great challenges for the deaf and hard of hearing community. Most online communication platforms were not designed with them in mind. Under ordinary circumstances, an interpreter could be present in the workplace or classroom to help facilitate communication. Without access to an interpreter, digital communication can isolate this community.

We created Sign Assist, a digital interpreter, to make online communication more accessible for all.

What it does

Sign Assist is a digital ASL interpreter. Using a Kinect camera, Sign Assist tracks user movements and patterns to detect different signs, and it can output these signs as both text and speech.

How we built it

Using the Kinect SDK 2.0, we trained gesture-detection machine learning models on 38 features parsed from labeled video clips of team members performing target signs. We recorded the clips with Kinect Studio 2.0, labeled them in Visual Gesture Builder, and compiled the trained signs into a database. We built the user interface in C# with Windows Presentation Foundation (WPF). At runtime we track user movements from the Kinect, match them against the signs in our database to get a confidence score for each, and output the sign with the greatest confidence. We also used the Windows speech synthesis library to speak the detected signs aloud.
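
For a sense of how this fits together, here is a minimal sketch of the detection-to-speech loop using the Kinect SDK 2.0 and Visual Gesture Builder types. It is illustrative rather than our exact code, and the database file name `signs.gbd` is a placeholder:

```csharp
using System.Linq;
using System.Speech.Synthesis;
using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

class SignDetector
{
    private readonly KinectSensor sensor = KinectSensor.GetDefault();
    private readonly SpeechSynthesizer synth = new SpeechSynthesizer();
    private VisualGestureBuilderFrameSource gestureSource;
    private VisualGestureBuilderFrameReader gestureReader;

    public void Start()
    {
        sensor.Open();

        // Load the signs trained in Visual Gesture Builder ("signs.gbd" is a placeholder name).
        var database = new VisualGestureBuilderDatabase(@"Database\signs.gbd");

        // Frame source for one body; its TrackingId is assigned from body-frame data elsewhere.
        gestureSource = new VisualGestureBuilderFrameSource(sensor, 0);
        gestureSource.AddGestures(database.AvailableGestures);

        gestureReader = gestureSource.OpenReader();
        gestureReader.IsPaused = true;   // keep paused until a tracked body's TrackingId is assigned
        gestureReader.FrameArrived += OnGestureFrameArrived;
    }

    private void OnGestureFrameArrived(object sender, VisualGestureBuilderFrameArrivedEventArgs e)
    {
        using (var frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null || frame.DiscreteGestureResults == null) return;

            // Among all detected signs, keep the one with the greatest confidence score.
            var best = frame.DiscreteGestureResults
                .Where(kv => kv.Value != null && kv.Value.Detected)
                .OrderByDescending(kv => kv.Value.Confidence)
                .FirstOrDefault();

            if (best.Key != null)
            {
                string word = best.Key.Name;   // e.g. "hello"
                synth.SpeakAsync(word);        // speak the detected sign
                // ...and append `word` to the WPF text output as well
            }
        }
    }
}
```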

Challenges we ran into

This was the first time anyone on our team had worked with a Kinect, so we ran into several difficulties integrating our code with the Kinect ecosystem. For instance, we struggled to manage the bodyTrackingID provided by the device. We found creative ways to mitigate these issues and even turned some of them into features: thanks to our bodyTrackingID workaround, our user interface can track up to 6 people signing simultaneously :D
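
The workaround looks roughly like the sketch below: one gesture frame source per body slot (the Kinect v2 reports up to six bodies), each re-bound to whatever TrackingId the sensor currently reports. The class and member names are illustrative, not our exact code:

```csharp
using System.Linq;
using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

// Sketch: one gesture frame source per body slot, re-bound to whatever
// bodyTrackingID the sensor reports, so up to six signers can be followed at once.
class MultiBodyTracker
{
    private readonly Body[] bodies;
    private readonly VisualGestureBuilderFrameSource[] gestureSources;

    public MultiBodyTracker(KinectSensor sensor, VisualGestureBuilderDatabase database)
    {
        int slots = sensor.BodyFrameSource.BodyCount;   // 6 on Kinect v2
        bodies = new Body[slots];

        gestureSources = Enumerable.Range(0, slots)
            .Select(_ =>
            {
                var source = new VisualGestureBuilderFrameSource(sensor, 0);
                source.AddGestures(database.AvailableGestures);
                return source;
            })
            .ToArray();
    }

    // Hooked up to BodyFrameReader.FrameArrived.
    public void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (var bodyFrame = e.FrameReference.AcquireFrame())
        {
            if (bodyFrame == null) return;

            bodyFrame.GetAndRefreshBodyData(bodies);

            for (int i = 0; i < bodies.Length; i++)
            {
                if (bodies[i] != null && bodies[i].IsTracked &&
                    gestureSources[i].TrackingId != bodies[i].TrackingId)
                {
                    // Re-bind this slot when the sensor hands out a new TrackingId
                    // (e.g. a person walked out of frame and came back).
                    gestureSources[i].TrackingId = bodies[i].TrackingId;
                }
            }
        }
    }
}
```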

Accomplishments that we're proud of

Even though we started this project with no experience with Kinect or C#, we were able to create something in 36 hours that has the potential to make a real impact and help a community disproportionately affected by the COVID-19 pandemic. With more signs added to the database, Sign Assist could meaningfully increase the accessibility of digital communication.

What we learned

We learned how to program in C#, build a user interface with Windows Presentation Foundation, and integrate with the Kinect SDK 2.0. We also learned about training gesture-detection machine learning models and matching time-sensitive inputs against gesture vectors in the database.

What's next for Sign Assist

More signs in the database and a user interface overhaul. We plan to add a dictionary mode where you can look up a sign and see its associated word, perhaps with a video demonstrating the proper motion for the sign. Growing the sign database will also give our users a richer vocabulary for communication.
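
As a rough, hypothetical sketch of the planned dictionary mode (none of these names exist in the current code), it could be as simple as a lookup table mapping each sign name to its word and a demo clip:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the planned dictionary mode: look up a sign by name
// and get back the associated word plus a video demonstrating the motion.
class SignEntry
{
    public string Word { get; set; }
    public string VideoPath { get; set; }   // clip showing the proper motion
}

class SignDictionary
{
    private readonly Dictionary<string, SignEntry> entries =
        new Dictionary<string, SignEntry>();

    public void Add(string signName, string word, string videoPath) =>
        entries[signName] = new SignEntry { Word = word, VideoPath = videoPath };

    public SignEntry Lookup(string signName) =>
        entries.TryGetValue(signName, out var entry) ? entry : null;
}
```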

Built With

C#, Kinect SDK 2.0, Kinect Studio 2.0, Visual Gesture Builder, Windows Presentation Foundation (WPF), Windows Speech Synthesis
