Inspiration

Falls are the direct cause of over 800,000 injuries and 27,000 deaths in the senior population of North America. Many of these injuries and fatalities can be prevented with faster emergency response times.

Unintentional falls are the leading cause of injury across Canada: the Canadian Institute for Health Information (CIHI) reports that in 2018, falls resulted in almost 1,800 emergency department visits and 417 hospital admissions every day. The average length of a hospital stay after a fall was 14.3 days, compared to 7.5 days for other medical reasons.

Seniors are especially at risk. Falls among Canadian seniors cost $2 billion per year in direct healthcare costs, 1 in 3 seniors is likely to fall at least once, and 95% of hip fractures in seniors are the result of a fall.

With 50% of falls that lead to hospitalization happening at home, Mayd.ai aims to proactively support those who are at higher risk of injury after a fall.

What it does

Mayd.ai aims to protect people with accessibility needs by using a gesture-recognition system to summon emergency help on demand.

Other applications on the market tackle this problem by detecting changes in a phone's or watch's motion, such as Fall Detection by Deskshare, Fade (Android), and SmartFall. MobileHelp gives its users a pendant with an emergency button, while LifeFone and Medical Guardian use a walkie-talkie-style device to signal an incident.

Mayd.ai requires no extra devices because it works from a live video feed. It also offers a backup voice-command option through Google Assistant in case the user cannot perform the emergency gesture.

How we built it

To bring Mayd.ai to life, we began by designing mock-ups of the mobile app in Figma. We then used those mock-ups to build the mobile application with Flutter; the app stores the user's profile along with their emergency contact information.
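As a rough illustration, here is how a user record and their emergency contact might be written to Firestore from the backend using the Firebase Admin SDK. The collection and field names below are our own placeholders, not the app's exact schema:

```python
# Sketch only: collection/field names are hypothetical, not Mayd.ai's exact schema.
import firebase_admin
from firebase_admin import credentials, firestore

# Assumes a Firebase service-account key file is available locally.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

def save_user_profile(uid, name, phone, contact_name, contact_phone):
    """Store a user's profile together with their emergency contact."""
    db.collection("users").document(uid).set({
        "name": name,
        "phone": phone,
        "emergencyContact": {
            "name": contact_name,
            "phone": contact_phone,  # E.164 format, e.g. "+15551234567"
        },
    })
```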

We used Twilio to handle communication between our app and the user's emergency contact: when an alert is triggered, it sends SMS and MMS messages to notify them.
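A minimal sketch of that alerting step using Twilio's Python helper library is shown below; the environment variable names and message text are placeholders:

```python
# Sketch of the SMS alert, assuming Twilio credentials live in environment variables.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def alert_emergency_contact(contact_phone, user_name):
    """Text the emergency contact from our Twilio number."""
    client.messages.create(
        to=contact_phone,
        from_=os.environ["TWILIO_PHONE_NUMBER"],
        body=f"Mayd.ai alert: {user_name} may need help. Please check on them.",
        # media_url=["https://example.com/snapshot.jpg"],  # optional MMS attachment
    )
```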

We used an Xbox Kinect 2.0 to detect and recognize a specific gesture the user can make to signal that they are in danger, training the recognition model with Azure Project Gesture and sending web requests to our backend when a gesture is detected. On the backend, Firebase and Google App Engine store data, handle user authentication, and provide a controller for the logic between the Kinect service and Firebase/Twilio.
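That controller can be pictured as a small web endpoint the Kinect service calls when it recognizes the "mayday" gesture. The route and payload shape here are our assumptions, and the snippet reuses `db` and `alert_emergency_contact` from the sketches above:

```python
# Hypothetical App Engine controller (Flask); the route and payload are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/gesture-detected", methods=["POST"])
def gesture_detected():
    payload = request.get_json(force=True)
    uid = payload.get("uid")          # which user's Kinect fired the event
    gesture = payload.get("gesture")  # e.g. "mayday"

    if gesture != "mayday" or uid is None:
        return jsonify({"status": "ignored"}), 200

    # Look up the user's emergency contact in Firestore (db from the sketch above).
    doc = db.collection("users").document(uid).get()
    if not doc.exists:
        return jsonify({"status": "unknown user"}), 404

    user = doc.to_dict()
    alert_emergency_contact(user["emergencyContact"]["phone"], user["name"])
    return jsonify({"status": "alert sent"}), 200
```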

Challenges we ran into

While building Mayd.ai, we ran into two main challenges: the accuracy of gesture recognition on the Xbox Kinect, and integrating the various technologies together.

The team had never worked with the Kinect before, so training it to recognize specific gestures came with a steep learning curve. It took a while to get the Kinect reasonably accurate at identifying various gestures, but in the end we got it functioning properly. We had a small dataset of falls, but there were so many different possible types of fall that we felt it would be more reliable to use the 'mayday' gesture instead.

Integrating the different APIs and frameworks was also a challenge. Each of us has a different skill set and experience with different technologies, but we weren't all familiar with the same ones. Despite that, we successfully connected everything together.

Accomplishments that we're proud of

We came into Hack The North with an idea to apply this same concept to pool safety, but we weren't sure it was feasible. After several discussions, we were able to pivot and build out an idea with a unique use case.

The team is proud of how quickly we pivoted and how well we worked with new technologies. We collaborated to overcome difficulties and planned effectively to create an application that solves a real-world problem.

What we learned

The workflow for training and adding to a dataset with Project Gesture (and the Kinect) was surprisingly straightforward. It's a great technology for easing into machine learning.

What's next for Mayd.ai

We believe that integrating voice commands into the Mayd.ai alert system will make the app more effective. We also want to let users add more than one emergency contact, and to support dynamically training the model on customized poses.

Actual fall recognition was a stretch goal that proved too complicated for one weekend, but it's a definite must-have for future development. We'd also like to add alternate gestures for users with different accessibility needs.

Built With

Figma, Flutter, Twilio, Firebase, Google App Engine, Xbox Kinect 2.0, Azure Project Gesture, Google Assistant
