Inspiration

We first dove into this idea when we noticed a student navigating the hallways of Millburn High School with nothing but a stick. It seemed genuinely dangerous for this student to struggle through hallways that were packed and, to them, unfamiliar. As we looked into the issue, we learned that roughly 253 million people worldwide live with moderate-to-severe visual impairment or blindness. We realized that the two leading aids for the visually impaired, the seeing-eye dog and the cane, are centuries old and overdue for modernization. That inspired us to use modern cameras and sensors to restore spatial awareness to the visually impaired.

What it does

Our project is a clip-on device that attaches easily and comfortably to any ball cap or similar hat. It measures the distance to the nearest object in front of the user and emits audio cues that rise in pitch and repeat more frequently as an obstacle gets closer. The project also includes a fully functional front-end website, served on localhost, that makes it trivial for users to tune the hardware to their liking. We hope this product assists visually impaired people both in daily movement and in navigating new, unfamiliar spaces by providing constant feedback.
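As a rough sketch of that distance-to-sound mapping: the pin number, distance cutoffs, and frequency range below are placeholder assumptions for illustration, not the tuned values on the actual device.

```cpp
// Hypothetical Arduino sketch: map a distance reading to beep pitch and rate.
const int BUZZER_PIN = 8;   // assumed pin for the piezo buzzer
const int MIN_CM = 20;      // treat anything closer as "right in front"
const int MAX_CM = 300;     // beyond this range, stay silent

void beepForDistance(int distanceCm) {
  if (distanceCm <= 0 || distanceCm >= MAX_CM) return;  // nothing nearby
  int d = constrain(distanceCm, MIN_CM, MAX_CM);
  // Closer obstacle -> higher pitch and a shorter gap between beeps.
  int pitchHz = map(d, MIN_CM, MAX_CM, 2000, 400);
  int gapMs   = map(d, MIN_CM, MAX_CM, 50, 600);
  tone(BUZZER_PIN, pitchHz, 40);  // 40 ms beep
  delay(40 + gapMs);
}

void setup() { pinMode(BUZZER_PIN, OUTPUT); }

void loop() { beepForDistance(150); }  // 150 cm stands in for a live LiDAR reading
```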

How we built it

We used an Arduino board, a TF-Luna LiDAR sensor, and a piezo buzzer to take LiDAR input and process it into audio output. We 3D-printed parts and clips to house these components and fasten them securely to the cap. The frontend is built with HTML and JavaScript and communicates via POST requests with a backend driven by Node.js and a Bash script. The firmware that runs on the Arduino is written in C++.
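For readers curious how the LiDAR input side might look, here is a minimal sketch of polling the TF-Luna over I2C. The address (0x10) and distance registers (0x00/0x01) follow Benewake's published register map, but verify them against the manual for your firmware version, and note that the sensor must be wired into I2C mode (it defaults to UART).

```cpp
#include <Wire.h>

const uint8_t TFLUNA_ADDR = 0x10;  // TF-Luna's documented default I2C address

// Read the 16-bit distance (in cm) from registers 0x00 (low) and 0x01 (high).
int readDistanceCm() {
  Wire.beginTransmission(TFLUNA_ADDR);
  Wire.write(0x00);                                  // start at DIST_LOW
  if (Wire.endTransmission(false) != 0) return -1;   // bus error
  Wire.requestFrom(TFLUNA_ADDR, (uint8_t)2);
  if (Wire.available() < 2) return -1;
  uint8_t lo = Wire.read();
  uint8_t hi = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Wire.begin();
  Serial.begin(115200);
}

void loop() {
  Serial.println(readDistanceCm());  // print distance for debugging
  delay(100);
}
```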

Challenges we ran into

One of the toughest challenges we ran into during this hackathon was getting our LiDAR sensor, our Arduino, and our computer to communicate properly over the I2C bus; troubleshooting it until we got clean input took many hours. The second-largest challenge was connecting our frontend to our backend. To achieve this, we passed data through several files, hosted a local server, and used Bash scripts to create a functional, easy-to-use "programmer" for our product.
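For anyone debugging a similar setup: a generic I2C scanner sketch (a standard diagnostic, not code from our project) is a quick way to confirm the sensor actually acknowledges on the bus before suspecting the driver code.

```cpp
#include <Wire.h>

// Probe every 7-bit I2C address and report which ones ACK.
void setup() {
  Wire.begin();
  Serial.begin(115200);
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {   // 0 means a device ACKed
      Serial.print("Device found at 0x");
      Serial.println(addr, HEX);
    }
  }
  Serial.println("Scan complete.");
}

void loop() {}
```

If the TF-Luna never shows up at 0x10, the problem is wiring, power, or mode selection rather than the reading code.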

Accomplishments that we're proud of

First and foremost, this is the first time either of us has worked on hardware in any substantial way, let alone with a microcontroller. We are immensely proud of that, and the fact that we ended up with a working product feels like a small miracle. Secondly, we learned CAD for the first time at this hackathon, and the end product was more than satisfactory.

What we learned

This project introduced us to two areas of hacking we had never touched before. The first was hardware: this was the first time we had ever used microcontrollers or sensors to augment our programming. By working with the Arduino and our LiDAR sensor for the first time, we learned a great deal about electrical engineering and how microcontrollers function, especially about I/O and how to configure and program these components. The second was building servers to link frontend applications with backend logic; specifically, we learned how to send data with POST requests in JavaScript and how to set up servers to receive it.

What's next for HatSense

Our three developmental goals for HatSense are as follows:

1. Headphone output. As our product stands, users have to hang a piezo buzzer near their ears, which is a hazard to both the user and the product, and an annoyance to everyone nearby. Routing the audio to headphones would solve all three of these problems.

2. Directional output. This entails mounting multiple sensors pointing in different directions and using directional audio through headphones to give the user a more accurate picture of their surroundings. This would greatly improve both the effectiveness of the product and its convenience.

3. Computer vision. We hope to use cameras and computer vision to give a detailed sense of the user's surroundings, a tool that would be incredibly difficult to implement but very powerful.

In addition to these development goals, we hope to create an easy-to-follow schematic and instruction guide so that anyone can build and use our creation.
