Inspiration
1) There have been many attempts to build a comprehensive device using electrode arrays or haptic vibrations to give blind people greater confidence. Although useful for varying degrees of visual impairment, these designs are often obtrusive or provide only basic feedback (for example, whether the person is facing north, or proximity alone).
2) Learning about past attempts such as the BrainPort and Kinect-based devices pushed us to analyze which components are key to predicting obstacles and navigating around them, and where those attempts fall short.
3) Blind people commonly walk with white canes (probing canes), which are long and can be a nuisance, so a compact device worn closer to the person (the way a camera worn on the body can capture the surroundings) that relays important information would be a huge asset. Furthermore, a white cane primarily detects obstacles in the path, not obstacles higher up on the body, such as a pole.
4) Ultimately, we hope this technology gives people in visually compromising situations the confidence to roam more freely. In theory we would like to replace the cane, but ideally the device would be used alongside it to increase outdoor confidence.
What it does
The most important information is what lies directly in front of us (and our feet!). For this we use depth sensing (for distance to obstacles and warnings) and a camera with object recognition to differentiate humans from other objects. These combined objectives (conveying important spatial information, notifying the user of nearby people, and gauging proximity to objects) are what set our technology apart from the rest. We use electrode arrays to stimulate the person's skin, so that patterns of stimulation convey meaning through tactile interpretation.
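As a rough illustration of this logic, here is a minimal sketch of how a depth reading and a detection label might be combined into a tactile cue. The function name, cue names, and distance thresholds are hypothetical, not the final design.

```python
from typing import Optional

def choose_tactile_cue(depth_m: float, label: Optional[str]) -> str:
    """Map the nearest obstacle's distance and type to a tactile cue (illustrative thresholds)."""
    if depth_m < 0.5:
        return "strong_warning"    # imminent obstacle: strongest stimulation
    if label == "person":
        return "person_pattern"    # distinct pattern so people feel different from objects
    if depth_m < 1.5:
        return "proximity_pulse"   # graded pulse as things get closer
    return "idle"                  # nothing close enough to report
```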
How we built it
We built this technology to be fast, rather than relying on motors alone for feedback. We also built on previous ideas, such as the BrainPort. Our device is small enough to be worn on a more convenient part of the body, and we wanted it to be modular: because research shows that people vary in skin sensitivity, the user can add or remove electrodes. Studies on interpreting vibrations as navigation information support our approach of vibrating toward a nearby object to signal which direction to avoid. In the end we made a prototype of a system with a dual camera (for depth and objects), OpenCV code to detect objects, and a shock pattern across 4 electrodes that tells the person where the object is. Depth is then conveyed through vibrations to give a more immediate reaction stimulus.
-works completely offline to be fast and real-time
-uses both depth and computer vision
-connects to the cloud so a continuously updated model keeps improving, but is not reliant on it
-rapid haptic feedback and patterns to be interpreted
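A minimal sketch of the detection half of that prototype, assuming a standard webcam and OpenCV's built-in HOG person detector; the depth camera and the actual electrode driver are stood in for by a print statement.

```python
import cv2

# Built-in HOG + SVM people detector that ships with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # a webcam stands in for the dual-camera rig
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; each box is (x, y, w, h) in pixel coordinates.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        # Pick one of the 4 electrodes based on where the person is, left to right.
        electrode = min(int(4 * (x + w / 2) / frame.shape[1]), 3)
        print(f"person near column {x + w // 2}: pulse electrode {electrode}")
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```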
Challenges we ran into
There is a tug of war between improving the user's situation and the complexity of describing the entire visual field.
-deciding the best way to represent the visual field without being too complex while still signaling key information, and whether to use a grid or a graded system (a sketch follows this list)
-finding hardware that would allow us to combine proximity sensing and computer vision to simulate an eye
-the depth-perception AI program
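As a sketch of the "grid" option we debated, a dense depth map could be collapsed into a coarse 3x3 grid of proximity levels, one cell per electrode or motor zone. The grid size and distance thresholds below are assumptions for illustration, not the final design.

```python
import numpy as np

def depth_to_grid(depth_m: np.ndarray, rows: int = 3, cols: int = 3) -> np.ndarray:
    """Collapse a depth map (metres) into per-cell intensities: 0 = far ... 3 = very close."""
    h, w = depth_m.shape
    grid = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            cell = depth_m[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            nearest = float(cell.min())  # react to the closest point in the cell
            grid[r, c] = 3 - int(np.digitize(nearest, [0.5, 1.0, 2.0]))
    return grid
```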
Accomplishments that we're proud of
Because the device is modular, it can be tailored to the user's degree of visual impairment. It is also relevant to everyone: its depth perception provides a way to navigate in the dark, while camera-based object recognition provides feedback for navigation in daylight. We are also proud of the depth of our research, which took into account case-by-case scenarios (such as the option to switch functions on and off), learning time, differences in sensitivity, and the best ways and places to relay camera information tactilely. Finally, the device works with the cloud to keep getting better.
What we learned
We did a lot of research to understand the mechanisms used previously and the different aspects of relaying information in a tactile manner while avoiding as many case-by-case drawbacks as possible. We also learned to balance cool technology with practicality and to avoid the trap of losing sight of the user's best interest.
What's next for Now you see me
Three motors could cover the left, middle, and right sides, each vibrating when something is proximal on that side (see the sketch below). In this way our project is innovative because it offers two modes of information: depth through motors and objects through shocks. We could use it to detect sidewalks, with depth perception informing where to step, or to detect people and hands for applications like handshakes and handing something over. We can build even more ambitious systems, such as putting deaf and blind people in communication using Leap Motion, which tracks all ten fingers and has begun to be used to translate sign language. A big question is how to relay information to the person; both tactile and auditory feedback are preferred options, but the big issue with audio is that it takes away information the person could be getting by listening to their surroundings (like a car coming down the road). All of this information could also become confusing, and constantly shifting attention between machine output and people speaking to you could be difficult. That is why we have devised a method to address this: a sensor-based system that lets the user selectively enable recognition tools to choose which objects are relayed to them tactilely. Helping someone navigate the world matters because we are social creatures, so beyond audio relay, being told whether there are people around you is important. We would not want this to be too overwhelming either, so it could be controlled by a switch, since in a crowded space it would not be useful.
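A minimal sketch of the three-motor idea, assuming a depth frame in metres and a hypothetical 1 m warning distance; a real build would drive the motors over GPIO/PWM rather than return on/off states.

```python
import numpy as np

PROXIMITY_M = 1.0  # assumed warning distance

def motors_for_frame(depth_m: np.ndarray) -> list:
    """Return [left, middle, right] on/off states for one depth frame."""
    thirds = np.array_split(depth_m, 3, axis=1)  # split columns into left/middle/right
    return [bool(t.min() < PROXIMITY_M) for t in thirds]
```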