Inspiration

EchoSight was inspired by the desire to empower visually impaired individuals with a comprehensive, intuitive tool for navigating their surroundings. The project aims to bridge the gap between the visually impaired community and the visual world through computer vision, natural language processing, and text-to-speech. By combining these technologies, we want to enhance the independence, safety, and overall quality of life of visually impaired people, and ultimately make the world more accessible and inclusive for everyone, regardless of visual ability.

What it does

EchoSight is an assistive-technology mobile application designed to enhance spatial awareness for visually impaired individuals through real-time audio descriptions. It provides continuous audio feedback, enabling users to navigate safely, avoid obstacles, and access important textual information in their environment. Through image analysis, EchoSight identifies objects and scenes, generates detailed natural-language descriptions, and converts them to voice using text-to-speech models.
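
As a rough sketch of that image-to-description step (not EchoSight's exact code), here is how it can look in Dart with the google_generative_ai package; the model name, prompt wording, and function name are illustrative assumptions:

```dart
import 'dart:typed_data';
import 'package:google_generative_ai/google_generative_ai.dart';

// Sends a camera frame to Gemini and returns a natural-language description.
Future<String?> describeScene(Uint8List jpegBytes, String apiKey) async {
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
  final response = await model.generateContent([
    Content.multi([
      TextPart('Describe this scene for a visually impaired user. '
          'Mention nearby objects, obstacles, and any visible text.'),
      DataPart('image/jpeg', jpegBytes),
    ]),
  ]);
  return response.text; // Natural-language description, or null if empty.
}
```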

How we built it

The application (currently iOS) is built with Flutter and powered by the Google Gemini API for scene understanding, with text-to-speech for audio output.
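
A minimal sketch of the audio-output step, assuming the flutter_tts package; the speech-rate value and helper name are illustrative, not our tuned settings:

```dart
import 'package:flutter_tts/flutter_tts.dart';

final FlutterTts _tts = FlutterTts();

// Reads a generated description aloud, interrupting any earlier one.
Future<void> speakDescription(String description) async {
  await _tts.setLanguage('en-US');
  await _tts.setSpeechRate(0.5); // On iOS, flutter_tts rates range 0.0-1.0.
  await _tts.stop();             // Cancel the previous utterance, if any.
  await _tts.speak(description);
}
```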

Challenges we ran into

Making the app compatible with VoiceOver, the iOS screen reader.
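
VoiceOver only announces what Flutter exposes through its semantics tree, so much of the work is wrapping custom controls in Semantics widgets. A hedged sketch, with hypothetical widget and label names rather than EchoSight's actual UI:

```dart
import 'package:flutter/material.dart';

// A capture button that VoiceOver can announce and activate.
Widget captureButton(VoidCallback onCapture) {
  return Semantics(
    label: 'Describe surroundings',
    hint: 'Double tap to capture and hear a description',
    button: true,
    child: IconButton(
      icon: const Icon(Icons.camera_alt),
      onPressed: onCapture,
    ),
  );
}
```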

Accomplishments that we're proud of

We are proud to be using technology to improve the lives of people with disabilities.

What we learned

We learned how to build an application in Flutter and how to use Google APIs in our applications.

What's next for EchoSight

- Improve overall app performance
- Add more voice-related features
- Bring the application to Apple Vision Pro or smart glasses (any hardware a visually impaired person can wear)

Built With

Flutter, Google Gemini API, text-to-speech
