Inspiration

We spend a lot of time previewing and reviewing lecture content, and we can still get lost in it. So we wanted an LLM-powered tool that helps us explain and understand lectures.

What it does

After a lecture video is uploaded, Cognitube recognizes its content, extracts the lecture's keywords and key points, and explains each keyword using both its background information and the in-lecture context.
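As a rough illustration of what the output looks like (the field names below are hypothetical, not our exact schema), each extracted keyword can be thought of as a small record tying a term to where it appears and how it is explained:

```ts
// Hypothetical shape of one extracted keyword entry (names are illustrative only).
interface LectureKeyword {
  term: string;             // e.g. "backpropagation"
  startSeconds: number;     // where the term first appears in the lecture
  background: string;       // general background explanation of the term
  inLectureContext: string; // how the lecturer actually uses the term
}
```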

How we built it

We built a web app backed by an AI system. On the AI side, an ASR model and a voice-emotion model turn the lecture audio into input for the LLM; we use Gemini 1.5 as the knowledge graph with one-shot in-context learning, and finally combine the two streams of information (a sketch of the Gemini call follows below).

For the frontend we use Next.js, which lets us take full advantage of server-side rendering and keeps the app very fast. We also integrated Google OAuth for a sign-in flow that is easy yet secure. Once logged in, a user can upload their video from the frontend and watch it explained almost immediately; all they need to do is click on the keywords shown on the video player.
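We haven't reproduced our actual prompts here; the snippet below is only a minimal sketch of the general pattern of calling Gemini 1.5 with a one-shot example over the ASR transcript, using Google's Node SDK. The prompt text and the helper name are placeholders, not our production code:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// One-shot in-context example: show the model the desired keyword/explanation format once.
const ONE_SHOT_EXAMPLE = `
Transcript: "Gradient descent updates the weights in the direction of steepest decrease..."
Keywords:
- gradient descent: an optimization method that iteratively lowers a loss (background) / used here to train the demo network (in lecture)
`;

// Hypothetical helper: ask Gemini to extract and explain keywords from an ASR transcript.
export async function explainTranscript(transcript: string): Promise<string> {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

  const prompt =
    `Extract the key terms from the lecture transcript below and explain each one ` +
    `with (a) general background and (b) how it is used in this lecture.\n\n` +
    `Example:\n${ONE_SHOT_EXAMPLE}\n\nTranscript:\n${transcript}`;

  const result = await model.generateContent(prompt);
  return result.response.text();
}
```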

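For the Google sign-in, a common way to wire this up in Next.js is next-auth with its Google provider. The snippet below is a minimal sketch of that approach (assuming the pages router and the standard environment variables), not necessarily the exact code we shipped:

```ts
// pages/api/auth/[...nextauth].ts: minimal Google OAuth setup with next-auth.
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";

export default NextAuth({
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    }),
  ],
});
```
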
Challenges we ran into

Embedding the keywords and their explanations into the app without disrupting the viewing experience was a hard challenge. We designed a new way of presenting the video, which meant writing all of the video-related components ourselves rather than relying on existing tools (a simplified sketch follows).
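As a rough sketch of the kind of component we mean (heavily simplified, with hypothetical prop names and none of our styling), keyword chips can be rendered on top of a plain video element so that clicking a term reveals its explanation without interrupting playback:

```tsx
import { useState } from "react";

interface Keyword {
  term: string;
  explanation: string;
}

// Simplified sketch: a video player with clickable keyword chips overlaid on top of it.
export function ExplainedPlayer({ src, keywords }: { src: string; keywords: Keyword[] }) {
  const [active, setActive] = useState<Keyword | null>(null);

  return (
    <div style={{ position: "relative" }}>
      <video src={src} controls style={{ width: "100%" }} />

      {/* Keyword chips float over the video; clicking one opens its explanation. */}
      <div style={{ position: "absolute", top: 8, left: 8, display: "flex", gap: 8 }}>
        {keywords.map((k) => (
          <button key={k.term} onClick={() => setActive(k)}>
            {k.term}
          </button>
        ))}
      </div>

      {/* Explanation panel; playback keeps running underneath. */}
      {active && (
        <aside style={{ position: "absolute", bottom: 8, left: 8, right: 8, background: "white" }}>
          <strong>{active.term}</strong>: {active.explanation}
          <button onClick={() => setActive(null)}>close</button>
        </aside>
      )}
    </div>
  );
}
```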

What's next for Cognitube

We will start rolling the app out to college students.

Built With

gemini, google-oauth, next.js

