Inspiration
Inspired by tldraw-makereal. I love whiteboarding. I love remote work. I hate remote whiteboarding. Remote whiteboarding sucks because most interaction paradigms (writing, drawing) collapse in remote settings. The two interaction mediums that don't collapse are voice and pointing (cursors). So we wanted to augment the best whiteboard we know (tldraw) with real-time voice and cursors by operating directly on the whiteboard object graph. This has been a hobby horse of mine for a while; I built a nifty version of it back in 2018 (the CNN era).
What it does
Any tldraw whiteboard -> voice -> transcription -> task commands -> tldraw SDK commands -> newly rendered tldraw whiteboard
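At the code level, the first hop (voice -> transcription) can look roughly like the sketch below. This is a minimal sketch assuming the v3 @deepgram/sdk prerecorded API; the model choice, environment variable, and function name are illustrative rather than lifted from the actual Blabbermouth code.

```ts
// Minimal sketch of the voice -> transcription hop, assuming the v3
// @deepgram/sdk prerecorded API. Model, env var, and function name are
// illustrative, not the actual Blabbermouth code.
import { createClient } from "@deepgram/sdk";
import { readFile } from "node:fs/promises";

export async function transcribe(audioPath: string): Promise<string> {
  const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

  // Send the recorded audio file for batch (prerecorded) transcription.
  const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
    await readFile(audioPath),
    { model: "nova-2", smart_format: true },
  );
  if (error || !result) throw error ?? new Error("empty transcription result");

  // Single-channel recording: take the top alternative's transcript.
  return result.results.channels[0].alternatives[0].transcript;
}
```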
How we built it
- Deepgram for low-latency voice transcription
- Groq Llama 3 for low-latency LLM tasks (see the sketch after this list):
  - task detection from transcripts
  - task enumeration into unit categories
  - task fixing
  - task -> tldraw API calls
- Streamlit to ingest recordings
- tldraw to host the whiteboard
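To make the transcript -> task -> tldraw hops concrete, here is a minimal sketch assuming the groq-sdk chat completions API and tldraw v2's editor.createShapes. The Task shape, prompt, model name, and shape props are illustrative placeholders, not the actual Blabbermouth prompt chain.

```ts
// Sketch of transcript -> structured tasks (Groq Llama 3) -> tldraw shapes.
// Task shape, prompt, model name, and shape props are illustrative.
import Groq from "groq-sdk";
import { createShapeId, type Editor } from "@tldraw/tldraw";

interface Task {
  label: string; // text to put on the whiteboard
  x: number;     // page-space position suggested by the model
  y: number;
}

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function detectTasks(transcript: string): Promise<Task[]> {
  const completion = await groq.chat.completions.create({
    model: "llama3-70b-8192",
    messages: [
      {
        role: "system",
        content:
          "Extract whiteboard actions from the transcript. " +
          'Reply with only a JSON array of {"label": string, "x": number, "y": number}.',
      },
      { role: "user", content: transcript },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "[]");
}

// Turn each task into a tldraw geo shape and write it straight into the
// whiteboard's object graph.
function renderTasks(editor: Editor, tasks: Task[]) {
  editor.createShapes(
    tasks.map((t) => ({
      id: createShapeId(),
      type: "geo",
      x: t.x,
      y: t.y,
      props: { geo: "rectangle", w: 220, h: 80, text: t.label },
    })),
  );
}
```

The point of wiring it up this way is that the LLM output lands as ordinary records in tldraw's object graph rather than as pixels, so the result is a normal, editable whiteboard.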
Challenges we ran into
- Tried to set up transcription with AWS; it never worked out of the box and we lost four hours on it.
- tldraw's codebase is huge and hard to integrate with on short notice.
- Hosting some large repositories locally took time because of package issues.
Accomplishments that we're proud of
- getting a project with a lot of moving pieces working
What we learned
- Install the repos you'll need early and make sure envs are ready to go
- Groq is great, Groq is fast
What's next for Blabbermouth
Clean it up and host it for everyone to use.
(The demo runs locally and we didn't get time to record a demo video of the whiteboard directly, so please come by the far south end of the big room to see it running.)