Inspiration
Our clients range from hospitals to life sciences companies and technology vendors that develop artificial intelligence for healthcare. Our Advocate AI program brings data and analytics to all three constituencies to advance the use of AI in healthcare.
Data analysis is a tedious process that can be automated and expedited by large language models. Two examples:
- For AI in healthcare to be offered as a regulated product, vendors must use large datasets and demonstrate both a lack of bias and overgeneralization in their test data and sustained AI performance over time.
- To develop appropriate programs for preventive care and the treatment of chronic disease, hospitals analyze patterns across patients and health events. For this, they need access to curated datasets that serve that purpose.
What it does
RadRepoSearch-AI facilitates sharing of such pre-approved medical datasets based on nuanced filters and conditions. The chatbot lets users query the data using medical terminology and conditions.
How we built it
We built it using the power of the Snowflake Arctic LLM (and Reka Flash) together with Streamlit.
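To give a feel for how the pieces fit together, here is a minimal sketch of a Streamlit chat front end calling Snowflake Cortex; the connection setup, prompt, and column names are illustrative assumptions, not the exact code in RadRepoSearch-AI.

```python
# Illustrative sketch only: connection setup, prompt, and column names are
# assumptions, not the exact code in RadRepoSearch-AI.
import streamlit as st
from snowflake.snowpark import Session

# In practice the connection parameters would live in Streamlit secrets.
session = Session.builder.configs(dict(st.secrets["snowflake"])).create()

st.title("RadRepoSearch-AI")

question = st.chat_input("Ask about the radiology datasets...")
if question:
    st.chat_message("user").write(question)

    # Ask Snowflake Cortex to answer with the Arctic model.
    prompt = f"Answer using the radiology report metadata. Question: {question}"
    answer = session.sql(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', ?) AS answer",
        params=[prompt],
    ).collect()[0]["ANSWER"]

    st.chat_message("assistant").write(answer)
```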
Challenges we ran into
We ran into the context-window limit of Snowflake Arctic; a larger limit would have been welcome. Because Arctic's context window is only 4,096 tokens, we took a two-stage approach to summarization: individual reports were summarized with Snowflake Arctic, and those summaries were then summarized again with Reka Flash, which gave us a much larger context window.
We also had to tune the prompts fairly heavily to get the outputs right compared with the other models Snowflake provides, but we expect successive iterations of Snowflake Arctic to improve the responses.
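A rough sketch of that two-stage summarization, assuming the Cortex COMPLETE function is called through Snowpark; the prompts, helper names, and report handling are illustrative, not our exact production code.

```python
# Illustrative sketch of the two-stage summarization; prompts, helper names,
# and chunking are assumptions, not the exact production code.
from snowflake.snowpark import Session

def complete(session: Session, model: str, prompt: str) -> str:
    """Call Snowflake Cortex COMPLETE for the given model."""
    row = session.sql(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE(?, ?) AS response",
        params=[model, prompt],
    ).collect()[0]
    return row["RESPONSE"]

def summarize_reports(session: Session, reports: list[str]) -> str:
    # Stage 1: summarize each report individually with Snowflake Arctic,
    # keeping each prompt well inside its ~4k-token context window.
    partial_summaries = [
        complete(session, "snowflake-arctic",
                 f"Summarize this radiology report in 3 sentences:\n{report}")
        for report in reports
    ]

    # Stage 2: combine the partial summaries with Reka Flash, which offers a
    # larger context window, to produce one overall summary.
    combined = "\n\n".join(partial_summaries)
    return complete(session, "reka-flash",
                    f"Combine these report summaries into one overview:\n{combined}")
```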
Accomplishments that we're proud of
The tool can query the underlying datasets using natural language and understands the nuances of determining condition types, age ranges, affected body parts, and so on. That is pretty cool.
We also went from use-case exploration and identification to implementation in a matter of 7-8 days.
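To illustrate the natural-language filtering mentioned above, here is a minimal sketch of mapping a question to structured filters; the JSON schema, prompt, and helper name are hypothetical.

```python
# Hypothetical sketch of mapping a natural-language question to structured
# filters; the schema and prompt are not the exact production ones.
import json

FILTER_PROMPT = """Extract search filters from the user's question as JSON with
keys: condition, body_part, min_age, max_age. Use null for anything not stated.
Question: {question}
JSON:"""

def extract_filters(session, question: str) -> dict:
    raw = session.sql(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', ?) AS response",
        params=[FILTER_PROMPT.format(question=question)],
    ).collect()[0]["RESPONSE"]
    # The model is asked to emit JSON only; fall back to empty filters if not.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}
```

For a question like "chest reports mentioning pneumonia in patients over 65", the expected output would be something like `{"condition": "pneumonia", "body_part": "chest", "min_age": 65, "max_age": null}`.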
What we learned
We initially set out to replicate what our users expected: simply building standardized queries. Over successive iterations we settled on a RAG architecture instead of standard queries with wildcards, which was a good lesson for the team.
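A simplified sketch of the RAG flow we converged on, assuming a hypothetical REPORTS table with precomputed Cortex embeddings; the table, columns, and embedding model here are assumptions for illustration only.

```python
# Simplified RAG sketch; the table, columns, and embedding model are
# assumptions for illustration only.
def answer_with_rag(session, question: str) -> str:
    # Retrieve the reports whose embeddings are closest to the question,
    # rather than hand-building wildcard SQL filters.
    hits = session.sql(
        """
        SELECT report_text
        FROM reports
        ORDER BY VECTOR_COSINE_SIMILARITY(
            report_embedding,
            SNOWFLAKE.CORTEX.EMBED_TEXT_768('snowflake-arctic-embed-m', ?)
        ) DESC
        LIMIT 5
        """,
        params=[question],
    ).collect()

    # Ground the answer in the retrieved reports only.
    context = "\n\n".join(row["REPORT_TEXT"] for row in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return session.sql(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', ?) AS response",
        params=[prompt],
    ).collect()[0]["RESPONSE"]
```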
What's next for RadRepoSearch-AI
We want to expand the tool to the other datasets that the Advocate AI team uses across the organization.
Built With
- arctic
- llm
- python
- snowflake
- streamlit