inspiration

The story of our inspiration begins with a window into chronic illness care in the US.

On top of this sit incredible growth in the total patient population, the cost to the US, and a projected physician shortfall by 2030.

With the unique power of VOICE, and the social and financial incentives in place, we were inspired to build a VOICE-FIRST solution for seniors navigating chronic conditions.

aspiration

That inspiration, plus the openness to an Alexa experience our customers expressed in surveys, has shaped our mission:

what it does

Our patients say, "My doctor is great at telling me what to do, but she's terrible at telling me how to do it." Our physicians say, "I estimate my patients do about 30% of what I ask them to do."

Step 1 of what it does begins with integrating into the physician's EMR and transforming complex care plans into an ongoing series of "So, what should I do today?" experiences.
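
Just to make the idea concrete, here's a hypothetical sketch of that transformation -- the schema and field names are illustrative, not our actual EMR mapping:

```javascript
// Hypothetical sketch: flatten a care plan into today's prompts.
// The schema (items, daysOfWeek, prompt, category) is illustrative only.
function tasksForToday(carePlan, today = new Date()) {
  return carePlan.items
    .filter((item) => item.daysOfWeek.includes(today.getDay())) // 0 = Sunday
    .map((item) => ({
      patientId: carePlan.patientId,
      prompt: item.prompt,     // e.g., "Did you weigh yourself this morning?"
      category: item.category, // diet | activity | meds | journaling
    }));
}

// A twice-a-week walking goal becomes a Tuesday/Thursday prompt:
const plan = {
  patientId: "demo-1",
  items: [{ prompt: "Take a 15-minute walk", category: "activity", daysOfWeek: [2, 4] }],
};
console.log(tasksForToday(plan, new Date(2024, 0, 2))); // Jan 2, 2024 is a Tuesday
```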

At its simplest, what it does is help patients build core chronic care habits in diet, physical activity, taking their meds and writing stuff down. Part habit building, part patient education.

That sounds simple, but there is a lot under the hood.

how we built it

All AWS -- S3 for images, Cognito for identity management, Lambda for logic. We used JavaScript to handle the skill requests, then AWS API Gateway to route requests to a Python API that talks to our MySQL db. Today we only support Alexa, but we built the API structure in anticipation of inbound calls from Google/Nest or Apple HomePod.
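
A minimal sketch of how the pieces hang together on the skill side, using ask-sdk-core -- the intent name, slot, endpoint URL, and payload here are placeholders rather than our real API:

```javascript
// Sketch only: a Lambda skill handler that forwards a reading through
// API Gateway to the Python API. Intent/slot names and the URL are
// placeholders; assumes Node 18+ for the global fetch.
const Alexa = require("ask-sdk-core");

const LogMeasurementHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest"
      && Alexa.getIntentName(handlerInput.requestEnvelope) === "LogMeasurementIntent";
  },
  async handle(handlerInput) {
    const value = Alexa.getSlotValue(handlerInput.requestEnvelope, "value");
    await fetch("https://api.example.com/measurements", { // placeholder endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ value }),
    });
    return handlerInput.responseBuilder
      .speak(`Got it. I recorded ${value}.`)
      .getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LogMeasurementHandler)
  .lambda();
```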

challenges we ran into

  1. Stepping into and out of Alexa Conversations became too difficult. We also found our customers preferred us simply telling them the correct structure of a request instead of walking them through multiple turns.

  2. The graphic layout of APL was not always intuitive -- sometimes it was by chance that our images ended up in the right spot.

  3. We wanted to show a chart of progress (e.g., weight, glucose, blood pressure), but there's no easy graphing tool or way to present HTML. So we used a graphing package, did some headless-browser, server-side image generation, and presented the chart as a static image. When someone recorded a measurement, we had to make sure we didn't present the new screen before the image-creation round trip finished. We eventually figured it out (see the sketch after this list).

  4. Figuring out account linking wasn't wildly straightforward.

  5. We have so many devices sitting around that whenever we accidentally said "Alexa," it was deafening.

  6. Testing has been tedious and endless.

  7. Straight-up meds reminders were not a compelling experience in v1, so we're taking a fresh look at how a voice experience might differentiate them.

  8. Filtering for both our recipes and our workout routines got an "F" from our testers. We need to pray to the gods of VUX for inspiration.
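
For challenge 3, the charting workaround looked roughly like this -- a simplified sketch assuming Puppeteer as the headless browser, with the chart HTML, sizing, and upload/caching details glossed over:

```javascript
// Simplified sketch of the chart workaround from challenge 3: render the
// chart HTML in headless Chrome, screenshot it, and serve the PNG as a
// static image for APL. Error handling and the upload step are omitted.
const puppeteer = require("puppeteer");

async function renderChartPng(chartHtml, outPath) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 800, height: 480 }); // roughly Echo Show sized
  await page.setContent(chartHtml, { waitUntil: "networkidle0" });
  await page.screenshot({ path: outPath });
  await browser.close();
  return outPath; // only after this resolves do we build the APL screen
}
```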

accomplishments that we're proud of

We've been working on this project for a while -- our customers chose voice over an app or a wearable experience. We went HARD on the APL, and it ended up looking great -- we even added little flourishes, like making the name Lighthouse glow when a page first loads.

We also did some cool customer-centric design things:

  1. A lot of our messaging is repetitive, e.g., the daily "Did you measure your glucose?" To fight message fatigue, we constructed every message out of up to five parts and created multiple versions of each part. A typical message has four parts with five phrasings each, so a high-repetition response can have 5 x 5 x 5 x 5 = 625 variations (sketched after this list).

  2. All of our content is at a 6th-grade or lower reading level (Flesch-Kincaid method).

  3. To meet the needs of seniors with less-than-perfect eyesight, we simulate senior vision equivalent to 2.0 "cheaters" and make sure our screens are readable at that "2.0 blur."

  4. We worked with the Stanford Behavior Design Lab to "operationalize" the BJ Fogg health behavior model into our engagement model.

  5. This list could go on to 20....
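
The variation scheme from point 1 is simple to sketch; the part names and phrases below are made up for illustration:

```javascript
// Sketch of the anti-fatigue composer: each message is assembled from
// parts, each with several interchangeable phrasings. Names and phrases
// here are illustrative only.
const parts = {
  greeting: ["Good morning!", "Hi there!", "Hello!", "Welcome back!", "Hey!"],
  ask: [
    "Did you measure your glucose?",
    "Have you checked your glucose yet?",
    "Is your glucose reading done?",
    "Any glucose number for me?",
    "Time for a glucose check?",
  ],
  encourage: [
    "You're on a streak.",
    "Nice consistency this week.",
    "Keep it up.",
    "Your doctor will love this.",
    "Small steps add up.",
  ],
  closer: [
    "Just say the number.",
    "Tell me when you're ready.",
    "I'm listening.",
    "Go ahead.",
    "Whenever you're set.",
  ],
};

const pick = (options) => options[Math.floor(Math.random() * options.length)];

// Four parts x five phrasings each = 5^4 = 625 possible messages.
const composeMessage = () =>
  ["greeting", "ask", "encourage", "closer"].map((p) => pick(parts[p])).join(" ");
```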

what we learned

  1. Effective use of "sessionAttributes" can go a long way toward better handling HELP and other "what Intent am I in?" challenges (see the sketch after this list).

  2. No amount of developer testing will predict how consumers will actually use the service.

  3. Customers preferred us educating them on what a "good" answer was (which led to shorter interactions).

  4. BUILD FIRST for no screen. When we built some experiences screen-first, they only worked because of visual cues that don't exist for voice alone.
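
Point 1 in practice: tag the session with the user's current activity so AMAZON.HelpIntent can answer in context. A sketch using ask-sdk-core's attributesManager; the activity names and help strings are illustrative:

```javascript
// Sketch of learning 1: stash the current activity in sessionAttributes so
// a later HelpIntent knows "what Intent am I in?". Activity names are
// illustrative only.
const Alexa = require("ask-sdk-core");

const HelpHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest"
      && Alexa.getIntentName(handlerInput.requestEnvelope) === "AMAZON.HelpIntent";
  },
  handle(handlerInput) {
    const session = handlerInput.attributesManager.getSessionAttributes();
    const help = {
      logGlucose: "Say a number, like 'one twenty five'.",
      findRecipe: "Try something like 'show me low-sodium dinners'.",
    }[session.currentActivity] || "You can log a reading or ask for a recipe.";
    return handlerInput.responseBuilder.speak(help).reprompt(help).getResponse();
  },
};

// Elsewhere, when entering an activity, tag the session so HELP stays relevant:
//   const attrs = handlerInput.attributesManager.getSessionAttributes();
//   attrs.currentActivity = "logGlucose";
//   handlerInput.attributesManager.setSessionAttributes(attrs);
```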

what's next for lighthouse voice

A. At the time of submission, the ADHERENCE function was not included; our patient panel gave it a thumbs-down. Same with EAT: we've got 3,000 recipes loaded up, but the selection filter was off, and it's not as effortless as some other recipe readers.

B. We have more content partners interested in in-app purchases, so we will be firing that up.

C. We plan to "give away" two thousand Echo Dots before the year is out as we bring on more doctors' practices.

D. We just got a grant from a division of Health and Human Services to build versions in Spanish, Chinese, and Navajo to serve HRSA facilities (about 28MM patients under care).
