Inspiration
While brainstorming a software product for social good, we wanted to tackle a health issue that is often overlooked but can have serious consequences if not addressed promptly. Millions of patients are diagnosed with skin cancer every year, yet getting screened early can be inaccessible: a screening can cost up to $300 without insurance. We built SkinScreen to lower this barrier by giving users instant results that flag potential conditions worth having checked by a medical professional.
What it does
Our product allows users to scan their skin and receive results for potential conditions they may have. It also provides additional resources for users concerned about skin health, such as daily advice and recommended articles.
How we built it
We started by creating mid-fidelity wireframes for the main screens in Figma. After iterating the interface to high fidelity, we handed the designs off to the engineers for development. At the core of the app is a model that makes predictions on images of suspicious skin lesions.
A key difficulty in training an accurate model was the lack of data, so we used transfer learning: taking a robust pre-trained model and adapting it to a related problem where less data is available. We took AlexNet, a convolutional neural network trained on over a million images, and replaced its last two linear layers so that the model predicts the 7 lesion classes in the HAM10000 dataset rather than the 1000 classes AlexNet was originally trained on.
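For concreteness, here is a minimal PyTorch sketch of that head swap. The hidden width (512) and the choice to freeze the convolutional layers are illustrative assumptions, not necessarily our exact training configuration:

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet pre-trained on ImageNet (over a million images, 1000 classes).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor so only the head is fine-tuned
# (an assumption for this sketch; full fine-tuning is also an option).
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last two linear layers of the classifier head so the network
# outputs scores for the 7 HAM10000 lesion classes instead of 1000.
model.classifier[4] = nn.Linear(4096, 512)
model.classifier[6] = nn.Linear(512, 7)
```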
After fine-tuning the model with PyTorch, we saved it to our backend and stood up a Flask server so that the frontend could easily request predictions on images. For the frontend itself, we used Flutter, a cross-platform mobile app development framework.
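A minimal sketch of what such a Flask prediction endpoint could look like; the route name, file path, class-name abbreviations, and response shape are assumptions for illustration:

```python
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)

# Assumes the fine-tuned model was saved whole with torch.save(model, path);
# the filename is hypothetical.
model = torch.load("skinscreen_alexnet.pt", map_location="cpu", weights_only=False)
model.eval()

# The 7 HAM10000 lesion categories (standard dataset abbreviations).
CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

# Standard AlexNet preprocessing: resize, then normalize with ImageNet stats.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the image as a multipart file upload from the mobile app.
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    top = int(probs.argmax())
    return jsonify({"condition": CLASSES[top], "confidence": float(probs[top])})
```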
Challenges we ran into
This was the first time any of us had competed at a hackathon on a cross-functional team, so keeping the collaborative workflow seamless was a challenge. The designers also overestimated the developers' familiarity with Figma, which led to some miscommunication when translating the wireframes into working screens.
On the modeling side, we ran into challenges fine-tuning AlexNet and deploying the model. On the frontend, we hit a roadblock setting up the prediction feature, i.e., letting a user upload an image and turning our backend's response into meaningful insights for the user; the request/response round trip is sketched below.
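For illustration, this is what that round trip looks like when exercised with Python's requests library against the endpoint sketched above; the real app issues the equivalent multipart request from Flutter, and the URL and field name are assumptions:

```python
import requests

# Send a lesion photo to the (hypothetical) /predict endpoint as a
# multipart file upload, mirroring what the mobile frontend does.
with open("lesion.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/predict",
        files={"image": ("lesion.jpg", f, "image/jpeg")},
    )
print(resp.json())  # e.g. {"condition": "nv", "confidence": 0.87}
```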
Accomplishments that we're proud of
We’re proud that we hacked for 24 hours straight and persevered to the end. Despite our challenges integrating the designs into development, we communicated well as a team and supported each other through setbacks and achievements. We also gained more experience with PyTorch and Flutter, as well as a better understanding of the design process.
What we learned
This experience taught us the importance of prioritizing the right parts of the product process, such as finishing initial screens early so the developers can begin integrating them. The miscommunication between designers and developers also taught us a lot about working cross-functionally that we will carry into future projects.
What's next for SkinScreen
Our next step is to explore deep learning models better suited to our product’s purpose, so that we can deliver more accurate and comprehensive results. We also hope to conduct more usability tests and user research to guide future design iterations of SkinScreen.