Inspiration ✨

Hunting for an internship as a student usually means countless hours of work, hundreds of applications, and a very messy spreadsheet. With recruiters increasingly ghosting applicants and companies requiring multiple rounds of interviews, it's more important than ever to keep track of it all. Existing job-tracking platforms are so high friction that they get in the way of applying for jobs, eating up your valuable time. Introducing JobJug, an innovative cloud solution that requires nearly no extra work on the user's end, giving you more time to achieve your employment goals.

What it does 🔥

A website fully centred around creating the easiest, fuss-free experience of tracking your job applications. When you sign up, JobJug gives you a custom professional email address (e.g. raymondshen@jobjug.co) to use when applying to positions. When a potential employer responds, our system parses the email before forwarding it to your personal inbox. We parse each email with a combination of OpenAI and a custom model trained with PyTorch to extract the position and company name, along with the status of the application (received, declined, interview, position offered, or waitlisted). All this data is displayed on our site’s dashboard, designed to show the most essential information at a glance without seeming cluttered.
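To make the dashboard data concrete, here is a minimal sketch of what one parsed application record might look like. The class name, field names, and example values are illustrative assumptions, not the actual JobJug schema; only the five status labels come from the description above.

```python
from dataclasses import dataclass

# The five application statuses JobJug tracks (from the writeup).
STATUSES = ["received", "declined", "interview", "position offered", "waitlisted"]

@dataclass
class ParsedApplication:
    """One row on the dashboard, extracted from a recruiter email.

    The class and field names are hypothetical, for illustration only.
    """
    company: str
    position: str
    status: str

    def __post_init__(self):
        # Reject anything outside the five known statuses.
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")

# Example record, as it might appear after parsing an interview invite.
update = ParsedApplication(company="Acme Corp",
                           position="SWE Intern",
                           status="interview")
print(update)
```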

How we built it 🛠️

We built the frontend with ReactJS and TailwindCSS, designing it through several iterations in Figma. For the backend, we used Python and Flask, with MongoDB as our database. To parse emails, we used ImprovMX to forward incoming mail to both the user’s private email as well as our company Gmail account, where we used Google’s Gmail API to read the message. To classify emails into the five categories, we employed OpenAI to detect the position and company name and our custom model to detect the email’s status. Using our own model has clear benefits: we avoid expensive OpenAI API costs and have more freedom to fine-tune a highly accurate model. The model was trained on a powerful desktop at home, accessed through Windows Remote Desktop.
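The split described above (OpenAI for company/position, the custom model for status) can be sketched as below. Function names and plumbing are assumptions rather than the real JobJug code, and both extractors are stubbed with placeholder logic so the sketch runs without API keys or the trained PyTorch model.

```python
# Hedged sketch of the classification step: one extractor for
# company/position, one for status, combined into a single record.

def extract_company_and_position(email_body: str) -> tuple[str, str]:
    # In production this would call the OpenAI API; stubbed here
    # with fixed placeholder values.
    return "Acme Corp", "Software Engineering Intern"

def predict_status(email_body: str) -> str:
    # In production this would run the fine-tuned PyTorch classifier;
    # here a toy keyword heuristic stands in for it.
    lowered = email_body.lower()
    if "unfortunately" in lowered:
        return "declined"
    if "interview" in lowered:
        return "interview"
    return "received"

def classify_email(email_body: str) -> dict:
    # Merge both extractors into one dashboard-ready record.
    company, position = extract_company_and_position(email_body)
    return {"company": company,
            "position": position,
            "status": predict_status(email_body)}

print(classify_email("We'd like to invite you to an interview next week."))
```

Splitting the work this way means the cheap, frequently-run status check stays on the in-house model, while the harder entity extraction is delegated to a general-purpose LLM.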

Challenges we ran into 💥

Our custom deep learning model suffered greatly from a lack of quality data covering a wide variety of emails. Since no suitable datasets existed online, we had to get scrappy and build our own synthetic dataset with ChatGPT, seeded with some of our own emails. Our first synthetic dataset yielded an accuracy of only around 22%. Simply generating more data was one option, but we quickly learned that augmenting our existing dataset is just as effective. The augmentation methods we used were back translation, synonym replacement, and random insertion/swap/deletion. This gave us 1,000 unique labeled examples and bumped our model’s accuracy to nearly 40%, which is impressive given our very limited dataset; most models rely on over 100,000 unique entries to reach high accuracy.
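Two of the augmentation methods listed above, random swap and random deletion, can be sketched in a few lines. The parameters and seed below are illustrative, not the values used in training, and back translation and synonym replacement are omitted since they need external resources.

```python
import random

def random_swap(tokens: list[str], n: int = 1, rng=None) -> list[str]:
    """Swap n random pairs of tokens, producing a shuffled variant."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    tokens = tokens[:]  # don't mutate the caller's list
    for _ in range(n):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens: list[str], p: float = 0.2, rng=None) -> list[str]:
    """Drop each token with probability p, keeping at least one token."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]  # never return an empty sentence

email = "thank you for applying to our internship program".split()
print(random_swap(email))
print(random_deletion(email))
```

Each augmented copy keeps its original label (e.g. "declined"), which is how a few seed emails multiply into a much larger labeled set.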

What’s next ⏭️

A more secure login system, plus data analytics and thorough visualizations to motivate users to keep applying, show the results of their hard work, and provide further insight into the statistical side of the job-hunting journey.
