Resume GPT, Automate Resume Tailoring

Project overview

In this case study, we explore the development of an innovative solution focused on automating resume tailoring for job applications. In today's competitive job market, customizing your resume for each application can be a time-consuming and daunting process, so the demand for an easy solution that saves job seekers valuable time is evident.

Enter "ResumeGPT" – a platform where users can upload their CVs, paste the URL or description of the position they want to apply for, and receive a tailored resume in no time. 

We describe in detail how we built the solution, the challenges we faced, and how we managed to overcome them.




Tech stack

Frameworks and tools:

Next.js for building the web application.

Node.js for building the processing logic.

PDFMake for the PDF templates.

Tailwind for the application UI.

Infrastructure:

Vercel for hosting the web application.

AWS Lambda and Serverless for handling the processing part.

AWS S3 for storage.

AWS RDS with the Postgres engine for the database.

AI models:

GPT-3.5 and GPT-3.5-Turbo.




Our journey: How we built ResumeGPT


In the initial stages, our development journey began with conceptualizing the solution. We started with a basic user interface and API, implementing a straightforward OpenAI prompt for matching resumes to job posts.

Improving the process

As we progressed, we iteratively improved the system. In the next step, we enhanced the file parsing logic and introduced validation checks to ensure the correctness of inputs, focusing on both the job post and the resume.
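The validation checks described above can be sketched as follows. This is a minimal illustration, not ResumeGPT's actual rules: the thresholds and keyword heuristics are assumptions for the example.

```typescript
// Hypothetical pre-processing checks that run before any OpenAI call.
// The length threshold and keyword patterns below are illustrative only.

interface ValidationResult {
  ok: boolean;
  errors: string[];
}

function validateJobPost(text: string): ValidationResult {
  const errors: string[] = [];
  if (text.trim().length < 100) {
    errors.push("The job description looks too short.");
  }
  if (!/responsib|requirement|qualificat/i.test(text)) {
    errors.push("No responsibilities/requirements section detected.");
  }
  return { ok: errors.length === 0, errors };
}

function validateResume(text: string): ValidationResult {
  const errors: string[] = [];
  if (!/@/.test(text)) {
    errors.push("No contact email found.");
  }
  if (!/experience|employment/i.test(text)) {
    errors.push("No work-experience section detected.");
  }
  return { ok: errors.length === 0, errors };
}
```

Returning a list of errors rather than failing fast lets the UI surface all problems to the user at once.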

Next, we broke down the matching process into smaller, more manageable steps. The first step involved generating a curated summary for the resume, followed by refining the experience to highlight only relevant information and then repeating the same for other sections like education, languages, and skills.
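The step-by-step pipeline above can be sketched like this. The section names, prompt wording, and the `callModel` callback are illustrative stand-ins, not the production implementation.

```typescript
// Sketch of a section-by-section tailoring pipeline. Each section gets
// its own small, focused prompt; callModel stands in for the OpenAI call.

type Section = "summary" | "experience" | "education" | "languages" | "skills";

// Build one prompt per resume section (wording is illustrative).
function buildSectionPrompt(
  section: Section,
  jobPost: string,
  content: string,
): string {
  return (
    `Rewrite this resume ${section} so it highlights only the information ` +
    `relevant to the job post below.\n\nJob post:\n${jobPost}\n\n` +
    `Current ${section}:\n${content}`
  );
}

async function tailorResume(
  resume: Record<Section, string>,
  jobPost: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<Record<Section, string>> {
  const tailored = {} as Record<Section, string>;
  for (const section of Object.keys(resume) as Section[]) {
    tailored[section] = await callModel(
      buildSectionPrompt(section, jobPost, resume[section]),
    );
  }
  return tailored;
}
```

Keeping every prompt scoped to one section is what makes each step small and manageable.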

Fine-tuning for better and faster results

During development, we noticed that the overall process was taking far too long to complete and the results were not as accurate as expected. To solve this, we invested time in fine-tuning. Fine-tuning is a critical phase in refining AI models like ours: the model is trained on task-specific data and prompts to enhance its performance in a particular domain.
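Fine-tuning starts with assembling training examples. The sketch below shows the chat-format JSONL structure that OpenAI's fine-tuning API expects; the system prompt and example pair are illustrative, not ResumeGPT's actual training data.

```typescript
// Assemble chat-format fine-tuning examples and serialize them as JSONL
// (one JSON object per line), the file format OpenAI's fine-tuning expects.
// The example content below is a placeholder, not real training data.

interface TrainingExample {
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

function toJsonl(examples: TrainingExample[]): string {
  return examples.map((e) => JSON.stringify(e)).join("\n");
}

const examples: TrainingExample[] = [
  {
    messages: [
      { role: "system", content: "You tailor resume sections to job posts." },
      { role: "user", content: "Job post: (job text)\nSummary: (summary text)" },
      { role: "assistant", content: "Curated summary: (tailored text)" },
    ],
  },
];
```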

Further enhancements

Moving on, we noticed that many of our users not only needed to enhance their existing resumes but also to create them from scratch. We decided to handle this by offering a friendly, flexible user interface where users can create their resumes directly and then enhance them with the help of AI.

In a later iteration, we eliminated the need for users to manually copy and paste job descriptions by enabling automatic fetching. 
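Automatic fetching can be sketched as a fetch followed by HTML-to-text reduction. This is a simplified illustration: the regex-based stripping below ignores edge cases (entities, embedded JSON) that a real HTML parser would handle, and `fetchJobDescription` is a hypothetical name.

```typescript
// Sketch: fetch a job-post URL and reduce its HTML to plain text for
// use as prompt input. A production system would use a proper HTML
// parser; the regexes here are a rough approximation.

function htmlToPlainText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop stylesheets
    .replace(/<[^>]+>/g, " ")                    // drop remaining tags
    .replace(/\s+/g, " ")                        // collapse whitespace
    .trim();
}

async function fetchJobDescription(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Fetch failed with status ${res.status}`);
  return htmlToPlainText(await res.text());
}
```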

Furthermore, we introduced the capability to simultaneously enhance multiple resumes, simplifying the application process for users applying for multiple positions.
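Enhancing several resumes at once is a natural fit for concurrent processing. The sketch below uses `Promise.all`; `enhanceResume` is a stand-in for the single-resume pipeline, and a real service would additionally cap concurrency to respect API rate limits.

```typescript
// Sketch: enhance several resumes against one job post concurrently.
// enhanceResume is a placeholder for the single-resume pipeline.

async function enhanceAll(
  resumes: string[],
  jobPost: string,
  enhanceResume: (resume: string, jobPost: string) => Promise<string>,
): Promise<string[]> {
  // All resumes run in parallel; production code would limit concurrency.
  return Promise.all(resumes.map((r) => enhanceResume(r, jobPost)));
}
```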

"ResumeGPT" remains a work in progress, with an evolving roadmap that responds to customer needs and expectations. We're committed to continuously improving and expanding our offering to deliver the best possible experience for our users.

Challenges & solutions

Having all the necessary input

One of the primary challenges we encountered was ensuring we had all the necessary input for accurate resume tailoring. This included the need for a correctly formatted job description, a resume structure that we could easily parse, and accurate resume content containing all the essential details. To address this, we implemented a robust validation system that checks the quality and completeness of input data. Users are guided through the process to ensure that the provided information meets the requirements. If discrepancies or errors are identified, the system offers clear guidance on corrections, ensuring that the input is accurate and complete.

Data security and privacy

User data security and privacy are top priorities. We follow strict data security protocols and are GDPR compliant. All data is encrypted at rest and in transit to guarantee confidentiality. We also enforce a retention policy for user uploads, storing them for no longer than one week in our databases.

Improving results by fine-tuning the models

Processing large amounts of content with OpenAI can be time-consuming, and the results often differ from what we expect. To achieve the best possible results, we had to fine-tune the models we used. But fine-tuning presents its own challenges: it requires large amounts of input data while staying within the token limits imposed by OpenAI.

To address the token limitations, we have broken down the process into smaller steps. For those steps that still require a considerable amount of tokens, we break down the content into smaller chunks, process them in batches, and combine the results at the end.
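The chunk-and-combine strategy can be sketched as follows. Token counting is approximated by character count here; a real implementation would use an actual tokenizer, and the `process` callback stands in for the model call.

```typescript
// Sketch: split long content into bounded chunks, process each, and
// combine the results. Characters approximate tokens for illustration.

function chunkText(text: string, maxChars: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

async function processInChunks(
  text: string,
  maxChars: number,
  process: (chunk: string) => Promise<string>,
): Promise<string> {
  const results: string[] = [];
  for (const chunk of chunkText(text, maxChars)) {
    results.push(await process(chunk)); // sequential; could run in batches
  }
  return results.join("\n");
}
```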

The outcome of this approach has been remarkably satisfying. We've effectively reduced the time needed to process the content by more than 5x in most of our prompts. This has not only optimized efficiency but also resulted in cost savings, making the process more resource-friendly. Additionally, the fine-tuning efforts have significantly improved the quality of the results, aligning more closely with our expectations.

Dealing with imperfect results

Even with the fine-tuned models, the AI enhancements might not be perfect, and many users prefer further customization. 

To address this, we've introduced a user-friendly interface that allows users to manually refine the generated resumes or seek help from an AI assistant. Users can collaborate with the AI assistant through a chat interface to iteratively improve different sections and derive optimal results.
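The iterative chat refinement can be sketched as a running message history: each user instruction and assistant reply is appended, so the model keeps the context of earlier edits. The types, the system prompt, and `callModel` are assumptions for the example.

```typescript
// Sketch: a chat session that refines one resume section iteratively.
// callModel stands in for the actual chat-completion API call.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function startSession(sectionText: string): ChatMessage[] {
  return [
    { role: "system", content: "You refine resume sections on request." },
    { role: "user", content: `Current section:\n${sectionText}` },
  ];
}

async function refine(
  history: ChatMessage[],
  instruction: string,
  callModel: (history: ChatMessage[]) => Promise<string>,
): Promise<ChatMessage[]> {
  const next = [...history, { role: "user" as const, content: instruction }];
  const reply = await callModel(next);
  // Append the reply so the next instruction sees the full edit history.
  return [...next, { role: "assistant" as const, content: reply }];
}
```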

Controlling the costs

Cost control is integral to our approach, ensuring that users receive a reliable and efficient service while maintaining predictability. To achieve this we apply the following strategies:

Clear process: We have a well-defined process where the inputs and outputs of each phase are known. This transparency allows us to anticipate and prevent unexpected expenses, contributing to cost control.

Size limitations: We have established strict upper limits for resume size that we accept for processing. This not only helps streamline the processing but also avoids potential cost escalations associated with extremely large or complex documents.

Data-driven business model: We constantly collect and analyze cost-related information for every process. This data-driven approach enables us to refine our business model and pricing structures, ensuring that costs align with the value delivered to users. By basing our model on concrete data, we offer a fair and cost-effective solution that meets user needs and budget requirements.

To experience the full capabilities of our innovative product, we invite you to explore its features firsthand on our website.

Project results
