HackerRank Projects for RAG enables you to create real-world, project-based questions to assess candidates' ability to implement Retrieval-Augmented Generation (RAG) systems. This feature helps you identify candidates with strong skills in retrieving relevant information, integrating it with generative AI models, and optimizing response accuracy.
With this capability, you can use predefined RAG assessments that evaluate key skills such as:
Data retrieval and indexing (e.g., using vector databases)
Fine-tuning
Evaluation of generated outputs
Additionally, you can create custom RAG-based questions, tailoring them to assess candidates on specific retrieval, augmentation, and response generation techniques relevant to your use case.
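Before building a question, it helps to recall the retrieve-then-augment flow a candidate's solution implements. The sketch below is a toy illustration (not platform code): hand-made embedding vectors stand in for a real embedding model, and `retrieve` and `augment_prompt` are hypothetical helper names.

```python
from math import sqrt

# Toy corpus with hand-made "embeddings"; a real system would call an embedding model.
CORPUS = [
    ("RAG combines retrieval with generation.", [1.0, 0.0, 0.5]),
    ("Vector databases index embeddings for fast search.", [0.9, 0.1, 0.4]),
    ("Bananas are rich in potassium.", [0.0, 1.0, 0.0]),
]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 when either vector is zero.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding and keep the top k.
    ranked = sorted(CORPUS, key=lambda doc: cosine(query_vec, doc[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def augment_prompt(question, query_vec):
    # Prepend the retrieved context to the question before calling a generative model.
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(augment_prompt("What does RAG do?", [1.0, 0.0, 0.5]))
```

A candidate's actual submission would swap the toy vectors for a real embedding model and vector index; the retrieve/augment/generate shape stays the same.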
On the home page, click the Library tab, then click the Create Question button.
In the Select Question Type dialog box, under Projects, select Generative AI, as shown below.

In the Environment tab, you can select RAG. You can hover over the info icon for more information.
Click the Next button to proceed.
There are three steps in setting up a project:
Once the IDE loads, you can set up your question project using one of the following options:
Upload a ZIP: This option allows you to create a question by uploading the project file in ZIP format. The file size can be up to 5 MB.
Clone from GitHub: This option lets you clone a project from your private or public Git repository. After selecting the Git option, you must provide a link to your repository source. If the repository is private, the IDE will need permission to connect with it using a one-time token. We do not save your GitHub credentials in our system at any time.
Use a sample project: When you choose this option, a default sample project question is added to the IDE. You can use this sample project to build your question.
Note: Do not use sample projects in a test. They are intended only as examples, not to evaluate candidate skills.
When setting up a Retrieval-Augmented Generation (RAG) question, you must define a data source from which the model can retrieve information. You can choose between:
File-based storage
Database (DB) storage

The platform allows you to upload and manage data files directly through the UI. These files will be accessible in the IDE for further editing and processing.
Option 1: Reference an existing file path in your project. The system will display the token count and size limit.
Option 2: Upload a new data file separately via the UI. The file is stored by the platform and made accessible in the IDE.
Maximum file size: 500MB (or based on token limits).
Non-text files: Only the size limit will be enforced.
Uploaded files will be stored in a dedicated "data" folder and added to .gitignore to prevent unnecessary version control tracking.
Once a file is uploaded, the token count and associated limitations will be displayed.
If the token limit is exceeded, rate limiting may apply due to larger embeddings.
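Because uploads are constrained by token counts as well as file size, it can be useful to estimate a file's token footprint before uploading. The platform's actual tokenizer is not documented here, so this sketch uses the common rough heuristic of about four characters per token for English text; `approx_token_count` and `within_limit` are hypothetical helper names, and the 30,000-token budget mirrors the embedding-model limit quoted later in this page.

```python
def approx_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # The platform's real tokenizer may count differently; treat this as an estimate.
    return max(1, round(len(text) / 4))

def within_limit(text: str, limit: int = 30_000) -> bool:
    # Check a candidate data file against an assumed per-minute embedding budget.
    return approx_token_count(text) <= limit

sample = "Retrieval-Augmented Generation grounds model answers in your data."
print(approx_token_count(sample), within_limit(sample))
```

Running the estimate locally lets you trim or split a data file before the upload step reports that it exceeds the limit.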
You can upload structured data as .db or .sql files for database-based storage. This allows you to manage and query your data efficiently within the project.
Upload a .db or .sql file directly.
The same constraints as file uploads apply (size and token limits).
Large database embeddings may trigger rate limits.
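For database-based storage, a candidate's project typically opens the uploaded `.db` file and queries it during retrieval. The sketch below uses Python's standard `sqlite3` module with an in-memory database standing in for the uploaded file; the `docs` table and `search` helper are illustrative names, not part of the platform.

```python
import sqlite3

# In-memory database standing in for an uploaded .db file.
# In a real project, point sqlite3.connect at the uploaded file's path instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [("RAG retrieves documents before generating.",),
     ("Vector indexes speed up similarity search.",)],
)
conn.commit()

def search(term: str):
    # Simple keyword lookup; a real RAG pipeline would embed and rank instead.
    cur = conn.execute("SELECT body FROM docs WHERE body LIKE ?", (f"%{term}%",))
    return [row[0] for row in cur.fetchall()]

print(search("retrieves"))
```

Parameterized queries (the `?` placeholders) keep the lookup safe even when the search term comes from candidate input.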
Request limit: 30 requests per minute
Token limits:
Standard limit: 3,000 tokens per minute
Embedding model: 30,000 tokens per minute
Parallel requests: Up to 5 simultaneous requests
Setting up an efficient data source ensures your RAG model functions optimally while staying within these limits.
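A project that issues many embedding or generation calls can hit the 30-requests-per-minute cap above. One common client-side guard is a sliding-window limiter; this is a generic sketch (the `RateLimiter` class is not a platform API), with the clock passed in explicitly so the logic is easy to verify.

```python
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter for a requests-per-minute cap."""

    def __init__(self, max_requests=30, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # admission times inside the current window

    def allow(self, now: float) -> bool:
        # Evict timestamps that have aged out of the window, then admit if under the cap.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

limiter = RateLimiter()
results = [limiter.allow(now=0.0) for _ in range(31)]
print(results.count(True))  # 30 admitted; the 31st is rejected
```

The same pattern, combined with a semaphore of size 5, would also respect the parallel-request limit; a rejected call should be retried after a short backoff rather than dropped.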
IDE setup
When building a project, a user can configure what command will run when clicking certain menu items in the IDE. Install is one such command.
Here, you can configure specific commands that the IDE triggers for the project when the button is clicked. The command is preserved for candidates, i.e., they will see the same command execution when they attempt the question.
The same applies to Run and Test.
You can set which files in the project should open by default for the candidates when they work on the project.
You can specify multiple files here, and all of them will open in the IDE.
Question creators can set read-only files in the project that the candidates should not edit. These may be test case files or readme files. Specify the file path to these files here.
You are provided with the option to choose between having automatic scoring or not. Select Yes or No based on your preference.
If you enable automatic scoring, HackerRank scores a candidate's solution based on the pass/fail status of unit test results in a generated XML file. The scoring command should produce the XML file in JUnit, xUnit, or TAP format; if you use multiple XML files, give them unique names. The XML files are parsed to generate a list of test cases and produce the candidate's score. All test cases are assigned equal weightage by default.
Define Scoring Command: The scoring command should generate the XML file with test case results.
Scoring Setting Output File: Provide the list of XML file paths generated when the scoring command is executed. HackerRank runs the scoring command, searches for these output files, and parses them to produce a score. If no files are listed, the scoring command's console output is used for scoring. For multiple XML files, provide unique names.
Hidden Files: In this setting, you can provide a list of test case files hidden from the candidate while they are solving the question. These hidden test cases still contribute to the candidate's score once they submit their solution.
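The equal-weightage scoring described above can be illustrated with a short parser: each `<testcase>` in a JUnit-style report contributes an equal share of the maximum score, and a case fails if it contains a `<failure>` or `<error>` element. The report below is a made-up example, and `score_from_junit` is an illustrative helper, not the platform's parser.

```python
import xml.etree.ElementTree as ET

# A minimal JUnit-style report of the kind a scoring command might emit.
JUNIT_XML = """
<testsuite name="rag-tests" tests="4" failures="1">
  <testcase name="test_retrieval"/>
  <testcase name="test_augmentation"/>
  <testcase name="test_generation">
    <failure message="wrong answer"/>
  </testcase>
  <testcase name="test_citations"/>
</testsuite>
"""

def score_from_junit(xml_text: str, max_score: float = 100.0) -> float:
    # Equal weightage: each test case contributes max_score / total.
    root = ET.fromstring(xml_text)
    total = passed = 0
    for case in root.iter("testcase"):
        total += 1
        # A case fails if it contains a <failure> or <error> child element.
        if case.find("failure") is None and case.find("error") is None:
            passed += 1
    return max_score * passed / total if total else 0.0

print(score_from_junit(JUNIT_XML))  # 3 of 4 pass -> 75.0
```

In practice, your scoring command just needs to emit a report in this shape (for example, `pytest --junitxml=result.xml` for Python projects) and list `result.xml` as the scoring output file.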
If the validation fails, use the information the validation result provides to fix the project.
If the validation fails at a step that runs after a server has been created (scoring-command validations, for example), you will see an additional 'Open debug console' link. You can open the server and debug the scoring command in the live environment. The debug console is available for 30 minutes, and changes made there are not saved to the original project.
If you were editing the question by modifying existing files, you can 'Reset to the last validated version', which restores the file structure to the last valid state. Similarly, you can 'Download the last saved folder structure'.
You can monitor the Network Indicator within the IDE to ensure that you don't encounter any issues while creating a project question.
If your project references any blacklisted domains, the IDE will notify you.
You can click on View Blocked Domains to check the domain URLs.
On the Question Details tab, specify the following:
Question Name: Name your question and ensure that the question name does not hint at the solution to the problem.
Score: While you can assign any score you want to the questions you create, we use the following standards when assigning scores:
| Score | Question Type |
| --- | --- |
| 50 points | An easy question that can be solved in 15 minutes |
| 75 points | A medium question that can be solved in 30 minutes |
| 100 points | A hard question that can be solved in 45 to 60 minutes |
Tags (Optional): Tags are words or phrases that help with the searchability and organization of your questions. You can add existing tags or create new ones. Set the difficulty level for your question by associating the tags Easy, Medium, or Hard, or associate custom tags to identify your question by complexity or level. Select tags from the drop-down list or add your own.
When you view your questions in the Library, the associated tags will be indicated for every question. You can use these tags to generate candidate reports and performance graphs. Refer to Associating Tags to Question for more information.
Problem Description: While describing the problem statement, ensure the question is clear and detailed. You can also use tables, graphs, or attachments to enhance clarity.
Software Instructions: This optional field contains information about packages and software versions required to solve the question. When left empty, default platform instructions will be used.
Interviewer Guidelines: Interviewer Guidelines are for later reference. You can include a scoring rubric or write solutions to the problems in this section, and your team can use these while evaluating the test. They are visible only to you and your team; candidates cannot view these notes. This step is optional.
Attaching a relevant reference file in these sections can support your problem statement and interviewer guidelines. For more information, refer to Attach a File to a Problem.
You can check the candidate preview after completing the question creation to understand how the question is presented in a test.
Note: The questions you create are stored in the HackerRank Library under the "My Company questions" section.