Your hiring company uses our platform to design its HackerRank Tests and assessments. After you complete and submit a HackerRank Test, the company that conducted the test owns your report and decides your scores; whether to share the results with you is at its discretion. HackerRank does not send test reports to candidates.
Your evaluators may use manual or automatic methods to assess your answers and assign scores. Automatic evaluation is typically used for Coding, Multiple-choice, and Fill-in-the-blank (Sentence Completion) Questions, where your answer is compared against a preset answer to check for correctness. Based on the comparison, a full, partial, or zero score is assigned. Alternatively, evaluators may review your answers manually: Questions that require you to draw flow diagrams or give subjective answers are typically reviewed and scored by the evaluators themselves.
Your overall Test score can therefore be the sum of automatically and manually assigned scores.
This article helps you understand the general practices test setters use to evaluate your HackerRank Test answers.
Coding Questions Evaluation
Typically, your solutions to Coding Questions (including those in different programming languages, database programming Questions, and so on) are evaluated automatically and scored based on the number of test cases your solution passes by returning the expected output. You may be assigned a specific score for each successful test case.
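As a rough illustration of this per-test-case scoring (a sketch only, not HackerRank's actual implementation; the function name and weights are hypothetical):

```python
# Hypothetical sketch of per-test-case scoring for a coding question.
# "results" records whether each test case passed; "weights" is the
# score assigned to each test case by the test setter.

def score_coding_question(results, weights):
    """Return the sum of the scores of all passed test cases."""
    return sum(w for passed, w in zip(results, weights) if passed)

# A solution that passes 3 of 4 test cases, each worth 10 points:
print(score_coding_question([True, True, False, True], [10, 10, 10, 10]))  # 30
```

A failed test case simply contributes nothing, which is why a partially correct solution can still earn a partial score.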
Refer to the How are my coding Questions graded or scored? topic for more information.
The following table specifies the general evaluation methods used to evaluate the answers for different types of questions in HackerRank Tests:
|Question Type|Evaluation Method|
|---|---|
|Coding|Automatic evaluation: Your code is run against each test case, and its output is compared with the expected output to determine whether the test case passed or failed. Your total score is the sum of the scores of all passed test cases.|
|Database|Automatic evaluation: The data retrieved by your query is compared with the correct answer. If they match, you receive the full score; if they do not, you receive a zero score.|
|Front-end (web pages)|Manual evaluation: The evaluator reviews the final webpage rendered by your code to verify how it appears and responds, and assigns a score manually.|
|DevOps|Automatic evaluation: The test setter provides a check script (written in Bash) that validates whether you performed the required tasks. A score is assigned automatically based on the script's output.|
|Project-based (unit tests)|Automatic evaluation: You are scored on the number of unit tests that pass. Your score for the question = (number of unit tests passed / total number of unit tests) × total score.|
|Custom checker|Automatic evaluation: The test setter writes a custom checker that specifies the scoring logic. The checker is run on each test case, and your total score is the sum of the scores it returns.|
|Multiple-choice|Automatic evaluation: Your responses are compared with the correct answers. By default, each multiple-choice question is worth 5 points. If a question has more than one correct answer, each correct answer carries an equal fraction of the total score. The evaluator may also enable negative marking for wrong answers to discourage guessing.|
|Fill-in-the-blank|Automatic evaluation: Your score is the fraction of correctly answered blanks out of the total number of blanks, multiplied by the total score for the question.|
|Subjective (diagrams, explanations)|Manual evaluation: The evaluator reads your answers and reviews your diagrams and explanations to assign a score manually.|
|Java Project|Automatic evaluation, using either JUnit-based or custom scoring. In JUnit-based scoring, each test case carries an equal score, and you receive a score proportional to the number of test cases your code passes. In custom scoring, the test setter specifies the scoring rules for the question in a script: an executable program or shell command that runs in the Ubuntu environment.|
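The proportional and partial-credit formulas above (unit-test, fill-in-the-blank, and multiple-choice scoring) can be sketched as follows. This is an illustrative sketch only: the function names, the set-based answer representation, and the handling of edge cases are assumptions, not HackerRank's exact behavior, and negative marking is ignored.

```python
# Illustrative sketch of the proportional scoring formulas described above.
# Names and edge-case handling are assumptions, not HackerRank's exact logic.

def proportional_score(passed, total, max_score):
    """Score = (passed / total) * max_score, as for unit tests or blanks."""
    if total == 0:
        return 0.0
    return (passed / total) * max_score

def mcq_partial_score(selected, correct, max_score=5):
    """Each correct option carries an equal fraction of the question's score.
    Simplified: wrong selections earn nothing (no negative marking)."""
    per_option = max_score / len(correct)
    return sum(per_option for opt in selected if opt in correct)

# 8 of 10 unit tests pass on a 50-point question:
print(proportional_score(8, 10, 50))                      # 40.0

# 2 of 4 blanks answered correctly on a 10-point question:
print(proportional_score(2, 4, 10))                       # 5.0

# Two of three correct options selected on a 6-point question:
print(mcq_partial_score({"A", "C"}, {"A", "B", "C"}, 6))  # 4.0
```

The same fraction-times-total pattern underlies several of the automatic methods in the table, which is why partially correct answers can still earn partial credit.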