This feature is part of the AI Add-on. For more information, see Advanced Evaluation.

The Automated Code Review Scoring feature automatically evaluates a candidate’s code review submission against expert examples. It measures how effectively the candidate identifies issues, explains reasoning, and provides actionable feedback. This helps hiring teams assess review skills at scale with consistency and efficiency.

Key benefits

The Automated Code Review Scoring feature offers the following benefits:

How it works

When you enable Advanced Evaluation at the company level, Automated Code Review Scoring automatically applies to all Code Review questions.

Step 1: Define key evaluation areas

Within the grading rubric, you can define the specific areas of the code where candidates are expected to leave comments.

To configure these areas:

  1. Open the relevant Code Review question.

  2. Click Edit.

  3. In the Grading Rubric section, define the key areas where candidates are expected to leave comments.

  4. Click Save question to finalize the rubric.

Note: HackerRank provides predefined comment locations that you can customize based on your evaluation criteria.

Step 2: Candidate review process

During the test, candidates leave inline comments directly in the provided code to identify issues, suggest improvements, or justify their reasoning. The system uses these comments as the basis for automated evaluation.


Step 3: Automated evaluation

During evaluation, the AI system compares the candidate’s comments with the defined rubric. It analyzes each comment based on the following factors:

Each relevant and accurate comment contributes to the candidate’s overall score.
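Conceptually, the evaluation matches each candidate comment against the rubric's key areas and credits the areas that are covered. The sketch below illustrates that idea only; the class names, line-range matching, and scoring logic are hypothetical simplifications, not HackerRank's actual implementation (which uses an LLM to judge relevance and accuracy rather than line positions).

```python
from dataclasses import dataclass

@dataclass
class RubricArea:
    """A key area where a comment is expected (hypothetical structure)."""
    start_line: int
    end_line: int
    expected_issue: str

@dataclass
class CandidateComment:
    """An inline comment left by the candidate (hypothetical structure)."""
    line: int
    text: str

def score_submission(rubric: list[RubricArea],
                     comments: list[CandidateComment]) -> float:
    """Return the fraction of rubric areas covered by at least one comment.

    In the real system an LLM judges each comment's relevance and accuracy;
    here, simple line-range coverage stands in for that judgment.
    """
    if not rubric:
        return 0.0
    covered = sum(
        1 for area in rubric
        if any(area.start_line <= c.line <= area.end_line for c in comments)
    )
    return covered / len(rubric)

rubric = [
    RubricArea(10, 14, "SQL built by string concatenation"),
    RubricArea(30, 32, "Off-by-one in pagination loop"),
]
comments = [CandidateComment(12, "This concatenated query is injectable.")]
print(score_submission(rubric, comments))  # 0.5 (1 of 2 areas covered)
```

Because each relevant comment adds to the total, a candidate who addresses more of the defined areas, accurately, earns a higher score.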


Viewing evaluation results

Recruiters can view automated evaluation details in the Summary Report, including:

Note: The system uses the large language model (LLM) Claude 3.7 Sonnet to evaluate comment quality and relevance.

The Candidate Evaluation tab in the detailed report displays:
