Evaluation Methods for Test Questions
Last updated: March 30, 2026
HackerRank supports two evaluation methods for questions in a test, depending on the question type:

- Automatic evaluation: The platform scores candidate responses against predefined criteria such as test cases, answer keys, or scoring rules.
- Manual evaluation: A reviewer inspects candidate responses and assigns scores manually.
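To make the distinction concrete, automatic evaluation of a coding question typically runs the submission against weighted test cases and sums the points for passing cases. The sketch below is illustrative only; the function and data shapes are hypothetical, not HackerRank's actual implementation.

```python
# Hypothetical sketch of test-case-based automatic evaluation.
# All names here are illustrative, not part of the HackerRank platform.

def score_submission(solution, test_cases):
    """Run a candidate's solution against test cases and return (earned, total).

    solution: a callable standing in for the candidate's code.
    test_cases: list of (input, expected_output, points) tuples.
    """
    earned = 0
    total = sum(points for _, _, points in test_cases)
    for given_input, expected, points in test_cases:
        try:
            if solution(given_input) == expected:
                earned += points
        except Exception:
            pass  # a crashing submission earns no points for that case
    return earned, total

# Example: scoring a doubling function against three weighted test cases
cases = [(2, 4, 10), (0, 0, 10), (-3, -6, 20)]
print(score_submission(lambda x: x * 2, cases))  # prints (40, 40)
```

Manual evaluation, by contrast, has no such scoring rule: a reviewer reads the response and assigns points directly.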
Evaluation methods by question type
The table below lists the evaluation method for each question type.

| Question Type | Evaluation Method |
| --- | --- |
| Coding | Automatic evaluation |
| HTML/CSS/JavaScript | Manual evaluation |
| Database | Automatic evaluation |
| Approximate Solution | Automatic evaluation |
| Generative AI | Automatic evaluation |
| Mobile Developer | Automatic evaluation |
| Data Science | Automatic or manual evaluation |
| Front-End | Automatic evaluation |
| Back-End | Automatic evaluation |
| Full Stack | Automatic evaluation |
| DevOps | Automatic evaluation |
| Code Review | Automatic or manual evaluation |
| QA Engineer | Automatic evaluation |
| Multiple Choice | Automatic evaluation |
| Subjective | Manual evaluation |
| Whiteboard | Manual evaluation |
| Diagram | Manual evaluation |
| Sentence Completion | Automatic evaluation |
| File Upload | Manual evaluation |
| Prompt Engineering | Automatic evaluation |