This article explains the evaluation methods for Data Science questions. Because Data Science evaluation, unlike other question types, is highly subjective and the emphasis is on the approach a candidate takes to solve a question, HackerRank Projects for Data Science enables hiring managers to score Data Science questions manually.
Data Science questions are scored primarily through a thorough analysis of the candidate's solution Jupyter Notebook, available in the candidate's Test Report.
Read more about accessing and evaluating a candidate's Test Report here.
Manually scoring a Data Science Question
- You must have a HackerRank for Work account.
- You must have at least one Test attempted by candidates, with submissions pending evaluation.
Using the HackerRank provided Scoring Rubric to evaluate Data Science questions
Because Data Science questions are evaluated manually, the candidate Test Report provides a scoring rubric for each question, helping the hiring manager perform an efficient, consistent manual evaluation of Data Science solutions.
To access the scoring rubric, click the 'Internal Notes' section of the detailed candidate report tab, and use the solution Jupyter Notebook, the evaluation script, etc. provided for all HackerRank Data Science questions, as shown below.
- Navigate to Tests and select the required Test.
- Click the Candidates tab, and select a candidate entry pending evaluation.
- On the candidate's Test Summary page, click the Detailed tab to view the candidate's detailed Test Report.
- Alternatively, in the Summary tab, scroll down the page and click "View detailed report" for a particular Question.
- On the Detailed Candidate Test Report, you can expand and view the question description and the internal notes containing the scoring rubric (with the solution Notebook, the evaluation script, etc. for all library Data Science questions). You can then review the candidate's submitted Jupyter Notebook, either by downloading it as a Zip file or by launching a temporary Jupyter session with the candidate's code.
- The Notebook's contents are also rendered inline on the Detailed tab of the Test Report, where they can be viewed and assessed directly.
- Launching the Jupyter session enables you to review all the files in the submission, run cells, execute any scripts, and explore different aspects of the candidate's submission, as shown below.
- Upon completing your evaluation, manually enter your score and leave any feedback for the candidate. This completes the scoring for that question.
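When reviewing a submission in the launched Jupyter session, it can help to run a quick sanity check on the candidate's outputs before entering a score. The sketch below is a hypothetical example, not part of the HackerRank evaluation script: the file names, column names, and accuracy metric are illustrative assumptions, to be adapted to the actual question and submission.

```python
# Hypothetical reviewer sanity check, runnable inside the candidate's
# Jupyter session. File and column names below are illustrative only --
# adjust them to match the files in the actual submission.
import csv

def load_labels(path, column):
    """Read a single column from a CSV file into a list of strings."""
    with open(path, newline="") as f:
        return [row[column] for row in csv.DictReader(f)]

def accuracy(predicted, actual):
    """Fraction of positions where predicted and actual labels agree."""
    if len(predicted) != len(actual):
        raise ValueError("prediction/label count mismatch")
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Inline example (two of three labels match):
score = accuracy(["cat", "dog", "dog"], ["cat", "dog", "cat"])
print(f"accuracy: {score:.2f}")
```

A check like this only supplements the rubric: it confirms the submission produces well-formed output, while the rubric's qualitative criteria (approach, code quality, reasoning) still drive the manual score.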
Learn how to analyze a Data Science Test Report in detail here.