Overview
HackerRank Projects for Data Science lets you set up automatic scoring for your custom, project-based, real-world questions used to assess Data Scientists.
Automatic scoring is an optional part of the "Setup Project" step in the Data Science question creation workflow. In this step, the "Add evaluation files" section lets you add evaluation scripts, submission files, solution notebooks, etc. All files uploaded here are hidden and are not available to the candidate. During scoring and evaluation, these files are downloaded into the "evaluation_files" folder of the Jupyter session. The configuration file "hackerrank.yml" must contain the scoring command, which is executed automatically after a candidate submits their solution.
In this article, we will walk you through the process of setting up automatic scoring while creating your own Data Science Question on HackerRank Projects.
How to set up automatic scoring for Data Science Questions?
Prerequisites
- You must be logged in to your HackerRank for Work account.
- You should have access to HackerRank Projects.
Steps:
- Create a program to evaluate the candidate's submission.
(Example: a Python program that compares the candidate's submission.csv with the uploaded actual_output.csv, calculates the desired model performance metric, such as accuracy, recall, precision, or F-score, converts it to a percentage, and prints "FS_SCORE: X%" as output.)
- Ensure the scoring program outputs its result in the format "FS_SCORE: X%".
- Upload the scoring program (score.py or score.sh) along with all files needed to run it successfully (such as actual_output.csv) in the evaluation section.
- Edit the "hackerrank.yml" configuration file and add the scoring key with the scoring command. Example configuration:

  scoring:
    command: python evaluation_files/score.py evaluation_files/actual_output.csv

- Ensure that your evaluation script can handle edge cases and will return a score in all possible scenarios.
- An erroneous response will result in no score, and you will be required to review the submission manually.
- Click the Validate and Save button to validate the changes. In case of errors, follow the error logs and messages to debug.
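As a concrete illustration of the steps above, a minimal scoring script (score.py) might look like the sketch below. The file layout (one label per row, no header, in actual_output.csv and submission.csv) and the accuracy metric are assumptions for illustration; adapt them to your project.

```python
# score.py - a minimal sketch of an evaluation script for automatic scoring.
# Assumes the candidate writes predictions to submission.csv and the expected
# labels were uploaded as actual_output.csv (one label per row, no header).
import csv
import sys

def accuracy(actual_path, submission_path):
    """Return the percentage of rows where the prediction matches the label."""
    with open(actual_path) as f:
        actual = [row[0] for row in csv.reader(f) if row]
    with open(submission_path) as f:
        predicted = [row[0] for row in csv.reader(f) if row]
    # Handle edge cases: empty or mismatched files still yield a score.
    if not actual or len(actual) != len(predicted):
        return 0.0
    correct = sum(a == p for a, p in zip(actual, predicted))
    return 100.0 * correct / len(actual)

if __name__ == "__main__":
    try:
        score = accuracy(sys.argv[1], "submission.csv")
    except Exception:
        score = 0.0  # any unexpected error still produces a score
    # The platform parses this exact format from the script's output.
    print("FS_SCORE: {:.0f}%".format(score))
```

With this script uploaded as an evaluation file, the scoring command in hackerrank.yml would be: python evaluation_files/score.py evaluation_files/actual_output.csv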
Note: To update an evaluation file, delete the previous version and upload the new file.