Some question types are evaluated manually, while others are evaluated automatically. In manual evaluation, the question setter or examiner reviews each candidate's answers and assigns scores accordingly.
The following list describes the evaluation method for each question type:
Automatic evaluation: Your code is run against each test case, and its output is compared with the expected output to determine whether the test case passes or fails. Your total score is the sum of the scores of all passed test cases.
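For example, if a question has 10 test cases worth 2 points each and your code passes 7 of them, your score is 7 × 2 = 14 points.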
Automatic evaluation: The data you retrieve is compared with the correct answer. If they match, you are awarded the full score; if they do not match, you receive a score of zero.
Manual evaluation: The test setter reviews how the final webpage appears and responds.
Automatic evaluation: The test setter provides a Check script (written in Bash) that verifies whether you have performed the required tasks.
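For illustration only, a minimal Check script might look like the following; the file path, service name, and the exit-status convention are assumptions, not platform specifics:

    #!/bin/bash
    # Hypothetical check: verify that the candidate created the expected
    # configuration file and that the nginx service is running.
    # (File path, service name, and exit-status convention are assumed.)
    if [ -f /etc/myapp/app.conf ] && systemctl is-active --quiet nginx; then
        exit 0   # required tasks performed correctly
    else
        exit 1   # required tasks not performed
    fi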
Automatic evaluation: You are evaluated based on the number of unit tests that pass.
Your score for a question = (number of unit tests passed / total number of unit tests) × total score
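For example, if your code passes 8 of 10 unit tests on a question worth 20 points, your score is (8 / 10) × 20 = 16.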
Automatic evaluation: The test setter writes a custom checker that specifies the scoring logic for these questions. The custom checker is run on each test case to produce a score, and your total score is the sum of the scores it returns across all test cases.
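For example, if the custom checker returns 4, 6, and 0 for three test cases, your total score for the question is 4 + 6 + 0 = 10.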
Automatic evaluation: Your responses are compared with the correct answers to score the question. By default, a multiple-choice question is worth 5 points. If a question has more than one correct answer, each correct answer is assigned an equal fraction of the total score.
The test setter can also enable negative marking for wrong answers to discourage guessing.
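For example, if a 5-point question has two correct options, each correctly selected option contributes 2.5 points.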
Automatic evaluation: Your score for a question is the fraction of blanks answered correctly multiplied by the total score for the question.
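For example, filling 3 of 4 blanks correctly on a question worth 10 points yields (3 / 4) × 10 = 7.5 points.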
Manual evaluation: The question setter or examiner reads your answers and reviews your diagrams and explanations to assign scores.
Automatic evaluation based on JUnit-based test scoring or custom scoring:
In JUnit-based test scoring, each test case is assigned an equal score, and you receive a score proportional to the number of test cases your code passes.
In custom scoring, the question setter specifies the scoring rules for a question in a script. This script is an executable program or shell command that runs in the Ubuntu environment.
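As a sketch of what such a script could look like, consider the following; the output file, the expected strings, the 5-point increments, and the convention of printing the score to standard output are all illustrative assumptions:

    #!/bin/bash
    # Hypothetical custom scoring script: award 5 points for each expected
    # line found in the candidate's output file.
    SCORE=0
    for expected in "users table created" "index added" "backup scheduled"; do
        if grep -q "$expected" /home/candidate/output.log; then
            SCORE=$((SCORE + 5))
        fi
    done
    # Report the final score; here it is assumed to be read from stdout.
    echo "$SCORE"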