We are constantly updating HackerRank for Work with new improvements and fixes. Here is a summary of the customer-facing updates we made from January 13 to April 3, 2020.
Evaluation Workflow for Projects
We have consolidated the “Browse code” and “Review in IDE” features into a dedicated evaluation page for Project submissions and added the problem description to it. This new page helps customers do quality reviews of candidates’ submissions on Project questions.
New Roles in the Test Creation Wizard
Each of these new roles:
- Uses high-quality, specially curated questions
- Comes with predefined tests that follow test creation best practices
- Targets specific skills, helping customers get accurate skill signals
Filter for Recommended Time
Each question in the library now has a recommended time associated with it, and we have added a filter so customers can use this seamlessly. The new filter helps customers find relevant questions and control the overall test duration.
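To illustrate how recommended times can be used to control a test's overall duration, here is a minimal sketch (not HackerRank's implementation; the question names and times are made up) that greedily picks questions whose recommended times fit a duration budget:

```python
# Illustrative sketch: select questions so their recommended times
# fit within a target test duration. All names and times are examples.

def fit_to_duration(questions, max_minutes):
    """Greedily pick questions until the recommended times fill the budget.

    `questions` is a list of (name, recommended_minutes) tuples.
    """
    picked, total = [], 0
    for name, minutes in questions:
        if total + minutes <= max_minutes:
            picked.append(name)
            total += minutes
    return picked, total

library = [("Arrays 101", 15), ("REST API Project", 45), ("SQL Join", 20)]
picked, total = fit_to_duration(library, 60)
# For a 60-minute test, this picks the first two questions (15 + 45 minutes).
```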
Simplified instructions page for candidates
The previous instructions page had around nine points that were confusing and outdated, which often led candidates to ignore the instructions entirely. We have simplified the page down to three key points.
Dark Mode for Coding Tests
Developers love dark mode, and most of their IDEs follow a dark theme. We have taken the dark theme already present across Community, CodePair, and Projects questions and expanded it to CodeScreen.
We have launched dark mode for “Coding-only” tests. For Community users who have enabled dark mode, tests will open in dark mode by default.
Better alerts towards end of test
To help candidates keep track of time, we now blink the timer and alert candidates when 10, 5, and 2 minutes remain in the test.
Git-based workflow for candidates
A lot of our candidates use the offline workflow to solve Project questions. This experience has consistently received very positive feedback from developers. Listening to them, we have made the following changes:
- Offline workflow is now the default workflow for solving Project questions
- We now show software instructions before the test starts to ensure candidates have their environment ready well in advance and can jump right into solving the problem
- There is a new test setting to enable or disable the Online IDE for the candidates for Project questions. It can be accessed inside Test > Settings > Questions
- The online IDE will be disabled by default for all new customers. Existing customers will continue to see the online IDE option. You can toggle the “Online IDE for Project questions” option on/off to enable/disable the Online IDE for your candidates
- Auto-indentation for Python
- Autocomplete for C#
Developer Friendly Themes
We launched Library Questions mapped to specific Industry Themes. Tags have been added to more than 50 questions so customers can filter the library and find questions for their industry.
New Library Questions Launched
We clearly identified the concepts that must be assessed within various skills and launched 100+ high-quality questions to the Library across those skills.
We have fine-tuned 130+ questions by having a native US English speaker review and restructure them to meet quality guidelines. Close to 30 leaked questions have been refurbished to make them harder to find online.
Skill-based candidate feedback with interviewer scorecards
When it comes to remote technical interviews, the ability to effectively collect and synthesize interviewer feedback is core to assessing candidate fit.
But that’s easier said than done. More often than not, interviewers end up taking their own notes: in their own document, in their own style, and on their own time. Gathering and synthesizing that feedback afterward is not only time-consuming: it also yields more generic, imprecise feedback. After all, it’s challenging to recall the details of an interview that happened hours, or even days, ago.
To streamline that process, we built a new interviewer scorecard function into CodePair. Blending seamlessly into the existing CodePair interface, the private scorecard allows interviewers to record feedback on candidate skills throughout the interview.
The goal is to provide interviewers a simple way to capture feedback during the interview, and to focus their feedback on key skills of the candidate. The scorecard also helps ensure structured, standardized feedback from every interviewer—helping to focus debrief meetings on the key skills of the candidate.
How it works
As a part of this update, we’ve added the interviewer scorecard directly into the CodePair interface. The scorecard allows interviewers to rank candidates against four key skills:
- Code Quality: Can the candidate write code that’s modular, maintainable, and follows industry standards?
- Problem Solving: Does the candidate show proficiency in basic data structures (e.g. arrays or strings) and algorithms (e.g. sorting and searching)?
- Language Proficiency: Is the candidate able to understand and use different features of the language utilized in the interview?
- Technical Communication: Can the candidate clearly communicate technical concepts?
Interviewers can now rank candidates against each skill using a five-point scale ranging from “Strong Yes” to “Strong No.” Interviewers can also leave more detailed feedback for each skill to elaborate on their choice. In addition to the four key skills, we’ve also enabled interviewers to leave overall feedback about the candidate.
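The scorecard structure described above can be sketched as a small data model (an assumption, not CodePair's actual implementation; the four skills and the end-point labels come from the text, while the intermediate scale labels and all sample ratings are placeholders):

```python
# Illustrative sketch of a scorecard record. Intermediate scale labels
# ("No", "Neutral", "Yes") are placeholders, not confirmed product labels.

SKILLS = ["Code Quality", "Problem Solving",
          "Language Proficiency", "Technical Communication"]
SCALE = ["Strong No", "No", "Neutral", "Yes", "Strong Yes"]  # five-point scale

def make_scorecard(ratings, overall=""):
    """Build a scorecard: one (rating, comment) pair per skill, plus overall notes."""
    for skill, (rating, _comment) in ratings.items():
        assert skill in SKILLS and rating in SCALE
    return {"skills": ratings, "overall": overall}

card = make_scorecard(
    {"Code Quality": ("Yes", "Clean, modular solution"),
     "Problem Solving": ("Strong Yes", "Optimal approach on first try"),
     "Language Proficiency": ("Yes", ""),
     "Technical Communication": ("Neutral", "Explained trade-offs when prompted")},
    overall="Recommend advancing to onsite",
)
```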
Interviewers can review a collection of feedback via the CodePair candidate report, which summarizes feedback from the interview for easier collective review. Those collecting interview feedback elsewhere—like an applicant tracking system (ATS)—can also copy and paste their scorecard into their ATS notes to streamline candidate review.
Import a full CodeScreen Test into a CodePair session
Until now, interviewers had to import each question from the candidate's previous test one at a time using the import question dialog inside CodePair. We now show a popup to import the entire screening test into the CodePair interview, along with the candidate's score on each question, so the interviewer can see where the candidate did not perform well. The import popup also has a mute option that lets the interviewer opt out for 14 days; during the mute period, interviewers can still import questions through the usual Import question -> Candidate test option.
In keeping with our support for diversity, if diversity is enabled for CodePair, the interviewer will not see the candidate’s name in the import popup. We hope this feature reduces interviewers’ effort in importing screening test questions into the interview.
Monaco Editor in Codepair
Monaco is Microsoft’s open source code editor, the editor behind one of the world’s favorite developer IDEs, and is used by more than 9 million developers. Previously we used the Ace editor in CodePair, which was missing developer-friendly features such as code linting, context-aware autocomplete, and keyboard shortcuts. Monaco comes with built-in autocomplete and many other valuable features and shortcuts.
We have used Monaco in CodeScreen and the Community for more than two years now, and candidates have consistently praised the quality of the editor. We have also added an ‘Autocomplete status’ indicator at the top to show whether autocomplete is ready or disabled. We hope this update provides an excellent candidate experience!
Strengthening the Security & Data Protection
As part of our efforts to keep your data secure, we have added another layer of security to your HackerRank for Work account. With this update, if you try to log in to your account while it is already active on other devices or browsers, you will receive a warning prompting you to either cancel this login or log out of the existing sessions. This helps prevent unauthorized use of your account.
Security for API Tokens
The My API Tokens page has been rewritten to protect users' API tokens. In the past, we showed the full token value, which has the same power as a user's password; tokens are now masked. Although masking is more secure, it makes it harder to look up a specific token, so we have added a last-used date and IP address to help you identify each token.
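As a rough illustration of this pattern (an assumption, not HackerRank's code; the token value and metadata are invented), a masked token record keeps only a short suffix visible alongside last-used metadata:

```python
# Illustrative sketch: mask a token so only the last few characters are
# shown, and pair it with last-used metadata for identification.

def mask_token(token, visible=4):
    """Replace all but the last `visible` characters with asterisks."""
    if len(token) <= visible:
        return "*" * len(token)
    return "*" * (len(token) - visible) + token[-visible:]

record = {
    "token": mask_token("hr_live_8f3a9c21d4e7"),  # invented example value
    "last_used": "2020-03-28",
    "last_ip": "203.0.113.7",
}
# record["token"] ends in "d4e7"; the rest of the value is hidden.
```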
Report a Leaked Question
Customers now have a mechanism to report a leaked question, backed by a review workflow that helps the internal team investigate and act swiftly. Questions deemed leaked are flagged in the UI, and relevant details are communicated to the customer.
SmartRecruiters Integration Update
The SmartRecruiters Integration has been updated with the following changes:
- Changed test names now show up in SmartRecruiters
- Only Published tests are now synced; Draft and Archived tests are excluded
- Tests are withdrawn from SmartRecruiters when the integration is turned off
This has been rolled out automatically to existing customers; no migration is required.
Multiple customer-impacting issues were fixed during Q4. Here are some of the top fixes:
- Removing a question from a section re-ordered the questions in other sections
- Unable to upload documents to question description
- Issue adding accommodation time to tests
- Accounts locked unexpectedly
- Custom test cases not providing the correct output
- Email validation for scheduling CodePair interviews
- Pre-defined tests were picking leaked questions
Changes or Regressions
- The Online IDE for Project questions is disabled by default for new customers
- Block Concurrent Sessions - users are now blocked from using the same credentials from multiple places at the same time