Why Engineers Won’t Do Your Coding Test

Conversational tech assessment tools are changing the status quo of interviewing engineers and coders by eliminating routine coding tests and testing on-the-job skills instead.

The idea behind a coding test is simple: filter out candidates who lack the technical chops for the role early in the process, before the hiring manager and candidate waste each other's time with an in-person interview.

But most engineers today frown on the idea of completing a coding test, and over 50% outright refuse to do status quo assessments (based on our research with 100+ companies in SEA).

The three most common reasons engineers hate status quo coding tests:

1. They test for algorithmic skill rather than the ability to write code.

Companies need assessment scores that meaningfully differentiate candidates, and the easiest way to achieve this is to use trick questions. To do well on these tests, candidates need to spend weeks practicing code for a list of trick questions, and only a fraction of developers can pull this off.

As an interviewer, it is very easy to forget how stressful the interview setting is for the interviewee. Having to write executable code for a very niche algorithm you studied at school (and that only if you were a CS major) and never used in your time as an engineer in the real world, with the timer ticking, can be very intimidating!

While it’s great if someone is good at algorithms (a skill that can be improved with practice), this is not a strong indicator of how good an engineer someone is or how well they will perform in the role.

Only a small fraction of tech roles require strong algorithmic ability. Also, this way of measuring developer skills has an inherent bias against more experienced developers.

As you can imagine, no great developer is excited about the prospect of taking a test in the first place. Add to that the fact that the questions are irrelevant, and it's no surprise that 50% of candidates outright refuse to take these tests.

2. They’re too big an ask

Asking a candidate to spend more than 60 minutes on a coding test before you've invested any time yourself is unfair.

Using a 3-hour coding test defeats the purpose of automation: the hiring manager has nothing to lose, but the candidate now needs to spend more time on it than they would have on a video or in-person interview.

The longer your assessment, the lower the test-taking rate will be.

3. It’s harder to code in an unfamiliar environment

Most developers have a preferred IDE (integrated development environment) that they've customized to help them write code seamlessly. A test environment is unfamiliar, and it's harder for a software engineer to function optimally in one. This is especially true when the test goes beyond a programming language in a simple code editor and instead tests front-end or back-end framework capabilities.

Developers often challenge the validity of coding tests and assessments for these and other reasons, and understandably so.

So, should we skip coding tests altogether?

That is not an option. Anyone who has been involved in tech hiring knows there are plenty of developers who aren't qualified for the role, making it necessary to have some kind of litmus test that candidates must pass before being invited to an interview.

Can’t we use a resume screen instead?

Software engineers tend not to be good at selling themselves, and great candidates often massively undersell themselves on paper. At best, a resume screen helps you eliminate some candidates who are very clearly not qualified for the role and sort resumes by priority. Beyond that, a resume filter has an inherent bias towards candidates with good credentials (education and work history). Good programmers can come from anywhere, and relying on keyword matching means you're probably missing out on a lot of great candidates.

But if companies start interviewing everyone who applies, it would take up all of the engineering team’s time just to interview candidates.

How do we evaluate if an assessment solution is a good one?

Here are the top things to check for. You're in good hands if:

  • Your test-taking rate > 70%.
  • The average time to complete the assessment is between 45 and 75 minutes.
  • When you ask candidates for feedback during in-person interviews, they have good things to say about their experience taking the assessment.
  • Hiring managers are happy with the quality of candidates that are being forwarded to in-person rounds.

If your current solution does not satisfy these criteria, you might be missing out on strong candidates for your team. As software engineers and hiring managers, my co-founder and I have used most of the status quo solutions ourselves and found the results unsatisfactory. This is what we've been working on for the past couple of years, and we've seen early success.

At Adaface, we’re building a way for companies to automate the first round of tech interviews with a conversational AI, Ada.

So, is this just a coding test but in chatbot format?

No. Here’s what we’re doing differently from the status quo:

  • Shorter assessments (45-60 minutes), so engineers can take them right away and invest as little time as possible while still having enough room to showcase their expertise.
  • Custom assessments tailored to the requirements of the role (NO trick questions).
  • Questions at the simpler end of the spectrum (it is a screening interview) with a generous time allowance (3x what it takes our team to write code for it).
  • Extremely granular scoring that minimizes false positives and false negatives.
  • Friendly candidate experience (hints for each question, friendly messaging and chatbot; average candidate NPS is 4.4/5). ☺️

How does Adaface fare on our criteria for an assessment solution?

  • We have an average test-taking rate of 86%, as compared to the industry standard of 50%.
  • The average time to complete the assessment is 62 minutes.
  • We’re most proud of the feedback candidates share after the assessment. The average rating is 4.4/5.
  • We focus on testing on-the-job skills for each role. Ada can screen candidates for 700+ skills across Software Engineering, Data Science, DevOps, Analytics, Aptitude, etc. This helps our customers find the most qualified candidates. Several of our customers have moved from status quo solutions to Adaface and are able to save upwards of 75% of the time spent on the screening process.




ABOUT THE AUTHOR

Deepti Chopra
Deepti Chopra is the co-founder of Adaface, a conversational assessment platform that uses friendly messaging and intelligent chatbots to engage candidates and screen them for your role.
