Ever feel like you crushed a job interview, but still didn’t get the job? Or maybe you completely bombed a technical screen and still passed on to the next round?
Hiring standards are confusing at best, and for candidates they raise an obvious question: How well am I performing compared to my peers?
A data science skills assessment can help to shed light on that question.
Let’s take a closer look at data science skills assessments, what they are, how they’re used, and how Interview Query is building data science assessments to help our members advance in their careers.
Take Interview Query’s data science skills test now and see how you compare to other data scientists.
For candidates, a well-calibrated skills test provides a score of where they stand compared to other data science professionals (similar to SAT percentiles). And hiring managers use skill assessments to narrow down the pool of data science applicants.
Let’s start with an example:
Bob is a product manager at Company X and is hiring engineers using an external recruiting agency.
Bob has a problem though: Just 10% of the engineering candidates that the recruiting agency sends over pass the technical interview.
In other words, the recruiting agency is not calibrated to Bob’s hiring standards.
The agency doesn’t understand what signals Company X is looking for in its candidates. Further, it’s likely the agency’s initial phone screens aren’t challenging enough. The recruiting firm doesn’t have a solid grasp of the tangible and intangible skills that will make a candidate successful.
At Interview Query, we think hiring in data science is a fascinating subject. Every day, data scientists are interviewed with technical questions that are supposed to separate the signal from the noise, questions that should assess who will be a productive worker and who will not.
As you can imagine, the hiring process is not always 100% accurate.
Even if our example recruiting agency only sent over A+ candidates who passed every interview, Company X’s process could still be flawed. The questions could be too easy or off-base, potentially resulting in the wrong hires.
A well-calibrated data science assessment can help to solve this problem. And it’s an area in which we’re currently building and testing new features.
Recently, we launched our own data science skills test, which we’re calling Challenges. The assessment features 12 multiple-choice and free-form questions from tech companies that cover a range of data science skills and concepts.
The quiz should take less than 20 minutes to complete, and it’s open to anyone. After you’ve taken the test, you’ll receive a score and see where you rank amongst all the data scientists who have taken the test.
Our goal is to provide data science and machine learning professionals with an accurate assessment of how they stack up against the competition.
Knowing where your skills stand against your peers is a powerful interview prep tool. If the test is well-calibrated, your score tells you your true skill level, and that ranking can guide your next steps.
For example, if you scored high on an intermediate data science skills test, you’re likely ready for mid-level data science roles. Yet, if you were to score poorly, you might focus on your technical development, or look for more junior-level roles.
But, if you want to know where you stand, the assessment has to be calibrated correctly.
If the test is too easy, the mean score might hover around 90% accuracy, and anyone taking it would look like a “rockstar” data scientist, ranking in the top 10%. A skills test that is too hard produces the opposite result: hardly anyone taking it would appear qualified.
Calibrating a skills test separates the signal from the noise, but it’s difficult to get right. It’s a subject that we take great interest in, and we’ve had success with it in the past. Now, we’re fine-tuning with the launch of Challenges.
We launched one of our first multiple-choice tests in 2020, which more than 600 data scientists took.
One thing we found after launching that first skills test was that the scores were pretty close to a normal distribution, an indicator of decent calibration.
Here’s a look at the data from our initial assessment results:
From that first quiz, we did find a few areas that we could improve. For one, we felt the quiz could be longer.
With just six questions, most people answered only two correctly, for a mean score of 2.35. If you answered four correctly, you were already in the top 20%, and if you answered all six correctly, you were in the top 2%.
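To make the ranking concrete, here’s a short sketch of how a percentile rank falls out of a score distribution. The score counts below are made up to roughly mimic the numbers reported above (a mean near 2.35, top 20% at four correct); they are not our actual data.

```python
# Hypothetical score counts for a 6-question quiz, constructed to
# resemble the distribution described in the article -- NOT real data.
scores = [0] * 40 + [1] * 120 + [2] * 180 + [3] * 140 + [4] * 80 + [5] * 30 + [6] * 10

def percentile_rank(score, all_scores):
    """Percentage of test-takers who scored strictly below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

mean = sum(scores) / len(scores)
print(f"mean score: {mean:.2f}")                              # ~2.38
print(f"4 correct -> top {100 - percentile_rank(4, scores):.0f}%")  # top 20%
print(f"6 correct -> top {100 - percentile_rank(6, scores):.0f}%")  # top 2%
```

The point is that a raw score only means something relative to the distribution: four out of six sounds middling, but against this distribution it puts you ahead of 80% of test-takers.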
We felt that the sample size would need to grow to further fine-tune the calibration. Here’s the distribution of cumulative scores:
Ultimately, running that first 6-question data science quiz taught us a few things. It gave us an idea of how calibrated the questions were. The data showed that the questions were decently calibrated, but that we’d probably need a larger sample size.
The thing is, multiple-choice questions for data scientists are difficult to write. You have to condense broad concepts and skills into short answers. One way we’ve learned to do that is by testing multiple concepts in a single question. A SQL question from that initial quiz required the test-taker to know how a LEFT JOIN worked, but also what the distribution of the query results looked like.
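To illustrate that two-concepts-in-one idea (not the actual interview question), here’s a toy LEFT JOIN run through SQLite. The table names and values are hypothetical; the point is that answering correctly requires knowing both the join mechanics and the shape of the result set.

```python
# Toy example: a LEFT JOIN question tests join mechanics AND the
# resulting distribution of rows. Tables/values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, amount REAL);
    INSERT INTO users VALUES (1, 'ana'), (2, 'bo'), (3, 'cy');
    INSERT INTO orders VALUES (1, 10.0), (1, 25.0), (2, 5.0);
""")

# A LEFT JOIN keeps every user: 'cy' appears with a NULL amount,
# while 'ana' appears once per matching order. So the result has
# 4 rows, not 3 -- the "distribution" part of the question.
rows = conn.execute("""
    SELECT u.name, o.amount
    FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
""").fetchall()
print(rows)  # 4 rows, including ('cy', None)
```

A candidate who only knows the JOIN syntax might still guess the row count wrong, which is exactly the kind of layered signal a single question can capture.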
With Challenges, we’ve continued to refine and calibrate our questions to ensure they’re both challenging and provide a strong assessment of a test-taker’s data science intuition.
With that initial test, we also learned a lot about how we could improve evaluating quiz scores. For example, we saw that time was a big factor in success. One way we addressed this was by factoring time into Challenges scores. Your efficiency in taking the test now influences your score.
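As a sketch of what time-aware scoring can look like, here’s one plausible formula. This is purely illustrative; the actual Challenges scoring logic isn’t spelled out here, and the function name, weights, and time limit below are assumptions.

```python
# Hypothetical scoring formula -- NOT the actual Challenges logic.
# Accuracy dominates; unused time adds a bounded bonus.

def challenge_score(correct, total, minutes_used, minutes_allowed=20,
                    time_weight=0.2):
    """Blend accuracy with a bonus of up to `time_weight` for speed."""
    accuracy = correct / total
    time_left = max(0.0, 1 - minutes_used / minutes_allowed)
    return round(100 * accuracy * (1 + time_weight * time_left), 1)

# Same accuracy, different pace: the faster test-taker scores higher.
print(challenge_score(9, 12, 10))  # -> 82.5
print(challenge_score(9, 12, 19))  # -> 75.8
```

Any scheme like this has to keep the time bonus small relative to accuracy, or it would reward rushing over getting answers right.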
Finally, we felt that the data were limited; it was just a 6-question assessment. With Challenges, we’ve doubled that to 12 questions, which will help us test and fine-tune calibration even more.
You can take the Challenges assessment now. It’s a free feature, and we’re encouraging anyone who’s interested in getting a measure of how they stack up to the competition to take it.
The test features 12 questions that are asked in real data science interviews at FAANG companies, and the assessment covers a range of data science topics.
Here’s a sample product sense question from the assessment:
We think that Challenges will be a valuable benchmarking tool for data science professionals. You can take the quiz now and determine where you stand right away. We encourage everyone to take it, whether you’re actively studying for interviews or just interested in technical skills development.
What Comes Next
Challenges is in beta right now. Our hope is to continue to expand our offerings of data science skills tests, short-form quizzes, and other ways for our users to do on-the-go interview prep.
Eventually, we’ll expand this to offer skills tests in various areas, like a product metrics skills test, as well as other quizzes to test key data science skills. Want to help? Try Challenges and let us know what you think.