Ever feel like you crushed a job interview and then ended up not getting the job? Or how about completely bombing a technical screen and still passing on to the next round? You’re not alone. Hiring standards are confusing at best, which raises the question: how do you know how well you’re performing compared to your peers?

The word of the week is calibration. Calibration is the term recruiters, interviewers, and companies use to describe how aligned they are in understanding and optimizing the interview funnel. Here is an example from a frustrated former co-worker of mine.

My co-worker Bob is a product manager at Company X who is hiring engineers through an external recruiting agency. Bob was frustrated that only 10% of the engineering candidates the agency sends over pass the technical interviews. In this case, the recruiting agency is not calibrated with Company X and Bob’s standards: it doesn’t understand what signals Company X looks for in its candidates. The agency is also likely not screening rigorously enough in the initial phone screen, and not recognizing which parts of a candidate’s background make them a poor fit for the role.

At Interview Query, we find the concept of hiring fascinating. Every day, data scientists are interviewed at companies with questions that are supposed to detect signal through the noise: a measurement of who will be a productive employee and who will not. As you can imagine, this process is far from 100% accurate. Even if the recruiting agency in the example above sent over only amazing candidates who passed every interview with flying colors, the process could still be flawed: the interviews might simply be too easy, potentially resulting in bad hires down the line.

We are interested in testing something related to our own calibration amongst data scientists. This week we’ve compiled eight interview questions from tech companies, spanning many different topics in data science, into a quiz of multiple choice and free-form questions. The quiz should take less than twenty minutes, and once we receive a sufficient number of results, we’ll email you your score and where you placed in the distribution compared to other data scientists who have taken the test.

What does this do for you? If our quiz is sufficiently calibrated, you’ll understand where you place in comparison to every other data scientist who has taken the test, assuming no one cheats or takes an abnormally long time on the quiz.
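To make the idea of placement concrete, here’s a minimal sketch of how a percentile rank could be computed from everyone’s quiz scores. This isn’t necessarily how our grading works; the scores below are made up purely for illustration.

```python
from bisect import bisect_left

def percentile_rank(score: float, all_scores: list[float]) -> float:
    """Return the percentage of scores strictly below the given score."""
    ordered = sorted(all_scores)
    below = bisect_left(ordered, score)  # count of scores below ours
    return 100.0 * below / len(ordered)

# Hypothetical scores: fraction of the eight questions answered correctly.
scores = [0.250, 0.375, 0.500, 0.500, 0.625, 0.625, 0.750, 0.875]

print(f"A score of 0.75 beats {percentile_rank(0.75, scores):.0f}% of test takers")
# -> A score of 0.75 beats 75% of test takers
```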

What we really want to know is: how well do the interview questions in the quiz scatter the distribution of results? Is our test too easy, so we’ll see a mean around 90% accuracy, or will it be too hard?
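As a rough illustration of what we’ll look for, here’s a hedged sketch (again with made-up score lists) of that check: a distribution clustered near the top means the test is too easy and carries little signal, while a wider spread means the questions actually separate skill levels.

```python
from statistics import mean, stdev

def summarize(name: str, scores: list[float]) -> None:
    # Mean indicates difficulty; standard deviation indicates spread.
    print(f"{name}: mean={mean(scores):.2f}, stdev={stdev(scores):.2f}")

# Hypothetical outcomes for an eight-question quiz.
too_easy    = [0.875, 1.000, 0.875, 1.000, 0.750, 1.000, 0.875, 1.000]
well_spread = [0.250, 0.875, 0.500, 0.625, 0.375, 0.750, 0.500, 1.000]

summarize("too easy", too_easy)        # high mean, tiny stdev -> little signal
summarize("well spread", well_spread)  # lower mean, wider stdev -> discriminates
```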

Try out our test here!