I heard an interesting story from one of my friends working in data science. Let’s call him Bob.

Bob worked for a mid-sized startup that sold a self-service subscription product with highly variable pricing; some customers spent 10x more than others. Given the growth in this part of the business, a customer success team was formed within marketing to handle the higher-spending clients.

One day the customer success manager came to Bob asking for help analyzing an AB test they were running. Half of the new customers who signed up for a subscription over $100/mo would get a phone call from the customer success team, and half would not. It was a test to see how effective customer success onboarding was at preventing churn off the platform.

Bob analyzed the experiment but found a couple of problems. One was that there weren't many sign-ups for the month the test was run; there were only around 50 customers in each bucket who bought new subscriptions over $100/mo. In the control group, eight customers churned, yet in the test bucket, in which each customer got a phone call from customer success, only one customer churned.

“Wow, looks like a huge difference!” the customer success manager exclaimed. Bob reiterated that with such a small sample, they couldn’t draw any real conclusions. But the customer success manager mentioned they were under pressure from their boss, the director of marketing, to show that their team was making progress. Could he just tell the director that their strategy was heading in the right direction?
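As an aside, Bob's skepticism is easy to sanity-check. Below is a minimal sketch, assuming Python with scipy installed and using the rough counts from the story (about 50 customers per bucket, eight churners in control versus one in the test group); the exact numbers here stand in for whatever Bob actually had.

```python
# Rough sanity check of the churn numbers from Bob's experiment
# (approximate counts from the story: ~50 customers per bucket,
# 8 churned in control vs. 1 churned in the test group).
from scipy.stats import fisher_exact

control = {"churned": 8, "retained": 42}    # no onboarding call
treatment = {"churned": 1, "retained": 49}  # got an onboarding call

table = [
    [control["churned"], control["retained"]],
    [treatment["churned"], treatment["retained"]],
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"control churn rate:   {control['churned'] / 50:.1%}")
print(f"treatment churn rate: {treatment['churned'] / 50:.1%}")
print(f"Fisher's exact p-value: {p_value:.3f}")

# With samples this small, the result is fragile: shift a single churned
# customer from the control bucket to the test bucket and re-check.
shifted = [[7, 43], [2, 48]]
_, p_shifted = fisher_exact(shifted, alternative="two-sided")
print(f"p-value after shifting one churned customer: {p_shifted:.3f}")
```

Whatever the exact p-value comes out to, moving a single churned customer from one bucket to the other changes it considerably, which is exactly why Bob didn't want to read much into one month of sign-ups.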

Bob agreed; it was probably best to help push the team forward. He didn’t want to be a stickler about what looked like a promising direction.

Then Friday’s all-hands arrived. The whole company gathered to review the past week’s progress. The chief revenue officer was presenting the week’s new revenue numbers when he paused and finished up.

“…and huge gains in our product division. Our highest value customers, with increased efforts from our marketing and customer success teams, have now reduced monthly churn to just two percent!!”

Company-wide applause. Everyone was amazed. And there sat Bob, completely dumbfounded by what had actually happened.

For anyone who has worked in analytics, Bob’s story is not exactly surprising. It may be hard to believe, but every number, statistic, and metric produced by a company can be utterly and completely wrong. The only number exempt from this is revenue, which gets audited by accounting and finance.

How do other metrics permeate an entire organization with such inaccuracy? In Bob's case it was a game of telephone: Bob says something to the customer success manager, the customer success manager says something a little rosier to the director of marketing, and the director of marketing mentions in their one-on-one with the CRO, "Oh, and you'll never guess what we did last week..."

But many times it's also an effect of how organizational data culture shapes the way data scientists and analysts provide value. Companies can usually be data-driven in one of two ways.

One is using data to drive decisions on which features to build and what direction to take the product. The other is using data and metrics to validate the existing opinions of senior management. It is the difference between a top-down and a bottom-up approach to making product decisions. And if your company is making decisions top-down, why even employ data scientists?

Top-down decision making makes sense when you work for a <10 person startup where the CEO makes decisions based on customer research and intuition in the product space. When a small startup needs to release something like a new landing page, AB testing is forgone for lack of available engineering time. If there is AB testing, then you as the data scientist run a query, walk over to the CEO's desk, and tell them what the results are.

This does not happen at a company with >100 people. Now your CEO doesn’t look over every single detail. There’s a director with a Harvard MBA between you two now, and landing pages take more than one design-focused engineer hacking on it for two days straight; now there are sprint plans, coffee breaks, design team input, and front-end engineers whining about which framework to use.

And after you present a lack of statistical significance in converting more customers through the new landing page, the director might be in disbelief and won't buy your analysis. The director suggests you analyze it some different way to make it look like their team didn't just spend four weeks on something that won't increase revenue. And even if you still say there's no change, they might as well release the new landing page anyway, because it's not like conversion is going down?!

This kind of organizational culture, combined with office politics, can be the number one killer of data scientists' ability to influence actual product decisions, a.k.a. do their job! And many times this is because the insights gained from analytics affect other employees in the organization besides the data scientist responsible for the analysis. Data scientists are viewed as the de facto proof in the data, and if a product feature is actually not working due to a myriad of factors, all of the employees tied to that feature will be affected by the results. Bad metrics, therefore, get squashed like a whistleblower in a totalitarian state. No one wants to see bad metrics in a product, and no one wants to be the bearer of bad news.

A data-driven culture matters now more than ever given the power data scientists hold in surfacing insights. It makes more sense to rationalize a product change with intuition than to make up an interpretation of the numbers to prove success. What happens to a data science team's motivation when they are tasked with analyzing data toward a surefire outcome, no matter what the result?

At the end of the day, the chief revenue officer at Bob's company did realize the metrics were wrong when the company did not see a 500% increase in revenue the next month. And Bob himself found it curious that the director of marketing also quietly left two months later.

If your organization has hiring problems with data scientists, data analysts, or machine learning engineers, I would love to hear more about it.

Otherwise, if you're an interviewing data scientist, check out our data science interview prep newsletter at Interview Query.