How many A/B test variants has your email inbox been subjected to throughout Black Friday? Only a decade or two ago, marketers were mostly flying blind with their commercials and posters for sales and promotions.

Now the average e-commerce company probably runs one too many A/B tests on their customers. At Interview Query we tried something similar for our Black Friday promotion by offering 25% off and sending an email about it.

My goal was to run an experiment to learn how much email copy and content matter. We ran two different email campaigns for Black Friday, sent to two separate, randomly assigned user groups.

First we decided to test the subject line with an A and a B variant.

Subject line A: Get 25% off Interview Query
Subject line B: Save BIG on Data Science Interview Prep 😎

Which one do you think performed better?

Subject line B, by a margin so wide I didn’t even have to run it through an online significance calculator 😂.

Subject line B received a 30% open rate versus a 17% open rate for Subject line A. My hypothesis was that Subject line A was a standard control, literally just stating the promotion, while Subject line B was a bit more click-bait.
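
For context, here’s a minimal sketch of the significance check the gap made almost unnecessary: a two-proportion z-test. The 30% and 17% open rates are the real ones from the campaign, but the list size below is a hypothetical placeholder, not our actual numbers.

```python
# Sketch of a two-proportion z-test on the open rates.
# The rates come from the campaign; the list size is a made-up placeholder.
from statsmodels.stats.proportion import proportions_ztest

n_per_group = 5000                       # hypothetical sends per variant
opens = [int(0.30 * n_per_group),        # Subject line B: 30% open rate
         int(0.17 * n_per_group)]        # Subject line A: 17% open rate
sends = [n_per_group, n_per_group]

z_stat, p_value = proportions_ztest(count=opens, nobs=sends)
print(f"z = {z_stat:.2f}, p = {p_value:.2g}")  # a tiny p-value means the gap is clearly significant
```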

Next was the email body. My product marketing girlfriend took the liberty of designing some fun Canva illustrations while we were gathered around after Thanksgiving dinner. We created two designs and paired them with two different email bodies.

Black Friday Email A

The image above was Email A. The image below was Email B. Notice what I changed?

Black Friday Email B

Email A had a 10% click-through rate among the users who opened the email, while Email B had a 7% click-through rate. So Email A won, right?

Not so fast. To be honest, this is where post-turkey Jay fell into a common data science experimentation trap: laziness, compounded by ignoring interaction effects.

Since I was changing two different variables, the subject line and the email body, I probably should have created four variants (a full 2×2 factorial design) so I could measure how the subject line itself affected the click-through rate. A sketch of what that analysis could look like follows below.
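
Here’s a rough sketch of that full factorial analysis, assuming a per-recipient log we didn’t actually keep. The cell-level click rates below are made up purely for illustration; the interaction term is the piece our two-variant setup couldn’t measure.

```python
# Sketch of a 2x2 factorial analysis with a logistic regression.
# All rates and counts here are hypothetical; the real experiment
# only observed two of the four subject/body cells.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # hypothetical recipients per cell

# Hypothetical click-through rates for each (subject, body) cell.
cell_rates = {("A", "A"): 0.10, ("A", "B"): 0.08,
              ("B", "A"): 0.09, ("B", "B"): 0.07}

rows = []
for (subject, body), rate in cell_rates.items():
    rows.append(pd.DataFrame({
        "subject": subject,
        "body": body,
        "clicked": rng.binomial(1, rate, size=n),
    }))
df = pd.concat(rows, ignore_index=True)

# The C(subject):C(body) term is the interaction effect a two-variant test can't see.
model = smf.logit("clicked ~ C(subject) * C(body)", data=df).fit()
print(model.summary())
```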

Turns out this might have botched our experiment. Because Subject line B had the more enticing Save BIG line, maybe some customers had preconceived notions of how BIG the savings should be.

Ultimately what did we learn from this?

Email funnels and copywriting matter. We do a lot of subconscious decision making when we’re browsing our email inbox, which influences which emails we open and interact with.

I don’t buy the over-optimization of click-through rates, a.k.a. click-bait. At the end of the day we did see a 50% increase in unsubscribe rates from Test A, and even had one guy send us this, which is definitely what you don’t want to see in your customer support emails 🤣.

Angry customers

Granted, it’s still hard to take away concrete learnings from this. You would think that after a business has run enough A/B tests, it would start taking the winning variants and lessons and applying them to all users.

But behaviors do change over time, and no two campaigns are exactly the same. What works for Black Friday this year may not translate to Christmas sales later on. So it makes sense to always keep a holdback group for understanding the baseline of what can happen in email marketing; a quick sketch of one way to set that up follows below.
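
As an illustration, here’s one way a holdback group could be carved out before each send. The 10% holdback size, the hashing scheme, and the user IDs are all assumptions made for the sketch, not how our email tool actually works.

```python
# Sketch of deterministic holdback/variant assignment.
# The holdback size, hashing scheme, and user IDs are illustrative assumptions.
import hashlib

def assign_group(user_id: str, holdback_pct: float = 0.10) -> str:
    """Deterministically bucket a user so they land in the same group every campaign."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # stable pseudo-random value in [0, 1)
    if bucket < holdback_pct:
        return "holdback"  # never receives the promotional email; serves as the baseline
    # Split the remaining users evenly between the two variants.
    midpoint = holdback_pct + (1 - holdback_pct) / 2
    return "variant_a" if bucket < midpoint else "variant_b"

user_ids = ["user_001", "user_002", "user_003"]  # hypothetical IDs
print({uid: assign_group(uid) for uid in user_ids})
```

Hashing the user ID instead of drawing a random number each time keeps every person in the same bucket across sends, so the holdback stays a clean baseline from one campaign to the next.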

As for our learnings, we did get a lot of fun replies when we emailed out the results of our A/B test. When Black Friday comes around next year, you can bet we'll tell all our customers that they're part of a more robust A/B test again.