5 Common Mistakes To Avoid With Your Amazon Split Tests


Running a split test sounds easy, in theory. Focus on a metric to improve, formulate your hypothesis, create a variation or two, and start splitting traffic between your Control and Variant. Alas, if only it were that easy to run split tests on Amazon! Instead, there are many pitfalls that can trip up even an experienced digital marketer and cause split testing errors.

 

In this post, I will outline some of these common split testing errors and how you can avoid them. If you follow these points, then you can be assured that you are running accurate split tests. More importantly, you can be certain that any changes that occur in these tests are a result of your variant, and not dumb luck. Let's get started so we can call you an Amazon split testing guru in no time!

 

#1: Ending A Test Too Early (based on insignificant data)

 

If someone reports a lift in conversions based on a split test they are running, I will give a quick high five/thumbs up, then ask a simple question: how much data did you collect? The reply is often a sheepish smile, and then some answer that immediately tells me it was not enough data.

 

At which point I hand them a Shit Sandwich: putting the hard truth between two floppy pieces of tepid praise. Something along the lines of, “Great job on running tests! However, it looks like you didn’t collect enough data, therefore your test results are not conclusive yet. Here is how you can improve your test….”

 

In order to verify that you have generated enough data to get a conclusive winner from your test, you can simply plug your session and conversion data into a significance calculator like this one. You can quickly determine whether you have reached statistical significance. I recommend aiming for 95% or greater. At 95% significance, there is only a 5% probability that the lift you are seeing is down to chance or luck, so you can be confident that your variant is making a real impact.
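
If you are curious what such a calculator is doing under the hood, here is a minimal sketch in Python of the two-proportion z-test that most of them run. The session and order counts below are hypothetical, purely for illustration:

from math import sqrt, erf

def significance(sessions_a, orders_a, sessions_b, orders_b):
    # Conversion rates for the Control (a) and Variant (b)
    p_a = orders_a / sessions_a
    p_b = orders_b / sessions_b
    # Pooled rate under the null hypothesis of no real difference
    pooled = (orders_a + orders_b) / (sessions_a + sessions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z = abs(p_a - p_b) / se
    # Two-tailed p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value  # confidence that the difference is real

# Hypothetical test: 1,000 sessions per variant, 14 vs 23 orders
print(f"{significance(1000, 14, 1000, 23):.1%}")  # ~86.5% -- not there yet

Anything below 95% from a calculation like this means the honest answer is: keep collecting data.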

 

However, there is one small detail implicit in this data, which leads to another common mistake that I see….

 

#2: Not Accounting for Days of the Week

 

Even if your Best Seller Rank were in the top 100 and you had boatloads of traffic each day, reaching an accurate statistical significance is more than a simple numbers game. You need to ensure that you are accounting for variance in traffic across the days of the week. For this reason, it is highly recommended that each full split test on Amazon be run for at least two weeks. This allows each variant (assuming one control and one variation) to run on each day of the week, Monday through Sunday.

 

However, when you are running your split tests on Amazon (assuming you are running your tests manually), you cannot simply run one variation for a week and then switch your listing to the other variation. This is another common mistake that I see….

 

#3: Inconsistently Switching Between Variations

 

Here’s the little-known fact of split testing on your Amazon product: in order to get accurate data, you must switch between your control and variation every day at midnight PST for at least 14 days. This is because Amazon only reports search traffic data one whole day at a time, and its day starts at midnight PST (Amazon is based in Seattle, Washington).
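
To make that rotation concrete, here is a minimal sketch in Python of the daily alternation schedule, assuming the switch happens at midnight PST and using a hypothetical start date. Alternating daily over 14 days gives each variant exactly one of each weekday, Monday through Sunday:

from datetime import date, timedelta

def build_schedule(start, days=14):
    # Alternate daily; over 14 days each variant covers every weekday once
    return [(start + timedelta(days=i), "Control" if i % 2 == 0 else "Variant")
            for i in range(days)]

for day, listing in build_schedule(date(2024, 6, 3)):  # hypothetical start date
    print(day.strftime("%a %Y-%m-%d"), "->", listing)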

 


 

Understandably, not many people want to follow such an unforgiving schedule for two weeks straight. That is why automating split tests was the no-brainer reason we created Splitly, but that's beside the point!

 

Some people try to skirt around this problem by running one variation for one complete week, then switching to the other variation for the following week. In theory, it sounds like a logical solution. However, it is wrong. Very wrong! There are a number of variables that can skew the data, including your Amazon keyword ranking, your Best Seller Rank, your competitors' actions, seasonality, week-to-week unpredictability, holidays, and much more.

 

If you want to run a split test, and get meaningful takeaways, you need to do it properly. And that means changing your variations daily. Or just try Splitly’s free trial to automate the changes, and get accurate data.

 

#4: Testing on Low Traffic

 

Although I hate to admit it, we do discriminate at Splitly. You must have a listing that generates enough sales on a weekly basis in order to create your first test, and we will tell you straight away if you don't qualify. While no one likes to be rejected at the door, what matters most is that a test is run properly.

 

This common mistake is directly tied to the first point: drawing conclusions from limited data. Basically, make sure that you have "grown into" split testing, to the point where it would actually add value.

 

Imagine this little kid tells you he’s an amazing golfer. Yeah, he looks cute for sure, but who knows if he is really any good? He’s only 4 years old after all….

 

[Image: Tiger Woods as a young boy]

At that age, he doesn't have enough "data points" to piece together an analysis. But fast forward some years, and would I believe he's an amazing golfer? Hell yeah.

 

[Image: Tiger Woods as an adult]

 

My point is, if you haven't accumulated enough data points, you can't make a fair judgment on how to optimize a listing. If you don't have at least 5-10 sales on average every day, the data from a split test will be inconclusive. Focus on improving your sessions and conversions before you focus on optimizing the nuances of your listing.
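
To put rough numbers on that, here is a back-of-the-envelope sketch using the standard two-proportion sample-size estimate (roughly 95% confidence and 80% power). The baseline conversion rate, target lift, and traffic figures are all hypothetical:

def sessions_needed(base_rate, lift, z_alpha=1.96, z_beta=0.84):
    # Sessions per variant for roughly 95% confidence and 80% power
    delta = base_rate * lift            # absolute difference to detect
    p_avg = base_rate * (1 + lift / 2)  # average of the two conversion rates
    return 2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / delta ** 2

# Hypothetical listing: 10% conversion rate, aiming to detect a 20% lift
n = sessions_needed(base_rate=0.10, lift=0.20)
print(f"~{n:,.0f} sessions per variant")
print(f"~{2 * n / 100:.0f} days of testing at 100 sessions/day")

At 100 sessions a day (roughly 10 sales at a 10% conversion rate), that is already more than two months of testing. With a fraction of that traffic, the same test would drag on for the better part of a year, which is exactly why we set a minimum sales bar.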

 

#5: Running Multiple Tests, aka Multivariate Tests on Amazon

 

Multivariate testing is a technique for testing multiple variables at once. So for example, suppose you were selling life jackets for dogs, like this:

 

[Image: product listing for a dog life jacket]

 

It looks like there is room for more keywords in the Product Title, there could be more images, and the price may be higher than the existing competition's.

 

An eager-beaver split tester would change all three things at once and run that against the Control. This would lead to inconclusive data. Due to the limitations of running tests on Amazon, it would be impossible to pinpoint exactly which change was driving the difference between control and variation.
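
A quick back-of-the-envelope illustration: with three elements that can each be changed or left alone, there are 2^3 = 8 possible listings, yet an A/B test on Amazon only ever compares two of them. The element names below are hypothetical:

from itertools import product

elements = ["title_keywords", "extra_images", "lower_price"]  # hypothetical changes
combos = list(product([False, True], repeat=len(elements)))
print(f"{len(combos)} possible listings, but an A/B test compares only 2:")
print("Control:", dict(zip(elements, combos[0])))   # nothing changed
print("Variant:", dict(zip(elements, combos[-1])))  # everything changed at once

Any one of the three changes (or any combination of them) could be responsible for the difference you measure, and nothing in the data can tell you which.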

 

Instead, the data-driven split tester would choose one high-impact element to test and focus on just that. After that test runs its course, they would create an entirely new test focusing on another element. And onwards, deliberately and with purpose.

 

IN CONCLUSION

 

Those are five common mistakes that I see often, and as you can see, they are easily avoidable. Once you start formulating tests and seeing significant lifts in conversion rate and profits, you will have tasted the sweet addiction of split testing! If you want to get started now with Splitly's free trial, we can ensure that you are running accurate tests with takeaways you can be confident about.


Andrew Browne
Code Wizard at Splitly

Software developer and Amazon seller from Ireland. Constantly searching for travel adventures, greasy burgers, and all things tech.
