Posts

How to Get More Loyal Customers and Upsells with Lifecycle Emails


This is the third and final article in the series on how to increase your open, click and conversion rates with machine learning.

Lifecycle emails are extremely useful to increase retention, reduce churn, create a more loyal customer base and have more successful cross- and upsells.

In this piece, we first take a quick look at two concepts, the buyer journey and the customer lifecycle: how they relate to each other, what a lifecycle email is, and why you need one. Then we move on to two example emails, each with two variations, and finally you can see how to test those variations to ensure the best possible email chain is sent to your customers.

The Buyer Journey & The Customer Lifecycle

Marketing jargon can be overwhelming: there is the buyer journey and there is the customer lifecycle. What exactly do they mean? Do they overlap? Where do the two connect?

First, let’s clear this up.

Buyer Journey


The buyer journey involves the complete experience a person goes through in connection with your company. It consists of three main stages: awareness, consideration, and decision.

The awareness stage is about getting the word out about your company and the solution you provide. So at this stage, you should focus on the type of problem you solve from your prospects’ point of view.

At the consideration stage, your prospects know that they have a problem and are looking for a solution to it. This is the stage where you’re directly competing with other solutions on the market.

The decision stage is when your prospect has considered the possible options and is ready to make the purchase. At this stage, your company's brand will have a huge effect on customer acquisition.

Customer Lifecycle


The customer lifecycle is a process run by the marketing and sales teams. It's made up of 7 well-defined stages:

Subscriber: people who subscribed to your newsletter or other weekly or monthly content.
Lead: someone who showed interest in your content or product, and provided additional information (more than an email address) about themselves.
Marketing Qualified Lead (MQL): leads who are ready to receive marketing messages.
Sales Qualified Lead (SQL): leads who are ready to receive sales messages.
Opportunity: people who are ready to buy.
Customer: people who purchased your product.
Evangelist: loyal customers who spread the word about your product.

How do they relate to each other?

The first contact between your company and a prospect is the point when he or she starts on the buyer journey. The person knows that your company exists, now you need to provide him or her with educational content to engage and guide them along the journey. You’ll need to create a seamless experience all the way to the purchase.

In the early stages, people give you their email address in exchange for your content. And if they decide that you are a reliable source of information, they may give you more of their contact info, for example to test your product in a trial version. With this additional data, you can place the buyer in your customer lifecycle.

The main difference between the two concepts is that the buyer journey includes the steps taken by the buyer. So it’s the buyer’s perspective. On the other hand, the customer lifecycle includes the steps the marketer or salesperson needs to take in the right stage of the journey. So the difference essentially is the perspective.

What’s a lifecycle email?


Lifecycle emails are sent to people who have already paid for your product or service. Their purpose is to keep the customer satisfied, engaged and loyal, but also to drive cross- and upsells. Onboarding emails for trial users can be classified as lifecycle emails too, because a percentage of these users will end up paying for your service, so you can think of onboarding as a "pre-purchase" stage of the lifecycle.

Why should you use lifecycle emails?

For small and medium-sized companies, especially SaaS businesses, talking to your customers and engaging them is essential. You don't want your engineers spending their time developing features that none of your customers want. It's even worse if your shiny new feature doesn't solve any of your users' problems. Also, it's generally easier to get repeat business from your existing customers. This is why communication with your customer base is crucial.

They chose your product for a reason. Most likely because they felt your solution is the answer to their prayers. That’s great, but if you’re in an industry where competition is high, you have to work hard to keep your customer base. This is where lifecycle emails come in handy.

Lifecycle emails are used at the final stages of the customer lifecycle, but their success depends heavily on your previous communication with the person. How did they find your site? What type of messages did they receive during the onboarding process? These have a huge impact on the outcome of your lifecycle emails. Set the right tone and keep engaging with your customers consistently.

Now let’s have a look at the example lifecycle emails and how to get the most out of them.

Example lifecycle drip campaign with a couple of email templates

In this example, we take a look at 2 lifecycle emails, each with 2 variations. The goal we set for the whole workflow is to increase cross- and upsells, so the goal is fulfilled if a customer buys a bigger plan or another product.

The first email is tied to the user's behavior in the application. When a user finishes creating his or her fifth drip campaign but has never used the AI-powered workflow optimization feature, we send out an email to draw their attention to it. This is useful for the customer because:

  1. they might have checked out the feature when they started using the software but forgot all about it or
  2. they didn’t use the feature at all.

Either way, since they reached a milestone (created 5 automation workflows), the feature can help them work more effectively.

[Email screenshots: variation A and variation B of the feature-highlight email]

These are the variations of the first lifecycle email example. There are common characteristics like the greeting, the last sentence, and the CTA.

However, variation A is a shorter email: just an accomplishment and a brief sentence about the feature we want to draw their attention to. This could work because it's short, but the CTA and the link to the video provide additional information for users who want to dive deeper.

In variation B, the second sentence, which highlights the benefit, is longer. There's also a brief teaser about the feature that gives users extra motivation to give it a try.

Another common aspect in the emails is the P.S. which is a great way to drive engagement and establish trust.

The second is an upsell offer that is sent out when the customer is getting close to their contact limit. This is beneficial for both parties: a customer on the verge of hitting the contact limit doesn't want their emails to go undelivered because they are a couple of bucks short, and for the company it's an organic upsell opportunity, so it would be a shame not to take advantage of it, wouldn't it?

[Email screenshots: variation A and variation B of the upsell email]

In this case, both emails are about the same in length, structure, and tone. The text itself is different, though, and it's hard to predict which of them will resonate more with the audience. Another difference is that in the second email the last sentence points clearly to the CTA.

Let’s continue by taking a look at how to test these emails with traditional split testing and multi-armed bandit + machine learning algorithm powered optimization.

Split Testing vs. AI-powered optimization

In the first article of this series, we discussed that split testing automated emails manually is extremely technical and nearly impossible. But to make this article actionable, I'll explain how to test these emails both with traditional split testing and with AI-powered optimization.

You start by setting the sending weights of the email variations to 50% for variation A and 50% for variation B. After this step, you have to wait a lot, because a high number of people must go through your drip campaign before you have significant results for both emails.

Let’s say enough people went through your automation, and now the significance level is sufficient for both emails. Now it’s time to adjust the sending weights accordingly.

So you select a winner for both emails by looking at the conversion rates of the users in your drip campaign: you take the people who purchased, then check which combination of emails they received. Imagine how long it would take to do this for each conversion. By the time you're done, the data you collected may be outdated, and even if it isn't, there's still a chance that your results are accidental, which means sending out lower-performing emails.
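If you do go the manual route, the "significant result" check is a standard two-proportion z-test. Here's a minimal sketch in Python; the conversion counts below are made up purely for illustration and don't come from any real campaign:

```python
# A minimal sketch of the significance check you would otherwise run by hand.
# The conversion counts are invented for illustration only.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, sent_a, conv_b, sent_b):
    """Return the z statistic and two-sided p-value for conversion rates of A vs. B."""
    p_a, p_b = conv_a / sent_a, conv_b / sent_b
    pooled = (conv_a + conv_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return z, p_value

# Hypothetical results after enough people went through the campaign:
z, p = two_proportion_z_test(conv_a=48, sent_a=1000, conv_b=71, sent_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 roughly corresponds to >95% significance
```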


There's a way to avoid all this time-consuming, mind-numbing work and still end up with higher conversion: testing with a multi-armed bandit machine learning algorithm that optimizes your whole drip campaign for you.

All you need to do is set up the variations for both emails and let the algorithm do its job. It will determine which email variation chain results in a higher conversion rate. The algorithm starts out with equal sending weights, but as your customers move through the campaign it continuously learns and refines the sending weights based on your conversion rates.

So it sends out more of the emails that result in a conversion and fewer of the ones that don't. This cycle is repeated over and over to make sure that the best possible chain of emails is sent out based on the goals you set up.
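To make the idea concrete, here's a minimal Thompson-sampling sketch of that loop. This is not Automizy's actual algorithm, just one common way a multi-armed bandit can turn conversion feedback into sending weights; the email names are the hypothetical ones from this example:

```python
# A minimal Thompson-sampling sketch of "send more of what converts".
# Illustration only -- not the real optimization engine.
import random

class EmailBandit:
    def __init__(self, variations=("A", "B")):
        # Track sends and conversions per variation; used as a Beta(successes+1, failures+1) belief.
        self.stats = {v: {"conversions": 0, "sends": 0} for v in variations}

    def choose(self):
        # Sample a plausible conversion rate for each variation and send the best one.
        samples = {
            v: random.betavariate(s["conversions"] + 1, s["sends"] - s["conversions"] + 1)
            for v, s in self.stats.items()
        }
        return max(samples, key=samples.get)

    def record(self, variation, converted):
        self.stats[variation]["sends"] += 1
        if converted:
            self.stats[variation]["conversions"] += 1

# One bandit per email in the workflow (the feature email and the upsell email above).
workflow = {"feature_email": EmailBandit(), "upsell_email": EmailBandit()}
picked = {name: bandit.choose() for name, bandit in workflow.items()}
# ...later, once we know whether this customer upgraded, feed the result back in:
for name, variation in picked.items():
    workflow[name].record(variation, converted=True)
```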

Capping it off

Engaging with your users is essential to increase retention, reduce churn, create a more loyal customer base and increase cross- and upsells. Lifecycle emails are a great way to increase engagement, especially if you hit the right tone, which is not as easy as it sounds when it comes to communicating at scale. You have to experiment and find out what works best for your specific audience.

That's all folks, this is the final article of the three-part series on increasing your open, click and conversion rates with machine learning.


Send emails at the right time, to the right people with the right message. Get started now!



How to Increase the Click Rates of Onboarding Emails with Machine Learning


In last week's article, we discussed the benefits and drawbacks of split testing and the advantages of using AI-powered workflow optimization to achieve higher conversion. To make it relevant, I included an example email course to show you how you can increase the open rate of the emails in your drip campaigns. This article continues with one more use case: how to increase the click rates of your onboarding emails with machine learning. So let's move along the buyer journey from the consideration to the decision stage.

What’s User Onboarding?

As with the email course, the purpose of an onboarding email sequence is education. But in this example, it's more of a "software-based" education: the function of user onboarding is to make it easier for people to get started with your product.

By the end of the onboarding, they should be able to achieve their desired outcome.

Why do you need it?


Your onboarding will have a big effect on your customer acquisition. If you screw this up you will lose lots of valuable customers.

Although I said it’s more of a “software-based education”, it’s not just a technical onboarding. It’s not just about you showing your prospects the product. It’s more than that! You have to provide value to them as fast as you can!

First of all, you should ask yourself: how can I give my leads “initial success”? Not from your point of view, but from the prospects’.

So the purpose of onboarding is to make them reach their initial success, which means you won because the next logical step will be paying for your service!

Tips for Successful Onboarding Emails

Let’s take a look at some tips you should follow to have a successful user onboarding process. You can even look at these tips as a checklist. As you build your onboarding you need to take all of these into consideration, so just tick the points you completed.

  1. Send the first email immediately
    Approximately 75% of users expect a welcome email after signing up. In addition, 90% of users go cold after 1 hour.
  2. Provide clear instructions
    Help your new trial user to understand the next logical step in order to reach his “success milestone” and see the benefits he or she can gain.
  3. Send behavior-based emails
    This is a great way to help the users who are stuck somewhere in your software. However, you have to use events that will trigger your emails according to your users’ actions.
  4. Add value with your emails and be benefit focused
    According to newbreedmarketing, the content you provide will have a big effect on user activation.
  5. Extend the trial period for engaged users
    This is a nice gesture that helps you convert the more engaged trial users. But don’t offer too many days!
  6. Take advantage of lead scoring
    Scoring user actions during the trial will help you identify who should be converted before the trial ends. You can segment your users according to their level of engagement and then send different offers to each segment (see the sketch after this list).
  7. Ask questions
    To increase free trial conversion rates you need to understand the reason why users are leaving your product. This is why utilizing exit interviews with users who are not active can be extremely useful.
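As a rough illustration of tip 6, here is a toy lead-scoring sketch. The event names and point values are invented for the example; score whatever actions matter in your own product:

```python
# A toy lead-scoring sketch for tip 6. Events and point values are hypothetical.
TRIAL_EVENT_POINTS = {
    "imported_contacts": 20,
    "created_campaign": 30,
    "activated_campaign": 40,
    "opened_email": 2,
    "logged_in": 5,
}

def lead_score(events):
    """Sum points for every trial event a user triggered."""
    return sum(TRIAL_EVENT_POINTS.get(event, 0) for event in events)

def engagement_segment(score):
    """Bucket users so each segment can get a different offer before the trial ends."""
    if score >= 80:
        return "hot"   # e.g. send the upgrade offer early
    if score >= 30:
        return "warm"  # e.g. extend the trial, nudge toward the success milestone
    return "cold"      # e.g. ask why they got stuck (the exit interview from tip 7)

user_events = ["logged_in", "imported_contacts", "created_campaign"]
print(engagement_segment(lead_score(user_events)))  # -> "warm"
```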

Types of Onboarding Emails


According to Lincoln Murphy, the king of user onboarding, there are 5 types of email you can send to your trial users:

  1. Tutorial: specific educational emails that focus only on your product.
  2. Educational: these focus on the benefits of your product and how the users can take advantage of them.
  3. Aspirational: case studies, example use cases of your product.
  4. Transactional: reports, status updates, invoices.
  5. Personal: sent from an employee of your company, usually with a goal of one-to-one, interactive communication.

A successful user onboarding email sequence should contain a combination of these types of emails.

Now let’s have a look at a couple of onboarding emails and how AI-powered optimization can help you convert more trial users.

Onboarding Email Examples

In this example, there are 3 emails each with an A and a B variation.

These are the most important onboarding emails in my opinion: the welcome email, a sales email midway through the trial, and another sales email at the end of the trial. The goal is to increase the overall click-through rate of these emails.

The first is a welcome email that a prospect receives after registering for the trial version of our service.

[Email screenshots: variation A and variation B of the welcome email]

A variation:

This is a short, two-sentence transactional email. It's sent right after the prospect signs up, and it's gibberish-free: just a little gratitude and an offer for a free consultation.

B variation:

This is a longer transactional email with the same basic elements as the A version. In addition, there's some social proof (2nd sentence) and an offer (3rd sentence) to reach out to us via email or in-app message. This offer is crucial because it's a sort of call for engagement: it shows that you care about your customers, and if you can provide solutions to their problems they are more likely to stick with you. Another additional feature of this email is the P.S., which provides more valuable educational content for the prospect, and that is just great during onboarding!

As I previously mentioned, it's extremely important to send out the first email immediately because users will expect it. And because of its importance, you need to nail this one.

This is the message that will set the tone for your future communication with the user.   

Both of the examples could work: variation A is a straightforward, no-BS email, so the recipient is likely to read it, while variation B can establish trust in the service and gives more value.

The second one is sent out when a user has 6 days left of the trial.

[Email screenshots: variation A and variation B of the mid-trial email]

A variation:

This one is a short email with a discount offer. If it pops up in the inbox of an engaged trial user, the offer should motivate the prospect to pay for the service much sooner. Voilà, conversion achieved!

B variation:

This version lacks the offer presented in variation A, but it is benefit-focused, so less-engaged users can learn what advantages the software provides. And even the more engaged users might not be aware of these advantages; for them, this can be enough to make a purchase.

When the user receives this email, he or she has been using our software for 8 days. So depending on the depth of use, they are more or less familiar with the product and have hopefully figured out whether it's the right one for them or not.

The third is sent out when the user reaches the end of the trial period.

[Email screenshots: variation A and variation B of the trial-end email]

A variation:

Once again this is a short email notifying the user that the trial is over. It reminds the prospect of the benefit and points to the payment page. If the user is satisfied, this email can be enough to result in a conversion.

B variation:

This is a longer version of the previous email, written in a personal tone. It shows compassion and highlights the main benefits of the product in more detail. The other difference is that the text doesn't point directly to the CTA as in variation A. It's a more subtle approach, but a click still feels like the next logical step after reading the last sentence.

So we looked at the example emails, now let’s see how to select the better-performing ones with traditional split testing and AI-powered workflow optimization.

Split testing vs. AI-powered workflow optimization

As we discussed in the previous article, split testing automated emails manually is too technical and nearly impossible. So if you decide to do it, brace yourself!

The main problem with split testing emails in a drip campaign is that the exploration and exploitation parts are two distinct stages, which in this case results in a very long test: to get significant results, you have to wait until enough prospects have received the emails. And the length of the exploration period means you end up using outdated information in the exploitation stage.

So if you want to test the emails in the example with manual split testing, you start by setting the sending weights for each email to 50% for variation A and 50% for variation B.

After the significance level is sufficient for each email, you should adjust the sending weights accordingly.

You have to select the winner with the higher click rate and send that version to everyone from then on. Not to mention that, until you decide on the winning variation, you've already lost the people who received the low-performing versions.

It might not seem too long or hard, but if you try it out you might hit some speed bumps. In addition, because of the nature of split testing, even if your results are significant there's a chance that they are accidental, which means sending out emails that perform worse.

But there's a way to avoid all of these struggles: testing with a multi-armed bandit machine learning algorithm that optimizes your whole workflow for you.

With AI-powered workflow optimization, all you have to do is set up the email variations and let the machine learning algorithm do its job. It determines which email variation chain results in a higher click-through (and consequently higher conversion) rate.

The algorithm starts out just like you would manually, with a 50% sending weight for variation A and variation B of each email. But then the magic happens: as your prospects flow through your drip campaign, it continuously learns and simultaneously fine-tunes the sending weights based on the click rates.

As a result, it sends out more and more of the better-performing emails and fewer of the poorly performing ones. This cycle goes on and on, so the algorithm makes sure to always send the best possible email sequence based on the goals you've set up.
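For the curious, here's a rough epsilon-greedy sketch of that "send more of the better performers" loop, keeping separate statistics for each of the three onboarding emails. Again, this is an illustration of the idea under assumed names and parameters, not the actual optimization engine:

```python
# A rough epsilon-greedy sketch: mostly exploit the variation with the best
# observed click rate, and keep exploring a small fraction of the time.
import random

EPSILON = 0.1  # assumed exploration share

EMAILS = ("welcome", "mid_trial", "trial_end")
clicks = {email: {"A": 0, "B": 0} for email in EMAILS}
sends = {email: {"A": 0, "B": 0} for email in EMAILS}

def pick_variation(email):
    if random.random() < EPSILON or sum(sends[email].values()) == 0:
        return random.choice(["A", "B"])  # explore
    rates = {v: clicks[email][v] / max(sends[email][v], 1) for v in ("A", "B")}
    return max(rates, key=rates.get)      # exploit the better click rate so far

def record_send(email, variation, clicked):
    sends[email][variation] += 1
    if clicked:
        clicks[email][variation] += 1
```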

Wrapping it up

Whether you're just starting to develop your user onboarding or you think you need to improve the process, start by testing variations of the most crucial emails in your workflow. Testing is a tool that will help you get more engaged users, and higher engagement drives more sales, so it's an opportunity you should never miss!

Next week we'll publish the upcoming article, which explains how to build a loyal customer base and increase upsells by using and optimizing lifecycle emails.

If you have any questions, feel free to share them in the comments, and if you want to give AI-powered optimization a shot, it's now in public beta!


Take your user onboarding to the next level. Get started now!



How to Increase Email Course Open Rate with Machine Learning

I think we can all agree that split testing is an effective method to find out what works best and get more out of your existing traffic. It's extremely useful since it can be applied to a number of different things: subject lines and content of emails, landing pages, home pages, creatives for ads, and the list goes on. You can also find articles and case studies on split testing for almost anything, except drip campaigns. That's because experimenting with automated emails takes a lot of time and preparation; it's nearly impossible and too technical with most marketing automation tools. In this article, you'll find an easy-to-understand explanation of AI-powered optimization, with an example email course where the subject lines are tested and optimized. Increase the open rate of your drip campaigns with machine learning!

The difference between Split Testing and the Multi-Armed Bandit algorithm

Split Testing

When you split test something you have an A, a B, and a C version of a website, a landing page, an email or ad.

You send 33% of your traffic to the A, 33% to the B, and 33% to the C variation. You run this test until you reach a significant result: when the sample size is big enough to prove that the result is not accidental. For example, 97% significance means that we can be 97% sure that our results are valid, so there's just a 3% chance that they are accidental. This is the exploration, or learning, period.

In split testing, you run the experiment long enough to have a winner among the variations that is statistically significant.

After you have a winner, the exploitation (earning) part starts: you send 100% of your contacts to the variation that won.

So you have a significant sample, you learn from it. Then you implement the things you learned – this is the process of split testing.

The problem is that the whole process has two distinct stages. Generally, the exploration period is so long that it becomes hard and time-consuming to adapt to changes in your campaigns. The changes you finally make might also no longer be relevant, because they're based on data collected over a long period.

Multi-Armed Bandit Testing

While split testing keeps the exploration and exploitation phases distinct, the multi-armed bandit algorithm mixes them, resulting in adaptive changes because it is continuously blending exploration and exploitation.

The two main advantages that multi-armed bandit testing has over split testing are that:

  • the transition between exploration and exploitation is smoother,
  • and it wastes far less time and fewer resources on data collection (exploration) than split testing does.

You will earn and learn. As Matt Gershoff said, during the exploration phase, you learn but it has a cost: you lose opportunities. But if you can decrease the cost of exploration by immediate implementation of your learnings into exploitation, you’ll have higher ROI.

So the purpose of multi-armed bandit algorithms, in this case, can be summed up in one goal: to reduce the chance of sending traffic to lower-performing variations as fast as possible. This way the cost of experimentation decreases.

Multi-Armed Bandit Testing + Machine Learning = high volume of automated split tests in your drip campaigns

The multi-armed bandit algorithm combined with machine learning opens up a world of new possibilities: automated drip campaign optimization that saves time and helps marketers achieve higher open, click and goal conversion rates.

This type of testing can't be performed with traditional split testing, because humans simply can't conduct multi-armed bandit tests by hand: it requires countless repetitive rounds of learning and exploiting.

In a nutshell: AI-powered optimization in drip campaigns means that the machine learning algorithm is continuously testing and optimizing your drip campaigns to achieve a higher conversion rate based on the goal you previously picked. So all you have to do is:

  • set up a drip campaign,
  • create A/B/N versions of your emails,
  • choose the time period in which you want your emails to be sent,
  • and a goal.

From this point on, the system will test your drip campaign as more and more leads go through it, without any need for manual intervention (a hypothetical configuration sketch follows the list of goals below). Conversion goals can be:

  • basic email metrics like: open and click rates,
  • form submissions on a landing page,
  • custom events like registration for trial or payment for service
  • and so on.
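Conceptually, everything the algorithm needs fits into a small configuration like the hypothetical sketch below. None of these field names come from a real API; they simply show how little you have to specify:

```python
# A hypothetical representation of the setup steps above (trigger, variations,
# send timing, goal). Field names are illustrative, not a real API.
drip_campaign = {
    "trigger": "form_subscription",
    "goal": {"type": "custom_event", "event": "trial_registration"},
    "emails": [
        {
            "name": "lesson_1",
            "send_after_days": 1,
            "variations": {"A": "subject_line_a", "B": "subject_line_b", "C": "subject_line_c"},
        },
        {
            "name": "lesson_2",
            "send_after_days": 2,
            "variations": {"A": "subject_line_a", "B": "subject_line_b", "C": "subject_line_c"},
        },
        # ...one entry per email in the course
    ],
}
```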

Now that we've covered how the AI works, let's have a look at the email course and how to achieve a higher open rate with machine learning:

Higher open rate for your email course

Email courses are one of the best ways to generate quality leads. You provide educational content, so the leads will be more qualified and more engaged after they finish your course. Not to mention that an email course is quite easy to set up.

For a successful course, you’ll need a well-constructed landing page with proper wording and a value proposition. The order of the emails must be well thought out, and each email should have a clear structure.

Start with a summary of the course. Then continue with the actual lessons, and the last lesson should include either a special offer for the participants or just your sales pitch.

It's crucial that the email course is related to your product. That way, it can solve the challenges your subscribers are seeking answers to.

With a well-designed, problem-focused strategy it’s easier to convert a course subscriber to a trial user and a trial user to a paying customer at the end of the day.

But, as with most things, it isn't universally applicable: a longer sales cycle, for example, might require some additional lead nurturing too.

Let’s look at the example: a 6-day email course consisting of 6 emails. For each email, 3 different subject lines are tested. The topic of the course is digital marketing fundamentals, and it contains the following lessons:

  • Welcome email and summary of the email course
  • Introduction to digital marketing
  • SEO Fundamentals
  • Email Marketing Basics
  • Social Media Marketing Essentials
  • Analytical Foundations

People can sign up for the course on a landing page where they have to fill out a form with their name and email address. After opting in they receive a welcome email. In this email, you thank them for signing up and give them an outline of the course so they know what to expect. The next day they receive the first lesson, the following day the second, and so on.

Now let’s have a look at some possible subject lines for the 6 emails in the campaign:

However, the problem with email courses is that the open rates of the emails almost always decrease as the course goes on. But since you provide valuable educational content in every one of these emails, you want your recipients to open them. Otherwise, if barely anyone opens the last emails in the campaign, which usually include your sales pitch, you've wasted the time and effort you put into the course.

So, in this case, the goal is to increase the overall open rate throughout your drip campaign. You’ve set up a killer drip campaign for your email course where the trigger is a form subscription. If you want to have a higher open rate, you have to conduct split tests for the email subject lines in your drip campaign.

First, let's have a look at how to conduct split tests for all these subject lines manually.

The first problem is that it takes a lot of time until enough people go through a certain drip campaign for the results of split testing to be significant. So the exploration period is very long.

The second challenge is executing split tests on 6 emails with 3 subject line variations each. Can you imagine how long that would take? Theoretically it's possible, but it consumes way too much time. And even if you end up with results, the time and resources you put into testing just aren't worth it, especially because there's an easier way.

That easier way is testing with a multi-armed bandit solution that is supercharged with a machine learning algorithm.

All you have to do is write the emails and set up the subject line variations. After that, it's time to let the algorithm do its job: in this case, to determine which combination of subject lines and sending times results in the highest possible open rate. The algorithm starts out with a sending weight of 33.3% for each subject line per email. But as the subscribers move through the drip campaign, it adjusts the sending weights based on the open rates. This process continues to ensure the best possible message sequence. Here's a short tutorial video on how to set this up:

 

So there you go: with this algorithm, multiple days of work can be done in about an hour. In addition, it will continue to optimize your campaign in an endless cycle of testing. So increase your email open rate with machine learning and scale your business!

More to come…

This article is the first of a three-part series where you can learn about the potential use cases of AI-powered workflow optimization. In the next one, you can take a look at how the multi-armed bandit can increase your conversions during user onboarding. After that, we show you how to build a more loyal customer base and increase upsells by testing the body copy of your lifecycle emails.

Stay tuned!


Send emails at the right time, to the right people with the right message. Get started now!




How to optimize your drip campaigns with multi-armed bandit testing?

[Image: slot machines, the "one-armed bandits" behind the name]

Experimenting with your content is a must if you are a marketer. But there are cases when the well-known split testing methodology doesn't provide satisfying results, and that's when you need multi-armed bandit testing. Here you'll learn what it is, how it's different, when it's better and how you can use it in practice.

What is multi-armed bandit testing?

The multi-armed bandit problem has a famous analogy in probability theory: a gambler goes into a casino and sees a row of one-armed bandits (slot machines). The gambler naturally wants to play on the machines that have the biggest chance of winning.

So he needs to know in which order, and how many times, he should play each machine. Each machine pays out rewards according to its own probability distribution, and the gambler's goal is to maximize his overall reward by the end of the day.

The same problem applies to optimization in marketing: you have several assumptions that need to be tested at the same time. So you "pull the levers" (CTAs, images, copy, headlines, subject lines, etc.), experiment with them and try to maximize your "rewards" (conversion rate).
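If you like to see things in code, here's a tiny simulation of the gambler's dilemma using the well-known UCB1 strategy. The win probabilities are invented for the example; the point is only that the player gradually concentrates his pulls on the machine that pays out best:

```python
# A tiny UCB1 simulation of the multi-armed bandit problem: machines with hidden
# win probabilities, and a player who balances trying every machine (exploration)
# with favouring the one that has paid out best so far (exploitation).
import math
import random

hidden_win_prob = [0.02, 0.05, 0.11]  # unknown to the player, invented for the example
pulls = [0, 0, 0]
wins = [0, 0, 0]

def ucb1_choice(total_pulls):
    for arm in range(len(pulls)):
        if pulls[arm] == 0:
            return arm  # try every machine at least once
    scores = [
        wins[arm] / pulls[arm] + math.sqrt(2 * math.log(total_pulls) / pulls[arm])
        for arm in range(len(pulls))
    ]
    return scores.index(max(scores))

for t in range(1, 5001):
    arm = ucb1_choice(t)
    pulls[arm] += 1
    wins[arm] += random.random() < hidden_win_prob[arm]

print(pulls)  # most pulls end up on the machine with the highest hidden win rate
```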

A/B testing or multi-armed bandit testing?

The great debate

There is a debate between A/B test and multi-armed bandit believers. Some argue that A/B testing is better for this or that reason; others try to prove the same for multi-armed bandit algorithms.

Well, the funny thing is that both solutions come from academic methodologies, and therefore both of them require an academic approach. Academics try to disprove their own statements, but when the debate reached the marketing communities, they tried to disprove each other's approaches instead.

They use academic language and arguments, but with a distinctly non-academic style of argumentation… Funny.

So, to be honest: both solutions are really good, just for different use cases.

How does it work?

To understand how it works, you first need to understand simple A/B testing; then you'll be able to see how bandit testing differs. It involves a little mathematics and statistics, but it can be easily understood without numbers and formulas.

A/B testing

[Diagram: the distinct exploration and exploitation phases of A/B testing]

In split testing, you have an A and a B version of a landing page, a website, an email or ad.

You send 50% of your traffic to the A and 50% to the B variation. You run this test until you reach a significant result: when the sample is large enough to prove that the higher conversion rate of one version is not accidental. For example, 97% significance means that one of the versions performs better with a probability of 97%.

In A/B testing you run the experiment until you have a winning version that is significant enough. This is the period where you randomly assign 50% of your visitors to the A or B version. This is what statisticians call "pure exploration".

After you have a winner, the "pure exploitation" part starts. In this stage, you send 100% of your visitors to the winning version.

So you have a sample, you learn from it, then you implement what you learned – this is the sequence.

But the problem with this is that exploration and exploitation are two distinct stages.

Multi-armed bandit testing

[Diagram: exploration and exploitation mixed together in multi-armed bandit testing]

Multi-armed bandit testing is different from A/B/N testing because it says:

  • the transition between exploration and exploitation should be smoother
  • and the data collection (exploration) phase wastes too much time and resources

While split testing keeps the pure exploration and pure exploitation phases strictly separate, multi-armed bandit testing tries to mix the two. This means it adaptively changes and continuously blends exploration and exploitation.

As a result, you earn while you learn. As Matt Gershoff said, during the exploration phase you learn, but it has a cost: you lose opportunities. But if you can decrease the cost of exploration by immediately implementing your learnings into exploitation, you'll have a higher ROI.

So multi-armed bandit algorithms work toward only one goal: to reduce the chance of sending traffic to lower-performing variations as fast as possible. This way the cost of experimentation decreases.

In which situations does bandit testing come in handy for marketers, and why?

According to Matt Gershoff, if you don't really want to spend time on understanding the results but only on running the optimization, this is the way to go. But that's just one angle; there are 2 different use cases where multi-armed bandit testing will perform better.

Short tests

If you have a campaign that lasts only a few days or just a week, you can't conduct A/B tests, because by the time you'd reach significance, the campaign is already over.

Therefore you have to mix the exploration and exploitation periods to get a higher conversion rate. This is why you have to use multi-armed bandit algorithms.

A very good example is a promotion that is just 1 day long, such as Black Friday. If you ran an A/B test, you wouldn't have enough time: it might eventually reach significance, but you'd have lost more than half of the day by then.

A multi-armed bandit algorithm, on the other hand, will adaptively send more traffic to the better-performing variations (as Stephen Pavlovich has pointed out).

Long tests

There are 3 main examples where multi-armed bandit testing will outperform split testing.

Scaling your drip campaigns

When you have a process that doesn't need big changes and runs continuously, over and over again, you need multi-armed bandit testing. This way you don't even have to look at it: the ongoing test will continuously increase your conversion rate.

Chains of conversion points

The best example, again, is a drip campaign with 3 emails that share the same goal (which can truly be anything). Every touch point has a chance to convert, and the best way to optimize the overall conversion rate of your drip campaign is to automatically send the best 3 versions of the emails.

Content and ad distribution

When you have two types of users, one behaving in a common way and one behaving a little differently, you can use multi-armed bandit testing to serve the common users correctly while still experimenting with the others.

How can you implement bandit testing for your drip campaigns?

If you have a drip campaign that needs continuous improvement, unfortunately there is no solution to do that right now. But Automizy will be the first to implement machine learning to make marketers' lives easier in its upcoming feature.

The closed beta will be released this year, in September or October; if you want to be among the first few hundred people who can try it out, subscribe here now.
In addition, you can get a quick overview of how it will work in practice. Check the video below!

[Video: AI for drip campaign optimization preview, https://youtu.be/AgG8w_yxaVw]