AB Testing for websites
70.7K views
Suggested by Eyal Katz

8 Mind Blowing Hacks to Leverage Your CPC Ad Earnings

Instead of making you go through ad network boot camp like a good campaign manager would, we’ve distilled the 8 best CPC hacks that will get you optimizing like the pros.
Scooped by Vincent Demay

Statistical Significance Calculator


is it significant?

Vincent Demay's insight:

via @Guillaume Decugis
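
A calculator like this typically runs a two-proportion significance test under the hood. As a rough, hedged sketch (the calculator's exact method isn't stated here, and the counts below are invented for illustration):

```python
# Minimal sketch of a pooled two-proportion z-test, the kind of check a
# significance calculator performs. All counts are invented for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}, significant at 5%: {p < 0.05}")
```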

Suggested by Fred

19 Obvious A/B Tests You Should Run on Your Website

The real problem with CRO is in knowing how to start and what to test. This post covers the latter.
Scooped by Vincent Demay

What is a t-test?

This video explains the purpose of t-tests, how they work, and how to interpret the results.
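
As a small companion to the video, here is a minimal two-sample t-test in Python with SciPy; the data are invented purely to show how the statistic and p-value are read.

```python
# Minimal two-sample t-test with SciPy; the data are invented for illustration.
from scipy import stats

# e.g. time-on-page (seconds) observed under two page variants
variant_a = [34, 41, 29, 38, 45, 33, 40, 36]
variant_b = [44, 39, 48, 51, 42, 46, 50, 43]

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value means the observed difference in means is unlikely under chance alone.
```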
Scooped by Vincent Demay

A Guide to Measuring Split Tests in Google Analytics & Other Tools


Dear readers – Long time, no see. For those of you who don’t know, I have recently become a freelance analytics and optimisation consultant. Fortunately I’ve been keeping busy :). Don’t you love working in digital right now? Today’s guide covers an activity I perform almost daily: Measuring A/B and MVT split tests within Analytics

Vincent Demay's insight:

It seems Google Analytics is one of the best tools for measuring split tests.
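
As a rough sketch of the kind of calculation the guide automates: once each session is labelled with its test variation (for example via a custom dimension exported from Google Analytics), conversion rate per variation reduces to a group-by. The column names below are assumptions for illustration, not taken from the article.

```python
# Hypothetical sketch: conversion rate per test variation from exported
# analytics data. The column names ("variation", "converted") are assumed.
import pandas as pd

sessions = pd.DataFrame({
    "variation": ["A", "A", "A", "B", "B", "B"],
    "converted": [0, 1, 0, 1, 1, 0],
})

summary = sessions.groupby("variation")["converted"].agg(
    sessions="count", conversions="sum", rate="mean"
)
print(summary)
```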

Scooped by Vincent Demay

12 A/B Split Testing Mistakes I See Businesses Make All The Time


A/B testing is fun. With so many easy-to-use tools around, anyone can (and should) do it. However, there's actually more to it than just setting up a test.

Scooped by Vincent Demay

Revenue Tracking for A/B testing: exciting new feature in Visual Website Optimizer


We are extremely proud to release a brand new feature in Visual Website Optimizer: Revenue Tracking. This is a significant new development for our product because it means that in addition to tracking conversion rate (for multiple goals such as clicks on links, visits to pages, form submissions, engagement, etc.), you can now track various revenue metrics as well (including revenue per visitor, total revenue, average order value, etc.). Why should you track revenue in your split tests?
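
The revenue metrics listed above reduce to simple ratios once orders and revenue are attributed to each variation. A sketch with invented numbers (not VWO's code):

```python
# Illustrative arithmetic for the revenue metrics mentioned above.
# All numbers are invented; VWO computes these from tracked orders.
visitors = {"A": 5000, "B": 5000}
orders = {"A": 150, "B": 180}
revenue = {"A": 9000.0, "B": 11700.0}   # total revenue per variation

for v in visitors:
    rpv = revenue[v] / visitors[v]      # revenue per visitor
    aov = revenue[v] / orders[v]        # average order value
    print(f"{v}: total={revenue[v]:.0f}, revenue/visitor={rpv:.2f}, AOV={aov:.2f}")
```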

Scooped by Philippe Gassmann

Prevent Analysis Paralysis By Avoiding Pointless A/B Tests | TechCrunch


At RJMetrics, we believe in data-driven decisions, which means we do a lot of testing. However, one of the most important lessons we’ve learned is this: Not all tests are worth running.

In a data-driven organization, it’s very tempting to say things like “let’s settle this argument about changing the button font with an A/B test!” Yes, you certainly could do that. And you would likely (eventually) declare a winner. However, you will also have squandered precious resources in search of the answer to a bike shed question. Testing is good, but not all tests are. Conserve your resources. Stop running stupid tests.

Scooped by Vincent Demay

Bayesian Bandit Explorer

A demo of a work-in-progress tool for exploring the performance of "Bayesian Bandits" for solving the Multi-Armed Bandit problem

Scooped by Philippe Gassmann

Why Multi-armed Bandit algorithms are superior to A/B testing


In a recent post, a company selling A/B testing services made the claim that A/B testing is superior to bandit algorithms. They do make a compelling case that A/B testing is superior to one particular not very good bandit algorithm, because that particular algorithm does not take into account statistical significance.

However, there are bandit algorithms that account for statistical significance.


Scooped by Philippe Gassmann

Multi-armed bandit experiments


This article describes the statistical engine behind Google Analytics Content Experiments. Google Analytics uses a multi-armed bandit approach to managing online experiments.

Scooped by Vincent Demay

SEO Split-Testing: How to A/B Test Changes for Google


Google is increasingly relying on machine learning and artificial intelligence, making ranking factors harder to understand, less predictable, and less uniform across keywords.

Scooped by Fred

A/A Testing: How I increased conversions 300% by doing absolutely nothing

There are few things wantrepreneurs (all due respect, I'm a recovering wantrepreneur myself) love to talk about more than running A/B tests. The belief seems…
Fred's insight:

A must-read if you're considering running A/B tests and think it's easy.

Scooped by Philippe Gassmann

Tests statistiques élémentaires

Philippe Gassmann's insight:

In French, but very instructive ;)

Scooped by Philippe Gassmann

A Formula for Bayesian A/B Testing


This is a Bayesian formula. Bayesian statistics are useful in experimental contexts because you can stop a test whenever you please and the results will still be valid. (In other words, it is immune to the “peeking” problem described in my previous article, How Not To Run an A/B Test.) Usually, Bayesian formulas must be computed with sophisticated numerical techniques, but on occasion the math works out and you can say something interesting with a simple analytic formula. This is one of those occasions.
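
The article's contribution is the exact analytic formula for Pr(p_B > p_A) under Beta posteriors; as a hedged companion, the same quantity can be approximated by Monte Carlo sampling. The priors and counts below are illustrative, not taken from the article.

```python
# Monte Carlo approximation of Pr(p_B > p_A) under Beta posteriors.
# The article derives an exact analytic formula; this sketch only
# approximates the same quantity, with illustrative counts.
import numpy as np

rng = np.random.default_rng(0)

# Beta(1, 1) priors updated with observed (conversions, non-conversions)
a_conv, a_n = 120, 2400
b_conv, b_n = 150, 2400

samples_a = rng.beta(1 + a_conv, 1 + a_n - a_conv, size=100_000)
samples_b = rng.beta(1 + b_conv, 1 + b_n - b_conv, size=100_000)

print(f"Pr(p_B > p_A) ≈ {(samples_b > samples_a).mean():.3f}")
```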

Scooped by Vincent Demay

The Definitive Guide To Conversion Optimization


The chances are you are going to make mistakes, and some of them can cost you thousands of dollars. So before you start testing, make sure you avoid these mistakes.

Scooped by Philippe Gassmann

A/B Test Calculator: Measuring Usability


N-1 Two Proportion test for comparing independent proportions for small and large sample sizes
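
For reference, the N−1 two-proportion test differs from the ordinary pooled z-test only by a factor of sqrt((N−1)/N), where N is the combined sample size, which matters mainly for small samples. A hedged sketch with invented counts (the calculator's exact implementation may differ):

```python
# Hedged sketch of the "N-1" two-proportion test: the pooled z statistic
# scaled by sqrt((N - 1) / N). Counts are invented for illustration.
from math import sqrt
from scipy.stats import norm

def n_minus_1_two_proportion(conv_a, n_a, conv_b, n_b):
    n = n_a + n_b
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / n
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se * sqrt((n - 1) / n)   # the N-1 adjustment
    return z, 2 * (1 - norm.cdf(abs(z)))

print(n_minus_1_two_proportion(conv_a=12, n_a=80, conv_b=21, n_b=85))
```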

Scooped by Philippe Gassmann

How Not To Run An A/B Test


If you run A/B tests on your website and regularly check ongoing experiments for significant results, you might be falling prey to what statisticians call repeated significance testing errors. As a result, even though your dashboard says a result is statistically significant, there’s a good chance that it’s actually insignificant. This note explains why.
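
To make the repeated-significance-testing problem concrete, here is a small simulation (an illustration, not code from the article): both variants share the identical true conversion rate, yet checking for p < 0.05 after every batch of visitors and stopping at the first "significant" result declares a winner far more often than the nominal 5%.

```python
# Simulation of the peeking problem: two identical variants, significance
# checked after every batch. Illustrative sketch, not code from the article.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_rate, batch, n_batches, n_runs = 0.05, 200, 50, 1000
false_positives = 0

for _ in range(n_runs):
    conv, n = np.zeros(2), np.zeros(2)
    for _ in range(n_batches):
        conv += rng.binomial(batch, true_rate, size=2)
        n += batch
        p_pool = conv.sum() / n.sum()
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / n[0] + 1 / n[1]))
        z = (conv[1] / n[1] - conv[0] / n[0]) / se
        if 2 * (1 - norm.cdf(abs(z))) < 0.05:   # "peek" and stop on significance
            false_positives += 1
            break

print(f"'significant' winner declared in {false_positives / n_runs:.0%} of A/A runs")
```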

Scooped by Philippe Gassmann

Bayesian Bandits - optimizing click throughs with statistics


Great news! A murder victim has been found. No slow news day today! The story is already written, now a title needs to be selected. The clever reporter who wrote the story has come up with two potential titles - "Murder victim found in adult entertainment venue" and "Headless Body found in Topless Bar". (The latter title is one I've shamelessly stolen from the NY Daily News.) Once upon a time, deciding which title to run was a matter for a news editor to decide. Those days are now over - the geeks now rule the earth. Title selection is now primarily an algorithmic problem, not an editorial one.

One common approach is to display both potential versions of the title on the homepage or news feed, and measure the Click Through Rate (CTR) of each version of the title. At some point, when the measured CTR for one title exceeds that of the other title, you'll switch to the one with the higher CTR for all users. Algorithms for solving this problem are called bandit algorithms.

In this blog post I'll describe one of my favorite bandit algorithms, the Bayesian Bandit, and show why it is an excellent method to use for problems which give us more information than typical bandit algorithms.

Unless you are already familiar with Bayesian statistics and beta distributions, I strongly recommend reading the previous blog post. That post provides much introductory material, and I'll depend on it heavily.
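
In the spirit of the post (which builds on Beta distributions), a Bayesian bandit via Thompson sampling can be sketched in a few lines. The click-through rates below are invented, and the post's own implementation may differ in detail.

```python
# Minimal Thompson-sampling sketch for the two-headline example above.
# The true click-through rates are invented; the post's code may differ.
import numpy as np

rng = np.random.default_rng(42)
true_ctr = [0.04, 0.05]            # hidden CTR of each headline
clicks = np.zeros(2)               # observed clicks per headline
impressions = np.zeros(2)          # observed impressions per headline

for _ in range(10_000):
    # Draw a plausible CTR for each headline from its Beta posterior,
    # then show the headline that looks best in this draw.
    theta = rng.beta(1 + clicks, 1 + impressions - clicks)
    arm = int(np.argmax(theta))
    clicks[arm] += rng.random() < true_ctr[arm]
    impressions[arm] += 1

print("impressions per headline:", impressions)
print("estimated CTRs:", np.round(clicks / impressions, 4))
```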

Scooped by Philippe Gassmann

Agile A/B testing with Bayesian Statistics and Python


Here at BayesianWitch, we’re huge proponents of A/B testing. However, we’ve discovered that the normal method of A/B testing often confuses people. Some questions we commonly get include: How do you know how many samples to use? What P-value cutoff should you pick? 5%? 10%? How do you know what to choose for the null hypothesis? Mastering these concepts is the most critical part of A/B testing, and yet we find them very unintuitive for the average hacker and marketer to use on a day-to-day basis.

Another issue with standard A/B testing methods is the question of when to end the test. If you are dead certain version A is better than B, it’s great to end a test early. But standard statistical techniques don’t allow you to do this - once you gather the data and run the test, it’s over. For a deeper explanation, I strongly recommend reading Evan Miller’s seminal article How Not to Run an A/B Test. (Side note - Ben Tilly has a frequentist scheme for addressing this problem.)

These two factors make it difficult for real-life marketers to keep using the standard, frequentist technique over the long term. It makes sense to choose a method that is more intellectually intuitive as well as one that has the flexibility to end a test when the conclusion is obvious. The technique described in the rest of this post is a Bayesian technique that avoids these issues. Further, this testing model works extremely well, particularly in many business situations where time is critical.

I also want to emphasize that the method I’m describing is not untested. A version of it with slightly less accurate math was used at a large news site where I previously worked. A non-technical manager used this script to make changes to email newsletters. Each change provided a marginal (0.5-2%) increase in the conversion rate. But over a few months, he had nearly doubled the open rate and click through rate of the emails through implementing each change.
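
One common Bayesian stopping rule in this spirit (an illustration, not necessarily BayesianWitch's exact method) is to track the expected loss of shipping the currently leading variant and stop once it falls below a small "threshold of caring". A sketch with invented counts:

```python
# Illustrative Bayesian stopping rule, not necessarily BayesianWitch's exact
# method: stop when the expected loss of shipping the leader is negligible.
import numpy as np

def expected_losses(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pa = rng.beta(1 + conv_a, 1 + n_a - conv_a, draws)
    pb = rng.beta(1 + conv_b, 1 + n_b - conv_b, draws)
    loss_if_pick_a = np.maximum(pb - pa, 0).mean()   # expected regret of shipping A
    loss_if_pick_b = np.maximum(pa - pb, 0).mean()   # expected regret of shipping B
    return loss_if_pick_a, loss_if_pick_b

loss_a, loss_b = expected_losses(conv_a=40, n_a=1000, conv_b=62, n_b=1000)
threshold = 0.001   # stop caring about differences smaller than 0.1 points
if min(loss_a, loss_b) < threshold:
    print("stop and ship", "A" if loss_a < loss_b else "B")
else:
    print("keep the test running")
```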

Scooped by Philippe Gassmann

20 lines of code that will beat A/B testing every time


A/B testing is used far too often, for something that performs so badly. It is defective by design: Segment users into two groups. Show the A group the old, tried and true stuff. Show the B group the new whiz-bang design with the bigger buttons and slightly different copy. After a while, take a look at the stats and figure out which group presses the button more often. Sounds good, right? The problem is staring you in the face. It is the same dilemma faced by researchers administering drug studies. During drug trials, you can only give half the patients the life saving treatment. The others get sugar water. If the treatment works, group B lost out. This sacrifice is made to get good data. But it doesn't have to be this way.
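
The "20 lines" boil down to a simple explore/exploit (epsilon-greedy) strategy: show the best-performing option most of the time and a random option the rest. A minimal sketch of that idea, not the post's exact code:

```python
# Epsilon-greedy sketch of the idea behind the post: exploit the option with
# the best observed rate 90% of the time, explore at random 10% of the time.
# Illustrative only, not the post's exact code.
import random

counts = {"A": 1, "B": 1}    # impressions (start at 1 to avoid division by zero)
rewards = {"A": 0, "B": 0}   # clicks / conversions
EPSILON = 0.10

def choose_option():
    if random.random() < EPSILON:
        return random.choice(list(counts))                      # explore
    return max(counts, key=lambda o: rewards[o] / counts[o])    # exploit

def record_result(option, converted):
    counts[option] += 1
    rewards[option] += int(converted)

# usage: option = choose_option(); ...show it to the visitor...; record_result(option, clicked)
```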

Fred's comment, October 7, 2014 9:45 AM
Ah yes, this article really makes it clear!
Vincent Demay's comment, October 7, 2014 10:02 AM
like @Fred