vlad malik

UX+Analytics Blog

Video: Test duration and what-if analysis with ABStats reverse power calculator
1 month ago

In this video, I want to show you a different kind of sample size calculator for your A/B tests. It works backwards compared to traditional calculators, and you might find that more intuitive. The basic premise of this approach is that we mostly don’t know what effect size to expect, so we make projections for a range of outcomes.

When and why to peek at A/B tests
1 month ago

Every other day or so, you should peek at how your tests are doing. Here are some guidelines for doing that without skewing your data:

It’s raining and snowing p-values
6 months ago

Sometimes the outcome of a simulation is a work of art. This is a plot of 1000 trials of an A/A test. The tips of the lines are p-values, while the dark area at the bottom is the effect size. I liked the pattern, so I turned it into 2 artworks: Rain and Snow:

Visual patterns for A/B test structure
8 months ago

Once you figure out what you want to test, you need to define what you’re going to measure and where. In this post, I will introduce my preferred terms for describing test structure (things like test conditions, goals, and pages), and I’ll use a visual language to cover the basic patterns. Here’s an example:

Animated headlines convey more meaning
8 months ago

Animation can convey multiple messages in the same space, turn a headline into a centerpiece that draws attention to your message, set up a transition or interaction that pulls the user in or guides their gaze, or use the time dimension to convey extra information, like mood.

Simulations are faster and more intuitive than calculations
9 months ago

I use simulations all the time to help answer questions like: Is this outcome possible? What outcomes are most likely? How much data is enough?
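Questions like these can be phrased as a quick Monte Carlo run. Below is a minimal sketch of an A/A simulation, assuming a two-proportion z-test; the function names are illustrative, not part of ABStats:

```javascript
// Sketch of a Monte Carlo A/A simulation (no real difference between
// variants): count how often a two-proportion z-test flags "significance".
// That rate should land near alpha (~5% here) -- the false positive rate.
function binomial(n, p) {
  let hits = 0;
  for (let i = 0; i < n; i++) if (Math.random() < p) hits++;
  return hits;
}

function simulateAA(trials, visitors, rate) {
  let falsePositives = 0;
  for (let t = 0; t < trials; t++) {
    const a = binomial(visitors, rate) / visitors;
    const b = binomial(visitors, rate) / visitors;
    const pooled = (a + b) / 2;
    const se = Math.sqrt((2 * pooled * (1 - pooled)) / visitors);
    if (se > 0 && Math.abs(a - b) / se > 1.96) falsePositives++; // two-sided 95%
  }
  return falsePositives / trials;
}

console.log(simulateAA(1000, 5000, 0.05)); // typically close to 0.05
```

Running this a few times makes the point viscerally: even with no real effect, roughly one test in twenty comes out "significant".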

Easy A/B test simulations using Excel / Google Docs
9 months ago

Simulations can answer key questions without painful calculations. If you haven’t gotten around to learning R, here’s an A/B test simulator for Excel or Google Docs. It does a power calculation, so you can see the impact that baseline conversion rate, effect size, and traffic have on your chances of detecting an effect. It gives the effect size and p-value for each outcome.
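For readers who prefer code over a spreadsheet, the same power calculation can be sketched by simulation. This is an illustration assuming a two-proportion z-test, not the spreadsheet's actual formulas:

```javascript
// Sketch of a power calculation by simulation: run many A/B tests with a
// known true lift and count how often a two-proportion z-test detects it.
function binomial(n, p) {
  let hits = 0;
  for (let i = 0; i < n; i++) if (Math.random() < p) hits++;
  return hits;
}

function simulatePower(trials, visitors, baseRate, lift) {
  let detected = 0;
  for (let t = 0; t < trials; t++) {
    const a = binomial(visitors, baseRate) / visitors;
    const b = binomial(visitors, baseRate * (1 + lift)) / visitors;
    const pooled = (a + b) / 2;
    const se = Math.sqrt((2 * pooled * (1 - pooled)) / visitors);
    if (se > 0 && Math.abs(a - b) / se > 1.96) detected++; // two-sided 95%
  }
  return detected / trials;
}

// 5% baseline, 20% relative lift, 5,000 visitors per variant:
console.log(simulatePower(1000, 5000, 0.05, 0.2));
```

Changing the baseline rate, lift, or traffic and re-running shows directly how each one moves your chance of detecting the effect.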

Business hypotheses improve A/B tests
11 months ago

A hypothesis is an explanation of why something is the way it is.

6 statistical reasons to avoid AAB, ABB, AABB tests
1 year ago

Running an AABB-type test is a poor man’s way of reducing false positives. That’s what alpha is for: directly adjusting alpha, the false positive risk, is more precise and clearer.
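The arithmetic behind that trade-off can be sketched like this, using a standard Šidák-style correction as one illustrative way to adjust alpha (the post doesn't prescribe this specific method):

```javascript
// Each extra independent comparison inflates the chance of at least one
// false positive. Adjusting alpha per test restores the intended overall risk.
const familywiseRisk = (alphaPerTest, comparisons) =>
  1 - Math.pow(1 - alphaPerTest, comparisons);

const sidakAdjusted = (targetRisk, comparisons) =>
  1 - Math.pow(1 - targetRisk, 1 / comparisons);

// Three comparisons at alpha = 0.05 each:
console.log(familywiseRisk(0.05, 3).toFixed(3)); // 0.143 overall risk
console.log(sidakAdjusted(0.05, 3).toFixed(4));  // 0.0170 per test to keep 5% overall
```

The numbers make the point: duplicating arms is an indirect way of buying down risk, while setting alpha does it exactly.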

Pack your headings with content
1 year ago

A visitor should learn something just by reading your headings. Write informative headings. Don’t save content for later. You can elicit curiosity by revealing key insights to the right audience rather than by obfuscating.

5 ways to calculate A/B test confidence with 1 line of JavaScript
1 year ago

Besides the standard statistical significance test and confidence intervals, you can try zero-overlap confidence intervals and two kinds of post-hoc power analysis. All are easy to do in ABStats.
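For reference, the standard significance test really can fit in one line. The sketch below is a plain two-proportion z-score, written as a single expression; it is an illustration of the idea, not ABStats itself:

```javascript
// Two-proportion z-score for an A/B test in one expression.
// (cA, nA) = conversions and visitors for A; (cB, nB) for B.
const zScore = (cA, nA, cB, nB) =>
  (cB / nB - cA / nA) /
  Math.sqrt(((cA + cB) / (nA + nB)) * (1 - (cA + cB) / (nA + nB)) * (1 / nA + 1 / nB));

// |z| > 1.96 corresponds to 95% two-sided confidence.
console.log(zScore(500, 10000, 580, 10000).toFixed(2)); // 2.50
```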

I Have An A/B Test Winner, So Why Can’t I See The Lift?
1 year ago

In the town of Perfectville, a company ran a winning A/B test with a 20% lift. A few weeks after implementing the winner, they checked their daily conversions data:

@VladMalik is an interaction designer based in Toronto.
I enjoy breath-hold diving, weight-lifting, and chopping wood. I am vegan.

© 2015 License for all content: Attribution not required. No commercial use.