vlad malik

things digital

Specificity is the term I use for the fit between a UI and a user's task. Over-specificity means offloading processing onto users instead of onto the intelligence of the system. If we increase a system's intelligence and understand real-life use cases more precisely, we can reduce the specificity of a UI and improve the user's experience.

Another example: I hit the "Call" icon on my phone to redial the last number. Saving the number and accurately inferring my intent to redial it is the intelligence of the system. Tapping one icon instead of ten digits is reduced specificity.

Deep accessibility means providing an equivalent experience, a true translation using different means. Examples include an annotated document, a chart, and driving directions.

Guidelines For Instant Search Filters
5 months ago

Vivareal.com.br recently ran a test where they removed the Apply Filter button and instead updated the search results instantly (screenshot from goodui.org/evidence):

Top aspects of a site to optimize
5 months ago

I sat down recently to brainstorm common optimization ideas for a course. This is not a complete or definitive list, but these are the first things that come to mind when I look at a page. I'd be hard pressed to find any site that couldn't brush up in all these areas.

Patterns For Optimizing Checkouts – Flow
5 months ago


Tips for A/B testing with low traffic
1 year ago

After reading this post, you will be able to say whether your test has “low traffic”, decide if A/B testing is worth it, and know what to do if you decide to A/B test.

Crowd-sourcing A/B test predictions
1 year ago

The collective guess of a crowd can be more accurate than that of an individual. For example, over a hundred years ago, statistician Francis Galton noticed that a crowd of people could guess the weight of an ox with over 99% accuracy. In a more complex domain like politics, we know that expert predictions are terrible, but the average of their guesses is better.
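The averaging effect behind Galton's observation is easy to simulate. A minimal sketch, assuming guesses are noisy but unbiased around the true value (the specific weight and error spread here are illustrative, not Galton's raw data):

```python
import random
import statistics

random.seed(1)

TRUE_WEIGHT = 1198  # lbs, the dressed weight of the ox in Galton's account

# Model each guess as the true weight plus individual error.
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT

# How far off is a typical individual, on average?
typical_individual_error = statistics.mean(
    abs(g - TRUE_WEIGHT) / TRUE_WEIGHT for g in guesses
)

print(f"crowd error: {crowd_error:.2%}")
print(f"typical individual error: {typical_individual_error:.2%}")
```

The individual errors largely cancel when averaged, so the crowd estimate lands within a fraction of a percent while a typical individual is off by several percent.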

Video: What-if analysis using a reverse test duration calculator
2 years ago

In this video, I want to show you a different kind of sample size calculator for your A/B tests. It works backwards compared to how traditional calculators work, and you might find that more intuitive. The basic premise of this approach is that we mostly don't know what effect size to expect, so we make projections for a range of outcomes.
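The what-if idea can be sketched in a few lines: hold traffic fixed and project detection probability (power) across a range of hypothetical effect sizes. This is a normal-approximation sketch, not the calculator from the video, and the baseline, traffic, and lifts are made-up inputs:

```python
from math import sqrt
from statistics import NormalDist

def power(base_rate, lift, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = (p2 - p1) / se
    return 1 - NormalDist().cdf(z_crit - z_effect)

# Traffic is fixed; the effect size is the unknown we sweep over.
for lift in (0.02, 0.05, 0.10, 0.20):
    print(f"{lift:.0%} lift -> {power(0.03, lift, 10000):.0%} chance of detection")
```

Reading the output as a what-if table answers the reverse question: given the traffic I actually have, which effects could I realistically hope to detect?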

When and why to peek at A/B tests
2 years ago

Every other day or so you should peek at how your tests are doing. Here are some guidelines on doing that without skewing your data:
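Why peeking needs guidelines at all can be shown with a quick simulation: if you stop the moment p dips below 0.05, your false positive rate climbs well above 5% even when there is no real difference. A sketch, assuming binary conversions and a pooled z-test (the batch sizes and peek count are arbitrary):

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(7)

def p_value(conv_a, conv_b, n):
    """Two-sided two-proportion z-test with a pooled rate."""
    pooled = (conv_a + conv_b) / (2 * n)
    if pooled in (0, 1):
        return 1.0
    se = sqrt(2 * pooled * (1 - pooled) / n)
    z = abs(conv_a - conv_b) / (n * se)
    return 2 * (1 - NormalDist().cdf(z))

def aa_test(peeks=10, batch=500, rate=0.05):
    """One A/A test, stopping at the first peek where p < 0.05."""
    conv_a = conv_b = n = 0
    for _ in range(peeks):
        conv_a += sum(random.random() < rate for _ in range(batch))
        conv_b += sum(random.random() < rate for _ in range(batch))
        n += batch
        if p_value(conv_a, conv_b, n) < 0.05:
            return True  # declared a "winner" in an A/A test
    return False

trials = 400
fp_rate = sum(aa_test() for _ in range(trials)) / trials
print(f"false positive rate with stop-on-significance peeking: {fp_rate:.0%}")
```

Peeking itself is harmless; acting on every peek as a stopping rule is what inflates the error rate.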

It’s raining and snowing p-values
2 years ago

Sometimes the outcome of a simulation is a work of art. This is a plot of 1000 trials of an A/A test. The tips of the lines are p-values, while the dark area at the bottom is the effect size. I liked the pattern, so I turned it into 2 artworks: Rain and Snow:

Visual patterns for A/B test structure
2 years ago

Once you figure out what you want to test, you need to define what you’re going to measure and where. In this post, I will introduce my preferred terms for describing test structure (things like test conditions, goals, and pages), and I’ll use a visual language to cover the basic patterns. Here’s an example:

Animated headlines convey more meaning
2 years ago

Animation can convey multiple messages in the same space, turn a headline into a centerpiece that draws attention to your message, set up a transition or interaction that pulls the user in or guides their gaze, or use the time dimension to convey extra information, like mood.

Simulations are faster and more intuitive than calculations
2 years ago

I use simulations all the time to help answer questions like: Is this outcome possible? What outcomes are most likely? How much data is enough?

Easy A/B test simulations using Excel / Google Docs
2 years ago

Simulations can answer key questions without painful calculations. If you haven't gotten around to learning R, here's an A/B test simulator for Excel or Google Docs. It does a power calculation, so you can see the impact that baseline conversion rate, effect size, and traffic have on your chances of detecting an effect. It gives the effect size and p-value for each outcome.
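The same kind of empirical power calculation translates directly out of a spreadsheet. A minimal Python sketch (not the spreadsheet itself; the baseline, lift, and traffic numbers are illustrative): simulate many A/B outcomes, compute a p-value for each, and count how often the effect is detected.

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)

def simulate(base=0.04, lift=0.25, n=5000, trials=200, alpha=0.05):
    """Simulate A/B tests; return the fraction detected (empirical power)."""
    wins = 0
    for _ in range(trials):
        conv_a = sum(random.random() < base for _ in range(n))
        conv_b = sum(random.random() < base * (1 + lift) for _ in range(n))
        pa, pb = conv_a / n, conv_b / n
        pooled = (conv_a + conv_b) / (2 * n)
        se = sqrt(2 * pooled * (1 - pooled) / n)
        p = 2 * (1 - NormalDist().cdf(abs(pa - pb) / se)) if se else 1.0
        wins += p < alpha
    return wins / trials

power_est = simulate()
print(f"empirical power: {power_est:.0%}")
```

Changing any one input and rerunning shows its effect on your detection odds, which is the whole point of the spreadsheet version too.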

Business hypotheses improve A/B tests
3 years ago

A hypothesis is an explanation of why something is the way it is.

6 statistical reasons to avoid AAB, ABB, AABB tests
3 years ago

Running an AABB-type test is a poor man's way of reducing false positives. That's what alpha is for. Directly adjusting alpha, the false positive risk, is more precise and clearer.

Pack your headings with content
3 years ago

A visitor should learn something just by reading your headings. Write informative headings. Don’t save content for later. You can elicit curiosity by revealing key insights to the right audience rather than by obfuscating.

5 ways to calculate A/B test confidence with 1 line of JavaScript
3 years ago

Besides the standard statistical significance test and confidence intervals, you can try zero-overlap confidence intervals and two kinds of post-hoc power analysis. All are easy to do in ABStats.
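Two of these checks are compact enough to sketch here. This is a Python analogue (not the ABStats JavaScript): a standard two-proportion z-test squeezed into one expression, plus the stricter zero-overlap confidence interval check, on made-up results:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: (conversions, visitors) per condition.
a = (120, 2400)   # 5.0% conversion
b = (156, 2400)   # 6.5% conversion

# 1. Standard two-proportion z-test p-value (pooled rate), one expression:
p = 2 * (1 - NormalDist().cdf(abs(a[0]/a[1] - b[0]/b[1]) / sqrt(
    (a[0]+b[0])/(a[1]+b[1]) * (1 - (a[0]+b[0])/(a[1]+b[1])) * (1/a[1] + 1/b[1]))))

# 2. Zero-overlap check: do the two 95% confidence intervals overlap?
def ci(conv, n, z=1.96):
    rate = conv / n
    half = z * sqrt(rate * (1 - rate) / n)
    return rate - half, rate + half

no_overlap = ci(*a)[1] < ci(*b)[0] or ci(*b)[1] < ci(*a)[0]
print(f"p = {p:.4f}, zero-overlap: {no_overlap}")
```

With these numbers the z-test is significant while the intervals still overlap, which illustrates why zero-overlap is the more conservative criterion.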

I Have An A/B Test Winner, So Why Can’t I See The Lift?
3 years ago

In the town of Perfectville, a company ran a winning A/B test with a 20% lift. A few weeks after implementing the winner, they checked their daily conversions data:

@VladMalik is an interaction designer and musician based in Toronto.
I enjoy breath-hold diving, weight-lifting, and chopping wood. I am vegan.

Get Updates Every Month or Two

© 2018 No commercial use.