20+ User Research Techniques

Customer Interviews & Mental Modeling

The more I spoke to customers, the more I could speak their language and understand how they think and what might be holding them back. Here’s an example of something I learned from a customer interview, when I realized that what we were calling “indications” (rough insurance quotes) wasn’t the same as what they understood “indications” to be. There was apparent alignment, because both they and we meant “rough quotes”, but it turned out we used the concept differently. As a result, customers were approaching what we were offering with the wrong preconceptions:

| Mental Concept of “Indications” | Customer’s Mental Model | How It Worked In Our Product |
|---|---|---|
| Effort | Takes some effort – broker attempts to tailor assumptions or get preliminary info from the client | Focus on high-volume, low-margins means initially putting in minimal effort |
| Pricing | Fairly close to the final quote; client can use it to weed out some options | Ballpark to start the conversation; no point comparing options yet |
| Application Requirements | Enter as much as you know | AI-driven or generic assumptions for a faster response |
| Time Frame | Usually 14–30 days out from when you need final pricing | Allows starting the conversation much earlier, 60+ days out |
A thematic analysis of our customer’s mental model vs. how indications worked in our product

Conflicting Mental Models: Learning how brokers think and make trade-offs is a key part of my job. For example, brokers traditionally want to send insurance providers strong deals to gain preferential treatment in the future. If they send weak deals, it might damage their reputation. With APIs, they didn’t need to worry about this, because no humans are involved on the receiving end. But old habits may have caused people to avoid sending greater volumes through the email bot. Empathy means going beneath the surface to understand the often unspoken principles and values guiding people’s work.

Push/Pull or Forces Analysis

This is a great framework from the Jobs to Be Done field, which I often use to convey the motivations behind, and obstacles to, a decision as gathered from interviews. It’s a type of mental modeling. I can’t share a real example, but here’s a great illustration (Source):

Customer Profile

I ran a workshop with Sales and CX to understand how we could share and divide the responsibilities for customer discovery among the team. We came to better understand what CX requires from Sales and what Sales is able to get from the customers they speak to. I developed a framework for a “Customer Profile” with itemized criteria rated by importance and who is responsible. This eventually became a Google Doc form, and was later merged into our CRM system. The profile goes beyond being a research technique. It determines how the company triages prospects, where we invest our efforts, and how we close gaps in understanding that impact everyone, from sales to product. Here’s a key gap that was identified: when brokers do not have enough experience with cyber, they are less ideal customers for us. The framework I helped craft reminds the team what to ask them, what to train them on, and whether they are a good fit in the first place:

Shipping To Learn / MVP

One needs to get the smallest viable solution into the hands of customers. Feedback based on real-world usage of a basic solution or a prototype provides more clarity than rigorous user testing of a complex solution in hypothetical situations. Shipping to learn helps tame the complexity that often bogs down teams trying to find their product-market fit. This involved breaking features down to understand what is absolutely essential:

Wizard of Oz & Other Lean Prototyping

“The Wizard of Oz test essentially tries to fool potential customers into believing they are using a finished, automated offering, while it is still being run manually.” – BMI Lab

I used this technique successfully to test a Gen AI solution. This had several advantages:

  • AI was rapidly changing but not yet ready; this technique allowed us to get ahead of the technology limitations and envision the future
  • We could collect more real-world data from customers (who submitted real documents, made real queries), which we could then use to evaluate the AI internally, safely
  • Building AI tools was a new skill for the team; this technique kept us from pulling development resources off other critical projects
  • The customer actually benefited from the test – when we ended the experiment after 2 weeks, they were eager to get it back. This was great validation for us.

Remote Testing: Clickable Prototypes, A/B comparisons, and 5-Second Tests (via Maze)

There are great tools now that combine sharable prototypes with slides and surveys. This is especially useful for hallway testing remotely. I set up an experiment, present the context for the user, and then have them perform a set of tasks and answer some questions. The advantage of this over moderated testing is that they can do it on their own schedule. I would typically follow up with questions over email. The biggest thing I learned about running this kind of test is to test the test – roll out v1 to one or two people to catch any issues. You often need to add clarification, reword, or reorder steps before it’s ready to scale. This was particularly useful for quick Hallway Tests with SMEs.

Job Mapping Based on Strategyn’s “Outcome-Driven Innovation”

ODI breaks all processes down into basic phases, whether it’s surgery or selling a home. The activity’s outcome is defined in a standardized way, such as: “Minimize the time it takes to assemble a proposal to the insured”. This leads to a deep understanding of how the user completes their task. Customers can then be surveyed about their success with very well-defined tasks, which leads to a quantified understanding of wider market needs. Here’s a sample activity:

Activity: Locate all the inputs needed for an application. The diagram below shows the tasks to accomplish that (based on customer interviews).

 

Situational Personas

I introduced the personas concept to a Fintech client to help them shift away from a focus on features and technical capabilities. I started by asking the client to itemize their audiences in a spreadsheet. Then we identified some “top needs”, based on their calls with customers and industry subject matter experts. Later, on another project, I asked them to write first-person user stories.

Over time, the requirements started to include more emotional and situational context. At that point, I started to distill insights into personas that focus on the user’s situation:

Context builds actionable empathy and elicits tons of new product ideas and marketing strategies. What seemed like one type of user turns out to be 3 different types of users with distinct, nuanced needs.

Voice of The Customer

This is a type of research that focuses on capturing and sharing the words that customers say. On a more recent project, I defined target audiences through storytelling rather than demographics, e.g., a team large enough that poor intake clarity starts to cause task-assignment confusion. I base the quotes on real interviews, so they resonate in our marketing. I create a specific contrast between before and after, so it’s clear what our product solves (a UI pattern that performed well in A/B tests in the past). The goal is to write copy that resonates with customers, because it’s what they themselves would say.

This perspective can be applied to other content, like this feature matrix:


Pre-2019 examples:


Mining Customer Reviews

For one client, I distilled dozens of pages of user reviews into a concise report with 18 themes (Affinity Mapping). The themes described in plain English what customers thought, supported by 200+ actual customer quotes. Here are some themes for their Error Tracking software:

Mapping User Flow / User Journey

I have experience mapping complex processes. In this example, I facilitated a session with SMEs to understand the key steps of a health care process, including the role of the existing IT system (SMEs are usually senior/former users or people intimately acquainted with the process). I broke the process down into higher-level phases:

The next step would be to label this process with assumptions, problems, and opportunities (not shown).

Contextual Inquiry / Job Shadowing

I shadowed a health inspector and documented their process as multi-participant user flows:

I then analyzed these flows to figure out how the process could be improved through efficiencies and the use of technology like tablets.

Here’s me touring a coal mine client to gain a deeper appreciation for the business and user context:

Funnel Analysis

For an online experience, I often started with a rough funnel showing the customer’s journey, backed up with visit and conversion metrics. This let me identify points in the process where users gave up or digressed from the path to their goal.

Why isn’t this chart prettier? Because it’s not going into a magazine. It gets the job done, and then it’s not needed anymore.

This chart hinted at steps where the problem manifests. Similar to a sitemap, it can also help identify problems with the overall IA of a site (e.g., if there are lots of loop-backs or some pages aren’t being visited). Here, for example, I noticed that 50%+ of people dropped off at the Tour, which suggested removing or improving the Tour. I also saw there were many steps before actual enrollment. Some of my A/B tests tried different ways to move the Enrollment call-to-action higher in the process.
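To make the arithmetic behind a funnel chart concrete, here’s a minimal sketch in JavaScript. The step names and visit counts are invented for illustration; in practice the counts would come from your analytics tool.

```js
// Minimal sketch of the math behind a funnel chart.
// Step names and visit counts below are hypothetical.
const funnel = [
  { step: "Landing Page", visits: 10000 },
  { step: "Tour", visits: 4200 },
  { step: "Plan Selection", visits: 1900 },
  { step: "Enrollment", visits: 600 },
];

funnel.forEach((current, i) => {
  if (i === 0) {
    console.log(`${current.step}: ${current.visits} visits`);
    return;
  }
  const previous = funnel[i - 1];
  const conversion = (current.visits / previous.visits) * 100;
  const dropOff = 100 - conversion;
  console.log(
    `${previous.step} -> ${current.step}: ` +
      `${conversion.toFixed(1)}% continue, ${dropOff.toFixed(1)}% drop off`
  );
});
```

Even a plain console printout like this is enough to spot the step with the steepest drop-off and decide where to focus.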

Listening In On Calls & Reading Emails / Chat Logs

There’s nothing like getting the user’s actual words, hearing the tone in their voice, reading between the lines. I’ve listened in on call center conversations to identify common customer questions and persuasive techniques used by CSRs. I’ve also read chat logs and emails to better understand what customers care about.

Remote Monitoring

I’ve used a number of screen replay tools to observe users and identify potential usability issues. To help sort out useful videos with this technique, I tagged various user events using URLs and custom JavaScript. That way I could find and watch only the videos showing a specific problem, say, users who completed a form but didn’t click submit:

Linking screen replays and analytics (e.g., Google Analytics) is a useful and cheap way to do usability testing, because you can define a behavior KPI and then filter videos showing that behavior, e.g., someone going back and forth between screens or hesitating to click something.
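As a rough illustration of that event tagging, here’s a sketch of custom JavaScript that flags a session where a visitor filled in a form but never submitted it. The #signup-form selector and the replayTool.tag() call are hypothetical stand-ins; most session-replay tools expose some way to tag or label a recording so it can be filtered later.

```js
// Sketch: tag a "completed form but never submitted" session.
// The form ID and replayTool.tag() are hypothetical placeholders.
const form = document.querySelector("#signup-form");
let formCompleted = false;
let formSubmitted = false;

form.addEventListener("input", () => {
  // Consider the form "completed" once every required field has a value.
  const required = form.querySelectorAll("[required]");
  formCompleted = Array.from(required).every(
    (field) => field.value.trim() !== ""
  );
});

form.addEventListener("submit", () => {
  formSubmitted = true;
});

window.addEventListener("beforeunload", () => {
  if (formCompleted && !formSubmitted) {
    // Tag the session so these replays can be filtered later.
    window.replayTool?.tag("form_completed_not_submitted");
  }
});
```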

Heatmaps, Clicks, Attention Tracking

In one test, I used the heat map to corroborate our statistically strong finding. I tested a new home page with a simple gradual engagement element: a clear question with several buttons to choose from:

Our hypothesis was that gradual engagement would guide visitors better toward signing up. The heatmaps for the other variations showed clicks on top menu links and all over the page. In contrast, our winner showed clicks on just the buttons I added. There was virtually no distracted clicking elsewhere. This was reassuring. I also saw a pattern in the choices visitors were clicking.

Sometimes I wanted to test whether visitors were paying attention to non-clickable content, like sales copy. One of the tools I’ve used is a script I created to track scroll position. By making some assumptions, I could infer how much time users spent looking at specific content on the site. You can see a demo here.
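Here’s a minimal sketch of that kind of scroll-tracking script (not the one I actually used). It assumes each content block of interest is marked with a data-section attribute and that a /attention endpoint exists to receive the data; the 1-second sampling interval is an arbitrary choice.

```js
// Sketch: infer attention from scroll position.
// Assumes content blocks are marked like <div data-section="pricing">…</div>.
const timeInView = {};

setInterval(() => {
  document.querySelectorAll("[data-section]").forEach((el) => {
    const rect = el.getBoundingClientRect();
    const visible = rect.top < window.innerHeight && rect.bottom > 0;
    if (visible) {
      const name = el.dataset.section;
      timeInView[name] = (timeInView[name] || 0) + 1; // roughly, in seconds
    }
  });
}, 1000);

// Report the tallies when the user leaves (endpoint is hypothetical).
window.addEventListener("beforeunload", () => {
  navigator.sendBeacon("/attention", JSON.stringify(timeInView));
});
```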

Self-Identification

Sometimes designs offer an opportunity to both test a hypothesis and collect data. For one project, I decided that, instead of emphasizing lowest prices (as is the norm in that saturated market), I would emphasize our client’s experience dealing with specific real-life scenarios (Authority principle). So I interviewed the client further to understand the specific scenarios their customers might be facing (based on their own customer conversations and industry knowledge). Then I wrote copy to address customers in each situation:

Now, by giving clickable options, I could track which options users were clicking. Over time, I could learn which scenarios are most common and then tailor the site more to those users.
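A minimal sketch of what that click tracking could look like, assuming the scenario links carry a .scenario-option class and a data-scenario attribute (both hypothetical) and that Google Analytics’ gtag is on the page:

```js
// Sketch: log which self-identified scenario a visitor clicks.
// The markup (.scenario-option, data-scenario) is hypothetical.
document.querySelectorAll(".scenario-option").forEach((link) => {
  link.addEventListener("click", () => {
    // Send the chosen scenario to analytics so it can be tallied over time.
    gtag("event", "select_scenario", { scenario: link.dataset.scenario });
  });
});
```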

Use Cases

I interviewed subject matter experts to better understand the end users’ requirements. I summarized the interests of each user segment using a User-Goal Matrix, such as:

Here’s another example:

User Narratives & Storyboards

The key here is again to build empathy. “Need to do X” may be a requirement, but do X where, why?

Thinking through the plot of a user story helped put me in the shoes of a user and imagine potential requirements to propose to the client (like using Street View, above, or automatically finding similar restaurants nearby). You think “If that actually were me, I’d want to…”

On another project, I documented the user story as a comic book with 2 personas. Here, a nurse visits an at-risk new mom and searches for clues that she may be depressed and a danger to herself or her child:

Instead of a table of things to look for, the comic shows clues in context (curtains drawn, signs of crying, etc). This kind of deliverable is a good way to build empathy for the project’s users (nurses finding themselves in these situations) and the end recipients of the service (their at-risk clients).

Customer Interviews & Jobs Theory

I’ve interviewed software users and SMEs. I’m also familiar with consumer interview techniques, including Jobs To Be Done.

I use JTBD insights to shape how I define requirements with clients. JTBD theory argues that the user’s situation is a better predictor of behavior than demographic details. Its emphasis is on capturing the customer’s thinking in that precise moment when they decide to buy a product, especially when they switch products.

See my article 15 Jobs-To-Be-Done Interview Techniques

A/B Testing

I’ve run A/B tests to compare large redesigns as well as smaller changes. Large redesigns only tell us which version is better, while smaller changes help pinpoint the specific cause (which tells us more about users):

As part of A/B testing, I tracked multiple metrics. That way I could say, for example, “the new page increased engagement but didn’t lead to more sales” or “we didn’t increase the volume of sales but order value went up”.
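As a sketch of the statistical side, here’s a simple two-proportion z-test applied to a primary and a secondary metric. The visitor and conversion counts are invented for illustration; real tests also need sample-size planning, a fixed test duration, and care with multiple comparisons.

```js
// Sketch: how reliable is an A/B result? Two-proportion z-test.
// All counts below are hypothetical.
function twoProportionZTest(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se; // |z| > 1.96 ≈ 95% confidence (two-sided)
}

// Primary metric: sign-ups. Secondary metric: completed sales.
const zSignups = twoProportionZTest(480, 10000, 560, 10000);
const zSales = twoProportionZTest(110, 10000, 115, 10000);

console.log(`Sign-ups z = ${zSignups.toFixed(2)}`); // ~2.5: likely significant
console.log(`Sales z = ${zSales.toFixed(2)}`); // ~0.3: likely noise
```

This is exactly the kind of split result described above: engagement moved, sales didn’t, and only the metrics tracked together make that visible.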

User Analytics & Segmentation

I used Google Analytics and A/B testing tools to segment visitor data. A classic case is Mobile vs. Desktop segments:

Another useful segmentation is New Visitors vs. Existing Customers, which I tracked by setting/reading cookies. I also segmented users by behavior, e.g., users who clicked a particular webpage element or hit a particular page.
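Here’s a minimal sketch of that cookie-based split. The is_customer cookie name and the assumption that it gets set on the order-confirmation page are illustrative; the user-property call follows the GA4 gtag convention.

```js
// Sketch: New Visitor vs. Existing Customer segmentation via a cookie.
// Cookie name and where it is set are assumptions for illustration.
function setCustomerCookie() {
  // Call this on the order-confirmation page, after a purchase.
  document.cookie =
    "is_customer=1; max-age=" + 60 * 60 * 24 * 365 + "; path=/";
}

function getSegment() {
  const isCustomer = document.cookie
    .split("; ")
    .some((c) => c.startsWith("is_customer="));
  return isCustomer ? "Existing Customer" : "New Visitor";
}

// Pass the segment to analytics so reports and A/B tests can filter on it.
gtag("set", "user_properties", { segment: getSegment() });
```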

I’ve done statistical and qualitative analysis of the data collected, teasing out relationships between various user behaviors:

User Testing

I’ve created test cases for moderated testing workshops. One type of user test provided detailed instructions for a user to follow.

“Log into the case dashboard. Find out if there are any cases that need to be escalated asap and flag them for the supervisor.”

Another type of test case posed open-ended goals to see if the user could figure out how to do it.

“You’re a data-entry clerk. You’ve just received a report from John in damage claims. What would you do next? Let the moderator know if you encounter problems or have questions.”

Or

“A cargo ship arrived into the East Port carrying 100 tons of fuel. When is the next train arriving and does it have enough capacity to transport this fuel to the Main Hub?”

I’ve observed users going about their task. The acceptance criteria included usability (is the user struggling to perform the task) and completion (is the user able to complete their job task). Users provided feedback verbally and using a template like this:

I’ve also created detailed training/test scenarios that closely mimicked real job conditions. Users had to successfully complete the tasks and confirm they matched reality.

Hands-On Hardware Research

On one project, I had to understand how to deploy touch-screens and new software across dental offices. I tested the touch-screens on site and interfaced them with the digital X-Ray equipment. I’ve also used:

  • dive computers
  • VR headsets
  • synthesizers

Solving Problems with User Research, Best Practices, and A/B Testing

What can I do to persuade more people to buy your product online? I tackled this question for 5 years as I ran A/B tests for diverse clients.

I remember one test idea that everyone on the team loved. The client said “That’s the one. That one’s totally going to win.” Well, it didn’t.

The fact is, most A/B test ideas don’t win.

In fact, interpretation is tough, because there are so many sources of uncertainty: What do we want to improve first? Which of a hundred implementations is a valid test of our hypothesis about the problem? If our implementation does better, how statistically reliable is the result?

Is our hypothesis about the users actually true? Did our idea lose, because our hypothesis is false or because of our implementation? If the idea wins, does that support our hypothesis, or did it win for some completely unrelated reason?

Even if we accept everything about the result in the most optimistic way, is there a bigger problem we don’t even know about? Are we inflating the tires while the car is on fire? 

If you take anything away from this, take this analogy: inflating your car tires while the car is on fire will not solve your real problem.

I believe the most effective means of selling a product and building a reputable brand is to show how the product meets the customer’s needs. This means we have to know what the customer’s problem is. We have to talk to them.

Then if we run an A/B test and lose, we won’t be back to square one. We’ll know our hypothesis is based in reality and keep trying to solve the problem.

Emulating Competitors

“I heard lots of people found gold in this area. I say we start digging there!”

That actually is a smart strategy: knowing about others’ successes helps define the opportunity. That’s how a gold rush happens.

This is why A/B testing blogs are dominated by patterns and best practices. So-and-so gained 50% in sales by removing a form field… that sort of thing. Now don’t get me wrong: you should be doing a lot of those things. Improve your value proposition. Ensure your buttons are noticed. Don’t use tiny fonts that are hard to read. You don’t need to test anything to improve, especially if you focus on obvious usability issues.

So what’s the problem? Well, let’s go back to the gold analogy. Lots of people went broke. They didn’t find any gold where others had or they didn’t find enough:

“The actual reason that so many people walked away from the rush penniless is that they couldn’t find enough gold to stay ahead of their costs.” ~ Tyler Crowe Sept. 27, 2014 in USAToday

You could be doing a lot of great things, just not doing the RIGHT things.

The good thing is that many people do some research. The problem is they don’t do enough of it, or not directly enough. They are still digging in the wrong place.

“If I had only one hour to solve a problem, I would spend up to two-thirds of that hour in attempting to define what the problem is.” ~ An unknown Yale professor, wrongly attributed to Einstein.

Think about this for a moment: How can you sell something to anyone when you’ve never talked to them or listened to what they have to say?

Product owners often believe they know their customers, but assumptions usually outnumber verifiable facts. Watching session playback can hint at problems. Google Analytics gives a funnel breakdown, but it doesn’t give much insight into a customer’s mind. It’s like trying to diagnose the cause of indigestion without being able to ask the patient what they had for dinner or if they have other more serious health complaints.

The problem is it’s all impersonal, there’s no empathy. There’s no “Oh man, that sucks, I see how that is a problem for you”. It’s more like “Maybe people would like a screenshot there. I guess that might be helpful to somebody”.

Real empathy spurs action. When you can place yourself in your customer’s situation, you know how to go about helping them. If your solution doesn’t work, you can try again, because you know the problem is real rather than a figment of your imagination.

A Pattern Is A Solution To A Problem

Therapist: “Wait, don’t tell me your problem. Let me just list all the advice that has helped my other patients.”

Let’s say some type of visual change has worked on 10 different sites. Let’s call it a pattern.

A pattern works because it solves some problem. So choosing from a library of patterns is choosing the problem you have. You don’t choose Tylenol unless you have a headache or fever. You don’t choose Maalox unless you have indigestion.

If you know what YOUR problem is, you can choose the right patterns to solve it.

If you don’t know the problem, you won’t get far choosing a pattern because it’s popular, because of how strongly it worked or how many people it has worked for. That’s like taking a medication you’ve never heard of and seeing what it does for you.

Pattern libraries are great for when you have a problem and want a quick, time-tested way to solve it:

Research Uncovers The Problem: A Short Story

Say you’re a shoe brand. You decide to reach out to people who are on your mailing list but haven’t purchased yet.

So you send out a survey. Within the first day, it becomes clear that many people are avoiding buying your shoes, because they’re not sure about sizing.

You’re shocked, but you shouldn’t be. User research insights are often surprising.

It’s just that you thought you anticipated this by posting precise measurements, a great return policy, and glowing testimonials. If anything, you thought people would mention the price, but no one so far mentioned price.

That’s a big deal for your product strategy. You need to build trust. So you set aside your plans for a full redesign (those fancy carousels on your competitor’s site sure are tempting). You set aside A/B test ideas about the font size of prices, removing fields, and so on.

You tackle the big problem. You do some research and come up with solutions:

  • match sizing to a set of well known brands
  • provide a printable foot template
  • allow people to order two sizes and return one
  • mail out a mock plastic “shoe” free of charge, and so on…

You ask a couple of people to come to the office and try some of your solutions.

Your user testing methodology is simple: First people pick their size based on either the sizing chart or template. Then they see if the real shoe fits.

Result? Both the matched sizing and the foot template turned out to be effective in predicting fit. But in user testing, the initial template didn’t work so well, because it’s hard to place a 3D foot in the perfect position on a 2D printout. So, you come up with a template that folds up at the back and front, simulating a shoe. The users liked that much better. In fact, you start working on a cardboard model you can mail cheaply to anyone who requests it.

Now you’re off to testing it in the real world!

You design 2 different foot sizing comparisons: a pretty one with photos of the top 3 brands, and a long, plain table with 20 different brands. You also create an alternative page that links to the downloadable foot template.

You A/B test these variants over 2 weeks and pick the one that works.

(Then you go back to your research and find the next problem.)

You may also like this post about patterns: Compact Navigation Patterns.

If you want to uncover the biggest problems for your customers, I’m happy to help.

15 Jobs-To-Be-Done Interview Techniques

Here are 15 techniques I extracted from the Jobs-To-Be-Done interview Bob Moesta’s team did with a camera customer (link at bottom):

Set expectations

Give an introduction to how long the interview’s going to take and what sorts of things you’re interested in. For example, “even minor details may be important”.

Ask about specific details to jog the customer’s memory

Don’t just ask what the customer bought, but why that model, which store, what day, what time of day, were they in a rush…

Use humor to put the customer at ease

Intentionally or not, early in the interview the whole team had a good laugh about something the customer said. I think it did a lot to dull the edge of formality.

Discuss pre-purchase experiences

Ask what the customer used before they bought the product and what they would use without it. Dig into any “I wish I had it now” moments prior to the purchase.

Go back to the trigger

Walk back to what triggered the customer to even start thinking about buying the product, and to a time before they ever considered it.

Get detailed about use

Interviewers and the customer talked about how she held the camera, which hand, in which situations she used it, which settings she used, and advantages/disadvantages of the alternatives. You want the customer to remember and imagine the product in their hands. Things like the weight or texture of the product could impact the user experience. Dismiss nothing.

Talk about lifestyle impact

Dig into ways in which the product impacted the customer’s lifestyle, things they were/are able or unable to do. For example, they talked about how taking pictures without the camera affected the way she presented her trip photos to her sister. Focus on the “use” rather than the specific “thing”. For example, you can ask “do you like this feature”, but then you want to move to “what does this feature mean to you in terms of what you’re able to do, how it affects your lifestyle, your future decisions”.

Explore product constraints

The interviewers talked about how other decisions and products impacted the purchase decision. For example, the size of the bag that has to fit the camera, and avoiding the slippery slope of requiring additional accessories.

Ask about alternatives

Products don’t exist in isolation. The customer had several other solutions, which serve different, specific purposes. Figure out whether the new product will replace or complement other products.

Point out inconsistencies, such as delays

Interviewers pointed out that the customer waited a long time to buy the product from the initial trigger to making the call after a trip. They asked “Why did you wait so long?”

Talk about the influence of other people

Ask about advice other people gave the customer or how other people may be affected by the decision.

Don’t put words in their mouth

In digesting and summarizing back to the customer, it’s easy to inject your own conclusions and words. Try to elicit attitudes and conclusions from the customer. Lead them to it but don’t do it for them (a related technique is to start talking and then leave a pregnant pause, so the customer can complete the thought). In one clear case in the camera interview, the interviewers asked a leading question but then promptly noticed this and corrected themselves, saying “Don’t use his words”.

Talk about the outcome

Ask open-ended questions about whether the customer was happy with their purchase and in what ways. Ask about specific post-purchase moments when the customer felt “I am glad I have it right now”, but focus on how the situation is affected, not on the product itself.


Here are some additional techniques I considered after listening to the interview:

Avoid fallacy of the single cause

Don’t push the conversation towards a single cause (see Fallacy of the single cause). Rather than engage in cause reductionism, accept there may be multiple, complex causes.

Let’s say you pose the question: “Joe said that, and so you decided to buy X?” The simple narrative may be intuitive, causing the subject to be persuaded that “Yes, I guess that is why I decided to buy X”. The events may be true (Joe did say that), but in reality they may be unconnected. In these cases, it’s important to point out inconsistencies rather than seek confirmation. For example, in the camera interview the interviewer rightly pointed out an inconsistency: “Why did you wait so long to buy X after he said that?” They also often asked “What didn’t you…” Work together to uncover the truth.

Beware planting false memories

Do not reflect back your own sentiments or ideas to the interviewee when clarifying. For example, asking people to confirm something they did not literally say may cause them to confirm a causal relationship that did not happen (other cognitive biases may aid this: pleasing the interviewer, tendency to fall for reductionism). It may plant a subtle attitude that might then be amplified through the course of the interview. Also be careful with “because” statements, as there is some evidence that we are biased to accept such explanations even when they are irrational (see The Power Of The Word Because).

More on the possibility of implanting false memories: Video 1 and Video 2.


Listen to the interview for yourself.

Detailed Fintech UI Concepting & Design (2015)

Optimized content to persuade visitors to upgrade to the paid product and designed new functionality, dashboards, and reports.

Applying User Centered Lens

I helped the client to create a matrix to clarify their target audiences and user needs in Plain English. Here’s an example:

Writing copy samples helps the team refine its message and value proposition. It generates ideas for design. Later it can serve as raw material for headings, labels, and marketing copy.

This eventually evolved into personas focusing on empathy and context:

Product Concepting & Strategy

I helped the client connect their raw business ideas to specific user goals. I asked user-centered who/how/why questions to tease out the core opportunity. It is common for clients to have implicit knowledge that they don’t think to make explicit.

For example, while discussing a screen, I suggested we break users into “buyers” and “sellers”. It turned out “buyer” and “seller” terminology didn’t feature anywhere in the UI, because the client’s site isn’t a marketplace. However, this language described the users’ goals well.

I sketched a lo-fi wireframe in real time. It targeted buyers and sellers explicitly instead of saying “Lists of Products”:

Discussions like this led to new product ideas, different ways of organizing the existing offerings, and different strategies for marketing.

Landing Pages And Calls To Action

I designed a number of landing pages for this client. When they needed to “collect user data”, I helped them reframe this as a user goal i.e. why are they giving their data? When they wanted a page to list some facts about their product, I helped them articulate the value proposition. I wrote copy and organized content rooted in the user’s situation:

Real-Time Sketching & Collaboration

The client and I used screencasts and email to exchange ideas. Then we’d get on a Skype call to sketch the ideas with real-time feedback.

For example, the client had a screen that lacked purpose and consistency:

I improved the information architecture of the screen (its value proposition, hierarchy, and clickable items). During the conversation, I redesigned the numerical scale and proposed expanding it to create a “report card” for all criteria, with useful “how it works” insights for the user (useful based on actual comments from users):

Real-time collaboration allowed design decisions to cascade and evolve to create a more useful, cleaner screen that the client was happy with.

Guiding Users Through Complex Processes

Problem: During our Skype call, the client and I arrived at the idea of “generating leads”, something users are currently not able to do using any tool on the market. To get at this info using the site would require multiple steps and reports.

Solution: I proposed to clarify and emphasize this feature by creating a unified step-wise wizard culminating in a practical “Prospect List” a sales person can run with:

I encouraged the client to guide users more. For example, I recommended adding more descriptions and training videos to various complex areas of the site.

Dashboard Concepting

This is a mockup to summarize a financial portfolio. The report gives users details and a key takeaway, i.e., a breakdown of their key product buckets plus a single-number summary (top right):

The mockup embodied existing requirements but also served as a proof-of-concept for new potential ideas. For example, in this diagram, I included a blue line that compares Product A to a benchmark. This is a way of asking the question visually: Does the user need to compare to a benchmark?

In this concept for a dashboard component, the idea is to see a subset of the data that’s relevant and then act on it directly. I’m filtering the data to show only the negative y-axis values, highlighting just the negative events. I’m then comparing it to the equivalent on the benchmark. I’m also detecting the lowest point (worst event) and allowing the user to click it directly:

I created a summary component to let users compare the current value to the range, and see how far it is from the highly probable values (cutting to the key insight instead of a long table with a complex chart):

Divergent Ideation

There are usually many ways of doing something. When I sketch ideas for a concept, I usually diverge to explore many options. I then converge based on what makes the most sense and on the client’s feedback, OR I propose an A/B test.

For example, I tried an upgrade pop-up instead of the full report to persuade users to pay:

Some of the questions we explored as separate variations were: Should I tease the user with some summary data? If so, what data? Should I show the upgrade call to action and input fields on the same page or hide them behind an upgrade link? Should I go with a dark or light motif? What’s the optimal message for the heading? Should I list top benefits or speak with data?…

For the home page, I concepted out different ways to get the user started:

In version B, I proposed a single field with a call to action, an action 90% of users would be interested in. In version C, I proposed instead to let the user choose who they are, then show them a tailored message and call to action (Gradual Engagement). In version D, I proposed showing the user several “I want to …” statements to directly link to user goals… and so on.