Cyber insurance involves answering detailed, technical questions that brokers aren’t used to. I designed and shipped an AI assistant to help users answer those tough questions:
How Did This Project Arise?
Soon after ChatGPT went viral, I was asked to investigate how we could embed Gen AI into our product. New AI tech made it possible to address long-standing aspects of our product vision, including:
Design Principle: We’ve long framed Relay as an “Assistant” to brokers
Problem Validation: Customers struggled with long technical questionnaires. But how big a problem were the questions? Big enough for customers to use an AI assistant? And what additional help would they request?
This project would address a range of business interests as well:
Strategic: Get users comfortable seeing AI in the product, so we can build on it; create buzz by being the first
Tactical: Allow sales and marketing to start bringing up AI in calls and get customer reactions. Would the Support team report lower onboarding anxiety?
Internal Learning: Skill up the development team in AI and lay the groundwork for later
How Did I Validate the Problem?
Early on at the company, I helped implement a “continuous discovery” track that ran parallel to our sprints. From these ongoing interviews with customers, sales, and support, I knew of two things that scared off brokers: #1 lack of familiarity with cyber insurance and #2 friction in the workflow: brokers had to ask their clients for info on top of what they already collected for other insurance lines. We had the opportunity here to educate brokers AND help them advise their own clients.
An interesting paradox: I knew the problem wasn’t going away. Brokers needed the ability to get quotes from multiple providers at once, but the more providers, the more questions they had to deal with.
What I Did & With Whom
My key deliverables were:
UI concepting, design, and overseeing implementation
Prompt engineering
I worked with a number of people, including:
Stakeholder: Head of Product
Primary concerns: Something useful we could ship quickly; beating competitors to AI; mitigating legal risk with a disclaimer; UI feedback (pushed for more “in your face” placement)
Our collaboration: We discussed and chose a UI direction together (it needed to be dead simple for non-tech-savvy people, with no edge cases to complicate development). I negotiated a less obtrusive UI that was still discoverable enough for sales and marketing.

Stakeholder: Lawyer
Primary concerns: Disclaim the heck out of this feature
Our collaboration: I toned down the disclaimer (from a blocking interstitial full of legalese to a small non-blocking footer).

Stakeholder: Developer
Primary concerns: Minimize scope, as AI was new to us
Our collaboration: I paired closely with an engineer to refine the design. I negotiated error-handling logic and response-time testing to ensure an experience free of edge cases.

Stakeholder: User
Primary concerns: Something that doesn’t require additional training or explanation
Our collaboration: I did rapid hallway testing with 2-3 former brokers, which was enough to confirm the design was quick to learn and to ship it.
Interaction Design
In order to use the AI Assistant, a broker would hover over a question and click “Clarify”. Here is a video showing the full interaction:
What Were the Hardest Parts?
Choosing from many possible prompts. I knew our users’ typical concerns from experience. My key decision was to restrict the choice to 4 starting questions. I created short, user-friendly labels to hide the complex underlying prompts. I tested numerous prompt versions to ensure consistent, high-quality responses and no hallucinations. I didn’t want users to waste effort typing anything.
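To make this concrete, here is a rough sketch of the kind of label-to-prompt mapping this produced. The labels and prompt wording below are illustrative only, not the production prompts.

```javascript
// Illustrative only: four curated entry points, each mapping a short,
// broker-friendly label to a longer hidden prompt template. "{question}" is
// replaced with the questionnaire item the broker clicked "Clarify" on.
const CLARIFY_PROMPTS = {
  'Explain this question':
    'Explain in plain language, for an insurance broker, what the following cyber insurance question is asking and why carriers ask it: "{question}"',
  'Where do I find this info?':
    'Suggest where a small business would typically find the information needed to answer: "{question}"',
  'Show an example answer':
    'Give a realistic example of how a small business might answer: "{question}"',
  'Why does this matter?':
    'Explain briefly how the answer to "{question}" can affect cyber insurance pricing and coverage.',
};

function buildPrompt(label, questionText) {
  return CLARIFY_PROMPTS[label].replace('{question}', questionText);
}
```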
Minimizing distractions. Leadership wanted AI to be prominent in the UI (like on the home page) with a prominent, blocking disclaimer. I argued we had an existing user behavior to build on: users could hover on a troublesome question to see who’s asking it. Therefore, I placed the AI affordance in that context, next to the question. And I negotiated with Product and Legal to keep the disclaimer small and out of the way.
Scalable pattern. I needed to establish a pattern that could be used in other situations in the future. A contextual trigger that opened a sidebar seemed scalable. In fact, it was later used to facilitate prototyping a different feature.
What Alternatives Did I Explore?
Alternatives to AI. I had considered writing high-quality custom content for each question (as an alternative to AI). Unfortunately, the upfront work and ongoing maintenance were cost-prohibitive.
Open/Closed Interactions. I considered using the UI as research: to learn what other questions users might have “in the moment”. One idea was an “Add Question” interaction. However, I decided against this to keep scope tight and control risk with curated questions.
Dedicated flow vs sidebar. Another idea was a questionnaire Wizard that would explain each question and spread AI guidance over the whole flow. But that was too much work for v1 and over-optimized for first-time users.
How Did We Measure Success?
We instrumented analytics to see how often the feature was triggered, on which questions, and what kind of help the user requested. The data wasn’t yet conclusive enough (given our long sales and adoption cycles) to establish a direct impact on the bottom line. There were larger factors in play, e.g., strategically we soon after shifted focus to bypassing questions instead (which meant fewer “doer” users would need to consult the AI Assistant, at least initially).
We did collect qualitative feedback from Support teams doing onboarding that showed the existence of the AI assistant contributed to the perceived value of our product for the buyer persona.
Also, the marketing efforts around AI generated some sales leads, which is an important thing for an early-stage startup. And the AI experience within the team yielded more opportunities later.
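For context, the instrumentation mentioned above boiled down to a handful of events along these lines. This is a minimal sketch; the event names, properties, and track() helper are hypothetical rather than our actual analytics schema.

```javascript
// Hypothetical event tracking for the AI Assistant (names are illustrative).
function track(eventName, properties) {
  // In production this would call the analytics SDK; here we just log.
  console.log('analytics:', eventName, properties);
}

// Broker clicks "Clarify" next to a questionnaire item.
function onClarifyOpened(questionId) {
  track('ai_assistant_opened', { questionId });
}

// Broker picks one of the four curated prompts.
function onPromptSelected(questionId, promptLabel) {
  track('ai_assistant_prompt_selected', { questionId, promptLabel });
}
```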
The best way to get real-life feedback is to put something real in the hands of customers. Wizard of Oz is a prototyping technique where you tell customers they are using a real solution, but you’re actually faking it behind the curtain.
I shipped an AI bot that consumed documents with client information over email (e.g., last year’s PDF submission) and replied with quotes. We used an LLM backend to parse the document, automatically fill an insurance submission, and fill gaps with assumptions.
The key chapters of this story include:
My Role as Product Manager (and Designer)
I owned this end-to-end as Product Manager. I mocked up and tested the concept, tested the usability of the final designs, and managed 2 engineers to ship the AI Email Bot. I was responsible for managing all internal and external stakeholders, the technical setup of the solution, the design, and the implementation:
Problem and Solution
Brokers want quotes from multiple providers. More providers = more questions upfront, and this wasn’t always worth it. This is a larger problem I worked on for a long time:
For this project, the final solution was: brokers can get rough quotes right from their inbox by submitting existing documents and getting quotes by email:
Effort: No more data re-entry from existing PDF applications
Intuitiveness: No logging into anything, no leaving email inbox
Time: Cut overall time-to-quote
What Research Triggered This Project?
Some customer insights were amplified by recent interviews:
Brokers felt overwhelmed by the proliferation of provider portals (if they didn’t have to use our portal for the initial steps, it was a big plus)
Brokers essentially “lived” in their inbox (if they could transact in rough quotes without ever leaving their inbox, that would be huge)
Based on my ongoing discovery research, I knew customers of a particular type:
Brokers that value speed/volume over accuracy (ok with rough number they can revise later)
Brokers comfortable making assumptions, whose clients rarely revise the application to bind (e.g., Apogee)
Brokers that receive filled applications via email (e.g., last year’s app for other carriers, broker’s form)
In the backdrop were other factors:
I had already shipped a related feature: ability to skip questions for rougher pricing
I had long-term relationships with customers who need this sort of feature
I had previously concepted related ideas, like Outlook integrations, but there were practical obstacles, and it was hard to prototype (none of which held for AI-over-email)
My team had access to ChatGPT-powered data extraction tools
At some point company leadership listened to what customers were saying and okayed exploring a solution that catered to an email-based workflow.
Managing Conflict and Negotiating a Pilot With Beta Customers
As part of my design role, I cultivated long-term relationships with customers. I identified 2 beta customers for this feature and gained their buy-in for a pilot. One of them I had previously visited at their office, so securing their participation in a 2-week pilot was relatively easy. In the end, I had 6 potential users who I knew would benefit from this feature, based on my deep knowledge of their specific workflows and values.
I had a preliminary call with them to estimate how much time we stood to save them.
One company was in a busy season, but I persuaded them this was actually a great chance to try a time-saving feature. The other ran a very lean shop. When I reached out with a follow-up request to one of the end users, the manager pushed back, saying, “We can’t spend time doing that. Perhaps this isn’t going to work for us.”
Instead of negotiating over email, I knew we had a standing call the next day. I simply said “No worries, we can touch on this tomorrow”. The following day, I walked the customer through a carefully prepared, minimalist slide deck. In step 1, I reiterated their goals and the projected time saving = handling more business. Then I explained how we needed some sample documents to calibrate our solution. The customer gladly agreed.
Negotiating With External Stakeholders
Part of my job involved negotiating with various stakeholders, vendors, lawyers, underwriters, and others.
For this project, we were using an AI provider for extracting data from PDFs. I suggested the vendor and my team set up a shared Slack channel, where I ended up giving a lot of feedback and making feature requests. I articulated the benefits well enough that the vendor actually shipped new features based on my feedback, in time to be useful to us.
Scope of the Pilot
For one of our key customers, I negotiated a desired workflow and target metrics:
How I Ran “Wizard of Oz” Research
For two weeks, I monitored my inbox and impersonated a chatbot (assembling submissions and attaching quotes manually).
My research objectives included questions like:
What’s the maximum time we can take to respond for the solution to still be acceptable?
Would users be comfortable sending follow-up questions to a chatbot?
Would users even remember to use the new workflow? (old habits die hard in this industry)
How would users react to assumptions? What information would users need to verify?
And most importantly: would the document extraction tech be good enough to produce quotes most of the time?
It was a chance to collect more real documents to refine our prompts for data extraction
I varied my responses to test different hunches. I’d send successful responses but also reject a submission on purpose to see what the user would do instead.
At times an incoming customer request woke me up early in the morning, and I had to respond as quickly as possible to keep impersonating the AI Quote Bot.
A Useful Technique: Intentionally Fail
People are surprised when I tell them I sometimes intentionally gave failed responses, from complete failure to parse the document to “I got this but it’s not enough for a quote. Please take this additional step.” Knowing how customers would react to fail states is a key part of prototyping. Sometimes fail states elicit behaviors that you wouldn’t catch otherwise e.g., would users try to push back on the response, would they try to resubmit, or modify their submission?
Learnings From Prototype
We needed to respond within 5 min in order not to block other steps in users’ workflow
To some users it felt natural to ask the chatbot follow-up questions, but it wasn’t crucial; it was enough for our Version 1 release to merely create a “shell” submission, which users could edit via a link
We measured the time saving of 10-20 minutes per submission from data entry alone; this translated into one customer being able to place 20% more business
Needed data prediction improvements: e.g., 15% of submissions couldn’t go through because of missing industry code
Most importantly: we kept seeing submissions despite users being told they could stop at the end of our pilot
The following quote further illustrates how this feature validated our principle of meeting users where they are (i.e., shifting from web UI to their inbox):
How We Measured Success
Several customers came to rely entirely on the new Email Bot. One of the pilot customers was able to quote 20% more business:
Technical Success
From our Beta release, we saw we weren’t hitting our 80% submission-to-quote success ratio. I identified the causes, and we fixed them in the next release:
Desire Paths: Users Vote With Behavior
A desire path is a workaround. It’s formed when pedestrians refuse to walk a paved path and forge their own, shorter path across the grass. This way users communicate their desire through behavior.
During the pilot, one customer hacked our solution. Since they knew our AI email service consumed prefilled PDF applications, they created plain-text PDFs with customer information and then submitted them to the AI bot. Doing so required extra work, but it told us that the email channel was so effective and intuitive that they were willing to do that work.
Biggest Proof: Do They Want It Back When It’s Taken Away?
One of the most surprising findings AFTER the pilot ended was that we kept seeing submissions despite users being told they could stop at the end of the pilot.
Once the pilot was “turned off”, users were eager to get it back, even users who had complained about a lot of rough edges during the pilot. In B2B, it is harder to impress the “doer” persona than the “buyer” persona. In this case, users didn’t have to use our AI tool, but they saw that it made their lives easier and wanted it back. This was another strong signal that we were on the right track.
One customer said:
User Testing the Bot Response
Part of the bot response was a checklist to verify the accuracy of extracted data. One of the key considerations was: should the user see it in the same order every time, or should it be in order of priority (e.g., errors first)? I surveyed the pilot participants over email to choose:
One user replied: “B is better” while another said “Example A is better.” So I dug in further with the latter: “What makes version A more practical? Was it clear to you that… [details]”
The user clarified: “Version A is more practical because it captures more info and leaves us just 4 additional bullets to double check…Version B captures less info and requires more time to validate those 8 bullets filled out using guesses and defaults.” This was not true, but the user’s perspective here did inform me about the perceived complexity. After talking to the customer more, they agreed Version B would be better for them.
I explored several other variants, e.g., I guessed Version E4 would be easier to scan, but I was wrong. Although E4 would give the user criteria in the same order every time, the mental load was actually higher, because they processed it as 2 lists, left and right. I also saw that too granular a breakdown was counterproductive:
Through a couple of iterations I validated:
How granular the breakdown should be
What layout entails lowest mental load (it wasn’t what I expected)
How many bullets are too many
Which items are most crucial to review
In the end, I reused the email “digest” template I had designed for another feature. The bot summary was simplified and tacked on at the bottom:
Prompt Engineering & Test Strategy
To ensure we had an 80%+ success rate converting submissions into quotes, I had to get creative with designing and testing prompts. The AI vendor did not have a mass-test feature, but I wanted a systematic approach. I built a JavaScript browser automation for mass prompt testing and established a process to regression test prompt changes. Here is an example of my log, showing pass/fail with various prompt variants across dozens of document samples:
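The harness itself was along the lines of the sketch below; the prompt variant names, required fields, and runPrompt() hook are placeholders for the vendor-specific browser automation we actually drove.

```javascript
// A simplified sketch of the prompt regression harness. runPrompt() stands in
// for the automation that drove the vendor's tool; samples are real anonymized
// documents collected during the pilot.
const promptVariants = ['v12-strict-schema', 'v13-few-shot'];
const requiredFields = ['insuredName', 'state', 'revenue', 'industryCode'];

async function regressionRun(samples, runPrompt) {
  const log = [];
  for (const prompt of promptVariants) {
    for (const doc of samples) {
      const extracted = await runPrompt(prompt, doc);
      // Pass = every high-priority field came back non-empty.
      const pass = requiredFields.every(field => Boolean(extracted[field]));
      log.push({ prompt, doc: doc.name, pass });
    }
  }
  console.table(log); // the pass/fail grid per prompt variant per document
}
```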
By focusing our first AI features on the submission form and email workflow (the salient and familiar), we were able to get a positive response to AI from a non-tech-savvy crowd.
What Product Risks Did I Identify?
I led team discussions on the following topics. These are things we needed to be OK with, actively mitigate, or learn about from our pilot.
Risk: ACCURACY
Description: AI is faster, but will it hallucinate or extract data accurately?
Mitigation: Restrict extraction to high-priority data like name, state, and revenue.

Risk: CARRIER CHOICE
Description: Brokers won’t use AI because it doesn’t support their key carriers.
Mitigation: Reinforce the message that it’s zero effort to try; success hinges on our carriers’ premiums staying competitive over time and on the number of carriers brokers get back (3+ good quotes are hard to pass up even if 1 is missing).

Risk: OVERALL USABILITY
Description: Brokers are annoyed if data is extracted incorrectly and fixing it takes even longer.
Mitigation: Overall testing showed good results, but accuracy could vary for specific carriers (needs long-term monitoring).

Risk: DEFAULTS
Description: Brokers can make better assumptions than AI.
Mitigation: Research showed brokers are OK with a rough number and reasonable assumptions (e.g., 3 employees); defaults can be configured per broker.
MVP Scope and Feature Roadmap
After prototyping and consulting with engineering, I defined the roadmap for this feature, setting MVP constraints as:
Bot can process 1 document at a time
Bot cannot read email body
Bot cannot reply to follow ups (user must hit Edit link)
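Put together, the MVP behavior amounted to something like the following sketch; the helper functions are hypothetical stand-ins for the actual vendor integration and quoting logic.

```javascript
// Rough sketch of the MVP email bot, reflecting the constraints above:
// one attachment, ignore the email body, no conversational follow-ups.
async function handleIncomingEmail(email, deps) {
  const { extractFields, applyBrokerDefaults, createShellSubmission, getQuotes, reply } = deps;

  const pdfs = email.attachments.filter(a => a.contentType === 'application/pdf');
  if (pdfs.length !== 1) {
    return reply(email, 'Please attach exactly one filled application as a PDF.');
  }

  // 1. Extract high-priority fields (name, state, revenue, ...) via the LLM vendor.
  const extracted = await extractFields(pdfs[0]);
  // 2. Fill gaps with per-broker assumptions (e.g., default employee count).
  const submission = applyBrokerDefaults(extracted, email.from);
  // 3. Create a "shell" submission the broker can edit via a link, then quote it.
  const { editLink } = await createShellSubmission(submission);
  const quotes = await getQuotes(submission);

  return reply(email, { quotes, editLink, assumptions: submission.assumptions });
}
```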
Go-to-Market
As the feature was nearing completion, I kept the release on track:
Pitched the latest customer feedback to engineering
Updated customers on final release date and scope
Wrote product brief for sales, support, marketing
Ran Q&A session with sales/support
Discussed upcoming release with marketing
Interesting Aside: Hardest Mental Shift Is When Lo-Tech Wins
It’s often said that we as product people overestimate where we sit in our customer’s life. This can be like tunnel vision.
Over the years, I invested a lot of time into designing and shipping a user-friendly, lean application flow. I always saw email and spreadsheets as something to be replaced. However, as time went on we saw repeatedly that there are times when PDFs, spreadsheets, and email are good solutions for specific circumstances.
Meeting customers where they are became an important theme. This opened my eyes to new factors. For example, I came to appreciate that certain user personas just wouldn’t log into our software. “Producers” (salespeople) spent all their time on phone calls with customers and traveling. They never logged into anything and delegated a lot to others. The tendency then was to either ignore them OR to push solutions on them they didn’t want. But I challenged myself on how to make it “dead simple” to create opportunities. Maybe they wouldn’t log in, but they would click a link in a spreadsheet.
This project was one of the steps we took to meet customers where they are.
Concepting The Future
Following that release, we started work on enhancements (not covered here). The vision included expanding the AI Assistant into more of the day-to-day workflow. For example, I knew that flagging failed deals is important for company reporting but is a task of less value to the broker… This is a rough mock I used in discussions with customers to illustrate how that might work:
When I joined, Relay consisted of a simple form and a rough idea for a “risk tower” visualization. I brought those ideas to maturity as part of an end-to-end workflow complete with email integration, dashboards, permissions, and so on.
This is a risk tower screenshot from the video. It visually shows how much risk a reinsurance company is willing to cover. You can view multiple providers, pin your favorites, and assemble full coverage:
I shot the first promo for Relay Platform myself. It shows more of the product context and some Risk Tower interactions I designed.
Customer Feedback
It took a long time to learn to talk the talk with customers and to walk the line between winning important customers and getting up to speed. Sometimes a customer might tell us: “Guys, this is basic insurance. I shouldn’t have to explain this to you”. Sometimes we’d get contradictory information. It was humbling, but we persevered and won over even the skeptics once they saw how much time they were saving.
Here is a 1 minute video from a customer team that saved time with Relay:
Views for Multiple Participants
Designing multi-participant experiences meant always considering how a change on one side affected other parties. For example:
A broker submitting an application uses the tower to request and view quotes
A carrier viewing the application uses the tower to see where their quotes lie with respect to what is requested
Here is what it looked like for a broker to enter a risk tower and view quotes visually:
Example Tower Interaction Challenges
The Tower graph was our flagship feature to represent a request for coverage. An insurer claims some part of that surface area with their quote. Here are some example UI/interaction challenges with Towers that I faced:
When layers quoted were different than the layers requested (didn’t visually map)
Showing overlapping layers e.g., several quotes in exact same position on tower
Showing layers that are too thin to show e.g., insurer quotes 1% of the width of a layer
Tower sits on top of another tower (so the visual scale is skewed or unknown)
How to show “non-quoted” retention – as absence of a layer or as different type of layer?
When coverage metadata differs between adjacent layers (e.g., a coverage flips from not applying to applying)
Here is an example of how I addressed the latter:
I ignored this problem as an edge case until I ran into a customer that dealt with this a lot. I interviewed them about why they needed this information. I then designed the above enhancement to give them that insight.
Relay had about 4 target personas, based on stories we heard over and over, as featured on an earlier version of the website:
Making Hard Decisions
Sometimes, you have to let go of ideas you’ve invested in. These are some spin-off project samples I invested significant research and design time into. They didn’t pan out. In one case, I lobbied to kill the project by analyzing the pros and cons, citing changes in our target audience, the market, and our strategy. This decision allowed the team to focus on more profitable opportunities.
Dealing With Ambiguity & Complexity
Complexity and moving targets made scoping, prototyping, and designing difficult. Challenges included inconsistent language among brokers, complex workflow requirements, and differences in how brokerages worked.
I knew that large brokers spent significant time assembling complex risk towers. I grappled with questions like: What qualifies as complex? Is this our target segment? How does it integrate with our product? Through research, I identified key decision points and worked with SMEs and Product to clarify and make assumptions.
Making sense of the domain, I sketched visualizations to identify commonalities in insurance towers. This helped me lead discussions around “What’s the simplest starting point?” and “What are the key dependencies?” I played a key role in resolving this through analysis, concepting, and gathering early feedback:
End of Story
Despite securing influential customers, we didn’t get the self-serve, exponential growth we needed with this product. The business was complex, and change management was slow. The cyber market, in contrast, was experiencing a lot of innovation due to (1) growing cyber risk in the market and (2) higher tech maturity of providers (APIs), which made scaling easier. As a result, the reinsurance product was retired, and the company pivoted to cyber insurance.
Images/videos were taken from our public website, social, and Youtube.
The more I spoke to customers, the more I could speak their language and understand how they think and what might be holding them back. Here’s an example of something I learned from a customer interview, when I realized that what we were calling “indications” (rough insurance quotes) wasn’t the same as what they understood as “indications”. There was apparent alignment, because both they and we meant “rough quotes”, but it turned out we used the concept differently. As a result, customers were approaching what we were offering with the wrong preconceptions:
Mental Concept of “Indications”: Customer’s Mental Model ≠ How It Worked In Our Product

Effort — Customer’s mental model: takes some effort; the broker attempts to tailor assumptions or get preliminary info from the client. ≠ Our product: a focus on high volume and low margins means initially putting in minimal effort.

Pricing — Customer’s mental model: fairly close to the final quote; the client can use it to weed out some options. ≠ Our product: a ballpark to start the conversation; no point comparing options yet.

Application Requirements — Customer’s mental model: enter as much as you know. ≠ Our product: AI-driven or generic assumptions for a faster response.

Time Frame — Customer’s mental model: usually 14-30 days out from when you need final pricing. ≠ Our product: allows starting the conversation much earlier, 60+ days out.
A thematic analysis of our customer’s mental model vs. how indications worked in our product
Conflicting Mental Models: Learning how brokers think and make trade-offs is a key part of my job. For example, brokers traditionally want to send insurance providers strong deals to gain preferential treatment in the future. If they send weak deals, it might damage their reputation. With APIs, they didn’t need to worry about this, because no humans are involved on the receiving end. But old habits may have caused people to avoid sending greater volumes through the email bot. Empathy means going beneath the surface to understand the oft-unspoken principles and values guiding people’s work.
Push/Pull or Forces Analysis
This is a great framework from the Jobs to Be Done field, which I use often to convey the motivations and obstacles of a decision gathered from interviews. It’s a type of mental modeling. I can’t share a real example, but here’s a great illustration (Source):
Customer Profile
I ran a workshop with Sales and CX to understand how we can share and divide the responsibilities for customer discovery among the team. We came to better understand what CX requires from Sales and what Sales is able to get from customers they speak to. I developed a framework for a “Customer Profile” with itemized criteria rated by importance and who is responsible. This eventually became a Google Doc form, and then later was merged into our CRM system. This profile goes beyond being a research technique. It determines how the company triages prospects, where we invest our efforts, and how we close gaps in understanding that impact everyone, from sales to product. Here’s a key gap that was identified: when brokers do not have enough experience with cyber, they are less ideal customers for us – this framework I helped craft reminds the team what to ask them, what to train them on, and whether they are a good fit in the first place:
Shipping To Learn / MVP
One needs to get the smallest viable solution into the hands of customers. Feedback based on real-world usage of a basic solution or a prototype provides more clarity than rigorous user testing of a complex solution in hypothetical situations. Shipping to learn helps tame the complexity that often bogs down teams trying to find their product-market fit. This involved breaking features down to understand what is absolutely essential:
Wizard of Oz & Other Lean Prototyping
“The Wizard of Oz test essentially tries to fool potential customers into believing they are using a finished, automated offering, while it is still being run manually.” – BMI Lab
I used this technique successfully to test a Gen AI solution. This had several advantages:
AI was rapidly changing but not yet ready; this technique allowed us to get ahead of the technology limitations and envision the future
We could collect more real-world data from customers (who submitted real documents, made real queries), which we could then use to evaluate the AI internally, safely
Building AI tools was a new skill for the team; this technique kept us from pulling development resources off other critical projects
Customers actually benefited from the test – when we ended the experiment after 2 weeks, they were eager to get it back. This was great validation for us.
There are great tools now that combine sharable prototypes with slides and surveys. This is especially useful for hallway testing remotely. I set up an experiment, present the context for the user, and then have them perform a set of tasks and answer some questions. The advantage of this over moderated testing is that they can do it on their own schedule. I would typically follow up with questions over email. The biggest thing I learned about running this kind of test is to test the test – roll out v1 to one or two people to catch any issues. You often need to add clarification, reword, or reorder steps before it’s ready to scale. This was particularly useful for quick hallway tests with SMEs.
Job Mapping Based on Strategyn’s “Outcome-Driven Innovation”
ODI breaks all processes down into basic phases, whether it’s surgery or selling a home. The activity’s outcome is defined in a standardized way like: “Minimize the time it takes to assemble a proposal to the insured”. This leads to a deep understanding of how the user completes their task. Customers can then be surveyed about their success with very well defined tasks, which leads to a quantified understanding of wider market needs. Here’s a sample activity:
Activity: Locate all the inputs needed for an application. The diagram below shows the tasks to accomplish that (based on customer interviews).
Situational Personas
I introduced the personas concept to a Fintech client to help them shift away from a focus on features and technical capabilities. I started by asking the client to itemize their audiences in a spreadsheet. Then we identified some “top needs”, based on their calls with customers and industry subject matter experts. Later, on another project, I asked them to write first-person user stories.
Over time, the requirements started to include more emotional and situational context. At that point, I started to distill insights into personas that focus on the user’s situation:
Context builds actionable empathy and elicits tons of new product ideas and marketing strategies. What seemed like one type of user falls apart into 3 different types of users with distinct, nuanced needs.
Voice of The Customer
This is a type of research that focuses on capturing and sharing words that customers say. On a more recent project, I defined target audiences through story telling rather than demographics e.g., a team large enough where poor intake clarity starts to cause task assignment confusion. I base the quotes on real interviews, so it resonates in our marketing. I create a specific contrast between before and after, so it’s clear what our product solves (a UI pattern that performed well in A/B tests in the past). The goal is to write copy that resonates with customers, because it’s what they themselves would say.
This perspective can be applied to other content, like this feature matrix:
Pre-2019 examples:
Mining Customer Reviews
For one client, I distilled dozens of pages of user reviews into a concise report with 18 themes (Affinity Mapping). The themes described in plain English what customers thought, supported by 200+ actual customer quotes. Here are some themes for their Error Tracking software:
Mapping User Flow / User Journey
I have experience mapping complex processes. In this example, I facilitated a session with SMEs to understand the key steps of a health care process, including the role of the existing IT system (SMEs are usually senior/former users or people intimately acquainted with the process). I broke the process down into higher level phases:
Next step would be to label this process with assumptions, problems, and opportunities (not shown).
Contextual Inquiry / Job Shadowing
I shadowed a health inspector and documented their process as multi-participant user flows:
I then analyzed these flows to figure out how the process could be improved through efficiencies and use of technology like tablets.
Here’s me touring a coal mine client to gain a deeper appreciation for the business and user context:
Funnel Analysis
For an online experience, I often started with a rough funnel, showing the customer’s journey backed up with visit & conversion metrics. I could identify points in the process where users give up or digress from the path to their goal.
Why isn’t this chart prettier? Because it’s not going into a magazine. It gets the job done, and then it’s not needed anymore.
This chart hinted at steps where the problem manifests. Similar to a sitemap, this could also help identify problems with the overall IA of a site (e.g., if there are lots of loop-backs or some pages aren’t being visited). Here, for example, I noticed that 50%+ of people drop off at the Tour, which suggested removing or improving the Tour. I also see there are many steps before actual enrollment. Some of my A/B tests tried different ways to move the Enrollment call-to-action higher in the process.
Listening In On Calls & Reading Emails / Chat Logs
There’s nothing like getting the user’s actual words, hearing the tone in their voice, reading between the lines. I’ve listened in on call center conversations to identify common customer questions and persuasive techniques used by CSRs. I’ve also read chat logs and emails to better understand what customers care about.
Remote Monitoring
I’ve used a number of screen replay tools to observe users and identify potential usability issues. To help sort out useful videos with this technique, I tagged various user events using URLs and custom JavaScript. That way I could find and observe problems like, say, only videos of users that completed a form but didn’t click submit:
Linking screen replays and analytics (e.g., Google Analytics) is a useful and cheap way to do usability testing, because you can define a behavior KPI and then filter videos showing that behavior e.g., someone going back and forth between screens or hesitating to click something.
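As a minimal sketch of that tagging approach (the form selector and tag name are illustrative, and tagSession() stands in for whatever tagging API the replay tool exposes):

```javascript
// Tag sessions where the user filled the form but never submitted it, so the
// replay tool can filter just those recordings.
const form = document.querySelector('#enroll-form');
let touched = false;
let submitted = false;

function tagSession(tag) {
  console.log('session tag:', tag); // in practice: the replay tool's tagging API
}

if (form) {
  form.addEventListener('input', () => { touched = true; });
  form.addEventListener('submit', () => { submitted = true; });
  window.addEventListener('beforeunload', () => {
    if (touched && !submitted) tagSession('form-abandoned-before-submit');
  });
}
```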
Heatmaps, Clicks, Attention Tracking
In one test, I used the heat map to corroborate our statistically strong finding. I tested a new home page with a simple gradual engagement element: a clear question with several buttons to choose from:
Our hypothesis was that gradual engagement would guide visitors better toward signing up. The heatmaps for the other variations showed clicks on top menu links and all over the page. In contrast, our winner showed clicks on just the buttons I added. There was virtually no distracted clicking elsewhere. This was reassuring. I also saw a pattern in the choices visitors were clicking.
Sometimes I wanted to test if visitors were paying attention to non-clickable content, like sales copy. One of the tools I’ve used was a script I created to track scroll position. By making some assumptions, I could infer how much time users spent looking at specific content of the site. You can see a demo here
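The scroll-tracking script worked roughly like this sketch; the data-section markup and the 1-second sampling interval are assumptions for illustration.

```javascript
// Sample the scroll position periodically and attribute time to whichever
// content section is currently in the viewport.
const timeInView = {}; // sectionId -> seconds

setInterval(() => {
  document.querySelectorAll('[data-section]').forEach(el => {
    const rect = el.getBoundingClientRect();
    const visible = rect.top < window.innerHeight && rect.bottom > 0;
    if (visible) {
      const id = el.dataset.section;
      timeInView[id] = (timeInView[id] || 0) + 1;
    }
  });
}, 1000);

// Periodically report the accumulated dwell times (here: to the console).
setInterval(() => console.log('dwell time by section (s):', timeInView), 15000);
```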
Self-Identification
Sometimes designs offer an opportunity to both test a hypothesis and collect data. For one project, I decided that, instead of emphasizing lowest prices (as the norm in that saturated market), I would emphasize our client’s experience dealing with specific real life scenarios (Authority principle). So I interviewed the client further to understand specific scenarios their customers might be facing (based on their own customer conversations and industry knowledge). Then I wrote copy to address customers in each situation:
Now, by giving clickable options, I could track which options users are clicking. Over time, I could learn which scenarios are most common and then tailor the site more to those users.
Use Cases
I interviewed subject matter experts to better understand the end users’ requirements. I summarized the interests of each user segment using a User-Goal Matrix, such as:
Here’s another example:
User Narratives & Storyboards
The key here is again to build empathy. “Need to do X” may be a requirement, but do X where, why?
Thinking through the plot of a user story helped put me in the shoes of a user and imagine potential requirements to propose to the client (like using Street View, above, or automatically finding similar restaurants nearby). You think “If that actually were me, I’d want to…”
On another project, I documented the user story as a comic book with 2 personas. Here a nurse visits an at-risk new mom and searches for clues she may be depressed and a danger to herself or her child:
Instead of a table of things to look for, the comic shows clues in context (curtains drawn, signs of crying, etc). This kind of deliverable is a good way to build empathy for the project’s users (nurses finding themselves in these situations) and the end recipients of the service (their at-risk clients).
Customer Interviews & Jobs Theory
I’ve interviewed software users and SMEs. I’m also familiar with consumer interview techniques, including Jobs To Be Done.
I use JTBD insights to shape how I define requirements with clients. JTBD theory argues that the user’s situation is a better predictor of behavior than demographic details. Its emphasis is on capturing the customer’s thinking in that precise moment when they decide to buy a product, especially when they switch products.
I’ve run A/B tests to compare large redesigns as well as smaller changes. Large redesigns only tell us which version is better, while smaller changes help pinpoint the specific cause (which tells us more about users):
As part of A/B testing, I tracked multiple metrics. That way I could say, for example, “the new page increased engagement but didn’t lead to more sales” or “we didn’t increase the volume of sales but order value went up”.
User Analytics & Segmentation
I used Google Analytics and A/B testing tools to segment visitor data. A classic case is Mobile vs. Desktop segments:
Another useful segmentation is New Visitors vs. Existing Customers, which I tracked by setting/reading cookies. I also segmented users by behavior e.g., Users Who Clicked a webpage element or hit a page.
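A minimal sketch of that cookie-based segmentation (the cookie name and the post-purchase URL that marks someone as a customer are assumptions):

```javascript
// Read/write a simple segment cookie so later visits can be segmented in
// analytics and A/B tests as New Visitor vs. Existing Customer.
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

function setCookie(name, value, days) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  document.cookie = `${name}=${encodeURIComponent(value)}; expires=${expires}; path=/`;
}

// Mark the visitor as an existing customer once they reach a post-purchase page.
if (location.pathname.startsWith('/account/welcome')) {
  setCookie('segment', 'customer', 365);
}
const segment = getCookie('segment') || 'new-visitor';
```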
I’ve done statistical and qualitative analysis of the data collected, teasing out relationships between various user behaviors:
User Testing
I’ve created test cases for moderated testing workshops. One type of user test provided detailed instructions for a user to follow.
“Log into the case dashboard. Find out if there are any cases that need to be escalated asap and flag them for the supervisor.”
Another type of test case posed open-ended goals to see if the user could figure out how to do it.
“You’re a data-entry clerk. You’ve just received a report from John in damage claims. What would you do next? Let the moderator know if you encounter problems or have questions.”
Or
“A cargo ship arrived into the East Port carrying 100 tons of fuel. When is the next train arriving and does it have enough capacity to transport this fuel to the Main Hub?”
I’ve observed users going about their task. The acceptance criteria included usability (is the user struggling to perform the task) and completion (is the user able to complete their job task). Users provided feedback verbally and using a template like this:
I’ve also created detailed training/test scenarios that closely mimicked real job conditions. Users had to successfully complete the tasks and confirm they matched reality.
Hands-On Hardware Research
On one project, I had to understand how to deploy touch-screens and new software across dental offices. I tested the touch-screens on site and interfaced them with the digital X-Ray equipment. I’ve also used:
This slide deck contains ~40 visual designs without case studies. It covers landing pages, B2C workflow examples, analytical dashboards. This is for those who just want to gauge my visual design experience.
Please request access via Google and I will add you promptly
This is a 2019 redesign of my very first mobile web app. Back in 2013, while working for the City of Toronto, I designed a basic app allowing you to look up a restaurant’s inspection status on the go. Unbeknownst to me, it was built and put into production! Here I revisit this old project. This is the updated concept.
Background
I was tasked with defining the requirements for modernizing the DineSafe program at the City of Toronto. The impetus came from a Public Opinion Poll about the program. Among the recommendations were improving the usability of the existing website and creating new channels for disclosing data to the public, such as making the raw data available and creating a mobile version of the existing site. I spoke with the business unit, who were directly involved in the public consultation. I defined a number of user and technical requirements.
Audiences
From my conversations with the client, I identified a number of audiences. It turned out that even internal staff were using DineSafe to look up data, due to the lack of a proper internal system.
The target audience for the mobile experience was casual restaurant goers who wanted to see if a restaurant has had health violations. It was decided to expose just this functionality through the mobile app.
Application Map
Prior to design, I did entity maps similar to these, which established the basic structure of the app:
I documented the data model in detail (not shown).
My Design in 2014 vs. 2019
Here’s the original design, intermediate idea, and latest iteration:
I did this as an exercise to improve upon my original design.
Instead of specialization, we should strive for some overlap. When engineers and designers share experiences, they develop empathy, which leads to clearer communication. This in turn improves outcomes.
Designer Saves The Day (True Story)
Sam was engaged on a renovation project, where the goal was to open up a high-traffic space in a house.
Sam was the “designer”, who drew up a detailed plan to remove a main load bearing wall that would meet the permit requirements. Sam also defined the functional requirements: the beam had to be concealed, the opening had to be this wide, and so on. Mike was the highly skilled “engineer” tasked with making it happen.
As Mike and Sam discussed implementation, Mike made specific decisions about materials and specific dimensions. A few compromises were made, but all looked doable, until Mike exposed the floor where the beam cross-support would go and realized the space contained an HVAC conduit that could not be relocated.
At this point, Mike the engineer sighed and said, “You know, this is a show stopper. I would reconsider removing this wall. We’ll have to move up past the vent and at that point, you’re not really removing that much wall.”
To this, Sam replied, “No, this wall is in the way. We can’t stop now. Let’s just think about it for a moment.”
Very quickly it occurred to Sam that they could build on top of the floor without impacting the vent. Sam described how it could be done. Apparently, this option didn’t occur to the engineer, because it’s not usually done this way and if they got a grumpy inspector, it would not pass inspection. So Sam vetted his idea with an architect, who highlighted the risks with doing that. Fortunately, Mike the engineer jumped in and offered solutions that would mitigate those risks. Sam then relayed this final plan to the inspector, who signed off on it.
And so Sam’s solution was implemented and the project was a success.
Mike was 50X more skilled than Sam. But two heads are better than one. Although Sam was a designer and project manager, he acquired some basic construction knowledge. This knowledge turned out to be critical in the successful implementation of his design. And his not being an expert was actually helpful, because he naturally thought outside the box.
The critical takeaway here was that BOTH the specialized skillsets of the engineer and the architect AND the designer’s basic technical acumen came together to produce the perfect solution.
Now let me tell you a different story.
Engineer Saves The Day (True Story)
Sam was involved in a bathroom renovation project. Since Mike the engineer was busy, Sam decided to enlist the help of a different engineer, named Anton. Anton said he required design drawings even for a small project. “You tell me what you need, and I’ll build it” was his motto.
So Sam decided to contract out the design to a dedicated designer, Julie. Julie came highly recommended. She looked at the existing layout and drew up several recommendations. Sam and Julie agreed on a plan to put the shower by the window, because it didn’t seem to fit anywhere else. Julie then produced detailed drawings for the engineer Anton. But at that point Anton was no longer available.
Luckily, Mike now was. Mike the engineer looked at the plans and immediately said it was no go. You can’t put a shower by the window, because obviously the water would go all over the window sill and it would be a mess in the long run. Julie or Sam were so focused on the paper layout that they overlooked the implementation.
So Mike the engineer thought it over for a day or two and proposed a completely different design, which moved everything around in a way that neither Julie nor Sam ever considered. It was a spectacular improvement. A bit more work but definitely worth it. Mike’s design was a rough pencil sketch on the wall. With that in mind, he successfully implemented the new design.
Most often designers are frustrated that engineers haven’t implemented their designs exactly on the first try. But that isn’t always a good expectation. I don’t always care to align my boxes down to the pixel or get the font colors and sizes perfect. I always expect the implementer to critically think about what they are building. The result need not be antagonistic. Here’s a sample tweet I just came across:
Building Rockets Iteratively
There are many examples of designers and engineers collaborating successfully.
If you get a chance to watch the documentary Cosmodrome, it’s an interesting story about how Soviets perfected a closed cycle rocket engine in the 70s. The U.S. thought it impossible and wasn’t even aware of it until the 90s.
The Soviets’ manufacturing and engineering process is a perfect case study in iterative design, prototyping, and collaboration.
The engineers drew up plans, and then planned a dozen test flights to iron out flaws. These were FULL flights, and they fully expected the first few rockets to explode. And they did. They even destroyed the launch complex and had to rebuild it to keep testing. Necessity was a factor in these decisions: the Soviets simply didn’t have the right test facilities, so they adapted. Whereas the Americans could test an engine without actually launching it, the Soviets had to do a full launch.
The Soviets learned from each failure and with each test, they refined the engine. This way they achieved something the Americans could not.
Their design method is particularly instructive. Whereas for Americans, design and build phases were separate, Soviet design engineers handed responsibility over their design to build engineers. The build engineers would take over and be free to iterate the design to make it work.
“That’s Not My Job”
I started out my career at a consulting firm as a Business Analyst working closely with other analysts and developers. This company called us renaissance consultants, and in fact all our official titles were plain “Consultant”. I remember a training presentation where it was emphasized that we should all do what is necessary. “If the trash bin is full, we can’t say That’s not my job”.
I believe this kind of environment allows bright individuals to thrive. It allows developers who have a flair for client-facing consulting to take active part in client meetings. It allows technically inclined designers/analysts like myself to pick up coding when needed. This allows the team to go beyond the requirements or design “hand-off” and instead work hands-on together on challenges.
Contrast this with experience all designers have had with *some* developers. Developers who convey “It’s not my job” by saying “Done!” without even bothering to load their work in a browser (they send you a link to a page that’s completely broken). Equally frustrating is the reverse experience with designers, who don’t consider mobile experience or feasibility at all.
What The Future Looks Like For IT
There are many trends in the industry that try to address this divide. The trend toward pattern libraries and design systems can help developers design. Abstraction layers like jQuery made JavaScript coding more accessible to designers. Prototyping software like Adobe XD allows designers to build sophisticated interactions without any coding and make it easier to share specs with developers. But the bigger problem is culture.
Still, I think there is room for optimism. As much as there is a trend to specialization, there is potential for cross-pollination. Modern solutions are simply too complicated for designers to remain oblivious about technology and, I will add, business.
Organizations need to embrace more fluid approaches to specialization. It allows individuals to utilize 100% of their potential, making them more valuable and more satisfied.
It also sends a message to educational institutions. For example, now that UX is coming into its own, there is a growing danger of creating new silos where none existed. The field was built by folks who’ve come from all sorts of diverse backgrounds, yet it might be adopted now by those who would enter the field through specialized programs.
If we are not careful, specialization will change things, and I don’t think it will be for the better.
P.S. As I come to UX from a Business Analyst role, I have a similar view of that divide. In fact, a large consultancy I talked to recently mentioned they were experimenting with joining their two departments. They are not alone. But that’s a story for another day.
Most things in life get done through trial and error, and digital products need to support that sort of fuzziness.
What scares people about technology is that it’s precise and uncompromising. A bank machine asks you for THE number — you can’t type anything else and nothing happens until you do. When you turn a dial on a washing machine, it does exactly and only what you choose. At the other extreme, automated systems take all control away from us.
The beauty of machines is that machines don’t lie. They just do what they are programmed to do. But that’s not how people interact, and that’s what makes machines hard to interact with. Sometimes we need to be guided even as we remain in control.
Many real-life decisions follow a similar pattern:
Julie tries decorating her apartment. She chooses some colors and pieces. She then hires a Designer to pull it all together. The Designer generates potential floor plans and finds photo inspiration. Julie provides feedback: more like this, less like that. She rejects or accepts ideas until the picture comes together.
Many products fail to find a place in our lives, because they try to be too simple, too smart. Perfect anticipation of user’s intent will likely never happen. Even my wife with her human brain and a decade of experience can’t always predict what I will like. So digital products should make educated guesses, but they shouldn’t expect to be right. Instead, technology should be optimized for making suggestions, making corrections, and listening for feedback.
Here are some patterns I’ve been paying attention to lately:
User Input Optimization
Basic ways to integrate lightweight guides into our workflow are found in features like:
“snap“ in drawing software gently shifts objects so they lie on a grid or align with guides or other objects; “auto-smoothing” makes lines straighter or curves smoother,
“quantization” in music sequencing software automatically aligns notes precisely to a rhythm (or conversely “swing” humanizes rhythms so they sound more natural); “pitch correction” corrects out of pitch singing even in real time; “compressors” or “limiters” remove outliers to maintain a consistent loudness
In these cases, software smooths the rough edges of the user’s input through very light automation, which can be enabled or disabled easily.
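As a toy illustration of the idea, a “snap” can be as simple as nudging a value to the nearest grid line only when it’s already close (the grid size and tolerance here are arbitrary):

```javascript
// Nudge a raw user value onto the grid only when it's within tolerance,
// so light automation helps without taking control away.
function snap(value, gridSize = 8, tolerance = 3) {
  const nearest = Math.round(value / gridSize) * gridSize;
  return Math.abs(value - nearest) <= tolerance ? nearest : value;
}

snap(62); // -> 64 (within tolerance of a grid line, so it snaps)
snap(52); // -> 52 (too far from the nearest grid line, left untouched)
```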
Undo (Your Best Friend)
Undo is a powerful feature, which allows users to generate randomized content manually. Randomness is a key part of my creative process.
A few years ago, I created several comic strips for the Suicide Prevention program at the City of Toronto. Although I know nothing about drawing human faces, I was able to create characters real enough to convey emotion. Here is Ida, one of the fictional people at risk, opening her door cautiously:
Ida, a character I created using trial and error in Illustrator.
Ida and the rest were created purely through trial and error. I drew randomized shapes and hit undo a lot, until I assembled a realistic face.
Multiple Takes (Undo’s Big Brother)
The opposite feature is the ability to generate lots of content quickly to be sifted through later. For example, I rely on my ability to take hundreds of shots on my camera, to increase the chance that more will be in focus and that some may even contain pleasant surprises I can extract later in Photoshop.
Multiple Presets
“Smart default” is a great way to make an interface easier to use, but why does it have to be just one default? When you open visual effect filters or audio effect plugins, each filter has a default setting. Often there is just the default, and sometimes it’s not so great. A better case is when multiple presets are offered in a pull-down, so I can flip through to see what the filter is capable of. But an even better feature would be to offer an intelligently random set of presets.
Some products make users do work, for example, to categorize their content like emails or photos… It is common for software to start with a suggested categorization and force it on the user — remember when Gmail in 2013 rolled out an automatic and hugely unpopular categorization of email into Social, Primary, etc.? Is there a way to quickly generate multiple suggestions based on the user’s own activity and let the user choose?
Randomized Content Generation
Users of complex digital products can’t always express their full potential, because their technical skills are limited. I already showed how I use undo and multiple takes to overcome my limitations.
The key insight for me is that when users hit the limit of what they can create intentionally, they are still able to recognize if shown some options. Besides, people enjoy surprising themselves. They don’t always want predictable outcomes.
The Nord Lead 4 synthesizer has a Mutator function, which takes a seed sound and changes it in slight or major ways. I’ve come to depend on this feature in my creative process. It works on demand — meaning it’s easy to access, so I can easily trigger it manually.
Mutator feature on the Nord Lead 4 synthesizer
There are 3 types of randomization and 5 levels of randomization strength. I can hear endless variations on a single sound (similar to the many-takes pattern), evolve a sound gradually, or create a completely random sound.
Here is a sample of gradual mutation followed by a big mutation at the end:
What makes the Mutator effective for me is that the Nord Lead
chooses optimal parameters to randomize (bounded randomness), so most generated sounds are viable,
allows fluid change from manual editing to mutation — I can generate a very random sound, then tweak an aspect I don’t like manually, then randomize slightly
To me it feels just like asking the device for suggestions. It’s not intrusive.
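A toy sketch of what “bounded randomness” means in practice: mutate only a whitelist of safe parameters, by an amount scaled by a strength level. The parameter names and ranges below are made up for illustration.

```javascript
// Only a whitelist of parameters is randomized, and each stays within bounds,
// so most generated results remain viable.
const MUTABLE = {
  filterCutoff: { min: 0, max: 127 },
  resonance:    { min: 0, max: 127 },
  attack:       { min: 0, max: 127 },
};

function mutate(patch, strength = 1) {
  const next = { ...patch };
  const maxDelta = strength * 8; // stronger mutation = bigger nudges
  for (const [param, range] of Object.entries(MUTABLE)) {
    const delta = (Math.random() * 2 - 1) * maxDelta;
    next[param] = Math.min(range.max, Math.max(range.min, Math.round(next[param] + delta)));
  }
  return next;
}

// Evolve a sound gradually (small steps), or jump with a big mutation:
let patch = { filterCutoff: 64, resonance: 30, attack: 10, osc1Wave: 'saw' };
patch = mutate(patch, 1); // slight variation
patch = mutate(patch, 5); // big, but still bounded, change
```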
In Closing
I would like to see more light automation and more explicit support for trial-and-error in software.
In the future, I can imagine my sketching app recognizing that I am trying to draw a person. The app will automatically fix and elaborate the sketch, using my own line style. It will even be able to ask me, “Hey, how about this?” My writing app will offer alternative ways to arrange my content and offer better headlines to choose from.
That said, I have two concerns:
Products that try to be too smart often fail to listen to feedback. Products should have humility as a feature. How do we create digital collaborators and partners rather than intrusive automatons? Past attempts to do this failed miserably. Remember Clippy from MS Office? Many people were frustrated with the Nest thermostat. And so on.
Are there good working patterns for a digital product to offer content and ask for feedback without obstructing the user’s workflow? Most cases of such interaction today are intrusive and built on selling the user something, not on truly being helpful. How do we avoid unwanted help?
Let me know your thoughts and please share examples.
What if all your navigation items don’t fit onto one line on mobile? In the screenshot below, if Habanero opens another office, it won’t fit:
Pattern 1: Horizontal Swipe
You can let the choices go off-canvas. You’ll find this in use on the web and in native apps.
Make sure to hint at the concealed content. Adjust spacing to ensure the last item is cut off:
Google adds a fade effect as an added cue, which also makes the cut-off item less awkward:
Strymon adds a fade effect AND a small arrow for clarity. Moreover, they turned the links into buttons, which makes it easier to see they are cut off (not a video):
This pattern isn’t useful just on mobile. LinkedIn uses this pattern in their Create Post pop-up, where width is restricted and the concealed tags are low priority:
Samsung adds motion as an affordance. In the video below, notice that it’s important to auto-pan the selected item into view after a page redirect:
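If you build this pattern yourself, the auto-pan behavior can be a small script run on page load; the .scroll-nav and .is-selected selectors below are hypothetical placeholders, not Samsung’s actual markup:

```javascript
// Sketch: after navigation, pan the currently selected item into view so the
// user can see where they are in the swipeable menu. Selectors are placeholders.
const selected = document.querySelector('.scroll-nav .is-selected');
if (selected) {
  selected.scrollIntoView({ behavior: 'smooth', inline: 'center', block: 'nearest' });
}
```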
This diversity and number of cues highlights the main disadvantage of this pattern: discoverability. A user may not recognize a swipeable menu and may miss out on the concealed choices.
Pattern 2: Hide Low Priority Items
A common pattern is to hide complex navigation behind a hamburger icon or icon + label. You should avoid hiding important choices if possible. It may be appropriate if the menu is redundant and the high priority options are prominent in the body of the page.
Try a hybrid approach to achieve a balance between discoverability and layout constraints.
CASE STUDY: Interaction Design Foundation Home Page
IDF decided to hide all its links, including “UX Courses”. The courses offered are listed in large cards way down the page. So at first blush, it’s not obvious what courses they offer, although it’s the primary interest of visitors.
Here’s how the page looks now:
SOLUTION 1: Move Courses Higher
Here we leave navigation unchanged but move the relevant content above the fold:
SOLUTION 2: Expose Top Priority Menu Item
Here we leave the page unchanged but surface the UX Courses as a button:
Another hybrid example
Vivobarefoot exposes the high-level filters but conceals more precise filters behind an icon. This is better than hiding all filters, but the “filter” row is empty, so they should use the available space to expose 1 or 2 of the most frequently used precise filters:
Pattern 3: Simplify and Fit
You can save some space by using icons without labels, but you should avoid pure icons if possible. When you choose this approach, be pragmatic. Certain icons like Home are safe, while others are less recognizable. I use YouTube every day, but I don’t know what the flame icon represents:
Simon Cowell’s gesture symbolizes the idea that if a user doesn’t know what something means, it’s as if it’s not there.
You can fit links by grouping them under pulldowns:
Another approach is to shorten text labels on mobile. Here is a desktop nav I did for a client:
On mobile, I kept just the key words:
Here’s how the code for this looks: <a>Promote <span class="mobile-hide">Your Listing</span></a>. A media query hides the .mobile-hide class on narrow screens. Note that assistive technologies may still be able to read the full label depending on how you hide it (e.g., if you make it 0 width and height instead of using display: none).
Sometimes the links are just too long and won’t fit. In that case, you can let them overflow:
On that note, let’s talk about my preferred approach.
Pattern 4: Let It Overflow (Counter-Pattern)
The approach I generally prefer is to not hide anything. As you can see in the previous example, I let the links overflow on mobile. I want them all visible at a glance as an overview.
If you visit my site on mobile, you’ll notice my sitemap is fully exposed right at the top to orient visitors to what I’m about:
On the inner pages, I’ve removed low priority items, but I still let items overflow onto a second row:
This works for a limited number of links, 3 rows at most. I do the same thing for inline tabs:
When I worked on GoodUI.org, we usually exposed links in the body and let users scroll through them normally. 100% discoverable:
Sometimes you have tabs that are not links but switch up some text in-page. If the tabs take up a lot of height, the user may not be aware that the text below has changed. In those cases, checking if text is out of view and auto-scrolling to it may be an option.
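As a rough sketch of that check, assuming hypothetical .tab and .tab-panel elements:

```javascript
// Sketch: when a tab is clicked, check whether the panel whose text changed is
// actually visible; if it sits below the viewport, scroll it into view.
function revealPanel(panel) {
  const rect = panel.getBoundingClientRect();
  const isVisible = rect.top >= 0 && rect.top < window.innerHeight;
  if (!isVisible) {
    panel.scrollIntoView({ behavior: 'smooth', block: 'start' });
  }
}

document.querySelectorAll('.tab').forEach(function (tab) {
  tab.addEventListener('click', function () {
    revealPanel(document.querySelector('.tab-panel'));
  });
});
```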
Wrapping Up: Best Practices
Keep choices exposed whenever possible to ensure discoverability
Try simplifying to fit the choices on one line by using icons carefully, simplifying text labels, or grouping (simplify without sacrificing function and clarity)
If you rely on horizontal swipe, use fading, cut-off text / buttons, and motion to increase discoverability
If you decide to hide choices, use available space to keep high priority choices exposed in the nav area or the page body
Don’t be afraid to give navigation a large treatment if it makes sense
What can I do to persuade more people to buy your product online? I tackled this question for 5 years as I ran A/B tests for diverse clients.
I remember one test idea that everyone on the team loved. The client said “That’s the one. That one’s totally going to win.” Well, it didn’t.
The fact is, most A/B test ideas don’t win.
In fact, interpretation is tough, because there are so many sources of uncertainty: What do we want to improve first? Which of a hundred implementations is a valid test of our hypothesis about the problem? If our implementation does better, how statistically reliable is the result?
Is our hypothesis about the users actually true? Did our idea lose, because our hypothesis is false or because of our implementation? If the idea wins, does that support our hypothesis, or did it win for some completely unrelated reason?
Even if we accept everything about the result in the most optimistic way, is there a bigger problem we don’t even know about? Are we inflating the tires while the car is on fire?
If you take anything away from this, take this analogy: inflating your car tires while the car is on fire will not solve your real problem.
I believe the most effective means of selling a product and building a reputable brand is to show how the product meets the customer’s needs. This means we have to know what the customer’s problem is. We have to talk to them.
Then if we run an A/B test and lose, we won’t be back to square one. We’ll know our hypothesis is based in reality and keep trying to solve the problem.
Emulating Competitors
“I heard lots of people found gold in this area. I say we start digging there!”
That actually is a smart strategy: knowing about others’ successes helps define the opportunity. That’s how a gold rush happens.
This is why A/B testing blogs are dominated by patterns and best practices. So-and-so gained 50% in sales by removing a form field… that sort of thing. Now don’t get me wrong: you should be doing a lot of those things. Improve your value proposition. Ensure your buttons are noticed. Don’t use tiny fonts that are hard to read. You don’t need to test anything to improve, especially if you focus on obvious usability issues.
So what’s the problem? Well, let’s go back to the gold analogy. Lots of people went broke. They didn’t find any gold where others had or they didn’t find enough:
“The actual reason that so many people walked away from the rush penniless is that they couldn’t find enough gold to stay ahead of their costs.” ~ Tyler Crowe Sept. 27, 2014 in USAToday
You could be doing a lot of great things, just not doing the RIGHT things.
The good thing is many people do some research. The problem is they don’t do enough of it, or not directly enough. They are still digging in the wrong place.
“If I had only one hour to solve a problem, I would spend up to two-thirds of that hour in attempting to define what the problem is.” ~ An unknown Yale professor, wrongly attributed to Einstein.
Think about this for a moment: How can you sell something to anyone when you’ve never talked to them or listened to what they have to say?
Product owners often believe they know their customers, but assumptions usually outnumber verifiable facts. Watching session playback can hint at problems. Google Analytics gives a funnel breakdown, but it doesn’t give much insight into a customer’s mind. It’s like trying to diagnose the cause of indigestion without being able to ask the patient what they had for dinner or if they have other more serious health complaints.
The problem is it’s all impersonal, there’s no empathy. There’s no “Oh man, that sucks, I see how that is a problem for you”. It’s more like “Maybe people would like a screenshot there. I guess that might be helpful to somebody”.
Real empathy spurs action. When you can place yourself in your customer’s situation, you know how to go about helping them. If your solution doesn’t work, you can try again, because you know the problem is real rather than a figment of your imagination.
A Pattern Is A Solution To A Problem
Therapist: “Wait, don’t tell me your problem. Let me just list all the advice that has helped my other patients.”
Let’s say some type of visual change has worked on 10 different sites. Let’s call it a pattern.
A pattern works because it solves some problem. So choosing from a library of patterns is really choosing the problem you have. You don’t choose Tylenol unless you have a headache or fever. You don’t choose Maalox unless you have indigestion.
If you know what YOUR problem is, you can choose the right patterns to solve it.
If you don’t know the problem, you won’t get far choosing a pattern because it’s popular, because of how strongly it worked or how many people it has worked for. That’s like taking a medication you’ve never heard of and seeing what it does for you.
Pattern libraries are great for when you have a problem and want a quick, time-tested way to solve it:
Research Uncovers The Problem: A Short Story
Say you’re a shoe brand. You decide to reach out to people who are on your mailing list but haven’t purchased yet.
So you send out a survey. Within the first day, it becomes clear that many people are avoiding buying your shoes, because they’re not sure about sizing.
You’re shocked, but you shouldn’t be. User research insights are often surprising.
It’s just that you thought you anticipated this by posting precise measurements, a great return policy, and glowing testimonials. If anything, you thought people would mention the price, but no one so far mentioned price.
That’s a big deal for your product strategy. You need to build trust. So you set aside your plans for a full redesign (those fancy carousels on your competitor’s site sure are tempting). You set aside A/B test ideas about the font size of prices, removing fields, and so on.
You tackle the big problem. You do some research and come up with solutions:
match sizing to a set of well known brands
provide a printable foot template
allow people to order two sizes and return one
mail out a mock plastic “shoe” free of charge, and so on…
You ask a couple of people to come to the office and try some of your solutions.
Your user testing methodology is simple: First people pick their size based on either the sizing chart or template. Then they see if the real shoe fits.
Result? The matched sizing and the foot template were both effective at predicting fit. However, the initial template didn’t work so well in user testing, because it’s hard to place a 3D foot in perfect position on a 2D printout. So, you come up with a template that folds up at the back and front, simulating a shoe. The users liked that much better. In fact, you start working on a cardboard model you can mail cheaply to anyone who requests it.
Now you’re off to testing it in the real world!
You design 2 different foot sizing comparisons, one pretty one with photos of top 3 brands and one long, plain table with 20 different brands. You also create an alternative page that links to the downloadable foot template.
You A/B test these variants over 2 weeks and pick the one that works.
(Then you go back to your research and find the next problem.)
Comparing an interaction to its real-life equivalent can be a useful test of how intuitive it is. Does it match your real-life mental model?
Example 1: In-Game Inventory Management
Imagine you’re scavenging in a post-apocalyptic city. Your bag is full. Get to a safe place. Lock the door. See what you have.
What do you do?
I bet you dump it out on the floor in front of you.
But in the Fallout games, you pull out an alphabetical list with obscure names:
Now, you could get fancy with mimicking a real-life experience. Instead, what if we had a simple gallery that we can reorder by dragging, like this rough mock-up:
You can now see how many pistols you have, which weapons are bigger… all at a glance.
It’s better to use an existing intuitive cue (like size) than to create a new abstract cue (like range). What if more powerful weapons were always beefier? What if the most accurate rifles were longer?
In the game, there is little correlation between a weapon’s size or visual impressiveness and its damage. You’d think ANY gun would finish an enemy with just a shot or two at close range, but that’s not the case. I don’t know which weapon can incapacitate a raider with a single shot at close range. If I did, the choice of which pistol to carry would be less intimidating.
Example 2: Storage Metaphors
Here’s another example. In the Farmville game, the user can buy all sorts of equipment, seeds, etc. But getting to the inventory requires menu diving.
Where’s my stuff?
Well, in real life, my stuff would be in my storage shed. So let’s add a shed:
Example 3: Character Interaction
In one aquarium simulation game, the fish swim around, and you have to feed them and buy stuff for the aquarium. I found it unexciting.
I once had a real fish (a rescue), and my real-life experience with him was quite rewarding. He saw me and followed me when I entered the room. He was at times curious, lethargic, startled, cozy… Why not model the AI of the fish to simulate some of these real-life behaviors? Wouldn’t that engage users more?
There are many situations where comparing to the real-life equivalent can generate solutions to UI problems.
Meaning is not in the words — it’s in the total situation. – Ronald Langacker
To know if we created a great product, we need to test the User Experience beyond the screen:
Level of test: Individual screens
Types of stories: Short-term usability stories about UI-level problems. “I wanted to buy the product but couldn’t find a Buy button.”

Level of test: Flow in context
Types of stories: User Experience stories that show whether the product can successfully do the job it was hired for. “I avoided using the medical software, because it forced me to turn away from my patient.”

Level of test: Usage over time
Types of stories: Full User Experience stories that show how well the product works as the details of the job change over time. “The gorgeous curved screen design that I loved at first caused the screen to break a few months later, which cost me $400 to repair.”
Visually polished and usability-tested screens can still lead to a failed product experience in the long run.
Case Study: Inventory System in Fallout 4 Game
The inventory gets long and hard to manage as you pick up tons of items. For example, it’s hard to compare weapons, because you can only see stats for one at a time:
But these sorts of usability-level issues are easiest to fix. For example, as a workaround, this user prefixed the weapon names with useful stats. This makes it possible to compare items at a glance.
However, fixing screen-level problems is small potatoes in comparison to the larger issues that hurt the game. Here are user stories that evaluate the inventory system at different levels:
Example: Choosing the best apparel
Screen Level
Problem: I can apply clothing in inventory mode A by clicking it, but how do I apply apparel while in inventory mode B? Clicking in mode B sells apparel instead.
What happened? My workaround is to exit the trade dialogue and go into inventory mode A. I then use the body chart there to see what I’m wearing now and which new apparel is superior. I click the right apparel to apply it and go back into the trade dialogue and click the apparel I’m no longer wearing to sell it. However, I sometimes nearly sell an item by mistake when I reflexively click it to apply it.

Context & Time Levels
Problem: What kind of apparel is going to keep me safe?
What happened? I’ve spent hours wondering which items to carry, figuring out where to stash inventory when I had too much to carry, comparing items, and agonizing over which to sell. In the end, I mastered the inventory UI, but my overall UX was poor.
I expected my diligence to pay off, but that optimal weapon I handpicked still couldn’t kill the next enemy and despite all the hoarding and trading I still couldn’t afford the best apparel.
At times I was paralyzed, even quit the game, when I found something important and had to figure out what to drop to make room. I’ve found myself not wanting to go into a new building, because finding new things had become a burden.
You can clearly see which user stories affect the User Experience more profoundly.
There are issues of both clarity and meaning. Should I use the gun with 100 accuracy & 30 damage or the gun with 200 accuracy & 10 damage? What’s the difference between “accuracy” and “range”? Is a “fire rate” of 6 good or bad? These questions are frustrating, but the bigger UX problem is why any of this matters in the first place.
How does a feature translate into outcomes the user cares about?
If we can’t answer that, we have more than just a usability problem.
Case Study: Context for Medical Software
There’s a great story about software failure in Clayton Christensen’s Competing Against Luck:
We’d designed a terrific software system that we thought would help this doctor get his job done, but he was choosing to ‘hire’ a piece of paper and pen instead…
Why? The design team overlooked the situational and emotional context:
“As [Dr. Holmstrom] began to discuss Dunn’s prognosis, he grabbed a piece of paper to sketch out, crudely, what was wrong with Dunn’s knee and what they could do to fix it. This was comforting, but puzzling. Dunn knew there was state-of-the-art software in that computer just over Holmstrom’s shoulder to help him record and communicate his diagnosis during an examination. But the doctor didn’t choose to use it. “Why aren’t you typing this into the computer?” Dunn asked.
…The doctor then explained that not only would typing the information into the computer take him too much time, but it would also cause him to have to turn away from his patient, even just for a few moments, when he was delivering a diagnosis. He didn’t want his patients to have that experience. The doctor wanted to maintain eye contact, to keep the patient at ease, to assure him that he was in good hands…”
Case Study: Samsung Galaxy Edge
The Samsung S7 Edge phone was very slick with its curved edge. But it turned out that this design choice made the phone hard to protect. Flat screens allow protective cases with higher sides that rise over the screen. Cases for curved screens rise just barely above it. Even with a high quality case, this screen cracked along the curved edge (and it happened twice)!
If we look beyond the first experience and aesthetic factors, we see a very different User Experience story.
The cost of repair was $400 the first time. The second time, I had to replace the perfectly functional phone, which didn’t suit my ecological values.
Sadly, Samsung appears to have standardized this design. I suspect it’s even financially lucrative, due to demand for pricey replacement parts or replacement devices.
Case Study: Fitbit Dashboard
In The Big Book of Dashboards, Steve Wexler describes how his experience with his Fitbit changed over time:
“After a while, I came to know everything the dashboard was going to tell me. I no longer needed to look at the dashboard to know how many steps I’d taken. The dashboard had educated me to make a good estimate without needing to look at it. Step count had, in other words, become a commodity fact. I’d changed my lifestyle, and the dashboard became redundant.
Now my questions were changing: What were my best and worst days ever? How did my daily activity change according to weather, mood, work commitments, and so on? Fitbit’s dashboard didn’t answer those questions. It was telling the same story it had been telling on the first day I used it, instead of offering new insights. My goals and related questions were changing, but Fitbit’s dashboard didn’t. After a year, my Fitbit strap broke, and I decided not to buy a replacement. Why? Because the dashboard had become a dead end. It hadn’t changed in line with my needs.”
So there wasn’t anything wrong with the dashboard in 2 dimensions, but its usefulness wasn’t constant along the dimension of time.
How to Uncover the User Experience Story
Time is a necessary dimension of user testing. Retrospective interviews and delayed customer feedback help reveal the total story. You want to know how the customer enjoyed the shopping experience or their first time playing a game. Then you should follow up to see how they feel weeks later.
You can get more insight into how the story unfolds through something like a diary study, which allows users to keep track of their usage at their own pace. They can capture one-off occurrences and experiences that might at first seem insignificant and would get lost otherwise.
Here are 15 techniques I extracted from the Jobs-To-Be-Done interview Bob Moesta’s team did with a camera customer (link at bottom):
Set expectations
Give an introduction to how long the interview’s going to take and what sorts of things you’re interested in. For example, “even minor details may be important”.
Ask specific details to jog the customer’s memory
Don’t just ask what the customer bought but why that model, which store, what day, what time of day, were they in a rush…
Use humor to put the customer at ease
Intentionally or not, early in the interview the whole team had a good laugh about something the customer said. I think it did a lot to dull the edge of formality.
Discuss pre-purchase experiences
Ask what the customer used before they bought the product and what they would use without it. Dig into any “I wish I had it now” moments prior to the purchase.
Go back to the trigger
Walk back to what triggered the customer to even start thinking about buying the product, and to a time before they ever considered it.
Get detailed about use
Interviewers and the customer talked about how she held the camera, which hand, in which situations she used it, which settings she used, and advantages/disadvantages of the alternatives. You want the customer to remember and imagine the product in their hands. Things like the weight or texture of the product could impact the user experience. Dismiss nothing.
Talk about lifestyle impact
Dig into ways in which the product impacted the customer’s lifestyle, things they were/are able or unable to do. For example, they talked about how taking pictures without the camera affected the way she presented her trip photos to her sister. Focus on the “use” rather than the specific “thing”. For example, you can ask “do you like this feature”, but then you want to move to “what does this feature mean to you in terms of what you’re able to do, how it affects your lifestyle, your future decisions”.
Explore product constraints
Talk about how other decisions and products impacted the decision. For example, the size of the bag that has to fit the camera, and avoiding the slippery slope of requiring additional accessories.
Ask about alternatives
Products don’t exist in isolation. The customer had several other solutions, which serve different, specific purposes. Figure out whether the new product will replace or complement other products.
Point out inconsistencies, such as delays
Interviewers pointed out that the customer waited a long time to buy the product from the initial trigger to making the call after a trip. They asked “Why did you wait so long?”
Talk about the influence of other people
Ask about advice other people gave the customer or how other people may be affected by the decision.
Don’t put words in their mouth
In digesting and summarizing back to the customer, it’s easy to inject your own conclusions and words. Try to elicit attitudes and conclusions from the customer. Lead them to it but don’t do it for them (a related technique is to start talking and then leave a pregnant pause, so the customer can complete the thought). In one clear case in the camera interview, the interviewers asked a leading question but then promptly noticed this and corrected themselves, saying “Don’t use his words”.
Talk about the outcome
Ask open-ended questions about whether the customer was happy with their purchase and in what ways. Ask about specific post-purchase moments when the customer felt “I am glad I have it right now”, but focus on how the situation is affected, not the product itself.
Here are some additional techniques I considered after listening to the interview:
Avoid fallacy of the single cause
Don’t push the conversation towards a single cause (see Fallacy of the single cause). Rather than engage in cause reductionism, accept there may be multiple, complex causes.
Let’s say you pose the question: “Joe said that, and so you decided to buy X?” The simple narrative may be intuitive, causing the subject to be persuaded that “Yes, I guess that is why I decided to buy X”. The events may be true (Joe did say that) but unconnected. In these cases, it’s important to point out inconsistencies rather than seek confirmation. For example, in the camera interview the interviewer rightly pointed out an inconsistency: “Why did you wait so long to buy X after he said that?” They also often asked “What didn’t you…” Work together to uncover the truth.
Beware planting false memories
Do not reflect back your own sentiments or ideas to the interviewee when clarifying. For example, asking people to confirm something they did not literally say may cause them to confirm a causal relationship that did not happen (other cognitive biases may aid this: pleasing the interviewer, tendency to fall for reductionism). It may plant a subtle attitude that might then be amplified through the course of the interview. Also be careful with “because” statements, as there is some evidence that we are biased to accept such explanations even when they are irrational (see The Power Of The Word Because).
More on the possibility of implanting false memories: Video 1 and Video 2.
Developed case studies for a financial news site in order to clarify its value proposition and increase subscriptions.
The Original
No clear value proposition – there are many ways to get similar information elsewhere.
The original graphs are too small and don’t make it obvious what happened, when, and what the user can learn from the example.
The case study hides the details behind a “Learn More” button, forcing users to do more work, instead of letting the case study itself do the work of persuading visitors.
To address this, I developed the improved case studies, wrapping the top benefit of the product in a simple narrative with a simple visualization.
Concepting and Collaboration
I always start on paper and get alternative viewpoints from others whenever possible.
I did some rough paper sketches to capture my idea:
My colleague (@jlinowski) did his own sketch:
My Solution
A new simple case study looked like this:
I also took the opportunity to show a multi-step scenario that involves both buying and selling:
These were based on my domain research and interviews with the business owner to learn past success stories.
I designed and wrote 3 strong case studies to improve:
Value proposition: New case studies better convey that early info = profit, which is the key benefit of this subscription. All examples are real, dead simple, and recent. The “How To” wording in the headline also helps convey that these sample strategies are repeatable.
Visual hierarchy: The diagram is larger to emphasize the importance of this section to a prospective customer and reduce distractions.
Usability: I simplified the diagrams and added clear annotations to show what happened, when, and why it’s important.
Transparency: Trying to anticipate visitors’ questions, I lead with a concise screenshot of the actual news feed, so it’s clear what info led to the decision being described.
Measurement
We A/B tested other aspects of the home page redesign. We agreed not to A/B test the case studies, because (1) the team agreed they were an improvement (low risk) and (2) testing would impact other business priorities. After our initial engagement, the client reached out to create several more case studies, based on positive feedback to the ones we originally launched.
Recency: <2016
Role: Product designer, web developer
Collaboration: Solo design with lots of feedback from a colleague
Background
The Visual Website Optimizer (VWO) dashboard shows the performance of page variants in real time. The original dashboard omitted many key metrics and did not provide sufficient guidance to users (based on my conversations with clients):
How’s the test doing now? Any early indications?
When do we have enough data to stop? What are the risks and trade-offs to stopping now?
What’s on deck to be tested next?
What I Did
I put together some tools to help solve these problems for me and my clients:
A statistical library in JavaScript focused on A/B testing
A Greasemonkey script to add missing metrics and rules to VWO (a skeleton sketch follows this list)
Email status updates using PHP and VWO’s API
A landing page explaining the free tool’s benefits
A project management tool to track ideas
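For context, here is a hypothetical skeleton of what such a Greasemonkey user script can look like. The metadata header is the standard userscript format; the URL pattern and DOM selectors are placeholders rather than VWO’s real markup:

```javascript
// ==UserScript==
// @name   VWO Dashboard Add-on (sketch)
// @match  https://app.vwo.com/*
// @grant  none
// ==/UserScript==
// Hypothetical skeleton: read the visitor and conversion counts already rendered
// on the dashboard, compute an extra statistic, and inject it into the page.
// The @match URL and the CSS selectors are placeholders, not VWO's real markup.
(function () {
  document.querySelectorAll('.variation-row').forEach(function (row) {
    const visitorsEl = row.querySelector('.visitors');
    const conversionsEl = row.querySelector('.conversions');
    if (!visitorsEl || !conversionsEl) return;

    const visitors = Number(visitorsEl.textContent.replace(/\D/g, ''));
    const conversions = Number(conversionsEl.textContent.replace(/\D/g, ''));
    const rate = conversions / visitors;

    // Half-width of a 95% confidence interval for the conversion rate
    const marginOfError = 1.96 * Math.sqrt((rate * (1 - rate)) / visitors);

    const note = document.createElement('span');
    note.textContent = ' ±' + (marginOfError * 100).toFixed(1) + '%';
    row.appendChild(note);
  });
})();
```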
Marketing Focused On Benefits
I created a page to clearly explain the top 3 problems I’m trying to solve:
Enhanced VWO Overview
The original dashboard started with an overview, which showed the relative performance of each version:
The problem was:
No indication of the statistical significance of the results
Hard to compare bars as performance differences narrowed over time
I enhanced the overview with:
Worst case scenario: Vertical line to easily compare versions
Margin of error: T lines to show margin of error
Statistical confidence: Added p-value statistic
Confidence lines at the top of each bar show uncertainty. I drew a vertical line to represent the maximum estimate of V1 (the Control). Now it is easy to see that even if the true performance of V1 is at its maximum, the lowest estimates for the other versions still outperform it. This is very good.
I added a p-value, which is a standard way of measuring the strength of results. Normally you can’t show p-values like this in real time, but there were various reasons I did so here.
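For illustration, here is a simplified JavaScript sketch of the kind of calculation behind such a p-value, using a standard two-proportion z-test; it is not necessarily the exact algorithm my library used.

```javascript
// Simplified sketch of a two-proportion z-test, a standard way to compare the
// conversion rates of a control and a variant. Not the exact library code.
function normalCdf(z) {
  // Abramowitz-Stegun style approximation of the standard normal CDF
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-z * z / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

function abTestPValue(convA, visitorsA, convB, visitorsB) {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  return 2 * (1 - normalCdf(Math.abs(z))); // two-sided p-value
}

// Example: 200/5000 conversions on the control vs 260/5000 on the variant
console.log(abTestPValue(200, 5000, 260, 5000)); // ≈ 0.004
```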
Enhanced Main Dashboard
The original dashboard looked like this:
The problem was:
No indication of current false positive and false negative risk
No margin of error for the improvement
“Chance to beat” was not always reliable
No indication of how much longer to go
Over multiple iterations, the dashboard looked like this:
I made a number of improvements here:
Labeling: I back-calculated VWO’s margin of error and discovered it was based on a confidence level lower than the standard 95% (only 75%). I clearly labeled this.
Added confidence interval: I used 99% confidence intervals to be extra conservative and allow for the statistical laxness introduced elsewhere to keep the tool user-friendly. Now users could see a range of uncertainty instead of a single value.
New confidence indicator: I replaced the “Chance to Beat” with my own “Actual Confidence”, based on my own algorithm. Users could hover over the values to see what they mean.
Sample size guide: I tried to estimate how much longer a test had to run (a rough sketch of such an estimate follows this list). When users hovered over the icons, they could see an explanation and a recommendation in plain English. I also applied many rules in the background to show context-specific messages, e.g., if visitors were under some best-practice minimum.
Test metrics & risk: I added holistic metrics, showing time elapsed and estimated weekly test traffic. I also quantified the false positive risk, taking into account the number of variants being tested.
External calculator link: I provided a link to an external calculator that would allow users to manipulate the data and add special “corrections” not available in VWO.
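As referenced in the sample size guide above, here is a rough JavaScript sketch of how such an estimate can work, using a standard power calculation (95% confidence, 80% power); the example numbers are made up.

```javascript
// Sketch of a sample-size guide: how many visitors per variant are needed to
// detect a given relative lift, and how many more days of traffic that implies.
function visitorsPerVariant(baselineRate, relativeLift) {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / Math.pow(p2 - p1, 2));
}

function daysRemaining(baselineRate, relativeLift, visitorsSoFar, dailyVisitorsPerVariant) {
  const needed = visitorsPerVariant(baselineRate, relativeLift);
  return Math.max(0, Math.ceil((needed - visitorsSoFar) / dailyVisitorsPerVariant));
}

// Example: 4% baseline conversion, detecting a 20% relative lift,
// 3,000 visitors per variant so far, 500 new visitors per variant per day.
console.log(visitorsPerVariant(0.04, 0.2));        // ≈ 10,250 visitors per variant
console.log(daysRemaining(0.04, 0.2, 3000, 500));  // ≈ 15 more days
```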
User Feedback
I received feedback from multiple sources and found bugs, which I fixed. The Addon went through 7+ iterations.
Next, I Created Email Alerts
The problem was VWO had no email update service to keep the client updated. Tracking results for multiple clients across different accounts was also laborious for me. Fortunately VWO had an API.
I created an email update service that sent bi-weekly test updates to me and my clients. I used VWO’s API and PHP to route emails. I first started with a status update showing current performance and change from last time:
The email included:
All tests and their status
Performance of each version, traffic, and statistical assessment
Estimate of test duration
I then incorporated my own heuristics that weren’t available in VWO. For example, this report included daily performance so I could see how consistent the test was:
For many projects, the daily counts of visitors were low, so I expanded the weekly summary to show detailed performance. Also, my colleague suggested making the report more personal. So, I also added a custom summary at the top in yellow:
The red and green colors are also distinguished by minus signs and difference in tint, so it’s still clear for color-blind users.
I also built my own statistical calculator to facilitate both the planning and analysis of tests.
Product Page for Addon
The full product page included a clear explanation of what’s new, with arrows pointing to specific features and what they mean, to educate users.
MVP / Prototype for Project Management
My clients wanted to see the list of A/B test ideas and their current status. I created a functional prototype to allow us to enter test ideas, clearly articulate the rationale, prioritize, and flag them for testing:
When a test was activated in VWO, it would show up in the list, and anyone on the team could click on it to open the VWO dashboard.
Tool Retired
Eventually VWO updated their statistical model and I retired my tools. I also retired the email updates, because it was decided weekly personal updates with clients were more valuable. However, going through the prototyping exercise was highly valuable in documenting the process.
I was hired to improve the sales of a client’s eBooks/guides about health supplements. The challenge was showing what’s inside (improving usability) without giving away too much for free. I hand-sketched a new page design, built it, and ran an A/B test.
Problem 1
The top of the landing page did a poor job of explaining the value proposition and contents of the products. It also lacked a low-friction action to engage a visitor:
The list of health topics covered appeared mid-way on the page and was not easy to scan:
Solution 1
I created a strong start with the main value proposition and top 3 benefits of the product. I categorized the products into 3 buckets: Body, Lifestyle, and Mind (something the client had not done), which made the topics easy to scan at a glance. Three groups keep the list easy to take in. Finally, I made the topics clickable to create an area for gradual engagement:
Feature 2
The contents of the guides were originally shown way below the fold in a small font. There was little to show a potential buyer what the guides actually contain or to help them envision how they would use them:
I decided to turn this afterthought into a feature, giving away some more free information without giving away too much.
Here’s an early concept that highlighted a sample supplement:
After several iterations and collaboration with another designer, we decided on a different approach. When the visitor clicked a topic, they would see all the supplements covered, grouped into 3 categories: what works, what’s unproven, and what to avoid. I also included scenarios for combining supplements, so users could better gauge whether those supplements apply to them:
Testing With Users
We A/B tested the old and new versions with ~40,000 users. The new version increased completion rate by ~6% and revenue by ~15%. This was done through a genuine improvement in content usability as well as persuasion techniques like curiosity.
Optimized content to persuade visitors to upgrade to the paid product and designed new functionality, dashboards, and reports.
Applying User Centered Lens
I helped the client to create a matrix to clarify their target audiences and user needs in Plain English. Here’s an example:
Writing copy samples helps the team refine its message and value proposition. It generates ideas for design. Later it can serve as raw material for headings, labels, and marketing copy.
This eventually evolved into personas focusing on empathy and context:
Product Concepting & Strategy
I helped the client connect their raw business ideas to specific user goals. I asked user-centered who/how/why questions to tease out the core opportunity. It is common for clients to have implicit knowledge that they don’t think to make explicit.
For example, while discussing a screen, I suggested we break users into “buyers” and “sellers”. It turned out the “buyer” and “seller” terminology didn’t feature anywhere in the UI, because the client’s site isn’t a marketplace. However, this language described the users’ goals well.
I sketched a lo-fi wireframe in real time. It targeted buyers and sellers explicitly instead of saying “Lists of Products”:
Discussions like this led to new product ideas, different ways of organizing the existing offerings, and different strategies for marketing.
Landing Pages And Calls To Action
I designed a number of landing pages for this client. When they needed to “collect user data”, I helped them reframe this as a user goal, i.e., why would users give their data? When they wanted a page to list some facts about their product, I helped them articulate the value proposition. I wrote copy and organized content rooted in the user’s situation:
Real-Time Sketching & Collaboration
The client and I used screencasts and email to exchange ideas. Then we’d get on a Skype call to sketch the ideas with real-time feedback.
For example, the client had a screen that lacked purpose and consistency:
I improved the information architecture of the screen (its value proposition, hierarchy, and clickable items). During the conversation, I redesigned the numerical scale and proposed to expand it into a “report card” for all criteria, with useful “how it works” insights for the user (useful based on actual comments from users):
Real-time collaboration allowed design decisions to cascade and evolve to create a more useful, cleaner screen that the client was happy with.
Guiding Users Through Complex Processes
Problem: During our Skype call, the client and I arrived at the idea of “generating leads”, something users are currently not able to do using any tool on the market. To get at this info using the site would require multiple steps and reports.
Solution: I proposed to clarify and emphasize this feature by creating a unified step-wise wizard culminating in a practical “Prospect List” a sales person can run with:
I encouraged the client to guide users more. For example, I recommended adding more descriptions and training videos to various complex areas of the site.
Dashboard Concepting
This is a mockup to summarize a financial portfolio. This report gives users details and a key takeaway i.e. a breakdown of their key product buckets plus a single number summary (top right):
The mockup embodied existing requirements but also served as a proof-of-concept for new potential ideas. For example, in this diagram, I included a blue line that compares Product A to a benchmark. This is a way of asking the question visually: Does the user need to compare to a benchmark?
In this concept for a dashboard component, the idea is to see a subset of the data that’s relevant and then act on it directly. I’m filtering the data to the negative y axis to highlight only the negative events. I’m then comparing it to the equivalent on the benchmark. I’m also detecting the lowest point (worst event) and allowing the user to click it directly:
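As a rough illustration of the data handling behind that component (the data shapes are made up), the filtering and worst-event detection could look like this:

```javascript
// Sketch: keep only negative events, pair each with the benchmark's value on
// the same date, and find the single worst event so the UI can link to it.
function negativeEvents(series, benchmark) {
  const benchmarkByDate = new Map(benchmark.map(function (d) { return [d.date, d.value]; }));
  return series
    .filter(function (d) { return d.value < 0; })
    .map(function (d) {
      return { date: d.date, value: d.value, benchmarkValue: benchmarkByDate.get(d.date) };
    });
}

function worstEvent(series) {
  return series.reduce(function (worst, d) { return d.value < worst.value ? d : worst; });
}

// Example with made-up daily returns (in percent)
const productA = [
  { date: '2015-03-01', value: 1.2 },
  { date: '2015-03-02', value: -2.5 },
  { date: '2015-03-03', value: -0.8 },
];
const benchmark = [
  { date: '2015-03-01', value: 0.4 },
  { date: '2015-03-02', value: -1.1 },
  { date: '2015-03-03', value: 0.2 },
];
console.log(negativeEvents(productA, benchmark));
console.log(worstEvent(productA)); // { date: '2015-03-02', value: -2.5 }
```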
I created a summary component to let users compare the current value to the range, and see how far it is from the highly probable values (cutting to the key insight instead of a long table with a complex chart):
Divergent Ideation
There are usually many ways of doing something. When I sketch ideas for a concept, I usually diverge to explore many options. I then converge based on what makes the most sense and on the client’s feedback, OR I propose an A/B test.
For example, I tried an upgrade pop-up instead of the full report to persuade users to pay:
Some of the questions we explored as separate variations were: Should I tease the user with some summary data? If so, what data? Should I show the upgrade call to action and input fields on the same page or hide them behind an upgrade link? Should I go with a dark or light motif? What’s the optimal message for the heading? Should I list top benefits or speak with data?…
For the home page, I concepted out different ways to get the user started:
In version B, I proposed a single field with a call to action, an action 90% of users would be interested in. In version C, I proposed instead to let the user choose who they are, then show them a tailored message and call to action (Gradual Engagement). In version D, I proposed showing the user several “I want to …” statements to directly link to user goals… and so on.
Developed concepts to modernize and expand features on a customer portal.
Problem
The original dashboard (which I cannot share) was missing a lot of key details. There were lots of links and generic info. I wanted to give the user a summary of their account and emphasize actionable items, such as accessing policy documents or payment history. As a team, we captured several business priorities, like reducing paper documents and adding more self-serve options to free up the call center.
User Profiles and Use Cases
After talking with the project owner and interviewing a business subject matter expert, I captured several light personas defined by use cases:
My personas are defined by WHAT a user intends to do. A persona or use case becomes a trigger for a wireframe flow that meets one or more requirements.
Wireframes
I created detailed wireframes like this. Their function was “proof of concept” and exploration of other potential features. Clients need to see something to be able to better understand what they need:
I usually encourage clients to build components as needed, to think in Agile terms and embrace rapid iteration over perfection. As a result, wireframes for each screen are more detailed in key areas and less so in others.
Solutions on the dashboard above included:
Holistic view: I summarized all aspects of the user’s interaction with the company and categorized them into 3 columns with subsections. I also added a global summary info bar at the top (e.g., “You have 6 unread docs”)
Hierarchy: I showed content in order of importance/relevance, from top to bottom, left to right. Older and secondary content was concealed behind summary links.
Consistent Clickable Styles: I exposed the key actions as buttons or links. For example, under Payment, I added a prominent Pay Now button and a secondary link to View Invoice.
Relevance & Quick access: New and unread items are highlighted. I exposed some key actions. For example, I added a pull-down to quickly switch the policy holder for those accounts that have multiple people on them.
Surfaced Meta-Data: To increase the relevance of each link, I tried to surface the key metadata from the item being linked to. For example, a document link came with a concise description showing document data, policy number, expiry date, etc. For the View Invoice link, I showed how much they saved on that invoice, due date, etc.
Ideation & Storyboarding
I developed a layout for the dashboard to standardize existing pages. Based on this layout standard, I explored various areas of functionality. The client didn’t know what exactly they wanted. They wanted to see options and get advice on what their client might be interested in seeing and how.
Most interactions dealt with looking up the right documents, managing access, and sending documents:
I mapped out user actions and screen transitions by connecting various wireframes, like this:
The client and I went through multiple iterations of each screen. I would start by suggesting what might be useful features. They would give me feedback on what they thought would work for their clients. I would make revisions, and so on.
Prototyping
Even a rough functional prototype can help “feel out” a solution better than a static representation. Here’s a simple HTML prototype that the user could click through to test the overall flow. I also mocked up an alert email, which triggers the process. This way the client could start seeing the full story of how a user would log into their dashboard:
The client would then take my concepts to his team for testing and further elaboration. My objective was to quickly concept and prototype UI ideas for them.
Farmville is a farm simulator for Facebook from 2009.
Original Home
Redesigned UI (Rough Mock)
Theme: Neighbours
Problem: The greatest real estate is given to empty slots for Facebook friends.
Solution: Simplified the Neighbors pane and surfaced stats (level and cash); it can grow as more social features are used (progressive disclosure).

Theme: Action menu
Problem: Buried actions requiring multiple clicks.
Solution: Exposed action menu with an animated transition to avoid loss of context (view prototype).

Theme: Key Stats
Problem: Key stats are scattered. The difference between “Cash” and “FVO” Farm Cash is unclear.
Solution: Compact key stats, together on the left. Farm Cash deleted.

Theme: Levels
Problem: Unclear “level” system.
Solution: Explicit level label. Levels are now goal-driven, so it’s clearer what to do and what the constraint/deadline is (added Time Remaining to the stats).

Theme: Selected States
Problem: No selected states for tools, selected seed, etc.
Solution: Clear selected style for actions (e.g., in my mockup, the Plow is selected in the nav and the mouse cursor looks like a plow).

Theme: Plot labels
Problem: Easy to miss wilted plots.
Solution: Clearer labeling of plot status (Ready, Wilted tags). Status summary on the side (when a plot is clicked, the map pans the screen to the plot).

Theme: Cash
Problem: Two types of cash (confusing).
Solution: Simplified “cash” concept (removed Farm Cash).

Theme: Inventory
Problem: No buildings by default. Unclear where my inventory is.
Solution: Created a default building (how can you have a farm without buildings?). This building doubles as the inventory (you can click the barn to see what you own so far).

Theme: Settings
Problem: Full screen mode not discoverable.
Solution: Full screen icon in the standard bottom right location.
Original Product List
Redesigned Product List Concept
Theme: Costs / Layout
Problem: Low contrast on costs. Unclear prices (two prices are shown, cost to buy and eventual profit, and it’s unclear which is which).
Solution: Cleaner, standard layout for all “costs”. Cost and profit can’t be confused (the cost uses a price look, and the profit has an “earn” label).

Theme: Action
Problem: Small BUY buttons, and it’s unclear that BUY means Plant Now.
Solution: The entire item card is clickable.

Theme: Scrolling
Problem: Horizontal navigation via arrows is awkward.
Solution: Standard, faster scrollbar navigation.

Theme: Transition
Problem: Loss of context when the menu opens and covers up the game.
Solution: Animated slide-out attached to the main menu, less jarring (see Adobe XD prototype below).
Top Usability Lessons
Avoid Competing Concepts
It was unclear why I was seeing a “you’re out of cash” message when I had tons of cash.
In my mockup, I removed Farm Cash, leaving money and water as the two main constraints. Users would buy regular cash or other perks.
Free Should Be Playable
I ran out of “Farm Cash” too fast, without knowing what it is, leaving few things to do in the game. A game should still be playable and fun for everyone, not only paying players. A user could still make upgrades later. Otherwise, they will just spread the word that the game is not fun, hurting adoption.
User Research Question: At what point would players be ready to invite friends? Would they do so right away to say “Hey, I’ve just started this game”, or would they do it later to say “Hey, I’ve played this game already and it’s great”?
Reward Every Session
Once you plant a lot of stuff, there’s not much to do. It teaches users not to expect much. Delayed gratification and long feedback loops are always weaker motivators. One factor aggravating this is that interesting Actions are buried inside the Market dialogue:
Even if the game could support short play times (plant now and harvest tomorrow), it should also be playable during longer sessions. In my mockup, I exposed some of the actions, so it looks like there is more stuff to do. I would also unlock more of the categories, so users could play around. For example, they could buy 1 cow and maybe do something with that cow (feed it, tickle it). This would provide a simple experience with immediate feedback, while the longer feedback loop of growing/harvesting is ongoing.
User research question: What are some contexts in which users will play the game? For example, user plays for 1 min while waiting for a bus or for 15 minutes while riding the bus. Will there be lots of distractions? How long should the typical session be to fit the constraints? What are the user’s most satisfying moments?
Avoid Interrupting The User
There are lots of pop-ups and early upgrades that happen at the wrong time or interrupt another process. For example, I was trying to plow but found a box. This popped up a new screen and another about the box, completely interrupting my task. Often dialogues pop up one over the other. It’s better to show messages when they are relevant. When I’m planting is not a good time to suggest that I customize my character. Detect the “end” of a process or task and show relevant messages then.
User research question: What are natural pauses in the user’s game play where a general message could be shown? What are key problem moments when a contextual message could help? Which things do users enjoy figuring out for themselves and which things are frustrating?
Enable Wayfinding
There are many screen orphans: they pop up, you close them, and later don’t know how to go back to them. For example, when you click a plot of land with the default Multi-tool, you get a pop-up with seed choices to plant. There’s no direct link to this screen. There’s no headline to explain whether this is an inventory of seeds I own or stats on what I’ve planted. Also, there is no selected state although you can choose your seed.
User Research Question: What are most common and enjoyable tasks? How long do users spend on a task e.g., planting crops? What categories of items do users care about? What do they want to do with those items? How much choice is too much?
Clearly Label What Is Selected
There is no selected state on tools, which makes it unclear what mode I’m in or what to do (especially if I clicked something just exploring early on). In the screenshot below, my mouse pointer shows no clue as to what tool is selected and what will happen if I click the ground. The plow tool has no selected state. It’s also unclear how to exit plow mode.
In my mockup, I labeled the selected states and the tool clearly.
Create Explicit Rules And Constraints
Some of my crops wilted, and it was unclear by looking at them that they wilted. When I clicked them, I just plowed over them, because I didn’t know what “unwilting” means.
In my mockup, I labeled “wilted” plants more explicitly. I would also show some kind of message, perhaps along with the “Ready” status updates on the side.
User Research Question: What do users want to do while things are growing? Are they receptive to, say, email notification when their crop is ready or wilting?
Feature Ideas To Improve Immersion And Create Better User Stories
Keeping It Fresh
Daily opportunities + challenges:
Extra rain causes crops to mature faster, BUT you have to harvest quickly
Drought is causing wilting, you have to water your crop immediately
Phone call from a vendor who wants to arrange an ongoing bulk order and pay you extra (if you grab the deal, you make extra for every sale and you get free shipments of resources like fertilizer)
Material upgrades:
A barn upgrade, so you can store more stuff
Better rake that can plow faster and cheaper
A plot size upgrade, so you plant more and earn more per plot
You discover an old well that gives you extra water
You receive sample bags of fertilizer so your next 10 crops will grow faster
Environmental challenges and opportunities:
A storm endangers your crop, so you have to build a greenhouse to protect your crops, or fix the damage before your crops fail
A tornado destroys your barn so you need to recruit neighbours to help you rebuild it
Surprise new characters:
A part-time helper shows up to pick crops before they wilt (your cousin who’s come to stay for a week and help for free or a part-time employee who requires minimum payment)
A feral dog comes wandering onto your property, and you can tame it (it then keeps badgers away)
A new neighbour moves in: if you get on bad terms, they can drive you out of business, but if you get on good terms, they can help you
Compete with AI neighbors for business
Mini games
Add some RPG elements (e.g., ride a tractor), so there is a purpose to controlling the avatar
Combine big picture activities and specific farming tasks: zoom on crop plots and do something detailed (e.g., a mini bug spraying game like Whack-a-Mole or Candy Crush)
Characters
Invite friends to visit farm. They can water crops or pick ready crops when I’m not available. If they want to do more, they need to get their own farm next door.
Users get different perks and can share props, e.g., “Can I borrow your tractor? I’ll pay you 50 coins for a day.” You get rewarded for sharing.
Auction where you can get items cheaper (tractor, animals, tools)
Invite friends as helpers (e.g., “A drought destroyed my farm. I need 2 people to sign up to help me rebuild. I need someone to play carpenter to rebuild the barn and someone to dig a new well”)
AI neighbours whom you can borrow things from
Multiplayer-like features and co-ownership of items:
Start a farm together with friend, spouse, etc.
The more people, the larger the default farm and more cool farm props (e.g., a farm with 3 people gets a tractor by default)
Chat features
Game posts friend updates automatically (e.g., “John just got a larger shed”); users can then like an update or reply with text
Team chat for a multi-player farm, e.g., User 1 asks User 2: “Hey, I just sold a large harvest. Should we buy a tractor or buy more land?”
Chatbot: you can talk to your avatar or AI neighbour e.g., “How are you? Go plant some carrots”
Integration with Messenger for chat with the avatar, e.g., “Something’s happened. Come right away.” or “Crops ready. Should I harvest?”