20+ User Research Techniques

Mental Modeling via Customer Interviews

In an interview, I learned that while both we and our insurance users understood “indications” as “rough quotes,” our feature broke all the core expectations that this concept entailed.

| Concept of “Indications” (rough quotes) | In User’s Mental Model | In Our Product |
| --- | --- | --- |
| Effort | Takes some effort. Broker attempts to collect info from client or make better assumptions | Minimal effort. Focus on high-volume, low-margin business |
| Pricing | Fairly close to final quote; client can use it to weed out some options | Ballpark price to start conversation; no point comparing options yet |
| Application Requirements | Provide as accurate information as you can | AI-driven or generic assumptions for faster response |
| Time Frame | Get pricing 14–30 days ahead of time | Start preliminary conversation earlier, 60+ days out |
A thematic analysis of our customer’s mental model vs. how indications worked in our product

Conflicting Mental Models: Learning how brokers think and make trade-offs is a key part of my job. For example, brokers traditionally want to send insurance providers strong deals to gain preferential treatment in the future. If they send weak deals, it might damage their reputation. With APIs, they didn’t need to worry about this, because no humans are involved on the receiving end. But old habits may have caused people to avoid sending greater volumes through the email bot. Empathy means going beneath the surface to understand the often unspoken principles and values guiding people’s work.

Push/Pull or Forces Analysis

This is a great framework from the Jobs to Be Done field, which I often use to convey the motivations and obstacles behind a decision, gathered from interviews. It’s a type of mental modeling. I can’t share a real example, but here’s a great illustration (Source):

Customer Profile

I ran a workshop with Sales and CX to understand how we can share and divide the responsibilities for customer discovery among the team. We came to better understand what CX requires from Sales and what Sales is able to get from the customers they speak to. I developed a framework for a “Customer Profile” with itemized criteria rated by importance and who is responsible. This eventually became a Google Doc form, and was later merged into our CRM system.

This profile goes beyond being a research technique. It determines how the company triages prospects, where we invest our efforts, and how we close gaps in understanding that impact everyone, from sales to product. Here’s a key gap that was identified: when brokers do not have enough experience with cyber, they are less ideal customers for us – this framework I helped craft reminds the team what to ask them, what to train them on, and whether they are a good fit in the first place:

Shipping To Learn / MVP

One needs to get the smallest viable solution into the hands of customers. Feedback based on real-world usage of a basic solution or a prototype provides more clarity than rigorous user testing of a complex solution in hypothetical situations. Shipping to learn helps tame the complexity that often bogs down teams trying to find their product-market fit. This involved breaking features down to understand what is absolutely essential:

Wizard of Oz & Other Lean Prototyping

“The Wizard of Oz test essentially tries to fool potential customers into believing they are using a finished, automated offering, while it is still being run manually.” – BMI Lab

I used this technique successfully to test a Gen AI solution. This had several advantages:

  • AI was rapidly changing but not yet ready; this technique allowed us to get ahead of the technology limitations and envision the future
  • We could collect more real-world data from customers (who submitted real documents, made real queries), which we could then use to evaluate the AI internally, safely
  • Building AI tools was a new skill for the team; this technique kept us from pulling development resources off other critical projects
  • The customer actually benefited from the test – when we ended the experiment after 2 weeks, they were eager to get it back. This was great validation for us.

Remote Testing: Clickable Prototypes, A/B comparisons, and 5-Second Tests (via Maze)

There are great tools now that combine sharable prototypes with slides and surveys. This is especially useful for hallway testing remotely. I set up an experiment, present the context for the user, and then have them perform a set of tasks and answer some questions. The advantage of this over moderated testing is that users can do it on their own schedule. I would typically follow up with questions over email. The biggest thing I learned about running this kind of test is to test the test – roll out v1 to one or two people to catch any issues. You often need to add clarification, reword, or reorder steps before it’s ready to scale. This was particularly useful for quick Hallway Tests with SMEs.

Job Mapping Based on Strategyn’s “Outcome-Driven Innovation”

ODI breaks all processes down into basic phases, whether it’s surgery or selling a home. The activity’s outcome is defined in a standardized way like: “Minimize the time it takes to assemble a proposal to the insured”. This leads to a deep understanding of how the user completes their task. Customers can then be surveyed about their success with very well defined tasks, which leads to a quantified understanding of wider market needs. Here’s a sample activity:

Activity: Locate all the inputs needed for an application. The diagram below shows the tasks to accomplish that (based on customer interviews).

Situational Personas

I introduced the personas concept to a Fintech client to help them shift away from a focus on features and technical capabilities. I started by asking the client to itemize their audiences in a spreadsheet. Then we identified some “top needs”, based on their calls with customers and industry subject matter experts. Later, on another project, I asked them to write first-person user stories.

Over time, the requirements started to include more emotional and situational context. At that point, I started to distill insights into personas that focus on the user’s situation:

Context builds actionable empathy and elicits tons of new product ideas and marketing strategies. What seemed like one type of user resolves into three different types of users with distinct, nuanced needs.

Voice of The Customer

This is a type of research that focuses on capturing and sharing the words customers actually say. On a more recent project, I defined target audiences through storytelling rather than demographics, e.g., a team large enough that poor intake clarity starts to cause task assignment confusion. I base the quotes on real interviews, so they resonate in our marketing. I create a specific contrast between before and after, so it’s clear what our product solves (a UI pattern that performed well in A/B tests in the past). The goal is to write copy that resonates with customers, because it’s what they themselves would say.

This perspective can be applied to other content, like this feature matrix:


Pre-2019 examples:


Mining Customer Reviews

For one client, I distilled dozens of pages of user reviews into a concise report with 18 themes (Affinity Mapping). The themes described in Plain English what customers thought, supported by 200+ actual customer quotes. Here are some themes for their Error Tracking software:

Mapping User Flow / User Journey

I have experience mapping complex processes. In this example, I facilitated a session with SMEs to understand the key steps of a health care process, including the role of the existing IT system (SMEs are usually senior/former users or people intimately acquainted with the process). I broke the process down into higher-level phases:

The next step would be to label this process with assumptions, problems, and opportunities (not shown).

Contextual Inquiry / Job Shadowing

I shadowed a health inspector and documented their process as multi-participant user flows:

I then analyzed these flows to figure out how the process could be improved through efficiencies and the use of technology like tablets.

Here’s me touring a coal mine client to gain a deeper appreciation for the business and user context:

Funnel Analysis

For an online experience, I often started with a rough funnel, showing the customer’s journey backed up with visit & conversion metrics. I could identify points in the process where users give up or digress from the path to their goal.

Why isn’t this chart prettier? Because it’s not going into a magazine. It gets the job done, and then it’s not needed anymore.

This chart hinted at steps where the problem manifests. Similar to a sitemap, it could also help identify problems with the overall IA of a site (e.g., if there are lots of loop-backs or some pages aren’t being visited). Here, for example, I noticed that 50%+ of people dropped off at the Tour, which suggested removing or improving the Tour. I also saw there were many steps before actual enrollment. Some of my A/B tests tried different ways to move the Enrollment call-to-action higher in the process.
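The drop-off math behind a funnel like this is simple to script. Here’s a minimal sketch; the step names and visit counts are hypothetical:

```javascript
// Funnel report: step-to-step conversion and drop-off.
// Step names and visit counts are made up for illustration.
function funnelReport(steps) {
  return steps.map((step, i) => {
    const prev = i === 0 ? step.visits : steps[i - 1].visits;
    const conversion = step.visits / prev;
    return {
      name: step.name,
      visits: step.visits,
      conversion: +(conversion * 100).toFixed(1),   // % continuing from the previous step
      dropOff: +((1 - conversion) * 100).toFixed(1) // % lost at this step
    };
  });
}

const report = funnelReport([
  { name: 'Landing', visits: 10000 },
  { name: 'Tour', visits: 6200 },
  { name: 'Pricing', visits: 2900 },
  { name: 'Enrollment', visits: 800 }
]);
// Scanning report for the largest dropOff points at the step to fix first.
```

Even a rough table like this makes the weakest step obvious without any charting tool.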

Listening In On Calls & Reading Emails / Chat Logs

There’s nothing like getting the user’s actual words, hearing the tone in their voice, reading between the lines. I’ve listened in on call center conversations to identify common customer questions and persuasive techniques used by CSRs. I’ve also read chat logs and emails to better understand what customers care about.

Remote Monitoring

I’ve used a number of screen replay tools to observe users and identify potential usability issues. To help sort out useful videos with this technique, I tagged various user events using URLs and custom JavaScript. That way I could find and observe specific problems – say, only the videos of users who completed a form but didn’t click submit:

Linking screen replays and analytics (e.g., Google Analytics) is a useful and cheap way to do usability testing, because you can define a behavior KPI and then filter videos showing that behavior e.g., someone going back and forth between screens or hesitating to click something.
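The filtering step reduces to a simple predicate over tagged session events. This sketch assumes each replay session carries a list of event tags; the session shape and event names are illustrative, not a real replay tool’s API:

```javascript
// Filter session replays down to one behavior of interest:
// users who completed the form but never clicked submit.
function sessionsToReview(sessions) {
  return sessions.filter(s =>
    s.events.includes('form_completed') && !s.events.includes('submit_clicked')
  );
}

const flagged = sessionsToReview([
  { id: 'a1', events: ['form_started', 'form_completed', 'submit_clicked'] },
  { id: 'b2', events: ['form_started', 'form_completed'] }, // abandoned at the last moment
  { id: 'c3', events: ['form_started'] }
]);
// flagged now holds only session 'b2' – the videos worth watching.
```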

Heatmaps, Clicks, Attention Tracking

In one test, I used the heat map to corroborate our statistically strong finding. I tested a new home page with a simple gradual engagement element: a clear question with several buttons to choose from:

Our hypothesis was that gradual engagement would guide visitors better toward signing up. The heatmaps for the other variations showed clicks on top menu links and all over the page. In contrast, our winner showed clicks on just the buttons I added. There was virtually no distracted clicking elsewhere. This was reassuring. I also saw a pattern in the choices visitors were clicking.

Sometimes I wanted to test if visitors were paying attention to non-clickable content, like sales copy. One of the tools I used was a script I created to track scroll position. By making some assumptions, I could infer how much time users spent looking at specific content on the site. You can see a demo here.
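The inference works roughly like this, under the assumption that whatever section overlaps the viewport is what’s being read. The sample data, section positions, and viewport height below are all hypothetical:

```javascript
// Infer time spent on each content section from periodic scroll samples.
// Assumption: a section is "being read" while the viewport overlaps it.
function dwellTimes(samples, sections, viewportHeight) {
  const totals = {};
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].t - samples[i - 1].t; // ms elapsed since the last sample
    const top = samples[i - 1].top;             // scroll offset during that interval
    for (const sec of sections) {
      const inView = sec.top < top + viewportHeight && sec.bottom > top;
      if (inView) totals[sec.name] = (totals[sec.name] || 0) + dt;
    }
  }
  return totals;
}

// Visitor reads the top of the page for 5s, then scrolls down to the sales copy for 4s.
const totals = dwellTimes(
  [{ t: 0, top: 0 }, { t: 5000, top: 800 }, { t: 9000, top: 800 }],
  [
    { name: 'hero', top: 0, bottom: 600 },
    { name: 'salesCopy', top: 700, bottom: 1400 }
  ],
  600
);
```

In the browser, the samples would come from a scroll listener recording `(timestamp, scrollY)` pairs; the aggregation itself stays a pure function like the one above.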

Self-Identification

Sometimes designs offer an opportunity to both test a hypothesis and collect data. For one project, I decided that, instead of emphasizing lowest prices (as is the norm in that saturated market), I would emphasize our client’s experience dealing with specific real-life scenarios (Authority principle). So I interviewed the client further to understand specific scenarios their customers might be facing (based on their own customer conversations and industry knowledge). Then I wrote copy to address customers in each situation:

Now, by giving clickable options, I could track which options users are clicking. Over time, I could learn which scenarios are most common and then tailor the site more to those users.

Use Cases

I interviewed subject matter experts to better understand the end users’ requirements. I summarized the interests of each user segment using a User-Goal Matrix, such as:

Here’s another example:

User Narratives & Storyboards

The key here is again to build empathy. “Need to do X” may be a requirement, but do X where, why?

Thinking through the plot of a user story helped put me in the shoes of a user and imagine potential requirements to propose to the client (like using Street View, above, or automatically finding similar restaurants nearby). You think “If that actually were me, I’d want to…”

On another project, I documented the user story as a comic book with 2 personas. Here a nurse visits an at-risk new mom and searches for clues she may be depressed and a danger to herself or her child:

Instead of a table of things to look for, the comic shows clues in context (curtains drawn, signs of crying, etc). This kind of deliverable is a good way to build empathy for the project’s users (nurses finding themselves in these situations) and the end recipients of the service (their at-risk clients).

Customer Interviews & Jobs Theory

I’ve interviewed software users and SMEs. I’m also familiar with consumer interview techniques, including Jobs To Be Done.

I use JTBD insights to shape how I define requirements with clients. JTBD theory argues that the user’s situation is a better predictor of behavior than demographic details. Its emphasis is on capturing the customer’s thinking in that precise moment when they decide to buy a product, especially when they switch products.

See my article 15 Jobs-To-Be-Done Interview Techniques

A/B Testing

I’ve run A/B tests to compare large redesigns as well as smaller changes. Large redesigns only tell us which version is better, while smaller changes help pinpoint the specific cause (which tells us more about users):

As part of A/B testing, I tracked multiple metrics. That way I could say, for example, “the new page increased engagement but didn’t lead to more sales” or “we didn’t increase the volume of sales but order value went up”.
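Assessing several metrics side by side can be sketched as one significance test applied per metric. Here I use a two-proportion z-test; all the counts are invented for illustration:

```javascript
// Assess an A/B test on several metrics at once (two-proportion z-test).
function zTest(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return { lift: (pB - pA) / pA, z: (pB - pA) / se };
}

const metrics = {
  engagement: zTest(1200, 5000, 1400, 5000), // e.g. clicks on key content
  sales: zTest(150, 5000, 158, 5000)
};
// |z| above ~1.96 is significant at the 95% level. With this data, engagement
// clears the bar but sales does not – exactly the kind of nuance that supports
// "the new page increased engagement but didn't lead to more sales".
```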

User Analytics & Segmentation

I used Google Analytics and A/B testing tools to segment visitor data. A classic case is Mobile vs. Desktop segments:

Another useful segmentation is New Visitors vs. Existing Customers, which I tracked by setting/reading cookies. I also segmented users by behavior e.g., Users Who Clicked a webpage element or hit a page.
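The cookie-based segment check reduces to a few lines. The cookie name below is arbitrary, and the parsing is written as a pure function so the browser-specific part stays in a comment:

```javascript
// Classify a visitor as New vs Returning from a first-party cookie.
function segmentFromCookie(cookieString, name = 'returning_visitor') {
  const found = cookieString
    .split(';')
    .map(c => c.trim())
    .some(c => c.startsWith(name + '='));
  return found ? 'Returning' : 'New';
}

// In the browser, set the cookie so the next visit classifies as Returning:
// document.cookie = 'returning_visitor=1; max-age=31536000; path=/';
const segment = segmentFromCookie('sessionid=abc; returning_visitor=1');
```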

I’ve done statistical and qualitative analysis of the data collected, teasing out relationships between various user behaviors:

User Testing

I’ve created test cases for moderated testing workshops. One type of user test provided detailed instructions for a user to follow.

“Log into the case dashboard. Find out if there are any cases that need to be escalated asap and flag them for the supervisor.”

Another type of test case posed open-ended goals to see if the user could figure out how to do it.

“You’re a data-entry clerk. You’ve just received a report from John in damage claims. What would you do next? Let the moderator know if you encounter problems or have questions.”

Or

“A cargo ship arrived into the East Port carrying 100 tons of fuel. When is the next train arriving and does it have enough capacity to transport this fuel to the Main Hub?”

I’ve observed users going about their task. The acceptance criteria included usability (is the user struggling to perform the task) and completion (is the user able to complete their job task). Users provided feedback verbally and using a template like this:

I’ve also created detailed training/test scenarios that closely mimicked real job conditions. Users had to successfully complete the tasks and confirm they matched reality.

Hands-On Hardware Research

On one project, I had to understand how to deploy touch-screens and new software across dental offices. I tested the touch-screens on site and interfaced them with the digital X-Ray equipment. I’ve also used:

  • dive computers
  • VR headsets
  • synthesizers

Mobile App Prototypes (2019)

This is a 2019 redesign of my very first mobile web app. Back in 2013, while working for the City of Toronto as a Business Analyst, I designed a basic app allowing you to look up a restaurant’s inspection status on the go. Unbeknownst to me, it was built and put into production! Here I revisit this old project.

Background

I was tasked with defining the requirements for modernizing the DineSafe program at the City of Toronto. The impetus came from a Public Opinion Poll about the program. Among the recommendations was improving the usability of the existing website and creating new channels for data disclosure to the public, such as making the raw data available and creating a mobile version of the existing site. I spoke with the business unit, who were directly involved in the public consultation. I defined a number of user and technical requirements.

Audiences

From my conversations with the client, I identified a number of audiences. It turned out that even internal staff were using DineSafe to look up data, due to the lack of a proper internal system.

The target audience for the mobile experience was casual restaurant goers who wanted to see if a restaurant has had health violations. It was decided to expose just this functionality through the mobile app.

Application Map

Prior to design, I did entity maps similar to these, which established the basic structure of the app:

More Experiments

This was a prototype for a shoe brand:

This one was for an ecommerce client:

Storytelling for Cryptocurrency News (2018)

Developed case studies for a financial news site in order to clarify its value proposition and increase subscriptions.

The Original

  1. No clear value proposition – there are many ways to get similar information elsewhere.
  2. The original graphs are too small and don’t make it obvious what happened, when, and what the user can learn from the example.
  3. The study shows a “Learn More” button forcing users to do more work, instead of improving the case study so it does the work of persuading visitors.

To address this, I developed the improved case studies, wrapping the top benefit of the product in a simple narrative with a simple visualization.

Concepting and Collaboration

I always start on paper and get alternative viewpoints from others whenever possible.

I did some rough paper sketches to capture my idea: My colleague (@jlinowski) did his own sketch:

My Solution

A new simple case study looked like this:

I also took the opportunity to show a multi-step scenario that involved both buying and selling:

These were based on my domain research and interviews with the business owner, to learn past stories of success.

I designed and wrote 3 strong case studies to improve:

  • Value proposition: New case studies better convey that early info = profit, which is the key benefit of this subscription. All examples are real, dead simple, and recent. The “How To” wording in the headline also helps convey that these sample strategies are repeatable.
  • Visual hierarchy: The diagram is larger to emphasize the importance of this section to a prospective customer and reduce distractions.
  • Usability: I simplified the diagrams and added clear annotations to show what happened, when, and why it’s important.
  • Transparency: Trying to anticipate visitors’ questions, I led with a concise screenshot of the actual news feed, so it’s clear what info led to the decision being described.

Measurement

We A/B tested other aspects of the home page redesign. We agreed not to A/B test the case studies, because (1) team agreed it was an improvement (low risk) and (2) testing would impact other business priorities. After our initial engagement the client reached out to create several more case studies, based on positive feedback to the ones we originally launched.

Statistical Dashboard & Internal PM Tool

Recency: <2016
Role: Product designer, web developer
Collaboration: Solo design with lots of feedback from colleague

Background

The Visual Website Optimizer (VWO) dashboard shows the performance of page variants in real time. The original dashboard omitted many key metrics and did not provide sufficient guidance to users (based on my conversations with clients):

  • How’s the test doing now? Any early indications?
  • When do we have enough data to stop? What are the risks and trade-offs to stopping now?
  • What’s on deck to be tested next?

What I Did

I put together some tools to help solve these problems for me and my clients:

  • Statistical library in JavaScript focused on A/B testing
  • Greasemonkey script to add missing metrics and rules to VWO
  • Created email status updates using PHP and VWO’s API
  • Created landing page explaining the free tool’s benefits
  • Created Project Management tool to track ideas

Marketing Focused On Benefits

I created a page to clearly explain the top 3 problems I’m trying to solve:

Enhanced VWO Overview

The original dashboard started with an overview, which showed the relative performance of each version:

The problem was:

  • No indication of the statistical significance of the results
  • Hard to compare bars as performance differences narrowed over time

I enhanced the overview with:

  • Worst case scenario: Vertical line to easily compare versions
  • Margin of error: T lines to show margin of error
  • Statistical confidence: Added p-value statistic

Confidence lines at the top of each bar show uncertainty. I drew a vertical line to represent the maximum estimate of V1 (the Control). Now it is easy to say that even if the true performance of V1 is at its maximum, the lowest estimates for V2 and V3 still outperform it. This is very good.

I added a p-value, which is a standard way of measuring the strength of results. Normally you can’t show p-values like this in real time, but there are various reasons that I did so here.
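The worst-case comparison can be expressed directly: compute each version’s interval, then check whether the variant’s lower bound clears the control’s upper bound. The counts here are hypothetical; the z-multiplier sets the confidence level (2.576 ≈ 99%):

```javascript
// The overview logic in miniature: a conversion-rate interval per version,
// plus the "worst case vs best case" comparison against the control.
function interval(conversions, visitors, z = 2.576) {
  const p = conversions / visitors;
  const moe = z * Math.sqrt(p * (1 - p) / visitors); // margin of error
  return { rate: p, low: p - moe, high: p + moe };
}

const control = interval(400, 10000); // V1
const variant = interval(520, 10000); // V2
// Even if V1's true rate is at its maximum, V2's minimum still beats it:
const clearlyBetter = variant.low > control.high;
```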

Enhanced Main Dashboard

The original dashboard looked like this:

The problem was:

  • No indication of current false positive and false negative risk
  • No margin of error for the improvement
  • “Chance to beat” was not always reliable
  • No indication of how much longer to go

Over multiple iterations, the dashboard looked like this:

I made a number of improvements here:

  1. Labeling: I back-calculated VWO’s margin of error and discovered its confidence level was lower than standard (only 75%). I clearly labeled this.
  2. Added confidence interval: I used 99% confidence intervals to be extra conservative and allow for the other statistical laxness introduced to make the tool more user-friendly. Now users could see a range of uncertainty instead of one value.
  3. New confidence indicator: I replaced the “Chance to Beat” with my own “Actual Confidence”, based on my own algorithm. Users could hover the values to see what they mean.
  4. Sample size guide: I tried to estimate how much longer a test has to run. When users hovered over the icons, they could see an explanation and a recommendation in plain English. I also applied many rules in the background to show context-specific messages e.g., if visitors are under some best practice minimum.
  5. Test metrics & risk: I added holistic metrics, showing time elapsed and estimated weekly test traffic. I also quantified the false positive risk, taking into account number of variants being tested.
  6. External calculator link: I provided a link to an external calculator that would allow users to manipulate the data and add special “corrections” not available in VWO
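Two of those background rules are easy to sketch: the family-wise false positive risk when testing several variants, and a rough remaining-duration estimate. All inputs below are illustrative:

```javascript
// Rule 1: false positive risk grows with the number of variants tested.
function familyWiseRisk(alpha, variants) {
  // Chance that at least one variant "wins" by pure luck.
  return 1 - Math.pow(1 - alpha, variants);
}

// Rule 2: rough estimate of how much longer a test needs to run.
function weeksRemaining(visitorsNeeded, visitorsSoFar, weeklyTraffic) {
  return Math.max(0, Math.ceil((visitorsNeeded - visitorsSoFar) / weeklyTraffic));
}

const risk = familyWiseRisk(0.05, 3);             // ~0.14, well above the nominal 0.05
const weeks = weeksRemaining(20000, 12000, 2500); // about 4 more weeks of traffic
```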

User Feedback

I received feedback from multiple sources and found bugs, which I fixed. The Addon went through 7+ iterations.

Next, I Created Email Alerts

The problem was VWO had no email update service to keep the client updated. Tracking results for multiple clients across different accounts was also laborious for me. Fortunately VWO had an API.

I created an email update service that sent bi-weekly test updates to me and my clients. I used VWO’s API and PHP to route emails. I first started with a status update showing current performance and change from last time:

The email included:

  • All tests and their status
  • Performance of each version, traffic, and statistical assessment
  • Estimate of test duration

I then incorporated my own heuristics that weren’t available in VWO. For example, this report included daily performance so I could see how consistent the test was:

For many projects, the daily counts of visitors were low, so I expanded the weekly summary to show detailed performance. Also, my colleague suggested making the report more personal. So, I also added a custom summary at the top in yellow:

The red and green colors are also distinguished by minus signs and difference in tint, so it’s still clear for color-blind users.

I also built my own statistical calculator to facilitate both the planning and analysis of tests.

Product Page for Addon

The full product page included a clear explanation of what’s new, with arrows pointing to specific features and what they mean, to educate users.

MVP / Prototype for Project Management

My clients wanted to see the list of A/B test ideas and their current status. I created a functional prototype to allow us to enter test ideas, clearly articulate the rationale, prioritize, and flag them for testing:

When a test was activated in VWO, it would show up in the list, and anyone on the team could click on it to open the VWO dashboard.

Tool Retired

Eventually VWO updated their statistical model and I retired my tools. I also retired the email updates, because it was decided weekly personal updates with clients were more valuable. However, going through the prototyping exercise was highly valuable in documenting the process.

+ 15% Revenue from Supplement Content Redesign (2017)

I was hired to improve the sales of a client’s eBooks/Guides about health supplements. The challenge was showing what’s inside (improving usability). I hand-sketched a new page design, coded it, ran an A/B test, and boosted revenue ~15%.

Constraint: Don’t give away too much for free.

Content Audit

Problem: The original page was weak on the value proposition and product content. Key information was buried mid-page, making it hard to scan, and there was little to help buyers envision using the guides.

Solutions: I improved this by crafting a strong value proposition and highlighting the top three benefits. I also categorized the products into three clear sections – Body, Lifestyle, and Mind – for better readability. “Three” is a familiar design principle, and I made each category clickable like a filter to increase engagement. 

One of The Initial Concepts

Additionally, I noticed a minor section, almost an afterthought:

I loved the part about proven/unproven supplements and the idea of assembling a stack. I decided to turn this section into a feature, giving away some more free information without giving away too much.

Final Design

After several iterations and collaboration with another designer, we decided on a different approach. When the visitor clicked a topic, they would see all the supplements covered, grouped into 3 categories: proven, unproven, and avoid. I also included scenarios for combining supplements, so users could better gauge if those supplements were applicable to them:

Impact

I designed and ran an A/B test of the old and new versions with ~40,000 users.

The new version increased completion rate by ~6% and revenue by ~15%.

This was done through a genuine improvement in content usability as well as persuasion techniques like curiosity.

Mistakes and Considerations

I should have chosen more accessible colors than green and red, despite using icons as a secondary means of distinguishing proven, unproven, and avoid. Another option was to put them into separate sections, but since the content is dynamic this would cause some states to look weird, e.g., one box taking up an entire row.

SaaS Fintech Concepting & Design (2015)

Optimized content to persuade visitors to upgrade to the paid product and designed new functionality, dashboards, and reports.

Applying User Centered Lens

I helped the client to create a matrix to clarify their target audiences and user needs in Plain English. Here’s an example:

Writing copy samples helps the team refine its message and value proposition. It generates ideas for design. Later it can serve as raw material for headings, labels, and marketing copy.

This eventually evolved into personas focusing on empathy and context:

Product Concepting & Strategy

I helped the client connect their raw business ideas to specific user goals. I asked user-centered who/how/why questions to tease out the core opportunity. It is common for clients to have implicit knowledge that they don’t think to make explicit.

For example, while discussing a screen, I suggested we break users into “buyers” and “sellers”. It turned out “buyer” and “seller” terminology didn’t feature anywhere in the UI, because the client’s site isn’t a marketplace. However, this language described the users’ goals well.

I sketched a lo-fi wireframe in real time. It targeted buyers and sellers explicitly instead of saying “Lists of Products”:

Discussions like this led to new product ideas, different ways of organizing the existing offerings, and different strategies for marketing.

Landing Pages And Calls To Action

I designed a number of landing pages for this client. When they needed to “collect user data”, I helped them reframe this as a user goal i.e. why are they giving their data? When they wanted a page to list some facts about their product, I helped them articulate the value proposition. I wrote copy and organized content rooted in the user’s situation:

Real-Time Sketching & Collaboration

The client and I used screencasts and email to exchange ideas. Then we’d get on a Skype call to sketch the ideas with real-time feedback.

For example, the client had a screen that lacked purpose and consistency:

I improved the information architecture of the screen (its value proposition, hierarchy, and clickable items). During the conversation I redesigned the numerical scale and proposed to expand it into a “report card” for all criteria, with useful “how it works” insights for the user (based on actual comments from users):

Real-time collaboration allowed design decisions to cascade and evolve to create a more useful, cleaner screen that the client was happy with.

Guiding Users Through Complex Processes

Problem: During our Skype call, the client and I arrived at the idea of “generating leads”, something users are currently not able to do using any tool on the market. To get at this info using the site would require multiple steps and reports.

Solution: I proposed to clarify and emphasize this feature by creating a unified step-wise wizard culminating in a practical “Prospect List” a sales person can run with:

I encouraged the client to guide users more. For example, I recommended adding more descriptions and training videos to various complex areas of the site.

Dashboard Concepting

This is a mockup to summarize a financial portfolio. This report gives users details and a key takeaway i.e. a breakdown of their key product buckets plus a single number summary (top right):

The mockup embodied existing requirements but also served as a proof-of-concept for new potential ideas. For example, in this diagram, I included a blue line that compares Product A to a benchmark. This is a way of asking the question visually: Does the user need to compare to a benchmark?

In this concept for a dashboard component, the idea is to see a subset of the data that’s relevant and then act on it directly. I’m filtering data to show only the negative y axis to highlight only the negative events. I’m then comparing it to the equivalent on the benchmark. I’m also detecting the lowest point (worst event) and allowing the user to click it directly:

I created a summary component to let users compare the current value to the range, and see how far it is from the highly probable values (cutting to the key insight instead of a long table with a complex chart):

Divergent Ideation

There are usually many ways of doing something. When I sketch ideas for a concept, I usually diverge to explore many options, then converge based on what makes the most sense and on the client’s feedback, or I propose an A/B test.

For example, I tried an upgrade pop-up instead of the full report to persuade users to pay:

Some of the questions we explored as separate variations: Should I tease users with some summary data? If so, what data? Should I show the upgrade call to action and input fields on the same page, or hide them behind an upgrade link? Should I go with a dark or light motif? What’s the optimal message for the heading? Should I list top benefits or speak with data?…
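
When one of these questions becomes an A/B test, each user needs to see the same variation consistently across sessions. One common way to do this, sketched here as an assumption rather than our actual setup, is deterministic hash-based bucketing:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user: the same user and experiment
    always map to the same variant, with no stored assignments needed."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical experiment: tease summary data or not
variants = ["summary-teaser", "no-teaser"]
first_visit = assign_variant("user-42", "upgrade-popup", variants)
return_visit = assign_variant("user-42", "upgrade-popup", variants)
assert first_visit == return_visit  # stable across sessions
```

Because the experiment name is part of the hash, the same user can land in different buckets for different experiments, which keeps tests independent of one another.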

For the home page, I concepted different ways to get the user started:

In version B, I proposed a single field with a call to action, an action 90% of users would be interested in. In version C, I proposed instead letting users choose who they are, then showing them a tailored message and call to action (Gradual Engagement). In version D, I proposed showing the user several “I want to…” statements that link directly to user goals… and so on.

Wireframes For Insured Dashboard Redesign (2014)

Developed concepts to modernize and expand features on a customer portal.

Problem

The original dashboard (which I cannot share) was missing many key details; it was cluttered with links and generic info. I wanted to give the user a summary of their account and emphasize actionable items, such as accessing policy documents or payment history. As a team, we captured several business priorities, such as reducing paper documents and adding more self-serve options to free up the call center.

User Profiles and Use Cases

After talking with the project owner and interviewing a business subject matter expert, I captured several light personas defined by use cases:

My personas are defined by WHAT a user intends to do. A persona or use case becomes a trigger for a wireframe flow that meets one or more requirements.

Wireframes

I created detailed wireframes like this. Their function was “proof of concept” and exploration of other potential features. Clients need to see something to be able to better understand what they need:

I usually encourage clients to build components as needed, to think in Agile terms, and to embrace rapid iteration over perfection. As a result, the wireframes for each screen are more detailed in key areas and less in others.

Solutions on the dashboard above included:

  • Holistic view: I summarized all aspects of the user’s interaction with the company and categorized them into 3 columns with subsections. I also added a global summary info bar at the top (e.g., “You have 6 unread docs”)
  • Hierarchy: I showed content in order of importance/relevance, from top to bottom and left to right. Older and secondary content was concealed behind summary links.
  • Consistent Clickable Styles: I exposed the key actions as buttons or links. For example, under Payment, I added a prominent Pay Now button and a secondary link to View Invoice.
  • Relevance & Quick Access: New and unread items are highlighted, and key actions are exposed. For example, I added a pull-down to quickly switch the policy holder for accounts that have multiple people on them.
  • Surfaced Meta-Data: To increase the relevance of each link, I tried to surface the key metadata from the item being linked to. For example, a document link came with a concise description showing document data, policy number, expiry date, etc. For the View Invoice link, I showed how much they saved on that invoice, due date, etc.

Ideation & Storyboarding

I developed a layout for the dashboard to standardize existing pages. Based on this layout standard, I explored various areas of functionality. The client didn’t know what exactly they wanted. They wanted to see options and get advice on what their client might be interested in seeing and how.

Most interactions dealt with looking up the right documents, managing access, and sending documents:

I mapped out user actions and screen transitions by connecting various wireframes, like this:

The client and I went through multiple iterations of each screen. I would start by suggesting what might be useful features; they would give me feedback on what they thought would work for their clients; I would make revisions, and so on.

Prototyping

Even a rough functional prototype can help “feel out” a solution better than a static representation. Here’s a simple HTML prototype that the user could click through to test the overall flow. I also mocked up an alert email, which triggers the process. This way the client could start seeing the full story of how a user would log into their dashboard:

The client would then take my concepts to his team for testing and further elaboration. My objective was to quickly concept and prototype UI ideas for them.

UX Audit of a Facebook Game (2018)

Farmville is a farm simulator for Facebook from 2009.

Original Home

Redesigned UI (Rough Mock)

| Theme | Problem | Solution |
| --- | --- | --- |
| Neighbours | Greatest real estate goes to empty slots for Facebook friends | Simplified Neighbors pane with surfaced stats (level and cash); can grow as more social features are used (progressive disclosure) |
| Action menu | Buried actions requiring multiple clicks | Exposed action menu with animated transition to avoid loss of context (view prototype) |
| Key stats | Key stats are scattered; the difference between “Cash” and “FVO” Farm Cash is unclear | Compact key stats grouped together on the left; Farm Cash removed |
| Levels | Unclear “level” system | Explicit level label; levels are now goal-driven, so it’s clearer what to do and what the constraint/deadline is (added Time Remaining to the stats) |
| Selected states | No selected states for tools, selected seed, etc. | Clear selected style for actions (e.g., in my mockup, the Plow is selected in the nav and the mouse cursor looks like a plow) |
| Plot labels | Easy to miss wilted plots | Clearer labeling of plot status (Ready, Wilted tags); status summary on the side (when a plot is clicked, the map pans to the plot) |
| Cash | Two types of cash (confusing) | Simplified “cash” concept (removed Farm Cash) |
| Inventory | No buildings by default; unclear where my inventory is | Created a default building (how can you have a farm without buildings?) that doubles as the inventory (click the barn to see what you own so far) |
| Settings | Full-screen mode not discoverable | Full-screen icon in the standard bottom-right location |

Original Product List

Redesigned Product List Concept

| Theme | Problem | Solution |
| --- | --- | --- |
| Costs / layout | Low contrast on costs; unclear prices (two prices shown, cost to buy and eventual profit, and it’s unclear which is which) | Cleaner, standard layout for all “costs”; cost and profit can’t be confused (price styling for cost, an “earn” label on profit) |
| Action | Small BUY buttons; unclear that BUY means Plant Now | Entire item card is clickable |
| Scrolling | Horizontal navigation via arrows is awkward | Standard, faster scrollbar navigation |
| Transition | Loss of context when the menu opens and covers up the game | Animated slide-out attached to the main menu, less jarring (see the Adobe XD prototype video) |

Top Usability Lessons

Avoid Competing Concepts

It was unclear why I was seeing a “you’re out of cash” message when I had tons of cash.

In my mockup, I removed Farm Cash, leaving money and water as the two main constraints. Users would buy regular cash or other perks.

Free Should Be Playable

I ran out of “Farm Cash” too fast, without knowing what it was, leaving little left to do in the game. A game should still be playable and fun for everyone, not only paying players; a user could still make upgrades later. Otherwise, players will just spread the word that the game is not fun, hurting adoption.

User Research Question: At what point would players be ready to invite friends? Would they do so right away to say “Hey, I’ve just started this game”, or would they do it later to say “Hey, I’ve played this game already and it’s great”?

Reward Every Session

Once you plant a lot of stuff, there’s not much to do. It teaches users not to expect much: delayed gratification and long feedback loops are always weaker motivators. One factor aggravating this is that interesting Actions are buried inside the Market dialogue:

Even if the game could support short play times (plant now and harvest tomorrow), it should also be playable during longer sessions. In my mockup, I exposed some of the actions, so it looks like there is more stuff to do. I would also unlock more of the categories, so users could play around. For example, they could buy 1 cow and maybe do something with that cow (feed it, tickle it). This would provide a simple experience with immediate feedback, while the longer feedback loop of growing/harvesting is ongoing.

User research question: What are some contexts in which users will play the game? For example, user plays for 1 min while waiting for a bus or for 15 minutes while riding the bus. Will there be lots of distractions? How long should the typical session be to fit the constraints? What are the user’s most satisfying moments?

Avoid Interrupting The User

There are lots of pop-ups and early upgrades that happen at the wrong time or interrupt another process. For example, I was trying to plow but found a box. This popped up a new screen, then another about the box, completely interrupting my task. Often dialogues pop up one over the other. It’s better to show messages when they are relevant: when I’m planting is not a good time to suggest that I customize my character. Detect the “end” of a process or task and show relevant messages then.

User research question: What are natural pauses in the user’s game play where a general message could be shown? What are key problem moments when a contextual message could help? Which things do users enjoy figuring out for themselves and which things are frustrating?

Enable Wayfinding

There are many screen orphans: they pop up, you close them, and later don’t know how to go back to them. For example, when you click a plot of land with the default Multi-tool, you get a pop-up with seed choices to plant. There’s no direct link to this screen. There’s no headline to explain whether this is an inventory of seeds I own or stats on what I’ve planted. Also, there is no selected state although you can choose your seed.

User Research Question: What are most common and enjoyable tasks? How long do users spend on a task e.g., planting crops? What categories of items do users care about? What do they want to do with those items? How much choice is too much?

Clearly Label What Is Selected

There is no selected state on tools, which makes it unclear what mode I’m in or what to do (especially if I clicked something just exploring early on). In the screenshot below, my mouse pointer shows no clue as to what tool is selected and what will happen if I click the ground. The plow tool has no selected state. It’s also unclear how to exit plow mode.

In my mockup, I labeled the selected states and the tool clearly.

Create Explicit Rules And Constraints

Some of my crops wilted, and it was not visually clear that they had. When I clicked them, I just plowed over them, because I didn’t know what “unwilting” meant.

In my mockup, I labeled “wilted” plants more explicitly. I would also show some kind of message, perhaps along with the “Ready” status updates on the side.

User Research Question: What do users want to do while things are growing? Are they receptive to, say, email notification when their crop is ready or wilting?

Product Recommendations for the Future

Keeping It Fresh

  • Daily opportunities + challenges:
    • Extra rain causes crops to mature faster, BUT you have to harvest quickly
    • Drought is causing wilting, you have to water your crop immediately
    • Phone call from a vendor who wants to arrange an ongoing bulk order and pay you extra (if you grab the deal, you make extra on every sale and get free shipments of resources like fertilizer)
  • Material upgrades:
    • A barn upgrade, so you can store more stuff
    • Better rake that can plow faster and cheaper
    • A plot size upgrade, so you plant more and earn more per plot
    • You discover an old well that gives you extra water
    • You receive sample bags of fertilizer so your next 10 crops will grow faster
  • Environmental challenges and opportunities:
    • A storm endangers your crop, so you have to build a greenhouse to protect it or fix the damage before your crops fail
    • A tornado destroys your barn so you need to recruit neighbours to help you rebuild it
  • Surprise new characters:
    • A part-time helper shows up to pick crops before they wilt (your cousin who’s come to stay for a week and help for free or a part-time employee who requires minimum payment)
    • A feral dog comes wandering onto your property, and you can tame it (it then keeps badgers away)
    • A new neighbour moves in: if you get on bad terms, they can drive you out of business, but if you get on good terms, they can help you
    • Compete with AI neighbors for business
  • Mini games
    • Add some RPG elements (e.g., ride a tractor), so there is a purpose to controlling the avatar
    • Combine big picture activities and specific farming tasks: zoom on crop plots and do something detailed (e.g., a mini bug spraying game like Whack-a-Mole or Candy Crush)

Characters

  • Invite friends to visit farm. They can water crops or pick ready crops when I’m not available. If they want to do more, they need to get their own farm next door.
  • Users get different perks and can share props e.g., “Can I borrow your tractor? I’ll pay you 50 coins for a day” You get rewarded for sharing.
  • Auction where you can get items cheaper (tractor, animals, tools)
  • Invite friends as helpers (e.g., “A drought destroyed my farm. I need 2 people to sign up to help me rebuild. I need someone to play carpenter to rebuild the barn and someone to dig a new well”)
  • AI neighbours whom you can borrow things from
  • Multiplayer-like features and co-ownership of items:
    • Start a farm together with friend, spouse, etc.
    • The more people, the larger the default farm and more cool farm props (e.g., a farm with 3 people gets a tractor by default)

Chat features

  • Game posts friend updates automatically (e.g., “John just got a larger shed”); users can then like an update or reply with text
  • Team chat for a multi-player farm, e.g., User 1 asks User 2: “Hey, I just sold a large harvest. Should we buy a tractor or buy more land?”
  • Chatbot: you can talk to your avatar or AI neighbour e.g., “How are you? Go plant some carrots”
  • Integration with Messenger for chat with your avatar, e.g., “Something’s happened. Come right away.” or “Crops ready. Should I harvest?”

Concepting: Let Down by Travel Insurance? (2019)

I was hired by a small business CEO to explore and visualize a business opportunity. I can’t share the actual industry due to NDA, but I created an analogous scenario about travel.

The Problem

There are many stories of travelers buying insurance and still not being covered:

How might we prevent this or help the customer through this kind of situation?

Business Opportunity

My client’s business would help people in financial difficulty. The ask was for a concept mockup to share with a larger team to drive very early discussions. I also created a preliminary service blueprint of an emotionally charged situation: Susan cancels an expensive trip and her insurance doesn’t cover it.

Wireframing and Crafting Value Proposition

I designed a concept and wrote copy to connect the business idea to a strong customer story. The client appreciated the story-driven approach, which encouraged them to engage with real customers sooner. In past projects, similar advocacy led my clients to incorporate testimonials and eventually move to photos and candid videos.

User Advocacy

Clients are rarely eager to get user input to validate assumptions or gain new perspectives. I always have to initiate those discussions.

Here’s an example email where I start to raise these issues:

To: Client

You asked me to think through the employer-side of the travel insurance service. How might we get insights from actual "employers"? What past experiences may encourage them or deter them from using a service like this?

Identifying Points of Potential Breakdown

My objective was to encourage the team to seek real customer feedback and gain a more holistic understanding of the customer experience.

When the big problem hits, Susan is dealing with multiple issues: a stressful event, canceling her trip, losing money, limited time off work, and exhaustion from fighting with insurance. She feels powerless when she discovers our service.

Empathizing with her situation can influence how the service is organized, marketed, and how customer service staff are trained. Instead of focusing only on the end result, like Susan getting a partial refund and us getting paid, it’s important to recognize that the solution for the customer begins much earlier, such as finding a service that offers hope.

Opportunities

A broader definition of the job-to-be-done encourages the team to view the service from the customer’s perspective. While we may think we’re selling a refund service, Susan is actually seeking guidance through a stressful situation. This broader view reveals additional opportunities:

  • offering flexible flight options
  • creating a ticket resale marketplace
  • providing financial counseling
  • building software for independent insurance claim evaluations

Next Steps

Continuing to get a clearer picture of the target audience, defining constraints (e.g., which type of user story to focus on), and advancing an idea of the solution.

Wireframes & Responsive Design For Fashion / E-Commerce Client (2015)

I created wireframes and a partial high fidelity design for a fashion e-commerce site looking to increase its click-through rate.

Direction

My hypothesis was that larger images and exposed curators would encourage users to browse more slowly and pay more attention to each item. Instead of 3 columns of smaller images, I opted for 2 columns with larger images. I exposed basic item details, including the brand, because this site is all about curated, select items from boutique brands.

Wireframing

I sketched a desktop concept in Adobe Illustrator and annotated my design rationale and interaction details. I then showed the flow of various edge cases and states:

I use Illustrator so I can freely explore layout options without being constrained by the shapes and features of wireframing software.

I created a responsive mobile view in parallel, showing how the columns could be collapsed and filters overlaid:

High Fidelity Designs

Business Analysis – Audits and Strategic Planning (2013)

Objectives

Executed an audit of IT systems (what they do, how old they are, who uses them and why) and operations (where staff are allocated, how programs share resources):

  • Interview all program heads
  • Design qualitative and quantitative research as needed
  • Summarize current state and identify opportunities for improvement
  • Present findings to management

Solution

I audited operations at a large Public Health Sector organization. I interviewed stakeholders, collected information on IT systems and people, and presented an analysis with diagrams and organizational recommendations.

I delivered a presentation and wrote a 60-page report on the state of IT operations, covering 9 program areas:

Expanded The Scope

On my own initiative, I expanded the scope to include strategic guidance for management based on my findings. This included organizational models, a service blueprint (lists of responsibilities, goals, beneficiaries, internal vs. external stakeholders), and a governance survey (an audit of the management process itself).

The Work (Service Blueprinting)

I created a Blueprint to help management better understand the scope of the organization (using “The Work” model) and the effect of external factors (which weren’t always obvious to the people managing The Work).

I further broke down the organization into areas like Data, Processes, Business Applications, and People. I then detailed all the internal support activities that are required to manage each aspect of the organization. Finally, I mapped internal activities to public services:

Surveys And Interviews

I created multiple surveys (scoping them and writing the questions). The big challenge was that each program used different terminology and saw itself differently in relation to the whole. I had to frame questions in plain English so that all participants would interpret them consistently.

Surveys included systems and process questions:

Surveys also included self-evaluation of the management team and their relationship to IT (provider of services):

When a standard survey format was insufficient, I created a custom framework and format:

After survey data was collected, I interviewed the head of each program. In the end, I presented my findings to the whole team.

Data Analysis And Customized Presentation

I created different presentations for different types of data.

For data pertaining to relationships, I created a relationship overlap diagram:

For standardized data, I created a color-coded map to highlight pain points:

From qualitative data and interview notes, I extracted and summarized the key actionable requirements:

Governance Framework

I created a survey to assess the effectiveness of the IT Steering Committee in areas like communication and resource management. I also created an overarching framework to convey the scope of organizational activities, weak points, and opportunities (SWOT analysis):

More Leadership Projects:

Presented Vision for IT Service Delivery (2012): Presented to cross-divisional senior management about transforming IT services through collaboration and workflow automation. Current state included poor collaboration across silos, resistance to change, and lack of process transparency. I explained how process definition enables measurement and creating efficiencies through reuse. I also presented ITIL Service Design principles and sample IT service catalogues.

Charter for TPH IT Strategic Plan (2013): IT wanted to understand the gaps in its service delivery to business and develop a service-oriented strategy to create Business-IT alignment. I advised senior managers on creating a project plan for this initiative. I developed problem statements, objectives, and analyzed risks. I prepared a detailed roadmap of activities.