Problem & Solution
Cyber insurance involves answering detailed, technical questions that brokers aren’t used to. I designed and shipped an AI assistant to help users answer those tough questions.
How Did This Project Arise?
Soon after ChatGPT went viral, I was asked to investigate how we could embed Gen AI into our product. The new AI tech made it possible to address long-standing aspects of our product vision, including:
- Design Principle: We’ve long framed Relay as an “Assistant” to brokers
- Problem Validation: Customers struggled with long, technical questionnaires. But how big a problem were the questions? Big enough for customers to use an AI assistant, and what additional help would they ask for?
This project would address a range of business interests as well:
- Strategic: Get users comfortable seeing AI in the product so we could build on it; create buzz by being the first in our space to offer it
- Tactical: Let sales and marketing start bringing up AI in calls and gauge customer reactions. Would the Support team report lower onboarding anxiety?
- Internal Learning: Skill up the development team in AI and lay the groundwork for later work
How Did I Validate the Problem?
Early on at the company, I helped implement a “continuous discovery” track that ran parallel to our sprints. From these ongoing interviews with customers, sales, and support, I knew two things scared off brokers: (1) lack of familiarity with cyber insurance, and (2) friction in the workflow, since brokers had to ask their clients for information on top of what they already collected for other insurance lines. We had an opportunity here to educate brokers AND help them advise their own clients.
An interesting paradox meant the problem wasn’t going away: brokers needed the ability to get quotes from multiple providers at once, but the more providers, the more questions they had to deal with.
What I Did & With Whom
My key deliverables were:
- UI concepting, design, and implementation oversight
- Prompt engineering
I worked with a number of people, including:
| Stakeholders | Primary Concerns | Our Collaboration |
| --- | --- | --- |
| Head of Product | Ship something useful quickly; beat competitors in the AI game; mitigate legal risk with a disclaimer; UI feedback (pushed for more “in your face” placement) | We discussed ideas and chose a UI direction together (it needed to be dead simple for non-tech-savvy people, with no edge cases to complicate development). I negotiated a less obtrusive UI that was still discoverable enough for sales and marketing. |
| Lawyer | Disclaim the heck out of this feature | I toned down the disclaimer from a blocking interstitial full of legalese to a small non-blocking footer. |
| Developer | Minimize scope, as AI was new to us | I paired closely with an engineer to refine the design, and negotiated error-handling logic and response-time testing to keep the experience free of edge cases. |
| User | Something that doesn’t require additional training or explanation | I ran rapid hallway tests with 2-3 former brokers, which was enough to confirm the feature was quick to learn before we shipped. |
Interaction Design
In order to use the AI Assistant, a broker would hover over a question and click “Clarify”. Here is a video showing the full interaction:
What Were the Hardest Parts?
Choosing from many possible prompts. I knew our users’ typical concerns from experience, and I didn’t want them to waste effort typing anything. My key decision was to restrict the choice to four curated starting questions, with short, user-friendly labels hiding the more complex underlying prompts. I tested numerous prompt versions to ensure consistent, high-quality responses with no hallucinations.
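To make the label-to-prompt idea concrete, here is a minimal sketch of the pattern. The option labels, prompt wording, and the `completeChat` client are hypothetical stand-ins, not our production code:

```typescript
// Curated "Clarify" options: the broker only ever sees the short label;
// the detailed prompt is built behind the scenes from the question text.
type ClarifyOption = {
  label: string; // short, user-friendly label shown in the UI
  buildPrompt: (questionText: string) => string; // underlying prompt
};

const CLARIFY_OPTIONS: ClarifyOption[] = [
  {
    label: "Explain this question",
    buildPrompt: (q) =>
      `You are a cyber insurance expert. Explain the following application question ` +
      `in plain language for an insurance broker. If you are unsure, say so rather ` +
      `than guessing.\n\nQuestion: ${q}`,
  },
  {
    label: "Why is this asked?",
    buildPrompt: (q) =>
      `Explain briefly why a cyber insurer asks this question and which risk it ` +
      `relates to.\n\nQuestion: ${q}`,
  },
  // ...two more curated options in the same shape
];

// `completeChat` stands in for whichever LLM client is actually used.
async function clarify(
  option: ClarifyOption,
  questionText: string,
  completeChat: (prompt: string) => Promise<string>
): Promise<string> {
  return completeChat(option.buildPrompt(questionText));
}
```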
Minimizing distractions. Leadership wanted AI to be prominent in the UI (like on the home page) with a prominent, blocking disclaimer. I argued we had an existing user behavior to build on: users could already hover on a troublesome question to see who was asking it. So I placed the AI affordance in that context, next to the question, and negotiated with Product and Legal to keep the disclaimer small and out of the way.
Scalable pattern. I needed to establish a pattern that could be reused in other situations. A contextual trigger that opened a sidebar seemed scalable, and it was in fact later reused to prototype a different feature.
What Alternatives Did I Explore?
Alternatives to AI. I had considered writing high-quality custom content for each question instead of using AI, but the upfront work and ongoing maintenance were cost-prohibitive.
Open/Closed Interactions. I considered using the UI itself as research: to learn what other questions users might have “in the moment”. One idea was an “Add Question” interaction. However, I decided against it to keep scope tight and control risk with curated questions.
Dedicated flow vs. sidebar. Another idea was a questionnaire wizard that would explain each question and spread AI guidance over the whole flow. But that was too much work for v1 and over-optimized for first-time users.
How Did We Measure Success?
We instrumented analytics to see how often the feature was triggered, on which questions, and which kind of help the user requested. The data wasn’t yet conclusive enough (given our long sales and adoption cycles) to establish a direct impact on the bottom line. There were larger factors in play; for example, we soon shifted strategic focus to bypassing questions altogether, which meant fewer “doer” users would need to consult the AI Assistant, at least initially.
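As a rough illustration of the kind of event this instrumentation implies (the event name, property names, and `track` function are hypothetical, not the actual schema):

```typescript
// Fire one analytics event per use of the "Clarify" trigger, capturing
// which question it was used on and which curated help option was chosen.
function trackClarify(
  track: (name: string, props: Record<string, unknown>) => void,
  questionId: string,
  optionLabel: string
): void {
  track("ai_assistant_clarify_triggered", {
    questionId, // which questionnaire item the broker was on
    optionLabel, // which curated help option they picked
    triggeredAt: new Date().toISOString(),
  });
}
```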
We did collect qualitative feedback from the Support team doing onboarding: the existence of the AI assistant contributed to how the buyer persona perceived the value of our product.
The marketing efforts around AI also generated some sales leads, which matters for an early-stage startup, and the AI experience the team gained opened up more opportunities later.