The Feature
To apply for cyber insurance, a broker has to answer many technical questions. I helped unify and streamline questionnaires from different providers and designed an AI assistant to help users answer tough questions:

Key Activities
Discovery with customers
Prompt engineering
Visual concepting/design
Negotiating with legal
Dev collaboration
Time Frame
A few days for this feature
(overall application form covered several projects and months of iteration)
Team
Designer (me)
PM
Developer
2-3 SMEs
Core Problem
Brokers want quotes from multiple providers. More providers means more questions upfront. More questions means it’s harder to persuade insurance seekers to consider coverage. And it’s wasted effort for brokers if clients never purchase. Making this worse was the lack of a unified approach among providers, leading to repetitive and inconsistent questions:

Problem Timeline
I chipped away at this problem across multiple projects over a long time, both iterating and inventing new approaches:

Where Questionnaire Fits Into the Workflow
There are multiple features tackling this challenge and several alternative workflows aimed at different kinds of brokers. I’ve highlighted the core workflow and where Filling the Underwriting Questionnaire sits:
Customer Research
In my continuous discovery with customers, Sales, and Support, I saw how less sophisticated brokers struggled with the technical questions asked by insurance providers.
⚡ Cyber Is Intimidating
Cyber is an upsell/add-on coverage that’s technical and unfamiliar to many. This creates resistance to offering it, and a broker who fails to offer coverage is exposed to liability.
⚡ Workflow Friction
To offer cyber on top of the main insurance policies, brokers had to justify why they were asking their clients for more information.
First Task: Unify and Streamline Questionnaires
There were many repetitive and inconsistent questions among providers. To drive a unified approach, I made myself an expert on the questions. Here is a sample question about backups:

Notice how terminology, specificity, and structure can vary between providers.
Next, I developed 12 Patterns to drive my strategy for negotiating and transforming questions. The trick was to push UX improvements in a way that mitigated the legal impact of modifying wordings that had been carefully crafted by underwriters.
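To make the transformation idea concrete, here is a minimal sketch (the schema, names, and wordings are all hypothetical, not the actual system) of how a single unified question can map back to each provider's original wording, so one broker answer translates into the exact text each underwriter approved:

```typescript
// Hypothetical data model: one streamlined question shown to brokers,
// mapped back to each provider's untouched, underwriter-approved wording.
interface ProviderVariant {
  providerId: string;
  originalWording: string;                // underwriter-approved text, unmodified
  mapAnswer: (unified: boolean) => string; // translate the unified answer back
}

interface UnifiedQuestion {
  id: string;
  label: string;    // the single streamlined wording brokers see
  pattern: string;  // which of the 12 transformation patterns applied
  variants: ProviderVariant[];
}

// Illustrative example: two providers asking about backups differently.
const backups: UnifiedQuestion = {
  id: "backups-offsite",
  label: "Do you keep offline or offsite backups of critical data?",
  pattern: "merge-equivalent-wordings",
  variants: [
    {
      providerId: "provider-a",
      originalWording: "Are backups stored off-network?",
      mapAnswer: (yes) => (yes ? "Yes" : "No"),
    },
    {
      providerId: "provider-b",
      originalWording: "Does the applicant maintain offsite backups?",
      mapAnswer: (yes) => (yes ? "Yes" : "No"),
    },
  ],
};
```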

Finally, I negotiated and secured alignment from all our provider teams and internally, based on a rock-solid argument:
- Lack of consistency across questionnaires hurts UX (I shared broker feedback)
- Providers who don’t align will lose business to peers
- The changes I proposed are safe and responsible
- The changes are consistent with what peers are doing
I collaborated with folks across the organization as well as our customers and vendors:

But It’s Still A Lot of Tough Questions!
I learned a lot about cyber risk in my research with brokers. Many teams, however, didn’t have the luxury of a cyber expert they could reach out to. They were still intimidated by cyber questions.
If we could educate brokers seamlessly with inline assistance, they could:
- be more confident in selling
- make better assumptions
- streamline information gathering from clients
UX and Strategic Considerations
I worked with the Head of Product to decide how we could quickly ship some kind of AI assistant that would be useful. Besides chipping away at a known problem, this project was an opportunity:
For Discovery
How effective would “in the moment” help be? Would it replace the need for upfront education (studying up on cyber insurance, taking webinars)? This feature would also allow Sales to start bringing up AI in calls and gauge customer reactions to the technology: would people feel safe using it?
For Laying Groundwork
Getting users comfortable seeing AI in the product so we could build on it, creating buzz, and building the internal skillset to tackle more complex features later.
⚡ Key Risk: Mitigating legal risk (e.g., AI giving bad advice) was key, which is why I decided on a fixed set of prompts to address top broker questions.
Design Choices
User Testing Questions
I knew our users’ typical concerns from experience. My key decision was to restrict the choice to 4 starting presets, with shorter, user-friendly labels hiding the complex underlying prompts. I tested numerous prompt versions to ensure consistent, high-quality responses and no hallucinations, and did rapid hallway testing with 2-3 former brokers. This was sufficient to narrow the choices to 4, ship fast, and start gathering real-world feedback.
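A minimal sketch of how the presets might be wired up (labels, prompt text, and function names are illustrative, not the shipped copy): each short label hides a longer, tested prompt, and the hovered question is injected at request time:

```typescript
// Illustrative presets: short labels shown to brokers, longer tested
// prompts sent to the model. The shipped labels/prompts differed.
const PRESETS = [
  {
    label: "Explain this question",
    prompt:
      "Explain in plain language, for an insurance broker with no technical " +
      "background, what the following underwriting question is asking and " +
      "why insurers ask it. Do not give legal advice.",
  },
  {
    label: "How do I ask my client?",
    prompt:
      "Suggest a short, non-technical way a broker could ask their client " +
      "for the information needed by the following question.",
  },
  // ...two more presets in the same shape
] as const;

// The question being hovered is appended at request time:
function buildMessages(presetIndex: number, questionText: string) {
  const preset = PRESETS[presetIndex];
  return [
    { role: "system", content: "You are a cautious assistant for insurance brokers." },
    { role: "user", content: `${preset.prompt}\n\nQuestion:\n"${questionText}"` },
  ];
}
```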

Where to Place the Affordance
Leadership wanted AI to be prominently featured (for marketing). Instead, I decided to tack it onto an existing interaction: users could hover on a tough question to see who’s asking it. That seemed like a great place to start.

Layout
I needed to establish a pattern that could be used in other situations in the future. An inline (contextual) affordance that opened a sidebar seemed reusable. In fact, it was later used to facilitate prototyping a different feature.
An alternative idea was a Wizard that would explain each question and be more of its own guided version of the flow. But that was too much work for v1 and too focused on first-time users.

Within the sidebar version, one idea was to remove all choice and provide default help instead of prompt choices. The simpler approach would give a response faster, but the solution would also feel less “comprehensive” (different brokers might want to ask different questions) and less interactive. Another decision: should we continue a “thread” if multiple questions were asked, or replace the answer when a new question is asked? I kept it simple to start.
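Here is a rough sketch of the “replace, don’t thread” choice (all names hypothetical): the sidebar holds a single question/answer pair, and asking a new preset question overwrites the previous answer instead of appending to a thread:

```typescript
// Hypothetical sidebar state: one answer at a time, replaced on each ask.
interface AssistantState {
  questionId: string | null;  // the underwriting question being hovered
  presetLabel: string | null; // which preset the user picked
  answer: string | null;
  loading: boolean;
}

let state: AssistantState = {
  questionId: null,
  presetLabel: null,
  answer: null,
  loading: false,
};

async function askPreset(
  questionId: string,
  presetLabel: string,
  fetchAnswer: () => Promise<string>,
) {
  // Replace, don't thread: clear the previous answer before fetching.
  state = { questionId, presetLabel, answer: null, loading: true };
  const answer = await fetchAnswer();
  state = { questionId, presetLabel, answer, loading: false };
}
```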
Presets vs Open Ended. To learn what other questions users might have “in the moment”, I considered adding an “Add Question” (PLUS) interaction. However, I decided against this to keep scope tight and control risk with curated questions.

Brand and Color
I decided to brand everything AI-related purple: we didn’t use it anywhere else, and I chose a shade that worked well with my existing palette. We knew the AI would need some kind of persona or logo, so I created several logo ideas, including a brain and a rocket, knowing they were temporary. Later on we replaced it with a smiling robot face icon, which we named Quotie and used in other AI features.
Negotiating the Disclaimer with Legal
I tested the Gen AI provider against my hand-picked prompts to mitigate the risk of inappropriate responses. I then crafted a reduced disclaimer (compared to the one provided by legal), used UI mocks to explain the context, and secured our lawyer’s agreement.
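Conceptually, that testing loop might look like the sketch below, assuming an OpenAI-style chat completions endpoint; the model name and red-flag phrases are assumptions for illustration, not the actual setup:

```typescript
// Sketch of a manual prompt-regression pass: run every curated prompt
// several times and flag responses containing risky phrasing.
// Endpoint, model, and phrases are illustrative assumptions.
const RED_FLAGS = ["you should purchase", "guaranteed", "legal advice"];

async function checkPrompt(prompt: string, runs = 3): Promise<string[]> {
  const problems: string[] = [];
  for (let i = 0; i < runs; i++) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // placeholder model name
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    const text: string = data.choices[0].message.content;
    for (const flag of RED_FLAGS) {
      if (text.toLowerCase().includes(flag)) {
        problems.push(`run ${i + 1}: contains "${flag}"`);
      }
    }
  }
  return problems;
}
```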
Measuring Success
We instrumented analytics to see how often the feature was triggered and on which questions. Given our long sales and adoption cycles, we needed more time to establish a direct impact on the bottom line, and there were larger factors in play: strategically, we soon shifted focus to bypassing questions instead (which meant fewer “doer” users would need to consult the AI Assistant, at least initially). So there were rapid iterations on other fronts chipping away at similar problems.
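The instrumentation itself can be as simple as a couple of typed events; the sketch below is hypothetical, not the shipped schema:

```typescript
// Hypothetical event shapes: which question triggered the sidebar and
// which preset was used. Property names are illustrative.
type AssistantEvent =
  | { name: "assistant_opened"; questionId: string }
  | { name: "assistant_prompt_used"; questionId: string; presetLabel: string };

function track(event: AssistantEvent) {
  // Forward to whatever analytics pipeline the product already uses.
  console.log("analytics", event);
}

track({ name: "assistant_opened", questionId: "backups-offsite" });
track({
  name: "assistant_prompt_used",
  questionId: "backups-offsite",
  presetLabel: "Explain this question",
});
```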
We collected qualitative feedback from the Support team doing onboarding, which suggested that having an AI assistant could reduce adoption anxiety (“What if I offer cyber and a client asks me something I don’t know?”). Also, the marketing efforts around AI generated some sales leads, which is an important thing for an early-stage startup. And the AI development experience within the team yielded more opportunities later.
Before Relay, we didn’t think we would be able to sell any Cyber.
– Relay Customer