Platform Performance Research
Project Overview
As part of WR Berkley’s Digital Team, I’m helping build an enterprise solution that unifies digital tools across operational units. During early modernization planning, questions arose about how agents perceive “performance” in the portals they use daily — and how those perceptions influence their choice of carrier.
I led mixed-method UX research to define what high performance truly means for insurance agent portals. By combining qualitative and quantitative insights, the project uncovered the experience factors that most influence agent satisfaction and carrier selection — directly shaping the modernization roadmap for Berkley’s carrier portals.
My Contributions
- Designed and led a mixed-method research initiative combining moderated interviews and a quantitative survey of insurance agents and underwriters.
- Partnered with product and engineering to frame research questions around user experience, speed, and reliability.
- Analyzed both behavioral and perceptual data to identify which aspects of "performance" most impact trust and preference.
- Synthesized findings into actionable insights that informed the carrier portal modernization roadmap and ongoing platform strategy.

My Process
Defining the Questions

The modernization effort needed clarity:
- What does "high performance" mean to our users?
- How do those expectations guide carrier choice?

I partnered with product managers and engineers to turn these into researchable questions, ensuring the findings would directly guide platform priorities. Building on themes surfaced in early interviews, I then designed a quantitative survey to validate them at scale, ranking which experience factors most influenced satisfaction and carrier preference.
Mixed-Method Research
Capturing both depth and scale.

Semi-Structured Interviews, Followed By A Broader Agent Survey
We were looking to understand:
- How agents evaluate a portal's performance in real-world conditions.
- Which performance factors (load time, clarity of feedback, workflow stability) most affect satisfaction and carrier choice.
- What expectations define a "best-in-class" experience for digital quoting and servicing.
- How performance perceptions differ among high-volume, independent, and captive agents.
To achieve this, I designed a mixed-method approach combining qualitative depth and quantitative validation:
- Moderated interviews captured in-context insights into daily workflows, highlighting moments when performance builds or erodes trust. These sessions helped uncover the emotional and practical dimensions of "speed," "responsiveness," and "reliability."
- A large-scale quantitative survey tested these themes across a broader audience of agents, ranking which experience attributes most influenced satisfaction and carrier loyalty.
This combination allowed us to move beyond assumptions about technical metrics (like load times) and instead uncover what users actually perceive as performance. By triangulating qualitative stories with quantitative evidence, we built a robust model of experience factors that could directly guide both UX design and engineering priorities.
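As a simplified illustration of the quantitative ranking step, top-driver shares can be tallied from survey responses. This sketch is hypothetical: the attribute names and sample data are invented for illustration, not the actual study instrument or results.

```typescript
// Hypothetical tally of which experience attribute each respondent
// ranked as their top driver of satisfaction. Attribute names and
// responses are illustrative, not the real survey data.

type Attribute = "reliability" | "navigation" | "transparency" | "speed";

function topAttributeShares(responses: Attribute[]): Record<string, number> {
  // Count how often each attribute was chosen as the top driver.
  const counts: Record<string, number> = {};
  for (const r of responses) counts[r] = (counts[r] ?? 0) + 1;

  // Convert raw counts to whole-number percentages of all responses.
  const shares: Record<string, number> = {};
  for (const [attr, n] of Object.entries(counts)) {
    shares[attr] = Math.round((n / responses.length) * 100);
  }
  return shares;
}

const sample: Attribute[] = ["reliability", "reliability", "navigation", "transparency"];
console.log(topAttributeShares(sample)); // { reliability: 50, navigation: 25, transparency: 25 }
```

In the real study, rankings like these were what let us compare qualitative themes against scaled survey evidence.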
Analysis and Synthesis
Integrating both data sources revealed three key themes of “high performance” from the agent perspective:
- Reliability and uptime — interruptions or errors during quoting were the #1 source of frustration, cited by 73% of participants.
- Ease of navigation — clear structure and predictable flows reduced perceived slowness; 68% said "knowing where I am in the process" mattered more than raw speed.
- Transparency — visible progress and clear feedback built trust; 61% said they valued transparency over interface aesthetics.
These findings reframed performance as a user experience outcome, not just a technical metric.
What This Looks Like in Practice: The Target Experience
- Quote start: select product → appetite check returns "Eligible with notes" or "Ineligible (reason + alternatives)"
- Inputs: minimal required fields; upload application to prefill required fields; "same-as" toggles; real-time validation
- Status: persistent progress with SLA clock ("Underwriter review — target: same business day")
- Outcomes: clear steps; instant ID/policy number; artifacts stored back to AMS
- Service: endorsements, billing, COIs, ID cards completed in-portal with confirmation and audit logs
- Communication: message center with timestamps, templates, and proactive alerts (maintenance, missing docs)
This vision translates research findings into tangible design and functional behaviors — making “high performance” visible and measurable in daily workflows.
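The target behaviors above can also be expressed as a data model, which is how findings like these get handed to engineering. The sketch below is a hypothetical illustration — the type names (`AppetiteResult`, `QuoteStatus`, `validate`) and fields are invented for this write-up, not Berkley's actual portal API:

```typescript
// Hypothetical model of the target quote experience. All names and
// fields are illustrative, not a real system's schema.

// Appetite check: eligibility is never a silent yes/no — it carries
// notes, or a reason plus alternatives, matching the target behavior.
type AppetiteResult =
  | { outcome: "eligible"; notes: string[] }
  | { outcome: "ineligible"; reason: string; alternatives: string[] };

// Persistent status with an SLA target, so agents always know where
// they are in the process and when to expect the next step.
interface QuoteStatus {
  step: "inputs" | "underwriter-review" | "issued";
  slaTarget: string; // e.g. "same business day"
  missingFields: string[]; // drives real-time validation feedback
}

// Real-time validation: report missing or empty required fields
// immediately instead of failing later in the workflow.
function validate(
  required: string[],
  provided: Record<string, string>
): string[] {
  return required.filter((f) => !(f in provided) || provided[f].trim() === "");
}

const missing = validate(["insuredName", "effectiveDate"], {
  insuredName: "Acme Co",
});
console.log(missing); // ["effectiveDate"]
```

Modeling eligibility as a discriminated union (rather than a boolean) is one way to make "transparency" — the third research theme — enforceable in code rather than left to UI convention.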
Outcomes
Strategic impact
- This research directly informed the carrier portal modernization roadmap, shaping both UX and technical performance goals.
- The recommendations provided a shared framework for defining and measuring "high performance" across the enterprise.
- The recommended metrics will begin to connect UX improvements to measurable business outcomes, reinforcing UX as a driver in product strategy.
Organizational value
- Established a shared vocabulary between product, UX, and engineering.
- Strengthened buy-in for experience-led modernization efforts.
- Delivered a research framework now used to evaluate new digital initiatives across WR Berkley's operational units.