Defining High-Performance Agent Portals
Project Overview
As part of WR Berkley’s Digital Team, I’m helping build an enterprise solution that unifies digital tools across operational units. During early modernization planning, questions arose about how agents perceive “performance” in the portals they use daily — and how those perceptions influence their choice of carrier.
I led mixed-method UX research to define what high performance truly means for insurance agent portals. By combining qualitative and quantitative insights, the project uncovered the experience factors that most influence agent satisfaction and carrier selection — directly shaping the modernization roadmap for Berkley’s carrier portals.
My Contributions
- Designed and led a mixed-method research initiative combining moderated interviews and a quantitative survey of insurance agents and underwriters.
- Partnered with product and engineering to frame research questions around user experience, speed, and reliability.
- Analyzed both behavioral and perceptual data to identify which aspects of “performance” most impact trust and preference.
- Synthesized findings into actionable insights that informed the carrier portal modernization roadmap and ongoing platform strategy.

Defining the Problem
I partnered with product managers and engineers to identify researchable questions, ensuring the findings would directly guide platform priorities. Using their input, I designed a quantitative survey to validate emerging interview themes at scale, ranking which experience factors most influenced satisfaction and carrier preference.
The modernization effort needed clarity on two questions:
- What does “high performance” mean to our users?
- How do those expectations guide carrier choice?
Approach
Mixed Method Research
Qualitative Depth and Quantitative Validation

Semi-Structured Interviews, Followed by a Broader Agent Survey:
- Moderated interviews captured in-context insights into daily workflows, highlighting moments when performance builds or erodes trust. These sessions helped uncover the emotional and practical dimensions of “speed,” “responsiveness,” and “reliability.”
- A large-scale quantitative survey tested these themes across a broader audience of agents, ranking which experience attributes most influenced satisfaction and carrier loyalty.
Focus:
- How agents evaluate a portal’s performance in real-world conditions.
- Which performance factors (load time, clarity of feedback, workflow stability) most affect satisfaction and carrier choice.
- What expectations define a “best-in-class” experience for digital quoting and servicing.
- How performance perceptions differ among high-volume, independent, and captive agents.
Key Themes of Agent-Defined “High Performance”
- Reliability and uptime: Interruptions or errors during quoting were the #1 source of frustration, cited by 73% of participants.
- Ease of navigation: Clear structure and predictable flows reduced perceived slowness; 68% said “knowing where I am in the process” mattered more than raw speed.
- Transparency: Visible progress and clear feedback built trust; 61% said they valued transparency over interface aesthetics.
- Communication: Response time and quality are a direct reflection of carrier reliability.
These findings reframed performance as a user experience outcome, not just a technical metric.
Outcome
What This Looks Like In Practice: The Target Experience
This target experience translates research findings into tangible design and functional behaviors, making “high performance” visible and measurable in daily workflows.
Reduced Manual Entry
- Minimal required fields
- Application upload that parses data and prefills required fields
- “Same-as” functionality
- Real-time validation
Clear and Actionable Status Updates
- Surfaced progress for requests and visible “next actions” to reduce reliance on follow-up emails
- Example: “Underwriter review: same business day”
- Clear steps for follow-up tasks
- Instant IDs, policy numbers, and artifacts stored back to the AMS
Error & Referral Prevention
- Core submission requirements surfaced in advance
- Dynamic eligibility criteria shown as submission details are entered
End-to-End Self-Service
Common changes handled directly in the portal, e.g. auto ID cards, payments, billing preferences, and address updates.
Centralized Communication History
Message center with timestamps, templates, and proactive alerts (e.g., maintenance windows, missing documents).
Impact
Strategic
- Directly informed the carrier portal modernization roadmap, shaping both UX and technical performance goals.
- Recommendations provided a shared framework for defining and measuring “high performance” across the enterprise.
- Metrics will begin to connect UX improvements to measurable business outcomes, reinforcing UX as a driver in product strategy.
Organizational
- Established a shared vocabulary between product, UX, and engineering.
- Strengthened buy-in for experience-led modernization efforts.
- The research framework is now used to evaluate new digital initiatives across WR Berkley’s operational units.
Measuring Impact
- Decreased time-to-quote
- Lower abandonment rates
- Percentage increase in self-service tasks completed in-portal
- Decrease in manual service requests
- Agent CSAT
  - For quoting, focusing primarily on speed, clarity, and control
  - For self-service, focusing primarily on usability and trust