Web Order: Benchmarking
Project Overview
A quantitative benchmark run once a quarter to track web order performance over time. This helps communicate the ROI of design effort and supports continuous improvement of the experience.
My Contributions
- Created the test environment
- Outlined study goals and metrics
- Recruited participants through Respondent.ie
- Created the test script
- Analysed and documented the results
- Determined formal recommendations and next steps
- Communicated results to stakeholders
My Process
Web Order Benchmark
Goal
- Track improvements in the experience over time
- Determine whether design changes are having a positive impact on the experience
- Compare our experience against competitors
- Demonstrate the value of design to stakeholders or clients
- Estimate how much design contributes to business goals
Approach
Method: Unmoderated test through Lookback.io
Participants: 10 (5 male, 5 female)
Recruitment method: Respondent.ie
Product: Desktop Web Order
Metrics
- Time on Task: The amount of time a participant takes to complete a given task
- Successful task completion: The percentage of tasks that participants complete correctly
- Critical errors: Errors that block the participant from successfully completing a task
- Non-critical errors: Errors that do not prevent successful completion of a task but do slow the experience down
- Task Satisfaction: How participants felt about each task
- Test Satisfaction: How participants felt about the overall experience
- SUPR-Q (Standardised User Experience Percentile Rank Questionnaire): A standardised measure of usability, credibility, appearance, and loyalty
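To make the roll-up concrete, here is a minimal sketch of how these metrics could be aggregated across participants for a single task. It is illustrative only: the TaskResult record and summarise helper are hypothetical names, not part of Lookback.io's or Respondent.ie's tooling.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    seconds: float           # time on task
    completed: bool          # successful task completion
    critical_errors: int     # errors that blocked completion
    noncritical_errors: int  # errors that slowed but did not block
    satisfaction: int        # post-task rating, e.g. 1-5

def summarise(results):
    """Roll per-participant task results up into benchmark metrics."""
    n = len(results)
    return {
        "time_on_task_avg_s": mean(r.seconds for r in results),
        "completion_rate_pct": 100 * sum(r.completed for r in results) / n,
        "critical_errors": sum(r.critical_errors for r in results),
        "noncritical_errors": sum(r.noncritical_errors for r in results),
        "satisfaction_avg": mean(r.satisfaction for r in results),
    }

# Two of the ten participants shown; the rest follow the same shape.
results = [
    TaskResult(142.0, True, 0, 1, 4),
    TaskResult(201.5, False, 1, 0, 2),
]
print(summarise(results))
```

Tracking the same aggregates each quarter is what makes the benchmark comparable over time.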
Outcome
From this study, we were able to identify key areas for improvement and validate design decisions made in previous research studies.
Next Steps
- Recommendations presented to PMs and the Head of Product
- Recommendations to be translated into backlog items in Jira, ranked by severity
- Follow-up benchmark to run in the next quarter