Overview

I was leading research for a large consolidation project that would impact millions of our customers. Customers had 3 websites and 2 apps available for self-service, and this initiative aimed to unify those platforms into 1 website and 1 app.

The research spanned 14 teams and multiple lines of business.

The challenge

The research had primarily focused on the usability of designs within the new consolidated experience. However, teams continued to express reservations about how this consolidated experience would compare to the previous, un-consolidated one. I needed funding to reliably compare the customer experience of completing priority tasks.

Role

I was responsible for pitching the study, co-designing it, coordinating work with a third-party vendor, and socializing the final results.

Tools

Mural, PowerPoint, Word, MUIQ, Tableau

The Pitch

Up to this point, teams had been focused on deadlines and deliverables. But it became clear that answering the central question about impact to the experience would require a study of a different scale.

Different teams were also responsible for interconnected features, which limited the feedback our traditional design- and feature-team-focused studies could collect on journey-level questions.

I created a presentation that laid out the gap in our data, delivered it to the Sr. Director of Experience Research and the VP of Design, and won funding approval to complete our testing across 3 platforms.

Analytics Review

We worked across 14 teams, each driving toward unique outcomes. The benchmark could not accommodate tasks targeting every one of those outcomes, so I needed a way to prioritize tasks and to justify why the selected tasks did not address each and every team outcome.

Earlier in the project, I had compiled proto-personas developed from analytics. That data helped surface priority customer tasks. I was also able to look through call-center data to identify the major drivers of customer support calls, pinpointing areas of high business value.

Access to data was not always straightforward; I worked across teams to find and understand the most appropriate sources.

Test Plan

Our team did not have the right tool to conduct a quantitative usability study of this kind, so we worked with an industry leader, Jeff Sauro (Measuring Usability), to help design and field the study.

Once we had our tasks, I had to decide which metrics we were primarily concerned with. I knew cross-industry comparison was important, so SUS and NPS would be key metrics.
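SUS makes cross-industry comparison possible because it follows a fixed scoring formula: 10 items rated 1 to 5, with odd items scored as (rating − 1) and even items as (5 − rating), the sum scaled by 2.5 to a 0–100 range. A minimal sketch with hypothetical responses:

```python
def sus_score(responses):
    """Standard SUS scoring for one respondent's 10 item ratings (1-5)."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale 0-40 raw sum to 0-100

# Hypothetical, fairly positive respondent
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```

Averaging these per-respondent scores gives the benchmark figure that can be compared against published industry norms.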

We asked both subjective and objective success questions to pinpoint major problem areas where a customer might have felt successful but had actually failed. (We could not validate success through URL, so we designed questions to verify whether a customer had indeed reached the right place at the end of the task.)

Since each of those questions was asked after every task, we needed to prioritize the data that would motivate the clearest action.
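Crossing the subjective and objective answers per task attempt is what surfaces the most actionable segment: "false successes," where a participant believes they completed the task but did not. A minimal sketch of that cross-tab, using hypothetical data and labels:

```python
from collections import Counter

def success_crosstab(results):
    """results: iterable of (perceived_success, actual_success) per task attempt."""
    labels = {
        (True, True): "true success",
        (True, False): "false success",   # felt successful, actually failed
        (False, True): "hidden success",  # succeeded but didn't realize it
        (False, False): "true failure",
    }
    return Counter(labels[(p, a)] for p, a in results)

# Hypothetical attempts for one task
attempts = [(True, True), (True, False), (True, False), (False, False), (True, True)]
print(dict(success_crosstab(attempts)))
# {'true success': 2, 'false success': 2, 'true failure': 1}
```

A high "false success" count on a task is a strong signal that the flow misleads customers even when they feel confident.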

Fielding

I decided to prioritize 3 platforms for the study, which then entailed 2 user types. Fielding went smoothly for 2 platforms and 1 user type, but we ran into struggles with 1 platform and 1 user type.

In the end, I determined the minimum sample size I was prepared to accept while still maintaining the reliability of the results.

Socialization

Once the study was complete, I made the rounds across teams to share the results.

I chose to focus these read-outs on individual feature teams so that the results could inform their roadmaps going forward.

There was a positive response and interest in the results, but given the impending launch of the new experience, the feedback was also tempered.

Upon reflection, I saw that the real value would come with the next round, which would establish the comparison point. I took comfort in the fact that this effort was primarily about establishing an anchor against which we could measure future changes.

Scaling

Once the first round was complete, I moved toward ensuring that the effort continued and that it would scale. I needed to:

  • Define the cadence at which we would conduct benchmarking

  • Evaluate tools and processes that would allow our team to run the study regularly

  • Build out integrations with other data streams

I am happy to say that the program continues!