One personal example from a previous work experience comes to mind: I was working for a social-media curation company that was really keen on increasing share rates for their articles. So, naturally, they did quite a bit of A/B testing on how the size, placement, color, and other variables for the Share button affected the number of shares they were able to squeeze out of users. Because social sharing was a key metric that drove their overall profitability, many employees might have viewed pursuing this particular research effort as a successful endeavor in and of itself. But I disagree.
Macro Research
By not taking a macro look at the overall user experience through more comprehensive user research, we missed an opportunity. Neglecting other means of measuring the user experience is a potential downfall of any very specific research method such as A/B or multivariate testing.
It is important to ask macro questions such as “Where are users getting stuck?” and “Do users understand our value proposition?” so we can frame micro-focused questions such as “How can we get users to share more?” relative to larger user-experience and business goals.
Looking at only a single metric such as users’ inclination to share articles is not necessarily indicative of the quality of the overall experience they’re having. How would users feel about a more ostentatious Share button after 5 minutes of use? After 2 weeks? These were questions we didn’t, but should have, asked up front.
After the Share button redesign, the company experienced an initial, positive boost in some KPIs, but the repeat interaction rate ultimately dropped, and the longevity of our user base diminished. Collectively, these negative metrics ended up outweighing the extra social shares we got early on. Taking an overly narrow, micro viewpoint in both research design and the validation of research can be dangerous.
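One way to guard against this trap in an A/B test is to evaluate a longer-term guardrail metric, such as repeat visits, alongside the primary metric. The sketch below uses a standard two-proportion z-test with entirely hypothetical numbers (the share and return rates are invented for illustration, not taken from the company described above): the share rate improves significantly, yet the 7-day return rate degrades significantly, which is exactly the pattern that a shares-only analysis would miss.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: how strongly does variant B's rate differ from A's?

    Positive z means B's rate is higher than A's; |z| > 1.96 is
    conventionally significant at the 5% level (two-sided).
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that A and B share one true rate
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results, 10,000 users per arm:
# Primary metric: share rate rises from 4.8% to 5.6% -- a significant win.
z_shares = two_proportion_z(480, 10000, 560, 10000)

# Guardrail metric: 7-day return rate falls from 32% to 29.5% -- a
# significant loss that a shares-only readout would never surface.
z_return = two_proportion_z(3200, 10000, 2950, 10000)

print(f"share-rate z = {z_shares:.2f}, return-rate z = {z_return:.2f}")
```

Judged on shares alone, the variant ships; judged with the guardrail in view, it warrants the kind of macro questions discussed above.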