What UX Research Actually Looks Like Behind the Scenes
Thursday, I posted about researcher value in an AI world. Alexander Knoll, CEO of Condens, asked me exactly where an untrained researcher or AI would have missed it. Here's the answer.
𝗕𝗮𝗰𝗸𝗴𝗿𝗼𝘂𝗻𝗱
Website redesign. 6 segments, desktop + mobile, 76 unmoderated sessions across 12 prototypes.
𝗞𝗻𝗼𝘄𝗶𝗻𝗴 𝘄𝗵𝗶𝗰𝗵 𝗱𝗮𝘁𝗮 𝘁𝗼 𝘁𝗿𝘂𝘀𝘁 𝘃𝘀 𝗮𝗱𝗷𝘂𝘀𝘁
All prototypes were high-fidelity and built per segment and device, with only specific flows live in each. When participants hit dead ends or non-live links, many self-reported task outcomes didn't reflect what actually happened.
The data recorded outcomes that didn't happen. An untrained researcher accepts that output at face value, throws it all out, or abandons the study. A trained one knows to question it, investigate the source, and make a defensible call on what to keep, what to adjust, and what to set aside.
Some Ps dropped mid-session, expressing extreme frustration. That sentiment was about prototype limitations (which were communicated upfront), not the design, but it bled directly into confidence ratings and qual feedback. Knowing the difference and adjusting accordingly required watching and listening to 76 sessions.
SEQ scores required manual adjustment in both directions. Ps who spent 6 minutes finding the correct destination rated ease a 7/7. Others landed exactly in the right place but rated it a 2 because placeholder copy created doubt. That's human judgment applied at the task level, not just the session level.
𝗞𝗻𝗼𝘄𝗶𝗻𝗴 𝘄𝗵𝗮𝘁 𝘂𝗻𝗶𝘁 𝗼𝗳 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝘁𝗼 𝘂𝘀𝗲
For every session, the question wasn't just "is this session usable?" It was "is this specific task, within this specific session, on this specific device, for this specific segment, usable — and for what purpose?" Hundreds of individual calls. Get those wrong, and the conclusions are quietly, invisibly flawed.
𝗞𝗻𝗼𝘄𝗶𝗻𝗴 𝘄𝗵𝗮𝘁 𝘁𝗼 𝗹𝗼𝗼𝗸 𝗳𝗼𝗿 𝘁𝗵𝗮𝘁 𝗻𝗼𝗯𝗼𝗱𝘆 𝗮𝘀𝗸𝗲𝗱 𝗮𝗯𝗼𝘂𝘁
Across segments, a clear pattern emerged that no task, research goal, or client assumption was designed to explore. The 3 prospect segments wanted to browse and shop the site. The other 3 approached it transactionally. Together, they pointed to 4 distinct navigation paths the site needed to support, a pattern that only emerged because a human was holding all 12 studies simultaneously.
𝗞𝗻𝗼𝘄𝗶𝗻𝗴 𝗵𝗼𝘄 𝘁𝗼 𝘀𝘆𝗻𝘁𝗵𝗲𝘀𝗶𝘇𝗲 𝗮𝗰𝗿𝗼𝘀𝘀 𝗹𝗮𝘆𝗲𝗿𝘀
This study followed a Qualtrics survey of 516 respondents. Insights emerged first within each device per segment, then across each segment's sessions, and finally when all 6 segments were synthesized together. Patterns invisible at any one layer only became visible across the full arc because a human maintained the thread.
The platforms were essential to operating at this scale. Research experience is what made the results trustworthy, and what made them meaningful and actionable to me, the design team, and the client.