UX Research Training: How to Think Before You Execute

The second swing

I said last week I picked the bat back up.

What I forgot is how strange the next part feels.

You put something out into the world, and now you have just enough feedback to start questioning it. Not silence, not clarity, just a handful of signals that can send you in ten different directions if you let them.

That’s the second swing.

"This wasn't even my use case...but it actually helped me frame a selling story."

I sent out an early version of something I’ve been working on. Not polished, not finished, just real enough that people could actually try it. And watching what happened next has been more useful than anything I could’ve done on my own.

One person used it in a completely different way than I expected — to shape a selling story. But what stuck with them were the fundamentals: getting clear on why the work exists, who it’s for, what outcome actually matters. The kind of thinking we all know we should do, and somehow still skip when we’re moving fast.

"The outcome focus was the most useful part. Forced me to think about why this matters."

Someone else got stuck almost immediately. Not because anything broke, but because they didn’t have something concrete to plug into it. Which made me realize pretty quickly that this kind of thing only works if you have something real to push against.

"I got stuck because I didn't have a real project to use."

And then there was the more honest feedback. Not knowing how to answer certain questions. Not being sure how this was different from just…thinking it through on your own. But even there, the structure still helped, and people appreciated it. Just having something hold the shape of the work made it easier to move forward.

At the same time...

I had a conversation with a former client, a researcher in fintech, that put all of this into sharper focus.

"This feels less like a tool and more like a manager giving me feedback."

They’re going all in on AI. Their usage is being tracked. It’s showing up in performance conversations. There’s real pressure to not just use the tools, but to use them well.

And even there, the bottleneck isn’t the tools.

It’s knowing how to think through the work in the first place. What approach to take. What good looks like. How to design something before you execute it.

Which brings me back to a question I've asked over and over again:

If fewer people are learning about proper research and the judgment it requires on the job… how does anyone learn how to do this well, now?

I don’t have a clean answer yet.

But I’m starting to think this thing I’m building can play a big part in it. Not by replacing experience, but by making the thinking more visible while you’re in it.

I’m still very much in the second swing phase. Trying things, watching what happens, resisting the urge to overcorrect too quickly.


If you use GPT or Claude and are open to experimenting a bit, I’d love a few more people to kick the tires on this.

I’m thinking 20 minutes, a screenshare, you using it on something real while I watch and ask questions. Very scrappy, very early, very helpful (for me at least).

If you’re up for it, just hit reply and say “I’m in” and include your full name, please.

If you’re thinking about jumping in, you’ll probably get the most out of this if:

  • You’re responsible for some aspect of applied research in your work (UX, market, marketing, etc.)

  • You’ve had to design a study, not just run one

  • You’ve ever paused before starting and thought, “what’s actually the right approach here?”

Bonus points if:

  • You coach or mentor non-researchers (designers, PMs, founders) to do research

  • You’ve tried using GPT or Claude for this kind of work and felt like… it sort of works, but also kind of doesn’t

  • You’re working from a messy brief, an RFP, or something that’s not fully defined yet

  • You’re moving fast and don’t always have time to think everything through the “perfect” way

  • You work in a more regulated or high-stakes environment (healthcare, finance, etc.) where getting this wrong actually matters

Probably not a fit (for now):

  • You don’t have something real to test this on

  • You’re looking for something polished vs something early and a bit scrappy

Also, you don’t need to share anything sensitive. This is about how the thinking works, not your confidential docs. Feel free to bring something made-up or sanitized if that’s easier.

And if you’re sitting in your own version of the second swing right now, that messy middle where you’ve started something and aren’t quite sure what to make of the early signals, I’d love to hear about that too.

I read every reply.

– Michele


Speak up, get involved, and share the love!


And that’s a wrap!

We try to alternate between a theme and Insights/UX/UXR jobs, events, classes, articles, and other happenings every so often. Thank you for all of the feedback. Feedback is a gift, and we continue to receive very actionable input on how to make Fuel Your Curiosity more meaningful to you.

What do you think? We're constantly iterating and would love to hear your input.

Stay curious,

- Michele and the Curiosity Tank team


