She Named It After the Best Researcher She Ever Worked With

You've been with me through the swings.

The teaser. The scrappy early sessions where people showed up with half-formed problems and we figured it out together. The messy middle, where I had just enough feedback to question everything and not quite enough to know what to do with it.

What I didn't tell you, until now, is where it all started.

In March, I was ghostwriting a keynote for the CEO of a qualitative research platform. The speech was about the future of research (AI-powered analysis, faster synthesis, better pattern recognition), all the things these platforms optimize for.

I was building BUILD Like A Pro at the time — Course 7 in the Ask Like A Pro series. I needed a real example to teach with. So I built one. Then building it made me realize I needed another example. And I went down a rabbit hole.

That rabbit hole became...

Plan Like A Pro: Together — a full AI-powered research planning partner. Trained on my proprietary methodology. Built to push back the way a strong research manager would, not just generate something that looks right. Platform-agnostic by design, because access changes, budgets shift, tools evolve and get acquired, and the question should always come before the approach.

This wasn't an accident. I knew exactly what I was building. But it evolved — significantly. Testers pushed past the plan into screeners, discussion guides, and secondary research. The tool handles all of it. I haven't optimized those downstream capabilities yet because the plan is the spinal cord of all that work. Get the plan right, and everything else has a fighting chance.

The keynote handed me the sharpest argument for why it needed to exist.

Every platform in this space starts at the same place: after the screener is written, the participants recruited, the guide finalized, and the data collected. After the plan was... whatever it was.

They assume everything upstream was sound: that the research question was anchored to an actual business decision, that the methods matched the objectives, that the right people were asked the right questions in the right way, and that stakeholders were invested because someone thought to include them.

That's a significant assumption.

I mentioned this to the CEO of another research platform, who said the focus on analysis and synthesis makes sense, as that's where researchers spend the most time.

He's not wrong.

But time spent is not the same as leverage. You can spend enormous time analyzing data from a study that was designed poorly, with the wrong participants, answering a question no one actually needs answered.

Who cares if the analysis is flawless? The work will be worthless.

The plan is where bad research gets authorized, when planning happens at all.

Across 15 years of workshops, consulting engagements, and Ask Like A Pro courses, the plan is exactly where I've watched the breakdown begin. Not the analysis. Not the synthesis. The plan. And nothing platform-agnostic existed to address it.

So I built it. Over the past few weeks I've been building and beta testing an AI companion for true research enablement, focused on that exact gap. It's designed to be used independently or by teams, and it can be customized, white-labeled, and integrated into platforms and optimized for their offerings.


What I built.

Plan Like A Pro: Together is an AI-powered research partner built on Curiosity Tank's methodology — the same framework from the Ask Like A Pro curriculum, embedded in a tool that works with you in real time.

It's not a template. It doesn't just fill in blanks.

It asks the question you haven't asked yourself. It flags the assumption you're treating as fact. It tells you when your recruiting window is too narrow and what you're trading for that choice. It doesn't move forward until the business decision is anchored. And in the end, you have a research brief, a research strategy, or a proposal, depending on what the work calls for, along with a scratch pad of deferred items and a list of flagged gaps.

It calibrates to who you are.

For a trained researcher, it's a conversational thought partner and gut check. For a stakeholder, founder, PM, or designer conducting research without that title, it covers the essentials, teaches as you go, and helps crystallize your thinking — whether you're running the study yourself, requesting formal research support, or issuing an RFP.

Most AI tools agree with you. This one doesn't. That's the point.


The beta testing.

I didn't launch it quietly and wait for reviews. I ran 13 beta sessions with readers from this cherished list — senior researchers, founders, a product strategist, a pre-sales consultant, an insights manager in Australia, an educator, and more. Real projects, real feedback, no soft-pedaling.

Here's some of what they said:

  • "It's like a researcher counterpart that performs research for my research endeavors." — TS, Senior UX Researcher, enterprise software

  • "Junior team members especially benefit from having a coach built into the AI to teach them as they go." — CB, Senior Researcher, HR/payroll platform

  • "The greatest value was its ability to reinforce the importance of stakeholder management, ensuring it stays front of mind with every step." — CH, Insights Team Manager, consumer advocacy organization

  • "It ensures the research I'm about to do has some oomph behind it." — LC, Senior Researcher, cloud computing

  • "A tool that helps you think through research and catch blind spots... front-loading thinking that you might have gone too quickly on."— EB, Senior UX Researcher, developer platform

  • "This brings so much more knowledge and understanding of the research process (compared to general LLMs)... It's already researchy — like it speaks the language. It pushed back — 'this is really broad, you're going to need to focus this.' That was brilliant.' "— JB, Senior Independent Researcher

  • "A mentor and an intern simultaneously — guiding me the way my research mentors did, while taking my feedback like a peer." — SR, Senior UX Researcher, identity verification

And then there was the moment that stopped me.

At the end of her session, I asked SR what she would name the tool. She said she would name it after the best UX research partner she had ever worked with — because that's what it reminded her of. She named it after him. "He's going to be horrified that I named an LLM after him," she said, "but he'll appreciate the intention."

I've been building tools for a long time. That's the one that landed.


What changed because of beta testing.

Everything, honestly.

  • The name: beta testers said "assistant" felt like an intern. It's now Plan Like A Pro: Together.

  • The tagline: "A research plan that writes back" became "The research plan that pushes back," because every AI writes back; not every AI tells you when you're wrong, why, and what to consider instead.

  • The push-back got sharper after SR said the tool was too validating.

  • The opening experience got clearer after EB said she didn't know what she'd have at the end of a session.

  • The tool now integrates with your team's workflow tools (Jira, Notion, Slack) and reformats the output accordingly.

  • Ethical obligations to protect participants are now explicitly built in, not implied.

Thirteen sessions. Every one changed something.


I'm still looking for beta testers. Here's what I need.

The tool is live. I'm looking for range — different industries, research maturity levels, organizational contexts, study types.

If any of this describes you, I want to hear from you:

  • You have research in progress or coming up soon

  • You work in healthcare, education, nonprofit, government, financial services, retail, or an early-stage startup, where research doesn't always look like big-tech research

  • You're a PM, designer, founder, or other stakeholder who conducts research without a researcher title

  • You've been burned by bad research before and want to understand why

  • Your focus is quant, not qual, and you want to pressure-test a quant study

One real session with a real project at any stage. Anything shared with me can be redacted; context matters more than client names. Afterward, 15 minutes of honest feedback.

What you get: early access, all of the artifacts the session produces, and a direct line to me.

Reply to this email if you're in. Or forward this to someone who should be.


One more thing — May 20th

I'll be presenting at a UXR Guild event on May 20th: "How to Build AI Mockups to Showcase Research Recommendations." And yes — I'll be sharing Plan Like A Pro: Together live. It's the recommendation, taken to the nth degree. I couldn't help myself!

Come see it in action, ask hard questions, and get inspired to become a researcher who builds!

​Register here →​


Thank you!

To LR, AJ, CB, TS, CH, LG, AR, EB, SR, NZ, JB, LF, RA, KN, and PM — thank you, sincerely.

You gave your time, your candor, and in several cases your most current or complicated research challenges to a tool that was still finding itself. The tool that exists now is better because of every session, every push-back, every moment where something didn't land right.

That's the best possible outcome of a beta.

Workflow is a commodity. Judgment isn't.

Let's build the judgment. Together.

Just hit reply and say "I'm in," and include your full name, please.

I read every reply.

– Michele


Speak up, get involved, and share the love!


And that’s a wrap!

We alternate between themed issues and roundups of Insights/UX/UXR jobs, events, classes, articles, and other happenings. Thank you for all of the feedback. Feedback is a gift, and we continue to receive very actionable input on how to make Fuel Your Curiosity more meaningful to you.

What do you think? We're constantly iterating and would love to hear your input.

Stay curious,

- Michele and the Curiosity Tank team


