I Reclaimed 10 Hours a Week Using AI as a PM. My Exact Workflow, With Real Prompts. (ChatGPT, Gemini & Claude)
I now ship PRDs in 2 hours that used to take 2 days. I synthesise 20-person research studies in 15 minutes. I walk into stakeholder meetings with pre-built counterarguments for every objection. Here's the exact AI workflow I use - with specific prompts that actually work.
⏱ 16 min read

A year ago, I was skeptical. I'd tried using AI tools and gotten back generic, superficial output that I'd have to rewrite entirely. I concluded that AI was useful for developers but not for PMs.
I was using it wrong.
The PMs who get 10× productivity from AI tools aren't using them as writing assistants. They're using them as a thinking partner, a research analyst, a devil's advocate, and a first-draft machine - with carefully crafted prompts that treat the AI as a smart but uninformed colleague who needs full context to give useful output.
The Three AI Tools Every PM Should Know
You don't need to use all three. But understanding what each is best at helps you pick the right tool for the right task.
ChatGPT: Best for Brainstorming & Breadth
Broadest training data. Strong for ideation, frameworks, and generating 10 different approaches to a problem. GPT-4o handles images, so you can paste wireframes and get feedback. The canvas feature is useful for collaborative editing.
Claude: Best for Long Documents & Structured Writing
Largest context window - handles 200K tokens. Best for feeding in long research documents, multiple interview transcripts, or entire spec sheets. Produces more structured, nuanced writing. Less prone to confident hallucinations.
Gemini: Best for Real-Time Research & Google Workspace
Live web access means up-to-date competitive research. Integrates directly with Google Docs, Sheets, and Slides. If your team runs on Google Workspace, Gemini in the sidebar is the fastest workflow integration.
"The PMs who are most effective with AI treat it like a brilliant intern who has read everything but experienced nothing. Give them full context, specific instructions, and always review their work before sharing it."
The Golden Rule of AI Prompts for PMs
Bad prompt: "Write a PRD for a notification feature."
Good prompt: one that gives role, context, task, constraints, and output format.
Every high-quality PM prompt has five elements:
- Role: "You are a senior PM at a B2B fintech companyβ¦"
- Context: "β¦building a merchant analytics dashboard used by 500K small businesses."
- Task: "Write a PRD for a smart notification feature that alerts merchants to anomalies in their daily transactions."
- Constraints: "The feature must work within our existing push notification infrastructure. The team has 3 engineers and 8 weeks."
- Output format: "Format as: Problem Statement, User Stories (5 stories), Acceptance Criteria per story, Edge Cases, Success Metrics, Out of Scope."
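If you reuse this structure often, it can be assembled mechanically. A minimal sketch, assuming Python - the helper name and parameters are hypothetical, but the five elements mirror the list above:

```python
def build_prompt(role, context, task, constraints, output_format):
    """Assemble the five-element PM prompt: role, context, task,
    constraints, and output format, in that order."""
    return "\n\n".join([
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ])
```

Keeping the elements as separate arguments makes it easy to swap one out (say, a tighter timeline in constraints) while holding the rest of the prompt constant.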
🛠️ The Context Dump Strategy
For complex tasks, start by "context dumping" - paste in all relevant documents before asking your question. For example: paste your existing PRD, the design doc, the tech spec, and 10 user interview quotes, then ask: "Given all of the above, what are the 5 highest-risk assumptions in this PRD that we haven't validated with users?"
Claude handles this especially well due to its large context window. The more context you give, the more specific and useful the output.
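The context-dump assembly above can also be sketched as a small helper - hypothetical names, assuming Python - that concatenates labelled documents with a separator and puts the question last, so the model reads all the material before the ask:

```python
def context_dump(documents, question, separator="\n\n---\n\n"):
    """Join labelled documents with a separator, then append the
    question at the end, after all the context."""
    parts = [f"## {name}\n{text}" for name, text in documents.items()]
    return separator.join(parts) + separator + f"Given all of the above: {question}"
```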
Task 1: Writing PRDs Faster (Without Sacrificing Quality)
Writing a PRD from scratch is the most time-consuming regular PM task. AI doesn't replace your thinking - it eliminates the blank page problem and fills in the structural work so you can focus on the hard decisions.
You are a senior product manager at a fintech startup. Write a PRD for the following feature:

Feature: [Feature name]
Product context: [2-3 sentences about the product and users]
Problem: [The specific user problem this solves]
Business goal: [What business outcome this drives]
Constraints: [Team size, timeline, tech constraints]

Format the PRD with these sections:
1. Problem Statement (2 paragraphs)
2. Goals and Non-Goals
3. User Stories (5–7 stories in "As a [user], I want [action] so that [outcome]" format)
4. Acceptance Criteria (3–5 per user story)
5. Edge Cases (minimum 8)
6. Success Metrics (leading and lagging indicators)
7. Open Questions (5–7 unresolved questions for team discussion)
After getting the draft, run a second prompt:
Review the PRD above and identify:
1. The 3 most dangerous assumptions we haven't validated with users
2. Any missing edge cases that could cause user harm or data issues
3. Acceptance criteria that are vague or not testable
4. Any scope creep hiding in the user stories
5. Dependencies that aren't mentioned but are implied

Be direct and specific. This PRD is going to engineering in 48 hours.
Task 2: Synthesising User Research at Scale
If you've ever conducted 10+ user interviews and faced the wall of transcript text, you know how long synthesis takes manually. This is one of the highest-leverage AI use cases for PMs.
I'm a PM synthesising user research. Below are [N] user interview transcripts separated by "---". Each interview is with a [user segment] user of [product].

[Paste all transcripts here]

---

After reading all transcripts, provide:
1. Top 3 Jobs-to-be-Done (what users are trying to accomplish)
2. Top 5 Pain Points (specific frustrations, not general themes)
3. Top 3 Delighters (what users loved or mentioned positively)
4. 5 Opportunity Areas (specific product improvements suggested by the data)
5. Verbatim quotes: 2 per theme that best represent the theme
6. Anything surprising or contradictory across interviews

Be specific. Don't generalise. Quote specific users where possible.
The key is asking for verbatim quotes - this forces the AI to ground its synthesis in actual data rather than extrapolating, and gives you the evidence you need for stakeholder presentations.
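If you run this synthesis weekly, the prompt itself can be built from raw transcripts rather than pasted by hand. A sketch under assumed names, in Python - the framing and separator follow the prompt above:

```python
def synthesis_prompt(transcripts, segment, product):
    """Assemble the research-synthesis prompt: framing sentence,
    transcripts separated by ---, then the structured asks."""
    header = (
        f"I'm a PM synthesising user research. Below are {len(transcripts)} "
        f"user interview transcripts separated by \"---\". "
        f"Each interview is with a {segment} user of {product}."
    )
    asks = (
        "After reading all transcripts, provide:\n"
        "1. Top 3 Jobs-to-be-Done\n"
        "2. Top 5 Pain Points\n"
        "3. Top 3 Delighters\n"
        "4. 5 Opportunity Areas\n"
        "5. Verbatim quotes: 2 per theme\n"
        "6. Anything surprising or contradictory across interviews"
    )
    body = "\n---\n".join(transcripts)
    return f"{header}\n\n{body}\n---\n{asks}"
```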
Task 3: Competitive Analysis in 30 Minutes
Gemini's live web access makes it particularly useful for competitive research. But the prompting approach matters - you need to ask for structured comparison, not just a description of each competitor.
Search for current information on these competitors to [our product]: [Competitor 1], [Competitor 2], [Competitor 3].

Build a competitive analysis matrix covering:
- Core product offering (2-3 sentences)
- Target customer segment
- Pricing model
- Key differentiators (what they claim)
- Weaknesses or gaps mentioned in reviews/community feedback
- Recent product launches or changes (last 6 months)
- Estimated market position

Then answer: What are the 3 most exploitable gaps across all competitors that we could own?

Format as a table plus 3-paragraph written analysis.
Task 4: Generating Edge Cases and Test Scenarios
The most underused PM prompt type. AI is exceptionally good at generating exhaustive edge cases - much more thorough than a human brainstorm in a time-constrained meeting.
For the following feature, generate a comprehensive list of edge cases and failure scenarios:

Feature: [Feature description]
User types: [Types of users who will use this]
Platform: [Mobile / Web / Both]
Key flows: [The 3 main user flows]

Categories to cover:
- Network / connectivity failures
- Empty states and zero-data states
- Permission / authorisation edge cases
- Concurrent user actions
- Extreme input values (very long text, special characters, etc.)
- Internationalisation and localisation issues
- Accessibility failure modes
- Data inconsistency scenarios
- Rate limiting / API failure scenarios
- User error and recovery paths

For each edge case: describe the scenario, current expected behaviour, and recommended behaviour.
Task 5: Stakeholder Preparation and Pushback Rehearsal
This is my favourite use case. Before any important stakeholder meeting, I ask AI to steelman the opposition.
I'm presenting the following proposal to [stakeholder type: e.g., CFO, engineering lead, CEO] tomorrow:

[Paste your proposal / key points]

The stakeholder tends to focus on [known concerns: e.g., cost, timeline, technical risk].

Generate:
1. The 7 hardest questions or objections they are likely to raise
2. For each objection: the strongest counterargument from my position
3. The 3 scenarios where their concern would actually be right
4. Data or evidence I should prepare to address each objection

Don't make my position sound perfect. Make me prepare for the real objections.
The AI-First PM Workflow: Week by Week
- Monday: Discovery synthesis Use Claude to synthesise the previous week's user feedback, support tickets, and NPS responses into a prioritised list of insights. Paste raw data in, get structured output out.
- Tuesday: Competitive pulse Use Gemini to scan for competitor product updates, news, and community discussions from the past week. 20 minutes to stay current across 5 competitors.
- Wednesday: Document drafting Use ChatGPT or Claude to generate first drafts of the week's PRDs, specs, or presentations. Start with a prompt, iterate twice, then refine yourself. First draft in 30 minutes instead of 3 hours.
- Thursday: Critique and red-team Paste your draft documents into Claude and ask it to play devil's advocate. Get the hardest questions now, not in the review meeting.
- Friday: Communication Use ChatGPT to draft stakeholder updates, sprint review summaries, and the week's team communication. Keep the PM voice - edit for tone and specificity.
What AI Tools Cannot Do (And Shouldn't Try)
Being clear about AI's limitations is as important as knowing its strengths. AI is a force multiplier for work you already understand well - it amplifies competence, it doesn't compensate for incompetence.
- Conduct genuine user empathy interviews. AI can write interview guides, but it cannot sit in a user's home and notice what they don't say, the hesitation before an answer, or the frustration in their body language.
- Make hard prioritisation calls with political context. Prioritisation requires understanding organisational dynamics, trust levels, team capacity, and unstated constraints that no AI has access to.
- Build stakeholder trust and influence. Trust is built through consistent delivery, genuine care, and human judgment over time. AI doesn't build relationships.
- Validate hypotheses with real users. AI can help you design experiments and analyse results. But the experiment still needs to happen in the real world.
- Make ethical product decisions. When a feature could harm vulnerable users or enable abuse, the PM needs human judgment and accountability - not a model's probabilistic output.
⚠️ Never Share These With AI Tools
Treat AI tools like any external service: don't share personally identifiable user data, unpublished financial information, employee performance details, or information under NDA without first checking your company's AI use policy. Most enterprise AI tools have data isolation - but verify before pasting sensitive information.
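As a last line of defence before pasting, some teams run a scrub pass over text. A deliberately naive sketch, assuming Python - the patterns below catch only obvious emails and phone numbers and are no substitute for your company's actual policy or a proper DLP tool:

```python
import re

# Hypothetical, intentionally simple patterns - real PII detection needs far more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b")

def redact(text):
    """Replace obvious emails and phone numbers with placeholders
    before text is pasted into an external AI tool."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```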
📌 Key Takeaways
- AI gives its best output when you provide role, context, task, constraints, and output format in every prompt.
- Use Claude for long documents and structured writing. ChatGPT for brainstorming. Gemini for real-time research.
- The "context dump" strategy - pasting all relevant documents before asking your question - dramatically improves output quality.
- Use AI to generate edge cases and stakeholder objections - tasks where breadth matters more than depth of experience.
- AI amplifies competent PMs. It doesn't compensate for weak product thinking or missing user insight.
- Never skip the review step. AI output is a starting point, not a finished deliverable.
- Check your company's AI data policy before pasting anything sensitive.
- PMs who use AI well will outpace those who don't - but PMs who over-rely on AI without judgment will produce mediocre work faster.