Data & Analytics

Your Analytics Dashboard Is Lying to You. Here's How Data-Driven PMs Actually Make Decisions.

📅 January 1, 2026
✍️ Adarsh Mohan

After analyzing millions of transactions, here's the uncomfortable truth: your metrics are probably telling you what you want to hear, not what you need to know. Learn how to build product intuition through data, spot vanity metrics, and make decisions when numbers conflict.

⏱ 18 min read

Your dashboard shows 10,000 monthly active users. You celebrate. The board is happy. Your metrics are "up and to the right."

Then someone asks: "How many of those users actually completed a transaction?"

Silence.

You check. It's 247 users. Just 2.5% of your "active" users are actually using your product.

Welcome to the lie of data-driven product management.

73%

of product teams track metrics that don't predict business outcomes. (Amplitude State of Product Analytics, 2024)

They're measuring. They're not learning.

After building products across fintech, SaaS, and consumer tech, I've learned that being "data-driven" doesn't mean drowning in dashboards. It means developing intuition through numbers - knowing which metrics matter, when they lie, and how to make decisions when they conflict.

"Data doesn't make decisions. You do. Data just makes you confident about bad decisions faster."

Part 1: The Vanity Metrics That Are Destroying Your Product

Let's start with the uncomfortable truth: most metrics you track are vanity metrics dressed up as insights.

They look impressive in board meetings. They make pretty graphs. They tell you nothing about whether your product is actually working.

❌ Vanity Metric
Total Downloads

Why it lies: Downloads don't mean usage. 80% of apps are opened once and deleted.

One fintech app hit 1M downloads in month 2. Only 23K were weekly active. They were celebrating fake growth.

✓ Real Metric
D7 Retention Rate

Why it matters: Users who return after 7 days actually find value. They're real users, not tire-kickers.

When teams shift focus to D7 retention (from 12% to 40% target), everything changes. They fix onboarding, not marketing.
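As a concrete sketch, D7 retention falls out of a signup table plus an activity log. Definitions vary by team (active exactly on day 7 vs. on or after day 7); this hypothetical example uses the looser "returned on day 7 or later" definition, with made-up users and dates.

```python
from datetime import date, timedelta

def d7_retention(signups, activity):
    """D7 retention: share of users active on (or after) day 7 post-signup.

    signups:  {user_id: signup_date}
    activity: {user_id: set of dates the user was active}
    """
    returned = 0
    for user, signed_up in signups.items():
        day7 = signed_up + timedelta(days=7)
        # Count the user as retained if they came back on day 7 or later
        if any(d >= day7 for d in activity.get(user, ())):
            returned += 1
    return returned / len(signups)

signups = {
    "u1": date(2026, 1, 1),
    "u2": date(2026, 1, 1),
    "u3": date(2026, 1, 1),
    "u4": date(2026, 1, 1),
}
activity = {
    "u1": {date(2026, 1, 2), date(2026, 1, 9)},  # back on day 8 -> retained
    "u2": {date(2026, 1, 1)},                    # opened once -> churned
    "u3": {date(2026, 1, 8)},                    # back on day 7 -> retained
    "u4": set(),                                 # never returned
}
print(d7_retention(signups, activity))  # 0.5
```

Whichever definition you pick, pin it down in writing - a D7 number is only comparable across teams if everyone computes it the same way.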

The Vanity vs Reality Gap

📈 What You Report: 10,000 Monthly Active Users
✓ What Matters: 247 Actually Transacting Users (2.5%)

❌ Vanity Metric
Monthly Active Users (MAU)

Why it lies: Someone opening your app once in 30 days isn't "active." They're barely alive.

MAU looks great. But 67% hadn't completed a transaction in 2 months. They were zombies padding the numbers.

✓ Real Metric
L28 (# of days used in last 28)

Why it matters: Usage frequency predicts retention. L7+ users rarely churn. L1-2 users always do.

Discovery: L14+ users had 94% 6-month retention vs 11% for L1-3 users. Completely changed product strategy.
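L28 is cheap to compute from raw activity dates - a minimal sketch with hypothetical users:

```python
from datetime import date, timedelta

def l28(active_dates, as_of):
    """Number of distinct days the user was active in the 28-day window ending at as_of."""
    window_start = as_of - timedelta(days=27)  # 28 days inclusive of as_of
    return sum(1 for d in set(active_dates) if window_start <= d <= as_of)

today = date(2026, 1, 28)
power_user  = [date(2026, 1, d) for d in range(1, 29)]  # active every day
casual_user = [date(2026, 1, 5), date(2026, 1, 20)]     # active twice

print(l28(power_user, today))   # 28 -> L28 power user, very unlikely to churn
print(l28(casual_user, today))  # 2  -> L1-3 territory, high churn risk
```

Bucket users by their L28 score (L1-3, L4-13, L14+) and track the distribution over time - a shift toward the low buckets is an early churn warning that MAU will never show you.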

❌ Vanity Metric
Page Views

Why it lies: High page views often mean confused users clicking around trying to find things.

One feature had 5x page views. Success? No. Users couldn't figure out how to complete the task, so they kept refreshing.

✓ Real Metric
Task Completion Rate

Why it matters: Did the user accomplish what they came to do? Yes/No. That's what matters.

Payment flow had high views, 34% completion. Reduced steps, views dropped 40%, completion hit 79%. Less is more.
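Task completion rate falls out of two funnel events - a sketch assuming a flat (user, event) export, with hypothetical event names:

```python
def completion_rate(events, start="payment_started", done="payment_completed"):
    """Completion rate: users who finished the task / users who started it.

    events: list of (user_id, event_name) tuples from your analytics export.
    """
    started  = {u for u, e in events if e == start}
    finished = {u for u, e in events if e == done}
    # Only count completions from users who actually entered the flow
    return len(finished & started) / len(started) if started else 0.0

events = [
    ("u1", "payment_started"), ("u1", "payment_completed"),
    ("u2", "payment_started"),                    # dropped off mid-flow
    ("u3", "payment_started"), ("u3", "payment_completed"),
    ("u4", "page_view"),                          # never entered the flow
]
print(completion_rate(events))  # 0.666... -> 2 of 3 starters completed
```

Note that u4's page view contributes nothing here - which is exactly the point of measuring completion instead of views.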

💡 The Vanity Metric Test

Ask yourself: "If this metric goes up, will revenue/retention/engagement definitely improve?"

If the answer is "not necessarily," it's a vanity metric. Stop tracking it. It's noise.

Examples that pass the test:
• Transaction completion rate → directly drives revenue
• 7-day retention → predicts LTV and churn
• Time to value → correlates with activation
• Feature adoption among retained users → shows what keeps them

Part 2: The North Star Lie (And What Actually Works)

Every product blog tells you to define a "North Star Metric" - one metric that matters above all else.

Here's what they don't tell you: your North Star changes every 6 months. And having only one metric makes you dangerously blind to what's actually happening.

Real Example: Fintech Product's Evolving North Star

Phase 1 (Launch): North Star = Total transactions
Goal: Prove demand exists. Get any transactions happening.
Result: Hit target. But many were one-time users testing the product.

Phase 2 (Growth): North Star = Weekly transacting users
Goal: Build habit. Get people using regularly.
Result: Success. But optimized for frequent small transactions, not valuable ones.

Phase 3 (Scale): North Star = Transaction volume × Frequency
Goal: Maximize actual business value, not just activity.
Result: Completely different product priorities. Stopped incentivizing $1 transactions.

The Lesson: Your North Star should evolve with your product stage. What matters at 1K users is different from what matters at 1M users.

How North Star Metrics Evolve with Product Stage

• 0-1K users - Focus: any usage. North Star: Total transactions
• 1K-100K users - Focus: habit formation. North Star: Weekly active transactors
• 100K+ users - Focus: business value. North Star: Volume × Frequency

The Metric Hierarchy That Actually Works

Level 1: North Star Metric

The ONE metric that best captures value delivered to users.
Example: Weekly transacting users (fintech), Messages sent (messaging), Nights booked (travel)

Level 2: Primary Drivers (3-5 metrics)

Metrics that directly move your North Star.
Example (Fintech): New user activation, Payment success rate, Average transaction value, Feature adoption rate

Level 3: Secondary Indicators (5-10 metrics)

Leading indicators that predict primary driver movement.
Example: Time to first transaction, Failed transaction recovery rate, Customer support tickets per user

🚨 The Single Metric Trap

Optimizing for ONE metric is how you accidentally destroy your product.

Real disaster stories:

Social platforms optimized for engagement (time spent). Result: Addictive feeds, doom-scrolling, mental health crisis, government regulation.

Video platforms optimized for watch time. Result: Algorithms pushed increasingly extreme content to keep viewers watching. Brand safety nightmare.

A fintech app optimized for transaction count. Result: Users gamed the system with $1 transactions for rewards. Revenue stayed flat while costs exploded.

The fix: Track counterbalancing metrics. If optimizing for growth, track churn. If optimizing for engagement, track satisfaction. Always monitor what you're sacrificing.


Part 3: When Data Lies (And How to Catch It)

Data doesn't just show you reality. It shows you reality filtered through your measurement choices, biases, and blind spots.

Here's how your data lies to you - and what to do about it.

Lie #1: Survivorship Bias

📊
The Problem

You survey active users: "How's the product?" 95% love it! Success!

What you're missing: The 10,000 users who churned aren't in your survey. They hated your product and left.

Real Example: A fintech product's NPS from active users was 67 (excellent!). But they had 78% week-1 churn. The 22% who survived loved them. The 78% who left? Never asked them why.

✓ The Fix

1. Exit surveys: Email users who churned. Offer $100 gift card for 5-minute call. You'll learn more from 10 churned users than 100 happy ones.

2. Cohort analysis: Track what Week 1 users do in Week 4. If you lose 70% by Week 4, your Week 4 metrics are worthless - they only represent survivors.

3. Failed action tracking: What are users trying to do and failing at? In one product, 43% of users attempted a key action that failed - and never came back.

The Survivorship Bias Illusion

Week 1 cohort: 1,000 users
• Week 1: 1,000 users
• Week 2: 340 left (66% churned)
• Week 4: 180 left
• Week 8: 89 left

⚠️ You only survey the 89 survivors - the ones who love you.
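The cohort numbers above can be reproduced in a few lines - a sketch using synthetic user IDs:

```python
def cohort_retention(active_by_week, cohort_size):
    """Weekly retention for one signup cohort.

    active_by_week: {week_number: set of user_ids active that week}
    Returns {week: fraction of the original cohort still active}.
    """
    return {week: len(users) / cohort_size
            for week, users in sorted(active_by_week.items())}

# Synthetic cohort mirroring the numbers above
active = {
    1: {f"u{i}" for i in range(1000)},
    2: {f"u{i}" for i in range(340)},
    4: {f"u{i}" for i in range(180)},
    8: {f"u{i}" for i in range(89)},
}
for week, rate in cohort_retention(active, 1000).items():
    print(f"Week {week}: {rate:.0%} of the cohort remains")
# Any metric computed on Week 8 users describes the 8.9% who stayed,
# not the 91.1% who left.
```

The table itself is trivial; the discipline is in always dividing by the original cohort size, never by whoever happens to still be around.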

Lie #2: The Local Maximum Trap

⛰️
The Problem

You A/B test small changes. Button color: +2% conversion. Copy tweak: +1.5% conversion. Layout shift: +3% conversion.

You're climbing a hill. You're getting incrementally better. But you're on the wrong mountain.

What you're missing: Your competitors just launched a completely different product that makes yours irrelevant.

Real Example: Stories Feature

A photo-sharing app could have A/B tested their way to 5% better photo engagement. Instead, they launched a completely different format (ephemeral stories) and fundamentally changed their product. It worked.

Data-driven optimization gets you 10% better. Intuition-driven reinvention gets you 10x better.

✓ The Fix

80/20 rule for product development:
• 80% of time: Data-driven iteration (optimize what exists)
• 20% of time: Vision-driven bets (try radically different things)

Most teams do 95/5 or 100/0. They optimize themselves into irrelevance.

Lie #3: Correlation ≠ Causation (But You Forget This Every Day)

🔗
The Problem

You notice: Users who add a profile photo have 3x higher retention.

Your conclusion: "We should force everyone to add photos! Retention will triple!"

What's actually happening: Users who care enough to add photos are already more engaged. The photo didn't cause retention - engagement caused both.

Real Example: Professional Network

A professional networking app found that users who connected with 5+ colleagues in Week 1 had much higher retention.

Bad interpretation: "Force users to connect with 5 people!"
Better interpretation: "Users who have 5+ relevant connections find value. How do we help them discover relevant connections faster?"

They built "People You May Know" and connection suggestions. Retention improved.

✓ The Fix

Ask "Why" 3 times:

Users with profile photos retain better.
→ Why? Because they're more engaged.
→ Why are they more engaged? They see value in the product.
→ Why do they see value? They successfully completed [core action].

Now optimize for helping users complete that core action, not for adding photos.
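The confounding can be demonstrated with synthetic numbers: the photo-retention gap looks large overall, then mostly disappears once you hold engagement constant. All figures below are invented purely to show the mechanic.

```python
# Synthetic illustration of a confounder: photos correlate with retention
# overall, but the effect shrinks once you stratify by prior engagement.
users = (
    # (has_photo, engaged_week1, retained)
    [(True,  True,  True)]  * 80 + [(True,  True,  False)] * 20 +
    [(False, True,  True)]  * 75 + [(False, True,  False)] * 25 +
    [(True,  False, True)]  * 10 + [(True,  False, False)] * 30 +
    [(False, False, True)]  * 60 + [(False, False, False)] * 240
)

def retention(rows):
    return sum(r for _, _, r in rows) / len(rows)

with_photo    = [u for u in users if u[0]]
without_photo = [u for u in users if not u[0]]
# Naive comparison: a huge apparent photo effect (~64% vs ~34%)
print(f"photo: {retention(with_photo):.0%}  no photo: {retention(without_photo):.0%}")

# Hold engagement constant: compare within the engaged slice only
engaged = [u for u in users if u[1]]
print(f"engaged+photo: {retention([u for u in engaged if u[0]]):.0%}  "
      f"engaged, no photo: {retention([u for u in engaged if not u[0]]):.0%}")
```

In this toy data the naive gap is ~30 points, but among equally engaged users it collapses to 5 points - engagement, not the photo, was doing the work.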

Part 4: The Framework for Making Decisions When Data Conflicts

Here's the reality nobody talks about: Your data will constantly contradict itself.

Metric A says go left. Metric B says go right. User feedback says go up. Your gut says go down. The competitor went sideways.

Welcome to actual product management. Here's how to decide.

The Conflicting Data Decision Framework

Step 1: What's the Time Horizon?

Short-term metric wins but long-term loses? Ignore the short-term metric.

Example: Popup ads increase immediate clicks but destroy long-term trust. Choose trust.

Fintech Example: Could boost Week 1 transactions with aggressive cashback. But it attracted deal-seekers who churned in Week 2. Choose sustainable growth over vanity metrics.

Step 2: Who Are You Optimizing For?

Power users want X. New users want Y. Who matters more to your business right now?

Example: Power users want more features. New users want simplicity. If your growth has stalled, optimize for new users. If retention is the problem, optimize for power users.

Real Example: One product's top 10% of users generated 70% of revenue. Build for them, not the 90% who barely transact. Controversial, but it works.

Step 3: What Does Your North Star Say?

When metrics conflict, tie-break with your North Star.

If your North Star is weekly transacting users, and a feature increases downloads but not transactions, ignore downloads.

Rule: Any metric that moves you away from your North Star is a distraction, no matter how impressive it looks.

Step 4: What Do Retained Users Do?

Ignore average user behavior. Study retained users.

Data says average session is 2 minutes. But your Week 8 retained users have 12-minute sessions. Build for 12 minutes, not 2 minutes.

Example: Average user scrolls for 3 minutes. But daily active users scroll for 30+ minutes. Optimize for the 30-minute user, not the 3-minute user.

Step 5: Can You Test It?

When in doubt, run an experiment.

Don't debate which metric is right. Test both approaches on 10% of users for 2 weeks. Let reality decide.

Warning: Only test if you'll actually use the results. If you've already decided, don't waste time on fake data-driven theater.
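If you do run the experiment, check that the difference isn't noise before declaring a winner. A minimal two-proportion z-test using only the standard library; the conversion counts are hypothetical, and for real experiments you'd pre-register the sample size and threshold.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical 2-week split: 10% of users, half to each variant
z, p = two_proportion_z(conv_a=340, n_a=4000, conv_b=410, n_b=4000)
print(f"z={z:.2f}, p={p:.4f}")  # compare p against a pre-registered threshold, e.g. 0.05
```

Resist peeking daily and stopping the moment p dips below 0.05 - that's how "data-driven theater" manufactures false winners.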

Decision Framework in Action

The Dilemma: Growth team wants aggressive cashback. Retention shows these users churn fast.

⚖️ Short-term: +50% new users. Long-term: 89% churn by Week 2.

✓ Decision: Optimize for long-term retention. Sustainable growth > vanity growth.


Part 5: What Good Data-Driven Actually Looks Like

Being truly data-driven isn't about having more dashboards. It's about asking better questions.

❌ Vanity Dashboard
  • 📈 Total Users: 50K
  • 📱 App Downloads: 75K
  • 👀 Page Views: 2.3M
  • ⏱️ Avg Session: 4 mins
  • 🌍 Countries: 23
  • 📊 Growth: +12% MoM

Looks impressive. Tells you nothing. You can't make a single product decision from this dashboard.

✓ Actionable Dashboard
  • 🎯 Weekly Transacting Users: 8.2K
  • 📊 D7 Retention: 34% (target: 40%)
  • ✅ Payment Success Rate: 87%
  • ⚡ Time to First Transaction: 4.2 days
  • 🔄 Failed → Retry Rate: 42%
  • 💰 Avg Transaction Value: $847

Every metric has a target and drives decisions. If D7 retention is low, fix onboarding. If retry rate is low, improve error messages.
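A dashboard like this is easy to encode as metric/target pairs and check automatically - a sketch with hypothetical targets (only the d7 target comes from the example above):

```python
# Metric: (current value, target). Targets here are illustrative, not prescriptive.
metrics = {
    "weekly_transacting_users": (8200, 8000),
    "d7_retention":             (0.34, 0.40),
    "payment_success_rate":     (0.87, 0.85),
    "time_to_first_txn_days":   (4.2,  3.0),   # lower is better
    "failed_retry_rate":        (0.42, 0.40),
}
LOWER_IS_BETTER = {"time_to_first_txn_days"}

results = {}
for name, (value, target) in metrics.items():
    # Flip the comparison for metrics where smaller numbers win
    ok = value <= target if name in LOWER_IS_BETTER else value >= target
    results[name] = ok
    status = "on target" if ok else "NEEDS ATTENTION"
    print(f"{name:26s} {value:>8}  target {target:>8}  {status}")
```

The output is the whole point: each miss maps to a known fix (d7 retention low -> onboarding; time to first transaction high -> activation flow), so the review meeting starts at "what do we do" instead of "what does this mean".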

💡 The Questions That Matter

Instead of asking "What's our MAU?"
Ask: "How many users completed our core action this week?"

Instead of asking "What's our engagement rate?"
Ask: "What's the minimum viable engagement that predicts retention?"

Instead of asking "What features do users want?"
Ask: "What do retained users do that churned users don't?"

Instead of asking "Why is X metric down?"
Ask: "What changed in user behavior before X metric dropped?"

Part 6: Building Intuition Through Data (The Real Skill)

The best PMs I know aren't spreadsheet wizards. They're pattern recognizers who've stared at enough data to develop instincts.

Here's how to build that intuition:

The 30-Day Data Intuition Challenge

Week 1: Daily Dashboard Review

Spend 15 minutes every morning with your core metrics. Not just reading numbers - asking "Why did this change?"

Exercise: Before checking yesterday's data, predict what happened. Transaction volume up or down? Why? Then check if you were right.

After 30 days, you'll start predicting correctly 70%+ of the time. That's intuition forming.

Week 2: User Session Recordings

Watch 5 session recordings daily (use session replay tools). Don't just watch - narrate what users are thinking.

Example: "User clicked 'Add to Cart' 3 times. They don't realize it worked because the button doesn't give feedback. They abandoned."

Quantitative data tells you WHAT. Qualitative shows you WHY. You need both.

Week 3: Cohort Deep Dives

Pick one cohort (e.g., "Users who signed up in January"). Track their journey week by week.

Week 1: 1000 users. Week 2: 340 left. Week 4: 180 left. Week 8: 89 left.

The question: What did the 89 who stayed do that the 911 who left didn't? That's your retention unlock.

Week 4: Reverse Engineer Success

Find your top 10% power users. Map their first 7 days. What actions did they take? In what order?

Discovery Example: Users who completed 3+ transactions in Week 1 had 87% 6-month retention. Redesign onboarding to push for 3 transactions in Week 1. Retention jumped from 31% to 54%.
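One way to sketch the retained-vs-churned comparison: for each Week-1 action, compute the share of users in each group who did it at all. Toy data with hypothetical action names:

```python
from collections import Counter

def action_gap(retained_actions, churned_actions):
    """Compare Week-1 action adoption between retained and churned users.

    Each argument: list of per-user action lists.
    Returns {action: (share_of_retained, share_of_churned)} - the fraction
    of users in each group who performed the action at least once.
    """
    def share(groups):
        counts = Counter()
        for actions in groups:
            counts.update(set(actions))  # count each user once per action
        return {a: c / len(groups) for a, c in counts.items()}

    r, c = share(retained_actions), share(churned_actions)
    return {a: (r.get(a, 0.0), c.get(a, 0.0)) for a in set(r) | set(c)}

retained = [["signup", "txn", "txn", "txn"], ["signup", "txn", "invite"]]
churned  = [["signup"], ["signup", "browse"], ["signup", "txn"]]

for action, (pr, pc) in sorted(action_gap(retained, churned).items()):
    print(f"{action:8s} retained: {pr:.0%}  churned: {pc:.0%}")
```

The actions with the widest retained-vs-churned gap are your candidate "aha moments" - but remember Lie #3: the gap is a correlation, so validate it with an onboarding experiment before redesigning around it.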

⚠️ The Paralysis by Analysis Trap

More data doesn't mean better decisions. It means more opportunities to overthink and do nothing.

The 80/20 rule for data:
• 80% of insights come from 20% of your metrics
• If you track more than 10 key metrics, you track nothing
• If you can't make a decision with existing data, more data won't help - your framework is wrong

Red flag: If your answer to "Should we build feature X?" is "Let me run more analyses," you're stalling. Trust your intuition or run a small experiment. Don't drown in spreadsheets.

The Uncomfortable Truth About Data-Driven Product Management

After 10+ years and millions of data points, here's what I know for certain:

"Data tells you what happened. Intuition tells you what to do about it. The best PMs have both - and know when to use which."

Use data when:
• Optimizing existing flows (A/B testing button colors, copy, layouts)
• Validating hypotheses (Does X feature improve Y metric?)
• Understanding user behavior patterns (When do users churn? Why?)
• Measuring impact (Did our change actually work?)

Use intuition when:
• Making strategic bets (Should we enter a new market?)
• Defining product vision (What should we become in 3 years?)
• Deciding what to measure (Which metrics actually matter?)
• Making trade-offs (Speed vs quality? Growth vs retention?)

🔑 Key Takeaways

  • 73% of product teams track vanity metrics that don't predict outcomes. Focus ruthlessly on metrics that drive business results.
  • Your North Star metric should evolve. What matters at 1K users ≠ what matters at 1M users.
  • Track counterbalancing metrics. Optimizing for ONE metric destroys products (engagement without satisfaction = addiction).
  • Survivorship bias is killing you. Exit interviews with churned users teach more than NPS surveys of happy users.
  • Data-driven optimization gets you 10% better. Vision-driven bets get you 10x better. Do 80/20.
  • Correlation ≠ causation. Users who add photos retain better because they're engaged - photos don't cause retention.
  • When data conflicts, ask: What do retained users do that churned users don't? Optimize for that.
  • Good dashboards drive decisions. If you can't make a product decision from your dashboard, it's vanity theater.
  • Watch 5 user sessions daily. Quantitative data shows WHAT. Qualitative shows WHY. You need both.
  • More data ≠ better decisions. If you track more than 10 key metrics, you track nothing. Focus on the 5-10 that matter.

📚 Want More Product Wisdom?

I'm writing a series on the messy reality of product management - using data, making decisions under uncertainty, and building intuition through experience.

Follow me on LinkedIn for weekly insights on product strategy, fintech, and data-driven decision making.


Adarsh Mohan

Director of Product Management. Built 6 products from 0 to scale with data as my compass (and gut as my backup). Analyzed millions of transactions, ran 500+ experiments, and learned that the best metrics are the ones that make you uncomfortable.