If your SEO service can’t show you what changed, why it changed, and what it did to the business… you’re not buying SEO. You’re buying vibes.
Some months are genuinely “quiet” (algorithm churn, seasonality, dev delays). Fine. But “quiet” still has footprints: technical fixes shipped, pages updated, testing notes, SERP CTR shifts, index coverage changes. If none of that exists, don’t overthink it: the work isn’t happening.
One-line reality check:
If you can’t audit the work, you can’t trust the results.
The baseline questions nobody wants to answer
Before you judge performance, you need a baseline that isn’t squishy.
– What did organic traffic look like before they touched anything?
– Which pages were already winning?
– Which keywords were already ranking (and where)?
– What counts as a conversion for you: lead, call, demo, sale, signup?
Now, this won’t apply to everyone, but if your agency didn’t lock these down in month one, you’re going to spend month six arguing about “growth” that’s actually just attribution drift.
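Lock the baseline down with a snapshot you can diff later. Here’s a minimal sketch against the Google Search Console API; it assumes you already have authorized credentials (creds, with read scope), and the property URL, date range, and output file name are all placeholders:

```python
# Minimal baseline snapshot from Google Search Console (GSC).
# Assumes google-api-python-client is installed and `creds` holds
# authorized credentials; SITE and the file name are placeholders.
import csv
from googleapiclient.discovery import build

SITE = "https://www.example.com/"  # placeholder property URL

def snapshot_baseline(creds, start="2024-01-01", end="2024-03-31"):
    service = build("searchconsole", "v1", credentials=creds)
    body = {
        "startDate": start,
        "endDate": end,
        "dimensions": ["page", "query"],
        "rowLimit": 5000,
    }
    resp = service.searchanalytics().query(siteUrl=SITE, body=body).execute()
    with open("baseline_gsc.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["page", "query", "clicks", "impressions", "ctr", "position"])
        for row in resp.get("rows", []):
            page, query = row["keys"]
            writer.writerow([page, query, row["clicks"], row["impressions"],
                             row["ctr"], row["position"]])
```

Archive that CSV somewhere the vendor can’t edit it. Month-six arguments get a lot shorter when month-one numbers exist.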
Milestones that matter (not the fake ones)
You don’t need 37 KPIs. You need a few that ladder up to money.
Here’s a framework I’ve seen hold up under pressure:
Business outcomes (the only ones executives care about)
– Organic revenue (or pipeline influenced by organic)
– Leads from organic (quality-adjusted, not raw form fills)
– Conversion rate from organic landing pages
Visibility + demand capture
– Impressions and clicks in Google Search Console (GSC)
– CTR by query and by page (messaging + intent fit)
– Share of top 3 / top 10 rankings for your priority terms (a quick way to compute this is sketched below)
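A minimal sketch for that last one, assuming a rank-tracker CSV export named rankings.csv with keyword and position columns (names vary by tool):

```python
# Share of priority terms ranking in the top 3 / top 10.
# Assumes rankings.csv with columns: keyword, position (tracker-dependent).
import csv

def top_n_share(path="rankings.csv"):
    with open(path, newline="") as f:
        positions = [float(row["position"]) for row in csv.DictReader(f)]
    if not positions:
        return
    total = len(positions)
    top3 = sum(p <= 3 for p in positions) / total
    top10 = sum(p <= 10 for p in positions) / total
    print(f"Top 3: {top3:.0%} of {total} terms; Top 10: {top10:.0%}")
```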
Site health + delivery
– Index coverage issues resolved
– Crawl errors trending down
– Page speed / Core Web Vitals directionally improving
Milestones should be tied to actions. “Publish 10 blogs” isn’t a milestone. “Increase qualified organic demos by 15% by fixing intent mismatch on 5 high-traffic pages” is.
Hot take: rankings are the least interesting part of most SEO reports
Rankings are useful. They’re also easy to abuse.
If an SEO report screams about “keyword positions” but gets weirdly quiet about GSC clicks, organic conversions, and which pages actually moved, you’re being distracted.
Rankings without context create two bad outcomes:
- You celebrate terms that don’t convert.
- You miss the pages that are bleeding money because they rank but don’t persuade.
I’d rather see “We improved CTR from 2.1% to 3.4% on a query cluster that drives SQLs” than “We rank #4 for a keyword.”
What “real SEO work” leaves behind (receipts, basically)
Look, SEO isn’t magic. It’s a backlog.
A legit provider will have an activity trail that reads like a changelog:
– Technical tickets created and shipped (with URLs)
– Before/after snapshots for titles, metas, headers
– Internal linking updates (what linked to what, and why)
– Content refresh notes (sections added, pruned, consolidated)
– Backlink outreach or digital PR logs (domains, relevance, outcomes)
– Testing notes (even if results were mixed)
If your vendor can’t produce that, they’re either not doing much—or they’re doing things they don’t want you to scrutinize.
A KPI framework that doesn’t lie to you
Here’s the cleanest way to structure SEO metrics so they don’t become a circus.
Primary KPIs (impact)
These answer: “Did SEO help the business?”
– Organic conversions (macro + micro)
– Organic conversion rate by landing page
– Revenue per organic visitor (or pipeline per session)
Secondary KPIs (behavior)
These answer: “Are we attracting the right people and serving them well?”
– Engaged sessions / time on page (use cautiously)
– Scroll depth (if you track it)
– Pages per session (context-dependent)
– Assisted conversions (with a sane attribution model)
Diagnostic KPIs (troubleshooting)
These answer: “Why did the numbers move?”
– Indexing status, crawl stats, canonical issues
– Query-to-page mapping (cannibalization)
– CWV, mobile usability, template-level problems
– Backlink quality and relevance
One practical tip: force every KPI in the report to have an owner and a cadence. If nobody “owns” CTR or index coverage, nobody fixes it.
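One way to make that stick: keep the KPI register as a tiny artifact the team actually reviews, not a slide. A sketch with illustrative owners and cadences (all names here are made up):

```python
# Every KPI gets an owner and a cadence; owners and names are illustrative.
KPI_REGISTER = {
    "organic_conversions":  {"owner": "Head of Demand Gen", "cadence": "monthly"},
    "organic_cvr_by_page":  {"owner": "CRO lead",           "cadence": "monthly"},
    "ctr_by_query_cluster": {"owner": "SEO lead",           "cadence": "weekly"},
    "index_coverage":       {"owner": "Tech SEO",           "cadence": "weekly"},
    "cwv_field_data":       {"owner": "Engineering",        "cadence": "monthly"},
}

def unowned(register):
    """Flag KPIs that would otherwise rot in the report."""
    return [k for k, v in register.items() if not v.get("owner")]
```

If unowned() ever returns anything, that’s the first agenda item of the next call.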
Traffic quality by intent (where SEO either proves itself or falls apart)
Not all organic traffic is good traffic. Some of it is just… loud.
Segment queries and landing pages by intent:
– Informational: “how to,” “what is,” definitions
– Commercial research: “best,” “compare,” “reviews,” “pricing”
– Transactional: “buy,” “book,” “near me,” “quote”
– Navigational: brand + product names
Then watch what happens after the click.
If informational pages bring volume but never lead to anything—not even micro-conversions like newsletter signups—you’ve built a library that nobody exits into your product. That’s fixable (internal links, content upgrades, lead magnets), but only if you’re looking at intent and not just sessions.
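You don’t need machine learning to start this segmentation. A crude pattern-based classifier goes a long way; the patterns and the brand term below are assumptions to tune for your own market:

```python
import re

# Crude intent buckets; the patterns are illustrative, not exhaustive.
INTENT_PATTERNS = [
    ("transactional",       r"\b(buy|book|order|quote|near me)\b"),
    ("commercial_research", r"\b(best|vs|compare|review|reviews|pricing|price)\b"),
    ("informational",       r"\b(how to|what is|why|guide|definition)\b"),
]

def classify_intent(query, brand_terms=("acme",)):  # brand list is a placeholder
    q = query.lower()
    if any(b in q for b in brand_terms):
        return "navigational"
    for intent, pattern in INTENT_PATTERNS:
        if re.search(pattern, q):
            return intent
    return "unclassified"
```

For example, classify_intent("best crm for startups") comes back as commercial_research, which tells you to judge that page on pipeline, not pageviews.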
Reports that actually prove optimization happened
You want a tight set of recurring artifacts. No fluff.
Monthly, these should exist:
- GSC report: clicks, impressions, CTR, average position by page + query cluster
- Landing page report: organic sessions + conversion rate + revenue/pipeline per page
- Indexation + crawl health report: coverage issues, canonicals, noindex mistakes, crawl anomalies
- Content change log: what was updated, when, and expected impact
- Link profile summary: new referring domains, relevance, anchor distribution (not just counts)
If you’re B2B or high-consideration, add one more: assisted conversion paths. SEO often introduces people, then paid/email closes them later. That shouldn’t be hand-waved away.
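The landing page report is also the easiest one to rebuild yourself as a cross-check on the agency’s version. A sketch with pandas, assuming two exports whose file and column names are placeholders, gsc_pages.csv (page, clicks, impressions) and conversions.csv (page, organic_conversions):

```python
# Landing page report: organic clicks joined to conversions per URL.
# File and column names below are assumptions about your exports.
import pandas as pd

gsc = pd.read_csv("gsc_pages.csv")
conv = pd.read_csv("conversions.csv")

report = gsc.merge(conv, on="page", how="left").fillna({"organic_conversions": 0})
report["cvr"] = report["organic_conversions"] / report["clicks"].clip(lower=1)
report = report.sort_values("clicks", ascending=False)
print(report.head(20).to_string(index=False))
```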
“But we made changes and traffic spiked…” (correlation isn’t causation)
Here’s the thing: SEO is full of coincidences.
A spike could be:
– Seasonality
– A Google update
– A competitor tanking
– A tracking change
– A one-off referral link
– Your PR team landing a mention
So how do you keep your agency honest?
Use simple causal discipline:
– Compare against a control set of pages that weren’t touched
– Track changes over multiple weeks, not two days
– Annotate everything (publish dates, template releases, migrations)
– If possible, run A/B tests on titles/meta (some CMS setups allow this cleanly)
And yes, you can apply statistical thinking here. Even basic significance checks reduce “we feel like it worked” storytelling.
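A two-proportion z-test on CTR (clicks over impressions, before vs. after a change) is about as basic as it gets, and it needs nothing beyond the standard library. The numbers below reuse the 2.1% to 3.4% CTR example from earlier:

```python
# Two-proportion z-test: did CTR really change after the update?
# Inputs are clicks/impressions before vs. after; numbers are illustrative.
from math import sqrt, erf

def ctr_z_test(clicks_a, impr_a, clicks_b, impr_b):
    p_a, p_b = clicks_a / impr_a, clicks_b / impr_b
    pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# e.g. 210 clicks on 10,000 impressions (2.1%) vs. 340 on 10,000 (3.4%)
z, p = ctr_z_test(210, 10_000, 340, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A tiny p-value doesn’t prove the rewrite caused the lift (seasonality and algorithm updates still apply), but it does retire “it feels better” as an argument.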
A useful reference point: Google has stated that very fast sites tend to perform better in user experience, and CWV ties into that broader page experience system (see Google Search Central documentation on Core Web Vitals and page experience). That doesn’t mean speed fixes everything. It means slow sites often lose out when everything else is equal.
Source: Google Search Central, Core Web Vitals / Page Experience documentation.
Quick wins that are actually real (in my experience)
Some tweaks are boring but punch above their weight:
– Rewrite titles for intent match (not keyword stuffing)
– Fix cannibalization by consolidating overlapping pages
– Add internal links from high-authority pages to money pages
– Upgrade “ranking but not converting” pages with clearer CTAs and proof
– Improve SERP CTR by aligning meta descriptions to the promise on-page
– Repair index bloat (thin pages, parameter URLs, junk archives)
I’ve seen a single internal linking sprint move the needle faster than months of “new content,” especially on sites with strong existing authority but messy architecture.
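Cannibalization in particular is easy to surface from data you already have. A sketch that reuses the baseline GSC export from earlier; the impressions threshold is an arbitrary starting point:

```python
# Keyword cannibalization check: queries where impressions are split
# across multiple pages. Reuses baseline_gsc.csv from the earlier snapshot
# (columns: page, query, clicks, impressions, ctr, position).
import pandas as pd

df = pd.read_csv("baseline_gsc.csv")
per_query = df.groupby("query").agg(
    pages=("page", "nunique"),
    impressions=("impressions", "sum"),
)
suspects = per_query[(per_query["pages"] > 1) & (per_query["impressions"] > 100)]
print(suspects.sort_values("impressions", ascending=False).head(20))
```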
What you should ask after the monthly report
Not a polite “any updates?” email. A real interrogation (friendly, but firm).
Ask these:
– Which URLs improved, and what did we change on them?
– Which KPI moved the most, and does it align with our goals?
– Are we winning more clicks because we rank higher, or because CTR improved?
– What lag time are we seeing between updates and results?
– Any technical risks right now: indexing drops, crawl spikes, template regressions?
– Are conversions being attributed consistently, or did the model change?
If they can’t answer without circling back, that’s a signal. If they can never answer, that’s the pattern.
What “good SEO” feels like operationally
You don’t just get numbers. You get clarity.
A strong SEO service feels like:
– A prioritized backlog you can inspect
– Transparent reporting you can cite in a meeting
– Specific hypotheses and tests (some win, some don’t)
– Fewer surprises and faster course correction
Because at the end of the day, the question isn’t “Are rankings up?”
It’s: can you draw a straight line from work → visibility → qualified demand → revenue, without bluffing?
