KPI Selection & Attribution Basics
When “good marketing” still feels invisible
You launch a few campaigns: a paid social test, some search ads, a newsletter push. Traffic goes up, leads trickle in, and a handful of sales appear in your CRM. Then the hard question lands: Which effort actually drove the result—and how do you prove it without guesswork? If you pick the wrong metrics, you’ll celebrate vanity wins (like clicks) or miss real progress (like qualified leads). If you credit the wrong channel, you’ll shift budget away from what’s truly working.
This lesson gives you two essentials for marketing analytics: choosing the right KPIs and understanding attribution basics. Together, they turn “we think this helped” into a defensible story about performance—one that aligns decisions, budgets, and expectations.
KPIs and attribution: the two questions analytics must answer
A KPI (Key Performance Indicator) is a metric you treat as decision-critical because it reflects progress toward a goal. Many metrics can be monitored, but only a small set should be elevated to KPI status. A simple way to think about it: metrics describe; KPIs decide. Good KPIs are specific, measurable, and tied to an outcome you can influence with marketing actions.
Attribution is the method you use to assign credit for a conversion (purchase, signup, lead, demo request) across marketing touchpoints. It answers a different question than KPIs: not “did we improve?” but “what gets the credit?” Attribution is never perfect, because people use multiple devices, encounter multiple messages, and act over time. The goal is not perfection—it’s consistency and usefulness for decision-making.
Two beginner misconceptions are worth clearing up early. First, “more data” doesn’t automatically mean better decisions; the right KPIs matter more than dozens of dashboards. Second, attribution isn’t a single “true” number; it’s a model with assumptions. You choose the model that best matches your buying cycle, channels, and reporting needs.
Choosing KPIs that match the job your marketing is doing
Picking KPIs works best when you start from the business outcome and then move backward to what marketing can realistically influence this week and this month. A strong KPI set typically includes one primary KPI (your north-star outcome for the initiative) and a few supporting KPIs that help you diagnose why the primary KPI is moving. If you skip the supporting KPIs, you see the “what” but not the “why.” If you skip the primary KPI, teams optimize activity instead of results.
A practical KPI rule: each KPI should have a clear decision attached. If the metric goes up or down, you should know what you would do next. “Impressions” rarely meets this standard by itself; “cost per qualified lead” often does. Another rule: use rates, not just counts, because counts can rise simply from higher spend or seasonality. For example, “Leads” can increase while lead quality collapses; “Lead-to-MQL rate” reveals whether the leads are getting better.
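To make the rates-versus-counts point concrete, here is a minimal Python sketch. The numbers and the `lead_to_mql_rate` helper are hypothetical, purely for illustration:

```python
def lead_to_mql_rate(leads: int, mqls: int) -> float:
    """Share of leads that became Marketing Qualified Leads."""
    return mqls / leads if leads else 0.0

# Hypothetical two weeks of data
week1 = {"leads": 200, "mqls": 60}
week2 = {"leads": 350, "mqls": 70}

r1 = lead_to_mql_rate(**week1)
r2 = lead_to_mql_rate(**week2)
print(f"Week 1: {week1['leads']} leads, {r1:.0%} MQL rate")
print(f"Week 2: {week2['leads']} leads, {r2:.0%} MQL rate")
# Leads rose 75%, but the qualification rate fell from 30% to 20% --
# the count alone would have told the wrong story.
```

The raw count improves while the rate deteriorates, which is exactly the pattern a count-only KPI hides.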
It also helps to treat KPIs as a funnel, even if you’re not using classic funnel stages. Marketing typically influences: reach/awareness, engagement/intent, conversion, and value/retention. Beginners often choose only top-of-funnel KPIs because they’re immediate and easy to measure. The risk is optimizing for attention that never becomes revenue. A balanced KPI set includes at least one measure close to the outcome (like revenue, pipeline, or purchases) and at least one measure that explains performance earlier in the journey (like click-through rate, landing page conversion rate, or cost per add-to-cart).
What attribution can and can’t tell you (and why the model matters)
Attribution tries to map a messy human journey into a clean accounting system. That mapping depends on a few core ingredients: a conversion definition (what counts as success), a lookback window (how far back you credit touches), and a model (how you distribute credit). Change any of those, and your “best channel” can change—even if nothing in the market changed. That’s not a failure of analytics; it’s a reminder that attribution is a lens, not reality.
A second key idea is the difference between measurement and incrementality. Attribution usually reports which tracked touchpoints were associated with conversions, not whether those touchpoints caused incremental conversions that wouldn’t have happened otherwise. For instance, branded search often looks like a hero in last-click attribution because it captures people already intending to buy. That doesn’t mean branded search is useless—it means you need to interpret the credit carefully and pair attribution with thoughtful KPI selection.
Common pitfalls show up when attribution is treated as a scoreboard rather than a decision tool. If teams fight over credit, they may start optimizing to “appear in the model” instead of improving outcomes. Another pitfall is ignoring channel roles within the journey: some channels create demand (prospecting), others capture it (retargeting, brand search), and some nurture it (email). A model that always rewards the final step can push you to over-invest in capture and under-invest in creation—until growth stalls.
The main attribution models, side-by-side
When you compare attribution approaches, you’ll notice they trade simplicity for realism. Beginners should start by understanding a few standard models and using them consistently, rather than switching models whenever a report looks uncomfortable.
| Dimension | Last-click | First-click | Linear | Time-decay | Position-based (U-shaped) |
|---|---|---|---|---|---|
| How credit is assigned | 100% credit to the final touchpoint before conversion. | 100% credit to the first tracked touchpoint in the journey. | Credit is split equally across all tracked touchpoints. | More credit goes to touches closer in time to conversion. | More credit goes to the first and last touchpoints; the middle gets less. |
| What it’s good for | Evaluating conversion capture channels (e.g., brand search, retargeting) and simplifying reporting. | Understanding demand generation and what starts journeys (e.g., prospecting, content). | Getting a broad view when journeys are multi-touch and you want to avoid over-weighting one step. | Situations where recent touches are likely more influential (e.g., short sales cycles). | Journeys where discovery and closure matter most, and middle touches support momentum. |
| Typical blind spot | Over-credits closers and under-credits discovery; can punish top-of-funnel. | Over-credits discovery; can ignore what actually closed. | Assumes all touches are equally important, which is rarely true. | Requires better timestamp accuracy; still can over-credit “late” channels that mainly capture intent. | Imposes an assumption that first/last are the most important, which may not fit all products. |
| Best beginner use | Use as a baseline, but interpret as “who closed,” not “who caused.” | Use to test which channels create new qualified traffic. | Use for a sanity check when you suspect multi-touch influence. | Use if your buying journey has a meaningful consideration period. | Use when you want a compromise between first- and last-click stories. |
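The models in the table can be sketched as one credit-assignment function. This is an illustrative simplification, assuming a single conversion and an ordered list of tracked touchpoints; the half-life parameter and channel names are hypothetical:

```python
def assign_credit(touchpoints, model="last_click", decay_half_life=7.0):
    """Distribute one conversion's credit across an ordered list of
    (channel, days_before_conversion) tuples. Returns {channel: credit},
    with credits summing to 1.0."""
    n = len(touchpoints)
    if n == 0:
        return {}
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        # Weight halves every `decay_half_life` days away from conversion.
        raw = [0.5 ** (days / decay_half_life) for _, days in touchpoints]
        total = sum(raw)
        weights = [w / total for w in raw]
    elif model == "position_based":
        # U-shaped: 40% first, 40% last, 20% split across the middle.
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")

    credit = {}
    for (channel, _), w in zip(touchpoints, weights):
        credit[channel] = credit.get(channel, 0.0) + w
    return credit

journey = [("paid_social", 14), ("email", 5), ("brand_search", 0)]
for m in ("last_click", "first_click", "linear", "time_decay", "position_based"):
    print(m, assign_credit(journey, m))
```

Running all five models on the same journey makes the table's trade-offs visible: last-click gives brand search everything, first-click gives paid social everything, and the multi-touch models spread credit according to their built-in assumptions.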
Attribution debates often come from mixing questions. If your question is “Which channel is best at closing?” last-click is informative. If your question is “Where do new customers first hear about us?” first-click is closer. If your question is “How do we fairly represent the journey?” multi-touch models can help—but only if your tracking captures touchpoints reliably and consistently.
KPI selection best practices—and the traps that waste months
A high-quality KPI set is few, stable, and operationalized. “Few” means you can actually pay attention and act. “Stable” means you don’t redefine success every week, which breaks trend analysis. “Operationalized” means every KPI has a definition, a source of truth, and a cadence. Even at a beginner level, it’s worth writing down what exactly counts (for example, whether “lead” includes low-intent form fills) and where the number comes from (ad platform, analytics tool, CRM).
The biggest KPI pitfall is choosing metrics that are easy to improve but weakly connected to outcomes. Click-through rate can be improved with clickbait-style creative that attracts the wrong audience. Cost per click can drop if you broaden targeting, while conversion rate falls. That’s why supporting KPIs should include at least one quality signal (like qualified lead rate, demo-show rate, or purchase conversion rate). Another common trap is failing to normalize for spend: if you double budget, raw conversions often rise—so pairing conversions with CPA (cost per acquisition) or ROAS (return on ad spend) prevents false confidence.
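The spend-normalization trap is easy to demonstrate with two hypothetical periods around a budget doubling (all figures are made up for illustration):

```python
def cpa(spend: float, conversions: int) -> float:
    """Cost per acquisition: spend divided by conversions."""
    return spend / conversions if conversions else float("inf")

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue divided by spend."""
    return revenue / spend if spend else 0.0

# Hypothetical before/after a budget doubling
before = {"spend": 5_000, "conversions": 100, "revenue": 20_000}
after  = {"spend": 10_000, "conversions": 160, "revenue": 30_000}

for label, p in (("before", before), ("after", after)):
    print(label,
          f"CPA={cpa(p['spend'], p['conversions']):.2f}",
          f"ROAS={roas(p['revenue'], p['spend']):.1f}")
# Raw conversions rose 60%, but CPA worsened from 50.00 to 62.50
# and ROAS fell from 4.0 to 3.0 -- efficiency declined as volume grew.
```

A conversions-only dashboard would report this as a win; the normalized pair shows diminishing returns.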
Attribution has its own traps. One is assuming the platform-reported attribution is comparable across platforms. Each ad platform has different tracking capabilities, view-through counting rules, and default windows. If you treat them as apples-to-apples, you’ll double-count success and overestimate total impact. Another trap is confusing correlation with causation: retargeting often targets people already interested, so it “gets credit” in many models without necessarily creating net-new demand. The best practice is to use attribution to guide hypotheses and budget direction, while letting your KPI framework keep you anchored to business outcomes.
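The cross-platform double-counting problem can be shown in a few lines. Platform names and numbers are hypothetical; the point is that each platform credits itself under its own rules:

```python
# Hypothetical: each ad platform reports conversions under its own
# attribution rules, so summed claims can exceed reality.
platform_reported = {"meta_ads": 120, "google_ads": 95, "linkedin_ads": 40}
actual_conversions = 180  # deduplicated count from your own CRM/analytics

claimed = sum(platform_reported.values())
print(f"Platforms claim {claimed}, CRM shows {actual_conversions}")
print(f"Overstatement: {claimed / actual_conversions - 1:.0%}")
```

Summing platform dashboards here overstates performance by roughly 40%, which is why a deduplicated source of truth matters.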
[[flowchart-placeholder]]
Two real online marketing examples, worked through end-to-end
Example 1: Ecommerce launch—avoiding “ROAS tunnel vision”
You run a two-week launch for a new product. Channels include paid social prospecting, paid social retargeting, and email to your existing list. The team wants a single KPI, and someone proposes ROAS as the only measure because it’s “clean.” The risk is that ROAS can look great on retargeting and email (where intent is already high) while prospecting looks weak, even if prospecting is what fills the pipeline of future buyers.
A better KPI selection starts by stating the outcome and the decision. The outcome is revenue and profitable customer acquisition; the decision is how to split budget between prospecting and retargeting. Primary KPI: blended CPA or contribution margin per order (depending on what you can measure). Supporting KPIs: add-to-cart rate, checkout conversion rate, and new customer share (so you don’t only recycle existing buyers). This combination lets you see whether growth is coming from new demand versus harvesting existing intent.
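The KPI set above can be computed from basic launch data. This is a sketch with hypothetical figures and field names; in practice the inputs would come from your analytics tool and order system:

```python
# Hypothetical launch data; field names are illustrative.
sessions = 10_000
add_to_carts = 800
orders_count = 240
spend = 6_000
orders = [
    {"revenue": 90, "new_customer": True},
    {"revenue": 60, "new_customer": False},
    {"revenue": 120, "new_customer": True},
]  # ...imagine the full list of 240 orders here

add_to_cart_rate = add_to_carts / sessions          # diagnostic: top of funnel
checkout_rate = orders_count / add_to_carts         # diagnostic: conversion
blended_cpa = spend / orders_count                  # primary: spend per order
new_customer_share = sum(o["new_customer"] for o in orders) / len(orders)

print(f"Add-to-cart rate: {add_to_cart_rate:.1%}")
print(f"Checkout rate: {checkout_rate:.1%}")
print(f"Blended CPA: {blended_cpa:.2f}")
print(f"New customer share: {new_customer_share:.0%}")
```

The new-customer share is the guard against "recycling" existing buyers: if blended CPA looks great but new-customer share keeps falling, growth is being harvested, not created.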
Now apply attribution thoughtfully. If you use last-click, retargeting and email will likely dominate because they frequently appear at the end of the journey. That’s useful to understand closing efficiency, but it can lead to over-investing in retargeting until it saturates. If you add a first-click view, you might see prospecting initiating many journeys that later close via email or retargeting. The limitation is that neither view proves incrementality, but together they reveal channel roles: prospecting creates discoverability; retargeting and email capture and nurture. The practical outcome is a budget split that protects prospecting from being “punished” by last-click reporting while still holding it accountable to cost and quality signals.
Example 2: Lead-gen for a service—KPIs that prevent low-quality growth
A service business runs Google Search and LinkedIn ads to drive demo requests. In week one, leads spike, the cost per lead drops, and the team celebrates. Then sales reports that most leads are unqualified: wrong company size, low intent, or not the target role. The original KPI—cost per lead—optimized exactly what it measured: cheap form fills, not valuable opportunities.
Start by rewriting the KPI hierarchy to match the real goal: revenue from qualified accounts. Primary KPI: cost per qualified lead (CPQL) or cost per sales-accepted lead (depending on your process). Supporting KPIs: lead-to-qualified rate, show rate (scheduled demos that actually happen), and opportunity rate (demos that convert into pipeline). These KPIs connect marketing performance to downstream reality without requiring you to wait months for closed revenue to learn anything.
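The rewritten KPI hierarchy is just a few ratios over funnel counts. A sketch with hypothetical monthly numbers (the stage names mirror this example's process, not a universal standard):

```python
# Hypothetical month of lead-gen data.
funnel = {
    "spend": 8_000,
    "leads": 400,
    "qualified_leads": 80,
    "demos_scheduled": 50,
    "demos_held": 35,
    "opportunities": 14,
}

cost_per_lead = funnel["spend"] / funnel["leads"]   # 20 -- looks cheap, hides quality
cpql = funnel["spend"] / funnel["qualified_leads"]  # cost per qualified lead
lead_to_qualified = funnel["qualified_leads"] / funnel["leads"]
show_rate = funnel["demos_held"] / funnel["demos_scheduled"]
opportunity_rate = funnel["opportunities"] / funnel["demos_held"]

print(f"Cost per lead: {cost_per_lead:.0f}")
print(f"CPQL: {cpql:.0f}")
print(f"Lead-to-qualified: {lead_to_qualified:.0%}")
print(f"Show rate: {show_rate:.0%}")
print(f"Opportunity rate: {opportunity_rate:.0%}")
```

Here cost per lead is 20 while cost per qualified lead is 100: the same spend, five times the real price once quality is accounted for. Optimizing the first number while ignoring the second reproduces exactly the week-one celebration this example warns about.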
Attribution comes in when deciding which channel is “working.” Search often looks strong in last-click because people use it when they’re already motivated; LinkedIn might be earlier in the decision process, creating awareness and consideration. With a multi-touch view (like linear or position-based), LinkedIn may receive more credit for initiating or assisting journeys that later convert through search. The limitation: if your tracking misses touchpoints (device switches, offline conversations, untracked referrals), the model can under-credit certain channels. The practical takeaway is to treat attribution as a diagnostic lens while your KPI set enforces what “good” means: not just leads, but leads that become pipeline.
What to carry forward from this lesson
Good marketing analytics starts with choosing KPIs that reflect the outcome you want and using attribution models as lenses with assumptions, not as absolute truth. Keep your KPIs few, decision-linked, and balanced across outcome and diagnostic measures. Keep your attribution consistent, and interpret it based on the question you’re asking—who started the journey, who closed it, or who assisted it.
Next, we’ll build on this by exploring Tracking Foundations: UTMs, Events, Pixels [25 minutes].