My current obsession has been reporting. Everyone could benefit from paying more attention to it. Five years, countless ciders, and too many conferences into my career, I finally spent some time on it.

Bad reporting soaks up just as much time as pointless meetings. Analysts spend hours creating reports that no one will read, or building dashboards that never get looked at. Bad reporting means people either focus on the wrong goals, or pick the right goals but choose the wrong way to measure them. Either way, you end up in the same place.

So I thought I’d share what I’ve learned.

We’re going to split this into:

  • Definitions
  • What is the goal of a report and a dashboard? (And how are they different?)
  • Who is the data for?
  • How to create a good dashboard
  • How to create a good report
  • How to create useful graphs
  • Useful tools

(We’ll lean on SEO examples, since we’re on Moz, but for non-SEO folks the principles are the same.)

What is the goal of a report versus a dashboard?

Dashboards

Dashboards should:

  • Measure a goal (or goals) over time
  • Be easily digestible at a glance

The action you take off a dashboard should be:

  • Let’s go look into this.

Example questions a dashboard would answer:

  • How are we performing organically?
  • How fast does our site load?

Reports

Reports should:

  • Help you make a decision

The action you take off a report should be:

  • Making a decision

Example questions a report would answer:

  • Are our product changes hurting organic search?
  • What are the biggest elements slowing down our website?

Who is this data for?

This context will inform many of our decisions. We care about who the audience is, because different audiences know and care about very different things.

A C-level executive doesn’t care about keyword cannibalization, but probably does care about the overall performance of marketing. An SEO manager, on the other hand, probably does care about the number of pages indexed and keyword cannibalization, but is less bothered by the overall performance of marketing.

Don’t mix audience levels

If someone tells you the report is for audiences at obviously different decision-making levels, you’re almost always going to end up creating something that won’t fulfill the goals we talked about above. Split your reporting into individual reports/dashboards for each audience, or it will be left ignored and unloved.

Find out what your audience cares about

How do you know what your audience will care about? Ask them. As a rough guide, you can assume people typically care about:

  • The goals that their jobs depend on. If your SEO manager is being paid because the business wants to rank for ten specific keywords, then they’re unlikely to care about much else.
  • Budget or people they have control over.

But seriously. Ask them what they care about.

Educating your audience

Asking them is particularly important, because you don’t just need to understand your audience — you may also need to educate them. To walk back what I just said: there are, in fact, CEOs who will care about specific keywords.

The problem is, they shouldn’t. And if you can’t convince them to stop caring about that metric, their incentives will be wrong and succeeding in search will be harder. So ask. Persuading them to stop using the wrong metrics is, of course, another article in and of itself.

Get agreement now

To continue that point, now is also the time to get initial agreement that these dashboards/reports will be what’s used to measure performance.

That way, when they email you three months in asking how you’re doing for keyword x, you’re covered.

How to create a good dashboard

Picking a sensible goal for your dashboard

The question you’re answering with a dashboard is usually quite simple. It’s often some version of:

  • Are we being successful at x?

…where x is a general goal, not a metric. The difference here is that a goal is the end result (e.g. a fast website), and the metric (e.g. time to start render) is the way of measuring progress against that.

How to choose good metrics for dashboards

This is the hard part. The metrics we choose to measure a goal end up defining that goal.

A good metric is typically a direct measure of success. It should ideally have no caveats that are outside your control.

No caveats? Ask yourself how you would explain it if the number went down. If the excuses that immediately come to mind are all things outside your control, you should refine the metric. (Don’t worry, there’s an example in the next section.)

We also need to be aware of the incentives the metric will create for how people behave.

Unlike a report, which will be used to help us make a decision, a dashboard is showing the goals we care about. It’s a subtle distinction, but an important one. A report will help you make a single decision. A dashboard and the KPIs it shows will define the decisions and reports you create and the ideas people have. It will set incentives and change how the people working off it behave. Choose carefully. Avinash has my back here; go read his excellent article on choosing KPIs.

You need to bear both of these in mind when choosing metrics. You typically want only one or two metrics per goal, to avoid overwhelming your audience.

Example: Building the spec for our dashboard

Goal: Measure the success of organic performance

Who is it for: SEO manager

The goal we’re measuring and the target audience are sane, so now we need to pick a metric.

We’ll start with a common metric that I often hear suggested and we’ll iterate on it until we’re happy. Our starting place is:

  1. Metric: Search/SEO visibility
    • “Our search visibility has dropped”: This could be because we were ranking for vanity terms like Facebook and we lost that ranking. Our traffic would be fine, but our visibility would be down. Verdict: not a good metric.
  2. Metric: Organic sessions over time
    • “Our organic sessions have dropped”: This could easily be because of seasonality. We always see a drop in the summer holidays. Verdict: better, but still not a good metric.
  3. Metric: Organic sessions with smoothed seasonality
    • “Our organic sessions with smoothed seasonality have dropped”: What if the industry is in a downturn? Verdict: we’re getting somewhere, but let’s push one step further.
  4. Metric: Organic sessions with smoothed seasonality, adjusted for industry
    • “Our organic sessions with smoothed seasonality, adjusted for industry, have dropped”: Now we’ve got a metric that’s quite robust. If this number drops, we’re going to care about it.

You might have to compromise your metric depending on resources. What we’ve just talked through is an ideal. Adjusting for industry, for example, is typically quite hard; you might have to settle for showing Google Trends for some popular terms on a second graph, or showing Hitwise industry data on another graph.
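
To make “smoothed seasonality” a little more concrete, here’s a minimal sketch of one way you might compute it, assuming you’ve exported monthly organic sessions into a pandas DataFrame. The column names and figures are hypothetical, and this is one rough approach rather than the only (or best) way to de-seasonalize traffic.

    # A rough sketch: strip out recurring seasonality by comparing each month
    # to the same month a year earlier, then smooth the month-to-month noise.
    # Column names and figures below are made up for illustration.
    import pandas as pd

    sessions = pd.DataFrame(
        {
            "month": pd.date_range("2021-01-01", periods=24, freq="MS"),
            "organic_sessions": [  # hypothetical monthly totals (thousands)
                120, 130, 150, 160, 155, 140, 110, 105, 145, 170, 180, 175,
                135, 148, 168, 181, 175, 158, 122, 118, 160, 188, 199, 195,
            ],
        }
    ).set_index("month")

    # Year-over-year change largely cancels out the seasonal pattern:
    # a summer dip that happens every year shows up as roughly 0% change.
    sessions["yoy_change"] = sessions["organic_sessions"].pct_change(periods=12)

    # A three-month rolling mean smooths out the remaining monthly noise.
    sessions["yoy_change_smoothed"] = sessions["yoy_change"].rolling(window=3).mean()

    print(sessions.tail(6))

Adjusting for industry could follow a similar pattern, for example dividing the smoothed figure by an index built from Google Trends interest in a few head terms, though that’s a rough proxy rather than a true industry benchmark.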

Watch out if you find yourself adding more than one or two additional metrics. By the time you get to three or four, the information becomes difficult to parse at a glance.

What about incentives? The metric we settled on will incentivize our team to get more traffic, but it doesn’t have any quality control.

We could succeed at our goal by aiming for low-quality traffic, which doesn’t convert or care about our brand. We should consider adding a second metric, perhaps revenue attributed to…