A Step-by-Step Process for Scoring Your Content

Think of a piece of content your team published recently. On a scale of 0 to 100, how would you rate it? And how would your rating help your company?

Stumped? Consider the method that Jared Whitehead devised for scoring content performance.

Jared works as an analyst in the marketing operations group at Red Hat. After 10 years of growth and acquisitions, the B2B technology company found itself in “constant chaos” with its approach to content.

Leigh Blaylock, who manages Red Hat’s global content strategy group and worked with Jared, notes the company “had so many acquisitions, so many products, so many marketing teams” that no one knew what content was meaningful and what content to say no to.

Last year, Jared, Leigh, and their colleagues set out to get Red Hat’s content under control. They wanted to figure out what content they had, what they wanted to keep, what was and wasn’t performing, and what “performing” even meant.

Here’s how they did it:

  • Built a content-scoring team
  • Standardized content types
  • Audited the content
  • Developed a content-scoring method
  • Created a proof of concept

And here’s what they continue to do:

  • Find enthusiasts to promote their scoring method
  • Evolve the content-scoring method
  • Audit the content regularly

Red Hat’s new content-scoring method is proving its business value by giving content teams a consistent way to assess the performance of individual pieces of content so everybody knows what content to say no – or yes – to.

Leigh and Jared shared this initiative in their presentation, “Content Scoring at Red Hat: Building and Applying a Repeatable Performance Model,” at the Intelligent Content Conference.

1. Build a content-scoring team

Jared describes two schools of thought on how to build a content scorecard:

  • A content group develops a scoring method that others follow.
  • A cross-departmental group develops a scoring method that works for all.

Either approach can work; choose the one that makes sense for the people and content in your situation. Either way, pick contributors to the scoring methodology who see the big picture of the content and understand the systems used for creating, tagging, distributing, and managing it.

For Red Hat, this meant Jared involved the marketing content team, which has the big picture of the company’s marketing assets and content systems, from brand to product marketing to corporate marketing. Team members could say, “This is our CMS, this is our taxonomy. This is how we would get to the content to analyze it. These are the tools available to us. This is how we might use them to get what we’re looking for.”

When you have people who understand the content to score and the systems supporting that content, you have a better sense of the other skills needed on the team. For certain things, you may want to hire help; for other things, employees may be natural choices.

Red Hat hired a librarian, Anna McHugh, to join the team. Jared and Leigh refer to her as the project’s rock star. “She sees all the marketing assets,” says Leigh. “She knows what’s available, and she does a tremendous job of analyzing those assets.”

Jared adds, “I could write a novel about Anna’s role. She has become a curator in addition to a librarian. And an analyst. She does ALL the things.”

2. Standardize your content types

The Red Hat team started the initiative in 2012 by standardizing its content types – white papers, data sheets, infographics, etc. – across the marketing organization. It wanted all business units to have a common understanding of each type of content the company produces.

To accomplish this foundational governance work, Red Hat invited a representative from each marketing team to participate on a core team that developed standards for the types of content they worked on.

If you approach content scoring as a cross-functional team, as Red Hat did, you need to standardize content types across departments. If, on the other hand, you’re a single content group developing a scoring method, you don’t need to gather representatives from the other groups, but you still need to standardize the content types within your own group.

3. Audit your content

Next, the Red Hat team cleaned house with a content audit. Its resource library – the external-facing content repository on redhat.com – had grown to more than 1,700 assets. Leigh, Jared, and Anna didn’t know which ones were outdated or irrelevant, but they knew they had a lot of cleaning to do. “It was like having a space full of dust,” Leigh says, “causing visitors to get a sinus infection and leave, never wanting to return.”

They had to figure out a way to identify – and get approval to remove – the dusty content assets owned by multiple groups that had invested time and money in them. They found 419 content assets more than 18 months old, listed those assets on a shared spreadsheet, identified owners, and asked them to decide which assets needed to remain available.

Since the team couldn’t expect content owners to go through all those assets at once, they ran a rolling audit over several months, reviewing 25 assets per week. Each week, they emailed the owners of those pieces, giving them one week to justify keeping any piece in the resource library. Leigh explains:

We didn’t want a simple keep-it-in-there or no. We wanted to understand why they wanted to leave it in there. Was it being used in a nurture campaign or promotion? If so, we could sometimes suggest an alternative.

Eventually, by weeding out the ROT (redundant, outdated, trivial content), they reduced the 1,700-plus assets to 1,200.

4. Develop a content-scoring method

After cleaning up shop, the Red Hat team turned its attention to analyzing the remaining 1,200 content assets. Jared created a content-scoring method to apply across all content types and content groups.

Since all marketing groups used the same web analytics platform, Jared used that tool to learn what mattered to each of them. His findings showed which metrics were most important for each content type:

  • Blogs – time on a page or percentage of page scrolled
  • Videos – times people press play or percentage of the video viewed
  • PDFs – number of downloads

In other words, depending on the group or the content type, people had various ways of determining, “We’re winning. We’re doing our job.” It was up to Jared to devise a universal way of scoring content performance. He needed to get everyone speaking the same language.

That lingua franca of numbers had to work for people who love the geeky aspects of analytics as well as for those who prefer plain English: Did this content work or not? Did it do what we wanted it to do?

Jared devised a scoring method that gives each content asset an overall score between 0 and 100. This number is derived from four subscores – Volume, Complete, Trajectory, and Recency – each of which is a number between 0 and 100. The overall score includes a weighting factor, which accounts for the relative importance of each subscore for a given asset.
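To make the math concrete, here is a minimal sketch (in Python) of how four 0-to-100 subscores might roll up into one weighted overall score. The function and the weights are illustrative assumptions; the presentation doesn’t spell out how Red Hat weights each subscore for a given asset.

    # Minimal sketch: roll four 0-100 subscores into a 0-100 overall score.
    # The weights below are hypothetical, not Red Hat's actual weighting.

    def overall_score(subscores, weights):
        """Weighted average of subscores, each on a 0-100 scale."""
        total_weight = sum(weights.values())
        weighted_sum = sum(subscores[name] * weights[name] for name in weights)
        return weighted_sum / total_weight

    subscores = {"volume": 60, "complete": 44, "trajectory": 80, "recency": 70}
    weights = {"volume": 0.3, "complete": 0.3, "trajectory": 0.2, "recency": 0.2}

    print(round(overall_score(subscores, weights)))  # 61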

Volume

The Volume subscore is a relative measure of traffic. “This number is relative to all other collateral within our resource library. It’s not specific to a particular content type,” Jared says.

The Volume subscore speaks to awareness. It’s a ranking. It shows how many people have seen a given asset compared to the views of other assets on the site.

Example: If a Red Hat web page, which contains a downloadable white paper, receives more traffic than 60% of the other Red Hat web pages with downloadable assets, that web page gets a Volume subscore of 60 out of 100.
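A ranking like this can be sketched as a percentile: the share of other asset pages whose traffic a given page beats. The code below illustrates that idea only; it is not Red Hat’s implementation, and the traffic numbers are made up.

    # Percentile-style ranking for a Volume-like subscore (illustrative only).

    def volume_subscore(asset_visits, other_assets_visits):
        """Percentage of other asset pages this page out-performs on traffic."""
        beaten = sum(1 for visits in other_assets_visits if asset_visits > visits)
        return 100 * beaten / len(other_assets_visits)

    # A page that out-performs 60 of the other 100 asset pages scores 60.
    print(volume_subscore(5_000, [4_000] * 60 + [6_000] * 40))  # 60.0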

Complete

The Complete subscore is the percentage of visitors who download an asset.

Example: If 40 of 90 visitors download the white paper on a given page, that’s a 44% download rate. That page’s Complete subscore is 44 out of 100.
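In code, that calculation is just a rounded percentage; the function name here is ours, not Red Hat’s.

    def complete_subscore(downloads, visitors):
        """Downloads as a rounded percentage of page visitors."""
        return round(100 * downloads / visitors)

    print(complete_subscore(40, 90))  # 44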

Trajectory

The Trajectory subscore reflects the trend in an asset’s traffic over time.

Example: In month one, a web page has 900 visitors. Month two, 600 visitors. Month three, 300 visitors. Traffic to that page is declining. At Red Hat, that negative slope equates to a Trajectory subscore of 0.

If visits had increased over those three months, the Trajectory subscore would reflect a positive slope. The higher the slope, the higher the Trajectory subscore.

For example, an asset had 10 visits in week one, 20 in week two, and 30 in week three. The slope (rise over run) of this asset would be 30 divided by three, which equals 10. Here’s how that calculation breaks down:

rise of 30 (10 in week one + increase of 10 in week two + increase of 10 in week three)

over (divided by)

run of three (weeks one, two, and three)
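For readers who want the slope spelled out in code, here is a small sketch. It uses the common rise-over-run reading (change between the first and last period, divided by the number of intervals), which yields the same slope of 10 for the 10-20-30 example and a negative number for the declining-traffic example. How Red Hat converts a positive slope into a 0-to-100 Trajectory subscore isn’t described here, so that mapping is left out.

    # Sketch of the Trajectory slope. Red Hat scores any downward trend as 0;
    # the mapping from a positive slope to a 0-100 subscore isn't specified above.

    def slope(visits):
        """Rise over run: change from the first period to the last, per interval."""
        return (visits[-1] - visits[0]) / (len(visits) - 1)

    print(slope([900, 600, 300]))  # -300.0 -> downward trend, Trajectory subscore of 0
    print(slope([10, 20, 30]))     # 10.0   -> upward trend; the steeper the slope,
                                   #           the higher the Trajectory subscore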