When I was considering a title for this radio ratings column, all the puns around weighting popped into my head. There’s the Tom Petty standby “The Waiting (Weighting)” because, of course, it’s the hardest part. Then there’s “The Weight” by The Band. They were all too easy, so forgive me. Let’s just get into it.
As you know, the “black box” of ratings includes weighting for different variables. This means that the sample is adjusted to reflect the population or universe estimates for the weighted dimensions.
By the way, there is a subtle difference between population and universe, but for the purposes of this discussion, the words are interchangeable. Sometimes the sample is very similar to the population, so the results are adjusted only slightly; other times the sample is so far off on some weighting variable that some respondents count for much more than others. If those heavily weighted respondents have unusual listening or viewing habits, the results can be skewed.
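To make the mechanics concrete, here is a minimal sketch of cell weighting (post-stratification), using made-up population and sample counts for two age cells. The cell names and numbers are hypothetical, not Nielsen's; the principle is simply that each respondent's weight is the cell's universe estimate divided by the number of respondents in that cell, so people in under-sampled cells count for more.

```python
# Hypothetical universe estimates and survey respondents per age cell.
population = {"18-24": 50_000, "25-34": 70_000}   # persons in the universe
sample     = {"18-24": 20,     "25-34": 70}       # respondents in tab

# Each respondent's weight: persons represented = universe / respondents.
weights = {cell: population[cell] / sample[cell] for cell in population}

for cell, w in weights.items():
    print(f"{cell}: each respondent represents {w:,.0f} persons")
```

Here an 18-24 respondent stands in for 2,500 people while a 25-34 respondent stands in for only 1,000, so the same diary carries 2.5 times the impact in the younger cell. That is exactly how one unusual respondent can move a demo.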
Let me give you a marvelous example from last century. I was VP-Research for Clear Channel and would occasionally head to Columbia, MD to review diaries at Arbitron. The prime purpose of this visit was to review WKCI (KC 101) in New Haven (Glenn Beck was PD and morning host which tells you how far back we’re going), but Clear Channel also had an AM station, WAVZ, that at the time was running Music of Your Life.
If you don’t remember MOYL, this was a “standards” format that probably had a core demo of 75+. Somehow, WAVZ was number one W18-24 middays, and if memory serves, top three in P18-24 middays as well. How could this happen? I found the diary of a young woman who reported that she listened to WAVZ every weekday from 8 AM to 5 PM. In the comments, she stated, “I work in a nursing home.” Honest reporting of behavior but completely off from the reality of the market.
Weighting is done because the rating services want to reflect the population. The weighting variables should have a direct correlation with what is being measured, in this case, media usage. For example, age matters. Does a 20-year-old listen to the same formats as a 50-year-old? Generally not. How about men versus women? If you’re Hispanic and you speak only Spanish, are you more likely to view Spanish language videos over English language videos?
Not every market uses the same variables. For Nielsen Audio, every market is weighted by age cells (18 of them for PPM and 16 for diary). From there, each market may be different. A one-county metro with low minority percentages will not be weighted for any other variable, but most markets have some sort of geographic weighting and it can get very complicated. Geographic weighting exists in both diary and PPM.
Both services also weight for race (Black), Hispanic ethnicity, and language dominance within Hispanic, depending on the makeup of the market. I won’t rehash the arcane rules that determine each market’s weighting variables, but you can look up the rules in the latest Description of Methodology and the weighting variables for each market in the “Blue Book” or “Red Book,” the Nielsen publications that carry the market ranks, populations, etc.
Nonetheless, being Black, Hispanic, or “Other” (non-Black, non-Hispanic), along with language dominance within Hispanic, does have a relationship with the audio you choose to use.
PPM uses a couple more variables: presence of children (yes/no) and employment status (employed full time/not employed full time), the latter used only for persons 18+.
Why not use other variables, such as income or education? Don’t these variables have something to do with listening? For example, income and all news may correlate. How about education and public radio?
Weighting for a particular variable requires credible estimates of the population as well as good data from the respondents. With the exception of language dominance within Hispanic ethnicity, solid estimates projected from U.S. Census data (the gold standard) exist.
Let’s assume we have good universe estimates for education and income. Now the problem is respondent reporting. I remember when our demographer at Arbitron reviewed the education results from diary surveys against the population data. Not many people were willing to admit that they didn’t finish high school.
Respondents appeared to move themselves up one level, for example, high school dropouts reported finishing high school, high school graduates spent some time in college, and the “some college” group was now more likely to graduate. It’s similar in the income category.
You may not make a lot, but do you want to admit it? It doesn’t hurt to go up a category or two in a survey.
Further, the two variables didn’t add much value to the weighting mix. In weighting, sometimes the cure is worse than the disease. Too much weighting (and too many variables) can interact and cause big swings in the data. In radio, even the biggest stations have relatively small AQH audiences (sorry to say), and as everyone who uses the data knows, one meter or one diary can make a world of difference.
More opportunities for weird “bounces” are in no one’s interest, unless of course, your numbers improve and you make your bonus. Keep in mind that “what goes around comes around”, so if you’re lucky one time, you’re likely to be unlucky sometime in the future.
The net is that weighting is a good thing that improves sample representation, but you need to track how Nielsen is doing. It’s easily done by reviewing the E-Book. Remember the E-Book?
That’s right…not many subscribers look at it because it has almost no listening data, but it’s the easiest and often the only place to find population information, how the sample for your market turned out, etc.
If Nielsen is grossly under- or over-representing some characteristic for an extended period (not just one month or one diary survey), it may be time to bitch at your rep as a starting point. Pick your fights. And before you complain, look at how you can turn the “misses” to your advantage.
If a particular variable is underrepresented, how does your station perform with that cell? Can you play to it knowing that each meter/diary carries a greater weight, meaning every quarter hour is worth more?
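The arithmetic behind “every quarter hour is worth more” can be sketched quickly. The universe size and in-tab counts below are invented for illustration: when a cell comes in under its sample target, the same universe is spread across fewer respondents, so each reported quarter hour projects to more persons.

```python
def per_respondent_weight(universe, in_tab):
    """Persons each respondent in a cell represents."""
    return universe / in_tab

# Assumed numbers: a demo cell with a 40,000-person universe.
full_sample  = per_respondent_weight(40_000, 40)  # in-tab target met
short_sample = per_respondent_weight(40_000, 25)  # cell under-represented

print(f"full sample: {full_sample:,.0f} persons per respondent")
print(f"short sample: {short_sample:,.0f} persons per respondent")
```

In this sketch, one reported quarter hour projects to 1,600 persons instead of 1,000, a 60% bigger payoff from the same listening occasion. That’s the opportunity in playing to an under-sampled cell.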
During my Cumulus time, I used to receive a “monthly” (every four weeks) report from Nielsen detailing the sample in each of our PPM metros at a granular level. If you’re a subscriber in a PPM metro, ask your rep to see the report regularly.
Let’s meet again next week.
