Last week, this column explained sampling error. It's a Survey 101 concept, but one rarely considered in the world of ratings. This week, we'll go through another concept you have likely seen in your numbers: "outliers".
An outlier is someone with unusual behavior, perhaps a member of "Communists for Trump" or the Kyiv branch of the Vladimir Putin Fan Club. In our world, it's a panelist or diarykeeper whose behavior is atypical.
I've often cited a diary example from years ago. When I was VP-Research at Clear Channel in the '90s, I regularly conducted diary reviews at Arbitron HQ. Once, I was visiting to look at New Haven diaries, specifically for WKCI-FM, KC101 (this was the station where Glenn Beck was PD and morning show host). One odd thing appeared in the ratings that quarter: co-owned WAVZ-AM, which ran the Music of Your Life format (if you're not familiar, a mix of standards and easy listening), was in first place with women 18-34 in middays. That made no sense.
After a little searching, the outlier diary appeared. A woman in her early 20s listened to WAVZ each weekday from 8AM to 5PM. In her comments, she wrote "I work in a nursing home." Next survey, WAVZ was a no-show in that demo. The diarykeeper kept an accurate record of her listening, but she was not representative of anyone else in that market. Nonetheless, Arbitron had to report that result.
In the PPM world, this happens as well. If memory serves, a Spanish-language news/talk AM daytimer in the Washington, DC market showed up with big numbers some time ago, and an HD2 in Tampa had estimates that made little sense. As a result, Nielsen put together an "outlier mitigation" policy, which you can find online. You can assume that the marketing department had nothing to do with that title.
The policy was meant to mitigate nonsensical results that come from legitimate measurement. In it, Nielsen defines an outlier as a panelist with three characteristics:
- Contributes 50 percent or more of a station’s total metro quarter hours AND
- Is in the 99.5th percentile of the market’s listeners AND
- Exhibits no security risk or compliance concern
Nielsen checks their data for P6+, P18-34, and P25-34 to find these panelists and "trims" the listening. Nielsen agrees that some panelists, however diligent in their compliance, have listening behavior that just doesn't make sense when projected to a population.
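If you want a rough sense of how that math works, the first two tests are simple enough to sketch out yourself. Here's a minimal example, assuming a hypothetical respondent-level export of metro quarter hours by panelist and station (the column names and numbers are made up; Nielsen's actual trimming happens inside its own processing, not in your spreadsheet):

```python
import pandas as pd

# Hypothetical respondent-level export: one row per panelist per station,
# with the metro quarter hours that panelist credited to the station.
df = pd.DataFrame({
    "panelist_id": [101, 102, 103, 104, 105, 106],
    "station":     ["WAAA", "WAAA", "WAAA", "WBBB", "WBBB", "WBBB"],
    "metro_qh":    [620, 40, 35, 300, 280, 12],
})

# Test 1: share of each station's total metro quarter hours per panelist
df["station_qh"] = df.groupby("station")["metro_qh"].transform("sum")
df["share_of_station"] = df["metro_qh"] / df["station_qh"]

# Test 2: where each panelist's total listening falls among all panelists
totals = df.groupby("panelist_id")["metro_qh"].sum()
cutoff = totals.quantile(0.995)          # 99.5th percentile of listeners

flagged = df[(df["share_of_station"] >= 0.50) &
             (df["panelist_id"].map(totals) >= cutoff)]
print(flagged[["panelist_id", "station", "share_of_station"]])
```

The third test, whether there is any compliance or security concern with the panelist, isn't something you can compute from a listening export; that part stays with Nielsen.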
All well and good, but what about outliers with competitive stations? It's worth your while to look at the audience composition of the top stations in your market, whether PPM or diary, and that includes your own. In a diary market, you know that the "fluke" only lasted a week and is unlikely to be repeated, but with PPM, that panelist can be around for a couple of years.
Radio is a business based on stereotypes. We know that Spanish-dominant Hispanics are more likely to listen to Spanish-language radio. We assume that black listeners are more likely to listen to urban radio. News/talk, sports talk, and rock stations are dominated by men, while women listeners drive AC and CHR. Country skews a bit more female but is pretty broad-based, although typically very "Other" (the Nielsen term for neither black nor Hispanic).
Let's say you're most interested in persons 25-54. Run the composition for the dayparts that matter and then review. No surprises? Split it up. Run it by ethnic groups and run it by the components of the demo, in this example 25-34, 35-44, and 45-54. Do the results make sense? Always check location status. It could be that the strange results are "forced listening", in other words, listening at a location where someone else controls the station choice. Just use the "out of home" option in the PPM Analysis Tool to get that answer.
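If you pull that kind of detail into a spreadsheet or a script, the same review can be automated. Here's a rough sketch with entirely hypothetical column names, demo cells, and numbers (your actual export from the PPM Analysis Tool will look different):

```python
import pandas as pd

# Hypothetical station detail: AQH by demo cell, ethnicity, and listening location
listening = pd.DataFrame({
    "demo":      ["25-34", "25-34", "35-44", "35-44", "45-54", "45-54"],
    "ethnicity": ["Black", "Other", "Black", "Other", "Black", "Other"],
    "location":  ["out_of_home", "in_home", "in_home", "in_home",
                  "out_of_home", "in_home"],
    "aqh":       [900, 150, 80, 400, 60, 500],
})

# Break the 25-54 total into its component cells: does any one cell dominate?
by_cell = listening.groupby(["demo", "ethnicity"])["aqh"].sum()
print(by_cell / by_cell.sum())

# For a suspect cell, check how much of the listening is away from home
# ("forced listening" where someone else controls the station choice)
suspect = listening[(listening["demo"] == "25-34") &
                    (listening["ethnicity"] == "Black")]
out_share = (suspect.loc[suspect["location"] == "out_of_home", "aqh"].sum()
             / suspect["aqh"].sum())
print(f"Out-of-home share of that cell: {out_share:.0%}")
```

If one demo/ethnic cell carries most of a station's 25-54 audience and nearly all of it is credited out of home, you're probably looking at one or two panelists rather than a trend.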
As an example, I've seen a situation where a country station was #1 in its market in black persons 25-54, going from a zero the year before to first place. In that situation, the panelists are doing what Nielsen has asked them to do, but even the most ardent country format person would suggest that it's tough for a country station to blow away urban stations in the competition for African American listeners. Again, the measurement is good, but the projection to the population makes no sense.
When you find an anomaly like that (and you will over time), the first question is: who benefits? If the lucky station is in your building, omerta is probably the best strategy; let the competition find the odd result on their own. If it's a competitive station, get in touch with your Nielsen rep, as he or she is always your first stop. Explain the situation, send along some data, and demand to know more about the person(s) and the household. Some questions:
- How long has the household been in the panel? When will they be removed if nothing changes?
- What does the household’s compliance look like? If someone in the household is not a good complier, can Nielsen remove the household?
- Has their listening been consistent?
If you don’t like the answers, either go to someone higher up or threaten to go to the trades. If the situation is egregious, Nielsen won’t enjoy being embarrassed and may make a change. But do make sure the situation is truly bad, not just a little bit odd. No use “crying wolf.”
Let’s meet again next week.