The Interactive Media Bias Chart® (“IMBC”) is a data visualization displaying both bias and reliability ratings for news and “news-like” content. Source types include Web/Print, Podcast, and TV/Video Programs. Our politically balanced team of human analysts allows us to continually add new rated sources and update current ones.
The following description is a summary of our methodology. For more detail, please see our Methodology page.
Overall news source scores are generated based on scores of individual articles. Each individual article is rated by at least three human analysts, balanced across left, right, and center self-reported political viewpoints. That is, at least one person who has rated the article self-identifies as right-leaning, one as center, and one as left-leaning. Sometimes articles are rated by larger panels of analysts for various reasons.
For each news source, we pick a sample of articles that are most prominently featured on that source’s website over several news cycles. We typically have at least 10-15 articles rated per source, but for larger sources (such as the New York Times and Washington Post) we have over 100 articles in our sample. We rate all types of articles, including those labeled analysis or opinion by the news source, and the dominant factor for how we select articles from a page is prominence.
Each overall source score is a weighted average of the individual article scores. Our weighting algorithm has changed over time to try to capture the effect that individual articles with low reliability or high bias have on overall perceptions of the news source, and it is subject to change in the future. Notably, in the most recent update of this interactive chart (August 2020), we updated our weighting algorithm to weight bias scores more heavily.
While previous iterations weighted low reliability scores heavily, they did not weight bias scores. In the current weighting algorithm, bias scores falling into the skews left/right, hyper-partisan left/right, and most extreme left/right categories receive increasingly heavier weights. The most notable effect of this change was that certain sources near the top middle received slightly greater net bias scores. For example, The New York Times and Washington Post moved a few points to the left, and WSJ moved a few points to the right. We believe this weighting better accounts for the impression of bias that these sources’ opinion and analysis content conveys.
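Ad Fontes Media does not publish the exact weight values, so the following is only a sketch of the general idea: articles in stronger bias categories count more heavily toward the source average. The weight values and approximate category cutoffs below are invented for illustration.

```python
def bias_weight(bias):
    """Hypothetical weight for one article's bias score.

    The category names follow the chart's bias axis; the cutoff values
    and weights here are illustrative guesses, not Ad Fontes Media's
    actual parameters.
    """
    b = abs(bias)
    if b >= 30:      # most extreme left/right
        return 4.0
    if b >= 18:      # hyper-partisan left/right
        return 2.0
    if b >= 6:       # skews left/right
        return 1.5
    return 1.0       # middle categories: unweighted

def weighted_source_bias(article_biases):
    """Weighted average of per-article bias scores for one source."""
    weights = [bias_weight(b) for b in article_biases]
    total = sum(w * b for w, b in zip(weights, article_biases))
    return total / sum(weights)

# A mostly centrist source with a few left-skewing opinion pieces:
scores = [-2, 0, 1, -10, -12]
print(weighted_source_bias(scores))  # about -5.67
```

Note the effect: the simple mean of these scores is -4.6, but the weighted average lands further left (about -5.67) because the two skews-left articles count extra, mirroring the small leftward/rightward shifts described above.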
Each analyst has gone through extensive training on Ad Fontes Media’s rating methodology, which is based on content analysis of articles.
Each analyst gives each article ratings on three individual reliability sub-factors of 1) Expression, 2) Veracity, and 3) Headline/Graphic, and then the analyst gives the article an “Overall” reliability rating. Each of these ratings is on a numerical scale from 0 to 64, with 0 being the least reliable and 64 being the most reliable.
Each analyst also gives each article ratings on three individual bias sub-factors of 1) Language, 2) Political Position, and 3) Comparison, and then the analyst gives the article an “Overall” bias rating. Each of these ratings is on a numerical scale from -42 (left) to +42 (right).
The analysts’ scores are then averaged, and the average score is shown on the chart.
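As a minimal sketch of that averaging step (the function name and the sample scores below are hypothetical, but the scales match those described above), combining three analysts’ overall ratings for one article might look like:

```python
def average_article_scores(analyst_scores):
    """Average the per-analyst overall ratings for a single article.

    analyst_scores: list of (reliability, bias) tuples, one per analyst.
    Reliability is on a 0-64 scale; bias runs from -42 (left) to +42 (right).
    """
    n = len(analyst_scores)
    reliability = sum(r for r, _ in analyst_scores) / n
    bias = sum(b for _, b in analyst_scores) / n
    return reliability, bias

# Hypothetical ratings from a left-, center-, and right-leaning analyst:
rel, bias = average_article_scores([(44, -8), (48, -4), (46, 0)])
print(rel, bias)  # 46.0 -4.0
```

The averaged pair (46.0 reliability, -4.0 bias) is what would be plotted for that article.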
Since we first started producing the Media Bias Chart, one of the top requests for features has always been to display the “reach” of each source, because people have been interested in the impact certain sources have on audiences.
However, an “overall reach” number is hard to quantify for all these different kinds of news sources. All have web traffic, but they also have Facebook page likes and Twitter followers. Some have print circulation, apps, email lists, podcast downloads, TV programs, and YouTube channels. Further complicating things, these various channels overlap; for example, a user who likes a Facebook page may see a story in their feed and click through to the website, where the visit counts as web traffic.
Additionally, many of these metrics are individually hard to track down. There are a variety of publicly available measures, but some metrics are more reliable than others. Further, they change quickly over time.
To come up with an “overall reach” number that captures a high-level view of how popular one source is compared to others, we generated our own composite metric.
We gathered publicly available reach data for monthly website visits to each news source from SimilarWeb. SimilarWeb measures “total visits” per month for each site using a variety of data points in its own methodology. Other website measurement tools (e.g., Google Analytics) calculate monthly “unique” visitors, which is often a smaller number than similar measures of “total visits,” but how much smaller varies from site to site. For example, some people may visit CNN.com every day, and some may visit it only once per month. Those frequencies might (or might not) be very different for a site like NewYorker.com.
Because so many people get their news from their social media feeds, we also gathered publicly available information on each news source’s Facebook page likes and Twitter followers. It’s nearly impossible to figure out how much news content from a particular source actually gets viewed by someone who likes a Facebook page or follows a Twitter account, but we counted it to get a general sense of comparable popularity. For some sources, their social media presence is more significant than their website visits, and for some, one social media platform is far more significant than the other (e.g., a large Twitter following but relatively few Facebook page likes, or vice versa).
We wanted to account for all three measurements of website visits, Facebook page likes, and Twitter followers, but the SimilarWeb “total visits” number often dwarfed the Facebook and Twitter numbers. We wanted to avoid that, both because we wanted to capture something closer to estimated monthly “unique” visits and because we wanted to factor in the significant influence of Facebook and Twitter on reach. Therefore, we took the SimilarWeb numbers and divided them by four for all sites. Then we added that number to the number of Facebook page likes and Twitter followers to come up with an “Ad Fontes Media Composite Reach Metric.”
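The composite described above reduces to a simple formula. Sketched in Python (the figures in the example are placeholders, not real measurements for any source):

```python
def composite_reach(similarweb_monthly_visits, facebook_page_likes,
                    twitter_followers):
    """Ad Fontes Media Composite Reach Metric for web sources.

    SimilarWeb "total visits" are divided by four to approximate monthly
    unique visits, then Facebook page likes and Twitter followers are
    added on.
    """
    return similarweb_monthly_visits / 4 + facebook_page_likes + twitter_followers

# Placeholder figures for an illustrative source:
print(composite_reach(40_000_000, 5_000_000, 3_000_000))  # 18000000.0
```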
For the cable news channels, we also wanted to come up with a comparable monthly number. Nielsen ratings are the gold standard for measuring that statistic, but those numbers are often not publicly available. The most recent published Nielsen ratings we could find on the major cable news channels (FOX, MSNBC, and CNN) were from a Hollywood Reporter article in April 2020. We took the average numbers for “all day viewing” and multiplied them by 30 to get a rough estimate of how many people watch in a month. This number is the Ad Fontes Media Composite Reach Metric for TV.
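The TV version is a similar back-of-the-envelope calculation (the viewer figure in the example is a placeholder, not an actual Nielsen number):

```python
def tv_composite_reach(avg_all_day_viewers):
    """Ad Fontes Media Composite Reach Metric for TV: average Nielsen
    "all day viewing" figure multiplied by 30 days for a rough monthly
    estimate."""
    return avg_all_day_viewers * 30

print(tv_composite_reach(1_500_000))  # 45000000
```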
The Ad Fontes Media Composite Reach Metric remains a very rough estimate of “reach” for all the reasons listed herein. We will continue gathering data and adding metrics such as print circulation, podcast downloads, etc., to improve this metric over time.
We hope you enjoy exploring our data on the Media Bias Chart, and we’ll keep adding as much as we can to it as our company grows. If you want to help us grow and get access to the most updated downloads of the Media Bias Chart all the time, please consider becoming a member!