Bias Correction

[epistemic warning: I was recovering from surgery and wanted to document my strategy for reading the news and correcting for bias. This is a very boring post, read at your own peril.]

Contemporary media seems to be growing increasingly outrage-driven. Trying to explain the media as a cause of increased polarization, or increased polarization as a cause of outrage-driven media, is a hopeless exercise. In my own mind, at least, I think of it as a complex simulation of interacting humans running a democratic algorithm, one that gains more interconnections every year thanks to stronger information technology. When democracy runs as an algorithm, the optimal classifier is whatever separates the two sides as efficiently as possible.

The equilibrium here will be whatever information sources can most efficiently sort an individual into one of two political tribes. Truth has no reason to replicate better than fiction. No one person can absorb all the truth of a given day, but the way they absorb information can be wrong in different ways.

I recently read this Yale Cultural Cognition piece on alternative facts. The author does an incredible job dissecting the dynamics of how Trump engages with segments of the mainstream media. The piece builds largely on an equally great earlier piece on motivated reasoning.

From this simple model, we can see how identity-protective reasoning can profoundly divide opposing cultural groups. Yet no one was being misled about the relevant information. Instead, the subjects were misleading themselves—to avoid the dissonance of reaching a conclusion contrary to their political identities.

Nor was the effect a result of credulity or any like weakness in critical reasoning.

On the contrary, the very best reasoners—the ones best situated to make sense of the evidence—were the ones who displayed the strongest tendency toward identity-protective reasoning.

Such coverage, in turn, impels those who want to defend the truth to attack Trump in order to try to undo the influence his lies could have on public opinion.

But because the ascendency of Trump is itself a symbol of the status of the cultural groups that propelled him to the White House, any attack on him for lying is likely to invest his position with the form of symbolic significance that generates identity-protective cognition: the fight communicates a social meaning—this is what our group believes, and that what our enemies believe—that drowns out the facts (Nyhan et al 2010, 2013).

A problem I see here is that motivated reasoning isn’t always wrong in a clear way. It is one of the trickiest ways you can be wrong, and probably the hardest to identify.

To guard against motivated-reasoning errors, I like to use heuristics to estimate how and why my model of the world might be biased.

Once you receive an information set, there are different ways to build your model of the world incorrectly. Here, your model of the world is simply the structure you use to transform raw information into knowledge of the world around you. Most of the time, the information you receive is output from someone else’s model. Primary sources are an exception, but usually only a partial one, since someone else typically had to build the feed that captures primary information and delivers it somewhere you can access it. YouTube videos of primary sources, for example, still required another person whose model of the world said capturing and transmitting that information was worthwhile.

There are many ways to build an incorrect model. Less Wrong does a great job of documenting them, and of showing how almost everyone has a biased view of the world due to simple cognitive biases such as confirmation bias. While there are entire books on evaluating these models of the world, the goal should be to make sure they are probabilistic, unbiased, and neither systematically overconfident nor underconfident (i.e., an 80% chance of something happening should happen about 80% of the time).
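That calibration criterion is easy to make concrete. Here is a minimal sketch (with invented numbers, not real prediction data): bucket your past predictions by stated confidence and check whether the realized frequency in each bucket matches it.

```python
# Toy calibration check: for predictions grouped by stated confidence,
# the realized frequency should roughly match the stated probability.
# All numbers here are invented for illustration.

def calibration(predictions):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is True if the event actually happened."""
    buckets = {}
    for p, happened in predictions:
        key = round(p, 1)  # group into 10% bands
        buckets.setdefault(key, []).append(happened)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

history = [(0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
           (0.3, False), (0.3, False), (0.3, True)]
print(calibration(history))  # {0.3: 0.333..., 0.8: 0.8} -- well calibrated
```

A forecaster whose 80% bucket comes out near 0.95 is systematically underconfident; near 0.6, overconfident.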

As I mentioned, I want to focus on motivated reasoning. When we use our brains, we are able to think about and ‘scan’ all the information we know on a topic. This includes references we hold, sources we know to investigate, and even general processing skills, such as causal inference, that help us organize, evaluate, and structure our information. This is high-dimensional information from a variety of sources, consisting primarily of output from other people’s models. While we can control what we read and hear to an extent, we can’t remove information from our brains (unless we simply forget it over time). If we want to model the world correctly, we need to do a few things:

1.) Ensure our information set spans the true outcome set. All this means is that our brains must hold complete enough information that there exists some model able to map our information set to a true model of reality. For example, if you only ever read the NYtimes, you will have trouble developing any bias correction, since the information you feed into your model is drawn only from the distribution of their model’s output.

The set of all relevant information is a purely theoretical concept: a record of everything possibly related to a given event, far too large and too complicated for the human brain to absorb. We can imagine any given source (news, primary sources, videos, Twitter, etc.) as representing a subset of that set. With raw information, you’re simply receiving an arbitrary slice of what was recorded or documented about an event. News outlets sometimes attempt to aggregate this information and reduce its dimensionality to give you a small model that represents reality. In practice, all the information we receive is a tiny slice of the total information set, and it typically suffers from model bias. But if we can aggregate enough information from different sources that some bias correction and reweighting could, in principle, recover a true unbiased model of the event, then our information set spans the outcome set.
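The spanning idea has a clean linear-algebra analogue. As a toy model of my own (not from the post): if the "true event" is a vector and each source reports only a linear slice of it, then some reweighting can recover the truth only when the stacked slices have full rank.

```python
import numpy as np

# Toy model (invented for illustration): the true event is a vector in R^3,
# and each source reports only a linear projection of it. The combined
# sources span the outcome set iff the stacked projection matrix has
# rank 3 -- only then could *some* reweighting recover the truth.

source_a = np.array([[1, 0, 0],    # covers dimensions 1 and 2
                     [0, 1, 0]])
source_b = np.array([[0, 1, 0]])   # redundant with source_a

both = np.vstack([source_a, source_b])
print(np.linalg.matrix_rank(both))   # 2: dimension 3 is invisible to us

source_c = np.array([[0, 0, 1]])    # a source covering the missing dimension
full = np.vstack([source_a, source_c])
print(np.linalg.matrix_rank(full))   # 3: the information set now spans
```

Reading ten outlets that all sample the same slice (source_b above) adds volume but no rank; one source on the missing dimension does more for spanning than many redundant ones.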

2.) Have an existing estimate of how the unbiased distribution should look. The only way to do this properly is to study all past events where a given information provider (e.g., the NYtimes) offered a prediction or explanation of the world, and then find its systematic biases and misses. This is challenging in part because we aren’t used to reading news articles as predictions, which we generally imagine as clear statements with assigned probability values. We know that many news sites write many articles on differences in outcomes between white and black people, reporting that these differences are due to systematic racial discrimination at an individual and institutional level. That constitutes a prediction about the world: it compares the means of two populations on a chosen variable, then interviews or solicits comments from journalists or academics who share an underlying causal view of why the difference exists.

Since no two events are the same, the goal is to identify a set of latent, unobserved dimensions that map to salient policy dimensions. What this means is that while there might be thousands of articles on race relations, we can explain the bias behind the model in each article using a Bayesian filtering algorithm. Having an idea of what it means to detect and filter out bias using modern methodological research design helps us frame the correction in a more scientific way. I can try to get my brain to simulate how those models would behave if we could actually run them.
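Here is one minimal sketch of what such Bayesian filtering could look like (my framing, not an algorithm from the post): model a source's report as truth plus a constant bias plus noise, and update a normal prior on the bias each time the truth later becomes known. All numbers are invented.

```python
# Conjugate normal update of our belief about a source's constant bias.
# Model (assumed, illustrative): report = truth + bias + noise.

def update_bias(prior_mean, prior_var, report, truth, noise_var=1.0):
    """One Bayesian update of the bias estimate from an observed
    (report, truth) pair, with known noise variance."""
    observed_bias = report - truth
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + observed_bias / noise_var)
    return post_mean, post_var

mean, var = 0.0, 10.0          # start nearly agnostic about the bias
for report, truth in [(5.2, 3.0), (7.1, 5.0), (4.0, 2.2)]:
    mean, var = update_bias(mean, var, report, truth)
print(round(mean, 2))  # 1.97 -- the source overstates by roughly 2
```

The posterior variance shrinks with each observation, which captures the point about studying a provider's track record: the longer the record, the sharper the bias correction you can apply to its next report.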

Unfortunately, estimating this is still far beyond current data science, since it requires robust evaluation of reports against reality, both at a deep level and across decades. The closest our best political scientists can get today is to extract the single left-right latent dimension from the US Congress, then use textual analysis to match a newspaper’s text to the estimated points along that dimension. While this doesn’t let us identify the latent dimensions behind the entire set of news reports, we can use it as a first-order bias correction. You already know this, though: Fox is ‘too far to the right’ and the NYtimes is ‘too far to the left.’ (Or there is the more common view that one of them is perfectly unbiased and correct, and the other hopelessly wrong.)
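A cartoon of that text-matching step, with invented word lists (the real research designs are far more careful about phrase selection and weighting): score a text by how often it uses phrases statistically associated with each side.

```python
# Toy first-order slant score from left- and right-coded phrases.
# The word lists are invented for illustration, not taken from any study.

LEFT_CODED = {"inequality", "undocumented", "climate"}
RIGHT_CODED = {"illegals", "radical", "socialist"}

def slant(text):
    """Return a score in [-1, +1]: -1 purely left-coded, +1 purely right-coded."""
    words = text.lower().split()
    left = sum(w in LEFT_CODED for w in words)
    right = sum(w in RIGHT_CODED for w in words)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

print(slant("rising inequality and climate risk"))        # -1.0
print(slant("radical socialist policies on inequality"))  # 0.333...
```

The serious versions of this anchor the word lists to legislators with known ideal points, so the newspaper score inherits a scale from the Congress estimates rather than from a hand-picked dictionary.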

3.) The first two, ensuring you have the proper information set and correcting for bias, are problems for all scientific inference as applied to the news. The third is similarly a challenge across disciplines, but it is most egregious in political analysis. Motivated reasoning is when you have an attachment to a specific model of the world being the correct one. It seems to reflect a conflict between how humans evolved to build and maintain tribal political relationships, and what it means to accurately perceive political reality.

Under motivated reasoning, we are more likely to engage in biased search, seeking out information or sources that confirm our views, to accept evidence through biased assimilation, and in general to seek out bias-confirming information across the board. It’s silly when you think about it: if you were confident that your view of the world was correct, you would want to build that confidence by pursuing an unbiased look at reality. As far as I can tell, the only reason this isn’t what happens is our evolved preference for belonging to tribes. We literally get high when we follow political meme pages, watch Hannity, or share Jon Stewart clips. That tribal feeling of belonging helps us run our democratic algorithm to gain more political power for our side.

Politics and power are fun. I remember being 17, in my first year of university, knowing I wanted to be an intellectual and liking being edgy. I remember going to the public library and borrowing books with titles like “The Anti-Corporate America Reader.” This was when Obama was campaigning, and everyone knew it was subversive and cool to be progressive. When I couldn’t reconcile those very far-left views with my microeconomics courses, I resigned myself to being a Paul Krugman liberal.

With motivated reasoning, it’s too easy to become and stay a progressive or a conservative. Since these represent the only two rallying points of political power in our democracy, almost every argument or model of the world attaches itself to one of the two. If you imagine these two points as circles embedded in n-dimensional space, where every dimension is an abstracted political issue, then no matter where you are, you must be closer to one than the other (or exactly equidistant). Here I think of motivated reasoning not simply as the way we tether ourselves to the nearest point of political power, but also as the way everyone else works to keep you tethered, since your membership improves their power.
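The geometry above is simple enough to write down directly. A toy sketch, with coordinates invented purely for illustration: each axis is an abstracted issue, and every position has a nearest pole (or sits exactly between them).

```python
import math

# Toy version of the "two poles in n-dimensional issue space" picture.
# Coordinates are invented; each axis is an abstracted political issue.

def nearest_pole(position, pole_a, pole_b):
    """Return which pole is nearer in Euclidean distance."""
    da = math.dist(position, pole_a)
    db = math.dist(position, pole_b)
    if da == db:
        return "equidistant"
    return "A" if da < db else "B"

progressive = (0.9, 0.8, 0.7)
conservative = (0.1, 0.2, 0.3)
print(nearest_pole((0.6, 0.9, 0.4), progressive, conservative))  # A
print(nearest_pole((0.5, 0.5, 0.5), progressive, conservative))  # equidistant
```

The equidistant set is a measure-zero hyperplane, which is one way to read the post’s point: almost everywhere in issue space, you are already nearer to one tribe, and the tethering does the rest.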

 
