SciFi Politics: Part 1

People are mostly sane enough, of course, in the affairs of common life: the getting of food, shelter, and so on. But the moment they attempt any depth or generality of thought, they go mad almost infallibly. The vast majority, of course, adopt the local religious madness, as naturally as they adopt the local dress. But the more powerful minds will, equally infallibly, fall into the worship of some intelligent and dangerous lunatic, such as Plato, or Augustine, or Comte, or Hegel, or Marx. -David Stove

The Matrix
Political theory is weird, and it has bugged me for years. There is obviously something useful in lots of political theory, but it doesn’t mesh nicely with empirical and scientific ideals of experimentation. Plenty of theorists, and even English professors, write criticisms of neoliberalism. They clearly think they are uncovering something interesting or true. Economists view most of these guys as idiots.

The problem is that sometimes theorists are right (or at least sound right), even though they don’t follow accepted scientific methods very closely. And sometimes experimental practitioners are wrong, even though they follow the scientific method closely. This varies between fields and problems. It ends up being very awkward. Frequently smart scientists will write inane political posts on Facebook, which they view as separate and distinct from the scientific methodology they use.

I can’t stand having a scientific methodology that is so strangely inconsistent, and that has no way of incorporating Facebook posts, political commentary, books, and random post-modern theory, other than claiming they’re wrong. If they are wrong, we need to be able to explain why.

I always try to start my methods comparisons by imagining God’s matrix, and working backwards. God’s matrix is the computer that simulates our universe. God has no interest in different models of the world. After all, the best model of a cat is another cat, preferably the same one. Reality is a perfect model of itself; there is only one file. In order to test counterfactual worlds, all God has to do is copy the matrix to another simulation and change some conditions. Here I imagine God as some sort of algorithmic version of Laplace’s demon. I don’t know if this is the right way to think of the world, but it’s at least consistent with how we understand scientific and counterfactual inference to work.


When we run experiments we wish we could create a counterfactual universe, but we can’t. So instead we try to create a counterfactual universe within our own reality. Two random samples of 100 people are a lot like using the same 100 people but duplicating the universe. It’s not the same, but we can use sampling statistics to measure our variance. What does this have to do with political theory?
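Here is a minimal sketch of that analogy in Python. Everything in it is made up for illustration: the population distribution, the fixed additive treatment effect, and the sample size of 100.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "population" and a made-up, fixed treatment effect.
population = rng.normal(loc=50, scale=10, size=100_000)
true_effect = 2.0

# "God's" counterfactual: the same 100 people, with and without the treatment.
people = rng.choice(population, size=100, replace=False)
godlike_estimate = np.mean(people + true_effect) - np.mean(people)

# Our experiment: two *different* random samples of 100 people each.
control = rng.choice(population, size=100, replace=False)
treated = rng.choice(population, size=100, replace=False) + true_effect
experimental_estimate = np.mean(treated) - np.mean(control)

# Sampling statistics tell us how noisy the second estimate is.
standard_error = np.sqrt(np.var(treated, ddof=1) / 100 + np.var(control, ddof=1) / 100)

print(f"counterfactual (same people): {godlike_estimate:.2f}")
print(f"two random samples:           {experimental_estimate:.2f} +/- {standard_error:.2f}")
```

The first number is exact by construction; the second wanders around it from run to run, and the standard error is how we quantify the wandering.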

Before we get to political theory, I want to explain what I think of humans. Recent technological progress has given us advanced algorithms for modeling the world. It’s hard to survey all of human writing and research, but past writing on human nature always sort of viewed humans as mythical. Most of the old theorists of political theory and human nature were religious.

It’s actually sort of interesting the way guys like Hobbes wrestled with religion. It’s hard to simulate in your own mind, but you can tell something felt off for Hobbes, even if he couldn’t quite articulate what. It’s hard for us to understand how radical and atypical it was at the time to not believe in God.

Even as religion faded, there was still no basis for understanding the human brain. In the 20th century, as math and computers got cooler, they remained immensely distinct from the human brain.

None of the computers built in the 20th century were particularly smart, even though the humans who coded the machines were. The equations describing the world had to be specified in the code. A human had to study math, stare out his window at the world, and then try to think of the right equation to impose on it. This is still mostly true, but now it’s possible to see how it might not be true.

Most of the things we think of as easy, as opposed to hard, don’t actually map to complexity in the world. Our brains evolved for specific purposes, and are optimized to find patterns that help us survive and reproduce.

The idea is that there is no meaningful difference between the world we live in and the computer world. Reality is nothing more than a constantly refreshing set of information, following a set of physical rules. We observe it, and our brains project a picture of reality that we interact with. The distinction between a machine running binary code and a human is meaningless. It only feels distinct. We’re both information-processing devices that take input and produce output.

Another way to think about it: an AI would never know the difference between sitting stationary and solving differential equations, and standing up and stacking blocks on top of each other. In both cases it’s receiving a set of inputs, using them to solve some function, and producing a set of outputs. The difference is only meaningful to us because we evolved to be interested in things that interact with our projection of reality.

This idea has existed in some form for a while. There were hints of it in old Greek philosophy, but I don’t give them any credit because they wrote lots of garbage, and then we look back and go, “Oh wow, if you reread their argument with current knowledge you can tell they secretly knew it all along!”

There were also guys like Bertrand Russell who sort of picked up on the idea that chairs are reconstructions from sense-data. It was interesting at the time, because knowledge of reality was growing out of math and physics, but it still wasn’t exceptionally useful. Russell wrote about tables, well before we had this computational matrix view of the world:

To make our difficulties plain, let us concentrate attention on the table. To the eye it is oblong, brown and shiny, to the touch it is smooth and cool and hard; when I tap it, it gives out a wooden sound. Any one else who sees and feels and hears the table will agree with this description, so that it might seem as if no difficulty would arise; but as soon as we try to be more precise our troubles begin. Although I believe that the table is ‘really’ of the same colour all over, the parts that reflect the light look much brighter than the other parts, and some parts look white because of reflected light. I know that, if I move, the parts that reflect the light will be different, so that the apparent distribution of colours on the table will change. It follows that if several people are looking at the table at the same moment, no two of them will see exactly the same distribution of colours, because no two can see it from exactly the same point of view, and any change in the point of view makes some change in the way the light is reflected.

He uses terms like ‘sense-data’ to discuss the way humans interact with the world. I read his book on these problems of philosophy years ago. At the time I couldn’t really understand what he was trying to prove. There was this vague notion that sense-data and perception were weird or something.

There was some other garbage as well, by Jean Baudrillard. I actually think this quote is beautiful; it takes work to be so nonsensical.

“And so art is everywhere, since artifice is at the very heart of reality. And so art is dead, not only because its critical transcendence is gone, but because reality itself, entirely impregnated by an aesthetic which is inseparable from its own structure, has been confused with its own image. Reality no longer has the time to take on the appearance of reality. It no longer even surpasses fiction: it captures every dream even before it takes on the appearance of a dream.”

It makes more sense when you realize there is a set of information in God’s matrix, and we are programs interacting with that information in a way that optimizes our evolutionary function. We notice and build tables because they are useful to us, and we evolved to build useful things.

Pre-History


2001: A Space Odyssey opens with a primate tribe fighting another tribe over a watering hole. In a moment of evolutionary transcendence, one of the primates realizes he can use a bone as a weapon.

Since we literally evolved through that state, it’s not hard for us to imagine what it would be like, and why using a tool would be obvious. The amount of information processing required, though, is incredibly complex. Put yourself in the primate’s position: there is another tribe that has been agitating recently. You are also concerned that the leader of your tribe is ineffective and needs to be overthrown. You’re currently leading a small raiding party. You’re simultaneously surveying the landscape, thinking of attack paths, considering leadership dynamics, and estimating backup plans. In addition, using primitive language, you’re communicating the information you’re receiving through your senses to other members of your tribe. The raw sensory input alone is an enormous amount of data.

Not only can this brain absorb phenomenal amounts of information, it can actively process it, formulate and update models of it, and convert it to very low-bandwidth speech to communicate. Our silicon computers are very, very far away from this ability. It’s easy to gloss over this point, but it’s important to appreciate. Our senses are absorbing massive amounts of information at every tick of time; you can envision it as matrix-type information where 1s and 0s are scrolling past at incredible speeds. Our brains then take in this information and create a projection of reality.

From this point in pre-history it takes tens of thousands of years for the smartest humans to understand and solve partial differential equations. In terms of complexity, outside of human biology, this mapping of PDE input to output is much simpler. We are just not optimized to do it. Consider Gauss or von Neumann: through genetic accident their brains were very, very slightly different from ours. This difference let them solve math and traditionally complex problems with an ease most of us can only dream of. The reality is that, historically, these small quirks in their brains probably wouldn’t have helped them survive and reproduce. Yet they were just the right quirks to advance human scientific knowledge by decades, maybe more.

Causality
We evolved to understand causality in a very specific way. It was never to our advantage to understand causality as 100, 1,000, or an arbitrarily high number of interacting events. It was relational, tribal, and environmental. We understand wars, elections, and history in that same relatively low-dimensional, storybook way, similar to a tribal history or the strategic recap of a buffalo hunt. This retelling of causality is perhaps meaningless to Laplace’s demon, or to God’s matrix.

The field of causal inference and methodology has worked remarkably well at building a framework for running medical trials and controlled experiments. Imagining counterfactual worlds seems similar to the idea that the world is simulated.

It hasn’t worked as well at building a grand, encompassing theory of causality. The following are successive refinements of the definition. Each time someone tries to make a clean statement, it turns out someone else can play around with it and construct an example where it falls apart, so the definition of causality gets refined again.

This first excerpt, from the Stanford Encyclopedia of Philosophy’s entry on causation, states the most bare-bones interpretation:

Where c and e are two distinct actual events, e causally depends on c if and only if, if c were not to occur e would not occur.

Then some people complained or had a fit or something, because there could be things in between or whatever. It was refined:

c is a cause of e if and only if there exists a causal chain leading from c to e

Then there was concern that this might miss some (potential) probabilistic features of reality. It was refined again:

Where c and e are distinct actual events, e causally depends on c if and only if, if c were not to occur, the chance of e‘s occurring would be much less than its actual chance.

From here there has been more research, lots of it brilliant, trying to pin down a fully consistent model of causality.
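To make the first two definitions above concrete, here is a minimal, made-up sketch: a deterministic toy chain c → d → e, nothing taken from the quoted texts.

```python
def run_world(c_occurs: bool) -> dict:
    """Toy deterministic world: d happens iff c does, e happens iff d does."""
    d_occurs = c_occurs
    e_occurs = d_occurs
    return {"c": c_occurs, "d": d_occurs, "e": e_occurs}

actual = run_world(c_occurs=True)
counterfactual = run_world(c_occurs=False)

# Counterfactual dependence: e depends on c iff, had c not occurred,
# e would not have occurred.
e_depends_on_c = actual["e"] and not counterfactual["e"]
print(e_depends_on_c)  # True, via the causal chain c -> d -> e
```

The probabilistic refinement would replace the boolean check with a comparison of chances, which is exactly where a deterministic toy like this stops being adequate.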

My view is that since we only understand and interpret a specific slice of the information in reality, it’s nonsensical for us to have an encompassing law of causality. Our distinctions between events owe more to a storied, evolution-shaped interpretation of the world than to some true documentation of reality.

If you imagine a gigantic computer screen of all binary information on earth, updating on small ticks of time, the information we extract from it is very small and only that which is evolutionarily relevant to our survival. In this conception of reality, causality is the pattern linking the past to the future.

We see specific patterns in the information, but those patterns are not fundamentally different from, or more important than, patterns on different scales that are invisible to us. And within each pattern are intersecting and weaving subpatterns throughout space and time. Whether the future flows deterministically from the past, and how the level of abstraction easiest for us to understand relates to the microscopic physical structure of reality, are probably empirical questions we would need a future learning supercomputer to analyze.

Even then, our ability to understand causality is an arbitrary way to understand the world. It works well enough, though, since in practice we often use it to understand things like medicine and the effect of education on salary, which we can think of as strictly empirical and predictive exercises, where we don’t really care about proving the true causal path.

This method does seem to go a little haywire, though, when we try to use it to understand grand histories. Tolstoy wrote ‘War and Peace’ on this exact point, sort of. At the time everyone was working to understand Napoleon’s genius and the specific explanations behind his every action and victory. Tolstoy, on the other hand, viewed Napoleon’s grand strategy, and the outcomes of specific battles, as far more due to random and undocumented events.

All he had to do was point out a series of dependencies. He ‘rolled’ the chain of causality up. It’s weird, but not hard, to imagine it rolled up further, into dimensions that don’t fit our storied interpretation of the world.

To us, their descendants, who are not historians and are not carried away by the process of research and can therefore regard the event with unclouded common sense, an incalculable number of causes present themselves. The deeper we delve in search of these causes the more of them we find; and each separate cause or whole series of causes appears to us equally valid in itself and equally false by its insignificance compared to the magnitude of the events, and by its impotence—apart from the cooperation of all the other coincident causes—to occasion the event. To us, the wish or objection of this or that French corporal to serve a second term appears as much a cause as Napoleon’s refusal to withdraw his troops beyond the Vistula and to restore the duchy of Oldenburg; for had he not wished to serve, and had a second, a third, and a thousandth corporal and private also refused, there would have been so many less men in Napoleon’s army and the war could not have occurred.

Had Napoleon not taken offense at the demand that he should withdraw beyond the Vistula, and not ordered his troops to advance, there would have been no war; but had all his sergeants objected to serving a second term then also there could have been no war. Nor could there have been a war had there been no English intrigues and no Duke of Oldenburg, and had Alexander not felt insulted, and had there not been an autocratic government in Russia, or a Revolution in France and a subsequent dictatorship and Empire, or all the things that produced the French Revolution, and so on. Without each of these causes nothing could have happened. So all these causes—myriads of causes—coincided to bring it about. And so there was no one cause for that occurrence, but it had to occur because it had to. Millions of men, renouncing their human feelings and reason, had to go from west to east to slay their fellows, just as some centuries previously hordes of men had come from the east to the west, slaying their fellows.

We can switch back into our mathy matrix view of the world now. The historians of the time were reading relentlessly into the specific details and overfitting their models. Their brains were using words to generate non-linear models to classify and explain the Napoleonic Wars with the causal story that best satisfies the way the human brain likes to understand the world. That’s fine, but they were overfitting by using idiosyncratic events in a way that perfectly explained the story. Napoleon was a ‘great man’, so every action he took had to have been calculated genius. It wasn’t. And yet Napoleon seemed to be a military genius, so he had to have made some brilliant choices we can learn from.
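The overfitting analogy is easy to see with made-up data. None of this is about the Napoleonic Wars; the trend, the noise, and the polynomial degrees are all arbitrary. The point is that a model with enough free parameters “explains” the events it was fit to almost perfectly and still predicts nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 15)
y = 2 * x + rng.normal(scale=0.3, size=x.size)    # simple trend plus noise
x_new = x + 1.0                                    # "future" events
y_new = 2 * x_new + rng.normal(scale=0.3, size=x_new.size)

for degree in (1, 12):                             # simple story vs. idiosyncratic story
    coeffs = np.polyfit(x, y, degree)
    fit_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
    pred_error = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: in-sample error {fit_error:.3f}, out-of-sample error {pred_error:.1f}")
```

The high-degree fit has a tiny in-sample error and an enormous out-of-sample error, which is roughly what a perfectly detailed causal story of a one-off war gives you.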

Tolstoy was on to something, but didn’t have the scientific context to place it. In actuality, it’s an empirical question how much of Napoleon’s success can be attributed to his strategic ability. This question, though, is itself based on previous history; after all, what is strategic ability? We classify strategic ability based on previous events where a military leader succeeded. In this sense, getting an idea of what strategic ability is can be viewed as a filtering algorithm searching for the structure of attributes an individual has that contribute to military success.

We would then need to test this by looking at all past military leaders, scoring their success, and searching for attributes correlated with success. That’s actually really hard. We can try to do this as a rough approximation by reading history books, but it’s hard to ever be sure we are on to something given how much information processing is required to do this the right way.
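Here is a minimal sketch of what that filtering would look like in practice. The leaders, attributes, and scores are entirely made up; actually scoring all past military leaders is exactly the hard part.

```python
import numpy as np

attributes = ["aggression", "logistics", "charisma", "luck_proxy"]
# Rows: hypothetical leaders. Columns: arbitrary 0-10 scores on each attribute.
X = np.array([
    [9, 4, 8, 6],
    [5, 9, 6, 7],
    [7, 7, 9, 3],
    [3, 5, 4, 8],
    [8, 8, 7, 5],
], dtype=float)
success = np.array([7.5, 8.0, 6.5, 4.0, 9.0])  # made-up "military success" scores

for name, column in zip(attributes, X.T):
    r = np.corrcoef(column, success)[0, 1]
    print(f"{name:>10s}: correlation with success = {r:+.2f}")
```

With five made-up leaders the correlations are meaningless, which is the point: the method is simple, but the data collection and measurement are where the real information processing lives.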

Hume made a good case that proving causality is a lost cause, but you can get arbitrarily close. For example, I can’t prove the sun will rise tomorrow, but it seems likely, because we have observed it rising in the past and that pattern has predicted the future perfectly so far. This works for Napoleon and military strategy as well: as we notice patterns, we test them by using them to predict the future. If we can predict the future, the model we used to predict it is at least provisionally valid. So the more we correctly predict the future, the better our conception of the scientific method and causality becomes. Predicting the future gives us a chance to test the structure of our world, and to see whether we can understand the signal within currently observed patterns.

Through this we can understand history (historiography), political theory, most philosophy, and most general study of the human past as empirical science. It’s pretty much a tautology: we are computers embedded in an information space, so everything we observe and consider is empirical. The question “Is math intrinsic to the universe? Or did humans invent it?” is nonsense. Our brains are embedded in the universe, and when we observe two rocks sitting together, our physical brain structure adapts to simulate an abstraction of those two things.

Don’t take this the wrong way; I’m not trying to claim some special authority on this blog to answer huge philosophical questions. What I want is for everyone else to stop acting like they are clever, or worth talking about, or worth teaching to undergraduates as anything other than a history of broken human thought. If you can shake the notion that our brains are special, it’s easier to avoid these philosophical traps.


3 Responses to SciFi Politics: Part 1

  1. Anonymous says:

    > Consider Gauss or von Neumann: through genetic accident their brains were very, very slightly different from ours. This difference let them solve math and traditionally complex problems with an ease most of us can only dream of. The reality is that, historically, these small quirks in their brains probably wouldn’t have helped them survive and reproduce. Yet they were just the right quirks to advance human scientific knowledge by decades, maybe more.

    You might find some comments of Manin interesting (reproduced below).

    ==========

    For many years I led a seminar in Moscow on psycholinguistics and evolution of mind and consciousness. Amongst its participants and contributors were linguists, ethnologists, neurobiologists, psychologists and psychiatrists. All of us had different backgrounds and interests, and we tried to find common viewpoints and problems that might possibly be clarified by throwing together our diverging experiences.

    My own inquiry gradually focused on a project which probably only a dilettante could have conceived. I started imagining in some detail the emergence of language as a system of social behavior.

    I was trying to see through the mist of centuries, far beyond the boundary where the methods of comparative linguistics start failing because of the exponential decay of available data. (For example, nostratic reconstructions refer to a very late epoch of about (10-13)*10^(3) BC.) This justifies a change from a purely linguistic viewpoint to a psycholinguistic one.

    1. In modern societies, there are a (very) few persons whose level of linguistic competence is considerably higher than that not only of laymen but also of practically all other members of society. I have in mind such crystallizers of national languages as Dante, Shakespeare and Pushkin.

    I postulated that this was true also at the very early stages of development of speech. Put somewhat paradoxically, there were people through whom a yet unborn language spoke, and this unsystematic speech, generated by a mutated brain, burst into a non-speaking environment through proto-shamans and proto-poets.

    Thus, in the Saussurean idiom, parole antedated langue.

    2. The primary function of developing consciousness was not cognitive. It consisted in the introduction of a psychic mechanism that could temporarily stop inborn behavioral patterns.

    The primary function of developing speech was to provide a signal system for stopping such instinctive action; it could be interiorized and thus form a basis of an individual psyche.

    Closely connected with this, developing speech provided to the specially endowed individuals a means to control behavior of other individuals and a means to create the “alternative realities” which developed later into religion, literature, philosophy and science.

    3. The developing left brain / right brain asymmetry accompanying the growth of linguistic competence of Early Man, and probably expressed initially only in a scattered minority of individuals, could easily lead to what in modern terms would be described as severe neurotic disturbance. (Similar speculations were based on a different material, e.g. changes in sexual behavior from that characteristic of the animal condition to that of the first human societies.)

    At a certain stage of reconstruction, I realized that what I had been imagining was a figure strikingly resembling the mythological trickster. I started studying the literature on tricksters and found, to my delight, that tricksters all over the world seemed to be endowed with special linguistic abilities and were at the same time thoroughly neurotic…

    Evolution favored the trickster’s genes because his prodigious sexual activity was assisted by his manipulative skills. Moreover, the trickster’s role of a wise man near the source of power may have given him an additional reproductive advantage.

    Only recently have I discovered that approximately at the same time, in 1988, a group of researchers published the book “Machiavellian Intelligence.”

    Its content is briefly summarized in [MI2] as follows: “[…] the evolution of the intellect was primarily driven by selection for manipulative, social expertise within groups where the most challenging problem faced by individuals was dealing with their companions. The term ‘Machiavellian Intelligence’ was coined by the authors (or editors) precisely in order to express this manipulative social expertise, and evidence of its important role was already found in societies of primates.”

    I was happy to learn that my “Trickster” fits exactly this description.

  2. Levantine says:

    – What I want is everyone else to stop acting like they are clever, …. anything other than a history of broken human thought. …….

    In that spirit, I say I’m dissatisfied with the list I made the other day, the list of reading suggestions. I partly misread your post. Glancing through this blog, I’m getting a better picture of your interests.

    In the light of those interests I’d mention these books as important, though some of them may well be old hat:
    Oswald Spengler’s booklet from 1933, and his two booklets from the 1920s. They helped me a lot in gradually understanding how the political left can be fundamentally flawed and how anything on the political right can be respectable.

    On the basis of just how good it is, I’ll mention Disciplined Minds: A Critical Look at Salaried Professionals and the Soul-battering System That Shapes Their Lives by Jeff Schmidt.

    Quote:
    – The question “Is math intrinsic to the universe? Or did humans invent it?” is nonsense. Our brains are embedded in the universe, and when we observe two rocks sitting together, our physical brain structure adapts to simulate an abstraction of those two things……..

    I’d say that human minds involve more than brains. I think there is a ton of indications / evidence for it and arguments in favour of that, on Michael Prescott’s blog “Occasional thoughts on matters of life and death.”

    “But the more powerful minds will, equally infallibly, fall into the worship of some intelligent and dangerous lunatic, such as Plato, or Augustine, or Comte, or Hegel, or Marx.” (-David Stove)

    I’ve encountered powerful minds who fall into the worship of Darwin. And I’ve seen evolution treated like a religion.

    Having said that, I’m adding David Stove to _my_ list of readings.

    So much for now. Bye!

    • Sorry for the late reply! Thanks for the rec; I hadn’t heard of or read Oswald Spengler, and from his wiki he seems like the type of thing I’d like.

      Something I’ve found, well, fun I guess, is learning how the political right can be respectable, as you said. After undergrad and an adolescence of Bush and Obama, it seemed so obvious that the right is Fox News nonsense and the left is obvious progress.

      Seeing the cracks in the picture was hard, because Fox News obviously is nonsense, and the political right in the US doesn’t have many intellectuals, due in part (I think) to a purposeful lack of ideological diversity in academia.

      Guys like Stove aren’t as popular anymore, aren’t often assigned, and some stuff he wrote on women/feminism (anti-egalitarianism) was removed from the original archives of his university in Australia after he died.

      For me it’s more of an intellectual game though. I think it’s easy, at least I see it online, to start seeing the cracks in progressivism, reading a new set of materials, and jumping ship to a far-right internet blogosphere.

      Thanks for reading and commenting. And thanks again for the reading suggestions, I’m going to check those out.
