Bear with me, this is still very rough thinking…

Prisoner’s Dilemma

At uni, one of the more impressive encounters with political theory was the International Relations 101 course, especially the intro to Neorealism, which at its heart is an adaptation of the Prisoner’s Dilemma to the circus of International Relations.

The Prisoner’s Dilemma was developed by researchers at RAND[1] in the 1950s. It was meant to give a theoretical frame to situations where individual actors act rationally, yet this rationality leads to outcomes that are not the best outcomes possible in the given system[2]. It was used extensively to explain the arms race during the Cold War.

What is important is that the Prisoner’s Dilemma assumes that the players do not or cannot communicate. They are left in the dark about the intentions of the other party and, left to their own devices, pursue ego-rational strategies. The non-communication, and hence the uncertainty about the intent of the counterparty, are key.
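
To make that mechanism concrete, here’s a minimal sketch in Python, using the textbook payoff values (years in prison, so lower is better); the numbers are just the standard illustration, nothing more:

```python
# Canonical Prisoner's Dilemma payoffs: years in prison for
# (my_move, their_move), so lower is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent
    ("cooperate", "defect"):    (3, 0),   # I stay silent, they betray me
    ("defect",    "cooperate"): (0, 3),   # I betray, they stay silent
    ("defect",    "defect"):    (2, 2),   # mutual betrayal
}

def best_response(their_move):
    """Pick the move that minimises my own sentence, given theirs."""
    return min(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Whatever the other prisoner does, defecting is individually rational...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (2, 2) is worse for both than the (1, 1)
# they'd get by cooperating.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

Defecting strictly dominates regardless of what the other prisoner does, and that’s exactly why two rational, non-communicating players end up at an outcome that’s worse for both.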

Signalling Theory

What the Prisoner’s Dilemma made explicit is an assumption that economists implicitly make about human nature across a multitude of very diverse interactions.

We’re fundamentally ignorant[3] of others’ intentions, so we have to rely on sending out signals about our own intentions and status, and on interpreting the signals of others[4]. This is nothing new, nor extraordinary, but it’s often good to remind ourselves of this fact, especially since neglecting it can lead to worrisome results. For instance, when the UK residential solar photovoltaics programme stalled and needed jumpstarting, many municipalities decided to equip council housing with PV installations. This had an adverse effect on total PV installations, however, as PV installations now seemed to signal lower-income council housing.

Now, we have plenty of mechanisms and supporting economic theory about interactions between actors based on this kind of signalling. Price is essentially a function of signalling, as can be a university degree. And as an episode of the Freakonomics podcast explains, signalling can account for the difference in sales of hybrid automobiles.

However, economics looks at signalling only at the level of human interaction and intent. Biology looks at signalling through a lens that’s labelled evolution. But in a world increasingly riddled with algorithmic actors, we have to ask: do they engage in some kind of signalling?

Efficiencies and disturbances in algorithmic environments

The main advantage of technical systems is that communication is cheap and explicit. Systems that can “talk” to each other usually do so explicitly, without much need for interpretation on the recipient system’s part. That makes a lot of things much easier. Imagine driving, for instance. There’s a lot of interpretation going on while driving, as you as the driver try to anticipate the actions the other drivers around you are about to take. The situation is further complicated by every other driver trying to anticipate the actions of the drivers around them, all while nobody has any sort of complete or reliable information about the current flow of traffic they’re in.

Imagine machine communication added to that mix, where every car broadcasts its current status. This broadcasting of information makes it much easier to gain an informational overview of the traffic and reduces the need for continuous interpolation and interpretation, as the current status of the traffic around a driver has a massive effect on that driver’s option-space.
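
Here’s a toy sketch of what such explicit broadcasting could look like. All the names and fields are made up for illustration and aren’t taken from any real vehicle-to-vehicle protocol:

```python
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    """A hypothetical status message a car might broadcast:
    explicit state instead of signals that need interpreting."""
    vehicle_id: str
    speed_kmh: float
    lane: int
    braking: bool
    indicating: str  # "left", "right" or "none"

def traffic_overview(broadcasts):
    """Aggregate explicit broadcasts into a picture of the flow,
    with no guessing about other drivers' intentions required."""
    return {
        "vehicles": len(broadcasts),
        "avg_speed_kmh": sum(b.speed_kmh for b in broadcasts) / len(broadcasts),
        "braking": [b.vehicle_id for b in broadcasts if b.braking],
        "changing_lanes": [b.vehicle_id for b in broadcasts
                           if b.indicating != "none"],
    }

nearby = [
    VehicleStatus("car-1", 92.0, 2, braking=False, indicating="none"),
    VehicleStatus("car-2", 60.0, 1, braking=True, indicating="none"),
    VehicleStatus("car-3", 88.0, 2, braking=False, indicating="left"),
]
print(traffic_overview(nearby))
```

The point isn’t the data structure; it’s that nothing here needs to be inferred, the state is handed over explicitly.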

However, this assumes that the cars in this example engage in “honest broadcasting”, i.e. that no driver has anything to gain from fudging the machine’s signals. What happens, though, when there is a strong incentive for machines to “lie”? The example that immediately comes to mind is, of course, algorithmic trading. With the flash crash and the recent Knight Capital disaster[5], there has been plenty of writing about the role of algorithms in complex, multivariate systems, and about how humans are no longer capable of deciphering the decisions those algorithms make. These pieces usually conclude by extrapolating from the financial markets and paint a gloomy picture of a world where algorithms take over, incomprehensible to us feeble humans.
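
To stay with toy sketches: here’s a purely illustrative example, loosely modelled on spoofing, of how a single “lying” broadcaster skews the aggregate signal that everyone else acts on. Nothing here describes a real trading system:

```python
# A toy order book of bids, each entry being (price, size). A spoofing
# algorithm posts a large bid it never intends to fill, so the naive
# "signal" that other algorithms read (where the buying interest sits)
# is fabricated. Purely illustrative, not a model of a real market.
honest_bids = [(100.0, 10), (99.5, 20), (99.0, 15)]
spoofed_bids = honest_bids + [(100.5, 500)]  # huge bid, meant to be cancelled

def perceived_demand_price(bids):
    """Size-weighted average bid price: a naive signal of buyer interest."""
    total_size = sum(size for _, size in bids)
    return sum(price * size for price, size in bids) / total_size

print(perceived_demand_price(honest_bids))   # ~99.44
print(perceived_demand_price(spoofed_bids))  # ~100.41, moved by a single lie
```

One fabricated data point, and the signal everyone else reads has moved by a full point.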

What those arguments usually forget, however, is that it’s us humans who have written those algorithms to engage in a very specific task: interpreting human signals, which are often ambiguous and contradictory.

If machines are to take over larger and larger parts of our everyday lives, we had better make them work with explicit information rather than inferred signal interpretation. That way, they can solve huge problems.


  1. RAND Corp. is a think tank that spun out of a US Air Force research project. Some claim it’s “the original think tank.” 

  2. For a detailed account of the Prisoner’s Dilemma, please check the comprehensive Wikipedia entry. 

  3. This is really just a fancy way of saying that we don’t know… 

  4. The easiest intro to signalling theory I’ve encountered so far is an episode of Freakonomics. Go check it out. 

  5. Flash Crash and Knight Capital 

