
The Punchline is Redundant

In graduate school, I was friends with a young man of a particularly restless disposition: a mathematician of a waggish inclination, given to a certain kind of tomfoolery. Often his antics would take the form of games of such banal simplicity that they felt like elaborate, conceptual pranks.

One game he set a number of us playing, during a longueur one evening with friends, sticks in my mind. Having first had each of us commit solemnly to absolute honesty, we each chose a number, greater than or equal to zero, which we would then reveal one after the other (committed as we were to honesty), and whoever had chosen the lowest number that no one else had chosen was the winner. Several rounds were played, and while everyone wrestled with the question of whether to choose zero, or maybe one, trying to second-guess each other, I refused to join in, offended by the very nature of the game.
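The scoring rule is simple enough to simulate. Here is a minimal sketch in Python, under the assumption of five players choosing naively at random from {0, 1, 2}; the real game, of course, lives in the second-guessing that naive random play leaves out.

```python
import random
from collections import Counter

def lowest_unique_winner(choices):
    """Return the index of the player holding the lowest number
    that exactly one player chose, or None if every number is shared."""
    counts = Counter(choices)
    unique = [n for n, c in counts.items() if c == 1]
    return choices.index(min(unique)) if unique else None

# Hypothetical setup: five players, naive uniform play over {0, 1, 2}.
wins = Counter()
for _ in range(10_000):
    picks = [random.choice([0, 1, 2]) for _ in range(5)]
    winner = lowest_unique_winner(picks)
    if winner is not None:
        wins[picks[winner]] += 1
print(wins)  # tallies how often each number was the winning choice
```

Against naive opponents, even zero wins its share of rounds, which is exactly the question everyone at the table was wrestling with.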

A second game stays with me as well: pulling a mathematics journal from the shelf in the math department common room, my friend began reading aloud random sentences from various articles, pausing before the final word and inviting another friend to guess it. He did pretty well, as I recall.

There was something powerful about these games. The first game, stripped of all the frivolity, ritual, adornment, and pretense that usually accompany games, revealed the essential nature of what a game is: a “game” in the sense that the mathematician John von Neumann formulated it. To von Neumann’s way of thinking, chess was not a game in the sense he cared about: perfectly rational players would know the perfect set of moves to play, and thus they would play those moves. He was more interested in poker, where players have incomplete information (each sees only their own hand and the cards on the table) and are left to compute probabilities and devise strategies.

Good poker players do not simply play the odds. They take into account the conclusions other players will draw from their actions, and sometimes try to deceive the other players. It was von Neumann’s genius to see that this devious way of playing was both rational and amenable to rigorous analysis.

The Prisoner’s Dilemma — William Poundstone

I recently discovered that my friend was not the true inventor of the second game either. Reading The Information by James Gleick, I learned that Claude Shannon, the founder of information theory, played a variation of it with his wife, Betty, as a kind of illustrative experiment.

He pulled a book from the shelf (it was a Raymond Chandler detective novel, Pickup on Noon Street), put his finger on a short passage at random, and asked Betty to start guessing the letter, then the next letter, then the next. The more text she saw, of course, the better her chances of guessing right. After “A SMALL OBLONG READING LAMP ON THE” she got the next letter wrong. But once she knew it was D, she had no trouble guessing the next three letters. Shannon observed, “The errors, as would be expected, occur more frequently at the beginning of words and syllables where the line of thought had more possibility of branching out.”

The Information — James Gleick, page 230
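It is not hard to put a crude mechanical Betty together. The sketch below stands in a simple order-3 character model for her: trained on any long stretch of English, it guesses each next letter from the three letters before it, and its errors cluster just where Shannon said they would, at the beginnings of words and syllables. This is a minimal illustration, not Shannon’s own procedure, and the corpus filename is hypothetical.

```python
from collections import Counter, defaultdict

def train(corpus, k=3):
    """Count which character follows each k-character context in the corpus."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - k):
        model[corpus[i:i + k]][corpus[i + k]] += 1
    return model

def replay(model, passage, k=3):
    """Play Shannon's game: guess each character from the k characters
    before it, and return the positions where the guess was wrong."""
    errors = []
    for i in range(k, len(passage)):
        followers = model.get(passage[i - k:i])
        guess = followers.most_common(1)[0][0] if followers else " "
        if guess != passage[i]:
            errors.append(i)
    return errors

# Hypothetical corpus file; any long stretch of English text will do.
corpus = open("chandler.txt").read().upper()
passage = "A SMALL OBLONG READING LAMP ON THE DESK"
print(replay(train(corpus), passage))  # errors tend to fall at the starts of words
```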

Shannon’s counter-intuitive insight was to consider “information” through a notion he called entropy, which quantitatively captures the amount of novel, surprising content in a message. The meaningful sentences of a novel, or indeed a math paper, contain all kinds of redundancy, while a random sequence of letters is surprising from one letter to the next, and therefore contains more of this stuff he referred to as “information”.
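The single-letter version of the calculation fits in a few lines of Python. This is a minimal sketch: it estimates entropy as H = −Σ p·log₂(p) over letter frequencies alone, which captures only a slice of the redundancy Shannon measured (his guessing experiments put English closer to one bit per letter once longer contexts are taken into account), but it shows the direction of the effect.

```python
import math
import random
import string
from collections import Counter

def bits_per_letter(text):
    """Single-letter entropy estimate: H = -sum(p * log2(p)),
    with p taken from the letter frequencies of the text itself."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

alphabet = string.ascii_lowercase + " "
english = ("the errors as would be expected occur more frequently at the "
           "beginning of words and syllables where the line of thought had "
           "more possibility of branching out")
noise = "".join(random.choice(alphabet) for _ in range(100_000))

print(bits_per_letter(english))  # in the neighborhood of 4 bits: English is skewed
print(bits_per_letter(noise))    # approaches the log2(27) ≈ 4.75-bit maximum
```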

Von Neumann’s ideas about games would go on to shape the technocratic world view that was ascendant in the 20th century. Beyond mathematics, the kind of games he defined could be found in economics, social policy, geopolitics, and, most infamously, the logic of nuclear exchange.

Shannon’s ideas would have their greatest successes in science, and not only in communication, where error-correcting codes and encryption are the direct and intended applications of such thinking, but also in biology, when DNA was discovered and life itself appeared to be reducible to a finite sequence of four letters, and in physics, first via thermodynamics and later in quantum mechanics, as information became a fundamental notion.

There is a variation on Shannon’s game that is a well-established tradition around the Christmas dinner table: reading Christmas cracker jokes. (Popular within the Commonwealth, but maybe less well known in the US.) Having pulled the crackers and set the crepe party hats upon our heads, each of us will in turn read the setup of our joke, leaving the rest of the table to guess the punchline. The meta-joke being that while punchlines are supposed to be surprising, and thus amusing, Christmas cracker jokes are typically so bad that their puns are quite predictable. Thus, somehow, in their perverse predictability, the jokes are funny all over again. But does that make them low entropy? Only if you allow for a mind addled enough that the punchline becomes predictable.

This is an important point. The ultimate arbiters of the question of assumed knowledge that Gleick offers are hypothetical aliens receiving our radio signals from across the galaxy, or the very real computers that we program here on earth. Neither shares any of our cultural baggage, and thus they could be considered the most accurate yardsticks for “information”. When Gleick’s book was written, over a decade ago now, we had very different ideas about what computers and their algorithms should look like or be capable of doing. That has all changed in the intervening decade with the arrival of powerful artificial intelligence that gives the kind of output we once could only have hoped for. The notions that Gleick covers were defined precisely and mathematically, but our intuitions for these concepts, even for the lay person, are dramatically shifting. Not that this would be the first time our expectations and intuitions have shifted. We should recognize ourselves in Gleick’s description of the amusing misunderstandings that the new-fangled telegraph technology created upon its arrival.

In this time of conceptual change, mental readjustments were needed to understand the telegraph itself. Confusion inspired anecdotes, which often turned on awkward new meanings of familiar terms: innocent words like send, and heavily laden ones, like message. There was a woman who brought a dish of sauerkraut into the telegraph office in Karlsruhe to be “sent” to her son in Rastatt. She had heard of soldiers being “sent” to the front by telegraph. There was the man who brought a “message” into the telegraph office in Bangor, Maine. The operator manipulated the telegraph key and then placed the paper on the hook. The customer complained that the message had not been sent, because he could still see it hanging on the hook.

More mysterious still is the way information persists once it has arrived. Black holes posed a thorny problem for physicists; my own waggish friend poses his own set of questions. Assuming he had never taken a course in information theory or read about Shannon (though he may well have), the possibility remains that when he concocted his games he was subconsciously tapping into some kind of collective or ambient understanding. It is one thing for the theory to be taught and for students to study the equations. It is quite another when ideas pervade our collective thinking in ways that cannot be easily accounted for. Information theory works when we can point to the individual bits and bytes. Things become much trickier when not only can we not find the bits and bytes, but the information is thoroughly not discrete, not even analogue, just out there in some way we don’t yet know how to think about.