Category: Mathematics

The vibe moves

In 1633 Galileo was put on trial by the Roman Catholic Inquisition. The ailing mathematician suffered through the indignity and ordeal of the trial only to be declared guilty of advocating heliocentrism — the theory that the sun, not the earth, lay at the center of the universe. Galileo’s book, “Dialogue Concerning the Two Chief World Systems”, was the principal piece of evidence presented against him. But as Dan Hofstadter belabors in his book The Earth Moves, there was no material debate on the merits of Galileo’s arguments. The only relevant question was whether Galileo had usurped the authority of the Catholic Church over the interpretation of scripture.

So the Counter-Reformation was fighting a losing battle against a freer, and often less literal reading of the Scriptures. Yet if there was one thing that had concerned the Council of Trent, it was the possibility that laymen would decide for themselves what passages in the Bible could be interpreted other than literally. In fact, the issue of the earth traveling about the sun had little if any bearing on the Catholic faith. But the notion that persons without theological training could decide for themselves to read this or that biblical passage in a non-literal sense constituted a mortal danger for Catholicism in the early seventeenth century.

Hofstadter’s book focuses on the trial, while also giving background on Galileo’s observations of the night sky through his recently invented telescope. We are not treated to a full biography of Galileo, nor to the kind of exposition — which I would certainly have benefited from — on the nature of the Reformation, the Counter-Reformation, or the Roman Catholic Church of the time. Hofstadter’s own interests clearly lie in discursive discussions of Galileo’s engagement with the art and culture of the period.

There is much that is interesting, but also a great deal that is frustrating. After outlining the events of the trial, Hofstadter hedges against saying anything too definitive about the affair, conceding that both the trial itself and its final verdict may well have been the consequence of unknown Papal intrigues and the obscure politics of the time. The book doesn’t seem to know what to do with certain pieces of context. Take the following insight into what we know of an individual’s religious conviction:

We do not know what Galileo or anybody really believed at this period, since religious belief was prescribed by an autocracy and heresy was an actionable offense. If one had misgivings, one kept them to oneself, so it would be naive to take religious ruminations penned in the papal realm or its client territories at face value. The Inquisition’s own records confirm that many people harbored reservations and heretical beliefs: before the Counter-Reformation, they had been much more candid about them.

Galileo had built a telescope that provided a tiny, limited window giving him a better view of the solar system. This insight shaped a scientific conviction that led him to stray onto territory the Church had claimed authority over. But it didn’t have to be science. If anything, science was at the fringes of what the Church controlled. Moral, social, and religious matters were the principal victims. Science just happened to be the issue over which the Church managed to look definitively foolish. Frankly, its other strictures look similarly bad today, at least to my eyes. The extract above seems to hit on something at least as important as the actual science that Galileo practiced. It almost seems to serve the Church to frame the affair as “science vs religion”, rather than as “religion vs freedom of thought”.

Hofstadter clearly loves Galileo the Renaissance man, immersed in the art and culture of his day. I sensed a real nostalgia for the pre-Two Cultures world. There is a valor and a virtue popularly recognized in those early scientists — the “genius” we ascribe to those who were the first to figure certain things out. I am perhaps sensitive to a certain kind of slight made against the “institutionalized” scientists of today, who are fantastically empowered by the inherited work of earlier scientists. So great is the inheritance that it inevitably dwarfs any possible contribution they can make. It is a sentiment often derived from the valorization of those heroes of the scientific revolution. Sure, you can hear them sneer, you’re clever, but you aren’t a genius. I don’t sense any of that from Hofstadter. I could imagine him saying something more like: Sure, Galileo’s reasoning reads as scientifically illiterate to us today, but unlike academics today, he could actually write.

In case you don’t believe me about how Galileo’s reasoning reads today:

Aristotle had had no conception of impetus, and thus no conception of motion corresponding to what we may see and measure. He thought that the medium through which objects travel sustains their motion. By contrast, Galileo wrote “I seem to have observed that physical bodies have physical inclination to some motion,” which he then described — lacking the mathematics for an exact characterization — by a series of “psychological” metaphors, themselves of partly Aristotelian origin: inclination, repugnance, indifference, and violence. […] Galileo’s conception of the Sun’s motion is necessarily hesitant and ambiguous, and he was wary of flatly stating general principles. But one can perceive here the rough outline of what would become Newton’s first law of motion.

What Aristotle, and thus Galileo, lacked was the basic material now taught to high schoolers and undergraduates in science and mathematics.

It is certainly true that these were exciting and dangerous times. Scientific progress has a particular quality of looking more exciting the further back you stand from it. But the actual doing of science is in the close-up, in the detail. And when you do look closer at the danger, at least in the case of Galileo, it looks far more grim than it does exciting.

The cookies are the monster

As is the case with our parents, there invariably comes a moment when a teacher reveals themselves to be utterly, fallibly human. Rather than being a reliable source of knowledge, with one stray remark they reveal themselves to be as prone to misconceptions and ignorance as the rest of us. We learn that whatever instruction they offer should be treated provisionally.

One such moment from my own youth: sixth form, in morning assembly — a venue for our teachers to share wonderfully secular homilies. The teacher taking the assembly that morning explained that when he was himself in school he witnessed the first computer arrive in the classroom. At that time it wasn’t clear what function this intimidating new appliance should serve. “For some reason,” he told us, “they decided to send it to the mathematics department.”

Sitting there I understood quite intuitively that the maths department would have been the obvious and appropriate place to send the computer. After all, what is a computer except a machine for performing a long sequence of mathematical operations? The teacher seemed to believe computers were elaborate typewriters with the additional capabilities of playing a game of solitaire or selling you something.

I will add that this was the same teacher who, I had heard from a reliable and highly placed source (another teacher), thought that perhaps the maths department should start offering an “applied math” course, stripped of all the impractical and superfluous “pure math”. You know, only the math a good worker would actually need. As if maths teachers were some kind of freaks who insisted on inflicting abstract suffering on students before grudgingly teaching them the useful stuff: statistics.

In retrospect, sat in that school assembly, we were living through a significant moment. Broadband had arrived, along with YouTube. Facebook was only the latest in a string of social media platforms. I got a Gmail account, with its bottomless inbox. Teachers were beginning to be drilled on the importance of making use of online resources. Certain educators dictated long and complicated URLs to us that we had to copy down carefully so that we could make use of them later. I’m not talking about the home page of a site — we were sent to pages deep within the site-map; URLs trailing all kinds of database tokens and PHP residue that we would someday learn were susceptible to the unpleasant-sounding “link rot”. This is the future, those educators presumably thought to themselves as they unhappily transcribed those web addresses. No doubt they were far from convinced that any of this would ever be convenient.

Only a decade later, we would enjoy watching boomers fall into Candy Crush addiction. And computers became even less recognizable as machines of mathematics.


As part of a professional realignment, I have been learning the ropes of cybersecurity. The past month was dedicated to mastering low-level memory exploits. Or at least the low-level memory exploits of twenty years ago. Real zeros-and-ones stuff. Well, hexadecimal stuff really. Staring at (virtual) memory locations. Format string exploits. Messing around with debuggers. You might think that would have brought me to the mathematical heart of our digital engines. But no. I have instead had an almost gnostic revelation about the true nature of the Matrix.

Certainly, there is a lot of deep mathematics playing a fundamental role in the workings of all the code. To take a particularly central example: the existence of one-way functions is the underpinning assumption of almost all cybersecurity. The cryptographic protection we enjoy (whether you realize it or not) is premised on the understanding that an adversary does not have the ability to reverse certain mathematical operations with reasonable efficiency. This assumption may well hold up. Quite likely P does not equal NP, and it is entirely possible that quantum computers are a physical impossibility. I may not live long enough to see such profound questions resolved.
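To make the asymmetry concrete, here is a minimal sketch in Python (the three-letter secret is my own toy example, chosen so the brute force actually finishes): computing a hash is instant, while reversing it offers nothing better than guessing every possibility.

```python
import hashlib
import itertools
import string

# Computing a hash (the "forward" direction) is trivially fast.
secret = "cat"
digest = hashlib.sha256(secret.encode()).hexdigest()

# Inverting it (the "backward" direction) has no known shortcut:
# the best generic attack is to try preimages one by one.
def brute_force(target_hex, length):
    """Search all lowercase strings of the given length for a preimage."""
    for candidate in itertools.product(string.ascii_lowercase, repeat=length):
        word = "".join(candidate)
        if hashlib.sha256(word.encode()).hexdigest() == target_hex:
            return word
    return None

# Feasible here only because the search space is tiny (26^3 = 17,576 guesses);
# for a realistic secret the same loop would outlast the universe.
recovered = brute_force(digest, 3)
print(recovered)  # cat
```

The point of the exercise is the scaling: each extra character multiplies the attacker's work by the alphabet size, while the defender's work stays constant.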

But there is another side to our PC world. From where I am sitting, computers are simply huge bureaucracies. Mathematical bureaucracies to be sure, but bureaucracies nonetheless. They are replete with elaborate filing and organizing systems, protocols with carefully written standards, and all the input and output amounts to a certain kind of paperwork. From this perspective most security breaches are the product of improper filing, out-of-date standards, and old-fashioned mail fraud.

There is no undoing all the bureaucracy either. The further we get from the golden age of pre-broadband, the more the bureaucracy swells; not only to deal with the non-tech-savvy hoi polloi, but to integrate the hoi polloi into the very system itself. The age of nerds noodling around with open source code and experimenting with new kinds of hardware has given way to a world of corporations and start-ups where every other college grad needs to get their “workflow” in sync with their fellow internaut.

As with parents and teachers, the original architects of cyberspace have revealed themselves to be shortsighted and ideologically blinkered human beings with their own unique set of foibles. It took us too long to see it. That teacher who revealed his digital ignorance to our year group happened to teach economics. I never took a class with him, but if he knew anything of economic history then there was a chance he might have been able to teach us certain truths about technological advancement that we had overlooked.

The Punchline is Redundant

In graduate school, I was friends with a young man of a particularly restless disposition — a mathematician of the waggish inclination, given to a certain kind of tomfoolery. Often his antics would take the form of games of such banal simplicity that they felt like elaborate, conceptual pranks.

One game he set a number of us playing, during a longueur one evening together with friends, sticks in my mind. Having first had each of us commit solemnly to absolute honesty, we each chose a number, greater than or equal to zero, which we would then one after the other reveal (committed as we were to honesty), and whoever had chosen the lowest number that no one else had chosen was the winner. Several rounds were played, and while everyone wrestled with the question of whether to choose zero, or maybe one, trying to second-guess each other, I refused to join in, offended by the very nature of the game.

A second game stays with me as well: pulling a mathematics journal from the shelf in the math department common room, my friend began reading aloud random sentences from various articles, pausing before the final word and inviting another friend to guess it. He did pretty well, as I recall.

There was something powerful about these games. The first game, stripped of all the frivolity, ritual, adornment, and pretense that usually accompany games, revealed the essential nature of what a game is. That is to say, a “game” in the sense that the mathematician John von Neumann formulated it. To von Neumann’s way of thinking, Chess was not a game in the sense he cared about: perfectly rational players would know the perfect set of moves to play, and thus they would play those moves. He was more interested in Poker, where players have incomplete information (the cards in their hand and on the table), and are left to compute the probabilities and devise strategies.

Good poker players do not simply play the odds. They take into account the conclusions other players will draw from their actions, and sometimes try to deceive the other players. It was von Neumann’s genius to see that this devious way of playing was both rational and amenable to rigorous analysis.

The Prisoner’s Dilemma — William Poundstone

I recently discovered that my friend was not the true inventor of the second game either. Reading The Information by James Gleick, I learned that Claude Shannon, the founder of information theory, played a variation with his wife, as a kind of illustrative experiment.

He pulled a book from the shelf (it was a Raymond Chandler detective novel, Pickup on Noon Street), put his finger on a short passage at random, and asked Betty to start guessing the letter, then the next letter, then the next. The more text she saw, of course, the better her chances of guessing right. After “A SMALL OBLONG READING LAMP ON THE” she got the next letter wrong. But once she knew it was D, she had no trouble guessing the next three letters. Shannon observed, “The errors, as would be expected, occur more frequently at the beginning of words and syllables where the line of thought had more possibility of branching out.”

The Information — James Gleick, page 230

Shannon’s counter-intuitive insight was to consider “information” through a notion he called entropy, which quantitatively captures the amount of new and surprising content in a message. Thus the meaningful sentences of a novel, or indeed a math paper, contain all kinds of redundancy, while a random sequence of letters is surprising from one letter to the next, and therefore contains more of this stuff he referred to as “information”.
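The idea is easy enough to demonstrate. The sketch below (my own illustration, not Shannon's) computes the empirical entropy of the letter frequencies in a scrap of English and in a uniformly random string of the same length; the redundant English text scores lower.

```python
import math
import random
from collections import Counter

def entropy_per_char(text):
    """Empirical Shannon entropy in bits per character: H = -sum p*log2(p)."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

english = "the quick brown fox jumps over the lazy dog " * 50
random.seed(0)
uniform = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                  for _ in range(len(english)))

# English reuses letters in lopsided proportions, so its entropy sits well
# below the log2(27) ~ 4.75-bit ceiling; uniform noise approaches it.
print(entropy_per_char(english) < entropy_per_char(uniform))  # True
```

This only counts single-letter frequencies; Shannon's guessing game with Betty probes the much larger redundancy hiding in spelling, grammar, and sense, which is exactly why the predictable letters were guessable at all.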

Von Neumann’s ideas about games would go on to shape the technocratic world view that was ascendant in the 20th century. Beyond mathematics the kind of games he defined could be found out in the fields of economics, social policy, geopolitics, and most infamously: the exchange of nuclear weapons.

Shannon’s ideas would have their greatest successes in science, and not only in the field of communication, where error-correcting codes and encryption are the direct and intended applications of such thinking; but also in biology, when DNA was discovered and life itself appeared to be reducible to a finite sequence of four letters, and in physics, via thermodynamics and later quantum mechanics, as information became a fundamental notion.

There is a variation on Shannon’s game that is a well-established tradition around the Christmas dinner table: reading Christmas cracker jokes. (Popular within the Commonwealth, but maybe less well known in the US.) Having pulled the crackers and set the crepe party hats upon our heads, each of us will in turn read the set-up of our joke, leaving the rest of the table to guess the punchline. The meta-joke being that while punchlines are supposed to be surprising, and thus amusing, Christmas cracker jokes are typically so bad that their puns are quite predictable. Thus, somehow, in their perverse predictability, the jokes are funny all over again. But does that make them low entropy? Only if you allow for the mind to be addled enough that the punchline becomes predictable.

This is an important point. The ultimate arbiters of the question of assumed knowledge that Gleick offers are hypothetical aliens receiving our radio signals from across the galaxy, or the very real computers that we program here on earth. They do not share any of our cultural baggage, and thus could be considered the most accurate yardsticks for “information”. When Gleick’s book was written, over a decade ago now, we had very different ideas about what computers and their algorithms should look like or be capable of doing. That has all changed in the intervening decade with the arrival of powerful artificial intelligence that gives the kind of output we once could only have hoped for. The notions Gleick covers were defined precisely and mathematically, but our intuition for these concepts, even as lay people, is dramatically shifting. Not that it would be the first time our expectations and intuition have shifted. We should recognize ourselves in Gleick’s description of the amusing misunderstandings that the new-fangled telegraph technology created upon its arrival.

In this time of conceptual change, mental readjustments were needed to understand the telegraph itself. Confusion inspired anecdotes, which often turned on awkward new meanings of familiar terms: innocent words like send, and heavily laden ones, like message. There was a woman who brought a dish of sauerkraut into the telegraph office in Karlsruhe to be “sent” to her son in Rastatt. She had heard of soldiers being “sent” to the front by telegraph. There was the man who brought a “message” into the telegraph office in Bangor, Maine. The operator manipulated the telegraph key and then placed the paper on the hook. The customer complained that the message had not been sent, because he could still see it hanging on the hook.

More mysterious still is the way information persists once it has arrived. Black holes provided a thorny problem for physicists, but my own waggish friend poses his own set of questions. Assuming he had not taken a course in information theory, or read of Shannon (which he may well have), that leaves the possibility that when he concocted his games he was subconsciously tapping into some kind of collective or ambient understanding. It is one thing for the theory to be taught and for students to study the equations. It is quite another when ideas pervade our collective thinking in ways that cannot be easily accounted for. Information theory works when we can point to the individual bits and bytes. Things become much more tricky when not only can we not find the bits and bytes, but the information is thoroughly not discrete, not even analogue, just out there in some way we don’t yet know how to think about.

Retrograde Motion

Before Newton there was Copernicus, and before Copernicus there was Ptolemy. Living in the second century AD, Ptolemy produced what would become the definitive work in astronomy for the next millennium. It was a geocentric system: the Earth, quite sensibly, set at the center of the solar system. While geocentricism was ultimately to suffer the ignominy of being synonymous with backward thinking, Ptolemy certainly didn’t lack in mathematical sophistication.

Keeping the Earth at the center of the solar system required a great deal of creative invention. It was taken as axiomatic that the planets should travel at constant speeds and adhere to the perfect forms of geometry (that is to say, circles and spheres). But the planets that appeared in the night sky did not conform to these expectations. Unlike the sun and moon, which flattered us earthlings with their regular appearance and disappearance, the planets would sometimes slow down and reverse direction — what astronomers called “retrograde motion”. The solution that Ptolemy and his predecessors developed was a whole Spirograph set of celestial structures called deferents and epicycles. This essentially involved imagining that the other planets were not set upon a wheel revolving about the Earth, but set on a wheel upon a wheel in motion about the Earth. And, if necessary, some greater sequence of nested wheels.

Copernicus, the Catholic canon and Polish polymath of the fifteenth and sixteenth centuries, had, like every other astronomer of his day, read Ptolemy. Yet after carefully studying the night sky, and much thought, he developed a heliocentric map of the solar system. That is to say, with the Sun at the center. While he managed to free himself from geocentric difficulties, and dramatically simplify the situation in many respects, he still adhered to a belief in constant speeds and circular orbits. It would take Kepler, and ultimately Newton, to settle the matter with elliptical orbits determined by the force of gravity.


The heliocentric theory was controversial for two reasons. The first, and quite reactionary, objection was based on readings of a handful of Bible verses. For example, when Joshua led the Israelites in battle against an alliance of five Amorite kings, he ordered the sun to halt its motion across the sky, thus prolonging the day, and with it the slaughter of the opposing army. The point is that Joshua ordered the sun to stop, and not the earth. This might seem like pedantry, but that was precisely the point. The Catholic Church hoped to hold a monopoly on biblical interpretation, and someone lower down the ecclesiastical hierarchy conducting their own paradigm shift, equipped with nothing more than astronomical data and mathematics, could set a dangerous precedent. At a time when many such precedents were already being set.

The second, and quite serious, objection was that it created a whole new set of scientific questions. Why don’t we feel like we are moving through space? Not only in our orbit about the sun, but as the Earth rotates daily about its own axis? The numbers required to calculate the implied velocity were known. And on top of that, if we were moving at such great speed, then why did we not observe a parallax effect between the stars? As the apparent distance between two buildings appears to change as we move past them, why couldn’t we observe a similar shift in the stars as we moved? Copernicus’ answer was that the stars were much farther away from earth than had ever been imagined before. It was a correct deduction that didn’t do much to convince anyone.
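The arithmetic behind that implied velocity is simple enough to reproduce. A back-of-the-envelope sketch, using modern figures (the numbers, not the calculation, are the anachronism here):

```python
import math

# Modern values, for illustration only.
orbital_radius_km = 149_600_000        # mean Earth-Sun distance
year_seconds = 365.25 * 24 * 3600      # one year
earth_radius_km = 6_371
day_seconds = 24 * 3600

# Speed = circumference / period, for the orbit and for the daily rotation.
orbital_speed = 2 * math.pi * orbital_radius_km / year_seconds   # km/s
rotation_speed = 2 * math.pi * earth_radius_km / day_seconds     # km/s at the equator

print(round(orbital_speed, 1))   # ~29.8 km/s around the Sun
print(round(rotation_speed, 2))  # ~0.46 km/s at the equator
```

Tens of kilometers per second, with no sensation of motion and no visible parallax: one can sympathize with the skeptics.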


Both Copernicus and Newton were reluctant to publish their ideas. In Newton’s case, he was satisfied to have developed his Calculus and did not care to suffer the scrutiny that others would subject his theory of gravity to. His experience of justifying his thinking to other scientists had soured his relationship with the wider scientific community of his day. It was only when it became clear that Leibniz had independently developed the tools of Calculus that he finally set about writing up, formalizing, and getting his hands on data in order to present his Principia.

Copernicus had gathered his data and written his book, yet for many years did not publish it. De revolutionibus orbium coelestium would only arrive in print as he lay on his deathbed. While Copernicus had friends who supported his astronomical pursuits, it seems to have been the arrival of a young Lutheran mathematician, Georg Joachim Rheticus, who was the key instigator in bringing the manuscript to print.

No one had invited him or even suspected his arrival. Had he sent advance notice of his visit he doubtless would have been advised to stay far away from Varmia. Bishop Dantiscus’ most recent anti-heresy pronouncement, issued in March, reiterated the exclusion of all Lutherans from the province — and twenty-five-year-old Georg Joachim Rheticus was not only Lutheran but a professor at Luther’s own university in Wittenberg. He had lectured there about a new direction for the ancient art of astrology, which he hoped to establish as a respected science. Ruing mundane abuses of astrology, such as selecting a good time for business transactions, Rheticus believed the stars spoke only of the gravest matters: A horoscope signaled an individual’s place in the world and his ultimate fate, not the minutiae of his daily life. If properly understood, heavenly signs would predict the emergence of religious prophets and the rise or fall of secular empires.

A More Perfect Heaven — Dava Sobel

I suspect that we may undervalue the weight that belief in astrology carried in some (but not all) quarters. Many looked back to the Great Conjunction of 1524 as heralding the rise and spread of Lutheranism — an ideological shift with profound and widespread implications that might only be matched by Communism. We live in an age of scientific prediction, taking for granted the reliable weather forecast on our phones in the morning. We (at least most of us) accept the deep implications of the climate data for our future, while also paying heed to the sociology and political science that can help us understand our lack of collective action. If we accept astrology as a kind of forebear to our own understanding, you can perhaps appreciate why Rheticus might have been willing to take such risks to pursue a better understanding of the stars.

We can only imagine what Rheticus must have said to Copernicus that led him to finally prepare his manuscript for publication. And that is what Dava Sobel has done, writing a biography of Copernicus, A More Perfect Heaven, which contains within it a two act play dramatizing how she imagines the conversation might have gone. It presents a Rheticus shocked to discover that Copernicus literally believes that the Earth orbits about the Sun, a Copernicus perplexed that the young man takes astronomy seriously, but who is won over by the prospect of taking on such a capable young mathematician as his student.

Rheticus’ principal legacy is the précis of Copernicus’ theory that he wrote and had distributed as a means of preparing the way for the ultimate text. His contributions would ultimately be overshadowed by his later accusation, conviction, and banishment for raping the son of a merchant. While Sobel presents Rheticus in her play as pursuing/grooming a fourteen-year-old boy, it does not feel like she knows exactly where to take this dramatically. By way of contrast, John Banville in his novel Doctor Copernicus gleefully transitions to a Nabokovian narrative upon Rheticus’ arrival.


There is an interesting dramatic irony in the way Copernicus’ ideas were initially received. There was a ruse, by certain parties, to present Copernicus’ heliocentric theory as simply a means of computation. It could be tolerated if it was understood that one was not supposed to actually believe that the Sun was at the center of the solar system. This struck some as a reasonable compromise. The Catholic Church was drawing up what would become the Gregorian calendar, and Copernicus made important contributions to calculating a more accurate average for the length of a year.

Yet now the situation has been reversed. While Copernicus’ techniques were rendered obsolete with the arrival of calculus, his conceptual understanding carries on in the popular imagination. Meanwhile, as Terence Tao and Tanya Klowden have noted, Ptolemy’s deferents and epicycles live on in the mathematics of Fourier analysis — a means of approximating arbitrary periodic functions using trigonometry.
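If you want to see the epicycles turning: the sketch below (my own toy illustration) traces a made-up lopsided "orbit" in the complex plane out of two circular motions, then recovers it with a discrete Fourier transform. Each Fourier coefficient is precisely a wheel: a radius and phase rotating at a fixed whole-number frequency.

```python
import cmath
import math

N = 64  # number of sample points around one period

# A hypothetical orbit: a big wheel plus one counter-rotating epicycle.
def orbit(t):
    return cmath.exp(2j * math.pi * t) + 0.3 * cmath.exp(-4j * math.pi * t)

samples = [orbit(n / N) for n in range(N)]

# Discrete Fourier transform: coefficient c_k is the wheel that turns
# k times per period (radius = |c_k|, starting angle = phase of c_k).
def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(xs)) / n
            for k in range(n)]

coeffs = dft(samples)

# Summing the wheels back up (the inverse transform) retraces the orbit.
def rebuild(t, cs):
    n = len(cs)
    return sum(c * cmath.exp(2j * math.pi * k * t) for k, c in enumerate(cs))

err = max(abs(rebuild(n / N, coeffs) - samples[n]) for n in range(N))
print(err < 1e-9)  # True: wheels on wheels reproduce the path exactly
```

Ptolemy, in effect, was fitting Fourier series by hand; the scheme "works" for any periodic path, which is exactly why it could always be patched with one more wheel.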

Even within fields as definitive as mathematics and science, it is interesting how defunct and obsolete thinking can both be revealing and persist in strange second lives. Why someone believed something can become more important than the truth of the thing. Eratosthenes deduced an impressive approximation of the Earth’s circumference after hearing a story about a well that would reflect the light of the sun at noon. We possess a more accurate figure now, but the technique never grows old.
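Eratosthenes' calculation, using the figures traditionally attributed to him, fits in a few lines: the sun overhead at Syene, a 7.2 degree shadow at Alexandria, and a reckoned distance of 5,000 stadia between the two.

```python
# The shadow angle at Alexandria is the fraction of a full circle that
# separates the two cities; scale the distance up accordingly.
shadow_angle_deg = 7.2          # 1/50 of a full circle
distance_stadia = 5_000         # Syene to Alexandria, as then reckoned

circumference_stadia = distance_stadia * 360 / shadow_angle_deg
print(round(circumference_stadia))  # 250000 stadia
```

Depending on whose stadion you use, that lands remarkably close to the modern figure of about 40,000 km.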

A Kind of Visitation or Possession

The biographer of Isaac Newton is in an unenviable position. Usually the writer thrives on the access they can get to their subject — their writings, correspondence, contemporaneous accounts. But in Newton’s case, the biographer is cursed with too much material. Newton’s unpublished writings form an extensive body of work spanning an impressive and embarrassing array of interests; from science and mathematics to alchemy and heretical theology, Newton was a compulsive note taker. Sarah Dry’s The Newton Papers gives a careful account of how these papers managed to escape their fate languishing forgotten in one of England’s aristocratic estates, and found their way into the hands of scholars who could read and make sense of them. Or so they hoped. The writings were so extensive that they were impossible for any individual to meaningfully absorb. The hope that a definitive or comprehensive view of Newton might be revealed proved futile. Dry even concludes that the endeavor is fundamentally misguided.

The first two Newton books I wrote about here took two distinct strategies to avoid this trap. James Gleick’s Isaac Newton took the light touch, providing a readable biography that was blessed with being selective in what it presented to the reader. Thomas Levenson’s Newton and the Counterfeiter focused on a lesser-known chapter of Newton’s life — his role as Warden of the Royal Mint. Levenson’s book benefited from seeing Newton out in London, interacting with the world, and thus managed the feat of stepping far enough back from the man that we could begin to see him more fully in his time and place. What emerged was a far more interesting portrait than that of one man being the turning point of history.

Levenson’s most recent book, Money for Nothing, takes this a step further, to the point that we no longer have a “Newton book”. The real subject is the South Sea Bubble, the arrival of modern finance, and the connection to the “Scientific Revolution”. Newton’s significance, beyond having himself bought shares in the South Sea Company, is that he had developed the keenest understanding yet of the relationship between equations and the world that they could represent.

Ultimately, this mathematical insight is at the heart of modern physics, the science that Newton, more than any other single thinker, would create. In its simplest form, the idea is this: the full picture, the complete geometrical representation of all the available solutions to a system of equations, can be understood as all the possible outcomes for a given phenomenon described by that mathematics. Each specific calculation, fed with observations of the current state of the whatever you’re interested in, the flight of a cannonball, the motion of a planet, how a curveball swerves, how rapidly an outbreak of the plague might spread, makes a prediction for what will happen next. In his twenties, working on his own, with almost no systematic experience of the study of the real world, Newton did not yet grasp the full power of the ideas implied by the way he had begun to think about the math.
That would come in time. But what made his annus mirabilis so miraculous, was the speed and depth with which Newton forged the foundations of his ultimately revolutionary way of comprehending the world.

Money For Nothing, Thomas Levenson

Levenson explains that this was not the Ponzi scheme of capitalism that many claim it is. The value of the South Sea shares was a measure of trust that the Treasury could reliably pay out in future. Even in these early days of state finance, there was an understanding that a state that was constantly borrowing could still be a worthy and trusted borrower. The theory was that the size of the national debt as it stood was only important when considered against the future productivity of the nation. In principle, and indeed in practice, a nation that invested in itself would grow and develop economically, allowing it to make good on future repayments.

The second advantage that the British government possessed was the inexorable passage of time. The funds it borrowed at any moment became bets on the nation’s economic life year over year. The wager was that the ongoing work of every new enterprise, each voyage, everything that Britons did to get and spend in the future, would create enough wealth to support the debts being incurred. The chancellor of the Exchequer didn’t have to treat every expense as a pay-as-you-go imperative. Whole nations, as London’s monetary thinkers had discovered, need not perform the virtues embodied in the very good advice to pay off a credit card balance in full every month. Rather, the task was to balance the needs of the moment with an analytical picture that could be drawn of Britain as a whole, all its getting and spending and accumulation, integrated over years to come.

Money For Nothing

Making this case can be divisive. Indeed many, such as Daniel Defoe, seem to have been divided within themselves about this development: despising the traders and stockjobbers who ran the secondary markets on the one hand, while supporting the state borrowing that they enabled on the other. Many readers, if they were to correctly read the argument I believe the book is making, would probably object to it. The argument is that national debt and secondary markets for financial products are important, necessary, and work (except when they don’t). Making that argument can be as tricky as convincing someone of the merits of modern art or free verse. And in the case of government borrowing and stock markets, the most obvious problem is what ultimately befell the South Sea Company.

After almost a decade of providing reliable and unremarkable returns via direct Treasury payments, there was in 1720 an attempt to convert a huge amount of illiquid government debt into the liquid and more manageable form of South Sea shares. At this point, the win-win-win equation that held between company, shareholder, and government was badly abused. Just about every kind of financial crime was practiced (insider trading, artificially pumping up prices, and outright bribery), and over the course of the year the price of the shares increased ten-fold, from 100 GBP to 1000 GBP.

Among the reasons Levenson presents for the South Sea stock crashing at the moment it did was a collective realization that the stock could not offer a rate of return any better than the most ordinary of private loans.
In truth the company couldn’t even offer that. In a desperate attempt to prop up the share price, a completely unsustainable dividend of 50 GBP was offered to shareholders. While that would be a magnificent return on the “par” price of 100 GBP, on the recent sale price of 1000 GBP it was a very ordinary 5% return.
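The arithmetic here is simple enough to sketch. A minimal illustration (using only the figures quoted above):

```python
# Dividend yield: the annual dividend as a fraction of the price you paid.
# The same 50 GBP dividend looks spectacular against the 100 GBP par price,
# but entirely ordinary against the 1000 GBP market price of 1720.

def dividend_yield(dividend: float, price: float) -> float:
    """Return the annual dividend as a fraction of the share price."""
    return dividend / price

yield_at_par = dividend_yield(50, 100)      # 0.50, i.e. a 50% return
yield_at_market = dividend_yield(50, 1000)  # 0.05, i.e. a 5% return
```

Which is the whole realization in two lines: anyone buying at 1000 GBP was holding something that paid no better than a private loan.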

William Hogarth – The South Sea Scheme

I cannot help but draw analogies with the current excitement around cryptocurrencies. In place of Hogarth’s satirical paintings, Defoe’s commentary, and Pope’s poetry, which accompany Levenson’s account, we have Twitter memes about buying the dip and right-clicking NFT art. We can also imagine that once cryptocurrencies begin to look a lot more “boring”, there might be a major correction. From this perspective the volatility of cryptocurrencies is less a liability and more of a feature.

If we are to embrace the analogy, there is a disquieting reality that the South Sea Bubble offers. Although the share price crashed, political careers ended, and assets were seized from many of the incriminated, the financial tools and derivatives that made it all possible would go on to form the backbone of modern finance (with some occasional regulation, if you can believe it). Similarly, even if Bitcoin and Ethereum suffer some almighty crash, that doesn’t mean they won’t find a place in the long-term landscape of finance.

To be clear, I do not welcome a Bitcoin future. Plenty of people, in particular those who understand what a blockchain actually is, are writing in strong terms about how little it offers. But at the heart of why I don’t like cryptocurrency is my suspicion of the world it would produce. At the moment, the main contributions of cryptocurrency to society are enabling cyber criminals looking to profit from ransomware, and diverting huge amounts of computational hardware, time, and energy towards “mining” these tokens. It is a libertarian future where governments can’t meddle with money on our behalf.

Edward Matthew Ward – The South Sea Bubble, a Scene in ‘Change Alley in 1720 (exhibited 1847). http://www.tate.org.uk/art/work/N00432

Where is Newton left in all of this? As best we can tell, he wisely sold his initial investment in the South Sea Company, at a profit, midway through the bubble. He then unwisely reinvested later, as the price continued its precipitous rise, and lost out when the bubble crashed. So Newton, for all his unprecedented insight, was just as vulnerable to making a fool of himself as the rest.

Instead Levenson presents us with Archibald Hutcheson MP, who despite his lack of scientific training best embodied the scientific analysis of the market when he sat down and began computations to derive how much the shares would have to return to justify their price.

This was recognizably a scientific revolutionary’s way of thinking. In the Principia Newton had constructed mathematical models that could explore the behavior over time of the moons of Jupiter or could predict the motion of a comet with a track that remained mostly unknown. He published his results both as an exercise in scientific reasoning and with persuasive intent: he sought to persuade his readers that what he had discovered “cannot fail to be true.” In his earlier writings, Hutcheson attempted much the same double act. His work focused on the dynamics of budgets instead of celestial bodies, but it spoke in the same unassailable language of numbers in flux — and thus asserted a claim to like power: just as Newton had declared his system of the world, Hutcheson’s arguments could not fail to be on the money.

Money For Nothing

This paragraph is, however, immediately followed by a caveat.

There was, of course, a key difference between Hutcheson’s calculations and the utterly authoritative demonstrations in the Principia. When Newton bragged about his work’s unassailable accuracy, he could let nature be the judge, pointing to the agreement between his mathematical account of a comet’s flight and the track it actually traversed. Hutcheson could not command such certainty. Instead, he used the cultural power Newton and his friends had given to mathematical reasoning to strengthen his political argument. Whatever truth his algebra might contain was contingent on the uncertain behavior of the human actors involved in any financial choice.

Money For Nothing

I have often been perplexed to read of kings, rulers, and governments being compelled to certain courses of action by economic necessity. It is hard to buy into a motivation you have little intuition for, one that belongs to a game whose rules you don’t know. Currency crises and borrowing crises and monetary crises and even national productivity crises are often referenced with little explanation. This is all to admit a glaring hole in my own education, but I certainly can’t imagine that I’m the only one.

It is a testament to the success of Levenson’s book that I found it as enlightening as I did. Having read no previous account of the South Sea Bubble, I was effectively going in cold. Levenson takes the reader through all the mechanics of the swaps and trades, providing the important back of the envelope calculations that make sense of what happened. There is no unnecessary hand-holding, and I did reread certain passages, but it was all there.
On top of this, Levenson populates his account with an impressive dramatis personae, providing a vivid portrait of British society reacting to these events. The final chapters outline the future success of British state borrowing, and left me with a good sense of what that actually meant. I will be able to make far more sense of at least some of the history I read than I did before.


There are some conspicuous omissions in Levenson’s narrative. While the South Sea Company’s involvement in the slave trade is covered (practiced, but not profitably), there is no consideration of how the rise of credit-based finance might have driven the growth of the trade itself. There is far more discussion of how financing Britain’s wars made a secondary market for government-issued debt necessary, and it is argued that the success of the treasury policy that Robert Walpole, Britain’s first Prime Minister, developed in the aftermath of the bubble both incentivized avoiding war and enabled Britain to “punch above its weight” when it did go to war. I found the passages that did address this particularly interesting, and would have read more. But there was no reflection on the implications of a system that enabled Empire, and while Levenson mentions the industrial revolution in Britain as a triumph for capital, I was left wondering about the huge social cost to the working classes of Britain.

To be fair, this would be the subject of a different book (David Graeber’s Debt springs to mind). There is a very specific moral that Levenson wants to lead the reader to: that the crash of 2008 was fundamentally no different from the crash of 1720. Financial markets are ingenious human inventions, but they need careful supervision and regulation.

A fine message — and I agree. But given what was being invested in back in 18th-century England, you might imagine that some people would have been quite happy to see the system crash, the investors ruined, and the political system collapse. There are very different kinds of consequences than a crash that investors, or a nation, should consider. Dangers we should also be vigilant for and legislate against.

The wrangler’s insecurity.

[This is my third post on Newton. Previous posts: one and two.]

If you were to take a look around you during a math department seminar or colloquium, you would witness the audience’s attention begin to drift as the talk sank further into detail and became increasingly difficult to follow. Losing interest in a talk is more or less expected, and the professional mathematicians in the audience come prepared. Maybe they bring a paper to read, or possibly exam scripts to grade. Sometimes they will turn to a fresh page of their notebook and begin doing some actual mathematics of their own.

As a graduate student at McGill, I remember watching a postdoc fill up a page with long exact sequences and all kinds of diagrams, the notation veering into doodles as he got stuck at what must have been a familiar dead-end. It was a rare, voyeuristic glimpse into someone else’s solitary mathematical practice. I later asked this postdoc — whose notebook I presumed was full of such pages — if he ever went back and reread what he had written. No, he admitted, with a guilty smile.

Which was a relief. Not only because my own notebooks were full of repetitious dead-ends, but also because I too almost never went back to review anything I’d written.

Much has been made of Imposter Syndrome among academics — doubting whether we have truly earned whatever position we have reached, given how paltry our contributions can sometimes feel. There is a related sense of insecurity to be found in wondering if you are doing mathematics correctly. To be clear, I don’t mean whether a proof we have written up is sound, but whether our process of formulating and devising proofs is the “proper” way. As if there might be a correct way (or even a professional way) of doing mathematics.

These are not new concerns to have.

Isaac Newton was incredibly secretive in his work and did not have anything approaching students as we might describe them. But after his death, the calculus he developed would form the foundation of modern mathematical and scientific education at Cambridge.

Those who scribbled hastily on those exam papers were students, above all, of Newton’s mathematical physics. Though Newton had not cultivated a following during his own tenure at Cambridge, by the end of the eighteenth century the principles laid down in the Principia — and in particular the mathematical contents of that book — formed the basis for an intensely competitive system of testing at the university by which students were ranked in descending order based on their results on terminal examinations, known as the “Mathematical Tripos.” (The origin of the term Tripos is uncertain, but it may refer to the three-legged stool on which students originally sat to take the oral examinations.)

The Newton Papers – Sarah Dry, pg 85

The material, and especially the notation, would be modernized as European influences arrived, but Newton did not lose his centrality. The manner in which he actually arrived at his great insights became a matter of interest. There is a great distinction between how discoveries are made, and how they finally appear on the page. Everyone knew how they personally went about doing mathematics, and even how their tutors told them to do mathematics, but was that the same as how Newton went about making his original discoveries? For all they knew, it might have all been provided to him by divine revelation.

On the Quadrature of Curves. See the Cambridge website to view more scans of Newton’s papers.

In 1872 a means of settling the question presented itself. Newton’s papers — or at least a large portion of them, covering far more than mathematical physics — had resided for nearly 150 years in the library of one of England’s aristocratic houses: Hurstbourne Park. But now the Earl of Portsmouth was donating the scientific portion of the papers back to the University of Cambridge.

Newton had been famously coy about his own methods, suggesting that he had kept his true means of discovering the Principia private and had only cast them publicly in the language of geometry. The question was therefore whether he adhered to the rigorous, manly, and above all morally upright techniques of thinking that Cambridge undergraduates were coached to acquire. To answer this Stokes and Adams were forced to consider whether Newton himself should — or could — be held accountable to the techniques that were mastered in his name. The Newton papers had the potential to probe more deeply the shadowy divide between patient work and divine inspiration, offering the promise of settling not simply what Newton had done but how he had done it. […] the question had a special urgency at Cambridge where the moral value of study was paramount. In that respect the Newton papers mattered for every undergraduate preparing for the Tripos and for what the Tripos itself stood for. Would the man who served as a model for what should be learned also reveal himself through his private papers, as a model for how to learn?

The Newton Papers – Sarah Dry, pg 88

Sarah Dry, author of The Newton Papers, a chronicle of the journey Newton’s writings took after his death, presents an interesting comparison of the two mathematicians, John Couch Adams and George Gabriel Stokes, who were tasked with making sense of his old notebook papers.

John Couch Adams (1819-1882)

On the one hand was John Couch Adams, whose ability to compute mathematically in his head was the stuff of Cambridge legend. This savant-like ability came with a tenacious reluctance to write anything down, a reluctance that cost English astronomers the first opportunity to observe Neptune. Having deduced, from Uranus’s orbital irregularities, where a mystery planet should be found in the night sky, he failed to explain himself clearly to the astronomical bigwigs, who had little patience for the recent graduate. Roughly a year later, in 1846, the Frenchman Urbain Le Verrier managed to solve the problem and pointed his country’s own telescopes in the right direction. Adams was left with nothing but his incomplete written accounts and undated papers declaring his discovery, making it impossible to establish precedence. Not that he seemed much bothered by losing out on the glory. He was personally very satisfied simply to have managed the computation.

Sir George Gabriel Stokes, 1st Baronet (1819-1903)

George Gabriel Stokes (of Navier-Stokes and Stokes’ Theorem), on the other hand, wrote compulsively, both mathematically and in personal correspondence (often to ease his own insecurities). Later in life he became editor of the Philosophical Transactions of the Royal Society, then the foremost journal in science, and this involved dealing with a huge amount of correspondence. Unfortunately he was a hoarder of papers of every kind, filling the rooms at his disposal with tables on which to pile them up. This was all compounded by his inclination towards procrastination.

They might have made a formidable team, had their temperaments combined to negate each other’s weaknesses. Instead the project to deliver a verdict on the value of Newton’s papers and reveal his way of thinking was subject to great delay. Of the two, however, it seems that Adams was the one most readily able to probe the documents deeply. Sarah Dry quotes Glaisher (Adams’s obituarist) as saying:

[…it was a] difficult and laborious task, extending over years, but one which intensely interested him, and upon which he spared no pains. In several instances he succeeded in tracing the methods that Newton must have used in order to obtain the numerical results which occurred in the papers. The solution of the enigmas presented by these numbers written on stray papers, without any clue to the source from which they were derived, was the kind of work in which all Adams’s skill, patience, and industry found full scope, and his enthusiasm for Newton was so great that he had no thought of time when so employed. His mind bore naturally a great resemblance to Newton’s in many marked respects, and he was so penetrated with Newton’s style of thought that he was peculiarly fitted to be his interpreter. Only a few intimate friends were aware of the immense amount of time he devoted to these manuscripts or the pleasure he derived from them.

John Glaisher — Memoir of the life of John Couch Adams

What Adams was doing, in his own manner, was nerding out. As with the discovery of Neptune, it seems that his motivations were overwhelmingly personal, and less in service to the scientific community. Imagine a referee today reading the paper under review very carefully, but forgetting to take notes and neglecting to get back to the editor. Nevertheless, conclusions were eventually drawn out of their little committee and a report of their findings was presented.

Here was confirmation that Newton had indeed worked by a process of refinement that inevitably included false starts and error. In this sense, Newton revealed himself to be less an otherworldly genius and more a figure with whom the Cambridge wranglers could identify, a tireless worker in the mathematical trenches, where progress was made by increments rather than leaps. Adams knew the feeling well. In 1853 he had published an important paper pointing out errors made by Laplace in determining lunar motion and promising to provide the correct calculations soon; it had taken him six long years to get the final numbers. Here, in the papers, was evidence that Newton had worked just as hard to come up with his results.

The Newton Papers – Sarah Dry, pg 105

By the time I had finished my PhD I had produced a sizable pile of used dollar-store notebooks. Browsing through them I could recognize the contours of what I’d spent the past four years trying (and occasionally succeeding) to do. I might even have reconstructed, from the pictures and computations I had written out, what I had actually been thinking at the time. But aside from myself, there are few people in the world who could possibly make sense of their contents. It all went in the recycling. If somehow one of my notebooks did manage to survive, and made its way into the hands of future scholars, I would be alarmed to consider them giving the contents more than cursory attention.

The hundreds (?) of pdfs that I have produced, now sitting out there in the cloud, stand a far better chance of outliving me. And not just my published work and arXiv pre-prints (which number in the tens), but everything I ever committed to a latex document in my own personal space up there in the cyber heavens. Among all the discarded drafts one might find evidence of something interesting. Not only what I managed to do, but also what I failed to do. What I thought I had succeeded in doing, but had in fact betrayed my own good sense. When I have found mistakes, I am occasionally mindful enough to leave a short note in all-caps to make it clear where the point of failure lies. There is a great deal we can learn from knowing what other mathematicians have tried and failed to do.

Unlike physical notebooks, our cloud storage is password protected. Digital inheritance is already “a thing”, but it seems unclear to me how it will work out in practice. Kafka left his manuscripts in the possession of Max Brod with the instruction that they be destroyed in the event of his death. Brod told Kafka himself that he certainly wouldn’t, and indeed when Kafka died at the age of 40 as a consequence of tuberculosis, Brod set about getting Kafka’s work published. I haven’t taken a survey, but I would imagine that most young writers have made no attempt to ensure their passwords and unpublished estates are in suitable hands. In principle it is possible to submit a request to Google for access to the accounts of the deceased, but I can’t imagine there are any guarantees. I certainly have no idea what the terms and conditions that I have accepted have to say about such eventualities.

Newton died a man of wealth and importance. With neither wife nor children he had no direct descendants, but he did have a slew of half-nephews, half-nieces, and children of his half-sisters. The assets of obvious value were split between them. Those assets of less obvious value — the leftover pile of notebooks and “reams of loose and foul papers” — fell into the possession of Catherine Conduitt. She was one of Newton’s half-nieces, and wife to John Conduitt, who had actively assisted Newton in his duties as master of the Royal Mint. This was the consequence of some rather wild tying-up of loose ends:

Newton had died while holding the post of master of the Mint, which in those days required that its holder assume personal responsibility for the probity of each new coinage of money. That meant that at Newton’s death he had nominal debts amounting to the entire sum of Great Britain’s national coinage. John Conduitt agreed to take on this debt until the coinage had been certified, accepting liability for any imperfections in the coins. In exchange for assuming this risk, he asked for, and was granted, Newton’s manuscripts.

The Newton Papers – Sarah Dry, pg 15

The Conduitts took ownership of these papers with a view to producing a biography and beginning the work of securing Newton’s posthumous reputation. They became the first in a long line of people who had access to the papers but lacked the tools required to properly make sense of them. Their daughter, Kitty Conduitt, married John Wallop, who would become the Earl of Portsmouth, and the papers would enter the library of Hurstbourne Park, seat of the Portsmouth family. (That is to say, they fell into the possession of the aristocracy.) And it was there that they would remain, save occasional minor forays and the recovery of the substantial portion of scientific papers by Stokes and Adams. What finally shifted the remaining papers out into the open was the fall of the English aristocracy. In 1936, under the financial pressure of death duties and a recent divorce, Gerald Wallop, the ninth Earl of Portsmouth, had the papers put up for auction at Sotheby’s.

John Maynard Keynes, 1st Baron Keynes (1883-1946)

If there is a hero in Dry’s account of the Newton papers, it must be John Maynard Keynes, his heroic virtue being exceptional taste and judgement. Having begun collecting books as a child (possibly his first foray into speculation), he developed a rather prescient sense for what should be considered valuable. Unlike the majority of collectors with whom he shared the marketplace, Keynes was actually interested in reading the books themselves. He was less interested in their superficial qualities: illuminations, illustrations, binding, or an illustrious list of prior owners left him unmoved.

Keynes’s new style of collection was self-consciously intellectual, as opposed to aesthetic or literary. It asserted that a particular history of ideas or chain of thought linked certain men through the ages. And it projected the implicit assumption that its creator was an inheritor of both the material and the intellectual masterpieces of a previous age. Keynes was a thoroughgoing Bloomsburyite in this respect. The paintings on the wall, the rugs on the floor, the furnishings in the room, and the books on the shelves were never just things: they were the physical embodiment of ideas and values whose display was a source of both aesthetic pleasure and moral reinforcement. A book in the hand, like the good life in Bloomsbury or the Sussex countryside, linked the life of the mind with that of the physical world.

The Newton Papers – Sarah Dry, pg 147

You might already get the sense that Dry sees Keynes as simply bringing a new set of beliefs to the table, complete with their own limitations. Indeed, Keynes’s considerable contribution to our modern impression of Newton as half magician and half scientist was really a very hot take based on an initial reading. He was the one who announced that the papers revealed Newton had devoted great time and energy to the disreputable pursuits of alchemy and heretical theology. Yet the fact that so many of Newton’s papers have remained together and in the possession of the University of Cambridge can be attributed to his prescient sense of the papers’ importance.

Abraham Shalom Yahuda (on the right) (1877–1951)

Keynes was only one of two major buyers at Sotheby’s. Abraham Yahuda, a scholar of ancient languages, bought most of Newton’s theological writings. Yahuda had found himself alienated from his own field of scholarship due to recent developments in Higher Criticism as applied to biblical scholarship. The Documentary Hypothesis was a shocking new line of textual analysis which argued that the Torah originated in the combination and synthesis of earlier texts. As a consequence, these texts cease to resemble one coherent whole revealed to man, and begin to look more like artifacts of history and culture.

For Yahuda this was a vision of criticism taken to extremes, the text reduced to nothing but error, the possibility of meaning dissolving amid a multiplicity of authors, leaving only commentary, a Talmud with no Torah left in it. He thought in particular that too many sources were being attributed to the Pentateuch and that too many “experts” were exerting themselves “in the art of text alterations and source-hunting.” Thus “the original text was distorted and disfigured and in its place was offered a quite new text of pure invention.” In Newton, who himself sought to return a blemished Christianity to its purer origins, Yahuda found a kindred soul. Interpreting ancient texts didn’t require robbing them of fixed meaning. Both Newton and Yahuda sought instead to find a singular truth amid the variations.

The Newton Papers – Sarah Dry, pg 170

As a consequence of Yahuda’s desire to find an ally in Newton, those theological papers now reside in The National Library of Israel.

The final portion of Dry’s book concerns the subsequent attempts at synthesis of the material. The fact of the matter is that the task was simply impossible. There is too much material, covering too many subjects, for any grand unifying conclusions to be drawn. It was hard even to put together a definitive edition of the Principia that covered all the different editions as well as Newton’s own marginalia. When one was finally published, it proved controversial due to the inevitable editorial decisions to include or exclude certain material.

It is worth making one final point clear. I have never read Newton’s Principia. I don’t believe you could find a research mathematician alive who has — unless their research happens to be the history of mathematics. It is a book whose significance is measured in its influence. Many decisions in its composition — in particular the modelling of Newton’s Laws on the axioms in Euclid’s Elements — were very important. But you should not read it. When we discuss “great” books, there is usually the tacit understanding that we are missing out if we have not actually read them. I am quite certain that we have not missed out.

Hence, alchemy.

[This post is something of a sequel to my previous post.]

William Chaloner was born sometime around 1650, making him perhaps a decade older than Isaac Newton. He did not receive the schooling Newton did, and he certainly didn’t have a chance at Cambridge. He had the misfortune to be apprenticed into a trade with little future: making nails. A machine — the slitting mill — had arrived that readily produced rods of steel that could easily be cut up and hammered into nails, rendering a previously skilled trade an unskilled one. With protective guilds unwilling to admit him into a more lucrative trade, and arriving in London with no obvious means to support himself, he turned to criminal enterprises.

That is, Chaloner’s first attempt to rise above mere subsistence turned him into a purveyor of sex toys. London in the 1690s was as famous, or perhaps notorious, for its spirit of sexual innovation as Berlin would be in the 1920s. Prostitution was ubiquitous, as much a part of the life of the wealthy as it was that of the poor, who supplied most of the trade’s workers. The best brothels vied to outdo each other in their range of offerings — so much so that Dr. John Arbuthnot, a man about town in the early eighteenth century, apparently spoke for many when he told a madam at one of the better houses, “A little of your plain fucking for me if you please!”

Newton and the Counterfeiter by Thomas Levenson, pg 57

Chaloner soon moved on to various forms of con-artistry: quack medical advice, divination, and “thief-taking”. The latter involved informing on criminals or political subversives in order to collect a financial reward. The Metropolitan Police would not be formed until the 19th century, so such people were often the only avenue available for bringing criminals to justice. But thief-takers often exploited whatever opportunities they could get, goading people into committing crimes so that they could then be “caught”. Chaloner made it his business to play both sides.

However much Chaloner made from such rackets — and it would not have been inconsiderable — he wanted more. And the biggest racket in all of England was going on in plain sight, with evidence everywhere to be seen. The racket in question was counterfeiting the King’s coin.

It was not an especially sophisticated game. The low production quality of hammered coins meant that an enterprising fellow could clip the edge of a coin: the coin would still pass as a coin, but you were also left with a fingernail of precious gold or silver. And in England it was silver that was of interest. The state of the commodity markets in Europe meant that you could take your pile of silver clippings to continental Europe, buy their cheaper gold, then return to England, convert it all back to silver at a profit, and start clipping all over again. Classic arbitrage.

As a consequence, the silver coins of England were beginning to look somewhat diminished. Many of them weren’t even silver at all: many were outright counterfeits made of baser metals. This caused all kinds of problems, not least for the ability of King William III to pay his own troops fighting his war in France. Foreign bankers were unwilling to accept English currency at a good price, and silver was vanishing from England for mainland Europe.

The solution was the Great Recoinage of 1696. The old coins were to be replaced with new machine-struck coins that bore milled edges to prevent clipping and render counterfeiting extremely difficult. The British state at this point in history was rife with corruption, sinecures, and cronyism, so initially at least the whole project was chaotic and in real danger of disaster. It was during this financial turmoil that Chaloner seized the opportunity and set up sophisticated counterfeiting operations that managed to produce high quality fakes of the new coins.

Reading Newton and the Counterfeiter by Thomas Levenson, it is unclear if counterfeiting really was so great a scam. Certainly there was no effective law enforcement in England at this time. And while Chaloner was committing a capital offense, juries were unwilling to sentence men to death on the contradictory hearsay that actually arrived in court. That said, the kind of operation Chaloner ran required the cooperation of a great many people: not only the skilled craftsmen required to make the dies used to cast the fake coins, and the crew to actually run the production line, but also the actual buyers for the knock-off coins. All these people could potentially betray you. Even if you did not face the executioner, you might have to endure a brief stay in London’s hellish Newgate jail.

The jail used in 1696 was almost brand new, constructed on top of the ruins left by the Great Fire of 1666. The facade of the rebuilt prison was given a hint of the elegance with which its architect, Sir Christopher Wren, hoped to endow the whole city. But such graces did nothing to alter the essential character of a place that was, as Daniel Defoe’s Moll Flanders put it, not “the emblem of hell itself” but “a kind of entrance to it” too. Defoe wrote from personal experience: he had been imprisoned there briefly, for debt. Other celebrated inmates confirmed Defoe’s judgement. Casanova, imprisoned at Newgate under accusation of child rape, called it “this abode of misery and despair,” an infernal place “such as Dante might have conceived.”

pg 151

Chaloner would be pursued for his crimes, with unusual diligence, by the recently appointed Warden of the Mint, whom he had been provoking with the particular flagrancy of his crimes and deceits. In an act of considerable bravado Chaloner, who had already been caught for counterfeiting activities, conducted a political campaign to gain access to the Royal Mint, ostensibly to offer his “expertise”, but in reality to take whatever advantage he could. This political campaign involved impugning the newly arrived Warden. As the title of Levenson’s book has perhaps given away, this Warden was Isaac Newton, the celebrated natural philosopher.

Given the absence of anything remotely like a rigorous understanding of economics, soliciting Newton’s views on the currency crisis in England was a pretty reasonable thing to do. That said, everyone seemed to have a view on potential solutions. Newton’s own views would be borne out — not just his understanding that re-coinage was necessary, but also the inevitable failure of having a currency simultaneously based on both gold and silver, and his prescient views on the potential of fiat currency. But his duties as Warden of the Mint were simply to oversee the re-coining, and prosecute clippers and counterfeiters.

For the first of these tasks Newton was eminently suited, given his facility with quantitative reasoning. He also had the virtue of considering his position as more than a mere sinecure. Having tired of life in Cambridge, he had been seeking some eminent position in London in which to apply his talents. He made the entire process the object of his attention, from the amount of coal consumed each day, to the rate at which the crews could, and reasonably should, hammer out the coins. Under his oversight the re-coining was completed ahead of schedule. (And, by the standards of the day, far more safely than it would otherwise have been done.)

The second of his tasks — prosecuting counterfeiters — he abhorred. Nevertheless Newton proved himself to be utterly ruthless. The full details of the lengths he went to have been lost: the paperwork was deliberately destroyed as part of what was likely a cover-up.

Conduitt chose not to explain why Newton wanted to destroy the papers, but one inference is that Newton enjoyed the role of inquisitor too much. In this view, Newton proved willing, perhaps eager, to terrorize his captives in pursuit of the necessary confessions and betrayals with a viciousness beyond what even that strong-stomached time would tolerate. Formally, torture had not been used in England as an investigative tool for about half a century before Newton came to the Mint. Elizabeth I had faced repeated rebellion, often animated by Catholic ambitions on her Protestant throne — and she was England’s most prolific torturing monarch …

But while official torture fell out of favor, interrogators still knew how to put the boot in as needed. Isaac Newton had plenty of ways to extract the information he wanted from reluctant prisoners and he made use of them. Most of them were within the customary bounds of police detection: trading in fear, not pain. He offered brief reprieves for information: he coerced husbands with threats and promised rewards to wives and lovers. But there is one — and only one — reference to his use of more brutal methods in the records he did not burn. In March 1698, Newton received a letter from Newgate written by Thomas Carter, one of Chaloner’s closest associates. The letter was one of a flurry of messages Carter had sent to confirm that he was eager to testify against his former co-conspirator, but this one had a postscript. “I shall have Irons put on me tomorrow,” he wrote, “if yo[ur] Worship not order to the contrary.” In other words: Don’t hurt me! Please. I’ll talk. I’m ready.

Newton and the Counterfeiter by Thomas Levenson pg 165

Ultimately, Newton was victorious. He was patient and methodical, able to rally his superior resources to hound his man, subjecting Chaloner to an extended stay in Newgate while he gathered witnesses, and finally wrong-footing him at the trial. The trial itself was a brief and prejudiced affair, as was characteristic of English justice at the time.

But beyond the torture and lack of due process, there was a central hypocrisy to Newton’s activities. Newton and his famous chums were themselves guilty of crimes quite reminiscent of the ones he was prosecuting. The main difference, I think, was that Newton was practicing the upper-class equivalents, which were less about actually making a pile of money and more recreational in nature. Here is a passage on his relationship with John Locke:

In part, he relished the opportunity to tutor so well regarded a man. He gave Locke a private, annotated edition of the Principia and composed for him a simplified version of the proof that gravity makes the planets travel in elliptical orbits. But Newton’s intimacy with Locke seems to have extended well beyond such benevolent displays of mastery. From the beginning, Newton allowed himself to write openly about secret matters. Both men had subterranean interests — in alchemy, for one, the ancient study of processes of change in nature; and in questions of biblical interpretation and belief, which brought them to the edge of what the established English church would damn as heresy.

Newton and the Counterfeiter by Thomas Levenson pg 43

More seriously, and quite parallel to the crime of counterfeiting, Newton was a very active alchemist, literally attempting to turn base metals into gold. If successfully performed at scale, this would have created unprecedented economic chaos. But like I say, that wasn’t his ultimate goal. Really he was looking to alchemy to settle the theological implications of his scientific endeavors. He saw performing alchemy as a means of proving the intervention of “God” (or rather Newton’s own notion of God) in the natural world:

He knew that all the theorizing, all the theological argument, all the indirect evidence from the perfect design of the solar system could not match the value of one actual, material demonstration of the divine spirit transforming one metal into another in the here and now. If Newton could discover the method God used to produce gold from base mixtures, then he would know — and not just believe — that the King of Kings would indeed reign triumphant, forever and ever.

Newton and the Counterfeiter by Thomas Levenson pg 85

It should be understood that once you set aside all the secrecy and strange codes in which Newton cloaked his alchemical pursuits, the experiments he performed were serious and rigorous, even if he failed to make any progress or establish any new body of knowledge. In this enterprise at least he closely resembles many of his peers: making quite serious, but ultimately unsuccessful, attempts at a breakthrough.

William Chaloner was hanged by the neck until dead on 22 March 1699. It was not the worst fate he could have met under English law. Newton was not in attendance. He lived on until 1727, when he died in his eighties and was buried in Westminster Abbey.

Levenson has a recently released book that seems to pick up where this one left off, tracking the rise of modern finance and the influence the Scientific Revolution had on it. I’ve also stumbled on this podcast where the Cambridge historian of science Patricia Fara discusses her own upcoming book, which seems to have considerable overlap with Levenson’s. The first question she is asked is how Isaac Newton managed to die a wealthy man, which is actually a pretty good place to start. (Newton had invested in the East India Company, which means, among other things, slavery.)

There has been a murder in Gathertown

If you orient yourself temporally you may remember that back in August there was an online fracas involving mathematics. A teenage girl, doing her makeup before work, decided to take the opportunity to lay out for her TikTok followers her skepticism about the idea of math generally:

Who came up with this concept? “Pythagoras!” But how? How did he come up with this? He was living in the … well I don’t know when he was living, but it was not now, where you can have technology and stuff, you know?

Grace Cunningham, TikTok user.

As was keenly observed by the many keen observers out there, the initial response was a pile-on that combined general misogyny with hatred of Gen Z; it was the latest installment in the long-running complaint about kids these days. This reactionary abuse was soon countered by a more positive wave of responses acknowledging that her questions were not only legitimate, but exactly the kind of questions our curriculum does little to answer.

I don’t think many mathematicians are particularly satisfied with the way our subject is generally taught. At the university level I find it hard to love force-marching students through rote material, stripping centuries’ worth of mathematics of all its scientific and historical context along the way. So obviously I am happy to see any student kicking back at what we inflict on them. But for those who have made mathematical communication their vocation it was a solid gold opportunity to evangelize. Eugenia Cheng wrote a pdf answering Grace Cunningham’s formalized list of questions, and Francis Su wrote a twitter thread.

Grace Cunningham was calling the bluff on the pretenses of her education. In particular, the pretense that you should obviously be learning whatever we are telling you. “Why are we even doing this?” is a legitimate question in a mathematics course, and “why on earth did anyone prove these theorems in the first place” is an even better one. “How did people know that they were right” presents the awkward truth that people have often been quite certain and quite wrong. What makes these questions awkward is that the people teaching you mathematics will frequently know little to nothing about the history and context within which the theory was developed. Mathematicians are terrible, as a rule, at scholarship, and the history of ideas within mathematics is an essentially distinct field. Most of the context that I have for the mathematics I do is essentially gossip, urban myth, and pablum. Fortunately, while we might be terrible historians we remain excellent gossips, so at least we have plenty of stories to tell.

(I should also concede that it is impossible to generalize in any way about most of my peers. Many of them are tremendously knowledgeable about all kinds of things and wonderful educators. I am, at least to some extent, either projecting or talking about our very worst failings.)

I was dissatisfied by the responses I found to Cunningham’s questions, not least because I don’t think they really answer them. No actual historical context was given. The answers more resemble the kind of general motivation and propaganda we give students to encourage them to listen in class and do their homework. I think a good answer would address the fact that the people who developed much of classroom mathematics had some pretty wild ideas about what they were doing. Their motivations would be pretty alien to us, and are a far cry from homework, exams, or getting a well paid job.

Just to make this explicit: how many of us who have ever taught or taken a calculus course have even done any astronomy? Just from doing a little reading, an obvious observation seems to be that when people first sat down to learn calculus from Newton’s Principia, the big incentive for them was the promise of a serious set of answers about the Sun, the Earth, the Moon, the stars, and even comets. A modern mathematician explaining their motivation for calculus today is a little like a 21st century Western evangelical Christian explaining what the “Old Testament” is all about to an orthodox rabbi.

My modest reading has focused on the life of Isaac Newton. I read James Gleick’s biography of Newton (highly recommended) and I have a few more on the shelf. I already had some understanding that aside from developing calculus Newton was a heretic, an alchemist, and later in life Warden of the Royal Mint. I knew he lived through times of plague, apocalypse, dictatorship, and conspiracies, and that his work was a major part of the scientific revolution. Particularly pertinent to Cunningham’s question is the fact that for centuries after Newton’s death the full range of his intellectual activities was suppressed. It was only when John Maynard Keynes acquired a substantial portion of Newton’s surviving papers at auction that the truth came out. For a long time Newton’s preoccupations were considered intellectually inconvenient by all those trying to boost his posthumous reputation, and that of British science with it.

The idea of knowledge as cumulative — as a ladder, or a tower of stones, rising higher and higher — existed only as one possibility among many. For several hundred years, scholars of scholarship had considered that they might be like dwarfs seeing further on the shoulders of giants, but they tended to believe more in rediscovery than progress. Even now, when for the first time Western mathematics surpassed what had been known in Greece, many philosophers presumed they were merely uncovering ancient secrets, found in sunnier times and then lost or hidden.

Isaac Newton – James Gleick (pg 34-35)

Here is a not entirely fanciful reading of Newton’s life: starting his university career dissatisfied with the existing knowledge, and curious about the latest developments in astronomy, Newton develops his theory of calculus. But he is not yet really a scientist. He is still very much a wizard. A young man who has uncovered some profound secrets and is keen to discover more. He invests huge amounts of time and energy in alchemy and theology. The alchemy involved tracking down obscure texts that he hoped would contain the secret knowledge of transforming base metals into precious metals, and his notebooks from this period often amount to his copying out these texts. It also involved working with mercury, a poisonous metal known to drive the alchemists who used it to madness.

His theological interests were no less hazardous, since they would have been viewed as clearly heretical by both the Protestant and Catholic religious authorities of the time. By studying the earliest Greek manuscripts he discovered that the concept of the Trinity — that the Godhead is three and one; Father, Son, and Holy Spirit — emerged late in the early church, and certainly couldn’t be considered part of the original Christian tradition. Newton concluded Jesus was not at the same level as God and had never claimed to be. At a time in England when having Catholic sympathies could land you in trouble, this was a dangerous view to hold.

I would argue that Newton transformed from a wizard into a scientist the moment the German mathematician Leibniz independently derived his own theory of calculus. Newton had not uncovered some forgotten knowledge; he had derived a theory that someone else could also derive. He had entered a race to establish the priority of his own results — and this meant writing up.

For decades his tools of calculus had languished in notebooks and in his mind. Now he had to write them down, and he chose to present them in the style of Euclid’s Elements, with axioms, definitions, lemmas, theorems. And most intriguingly, in order to prove the correctness of his theory, he drew upon experimental data: astronomical observations from the newly established Greenwich observatory, and tidal charts. He was able to explain and predict natural phenomena that perplexed his contemporaries, such as the sudden appearance of comets and their unusual paths across the night sky. We can recognize this now as a prototype of the modern scientific method, but back then it was controversial, becoming part of Newton’s dispute with Leibniz.

Newton wrote many private drafts about Leibniz, often the same ruthless polemic again and again, varying only by a few words. The priority dispute spilled over into the philosophical disputes, the Europeans sharpening their accusations that his theories resorted to miracles and occult qualities. What reasoning, what causes, should be permitted? In defending his claim to first invention of the calculus, Newton stated his rules for belief, proposing a framework by which his science — any science — ought to be judged. Leibniz observed different rules. In arguing against the miraculous, the German argued theologically. By pure reason, for example, he argued from the perfection of God and the excellence of his workmanship to the impossibility of the vacuum and of atoms. He accused Newton — and this stung — of implying an imperfect God.

Newton had tied knowledge to experiments. Where experiments could not reach, he had left mysteries explicitly unsolved. This was only proper, yet the German threw it back in his face: ‘as if it were a Crime to content himself with Certainties and let Uncertainties alone.’

Isaac Newton – James Gleick (pg 176-177)

Data is now the recognized currency of modern science, and theology is, well, theology. The mathematical analysis that makes calculus rigorous didn’t come until much later. Newton had started using infinite series in his calculus, but it was understood that you had to be careful: manipulated carelessly, they could yield nonsensical results.

When Cunningham asks her TikTok followers how early mathematicians knew they were right, in Newton’s case at least, it seems that there are three answers. Newton first convinced himself with arguments we would not consider mathematically rigorous, along with his own empirical observations. Decades later he convinced his peers by publishing a full written account of his theory (in Latin) that provided supporting data. Then a century or so later the full theory of mathematical analysis was developed.

These questions have complicated answers for Newton, but they are really no less complicated for us today, even if they are quite different answers. We live in the age of the arxiv, computer assisted proofs, machine learning, and bodies of work that amount to many hundreds of pages. I’m not going to lie; I love the drama of it all. Some would like to present mathematical proof and progress as being an enterprise free from being sullied with the humanity of its practitioners. For my part I am of the belief that the reasons people commit themselves to mathematics are more complicated than just the aesthetic appreciation of equations.

On finite covers of surfaces with boundary…

I have a new preprint on the arxiv, joint with Emily Stark. We provide the first known examples of one-ended hyperbolic groups which are not abstractly coHopfian. That means that there is a one-ended hyperbolic group \(G\) which contains a finite index subgroup \(G’ \leq G \) that also embeds \( G’ \rightarrow G\) as an infinite index subgroup. I encourage you to look at the paper for details. The main example and proof can be drawn out on a single side of A4 — it’s a simple surface amalgam, and we exploit the tremendous flexibility you have when you take a finite cover of a surface with boundary.

We use the following Lemma extensively. It’s from Walter Neumann’s 2001 paper Immersed and virtually embedded \(\pi_1\)-injective surfaces in 3-manifolds, although, as he says, it is apparently “well known”.

The utility of this Lemma is that it reassures you that if you can imagine your desired cover — such as the one I’ve drawn below — and it satisfies a basic necessary Euler characteristic condition, then the cover does in fact exist.

The surface \(F’\) is a degree 5 cover. The collection of degrees in this example is 1+3+1=5. Note that if the 3 were replaced with a 2 and I tried to find a degree 4 cover with the specified boundary, it would be impossible.

For our main example you can compute your desired covers by hand, but it is worth knowing what kind of covers of a surface with boundary you can take. This lemma tells you exactly how much control you have. And mathematical research, like all forms of insecurity, is really all about control.

The proof given above is brief, to say the least, so I think it is worth expanding on the details.

First, we remind ourselves how you might construct such a cover by hand. Take a surface with genus one and a single boundary component. From a group theoretic point of view this is just the free group generated by two elements \( \mathbb{F}_2 = \langle x, y \rangle \) bundled together with the conjugacy class of the commutator \( [x,y] \). For me at least, finding finite index subgroups of the free group boils down to futzing around with graphs. Thus, we let \(X\) be a bouquet of two circles and let \(\langle x,y \rangle = \pi_1X\).

I actually drew it by hand.

On the right I drew the surfaces with boundary and on the left I drew the corresponding graphs with the loop corresponding to the boundary. Once you have drawn the graphs out it’s easy to verify that you have the boundary components you want. The trick is knowing you can find the desired finite covers. The key insight is that \(\alpha\)-sheeted covers of a graph \(X\) are in correspondence with representations into the symmetric group on \(\alpha\) elements: \( \pi_1 X \rightarrow \textrm{Sym}(\alpha) \).

We can see how this correspondence works in practice in the example I just drew:

Giving each vertex a number, we can see that the edges labelled by a given generator of our free group give a permutation. In this example the generator \(x\) gives the permutation \((1,2)(3)\) (see the red edges on the right), while the generator \(y\) gives the permutation \((1,2,3)\) (see the blue edges on the left). Thus we have a homomorphism determined by mapping the generators to the corresponding permutations.

At this point we need to be careful because there are some left-right issues hidden here. When I multiply group elements \(xy\) I am composing paths in the fundamental group. That means I concatenate the corresponding paths, starting with the \(x\) path and then following it with the \(y\) path. I’m reading the composition from left to right. In contrast, when I usually compose a pair of permutations \(\sigma_1 \sigma_2\) I compute the composition by reading them from right to left. But in order to be able to interpret my homomorphism correctly I’m going to have to compose my permutations in reverse order, from left to right.

Now we can compute the image of the element corresponding to the boundary curve: $$ xyx^{-1}y^{-1} \mapsto (1,2) \circ (1,2,3) \circ (1,2) \circ (3,2,1) = (1,2,3).$$ Tracing out how the boundary curve lifts is equivalent to computing this permutation element. This makes it clear that there is a single boundary component covering the previous with degree 3. (In this example it doesn’t matter in which direction we composed the permutations).
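This bookkeeping is easy to mechanize. Here is a minimal Python sketch (the helper names are my own, purely for illustration) that represents a permutation as a dict, composes left to right as in the convention above, and recovers the image of the commutator \( xyx^{-1}y^{-1} \):

```python
def compose_lr(*perms):
    """Compose permutations left to right: the first listed acts first.
    A permutation is a dict i -> sigma(i); unmentioned points are fixed."""
    points = set().union(*perms)
    def chase(i):
        for p in perms:
            i = p.get(i, i)
        return i
    return {i: chase(i) for i in points}

def inverse(p):
    return {v: k for k, v in p.items()}

# The degree-3 example: x -> (1,2)(3), y -> (1,2,3)
x = {1: 2, 2: 1, 3: 3}
y = {1: 2, 2: 3, 3: 1}

boundary_image = compose_lr(x, y, inverse(x), inverse(y))
assert boundary_image == {1: 2, 2: 3, 3: 1}  # the 3-cycle (1,2,3)
```

The single 3-cycle confirms that the boundary circle upstairs covers the base boundary with degree 3, exactly as traced out in the picture.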

Conversely, choosing a pair of permutations, say, $$(1,2)(3,4)\textrm{, and } (2,4,3) \in \textrm{Sym(4)}$$ to be the images of \(x\) and \(y\), we can construct a corresponding cover by taking 4 vertices and adding the appropriate labelled edges:

Now when we compute (remembering to compose our permutations from left to right) the image of our commutator element we get $$ xyx^{-1}y^{-1} \mapsto (1,2)(3,4) \circ (2,4,3) \circ (1,2)(3,4) \circ (3,4,2) = (1,4)(2,3).$$ Thus the surface has two boundary components, each covering the boundary in the base surface with degree two.
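This second computation can be checked the same way. The sketch below (again, the helpers are my own, not code from the paper) computes the commutator left to right and reads the boundary components off its cycle decomposition:

```python
def compose_lr(*perms):
    # Left-to-right composition of permutations given as dicts i -> sigma(i).
    points = set().union(*perms)
    def chase(i):
        for p in perms:
            i = p.get(i, i)
        return i
    return {i: chase(i) for i in points}

def inverse(p):
    return {v: k for k, v in p.items()}

def cycles(p):
    """Cycle decomposition: each cycle is one boundary circle upstairs,
    and its length is the degree with which it covers the base boundary."""
    seen, decomposition = set(), []
    for start in sorted(p):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = p[i]
        decomposition.append(tuple(cycle))
    return decomposition

# x -> (1,2)(3,4), y -> (2,4,3) in Sym(4)
x = {1: 2, 2: 1, 3: 4, 4: 3}
y = {1: 1, 2: 4, 4: 3, 3: 2}

commutator = compose_lr(x, y, inverse(x), inverse(y))
assert cycles(commutator) == [(1, 4), (2, 3)]  # two boundary circles of degree 2
```

Two 2-cycles means two boundary components upstairs, each covering the base boundary with degree two, matching the picture.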

The takeaway from this discussion is that finding suitable covers corresponds to finding a suitable homomorphism $$\phi : \pi_1 X \rightarrow \textrm{Sym($\alpha$)}$$ such that the images of the elements corresponding to boundary curves are permutations with the desired decomposition into cycles. Our weapon of choice is the fact that any even permutation can be written as the commutator of an \(\alpha\)-cycle and an involution:

Jacques, Alain; Lenormand, Claude; Lentin, André; Perrot, Jean-François. Un résultat extrémal en théorie des permutations. C. R. Acad. Sci. Paris Sér. A–B 266 (1968), A446–A448.
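For small degrees the quoted result is easy to confirm by brute force. The sketch below (my own illustration, assuming I have stated the result correctly, and counting the identity as an involution) checks for \(\alpha = 4\) and \(5\) that every even permutation in \(\textrm{Sym}(\alpha)\) arises as the commutator of an \(\alpha\)-cycle and an involution:

```python
from itertools import permutations

# Permutations on {0, ..., n-1} as tuples: p[i] is the image of i.
def compose(p, q):
    # Standard right-to-left composition; for this search the convention
    # is immaterial, since we quantify over all candidate pairs anyway.
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def commutator(a, b):
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def is_even(p):
    # Parity of p = parity of sum over cycles of (length - 1).
    seen, swaps = set(), 0
    for s in range(len(p)):
        if s in seen:
            continue
        i = s
        while i not in seen:
            seen.add(i)
            i = p[i]
            swaps += 1
        swaps -= 1
    return swaps % 2 == 0

def is_n_cycle(p):
    # A single cycle through all n points: following from 0 takes n steps.
    i, count = 0, 0
    while True:
        i, count = p[i], count + 1
        if i == 0:
            break
    return count == len(p)

def check(n):
    perms = list(permutations(range(n)))
    n_cycles = [p for p in perms if is_n_cycle(p)]
    involutions = [p for p in perms if compose(p, p) == tuple(range(n))]
    reachable = {commutator(a, b) for a in n_cycles for b in involutions}
    evens = {p for p in perms if is_even(p)}
    return evens <= reachable

assert check(4) and check(5)
```

Commutators are automatically even, so the containment the other way is free; the content of the theorem is that the even permutations are all reachable.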

First we consider the case where \(\Sigma\) has a single boundary component. So \(\Sigma\) is a surface with genus \(g\), Euler characteristic $$ \chi(\Sigma) = 2 - 2g - |\partial \Sigma |, $$ and a single boundary component, so \(|\partial \Sigma | = 1\). This corresponds to the free group generated by \(2g\) elements together with the group element corresponding to the boundary, which is the product of commutators: $$( \langle x_1, y_1, \ldots x_g, y_g \rangle, [x_1,y_1]\cdots [x_g, y_g] ).$$

Suppose we wish to construct a cover of degree \(\alpha\) with boundary components of degrees \(\alpha_1, \ldots, \alpha_k\). Then apply the above Theorem to the permutation $$\sigma = (1, \ldots, \alpha_1)(\alpha_1 +1, \ldots, \alpha_1 + \alpha_2) \cdots (\alpha_1 + \cdots + \alpha_{k-1} +1, \ldots, \alpha).$$ The theorem only applies if \(\sigma\) is an even permutation, which we compute to be equivalent to $$ \sum_i (\alpha_i - 1) = \alpha - k$$ being even. As \(\chi(\Sigma) = 1-2g\), this is equivalent to \(k\) having the same parity as \(\alpha\chi(\Sigma)\), the sufficient condition given in the statement of our theorem.
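To make the parity bookkeeping concrete, here is a small Python sketch (names mine, for illustration only) that builds a permutation with one cycle per prescribed boundary degree and tests whether it is even. It reproduces the earlier degree-5 example with boundary degrees 1+3+1, and shows why replacing the 3 with a 2 fails:

```python
def boundary_permutation(degrees):
    """Build sigma with one cycle per prescribed boundary degree,
    acting on {0, ..., alpha - 1} where alpha = sum(degrees)."""
    sigma, start = {}, 0
    for d in degrees:
        block = list(range(start, start + d))
        for i, v in enumerate(block):
            sigma[v] = block[(i + 1) % d]
        start += d
    return sigma

def is_even(sigma):
    # Parity of sigma = parity of sum over cycles of (length - 1).
    seen, swaps = set(), 0
    for s in sigma:
        if s in seen:
            continue
        i = s
        while i not in seen:
            seen.add(i)
            i = sigma[i]
            swaps += 1
        swaps -= 1
    return swaps % 2 == 0

# Degrees 1 + 3 + 1 = 5: sum of (degree - 1) is 2, even, so a cover exists.
assert is_even(boundary_permutation([1, 3, 1]))

# Replace the 3 with a 2: sum of (degree - 1) is 1, odd, so there is
# no such degree-4 cover of the one-holed torus.
assert not is_even(boundary_permutation([1, 2, 1]))
```

For the one-holed torus \(\chi(\Sigma) = -1\), so the parity condition says the number of boundary cycles \(k\) must share the parity of \(\alpha\): here \(k = 3\) works with \(\alpha = 5\) but not with \(\alpha = 4\).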

Thus there exist permutations \( \sigma_x, \sigma_y \in \textrm{Sym}(\alpha)\) such that \( [\sigma_x, \sigma_y] = \sigma \). The homomorphism $$ \phi: \pi_1 \Sigma \rightarrow \textrm{Sym($\alpha$)}$$ given by mapping \(x_1\) to \(\sigma_x\), \(y_1\) to \(\sigma_y\), and all other generators to the identity therefore corresponds to a cover with the desired boundary.

Now we consider the slightly trickier general case where we have multiple boundary components, which is to say \(|\partial \Sigma| = b\). In this case the pair \((\Sigma, \partial \Sigma)\) corresponds to $$ (\langle x_1, y_1, \ldots, x_g, y_g, t_1, \ldots, t_{b-1} \rangle , \{t_1,\ldots, t_{b-1}, t_{b-1}\cdots t_{1}[x_1,y_1] \cdots[x_g,y_g] \} ).$$

Now suppose we desire that the \(i\)-th boundary component is covered with degrees \(\alpha_1^i, \ldots, \alpha_{k_i}^i\). Then let $$ \sigma_i = (1, \ldots, \alpha_1^i)(\alpha_1^i +1, \ldots, \alpha_1^i + \alpha_2^i) \cdots (\alpha_1^i + \cdots + \alpha_{k_i-1}^i +1, \ldots, \alpha)$$ for \(1 \leq i \leq b\). Now we wish to find \(\sigma_x, \sigma_y\) such that $$[\sigma_x, \sigma_y] = \sigma_1 \cdots \sigma_b.$$ This requires that the product of the \(\sigma_i\) is even. This means that the sum $$ \sum_i \sum_j (\alpha_j^i - 1) = \sum_i (\alpha - k_i) = \alpha b - \sum_i k_i = \alpha(2 - 2g - \chi(\Sigma)) - \sum_i k_i$$ should be even, which is true precisely when the total number of prescribed boundary components \(\sum_i k_i\) has the same parity as \(\alpha \chi(\Sigma)\).

Given that our parity condition is satisfied, we define our homomorphism \(\pi_1 \Sigma \rightarrow \textrm{Sym}(\alpha)\) as follows:

$$\begin{align} x_1 \mapsto & \sigma_x  \\ y_1 \mapsto & \sigma_y \\ x_2 \mapsto & 1 \\ \vdots \\ y_g \mapsto & 1 \\ t_1 \mapsto & \sigma_1^{-1} \\ t_2 \mapsto & \sigma_2^{-1} \\ \vdots \\ t_{b-1} \mapsto & \sigma_{b-1}^{-1} \end{align} $$

Then it only remains to verify that $$ t_{b-1}\cdots t_{1} [x_1,y_1] \cdots [x_g, y_g] \mapsto \sigma_{b-1}^{-1} \cdots \sigma_1^{-1} [\sigma_x, \sigma_y] =  \sigma_{b-1}^{-1} \cdots \sigma_1^{-1} \sigma_1 \cdots \sigma_{b-1} \sigma_b = \sigma_b, $$ and conclude that this gives us our desired cover.

QED.

(I’d like to thank Emily for informing me about Neumann’s Lemma, and Nir for various discussions related to this.)

Gromov, cheese, pretending to quit mathematics, and French.

In December last year, the Notices of the AMS ran a collection of reminiscences in memory of Marcel Berger (1927–2016), the late French differential geometer. He was also a former director of the Institut des Hautes Études Scientifiques and, according to Wikipedia, played a major role in securing Gromov positions in Paris and at the IHES in the 80s. Gromov contributed to the article, listing Berger’s mathematical achievements, before sharing a more personal anecdote:

Within my own field Gromov has had a profound influence. His essay Hyperbolic Groups led to the term “combinatorial group theory” being more or less abandoned and replaced with “geometric group theory”. As a graduate student I found the monograph frequently cited as the origin for an astounding range of ideas. At some point I had trouble finding a copy of the paper online and for a brief moment wondered if the paper itself were just an urban myth or elaborate hoax.

Gromov’s foray into group theory is just one episode in a long career. His first major breakthrough was in partial differential equations: the “h-principle”, which, according to Larry Guth, was analogous to observing that you don’t need to give an explicit description of how to put a wool sweater into a box in order to know that you can actually put it into the box. It is actually a little hard, at least for me, to get a full grasp of Gromov’s other contributions, as they span unfamiliar fields of mathematics, but I recommend this nice What is… article, written by the late Marcel Berger, describing Gromov’s contribution to the understanding of isosystolic inequalities.

There is a perception that great mathematicians, while no doubt very clever, have somehow lost a little of the common sense that the rest of us possess. I’m naturally inclined to discount such thinking; I am far happier believing that we are all fool enough to take leave of our common sense on occasion. However, it is hard to dismiss the idea entirely given the following admission by Gromov, in the autobiographical recollections he wrote on receiving the Abel Prize in 2009:

The passage speaks for itself, but I wish to emphasize that Gromov’s discovery of the correct pronunciation of French verb endings came after ten years of living in Paris. You don’t have to have extensive experience learning foreign languages to appreciate how remarkable an oversight this is. It certainly puts his remarks in other interviews about dedicating one’s life to mathematical pursuits in a rather strong light.

If you read the entire autobiographical essay you will find it rather short on biography. The best biographical details I have found come from this Le Monde article written when he was awarded the Abel Prize. Even then the details seem to be second-hand. The most interesting parts concern his leaving the Soviet Union:

I wanted to leave the Soviet Union from the age of 14. […] I could not stand the country. The political pressure there was very unpleasant, and it did not come only from the top. […] The professors had to teach in such a way as to show respect for the regime. We felt the pressure of always having to express our submission to the system. One could not do that without deforming one’s personality and each mathematician that I knew ended up, at a certain age, developing a neurosis accompanied by severe disorders. In my opinion, they had become sick. I did not want to reach that point.

Gromov, according to Georges Ripka, via Le Monde (apologies for my translation)

As the article goes on to explain, Gromov decided his best chance to escape was to hide his mathematical talent. He quit math, quit his university, and burned all his academic bridges. He stopped producing mathematics. Or at least writing it. He joined some meteorological institute and did research on paper pulp. Eventually, he was granted permission to emigrate to Israel, but on landing in Rome in 1974 he set off for the States instead, where Jim Simons secured him a position at Stony Brook.

(As a side note, this is Jim Simons of Renaissance Technologies and Simons Foundation fame. Simons, aside from his mathematical contributions, is probably one of the most important mathematicians alive in terms of funding, supporting, and propagandizing for mathematics. He is considered influential enough for the New Yorker to profile. Alongside Gromov, he is one of the names that every mathematician should know.)