Kyle Harrison
← Bookshelf
The Man From The Future
Related from this site
Key Takeaways
Under Consideration — to be added.
Interconnections
Under Consideration — to be added.
Highlights
- ‘How can it be that mathematics,’ asked Einstein, ‘being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?’
- The same year he joined the Manhattan Project, von Neumann was finishing, with the economist Oskar Morgenstern, a 640-page treatise on game theory – a field of mathematics devoted to understanding conflict and cooperation. That book would change economics, make game theory integral to fields as disparate as political science, psychology and evolutionary biology and help military strategists to think about when leaders should – and should not – push the nuclear button.
- ‘As he moved from pure mathematics to physics to economics to engineering, he became steadily less deep and steadily more important,’ observed von Neumann’s former colleague, mathematical physicist Freeman Dyson.
- when von Neumann is remembered it is largely for his legendary feats of mental gymnastics. Yet his legacy is omnipresent in our lives today. His views and ideas, taken up by scientists, inventors, intellectuals and politicians, now inform how we think about who we are as a species, our social and economic interactions with each other and the machines that could elevate us to unimaginable heights or destroy us completely.
- Jancsi was a formidable mental calculator even as a child.6 Some sources suggest that he could multiply two eight-digit numbers together in his head when he was six.
- The Minta was a test bed for von Kármán’s educational theories, largely imported from Germany. Discipline and rigour were central, and education was based on problem-solving rather than rote-learning. ‘At no time did we memorize rules from the book,’ says von Kármán’s son, Theodore, who attended the school. ‘Instead we sought to develop them ourselves. In my case the Minta gave me a thorough grounding in inductive reasoning, that is, deriving general rules from specific examples – an approach that remained with me throughout my life.’14 The younger von Kármán was to become the twentieth century’s leading expert on aerodynamics and would shape the aircraft designs of both the German Luftwaffe (inadvertently) and the US Air Force.
- The maths classes ‘set me back several years’, he complained in his memoirs. ‘Challenging students to explore ideas was not a common aim at the Minta.’
- Von Neumann himself attributed his generation’s success to ‘a coincidence of some cultural factors’ that produced ‘a feeling of extreme insecurity in the individuals, and the necessity to produce the unusual or face extinction’
- ‘Johnny’s unique gift as a mathematician was to transform problems in all areas of mathematics into problems of logic,’ says Freeman Dyson. ‘He was able to see intuitively the logical essence of problems and then to use the simple rules of logic to solve the problems.’
- ‘Out of nothing,’ wrote Bolyai to his father, ‘I have created a strange new universe.’
- In 1880, the famous physiologist Emil du Bois-Reymond had declared there were some questions (he called them ‘world riddles’), such as the ultimate nature of matter and force, that science could never answer. ‘Ignoramus et ignorabimus,’ as he put it: ‘we do not know and will not know’. Hilbert was having none of it. In 1900, he sounded his opposition to du Bois-Reymond’s pessimism, denying that there were any such limits to knowledge. Every question had a definite answer, he argued, even if what that answer showed was that answering the original question was impossible.
- ‘For us there is no ignorabimus, and in my opinion none whatever in natural science,’ Hilbert was to thunder even thirty years later. ‘In opposition to the foolish ignorabimus our slogan shall be: Wir müssen wissen – wir werden wissen.’ We must know – we will know.
- In 1901, the British philosopher and logician Bertrand Russell found a paradox at the heart of set theory, a branch of mathematics pioneered by Georg Cantor a quarter of a century earlier. Cantor, a brilliant, deeply religious Russian-born German Protestant, was the first mathematician to see that there are a multitude of different infinities, and some infinities are demonstrably larger than others. To the greatest of his infinities, the ‘Absolute’, Cantor assigned the Greek capital omega, Ω. Only in the mind of God, he said, could Ω be truly contemplated in all its glory.
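Cantor showed that some infinities outstrip others with his diagonal argument: given any attempted list of all infinite binary sequences, one can build a sequence the list must have missed. A minimal finite illustration of that idea (our own toy, with each "sequence" truncated to as many digits as there are rows):

```python
# Cantor's diagonal trick, finitely: flip the i-th digit of the i-th row
# to build a binary string that differs from every row on the diagonal.

def diagonal_escape(rows):
    """Return a binary string that differs from rows[i] at position i."""
    return "".join("1" if rows[i][i] == "0" else "0" for i in range(len(rows)))

rows = [
    "0000",
    "1111",
    "0101",
    "1001",
]
d = diagonal_escape(rows)
print(d)            # -> "1010": differs from each row at the diagonal
print(d in rows)    # -> False: the new sequence escapes the list
```

In the infinite case the same construction works against any enumeration, which is why the real numbers cannot be put in one-to-one correspondence with the integers.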
- One group, the ‘intuitionists’, led by the pugilistic and highly strung young Dutch mathematician L. E. J. ‘Bertus’ Brouwer, argued Russell’s paradox showed that mathematics was bumping up against the limits of the human mind. Brouwer was wary of transfinite numbers. There was no reason to believe, he argued, that the rules of logic could be applied to everything in mathematics and, in particular, not to Cantor’s dubious infinite sets.
- Max, concerned that his prodigy was taking an overly strong interest in mathematics, approached Theodore von Kármán, around twenty years Jancsi’s senior and already a renowned aerospace engineer, to dissuade him from pursuing the subject at university. ‘Mathematics’, he explained to von Kármán, ‘does not make money.’
- Hilbert’s dreams of a perfect mathematics were soon to be crushed. Within a decade, some of the brightest minds in mathematics answered his call. They would show that mathematics was neither complete nor consistent nor decidable.
- ‘The more successes the quantum theory enjoys, the more stupid it looks,’ said Einstein, who realized early on that the shotgun wedding of classical and quantum concepts could not last.1 Physicists soon wanted an amicable divorce.
- What followed was five of the most remarkable years in the history of science, during which a mechanics to describe the quantum world would be forged, much of it at Göttingen.
- Pondering the physical meaning of noncommutativity led Heisenberg in 1927 to an extraordinary new law of nature, which stated that the position and momentum of a particle cannot both ever have exact values at the same time. And if it is impossible to know at any moment both the location and velocity of a particle exactly then one cannot, as physicists had long assumed, predict where it will be next. Heisenberg’s insight was to become known as his uncertainty principle.
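The principle the passage describes can be written compactly. In modern notation (the precise factor of one half is due to Kennard, shortly after Heisenberg's 1927 paper), the noncommutativity of position and momentum implies a floor on their joint uncertainty:

```latex
\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar
\qquad\Longrightarrow\qquad
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```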
- Some physicists, satisfied there were now two formulations of quantum mechanics that appeared to give the right answers, shrugged off the unresolved questions swirling around quantum theory. Just choose the theory best suited to the problem at hand, they suggested, and hang the existential consequences.
- But Bohr, Einstein and other scientists were disturbed that the two theories seemed to be saying very different things about the nature of reality.
- Giants of quantum theory like Dirac and Schrödinger had tried to prove the equivalence of the two. Von Neumann was the first to crack it, showing decisively that wave and matrix mechanics were fundamentally the same theory. But never before had two descriptions of the same phenomena implied such different pictures of reality.
- German, not English, was the language of science in the 1920s. Practically all the founding papers of quantum mechanics were written in it. There was a flood of congresses and conferences for young researchers to attend. Academic talks would often spill over into coffee houses and bars. ‘The United States in those years was a bit like Russia: a large country without first-rate scientific training or research,’ Wigner told an interviewer in 1988. ‘Germany was then the greatest scientific nation on earth.’
- How different would the world have been if, rather than carrying out a genetic vendetta, Hitler had risen to power and emphasized his country’s place as the leader of science in the world? Emphasizing scientific achievement over bitter biases is the difference between creating a great renaissance and a great atrocity.
- When someone else’s presentation bored him, he would look engrossed while mentally retreating from the room to think about other, more interesting mathematical problems.
- “Preach a sermon to yourself on the subject.” Henry Eyring
- Physicists have wrestled with what quantum mechanics is really telling us about the nature of the physical world since its early days.
- The existence of GPS, computer chips, lasers and electron microscopes attest that quantum theory works beautifully. But nearly a hundred years after Heisenberg published his paper on matrix mechanics there is still no agreement on its meaning.
- Physicists ruefully joke that though new interpretations of quantum physics arrive with astonishing regularity, none ever go away. For many, that joke is turning sour. ‘It is a bad sign’, theoretical physicist Steven Weinberg noted recently, ‘that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means.’
- But no matter how many such steps we add, von Neumann argues, the sequence of events must end with someone perceiving these events. ‘That is,’ he says, ‘we are obliged always to divide the world into two parts, the one being the observed system, the other the observer.’
- According to his formalism, their wave functions become so inextricably intertwined that both must be represented together by a single wave function. Schrödinger would in 1935 coin the term ‘quantum entanglement’ to describe this phenomenon. This means that measuring some property of one of the pair instantly collapses the wave function of the whole system, even if the objects are separated by some vast distance after their initial interaction. Einstein, who was probably the first to fully appreciate this consequence of entanglement and did not like it one bit, called it ‘spooky action at a distance’
- Others, including his friend Wigner, who often discussed such things with him, would later suggest that the consciousness of the (human) observer was responsible – a conclusion implied but not stated overtly in von Neumann’s work.45 Einstein strongly objected to this idea – the Dutch physicist and historian Abraham Pais recalled ‘that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it’.46 Einstein (and he was hardly alone) felt that things should have properties regardless of whether there was someone there to see them.
- Schrödinger’s cat was a gotcha of the highest order, a takedown of efforts to paper over the cracks in quantum theory. A cat, most people would agree, can be either dead or alive. But if we follow von Neumann’s logic, until someone opens the chamber, the cat’s wave function is entangled (the term is used for the first time in this paper) with that of the radioactive substance, and the unfortunate feline is both alive and dead. If quantum mechanics can result in such patently obvious nonsense at the macroscopic scale, how can we know the theory ‘truly’ describes the atomic realm?
- An electron must have properties – be somewhere, for example – before it is caught on a phosphor-coated screen. In what has become the most controversial part of his book in recent years, von Neumann discusses this idea – and seemingly dismisses it.
- ‘Jancsi felt at home in America from the first day,’ Wigner continued. ‘He was a cheerful man, an optimist who loved money and believed firmly in human progress. Such men were far more common in the United States than in the Jewish circles of central Europe.’
- One of many objections raised to Everett’s ideas is that universes multiply like rabbits, a consequence that strikes some physicists as using an ontological sledgehammer to crack an epistemic nut.77 In some versions of the theory ‘measurement’ can mean any quantum interaction. Every time a nucleus emits an alpha particle or a photon interrogates an atom, a whole new universe springs into being.
- Most computers today work by manipulating binary digits – bits, which can each be either 1 or 0. A quantum computer instead works with a bit in a superposition of states. These quantum bits or ‘qubits’ each have a probability of potentially being 1 or 0 but are, in effect, both, until a measurement is made. A qubit, however, really comes into its own when it is entangled with others – ideally, with hundreds of others rather than the few dozen or so that have been corralled together to date. Physicists are probing the limits of quantum theory with experiments to find out whether such large assemblies of particles (which might be atoms, photons or electrons, for example) can be entangled and kept in quantum states long enough to do useful computing.
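The superposition and entanglement described above can be sketched with nothing but complex amplitudes. This is a toy model (stdlib only, not a real quantum-computing library): a two-qubit Bell state whose measurement outcomes are always perfectly correlated, however many times it is sampled.

```python
import random

SQRT_HALF = 2 ** -0.5

# Bell state (|00> + |11>)/sqrt(2): amplitudes for basis states 00, 01, 10, 11.
bell = [SQRT_HALF, 0.0, 0.0, SQRT_HALF]

def measure(state):
    """Collapse the two-qubit state: pick a basis state with probability |amp|^2."""
    r, total = random.random(), 0.0
    for i, amp in enumerate(state):
        total += abs(amp) ** 2
        if r < total:
            return i >> 1, i & 1   # (first qubit, second qubit)
    return 1, 1                    # numerical-roundoff fallback

# Measuring one qubit fixes the other: the two outcomes always agree.
outcomes = [measure(bell) for _ in range(1000)]
print(all(a == b for a, b in outcomes))   # -> True: perfectly correlated
```

Each individual outcome is random (00 or 11 with equal probability), yet the pair is never split, which is the correlation Einstein found so "spooky".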
- ‘In spite of the fact that quantum mechanics agrees well with experiment,’ he says in his book, ‘one can never say of the theory that it has been proved by experience, but only that it [is] the best known summarization of experience.’
- When a fire gutted the Reichstag the following month, Hitler was awarded emergency powers. Freedom of speech, freedom of the press and the right to protest were suspended along with most other civil liberties. In March, he consolidated his power with the Enabling Act, which effectively allowed Hitler and his cabinet to bypass parliament. One of the first acts of the new regime was to introduce the ‘Law for the Restoration of the Professional Civil Service’, which called for the removal of Jewish employees and anyone with communist leanings.
- In June that same year, von Neumann wrote to Veblen: ‘If these boys continue for only two more years (which is unfortunately very probable), they will ruin German science for a generation – at least.’86 How right he was. By the end of 1933, Germany was a totalitarian dictatorship, and the trickle of scientists leaving the country became a flood.
- He found that university science departments that were bombed during the war recovered by the 1960s, but those that had lost staff remained sub-par well into the 1980s. ‘These calculations suggest that the dismissal of scientists in Nazi Germany contributed about nine times more to the decline of German science than physical destruction during WWII,’ Waldinger notes.
- Ignoring them, Hilbert triumphantly led a delegation of sixty-seven mathematicians to the congress. ‘It is a complete misunderstanding of our science to construct differences according to people and races, and the reasons for which this has been done are very shabby ones,’ he declared. ‘Mathematics knows no races. For mathematics, the whole cultural world is a single country.’
- The next year, when Rust attended a banquet at Göttingen, he asked Hilbert whether it was true that mathematics had suffered after the removal of Jews. ‘Suffered?’ replied Hilbert. ‘It hasn’t suffered, Herr Minister. It just doesn’t exist anymore.’
- The golden age of German science was over. America was about to get an injection of talent that would transform its fortunes for ever. Von Neumann would soon be reunited with many of his Göttingen colleagues – not to discuss the finer points of quantum mechanics this time, but to design the most powerful bomb ever made.
- Johnny began to court Mariette, rather bumblingly, in 1927, when she was studying economics at the University of Budapest. She was a fashionable socialite, and he was already a world-renowned mathematician, but she was never intimidated by his intellect. Von Neumann proposed, in his way, two years later. ‘You and I could have a lot of fun together,’ he told her, ‘for instance, you like to drink wine and so do I.’
- ‘These poor bastards could now sit and think clearly all by themselves, OK?’ wrote one critic, the Nobel Prize-winning physicist Richard Feynman. ‘They have every opportunity to do something, and they’re not getting any ideas … Nothing happens because there’s not enough real activity and challenge: You’re not in contact with the experimental guys. You don’t have to think how to answer questions from the students. Nothing!’
- Von Neumann, however, was never troubled by a lack of ideas. If anything, he had too many: one criticism of his 1930s work was that, after a few dazzling papers on a subject, he would lose interest and leave the mundane job of following them up to others.
- Germany’s infamous Paris gun had a staggering firing range of over 70 miles. But a projectile hurled high and long in this way flies through progressively thinner air as it gains altitude, and so experiences less resistance to its motion. A failure to adequately account for this meant that early efforts to calculate trajectories were wildly off, and shells flew far beyond their intended targets. Throw in some more complications – a moving target, boggy ground and so forth – and the equations of motion often become impossible to solve exactly (in mathematical terms they become ‘non-linear’), forcing mathematicians to approximate. That required arithmetic and lots of it: hundreds of multiplications for a single trajectory. What was needed, but not available (yet), was a device able to perform such calculations accurately at the rate of thousands per second. Some of the earliest room-sized computers would be built to solve exactly this problem.
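The step-by-step arithmetic the passage describes can be sketched as a numerical integration. All constants below (drag coefficient, atmospheric scale height, launch values) are illustrative assumptions, not the Paris gun's real figures; the point is that drag weakens as the air thins with altitude, making the equations non-linear and forcing exactly this kind of repetitive calculation.

```python
import math

G = 9.81      # gravity, m/s^2
K0 = 1e-4     # drag constant at sea level (assumed, per unit mass)
H = 8000.0    # atmospheric scale height, m

def trajectory(v0, angle_deg, thin_air=True, dt=0.01):
    """Euler-integrate until the shell returns to the ground; return range in m."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # Drag falls off exponentially with altitude when thin_air is True.
        k = K0 * (math.exp(-y / H) if thin_air else 1.0)
        vx -= k * speed * vx * dt
        vy -= (G + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

far = trajectory(1600, 50, thin_air=True)
near = trajectory(1600, 50, thin_air=False)
print(far > near)   # -> True: assuming sea-level air all the way up undershoots
```

Every loop iteration stands in for the hundreds of hand multiplications a single trajectory once demanded, which is precisely the workload the early room-sized computers were built to absorb.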
- ‘There was a bottle of wine and the confab started,’ she says. ‘As often as not, another bottle was passed around; pretty soon a gypsy band was summoned, perhaps some close friends cajoled out of bed and a full-fledged “mulatsag” was on its way.’ A mulatsag, Klári explains, was ‘simply the spontaneous combustion of a bunch of people having a good time’. ‘At six o’clock in the morning,’ she continues, ‘the band was dismissed, we went back upstairs, had a quick shower, the men went to work, the children to school, and the ladies with their cooks to the market.’
- von Neumann was not always a consolation. He was sometimes distant and aloof; ‘some people, especially women, found him lacking in curiosity about subjective feelings and perhaps deficient in emotional development,’ says Ulam.
- The extraordinary news that uranium could be split arrived in the US by boat on 16 January 1939. German chemist Otto Hahn and his assistant Fritz Strassmann had discovered that the uranium nucleus, bombarded with neutrons, appeared to burst into fragments including barium – an element more than half its size. But with no known mode of radioactive decay to account for their observations, the chemists were baffled. It was Lise Meitner, Hahn’s erstwhile collaborator, and her nephew, Otto Frisch, who first explained the physics behind their results, a type of nuclear reaction that Frisch later named ‘fission’.
- The arm of the OSRD charged with nuclear work morphed into what became known as the Manhattan Project. The massive effort to build the atom bomb, codenamed Project Y, would cost the US $2 billion (more than $20 billion today) and at its height employ more than 100,000 people.24 In September 1942, the forty-six-year-old Army engineer Leslie Groves was appointed to lead it. The very next month, Groves chose Oppenheimer to head the top-secret laboratory that would develop the bomb.
- Worst of all, from the military’s standpoint, he was a left-winger whose closest associates – his girlfriend, wife, brother and sister-in-law – had been, and perhaps still were, members of the Communist Party. Even Oppenheimer’s landlady in Berkeley was a communist. These facts would be used to strip Oppenheimer of his security clearance in 1954, a public act of humiliation that effectively ended his work for the government.
- In Oppenheimer, Groves recognized another tireless leader who would get things done. ‘He’s a genius,’ Groves would say after the war (though he too would testify against him in the end). ‘A real genius. Why, Oppenheimer knows about everything. He can talk to you about anything you bring up. Well, not exactly. He doesn’t know anything about sports.’
- Von Neumann was irked when newspapers reported that he had received the medal for showing that a ‘miss was better than a hit’. He had actually discovered that large bombs cause far more damage over a wider area when they are detonated in the air above their target than on the ground. The principle was well known, but von Neumann showed that the effect of an airburst was much larger than previously thought, and he improved the accuracy of the calculations to determine the optimal altitude of a bomb’s detonation.
- It was never used, one concern being that if the bomb did actually produce an explosion a fraction of the expected size, Jumbo would instantly be transformed into 200 tons of radioactive shrapnel. Groves, fearing Congress would regard the $12 million Jumbo to be a white elephant, ordered it destroyed. Several 500-pound demolition bombs could only blow the ends off the vessel, however. Jumbo’s rusting hulk still stands in the New Mexico desert today.
- The best estimates of Trinity’s power put the figure somewhere between 20,000 and 22,000 tons. Oppenheimer reached for poetry, recalling a verse from ancient Hindu scripture, the Bhagavad Gita, which he had read in the original Sanskrit. ‘Now I am become Death,’ he said, ‘the destroyer of worlds.’ Bainbridge was pithier. ‘Now we are all sons of bitches,’ he told Oppenheimer. ‘The war is over,’ Groves’ deputy, Thomas Farrell, declared at Base Camp, 5 miles south of S-10000. ‘Yes,’ Groves replied, ‘after we drop two bombs on Japan.’
- In Potsdam, Truman had decided quite quickly that it would not, after all, be in America’s interests for the Soviet Union to enter the Pacific war. When he got word of the successful test, he deliberated over whether he could tell Stalin about the bomb without provoking a Soviet invasion of Japan. In the end, Truman sidled over to the Russian leader and told him that the United States now possessed a ‘new weapon of unusual destructive force’. Stalin, completely unperturbed, told him to make good use of it. Spies in the Manhattan Project had long ago told Stalin everything he needed to know.
- There is evidence from as early as May 1943 that Germany was not in the cross-hairs of the American government. Policy-makers feared that if an atom bomb failed to detonate on German soil, scientists there would be better placed to use it to aid their own bomb-making efforts than their Japanese counterparts. Some historians have argued that racism played a role: the Japanese were despised by the American public. The internment of thousands of Japanese-Americans during the war, for example, was later investigated and found to be based on ‘racial prejudice, wartime hysteria, and a failure of political leadership’ rather than any real security risk.35 Another motive was revenge for the attack on Pearl Harbor. Whatever the reasons, had the émigré scientists working on the Manhattan Project known that the ultimate destination of the bomb they were helping to build was Imperial Japan rather than Nazi Germany, many might have quit – or never joined at all.
- Good weather allowed Little Boy to be dropped on the primary target of Hiroshima, but a crosswind blew the bomb away from the aiming point, Aioi Bridge, so that it detonated 1,900 feet (580 metres) above Shima Hospital, modelled by its founder on the Mayo Clinic in Rochester, Minnesota.
- The explosion, equivalent to about 17,000 tons of TNT, and the resulting firestorm killed some 70,000 people, mostly civilians. Many thousands more would die from burns and radiation poisoning by the end of the year.
- Little Boy and Fat Man claimed more lives in minutes than the senseless firebombing of Dresden by hundreds of Allied (mostly British) aircraft. No decision of such magnitude should pass into the annals of history unscrutinized, and no schoolchild taught to accept uncritically that the horrors visited on the citizens of those two cities can be justified. ‘The experience of these two cities’, concludes a study of Hiroshima and Nagasaki written by Japanese scientists and doctors some thirty-six years after the bombings, ‘was the opening chapter to the possible annihilation of mankind.’
- ‘Since the beginning of the war I had been retreating step by step from one moral position to another, until at the end I had no moral position at all.’40 The story of the Second World War as a whole is, likewise, one of progressive moral retreat; the bombings of Hiroshima and Nagasaki a last hideous refutation of the idea that the lives of civilians mattered, and that any war could be ‘ethical’.
- ‘Computers in the future may have only 1,000 vacuum tubes and perhaps weigh only 1½ tonnes.’ Popular Mechanics, March 1949
- When he awoke late that night, von Neumann began to prophesy at great speed, stuttering as he did when he was under strain. ‘What we are creating now,’ he told her, ‘is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only for the military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have. And this is only the beginning! The energy source which is now being made available will make scientists the most hated and also the most wanted citizens of any country.’ But then von Neumann abruptly switched from talking about the power of the atom to the power of machines that he thought were ‘going to become not only more important but indispensable’. ‘We will be able to go into space way beyond the moon if only people could keep pace with what they create,’ he said. And he worried that if we did not, those same machines could be more dangerous than the bombs he was helping to build.
- The Internet was conceived many times over before computers were linked together in the 1960s and ’70s to form the ARPANET.
- They were aided by mathematician Adele Goldstine, Herman’s wife, who would later write the ENIAC’s user manual, and six newly trained operators – all women, four with mathematics degrees.
- In December 1947, by which time the ENIAC had been moved to the BRL in line with contractual obligations, a piece in the New York Times noted that 17 per cent of the time was spent on set-up and testing, 41 per cent on trouble-shooting and fixing problems and only 5 per cent – about two hours per week – on doing actual work.
- But he was not an engineer – he was a mathematician, with an extraordinary capacity to cleave problems of their superficial complexities and render them in their most elemental form. He now brought this talent to bear on the jumbled ideas of the ENIAC team. ‘John von Neumann was no aesthete,’ note Haigh and his colleagues, ‘but his intellectual response to ENIAC might be likened to that of a Calvinist zealot who, having taken charge of a gaudy cathedral, goes to work whitewashing frescos and lopping off ornamental flourishes.’15 That impulse would yield a design for the new machine that would inspire generations of engineers and scientists to build computers in its image.
- An unlikely turn of history would entangle the intellectual roots of the modern computer with Hilbert’s challenge to prove that mathematics was complete, consistent and decidable. Soon after Hilbert issued his challenge, the intellectually dynamic but psychologically frail Austrian mathematician Kurt Gödel would demonstrate that it is impossible to prove that mathematics is either complete or consistent. Five years after Gödel’s breakthrough, a twenty-three-year-old Turing would attack Hilbert’s ‘decision problem’ (Entscheidungsproblem) in a way completely unanticipated by any other logician, conjuring up an imaginary machine to show that mathematics is not decidable. The formalisms of these two logicians would help von Neumann crystallize the structure of the modern computer. The result of his musings, First Draft of a Report on the EDVAC, would become the most influential document in the history of computing.16 ‘Today,’ says computer scientist Wolfgang Coy, ‘it is considered the birth certificate of modern computers.’
- First-order logic is a set of rules and symbols that can, for example, express the sort of formal arguments or ‘syllogisms’ found in the classical logic of Aristotle.19 The beauty of this sort of symbolic logic is that the system formalizes natural language, stripping the statements down to their bare logical bones.
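As an illustration of the stripping-down the passage describes, here is Aristotle's best-known syllogism rendered in first-order symbols (our own example, not one from the text): the two premises, in these bare logical bones, force the conclusion.

```latex
\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),\quad
\mathrm{Man}(\mathrm{socrates})
\;\vdash\;
\mathrm{Mortal}(\mathrm{socrates})
```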
- In other words, there are truths in mathematics that cannot be proven by mathematics.
- The following day Hilbert gave his retirement address, a passionate speech in which he declared again there were no unsolvable problems in mathematics and uttered the words that would become the epitaph on his gravestone: We must know – we will know. But Gödel had already proved him wrong.
- Generations of philosophers and mystics have made much of Gödel’s theorem, each claim wilder than the next. The cognitive scientist Douglas Hofstadter has seen in that loop of self-reference a glimmer of our self-awareness, the very essence of human consciousness.24 Some have even suggested that the work provides evidence for the existence of God (who had left these truths floating free for mathematicians to discover?), claims bolstered by the discovery of an unfinished proof for the existence of God in Gödel’s papers after his death.
- Von Neumann called Gödel the greatest logician since Aristotle and gave up working on the foundations of mathematics.
- From the 1950s onwards, the universal machine has been appropriated as a foundation stone of theoretical computer science. So often is Turing’s name mentioned in the same breath as the programmable computer, the myth has sprung up that he somehow invented the computer. The difficulties of distinguishing myth from reality are exacerbated by the fact that Turing did design a computer (the ACE, for the National Physical Laboratory in 1945, after he saw von Neumann’s EDVAC report), and by Turing’s very real formative contributions to the field of artificial intelligence. But ‘On Computable Numbers’ was a work of abstract logic; its avowed purpose to solve Hilbert’s decision problem. Like Schrödinger’s famous description of an alive-dead cat, published exactly twelve months earlier, Turing’s machine was a thought experiment. ‘Schrödinger was not trying to advance the state of the art of feline euthanasia,’ says Haigh. ‘Neither was Turing proposing the construction of a new kind of calculating machine.’
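Turing's imaginary machine is simple enough to sketch in a few lines: a table mapping (state, symbol) pairs to (write, move, next-state) triples, applied to an unbounded tape. The machine below, which increments a binary number in place, is our own illustration of the formalism, not one of Turing's originals.

```python
# A toy Turing machine: run a transition table on a tape of symbols.

def run(program, tape, state="start", pos=0, max_steps=10_000):
    cells = dict(enumerate(tape))               # sparse "infinite" tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")            # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Increment a binary number: scan right to the end, then carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # ran off the left edge: new leading 1
}

print(run(increment, "1011"))   # -> "1100"  (11 + 1 = 12 in binary)
```

The universal machine is the same construction one level up: a single transition table that reads some other machine's table off the tape and simulates it, which is the abstract seed of the stored-program computer.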
- His ‘hypothetical element’ is an idealized neuron, shorn of its physiological complexities. This seems odd today, but von Neumann, Turing, Norbert Wiener and other thinkers who contributed to the foundations of the field that became known as ‘artificial intelligence’ did think about computers as ‘electronic brains’. Today using ‘brain’ or ‘neuron’ in the context of computers seems laughably naive. Yet we accept the similarly anthropomorphic use of ‘memory’ to mean ‘storage’ without blinking an eye.
- Computer designers now refer to the whole configuration as the ‘von Neumann architecture’, and nearly all computers in use today – smart phones, laptops, desktops – are built according to its precepts.
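The core of those precepts is the stored-program idea: instructions and data live in one shared memory, and a processor walks through them in a fetch-decode-execute loop. A minimal sketch (the instruction set here is invented for illustration):

```python
# A toy stored-program machine: one memory holds both program and data.

def execute(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, arg = memory[pc]           # fetch
        pc += 1
        if op == "LOAD":               # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program (addresses 0-3) and data (addresses 5-7) side by side:
memory = {
    0: ("LOAD", 5),
    1: ("ADD", 6),
    2: ("STORE", 7),
    3: ("HALT", None),
    5: 2,
    6: 40,
    7: 0,
}
print(execute(memory)[7])   # -> 42
```

Because the program sits in the same memory as its data, it can be loaded, replaced, or in principle modified like any other data, which is what separated the EDVAC design from the hard-wired ENIAC.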
- Von Neumann emphasized that the memory of the machine should be large, citing the sort of number-crunching capabilities that he knew would be required by Los Alamos. The EDVAC report called for the computer to have a storage capacity of 8,000 words, each 32 bits long – much more than the ENIAC, which could store only twenty ten-digit numbers.
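The gap between the two capacities is worth working out in bits. Counting ENIAC's twenty signed ten-digit decimal numbers as roughly 10 × log2(10) ≈ 33.2 bits each is our own back-of-envelope convention, not a figure from the text:

```python
import math

edvac_bits = 8_000 * 32                   # 8,000 words of 32 bits = 256,000 bits
eniac_bits = 20 * 10 * math.log2(10)      # ~664 bits of equivalent storage

print(edvac_bits)                         # -> 256000
print(round(edvac_bits / eniac_bits))     # -> 385: a ~385-fold jump in capacity
```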
- ‘Just as ENIAC was shaped by the firing table problem,’ historians note, ‘EDVAC was shaped by the fluid dynamics of the atom bomb.’
- There would never be a finished version of the EDVAC report. In the summer of 1945, von Neumann had more pressing commitments. The ‘first draft’ he sent to Goldstine stopped abruptly and contained blank spaces that had been left to add references and notes. Descriptions of the input and output units were sparse, and von Neumann later sent details of how programs might be recorded and read from magnetic tapes, to be included in a future second draft.
- Not everyone was pleased. The report Goldstine circulated had only the name of John von Neumann on its title page. Eckert and Mauchly, who were hoping to patent aspects of computer design, were furious. The ENIAC’s inventors accused von Neumann of inflating his contribution to the project and rehashing their work.
- ‘He sold all our ideas through the back door to IBM,’ Eckert complained in 1977.47
- Goldstine, who would also work with von Neumann at Princeton, agreed. ‘It is obvious that von Neumann, by writing his report, crystallized thinking in the field of computers as no other person ever did. He was, among all members of the group at the Moore School, the indispensable one.’
- There is evidence that von Neumann had a nobler motive for playing down Eckert and Mauchly’s roles. He wanted to accelerate the development of computers and feared the commercial route the ENIAC’s inventors were pursuing would stifle progress with trade secrets and litigation.
- #[[Open Source]]
- The purpose of the EDVAC report, von Neumann testified the following year, was ‘to contribute to clarifying and coordinating the thinking of the group’ and ‘further … the art of building high speed computers’ by disseminating the work as quickly and as widely as possible. ‘My personal opinion was at all times, and is now, that this was perfectly proper and in the best interests of the United States.’
- ‘I certainly intend to do my part to keep as much of this field “in the public domain” (from the patent point of view) as I can,’ von Neumann wrote to Frankel as he made plans for building his own computer at the IAS.51 Patent rights to the IAS machine were in large part handed over to the government in mid-1947. The IAS team sent a stream of detailed progress reports to about 175 institutions in several different countries, helping to spawn a generation of computers across the world. ‘The remarkable feature of the reports,’ noted I. J. Good, who worked with Turing during and after the war, ‘was that they gave lucid reasons for every design decision, a feature seldom repeated in later works.’
- #[[Open Source]]
- What had become the longest trial in the history of the federal court system concluded with the ruling that the most valuable invention of the twentieth century could not be patented. The open source movement, born a decade or so later, would soon shun corporate secrecy, lauding the benefits of freely sharing information to drive forward innovation. Thanks to von Neumann those principles were baked into computing from the very beginning.
- Gleb Wataghin, a Russian-Italian physicist who had recently returned from Brazil. ‘I suppose you are not interested in mathematics anymore,’ Wataghin teased. ‘I hear you are now thinking about nothing but bombs.’56 ‘That is quite wrong,’ von Neumann replied. ‘I am thinking about something much more important than bombs. I am thinking about computers.’
- A team led by Jean Bartik (born Betty Jean Jennings), one of the ENIAC’s original operators, was hired in March 1947 to programme the machine in its new guise – the first time anyone had been employed solely for that task. The job of computer programmer was born.
- In the summer of 1947, Los Alamos hired her as a consultant. ‘I learned how to translate algebraic equations into numerical forms,’ she reflected many years later, ‘which in turn then have to be put into machine language in the order in which the machine was to calculate it.’ In other words, she ‘became one of the first “coders”’. She described her new occupation in terms that would be familiar to many programmers today. It was, she said, a ‘very amusing and rather intricate jig-saw puzzle’ that was ‘lots and lots of fun’.
- Ulam realized that many real-world problems are surprisingly similar in nature to working out the chances of winning solitaire. A complex situation can be made tractable by setting up a model that is then run repeatedly to reveal the most likely outcomes. ‘It’s infinitely cheaper to imitate a physical process in a computer and make experiments on paper, as it were, rather than reality,’ Ulam explained later.
- In March 1947, von Neumann sent an eleven-page plan for running Monte Carlo bomb simulations on an electronic computer to Robert Richtmyer, head of the Los Alamos theoretical division. Computers now run Monte Carlo simulations thousands of times a day, and applications range from optimizing stock portfolios to testing the properties of new materials.
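Ulam's insight, that a hard probability question becomes tractable if you simply run the experiment many times and tally the outcomes, translates directly into code. The sketch below is a toy illustration of my own (nothing to do with the Los Alamos bomb simulations): it estimates π by sampling random points in the unit square and counting how many land inside the quarter circle.

```python
import random

def monte_carlo_pi(trials: int) -> float:
    """Estimate pi by repeated random sampling: the fraction of points
    falling inside the quarter circle approaches pi/4 as trials grow."""
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / trials

random.seed(0)  # fixed seed so the run is reproducible
estimate = monte_carlo_pi(100_000)
print(f"pi is approximately {estimate:.3f}")
```

The same repeat-and-tally loop, with the geometry swapped for a model of neutron collisions, is the shape of the simulations von Neumann proposed to Richtmyer.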
- Still, she was able to write up a report on the conversion and use of the ENIAC. Expanded and edited, the document was to become the definitive record of the Monte Carlo runs she and Metropolis had overseen. Klári returned to the BRL in October. Historians have recently recovered the complete program for this second Monte Carlo run: twenty-eight pages of code written in Klári’s hand.
- The full scope of Klári’s contributions to the early days of computing has only recently come to light. Run on an ENIAC emulator today, her Monte Carlo code reliably spits out the expected numbers, virtually plotting the fates of neutrons inside the implosion bomb that von Neumann had helped build.
- Debate still rages in some quarters over whether the ENIAC in its new guise really constituted a true ‘stored-program’ computer. There can be little doubt, however, that Klári’s Monte Carlo code is the first truly useful, complex modern program ever to have been executed.
- The writer George Dyson memorably compares the machine’s appearance to ‘a turbocharged V-40 engine, about 6 feet high, 2 feet wide, and 8 feet long’. ‘The computer itself,’ he continues, ‘framed in aluminium, weighed only 1,000 pounds, a microprocessor for its time. The crankcase had 20 cylinders on each side, each containing, in place of a piston, a 1,024-bit memory tube.’
- Spurred by von Neumann and the numerous computers springing up in the wake of his project, the company rapidly changed course, producing digital stored-program machines in the EDVAC mould. The IBM 701 was, says Bigelow, ‘a carbon copy of our machine’.82 By the 1960s, IBM manufactured about 70 per cent of the world’s electronic computers. ‘Probably’, Teller told his biographers, ‘the IBM company owes half its money to Johnny von Neumann.’
- Did von Neumann understand the potential of the machines he helped to invent? Yes, he did. In reflective mood in 1955, he noted that the ‘over-all capacity’ of computers had ‘nearly doubled every year’ since 1945,84 and often implied in conversation that he expected that trend to continue. His observations prefigure ‘Moore’s law’, named after Intel’s cofounder Gordon Moore, who predicted in 1965 that the number of components on an integrated circuit would double every year.
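A quick back-of-envelope calculation (my own illustration, not a figure from the book) shows why ‘nearly doubling every year’ was such a striking observation: compounding turns a decade of doubling into a thousandfold gain.

```python
def capacity_after(years: int, annual_factor: float = 2.0, start: float = 1.0) -> float:
    """Compound growth: capacity multiplying by `annual_factor` each year."""
    return start * annual_factor ** years

# Doubling yearly over the decade 1945-1955 multiplies capacity ~1,000-fold
print(capacity_after(10))  # 1024.0
```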
- By a quirk of history, the individual who had perhaps the deepest understanding of the logical and mathematical underpinnings of the modern computer also had the power, influence and managerial skills to build one, as well as the good sense to ensure that the race to faster, more powerful machines he initiated was run (at least to begin with) in public. ‘Von Neumann cleared the cobwebs from our minds as nobody else could have done,’ wrote Bigelow long afterwards. ‘A tidal wave of computational power was about to break and inundate everything in science and much elsewhere, and things would never be the same.’
- Von Neumann was rational. At times some may have thought him too rational. Perhaps this is best illustrated by the child custody arrangements that he and his first wife Mariette came up with for their daughter, Marina, when she was just two years old. The two agreed that until the age of twelve, Marina would live with her mother and spend holidays with her father. After that, when she was ‘approaching the age of reason’, Marina would live with her father to receive the benefit of his genius.1 ‘It was a thoughtful and well-intentioned agreement,’ Marina says in her memoirs, ‘but they were too inexperienced to realize that adolescence is often the stage in life farthest removed from the age of reason.’
- Her father’s letters, Marina notes, often reflected his ‘lifelong desire to impose order and rationality on an inherently disorderly and irrational world’.3 Game theory sprang from von Neumann’s urge to find neat mathematical solutions to knotty real-world problems during one of the most ‘disorderly and irrational’ periods in human history.
- Lasker became famous for shunning textbook moves, making risky plays and throwing his opponents into confusion. Psychology was central to his game: Lasker did not play chess, it was said, so much as the man in front of him.
- ‘The science of contest’, Lasker predicted, ‘will progress irresistibly, as soon as its first modest success has been scored.’ Institutes dedicated to the new discipline would ‘breed teachers capable of elevating the multitude from its terrible dilettantism’ in matters of negotiation, transforming politics completely, and ‘aid the progress and the happiness of all humankind’. Ultimately, he writes, their aim would be to render war obsolete by providing rational methods for reaching agreements.
- The first decisive step towards a ‘science of contest’ would be taken towards the end of the following year. On 7 December 1926, von Neumann unveiled his proof of the minimax theorem to mathematicians at Göttingen. Published in 1928, the paper expounding his proof, On the Theory of Parlour Games,10 would firmly establish game theory as a discipline, framing human cooperation and conflict in truly mathematical terms.
- Von Neumann could not get any further with multi-player games, so he switched to thinking about a situation with just two opponents whose individual payouts sum to zero. ‘It is not enough to succeed. Others must fail,’ Iris Murdoch once wrote. Von Neumann coined the term ‘zero-sum’ to describe such games of total conflict, in which one person’s loss is the other’s gain. One indication of the influence of game theory is that ‘zero-sum’ has now passed into the vernacular.
- Von Neumann set out to prove that every two-player zero-sum game similarly has a ‘solution’: a strategy for each player that guarantees the best possible outcome against a rational opponent who is exclusively out for themselves too.
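The guarantee at the heart of the minimax theorem can be sketched in a few lines. The example below is my own (the ‘matching pennies’ payoff matrix is a textbook standard, not one from the book): the row player grid-searches over mixed strategies and keeps the one whose worst-case expected payoff is highest, the ‘maximin’ that von Neumann proved equals the column player's ‘minimax’.

```python
# Row player's payoffs in matching pennies (zero-sum: column player gets the negative).
PAYOFF = [[1, -1],
          [-1, 1]]

def maximin_strategy(payoff, steps=1000):
    """Search the row player's mixed strategy p (probability of playing row 0)
    for the one maximizing the worst-case expected payoff."""
    best_p, best_value = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i / steps
        # Expected payoff against each of the column player's pure replies
        replies = [p * payoff[0][c] + (1 - p) * payoff[1][c] for c in (0, 1)]
        worst = min(replies)  # a rational opponent picks the reply that hurts most
        if worst > best_value:
            best_p, best_value = p, worst
    return best_p, best_value

p, v = maximin_strategy(PAYOFF)
print(f"optimal mix: play row 0 with probability {p:.2f}, guaranteed value {v:.2f}")
```

For matching pennies the search lands on the familiar answer: randomize fifty-fifty, guaranteeing an expected payoff of zero no matter what the opponent does.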
- ‘Any event,’ von Neumann says, ‘may be regarded as a game of strategy if one looks at the effect it has on the participants.’ He describes this interplay between players as the ‘principal problem of classical economics’. ‘How,’ he muses, ‘is the absolutely selfish “homo economicus” going to act under given external circumstances?’
- He felt the discipline of economics was not ready for a more concrete treatment. ‘Economics, as a science, is only a few hundred years old,’ he explained later. ‘The natural sciences were more than a millennium old when the first really important progress was made … methods in economic science are not worse than they were in other fields. But we will still require a great deal of research to develop the essential concepts – the really usable ideas.’
- Privately, he was less diplomatic. ‘If these books are unearthed sometime a few hundred years hence, people will not believe that they were written in our time,’ von Neumann confided to a friend in 1947, referring to some of the discipline’s most lauded contemporary works. ‘Rather they will think that they are about contemporary with Newton, so primitive is their mathematics. Economics is simply still a million miles away from the state in which an advanced science is, such as physics.’
- Despite the generally frosty reception, ‘A Model of General Economic Equilibrium’ sparked a revolution. Mathematicians, inspired by von Neumann’s achievement, poured into economics and began applying fresh methods to the dismal science. By the 1950s, the subject was transformed.
- But as was his way, von Neumann had moved on long before anyone really recognized its significance. A year before the translated paper appeared, he produced Theory of Games and Economic Behavior, the book that would forever change the social sciences and profoundly influence economic and political decision-making from the 1950s to the present day.
- Morgenstern found Princeton to be much as Einstein had described it five years earlier; ‘a quaint and ceremonious village of puny demigods on stilts’.
- Morgenstern’s principal contribution to the volume was the introduction – which would become the most widely read part of Theory of Games. By the time he wrote it, he was thoroughly disillusioned with his discipline. ‘Economists simply don’t know what science means,’ he complained privately. ‘I am quite disgusted with all of this garbage.’45 John Maynard Keynes, whose thinking shaped government policy the world over for much of the twentieth century, was ‘one of the biggest charlatans who has ever appeared on the economic scene’, said Morgenstern. ‘And everybody is on their belly before him.’
- In part, the lack of meaningful progress in economics was due to a dearth of data. Newtonian mechanics, which revolutionized physics in the seventeenth century, was built on systematic observations made by astronomers over several millennia. ‘Nothing of this sort has occurred in economic science,’ says Theory of Games. Sweeping generalizations had been made on the basis of very little evidence. Mathematics had been used as window dressing to disguise the perilously weak foundations on which the discipline stood. ‘Economists frequently point to much larger, more “burning” questions, and brush everything aside which prevents them from making statements about these,’ the book continues. ‘The experience of more advanced sciences, for example physics, indicates that this impatience merely delays progress.’
- Von Neumann had achieved the supposedly impossible – a rigorous way to assign numbers to nebulous human desires and predilections. ‘To this day the most important theory in the social sciences’ was how Nobel laureate Daniel Kahneman described von Neumann’s accomplishment in 2011, more than sixty years after Theory of Games first appeared.50 The influence of utility theory and the notion of the rational calculating individual that is at its heart would quickly reach far beyond the ivory tower.
- So far, von Neumann has dealt only with games that are zero-sum. But life is rarely a game of total conflict. Economic growth is not zero-sum: the world is a more prosperous place now than it was 200 years ago. Often a situation really is win-win. Sometimes, everyone loses. ‘Zero-sum games are to the theory of games what the twelve-bar blues is to jazz: a polar case, and a historical point of departure,’ says economist Michael Bacharach.
- Von Neumann does not discuss in detail the related situation of two sellers and one buyer (a ‘monopsony’). Rather cheekily, he leaves the maths as an exercise for the reader. But by analogy with the case of two buyers, either the sellers compete and the buyer picks up the product cheaply (below the more expensive seller’s minimum price) or the sellers form a coalition, driving up the price until they hit the unfortunate buyer’s spending limit.
- The discussion of the one-, two- and three-player market games is more or less the sum total of the offerings to economists in Theory of Games – no more than a tantalizing hint of its potential. Luckily, game theory’s applications to the business world soon had an eloquent, if unlikely, champion in the form of the American business journalist John McDonald. An ardent Trotskyite, McDonald travelled to Mexico in 1937 to serve on his hero’s secretarial staff. In 1945, he joined Fortune magazine and wrote a string of books and articles popularizing game theory with the help of Theory of Games’ two authors.
- Von Neumann and Morgenstern’s Theory of Games and Economic Behavior appeared in 1944. The first edition promptly sold out – an article about the book on the front page of the New York Times and a dozen glowing reviews in prestigious journals turned the opus into an unlikely best-seller. ‘Ten more such books,’ declared one reviewer, ‘and the progress of economics is assured.’
- Another economics Nobel laureate, psychologist Daniel Kahneman, challenged game theory’s assumption that humans are entirely rational and have preferences and tastes that never change. An admirer of von Neumann, ‘one of the giant intellectual figures of the twentieth century’, Kahneman and his close collaborator, Amos Tversky, studied how real people actually make decisions and devised their own ‘prospect theory’ to explain findings that ran counter to some of utility theory’s predictions.
- Tech firms themselves have employed leading game theorists to help design online advertising marketplaces, bidding systems, product-ranking algorithms – and ways to stay ahead of regulators.79 The most useful – and profitable – area of application has been in the realm of auction design – specifically those that determine the price of keywords used to place ads in search results.80 Keyword auctions are now responsible for a large chunk of income for a swathe of Internet companies – making billions for Google, as well as other firms better known for selling goods rather than ads, including Amazon, Apple and Alibaba. Game theorists have since been drawn into every corner of Internet commerce, from pricing cloud computing services to taxi rides, and the design of addictive reward and ratings systems that keep users coming back for more.
- Hamilton’s theory, now known as ‘inclusive fitness’, was popularized by Richard Dawkins in The Selfish Gene.
- Price became so appalled by the idea that altruistic behaviour could be adequately explained by selfishness, rather than by the existence of some nobler motivation, that he took to performing random acts of kindness in a bid to convince himself he was mistaken. He ultimately became so depressed that, in 1975, he ended his own life by slitting his carotid artery with a pair of nail scissors.
- Hence the tournament: would the most selfish strategies come out on top? They did not. The winning strategy, called ‘Tit-for-Tat’, came from game theorist Anatol Rapoport. It was incredibly simple: cooperating by default but behaving selfishly when the opponent did. Axelrod’s game had illustrated that cooperation might develop even if animals were evolutionarily inclined to act purely in their own self-interest.
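The tournament dynamic is easy to reproduce in code. The sketch below is my own simplified reconstruction, not Axelrod's actual setup: an iterated prisoner's dilemma with the standard payoff values, pitting Rapoport's Tit-for-Tat against an always-defect strategy.

```python
# Payoffs (my_points, their_points) for (my_move, their_move);
# 'C' = cooperate, 'D' = defect. Standard prisoner's dilemma values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last round."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated prisoner's dilemma and return the total scores."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): exploited once, then retaliation
```

Two Tit-for-Tat players cooperate throughout and both prosper; against a pure defector, Tit-for-Tat is exploited exactly once and then holds its own, which is why the strategy dominated Axelrod's round-robin scoring.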
- ‘That one can use the same set of tools to analyse a game of tennis, the decision of when to run for office, predator–prey relationships, how much to trust a stranger, and how much to contribute to a public good makes game theory one of the most important analytic tools available to all of the social sciences,’ noted Ostrom in 2012.83 Ostrom’s list is missing one of the earliest applications of game theory, one with which it remains practically synonymous even today. While economists were still scratching their heads over von Neumann and Morgenstern’s tome, the US military quickly saw its worth for honing nuclear strategy. Much of that honing would be done at the RAND Corporation, a global policy think tank based a block from the beach in Santa Monica, California, that would quickly become a veritable Who’s Who of game theorists charged with ‘thinking about the unthinkable’.
- If any one man can be regarded as the founding father of the RAND Corporation, then that man would be Henry ‘Hap’ Arnold, the commanding general of the US Air Force during the Second World War.
- Months before the end of the war, Arnold began to worry that the scientific expertise assembled to aid the American military would quickly disperse after the conflict was over. Something of a visionary, Arnold foresaw the advent of intercontinental ballistic missiles (ICBMs). ‘Someday, not too distant,’ he wrote in 1943, ‘there can come streaking out of somewhere – we won’t be able to hear it, it will come so fast – some kind of gadget with an explosive so powerful that one projectile will be able to wipe out completely this city of Washington.’4 Arnold urged the Air Force to prepare for a future in which scientists would play a leading role in warfare. ‘For the last twenty years we have built and run the Air Force on pilots,’ he told the Navy top brass. ‘But we can’t do that anymore. We’ve got to think of what we’ll need in terms of twenty years from now.’5 On 7 November 1944, he wrote to his chief scientific adviser: ‘I believe the security of the United States of America will continue to rest in part in developments instituted by our educational and professional scientists. I am anxious that the Air Force’s post war and next war research and development be placed on a sound and continuing basis.’
- #Anduril
- ‘I told these scientists’, Arnold wrote in 1949, ‘that I wanted them to think … about supersonic-speed airplanes, airplanes that would move and operate without crew, improvements in bombs … defenses against modern and future aircraft … communication systems … television … weather, medical research, atomic energy.’ In short, anything that ‘might affect the development of the airpower to come’.
- #Anduril
- Thirteen months later, von Kármán and his colleagues presented Arnold with a massive thirty-three-volume report entitled Toward New Horizons.9 It did not disappoint him. ‘The scientific discoveries in aerodynamics, electronics and nuclear physics open new horizons for the use of air power,’ von Kármán wrote. His introduction preceded hundreds of pages of remarkably foresighted technical analysis, charting the way to developments such as intercontinental ballistic missiles and drones. Much of the information was culled from captured German scientists. The seeds of what would become the RAND Corporation appeared in a small section on the application of science to operations analysis – the brains of the war machine. The United States had assiduously developed expertise in mission-planning during the war. Ending that work, the report warned, would be ‘a great mistake’. Instead, there should be established ‘in peacetime a nucleus for scientific groups such as those which successfully assisted in the command and staff work in the field during the war. In these studies experts in statistical, technical, economic and political science must cooperate.’
- #Anduril
- Collbohm understood as well as anyone that science would play a pivotal role in any future conflict and was dismayed to see scientists drifting back to their universities at the end of the war.
- Arthur Raymond, Douglas Aircraft’s chief engineer, suggested the name: RAND for ‘Research ANd Development’.
- The think tank’s very first report was released on 2 May 1946. Preliminary Design of an Experimental World-Circling Spaceship concluded that ‘modern technology has advanced to a point where it now appears feasible to undertake the design of a satellite vehicle’. Such a craft would be ‘one of the most potent scientific tools of the Twentieth Century’ and the achievement ‘would inflame the imagination of mankind, and would probably produce repercussions in the world comparable to the explosion of the atomic bomb’. Eleven years later the Soviet Union put Sputnik into orbit, humbling the United States – and turbocharging both the space race and the arms race.
- For All Mankind
- During the Second World War, the AMP supported the new field of ‘operations research’, pioneered in Britain by the physicist Patrick Blackett. Operations research brought the methods of the sciences to bear on wartime problems. The idea was simple: collect and analyse as much data as possible, test hypotheses in the field and use the results to home in on solutions.
- Embedded academics
- With defence budgets tightening, spending on new weapons systems or military operations would have to be weighed carefully against other demands. Weaver’s solution to this problem was the notion of ‘military worth’, a simple score that captured all the complex pros and cons of such choices so that decisions could be made more easily. And the mathematical apparatus to carry out military worth calculations was that of game theory. ‘Military worth, as the phrase is here used, is closely related to the general concept of utility in economic theory,’ Weaver explained in a report in 1946,
- In September 1947, at a RAND-sponsored conference in New York, Weaver set out a manifesto for the nascent organization. Operations research had ‘resulted only from the pressure and necessity of war’, he said. RAND would provide in peacetime an environment where similar techniques could be more widely used for ‘analyzing general theories of warfare’. Chess master Emanuel Lasker’s Jazz Age dreams of a ‘science of contest’ were at last taking shape. ‘I assume that every person in this room is fundamentally interested in and devoted to what can broadly be called the rational life … as compared with living in a state of ignorance, superstition and drifting-into-whatever-may-come,’ Weaver continued. ‘I think that we are not interested in war but in peace … I assume that every person in this room is desperately dedicated to the ideals of democracy, and to so running our own business, so cleaning our own house, and so improving our own relations with the rest of the world that the value of those ideals in which we believe becomes thereby evident.’16 RAND analysts pride themselves on their dedication to the ‘rational life’ to the present day. The organization’s commitment to peace and democracy – at least beyond the borders of the United States – would be brought into question again and again.
- Williams would write a humorous primer on the subject, The Compleat Strategyst, strewn with in-jokes and featuring many of RAND’s analysts, transformed into comic characters. Translated into at least five languages including Russian, the book would become one of RAND’s most popular publications.
- ‘We would send you all working papers and reports of RAND which we think would interest you, expecting you to react (with frown, hint, or suggestion) when you had a reaction.’ For his services, von Neumann would receive US$200 a month – the average monthly salary at that time. The offer from Williams came with a charming stipulation: ‘the only part of your thinking time we’d like to bid for systematically is that which you spend shaving: we’d like you to pass on to us any ideas that come to you while so engaged’.
- According to journalist Clay Blair, RAND scientists came to him with a problem they thought too difficult to solve by conventional means: After listening to the scientists expound, Von Neumann broke in: ‘Well, gentlemen, suppose you tell me exactly what the problem is?’ For the next two hours the men at Rand lectured, scribbled on blackboards, and brought charts and tables back and forth. Von Neumann sat with his head buried in his hands. When the presentation was completed, he scribbled on a pad, stared so blankly that a Rand scientist later said he looked as if ‘his mind had slipped his face out of gear,’ then said, ‘Gentlemen, you do not need the computer. I have the answer.’ While the scientists sat in stunned silence, Von Neumann reeled off the various steps which would provide the solution to the problem. Having risen to this routine challenge, Von Neumann followed up with a routine suggestion: ‘Let’s go to lunch.’
- The young Nash had never lacked confidence. In 1948, his very first year as a graduate student, he arranged to see Einstein in his office at the IAS to discuss some pressing ideas on the interaction of particles with fluctuating gravitational fields. Nash spent nearly an hour trying to unwind his thoughts at Einstein’s blackboard but eventually came unstuck. ‘You had better study some more physics, young man,’ Einstein told Nash with a kind smile before sending him on his way.
- He had come up with a mathematical framework allowing the analysis of any type of game – whether zero-sum or not – with any number of participants, and showed that there are certain outcomes for all games in which no player can do any better by unilaterally changing their strategy. These kinds of solutions to a game are now called Nash equilibria. It was a staggering accomplishment, though no one, least of all Nash, had any idea how thoroughly useful his idea would prove to be.
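Nash's solution concept can be checked mechanically for any small game. The sketch below is my own illustration, using the standard prisoner's dilemma payoffs: it brute-forces every cell of a two-player payoff table and keeps those where neither player can do better by unilaterally changing strategy, which is exactly the definition of a pure-strategy Nash equilibrium.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Brute-force the pure-strategy Nash equilibria of a two-player game.
    `payoffs[(r, c)]` gives (row player's payoff, column player's payoff)."""
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_ok and col_ok:  # neither player gains by deviating alone
            equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma: mutual defection is the unique pure equilibrium,
# even though mutual cooperation pays both players more.
pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
print(pure_nash_equilibria(pd))  # [('D', 'D')]
```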
- Nash would later ascribe von Neumann’s coolness to a defensive response prompted by a Young Turk invading his turf. ‘I was playing a non-cooperative game in relation to von Neumann rather than simply seeking to join his coalition,’ he told the historian Robert Leonard. ‘And of course, it was psychologically natural for him not to be entirely pleased by a rival theoretical approach.’37 ‘Natural’, perhaps, from Nash’s point of view and in keeping with reports that von Neumann could react angrily to being contradicted.38 But von Neumann’s rather more magnanimous reaction to being corrected brusquely, and publicly, by Shapley the year before at RAND suggests there was more at stake for him here than the embarrassment of being outfoxed by a younger mathematician.
- Surveys of the roots and influence of game theory have generally taken a dim view of its progenitor. ‘Game theory portrays a world of people relentlessly and ruthlessly but with intelligence and calculation pursuing what each perceives to be his own interest,’ says the physicist turned historian Steve J. Heims. ‘The harshness of this Hobbesian picture of human behaviour is repugnant to many, but von Neumann would much rather err on the side of mistrust and suspicion than be caught in wishful thinking about the nature of people and society.’
- John Adams vs Thomas Jefferson #[[Human Nature]]
- By the time von Neumann visited Europe again in 1949, his belief in people had evaporated away altogether. ‘I feel the opposite of nostalgia for Europe,’ he wrote to Klári, ‘because every corner reminds me … of the world which is gone, and the ruins of which is no solace. My second reason for disliking Europe is the memory of my total disillusionment in human decency between 1933 and September 1938.’
- #[[Human Nature]]
- ‘It seems unlikely that the Nash equilibrium is in any realistic sense the correct solution,’ Flood notes.49 Though the participants were prohibited from reaching an understanding on dividing up the winnings, they leaned towards the von Neumann-Morgenstern solution of mutual cooperation.
- His answer was preventive war – a surprise attack that would wipe out the Soviet Union’s nuclear arsenal (and a good number of its people too) before the country was able to retaliate. ‘If you say why not bomb them tomorrow, I say why not today?’ he reportedly said in 1950. ‘If you say today at 5 o’clock, I say why not one o’clock?’
- The four-week hearing had begun on 12 April 1954. Oppenheimer’s security clearance was stripped on 29 June. In 2009, historians with access to the KGB archives found that Soviet intelligence had made many attempts to recruit Oppenheimer – but failed.
- His wife, Roberta, worked there too, first as a book reviewer in the Social Sciences Division, then as a highly respected analyst whose authoritative study on surprise attacks, published in 1962 as Pearl Harbor: Warning and Decision,73 would still be cited years later, including in 2004 by the 9-11 Commission.
- Known as ‘systems analysis’, their method, developed at RAND, was related to operations research but with a different emphasis. Operations research was a science of the possible: what can be achieved with the equipment and supplies available? Systems analysis, by contrast, was goal-orientated – what future weapons and strategies would be necessary to accomplish a specified mission? With its tacit commitment to considering every ‘rational’ eventuality, systems analysis is almost megalomaniac in its ambition.
- LeMay had led the Twentieth Air Force’s campaign of carpet-bombing Japanese cities. ‘All war is immoral,’ he once declared. ‘If you let that bother you, you’re not a good soldier.’ LeMay’s preferred nuclear strategy, the ‘Sunday Punch’, was ‘Massive Retaliation’ by another name: a no-holds-barred attack on the Soviet Union with every atom bomb at SAC’s disposal in response to any aggression.
- Wohlstetter continued thinking about America’s nuclear defences. At the end of the 1950s, he laid out his ideas in ‘The Delicate Balance of Terror’, an article that helped to shape strategic thought in America for decades.75 Wohlstetter attacked the widely held belief that the existence of two nuclear powers eliminated the risk of an all-out global war. There was no atomic stalemate, he argued. The West had been lulled into a false sense of security by imagining that Soviet leaders favoured attacking in ways that would result in plenty of warning for the US. ‘However attractive it may be for us to narrow Soviet alternatives to these, they would be low in the order of preference of any reasonable Russian planning war,’ he argued and, echoing game theory’s minimax principle, he added that ‘In treating Soviet strategies it is important to consider Soviet rather than Western advantage and to consider the strategy of both sides quantitatively. The effectiveness of our own choices will depend on a most complex numerical interaction of Soviet and Western plans.’
- #Anduril
- In 1950, RAND had produced a number of studies concluding that the development of long-range ballistic missiles should be an Air Force priority.76 Partly in response, the Defense Department initiated the Atlas Missile Project in 1951 to determine whether a rocket armed with a 3,000-pound warhead could be sent to destroy cities more than 5,000 miles away. But the bombs dropped on Hiroshima and Nagasaki were many times too heavy for Atlas, and the first US thermonuclear test, codenamed ‘Ivy Mike’, on 1 November 1952 had used a 74 metric-ton device, too heavy to be loaded on a plane, never mind a missile. Atlas was consequently a rather lower-priority project – a moonshot for the future. But in 1953, von Neumann, accompanied by Teller, told RAND’s physicists that the weaponeers at Los Alamos were on the verge of being able to make hydrogen bombs light enough to fit on rockets, and Hap Arnold’s vision of city-destroying projectiles that ‘come streaking out of somewhere’ could quickly be realized.
- #Anduril
- The Atlas programme’s managers had asked for missiles to be built to demanding specifications. They wanted a rocket that would fly halfway around the world at six times the speed of sound and land within a half-mile of the target. Augenstein realized that a lightweight hydrogen bomb rendered those requirements unnecessary. Using figures he got from Los Alamos, he calculated that a bomb weighing less than 1,500 pounds would produce a blast of several megatons. His research also suggested that the Russians would have trouble shooting down missiles travelling much slower than the speeds envisioned by those in charge of the project. Augenstein’s most significant discovery, however, was that the destructive power of the new warhead meant that a missile that landed between 3 and 5 miles away from the target would be sufficient – and within the capabilities of contemporary missile guidance technology. The US could develop ICBMs years earlier than the Atlas programme envisaged, perhaps as soon as 1960. And Augenstein knew that if he had reached these conclusions, the Russians had too – only earlier. Augenstein’s report landed on Collbohm’s desk on 11 December 1953. Much impressed, Collbohm took it to Washington the next day to try to convince senior Air Force officers of the urgency of the situation. They wanted to wait. In October, the Air Force had itself assigned eleven of the country’s leading scientists and engineers to examine the feasibility of ICBMs – with von Neumann as their chairman. Codenamed the ‘Teapot Committee’, the panel had started their deliberations the previous month. Augenstein returned to RAND to prepare a formal report that would flesh out the technical details of the missiles and estimate how many missiles of lesser accuracy would be needed to destroy Soviet cities. 
The Air Force received Augenstein’s final analysis, entitled A Revised Development Program for Ballistic Missiles of Intercontinental Range, on 8 February 1954 – two days ahead of von Neumann’s committee. Their conclusions and recommendations were almost identical. Within a couple of months of the two reports, the US had relaxed the tight strictures imposed on the Atlas project and started a fast-track programme to develop missiles tipped with H-bombs.
- #Anduril
- To the chagrin of his colleagues, Kahn toured the US cheerfully recasting their theories as provocatively as possible, rapidly becoming the most infamous of RAND’s ‘defence intellectuals’. ‘Thinking about the Unthinkable’, an idea virtually synonymous with the RAND ethos, was the title of one of Kahn’s books and reflected game theory’s rational, at times pathological, precept of imagining the worst possible response to any policy. A true ‘jester of death’,77 Kahn played deterrence theory for laughs, delivering his lines with relentless deadpan humour while he reasoned his way through the apocalypse, always willing to go a step further than anyone else.
- Kahn eventually compiled his early lectures on deterrence into a massive tome of more than 600 pages and gave a copy to Wohlstetter, who advised him to burn it.81 Instead, he published it, and ‘On Thermonuclear War’ went on to sell a remarkable 30,000 copies in hardback.82 In it, Kahn asserted that nuclear war with the Soviet Union might be survivable and ‘would not preclude normal and happy lives for the majority of survivors and their descendants’. ‘Will the survivors envy the dead?’ Kahn asked at the foot of one table before concluding that they would not. The table, headed ‘Tragic but Distinguishable Postwar States’, listed numbers of dead (from 2 to 160 million) against the time Kahn thought that the economy would take to recover (up to 100 years).
- Pacifists, including Russell, felt Kahn had inadvertently made the case for universal disarmament. One of the most notorious reviews of Kahn’s book, by mathematician James Newman, appeared in Scientific American.85 ‘Is there really a Herman Kahn? It is hard to believe … No one could write like this; no one could think like this,’ wrote Newman. ‘Perhaps the whole thing is a staff hoax in bad taste.’86 ‘This is a moral tract on mass murder,’ Newman continued, ‘how to plan it, how to commit it, how to get away with it, how to justify it.’
- Von Neumann’s last word on nuclear deterrence was published in 1955. Defense in Atomic War expresses the new bomb’s power in stark terms.90 ‘The increases of firepower which are now before us are considerably greater than any that have occurred before,’ he says. The entire tonnage of TNT dropped on all battlefields during all of World War II by all belligerents was a few million tons. We delivered more explosive power than this in a single atomic blast.91 Consequently, we can pack in one airplane more firepower than the combined fleets of all the combatants during World War II.
- As a full-force nuclear attack from one side is impossible to defend against, ‘this will probably mean you will be forced not to ‘do your worst’ at all times, because then when the enemy does his worst you cannot defend against it … Hence, you may have to hold this trump card in reserve.’
- Counterforce was quintessential RAND, epitomizing the think tank’s quest for, as Kahn put it, ‘more reasonable forms of using violence’. The problem was that avoiding bloodshed was not a universally popular idea within the US military. The Strategic Air Command (SAC), in charge of America’s bombers and ICBMs, was particularly hostile to the new strategy. Counterforce would find more receptive ears in government after the 1960 election of President John F. Kennedy, whose campaign was covertly aided by some of RAND’s experts. Among them was Daniel Ellsberg, who would leak the top secret ‘Pentagon Papers’, with their damaging revelations about the Vietnam War, to the press in 1971. Kennedy’s defence secretary, Robert McNamara, would hustle a host of RAND analysts, including Kaufmann, into the White House. The young ‘Whizz Kids’, as they became known, earned the enmity of the Air Force, their one-time sponsors, as their systems analysis studies undermined prized bomber and rocket projects – but supported accelerating the Navy’s submarine-launched Polaris missiles and expanding the Army’s conventional forces. Tired of having their defeats rubbed in their faces by the new gang of Ivy Leaguers, the Air Force soon hired their own analysts, and the Navy and the Army followed suit. RAND’s methods were embedded in US military thinking, shaping the country’s approach to theoretical nuclear conflicts – and to the very real ‘little wars’ to come in Southeast Asia and elsewhere.
- #Anduril - Polaris missiles and the chart of increased precision
- In June 2019, the Pentagon accidentally published to its website the US military’s guidelines for planning and executing small-scale nuclear warfare. The sixty-page document, JP 3-72 on Joint Nuclear Operations, was quickly removed – but not before it had been downloaded by the Federation of American Scientists (FAS), a charity founded by Manhattan Project researchers in 1945 that is devoted to peaceful uses of atomic energy.93 The report’s focus is worst-case scenarios, and its emphasis is on fighting wars, rather than deterrence. Critics aver that such talk of limited nuclear war helps convince America’s enemies that the US would be prepared to use the bomb – increasing the chances that someone will. By imagining the worst, the worst is brought a step closer to being realized.
- Tactical nuclear weapons - #Anduril
- The dilemma is one that RAND’s analysts would have recognized. Seventy years after they began applying the tools of game theory to nuclear strategy, the stakes are higher than ever, the bombs dropped on Hiroshima and Nagasaki mere firecrackers compared to some of the bombs in the American and Russian arsenals. More countries now possess the weapons, and others are threatening to produce them. The expertise required to build a device is now widespread enough that it is not unfeasible that a well-organized terrorist group could build one. So a strategy document from the world’s most powerful bearer of nuclear arms might be expected to be unrecognizable in its scope compared with the stuff produced by the Cold-War-hardened strategists of the 1950s. Much of the report, however, is strikingly familiar – not least the epigraph that begins the third chapter on ‘Planning and Targeting’: ‘My guess is that nuclear weapons…
- #Anduril - tactical nukes
- ‘The most spectacular event of the past half century is one that did not occur,’ said Schelling in 2005, a couple of days before collecting his Nobel Prize. ‘We have enjoyed sixty years without nuclear weapons exploded in anger.’ Schelling attributed our ‘stunning good fortune’ to an unspoken taboo against the use of even the ‘smallest’ bomb. Should the horror of Hiroshima and Nagasaki fade from the public consciousness, he warned, should more nations or even terrorist groups acquire nuclear…
- #Anduril - tactical nukes
- Machines to make machines and machines to make minds ‘“The androids,” she said, “are lonely too.”’ Do Androids Dream of Electric Sheep?, Philip K. Dick, 1968
- The ‘Snappy’ is a RepRap – a self-copying 3D printer – that is able to print around 80 per cent of its own parts.
- #Anduril
- Engineer and mathematician Adrian Bowyer first conceived the idea he calls ‘Darwinian Marxism’ in 2004 – that eventually everyone’s home will be a factory, producing anything they want (as long as it can be made out of plastic, anyway). Engineers at Carleton University in Ottawa are working on filling in that stubborn last 20 per cent, to create a printer that can fully replicate itself even if you don’t have a DIY shop handy. Specifically, they are thinking about using only materials that can be found on the surface of the moon. Using a RepRap as their starting point, the researchers have begun to design a rover that will print all its parts and the tools it needs to copy itself using only raw materials harvested in situ by, for example, smelting lunar rock in a solar furnace.2 They have also made experimental motors and computers with McCulloch–Pitts-style artificial neurons to allow their rover to navigate. Semiconductor-based electronic devices would be practically impossible to make on the moon, so in a charming 1950s twist, they plan to use vacuum tubes instead. ‘When I came across RepRap, although it was a modest start, for me it was catalytic,’ says Alex Ellery, who leads the group. ‘What began as a side project now consumes my thoughts.’
- Once they are established on the moon, Ellery’s machines could multiply to form a self-expanding, semi-autonomous space factory making … virtually anything. They might, for example, print bases ready for human colonizers – or even, as Ellery hopes, help to mitigate global warming by making swarms of miniature satellites that can shield us from solar radiation or beam energy down to Earth.
- #Anduril - 3D printing and space warfare
- The inspiration for all these efforts and more is a book entitled Theory of Self-reproducing Automata; its author, John von Neumann.
- #Anduril
- From 1944, meetings instigated by Norbert Wiener helped to focus von Neumann’s thinking about brains and computers. In gatherings of the short-lived ‘Teleological Society’, and later in the ‘Conferences on Cybernetics’, von Neumann was at the heart of discussions on how the brain or computing machines generate ‘purposive behaviour’. Busy with so many other things, he would whizz in, lecture for an hour or two on the links between information and entropy or circuits for logical reasoning, then whizz off again – leaving the bewildered attendees to discuss the implications of whatever he had said for the rest of the afternoon. Listening to von Neumann talk about the logic of neuro-anatomy, one scientist declared, was like ‘hanging on to the tail of a kite’.
- The theory of automata was first unveiled in a lecture in Pasadena on 24 September 1948, at the Hixon Symposium on Cerebral Mechanisms in Behaviour and published in 1951.7 Von Neumann had been thinking about the core ideas for some time, presenting them first in informal lectures in Princeton two years earlier. His focus had shifted subtly. Towards the end of his lecture, von Neumann raises the question of whether an automaton can make another one that is as complicated as itself. He notes that at first sight, this seems untenable because the parent must contain a complete description of the new machine and all the apparatus to assemble it. Although this argument ‘has some indefinite plausibility to it,’ von Neumann says, ‘it is in clear contradiction with the most obvious things that go on in nature. Organisms reproduce themselves, that is, they produce new organisms with no decrease in complexity. In addition, there are long periods of evolution during which the complexity is even increasing.’ Any theory that claims to encompass the workings of artificial and natural automata must explain how man-made machines might reproduce – and evolve. Three hundred years earlier, when the philosopher René Descartes declared ‘the body to be nothing but a machine’ his student, the twenty-three-year-old Queen Christina of Sweden, is said to have challenged him: ‘I never saw my clock making babies’.8 Von Neumann was not the first person to ask the question ‘can machines reproduce?’ but he would be the first to answer it.
- Von Neumann begins by considering what a Turing-machine-like automaton would need to make copies of itself, rather than just compute. He argues that three things are necessary and sufficient. First, the machine requires a set of instructions that describe how to build another like it – like Turing’s paper tape but made of the same ‘stuff’ as the machine itself. Second, the machine must have a construction unit that can build a new automaton by executing these instructions. Finally, the machine needs a unit that is able to create a copy of the instructions and insert them into the new machine.
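Von Neumann's three components can be caricatured in a few lines of Python. This is only a toy sketch of the logical scheme (the names are mine, not von Neumann's): a description plays the role of Turing's tape, `construct` the construction unit, and the explicit duplication of the description the copier.

```python
class Automaton:
    """Toy model of von Neumann's self-reproducing machine."""

    def __init__(self, description):
        # 1. Instructions describing how to build a machine like this one.
        self.description = description

    def construct(self):
        # 2. Construction unit: build a new automaton by executing the instructions.
        child = Automaton(description=None)
        # 3. Copier: duplicate the instructions and insert them into the child.
        child.description = list(self.description)
        return child

parent = Automaton(["constructor", "copier", "tape"])
child = parent.construct()
grandchild = child.construct()  # the child is a full replicator too
```

Crucially, the description is copied verbatim rather than re-derived by the child, so the lineage never loses complexity – which is the whole point of von Neumann's third unit.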
- Five years before the discovery of the structure of DNA in 1953, and long before scientists understood cell replication in detail, von Neumann had laid out the theoretical underpinnings of molecular biology by identifying the essential steps required for an entity to make a copy of itself. Remarkably, von Neumann also correctly surmised the limits of his analogy: genes do not contain step-by-step assembly instructions but ‘only general pointers, general cues’ – the rest, we now know, is furnished by the gene’s cellular environment.
- That von Neumann was wary of journalists sensationalizing his work and too busy to popularize the theory himself probably did not help (Schrödinger, by contrast, had the lay reader in mind as his audience).
- In lectures over the next few years and an unfinished manuscript, von Neumann began to detail his theory about automata – including a vision of what his self-reproducing machines might look like. His work would be meticulously edited and completed by Arthur Burks, the mathematician and electrical engineer who had worked on both the ENIAC and von Neumann’s computer project. The resulting book, Theory of Self-reproducing Automata, would only appear in 1966.14
- The second, which appeared in Galaxy Science Fiction later that year, was by the author Philip K. Dick, whose work would form the basis of films such as Blade Runner (1982), Total Recall (1990 and 2012) and Minority Report (2002).16 His ‘Autofac’ is the tale of automatic factories set on consuming the Earth’s resources to make products that no one needs – and more copies of themselves. Dick closely followed von Neumann’s career, and his story had been written the year before the Scientific American piece about automata appeared.
- ‘Floccinaucinihilipilification’, meaning ‘the habit of regarding something as worthless’, was Conway’s favourite word. It was how he thought fellow mathematicians felt about his work.
- They shared Conway’s belief that extraordinarily complex phenomena may be underpinned by very simple rules. Termites can build fabulous mounds several metres tall, but, as renowned biologist E. O. Wilson notes, ‘no termite need serve as overseer with blueprint in hand’.
- Fredkin was driven by a belief that irresistibly drew him to Toffoli’s work. ‘Living things may be soft and squishy. But the basis of life is clearly digital,’ Fredkin claimed. ‘Put it another way – nothing is done by nature that can’t be done by a computer. If a computer can’t do it, nature can’t.’36 These were fringe views even in the 1970s, but Fredkin could afford not to care. He was a millionaire many times over thanks to a string of successful computer ventures and had even bought his own Caribbean island. Unafraid of courting controversy, Fredkin once appeared on a television show and speculated that, one day, people would wear nanobots on their heads to cut their hair. When he contacted Toffoli, Fredkin was busy assembling a group at MIT to explore his interests – particularly the idea that the visible manifestations of life, the universe and everything were all the result of a code script running on a computer. Fredkin offered Toffoli a job at his new Information Mechanics Group. Toffoli accepted.
- But there was trouble in paradise, in the shape of a young mathematician by the name of Stephen Wolfram. Wolfram cuts a divisive figure in the scientific world. He won a scholarship to Eton but never graduated. He went to Oxford University but, appalled by the standard of lectures, he dropped out. His next stop was the California Institute of Technology (CalTech), where he completed a PhD in theoretical physics. He was still only twenty. He joined the IAS in 1983 but left academia four years later after founding Wolfram Research. The company’s flagship product, Mathematica, is a powerful tool for technical computing, written in a language he designed. Since its launch in 1988, millions of copies have been sold.
- Like Fredkin, Wolfram thinks that the complexity of the natural world arises from simple computational rules – possibly just one rule – executed repeatedly.38 Wolfram guesses that a single cellular automaton cycling through this rule around 10^400 times would be sufficient to reproduce all known laws of physics.
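An elementary cellular automaton of the kind Wolfram studies fits in a few lines. The sketch below implements rule 110 (one of Wolfram's favourite examples, later proved computationally universal) on a ring of cells; the binary digits of the rule number are simply the lookup table for the eight possible three-cell neighbourhoods.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state is bit v of `rule`, where v encodes its three-cell neighbourhood."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell; repeated application of the one rule
# produces rule 110's intricate, endlessly growing pattern.
row = [0] * 15 + [1] + [0] * 15
history = [row]
for _ in range(10):
    row = step(row)
    history.append(row)
```

Printing `history` with `#` for live cells shows the characteristic leftward-growing triangle structure from which Wolfram draws his larger claims.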
- 2002, with A New Kind of Science.46 That book, the result of ten years of work done in hermit-like isolation from the wider scientific community, is Wolfram’s first draft of a theory of everything. The 1,280-page tome starts with characteristic modesty. ‘Three centuries ago science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world,’ he declares. ‘My purpose in this book is to initiate another such transformation.’ And that transformation, he explained, would be achieved by finding the single ‘ultimate rule’ underlying all other physical laws – the automaton to rule them all, God’s four-line computer program. Wolfram had not found that rule. He was not even close. But he did tell a journalist he expected the code to be found in his lifetime – perhaps by himself.
- Humans rise up against ceaselessly productive machines in Philip K. Dick’s ‘Autofac’.
- The problem with Wolfram’s models, as his critics were quick to point out, is that it is impossible to determine whether they reflect reality unless they make falsifiable predictions – and Wolfram’s do not (yet). Wolfram hoped that his book would galvanize researchers to adopt his methods and push them further, but his extravagant claim of founding ‘a new kind of science’ got short shrift from many scientists.
- A New Kind of Science is a beautiful book. It may prove in time also to be an important one. The jury, however, is still very much out.
- ‘If people do not believe that mathematics is simple,’ von Neumann once said, ‘it is only because they do not realize how complicated life is.’
- A true maverick, Barricelli was never awarded a doctorate because he refused to cut his 500-page thesis to a length acceptable to his examiners.
- Langton had almost cried tears of joy when he stepped up to deliver his lecture at the Los Alamos conference in 1987. His reflections afterwards were more sombre. ‘By the middle of this century, mankind had acquired the power to extinguish life,’ he wrote. ‘By the end of the century, he will be able to create it. Of the two, it is hard to say which places the larger burden of responsibilities on our shoulders.’
- ‘If the model designed for the seashore proved a success, the next step would be to tackle the harder problems of designing artificial living plants for the ocean surface, for desert regions or for any other locality having much sunlight but not now under cultivation,’ said Moore. ‘Even the unused continent of Antarctica,’ he enthused, ‘might be brought into production.’
- The risk that the Earth might be swamped by machines intent on multiplying themselves endlessly could be entirely mitigated, they conjectured – by sending them into space. In recognition of the fact these types of craft are inspired by the original theory of self-reproducing automata, they are now called von Neumann probes.
- In 2004, he and computer scientist Ralph Merkle would produce the veritable bible of self-replicating technology, Kinematic Self-Replicating Machines, a definitive survey of all such devices, real or imagined.
- In 1980, at the request of President Jimmy Carter, NASA convened a workshop in Santa Clara on the role of artificial intelligence and automation in future space missions. Eighteen academics were invited to work with NASA staff. By the time the final report was filed, the exercise had cost over US$11 million. The group quickly settled on four areas that they thought would require cutting-edge computing and robotics, then split into teams to flesh out the technical requirements and goals of each mission. The ideas included an intelligent system of Earth observation satellites, autonomous spacecraft to explore planets outside the Solar System and automated space factories that would mine and refine materials from the Moon and asteroids. The fourth proposal was considered the most far-fetched of all. Led by Richard Laing, the team behind it laid out how a von Neumann-style automaton might colonize the Moon, extraterrestrial planets and, in time, the far reaches of outer space. ‘Replicating factories should be able to achieve a very general manufacturing capability including such products as space probes, planetary landers, and transportable ‘seed’ factories for siting on the surfaces of other worlds,’ they declared in their report. ‘A major benefit of replicating systems is that they will permit extensive exploration and utilization of space without straining Earth’s resources.’
- #[[Forgotten Ideas]]
- The SRS team produced two detailed designs for fully self-replicating lunar factories. The first unit is a sprawling manufacturing hub that strip-mines surrounding land to make commercial products or new copies of itself. A central command and control system orchestrates the whole operation. Extracted materials are analysed and processed into industrial feedstock and stored in a materials depot. A parts production plant uses this feedstock to make any and all components the factory needs. These parts are then either transported to a facility to make whatever products the Earth commands or to a universal constructor, which assembles more factories. A drawback of this scheme is that a whole factory has to be built on the Moon before the automaton can replicate. The team’s second design, a ‘Growing Lunar Manufacturing Facility’ avoids this difficulty by requiring nothing more to start construction than a single 100 ton spherical ‘seed’ craft, packed with robots dedicated to different tasks. Dropped onto the lunar surface, the seed cracks open to release its cargo. Once more, a master computer directs the action. First scouting bots survey the immediate surroundings of the seed to establish where exactly the facility should be built. A provisional solar array is erected to provide power. Five paving robots roll out of the craft and construct solar furnaces to melt lunar soil, casting the molten rock into basalt slabs. The slabs are laid down to form the factory’s foundations, a circular platform 120 metres in diameter. Working in parallel, other robots begin work on a massive roof of solar cells that will eventually cover the entire workspace, supplying the unit with all the power it needs for manufacturing and self-replication. Meanwhile, sectors for chemical processing, fabrication and assembly are set up. Within a year of touchdown, the team predicted, the first self-replicating factory on the Moon will be fully functional, churning out goods and more factories.
- The dream has not been forgotten. In 2021, the Initiative for Interstellar Studies, a charity based in London, unveiled an updated design of a von Neumann probe that they say could be built within a decade.78 Ellery’s group is whittling away at the last few percentage points that will get them to 100 per cent closure. The faithful wait for the call from a patron truly prepared to ‘dare mighty things’.
- The first to articulate this vision was American engineer Eric Drexler, who began using the term ‘nanotechnology’ to describe the new field of molecular manufacturing. ‘When biochemists need complex molecular machines, they still have to borrow them from cells,’ Drexler says in his influential book Engines of Creation. ‘Nevertheless, advanced molecular machines will eventually let them build nanocircuits and nanomachines as easily and directly as engineers now build microcircuits or washing machines.’
- Drexler had come across the transcript of a provocative talk entitled ‘There’s Plenty of Room at the Bottom’ given by Richard Feynman in 1959. ‘What would happen’, Feynman asks, ‘if we could arrange the atoms one by one the way we want them?’ Fired up by the idea, which was consistent with his own thinking at the time, Drexler wanted to realize Feynman’s vision. In Engines of Creation, he proposes an ‘assembler’ a few billionths of a metre long that is able to both replicate itself and make other machines, atom by atom.
- Schelling had produced two powerful conclusions with a fairly elementary model. First, cities can become segregated along lines of race even if no one minds living in a mixed community. Second, only an active desire for diversity leads to diverse neighbourhoods. Indifference results in segregation.
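Schelling's result is easy to reproduce. The sketch below is a minimal version of his model (grid size, thresholds and move rule are my own assumptions): agents of two types sit on a grid, and any agent with fewer than 30 per cent same-type neighbours moves to a random empty cell. Even that mild tolerance-based preference pushes average neighbour similarity well above its random starting level.

```python
import random

def similarity(grid, w, h):
    """Average fraction of same-type neighbours over all occupied cells."""
    total = count = 0
    for (x, y), t in grid.items():
        if t is None:
            continue
        occ = same = 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    n = grid.get(((x + dx) % w, (y + dy) % h))
                    if n is not None:
                        occ += 1
                        same += (n == t)
        if occ:
            total += same / occ
            count += 1
    return total / count

def schelling(w=20, h=20, threshold=0.3, steps=20000, seed=1):
    """Run the model; return neighbour similarity before and after."""
    rng = random.Random(seed)
    # Each cell is type 0, type 1, or empty (None), roughly a third each.
    grid = {(x, y): rng.choice([0, 1, None]) for x in range(w) for y in range(h)}
    empties = [p for p, t in grid.items() if t is None]
    agents = [p for p, t in grid.items() if t is not None]
    before = similarity(grid, w, h)
    for _ in range(steps):
        p = rng.choice(agents)
        t = grid[p]
        occ = same = 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    n = grid.get(((p[0] + dx) % w, (p[1] + dy) % h))
                    if n is not None:
                        occ += 1
                        same += (n == t)
        if occ and same / occ < threshold:  # unhappy: relocate to a random empty cell
            q = empties[rng.randrange(len(empties))]
            grid[q], grid[p] = t, None
            empties[empties.index(q)] = p
            agents[agents.index(p)] = q
    return before, similarity(grid, w, h)

before, after = schelling()
```

Note that nobody here minds a mixed neighbourhood – agents are content with a 30 per cent minority of their own kind – yet the grid still segregates. In Schelling's terms, mere indifference is not enough; only an active preference for mixing keeps neighbourhoods diverse.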
- ‘The computer – a new form of life dedicated to pure thought – will be taken care of by its human partners,’ predicted the astronomer Robert Jastrow in 1981, ‘who will minister to its bodily needs with electricity and spare parts. Man will also provide for computer reproduction … We are the reproductive organs of the computer.’91 He was almost right. Forty years after Jastrow wrote those words, there are 2 billion computers in the world. But their numbers have been dwarfed by another more fecund automaton – the smart phone. First surpassing the world’s population in 2014, there are now over 10 billion SIM cards in use worldwide, far exceeding the number of people on the planet.92 Over 1.5 billion smart phones were sold in 2019, and they are multiplying in unprecedented numbers, out-breeding humans by a ratio of at least ten to one. Those billions of SIMs are for now largely being used by human beings. That may soon change. For while we use our devices to chat to each other across the ether, more and more of them have started talking to each other too …
- A rich cornucopia of possibilities flowed from von Neumann’s lectures on ‘The General and Logical Theory of Automata’ at the Hixon Symposium on Cerebral Mechanisms in Behaviour in 1948. But von Neumann was interested in more than merely proving the possibility of replicating machines. As suggested by ‘cerebral mechanisms’ in the symposium’s title, von Neumann’s automata theory also served to introduce his ideas about the workings of the brain.
- In the audience of von Neumann’s Hixon Symposium lecture was a recent mathematics graduate named John McCarthy. He too was electrified and decided there and then that he would try to develop thinking machines.94 His idea was to produce smart machines by tapping evolution. ‘My idea was to experiment with automata,’ McCarthy says. ‘One automaton interacting with another which would be its environment. You would experiment to see if you could get a smart one.’ McCarthy was proposing to bootstrap his way to machine intelligence by pitting one automaton against another. McCarthy wrote to von Neumann with the idea, and he liked it. The next year, McCarthy went to Princeton to start a doctorate and met with the great mathematician to discuss the scheme again. ‘Write it up, write it up!’ von Neumann urged. Despite performing a few preliminary experiments, McCarthy never did. But the passion for making machines that think, sparked by von Neumann’s talk, stayed with him. It was McCarthy who later coined the term ‘artificial intelligence’ and, together with Marvin Minsky, founded one of the first labs dedicated to its study at MIT in the late 1950s.
- Yet he had managed to flesh out his thoughts sufficiently for them to be published as a book the year after his death. In The Computer and the Brain von Neumann systematically compares the powers of the machines he helped to invent with the computation that goes on in the soft machine inside the human skull.96 Brains don’t look so impressive. A neuron can fire perhaps 100 times a second, while the best computers at the time were already able to carry out a million or more operations a second – and a modern laptop is at least a thousand times faster than this. Worse, neurons were billions of times less accurate than computer components: every time a signal is transmitted from one neuron to another, there is a risk that errors are exacerbated.
- This was the first time anyone had so clearly compared brains and computers. ‘Prior to von Neumann,’ says inventor and futurologist Ray Kurzweil, ‘the fields of computer science and neuroscience were two islands with no bridge between them.’
- And the metaphor has proved extraordinarily useful for computer scientists working on neural networks and artificial intelligence – albeit after a series of false starts. As von Neumann was writing his Silliman lectures, psychologist Frank Rosenblatt was improving on the McCulloch-Pitts artificial neuron by making one that was capable of learning. Initial high hopes for this ‘perceptron’ faded; in 1969, Minsky and Seymour Papert published a damning book assessing the computational limitations of simple networks of artificial neurons. Funding dried up in the 1970s – the first ‘AI winter’ – and then renewed optimism in the 1980s gave way to a second AI winter, triggered in part by researchers who argued that progress required feedback from the senses (cameras and microphones) and interactions with the real world (through pressure or temperature sensors, for example). Despite the criticisms, in recent years, artificial intelligence algorithms have started to achieve staggering feats – from beating world champion board-game players to teaching themselves how to program – and these algorithms are often run on neural networks composed of artificial neurons not so different from Rosenblatt’s perceptrons.
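The perceptron that Rosenblatt built on top of the McCulloch-Pitts neuron can be illustrated with a short sketch. This is not Rosenblatt's historical implementation (which was hardware), just a minimal modern rendering of the idea the highlight describes: a single artificial neuron that learns by nudging its weights whenever it misclassifies an example.

```python
# Illustrative sketch of the perceptron learning rule: a single
# artificial neuron adjusts its weights only when it makes a mistake.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n  # weights start at zero
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on misclassified examples
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn the logical AND function - linearly separable, so the rule converges
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

The limitation Minsky and Papert exposed falls out of this same sketch: a single such neuron can only draw a straight line through its inputs, so a non-linearly-separable function like XOR is beyond it no matter how long it trains.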
- Some futurologists are now speculating that a superhuman artificial intelligence could transform human society beyond all recognition. That possibility has become known as the technological ‘singularity’ – and that term was first used by someone who had foreseen the possibility decades earlier: John von Neumann.
- ‘Von Neumann, when I was there at Princeton, was under extreme pressure,’ says Benoît Mandelbrot, who had come to the IAS in 1953 at von Neumann’s invitation, ‘from mathematicians, who were despising him for no longer being a mathematician; by the physicists, who were despising him for never having been a real physicist; and by everybody for having brought to Princeton this collection of low-class individuals called “programmers”’.
- The curse of being an interdisciplinary genius
- ‘There probably is a God,’ he had once told his mother. ‘Many things are easier to explain if there is than if there isn’t.’
- Marina says her father was thinking of Pascal’s wager and had always believed that in the face of even a small possibility of suffering eternal damnation the only logical course is to be a believer before the end: ‘My father told me, in so many words, once, that Catholicism was a very rough religion to live in but it was the only one to die in.’
- Another frequent visitor to von Neumann’s bedside was Teller. ‘I have come to suspect,’ he said later, ‘that to most people thinking is painful. Some of us are addicted to thinking. Some of us find it a necessity. Johnny enjoyed it. I even have the suspicion that he enjoyed practically nothing else.’ ‘When he was dying of cancer, his brain was affected,’ Teller recalled. ‘I think that he suffered from this loss more than I have seen any human to suffer in any other circumstances.’
- ‘For the first time in my life I have relaxed and stopped chasing rainbows,’ she wrote on the last page of her unfinished memoirs. The chapter entitled ‘Johnny’ begins, ‘I would like to tell about the man, the strange contradictory and controversial person; childish and good-humoured, sophisticated and savage, brilliantly clever yet with a very limited, almost primitive lack of ability to handle his emotions – an enigma of nature that will have to remain unresolved.’
- He was keenly aware of the various ways in which his work might ultimately contribute to humanity’s undoing. In coining the term ‘singularity’, in conversation with Ulam, von Neumann imagined a point ‘in the history of the race beyond which human affairs, as we know them, could not continue’.2 Whether that would be in a negative or positive sense remains a matter of debate: thinkers have variously speculated that an artificial superintelligence might end up fulfilling all human desires, or cosseting us like pets, or eradicating us altogether.
- Which of these was the real von Neumann? ‘Both were real,’ Marina says.6 But the dissonance between them confused even her, she admits. Beneath the surface the two facets of his personality were at war. Von Neumann hoped the best in people would triumph and tried to be as magnanimous and honourable as possible. But experience and reason taught him to avoid placing too much faith in human virtue.
- Published in June 1955 in Fortune magazine, ‘Can We Survive Technology?’ begins with a dire warning: ‘literally and figuratively, we are running out of room’.7 Advances in domains such as weaponry and telecommunications have greatly increased the speed with which conflicts can escalate and magnified their scope. Regional disputes can quickly engulf the whole planet. ‘At long last,’ he continues, ‘we begin to feel the effects of the finite, actual size of the earth in a critical way.’
- Von Neumann speculates that nuclear reactors will rapidly become more efficient and holds out hope that mankind will harness fusion too in the long term. Automation will continue, he predicts, accelerated by advances in solid-state electronics that will bring much faster computing machines. But all technological progress, he warns, will also inevitably be harnessed for military use. Sophisticated forms of climate control, for example, could ‘lend themselves to forms of climatic warfare as yet unimagined’. Preventing disaster will require the invention of ‘new political forms and procedures’ (and the Intergovernmental Panel on Climate Change, established in 1988, arguably embodies one attempt to do exactly that). But what we cannot do, he says, is stop the march of ideas. ‘The very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful,’ he argues. Under the ominous heading ‘Survival – A Possibility’, he continues: ‘For progress there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment.’