LIVING IN VON NEUMANN'S SHADOW
JOHN VON NEUMANN WAS THE GENIUS’S GENIUS, LAUDED BY MANY OF THE INTELLECTUAL TITANS OF THE 20TH CENTURY.
Among his numerous achievements, he made major contributions to the development of two of the modern world’s existential threats: atomic weapons and artificial intelligence. However, his suggestions for dealing with these threats were more modest – and relied on human qualities which nowadays seem in short supply.

ANDREW VAN BILJON
Research Director
TECHNOLOGICAL PROGRESS HAS DRAWN HUMANITY OUT OF THE ERA OF THE MASTER GENERALIST. PROFOUND SPECIALISM IS NOW ESSENTIAL TO NAVIGATE DEEP SUBJECT FIELDS.
Arthur C Clarke proclaimed that “any sufficiently advanced technology is indistinguishable from magic.” The same could be said about advanced human intelligence. John von Neumann might then rightfully be considered a master illusionist and one of the last great generalists.
This Hungarian-born mathematician and scientist could divide two eight-digit numbers in his head and speak ancient Greek by the age of six, and understood both differential and integral calculus by eight. While these claims may include some post hoc embellishment, his life’s accomplishments confirm he was an outlier on most dimensions of intelligence. Witness the astonishing breadth of his work, with pioneering contributions in fields from mathematics to physics, from computer science to economics. He seemed to apply intuition effortlessly across boundaries.
In 1926, aged 23, von Neumann graduated simultaneously with a degree in chemical engineering from ETH Zurich and a PhD in mathematics from the University of Budapest. Professorships at Göttingen, Berlin and Hamburg followed, with four years as a visiting professor at Princeton.
Then, in April 1933, the newly formed Nazi government issued a law banning non-Aryans and communists from the German civil service, academia, jurisprudence and government. This propelled a wave of emigration of leading Jewish and ideologically opposed scientists, such as Einstein, Franck, Schrödinger and von Neumann, who took a tenured professorship at Princeton. Ironically, these refugees from Nazi Germany helped win the Second World War for the Allies, in the process contributing to many key advancements of the 20th century.
MANHATTAN TRANSFER
His first-hand experience of Nazism left von Neumann in no doubt who the enemy was. On becoming a US citizen in 1937, he applied for the US Army reserves but was rejected as too old. Undeterred, he applied his intellect relentlessly to help defeat the Axis powers.
The complexity of the physics underlying hydrodynamics and shock waves posed little difficulty for him, and he made significant contributions to British work on chemical explosives from the late 1930s. He was therefore an ideal candidate for Robert Oppenheimer, who recruited him to the Manhattan Project in 1943.
The problems the project faced were both scientific and practical. The Americans’ supply of suitable nuclear material was limited, so they worked on developing two bombs – one using uranium, the other plutonium. Uranium can be tipped into a runaway chain reaction simply by firing one sub-critical piece into another. Plutonium, however, contains isotopes that are more likely to split spontaneously: assemble it that slowly and the reaction starts too early and fizzles out.
The proposed solution was to use an implosion driven by conventional explosives to compress the plutonium core until neutrons are released. But the implosion had to be precisely symmetrical – a feat likened to crushing a beer can without spilling any beer. Von Neumann had the explosives experience and the mathematical nous to calculate the exact design necessary to achieve optimal implosion.
Subsequently, he helped plan the deployment of the bombs, contributing to the calculations underpinning the target selection. If von Neumann showed a hint of empathetic reasoning in opting against the Imperial Palace due to its cultural significance, this was more than offset by his desire to target Kyoto. The city was an intellectual hub, with a significant population thus far untouched by bombing. It was only spared by a veto from the US Secretary of War, who had honeymooned there decades earlier.
Von Neumann’s hawkishness continued after the war, turning his ruthless logic on a new enemy. As the Soviets rushed to develop their own nuclear weapons, he argued, “If you say why not bomb them tomorrow, I say why not bomb them today?”

BIG PROBLEMS, SMALL DECISIONS
Von Neumann’s life was cut short by cancer in 1957, when he was just 53. However, he had by then foreseen the sustained unease of the post-nuclear decades. In his 1955 essay Can we survive technology?, he argued that scientific progress would face an environment which was both “undersized and underorganized.” He didn’t just mean physically (big bombs on a small planet), but also politically, organisationally, economically and culturally. Technology was outgrowing our planet.
In typical von Neumann fashion, the essay is fantastically broad, encompassing automation, climate and nuclear fusion. It is also chilling. In a section called ‘Survival – a possibility’, he argues that, since technologies can’t be effectively policed, navigating the perilous equilibrium will require “a long sequence of small, correct decisions” and “intelligent exercise of day-to-day judgment.” It’s no wonder that, in today’s compass-less world, fears of a nuclear conflict have surged to levels not seen since the 1960s.
“I have known a great many intelligent people in my life… None of them had a mind as quick and acute as Jancsi von Neumann.”
EUGENE WIGNER NOBEL PRIZE FOR PHYSICS 1963
FROM BOMBS TO BRAINS
It’s not just explosions raising the spectre of Armageddon, though. Recent advances in artificial intelligence have us worrying about computer-driven demise as well. A misaligned AI, the theory goes, could harm humans if doing so served its higher aims – whatever those might be. And, once we have systems intelligent enough to advance themselves, we lose control of their evolution.
Once again, we have von Neumann to blame, at least indirectly. Building on others’ work, he came up with the von Neumann architecture, still the predominant basic structure for computing systems. He also designed pioneering artificially intelligent systems, such as his universal constructor – an abstract, self-reproducing machine which carries a description of itself and, when run, both executes and copies that description to build a working duplicate.
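The constructor lived in a cellular automaton far richer than anything reproducible here, but the kernel of the idea – a program containing its own description, used both as instructions to follow and as data to copy – survives in the programmer’s quine. A minimal Python sketch, offered purely as an illustrative toy rather than von Neumann’s construction:

```python
# A quine - the two lines below, run as a program, print themselves
# exactly. Von Neumann's constructor uses the same trick: carry a
# description of yourself, then both interpret it and copy it.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Run it and the output is the program itself; feed that output back in and it reproduces again, indefinitely.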
Towards the end of his life, von Neumann explored the similarities and differences between how the brain works and how computers operate. He noted that artificial systems tend to be serial, performing one operation at a time. By contrast, natural systems are highly parallel, performing many operations simultaneously – a property he saw as integral to the complexity they could achieve. Notably, the advent of cheap parallel computing power in recent years has been largely responsible for large language model (LLM) development, pushing AI into the mainstream.
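The distinction is easy to see in miniature. In the Python sketch below – the workload is invented filler; only the structure matters – the same sum is computed first one step at a time, then by four workers operating at once:

```python
# Serial versus parallel: identical work, different structure.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = range(1_000_000)

    # Serial: one operation after another, the classic von Neumann machine.
    serial_result = sum_of_squares(data)

    # Parallel: four workers on interleaved chunks, closer in spirit
    # to the brain's many simultaneous signals.
    chunks = [range(i, 1_000_000, 4) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel_result = sum(pool.map(sum_of_squares, chunks))

    assert serial_result == parallel_result
```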
In comparing computers with the brain directly, von Neumann acknowledged there was much we still didn’t understand and that the gap in their capabilities was vast. Yet he was characteristically confident, quipping: “You insist that there is something a machine cannot do. If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that.”

“I have sometimes wondered whether a brain like von Neumann’s does not indicate a species superior to that of man.”
HANS BETHE NOBEL PRIZE FOR PHYSICS 1967
A MOST INGENIOUS PARADOX
On the surface, it’s a difficult claim to refute. However, it’s also surprising, given von Neumann’s relationship with the Austrian logician Kurt Gödel. Von Neumann had been present at the famous Königsberg conference in 1930 when Gödel presented his incompleteness theorems.
These theorems are not just a mouthful but a brainful. Simplistically, they say that any consistent formal system of rules rich enough to do arithmetic will always be left with some statements that are true but can’t be proved within the system. David Hilbert, a great German mathematician, had been trying to rebuild the foundations of mathematics in a complete, consistent and provable way, so it came as a shock when Gödel showed his task to be impossible.
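In modern notation, and compressing a great deal of logical machinery, the first theorem can be sketched as follows – a summary, not Gödel’s original formulation:

```latex
% Gödel's first incompleteness theorem, informally compressed.
% T is any consistent, effectively axiomatised theory strong enough
% for basic arithmetic; Prov_T is its provability predicate.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For such a theory $T$, one can construct a sentence $G_T$ with
\[
  T \vdash \bigl( G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner) \bigr),
\]
so $G_T$ in effect asserts its own unprovability. If $T$ is
consistent, then $T \nvdash G_T$: the sentence is true but
unprovable, and $T$ is incomplete.
\end{document}
```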
Taking note, von Neumann shifted his focus to more applied topics. But Gödel’s theorems must surely also bear on von Neumann’s views on the capability of machines. His claim that we can build a machine to achieve any precisely specified aim, while appearing true on the surface, falls foul of the same self-reference at the heart of the incompleteness theorems. What if the task we specify is to prove there’s a task no machine can perform?
American cognitive scientist Douglas Hofstadter has argued more recently that these very paradoxes (or ‘strange loops’), lurking in otherwise consistent systems, give rise to the nature of consciousness. By that logic, we shouldn’t want – or need – our machines to be all-encompassing and infallible.
NATURAL RICHNESS
Von Neumann’s final thoughts on the brain touch on another key difference from computers. Artificial systems running sequentially are prone to accumulating errors, with each early mistake amplified through subsequent computations, whereas neurons in the brain rely on the statistical nature of signals. As he put it, “what matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence.”
In a serial computing system, if even the smallest part of a signal is altered, its meaning will change completely, and this error will be included in all subsequent operations. In a parallel, stochastic system like the brain, however, signals are considered in the context of both their own meaning and the other signals being sent in parallel. The constellation is what matters, not the exact position of individual stars. The result is a rich, complex environment of interaction, full of Hofstadter’s paradoxes.
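A toy simulation makes the contrast vivid. In the Python sketch below – the noise model is invented for illustration, not drawn from von Neumann’s writing – a value passes through 100 slightly noisy serial steps, while a parallel readout simply averages 100 independent noisy copies:

```python
# Serial error accumulation versus parallel, statistical signalling.
import random

random.seed(0)
TRUE_VALUE = 1.0
NOISE = 0.01    # each operation may be wrong by up to 1%
STEPS = 100

# Serial: each step feeds its slightly wrong output into the next,
# so small errors compound down the chain.
serial = TRUE_VALUE
for _ in range(STEPS):
    serial *= 1 + random.uniform(-NOISE, NOISE)

# Parallel/statistical: many independent noisy estimates, read out
# as an average - individual errors tend to cancel, not compound.
parallel = sum(TRUE_VALUE * (1 + random.uniform(-NOISE, NOISE))
               for _ in range(STEPS)) / STEPS

print(f"serial chain:     {serial:.4f}")
print(f"parallel average: {parallel:.4f}")
```

Over many runs, the serial chain tends to drift progressively further from 1.0, while the parallel average stays pinned to it.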
While this natural approach offers less arithmetical precision, it provides far greater logical reliability – think reliably right on average, rather than often exactly wrong. Despite running on massively parallel hardware, today’s LLM ‘brains’ are fundamentally deterministic: the network’s arithmetic is fixed, with randomness bolted on only when each output token is sampled. They make good librarians, but are they capable of genuine innovation – what we might term ‘true’ intelligence? (Debate still rages over whether AlphaGo, the Go-playing AI, displayed remarkable insight in playing its shocking move 37 against then world champion Lee Sedol.)
The answer must hinge on whether our system for creating artificial intelligence maintains space for strange loops and emergent, statistical depth or merely regurgitates combinations of existing knowledge. Von Neumann was right about the parallel and stochastic nature of brains, but seemingly hadn’t appreciated how it could be the true path to artificial intelligence.
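To make the ‘bolted-on’ point concrete, here is a toy Python sketch of the sampling step that sits on top of an LLM’s otherwise fixed computation – the token names and scores are invented for illustration, not any particular model’s output:

```python
# Where an LLM's randomness lives: not in the network's arithmetic,
# which is deterministic, but in a sampling step at the very end.
import math
import random

random.seed(37)
logits = {"move_36": 2.0, "move_37": 0.5, "move_38": 1.2}  # hypothetical scores

def sample(logits, temperature):
    # Softmax with a temperature knob: low temperature behaves like a
    # deterministic argmax; higher temperature adds 'wiggle room'.
    weights = {t: math.exp(v / temperature) for t, v in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

print(sample(logits, temperature=0.1))  # almost always the top-scoring token
print(sample(logits, temperature=2.0))  # occasionally a surprise
```

The jitter is real but shallow – a dice roll over finished outputs, not the deep statistical interplay von Neumann saw in neurons.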
“Von Neumann would carry on a conversation with my three-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.”
EDWARD TELLER 'THE FATHER OF THE HYDROGEN BOMB'

HUMAN QUALITIES
In a 1968 magazine article, Arthur C Clarke lamented, “there are people who will never accept the possibility of artificial intelligence and would deny its existence even if they encountered it.” Today’s proponents would doubtless level the same criticism, pointing to the eerily human nature of correspondence with AI. The Turing Test has surely been well and truly passed.
And yet isn’t there an emptiness under the surface that ought to be filled with randomness, insight and innovation? If the great mind of John von Neumann was the inception, at least in part, of our two great modern existential fears, what comfort does that offer us for the implications of ever more advanced intelligence?
His proposed narrow path through the valley of the shadow of nuclear Armageddon – small, sequential good decisions – was displaced by anxiety, uncertainty and a few close calls. And his attitude to reassembling the human brain in code – that he could serially build a machine to accomplish any task – would likely never yield the Cambrian explosion required for true intelligence.
So we should take some heart that the messy, statistical human approach has (so far) steered us through peril and that borrowing parallel computation has brought us remarkable simulacra of our least understood organ.
But there’s a real chance they remain just simulacra and that the road to genuine, self-improving artificial general intelligence lies down a different fork. If we hope to resurrect von Neumann – the inspiration for Dr Strangelove – in silicon, we need to account for strange loops. Then we can worry about whether he intends our doom. ⬤
The views expressed in this document are not intended as an offer or solicitation for the purchase or sale of any investment or financial instrument. The information contained in the document is fact based and does not constitute investment research, investment advice or a personal recommendation, and should not be used as the basis for any investment decision. References to specific securities should not be construed as a recommendation to buy or sell these securities. This document reflects Ruffer’s opinions at the date of publication only, and the opinions are subject to change without notice. Information contained in this document has been compiled from sources believed to be reliable but it has not been independently verified; no representation is made as to its accuracy or completeness, no reliance should be placed on it and no liability is accepted for any loss arising from reliance on it. Nothing herein excludes or restricts any duty or liability to a customer, which Ruffer has under the Financial Services and Markets Act 2000 or under the rules of the Financial Conduct Authority.
Ruffer LLP is a limited liability partnership, registered in England with registration number OC305288. The firm’s principal place of business and registered office is 80 Victoria Street, London SW1E 5JL. This financial promotion is issued by Ruffer LLP which is authorised and regulated by the Financial Conduct Authority in the UK and is registered as an investment adviser with the US Securities and Exchange Commission (SEC). Registration with the SEC does not imply a certain level of skill or training. © Ruffer LLP 2026 ruffer.co.uk
For US institutional investors: securities offered through Ruffer LLC, Member FINRA. Ruffer LLC is doing business as Ruffer North America LLC in New York. Ruffer LLC is the distributor for Ruffer LLP, serving as the marketing affiliate to introduce eligible investors to Ruffer LLP. More information about Ruffer LLC is available at BrokerCheck by FINRA. Any statements or material contained herein are for institutional investor use only and are not intended to be, nor shall they be construed as, legal, tax or investment advice or as an offer, or the solicitation of any offer, to buy or sell any securities. This material is provided for informational purposes only as of the date hereof and is subject to change without notice. Any information contained herein has been supplied by Ruffer LLP and, although believed to be reliable, has not been independently verified and cannot be guaranteed. Ruffer LLC makes no representations or warranties as to the accuracy, validity, or completeness of such information. Ruffer LLC is generally compensated by Ruffer LLP for finding investors for the respective Ruffer LLP funds it represents. Ruffer LLP is a registered investment adviser advising the respective Ruffer LLP funds, and is responsible for handling investor acceptance.
