

FEBRUARY 17 2023
One year has passed since the start of Russia’s invasion of Ukraine, and nothing seems to indicate that the flames of war are dying. Why does the war continue? Why are military tensions rising around the world?
We reject the thesis of a “clash of civilisations”. Rather, we need to recognise that the contradictions in the deregulated global economic system have made geopolitical tensions more acute (Opinion, February 14).
One of the worst faults of the present system is the imbalance in economic relations inherited from the era of free-market globalisation. We refer to international net positions, where the US, the UK and various other western countries have large external debts, while China, other eastern countries, and to some extent Russia are in an external credit position.
A consequence of this imbalance is a tendency for eastern capital to flow westward, no longer only in the form of loans but also of acquisitions, leading to a centralisation of capital in eastern hands.
To counter this trend, the US and its major allies have for several years abandoned their previous enthusiasm for deregulated globalism and have adopted a policy of “friend shoring”: an increasingly pronounced protectionist closure against goods and capital from China, Russia and much of the non-aligned east. The EU too has been joining this American-led protectionist turn.
If history is any guide, these uncoordinated forms of protectionism exacerbate international tensions and create favourable conditions for new military clashes. The conflict in Ukraine and rising tensions in the Far and Middle East can be fully understood only in the light of these major economic contradictions.
A new international economic policy initiative is therefore required to head off the threat of further wars.
A plan is needed to regulate current account imbalances, which draws on John Maynard Keynes’s project for an international clearing union.
A development of this mechanism today should start from a double renunciation: the US and its allies should abandon the unilateral protectionism of “friend shoring,” while China and other creditors should abandon their espousal of unfettered free trade.
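The logic of such a clearing mechanism can be sketched in a few lines. The toy calculation below, with purely hypothetical quota and charge parameters, illustrates the symmetric pressure at the heart of Keynes’s plan: charges fall on persistent surpluses as well as persistent deficits, so creditors and debtors alike are pushed toward balance.

```python
# Illustrative sketch only (not Keynes's actual schedule): a clearing union in which
# both surpluses and deficits above a quota incur a charge, pushing members toward
# balanced current accounts. The parameters are hypothetical.

CHARGE_RATE = 0.01   # 1% per period on the imbalance above quota (assumed)
QUOTA_SHARE = 0.25   # imbalance tolerated free of charge, as a share of trade (assumed)

def clearing_union_charge(current_account_balance: float, trade_volume: float) -> float:
    """Charge levied on a member, applied symmetrically to creditors and debtors."""
    quota = QUOTA_SHARE * trade_volume
    excess = max(abs(current_account_balance) - quota, 0.0)
    return CHARGE_RATE * excess

# A surplus country and a deficit country with the same trade volume face the same
# pressure to adjust.
print(clearing_union_charge(+400.0, 1000.0))  # creditor: charge on 150 above quota -> 1.5
print(clearing_union_charge(-400.0, 1000.0))  # debtor: identical charge -> 1.5
```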
The task of our time is urgent: we need to assess whether it is possible to create the economic conditions for world pacification before military tensions reach a point of no return.
With the world increasingly turning away from economic integration and cooperation, the second wave of globalization is threatening to give way to fragmentation and conflict, as the first wave did in 1914. Averting catastrophe requires developing strong political foundations capable of sustaining a stable international order.
LONDON – Is the world economy globalizing or deglobalizing? The answer would have seemed obvious in 1990. Communism had just collapsed in Central and Eastern Europe. In China, Deng Xiaoping was unleashing capitalist enterprise. And political scientist Francis Fukuyama famously proclaimed the “end of history,” by which he meant the triumph of liberal democracy and free markets.
Years earlier, the British economist Lionel Robbins, a firm believer in free markets, warned that the shaky political foundations of the postwar international order could not support a globalized economy. But in the euphoria and triumphalism of the early 1990s, such warnings fell on deaf ears. This was, after all, a “unipolar moment,” and American hegemony was the closest thing to a world government. With the Soviet Union vanquished, the thinking went, the last political barrier to international economic integration had been removed.
Dazzled by abstractions, economists and political scientists should have paid more attention to history. Globalization, they would have learned, tends to come in waves, which then recede. The first wave of globalization, which took place between 1880 and 1914, was enabled by a huge reduction in transport and communication costs. By 1913, commodity markets were more integrated than ever, the gold standard maintained fixed exchange rates, and capital – protected by empires – flowed freely and with little risk.
Alas, this golden age of liberalism and economic integration gave way to two world wars, separated by the Great Depression. Trade shrank to 1800 levels, capital flows dried up, governments imposed tariffs and capital controls to protect industry and employment, and the largest economies separated into regional blocs. Germany, Japan, and Italy went to war to establish blocs of their own.
The second wave of globalization, which began in the 1980s and accelerated following the end of the Cold War and the rise of digital communications, is now rapidly retreating. The global trade-to-GDP ratio fell from a peak of 61% just before the 2008 financial crisis to 52% in 2020, and capital movements have been increasingly restricted in recent years. As the United States and China lead the formation of separate geopolitical blocs, and the world economy gradually shifts from interconnectedness to fragmentation, deglobalization seems well underway.
To understand why globalization has broken down for a second time, it is worth revisiting John Maynard Keynes’s memorable description of London on the eve of World War I. “The projects and politics of militarism and imperialism, of racial and cultural rivalries, of monopolies, restrictions, and exclusion, which were to play the serpent to this paradise,” he wrote in 1919, “were little more than the amusements of [the investor’s and consumer’s] daily newspaper, and appeared to exercise almost no influence at all on the ordinary course of social and economic life, the internationalization of which was nearly complete in practice.”
In our own time, geopolitics is once again threatening to break the international order. Commerce, as Montesquieu observed, has a pacifying effect. But free trade requires strong political foundations capable of soothing geopolitical tensions; otherwise, as Robbins warned, globalization becomes a zero-sum game. In retrospect, the failure to make the United Nations Security Council truly representative of the world’s population might have been the original sin that led to the current backlash against economic openness.
But geopolitics is not the only reason for the breakdown of globalization’s second wave. Neoliberal economics, which came to dominate policymaking in the 1980s, has fueled global instability in three major ways.
First, neoliberals fail to account for uncertainty. The efficient-market hypothesis – the belief that financial markets price risks correctly on average – provided an intellectual basis for deregulation and blinded policymakers to the dangers of setting finance free. In the run-up to the 2008 crisis, experts and multilateral institutions, including the International Monetary Fund, were still claiming that the banking system was safe and that markets were self-regulating. While that sounds ridiculous in retrospect, similar views still lead banks to underprice economic risks today.
Second, neoliberal economists have been oblivious to global imbalances. The pursuit of market-led economic integration accelerated the transfer of manufacturing production from developed economies to developing economies. Counterintuitively, though, it also led to a flow of capital from poor to rich countries. Simply put, Chinese workers supported the West’s living standards while Chinese production decimated Western manufacturing jobs. This imbalance has fueled protectionism, as governments respond to public pressure by restricting trade with low-cost producers, and contributed to the splintering of the world economy into rival economic blocs.
Lastly, neoliberal economics is indifferent to rising inequality. Following four decades of hyper-globalization, tax cuts, and fiscal tightening, the richest 10% of the world’s population own 76% of the total wealth, while the poorest half own barely 2%. And as more and more wealth ends up in the hands of tech speculators and fraudsters, the so-called “effective altruism” movement has invoked Laffer curve-style logic to argue that allowing the rich to become even richer would encourage them to donate to charity.
Will globalization’s second wave collapse into a world war, as the first one did? It is certainly possible, especially given the lack of intellectual heft among the current crop of world leaders. To prevent another descent into global chaos, we need bold ideas that build on the economic and political legacies of Bretton Woods and the 1945 UN Charter. The alternative could be a more or less direct path to Armageddon.
The UK’s draconian Public Order Bill, which seeks to restrict certain forms of protest used by climate activists, will expand the state’s ability to detain people deemed disruptive and limit the courts’ ability to restrain it. This will align the British legal system with those of authoritarian countries like Russia.
LONDON – In December 1939, police raided the home of George Orwell, seizing his copy of D.H. Lawrence’s Lady Chatterley’s Lover. In a letter to his publisher after the raid, Orwell wondered whether “ordinary people in countries like England grasp the difference between democracy and despotism well enough to want to defend their liberties.”
Nearly a century later, the United Kingdom’s draconian Public Order Bill, passed by the UK House of Commons last year and now being considered in the House of Lords, vindicates Orwell’s doubt. The bill seeks to restrict the right to protest by extending the scope of criminality, reversing the presumption of innocence in criminal trials, and weakening the “reasonableness” test for coercive action. In other words, it widens the government’s scope for discretionary action while limiting the courts’ ability to restrain it.
When the police seized Orwell’s copy of Lady Chatterley’s Lover, the novel was banned under the Obscene Publications Act of 1857, which prohibited the publication of any material that might “deprave and corrupt” readers. In 1959, the nineteenth-century law was replaced by a more liberal measure that enabled publishers to defend against obscenity charges by showing that the material had artistic merit and that publishing it was in the public interest. Penguin Books succeeded with this defense when it was prosecuted for publishing Lady Chatterley’s Lover in 1960; by the 1980s, the book was taught in public schools.
But while Western democracies have stopped trying to protect adults from “depravity,” they are constantly creating new crimes to protect their “security.” The Public Order Bill creates three new criminal offenses: attaching oneself to objects or buildings (“locking on” or “going equipped to lock on”), obstructing major transport works, and interfering with critical national infrastructure projects. All three provisions target forms of peaceful protest, such as climate activists blocking roads or gluing themselves to famous works of art, that the government considers disruptive. Disrupting critical infrastructure could certainly be construed as a genuine threat to national security. But this bill, which follows a raft of other recently enacted or proposed laws intended to deal with “the full range of modern-day state threats,” should be seen as part of a broader government crackdown on peaceful protest.
By transferring the burden of proof from the police to alleged offenders, the Public Order Bill effectively gives police officers the authority to arrest a person for, say, “attaching themselves to another person.” Rather than requiring the police to show reasonable cause for the arrest, the person charged must “prove that they had a reasonable excuse” for locking arms with a friend.
The presumption of innocence is not just a legal principle; it is a key political principle of democracy. All law-enforcement agencies consider citizens potential lawbreakers, which is why placing the burden of proof on the police is an essential safeguard for civil liberties. The Public Order Bill’s presumption of guilt would reduce the extent to which the police are answerable to the courts, aligning the UK legal system with those of authoritarian countries like Russia and China, where acquittals are rare.
The bill also weakens the “reasonableness” requirement for detention and banning orders, allowing officers to stop and search any person or vehicle without any grounds for suspicion if they “reasonably believe” that a protest-related crime may be committed. Resistance to any such search or seizure would be a criminal offense. Magistrates could also ban a person or organization from participating in a protest in a specified area for up to five years if their presence was deemed likely to cause “serious disruption.” And since being “present” at the crime scene includes electronic communications, the ban could involve digital monitoring.
The question of what should be considered reasonable grounds for coercive action was raised in the landmark 1942 case of Liversidge v. Anderson. Robert Liversidge claimed that he had been unlawfully detained on the order of then-Home Secretary John Anderson, who refused to disclose the grounds for the arrest. Anderson argued that he had “reasonable cause to believe” that Liversidge was a national-security threat, and that he had acted in accordance with wartime defense regulations that suspended habeas corpus. The House of Lords ultimately deferred to Anderson’s view, with the exception of Lord Atkin, who in his dissent accused his peers of being “more executive-minded than the executive.”
Even in wartime, Atkin claimed, individuals should not be arbitrarily detained or deprived of their property. If the state is not required to provide reasons that could stand up in court, the courts cannot restrain the government. The UK’s current wave of national security and counter-terrorism bills directly challenges this view, making Atkin’s dissent even more pertinent today than it was during the war.
Law enforcement’s growing use of big data and artificial intelligence makes the UK government’s efforts to curtail the right to protest even more worrisome. While preventive policing is not new, the appearance of scientific impartiality could give it unlimited scope. Instead of relying on informers, police departments can now use predictive analytics to determine the likelihood of future crimes. To be sure, some might argue that, because authorities have so much more data at their disposal, predictive policing is more feasible today than it was in the 1980s, when the British sociologist Jean Floud advocated “protective sentences” for offenders deemed a grave threat to public safety. American University law professor Andrew Guthrie Ferguson, for example, has argued that “big data will illuminate the darkness of suspicion.”
But when considering such measures, we should keep in mind that the state can sometimes be far more dangerous than terrorists, and certainly more than glued-down protesters. We must be as vigilant against the lawmaker as we are against the lawbreaker. After all, we do not need an algorithm – or Orwell – to tell us that handing the government extraordinary powers could go horribly wrong.
Sir, Your leading article (“Digital Danger”, Jan 2) warns of the use of Chinese-made surveillance systems to track people in the UK. But neither your editorial nor the surveillance watchdog, Fraser Sampson, seems to have any qualms about British-made equipment being used for the same purpose. In 1786 Jeremy Bentham designed the Panopticon, in which a central prison watchtower could shine a light on all the encircling prison cells without the inmates being able to tell whether they were being watched. This, he thought, would motivate them to behave lawfully. Bentham thought his contrivance was equally applicable to hospitals, schools and factories. In Orwell’s dystopian novel Nineteen Eighty-Four, telescreens are installed in every flat: Big Brother is always watching you.
Where a surveillance system is made seems of minor importance compared with our acceptance of the right of democratic governments to spy on their citizens whenever and wherever they please in the name of national security.
Nov 8, 2022 ROBERT SKIDELSKY and PHILIP PILKINGTON
Decades of deindustrialization have hollowed out the UK economy and made it woefully ill-prepared for wartime disruptions. As the financial speculators who funded its current-account deficits turn against the pound, policymakers should consider Keynesian taxes and increased public investment.
LONDON – A wartime economy is inherently a shortage economy: because the government directs resources toward manufacturing guns, less butter is produced. With fewer civilian goods to go around, a war economy tends to produce an inflationary surge unless policymakers cut civilian consumption to eliminate the excess demand.
In his 1940 pamphlet “How to Pay for the War,” John Maynard Keynes famously called for fiscal rebalancing, rather than budgetary expansion, to accommodate the growing needs of the United Kingdom’s World War II mobilization effort. To reduce consumption without driving up inflation, Keynes contended, the government had to raise taxes on incomes, profits, and wages. “The importance of a war budget is social,” he asserted. Its purpose is not only to “prevent the social evils of inflation,” but to do so “in a way which satisfies the popular sense of social justice whilst maintaining adequate incentives to work and economy.”
Joseph E. Stiglitz recently applied this approach to the Ukraine crisis. To ensure the fair distribution of sacrifice, he argues, governments must impose a windfall-profit tax on domestic energy suppliers (“war profiteers”). Stiglitz proposes a “non-linear” energy-pricing system whereby households and companies could buy 90% of the previous year’s supply at last year’s price. In addition, he advocates import-substituting policies such as increasing domestic food production and greater use of renewables.
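The arithmetic of such a two-tier tariff is easy to sketch. The fragment below uses hypothetical prices and consumption figures to show how a household’s bill would be computed if 90% of last year’s usage were charged at last year’s price and anything beyond that at the current market price.

```python
# Minimal sketch of a two-tier ("non-linear") energy tariff of the kind described
# above: up to 90% of last year's consumption is billed at last year's price, and
# any excess at the current market price. All numbers are hypothetical.

def energy_bill(consumption: float, last_year_consumption: float,
                last_year_price: float, market_price: float) -> float:
    """Household bill under a two-tier tariff (quantities in kWh, prices per kWh)."""
    protected_block = 0.9 * last_year_consumption
    cheap_units = min(consumption, protected_block)
    expensive_units = max(consumption - protected_block, 0.0)
    return cheap_units * last_year_price + expensive_units * market_price

# A household that cuts usage by 10% or more pays last year's price on everything;
# consumption above the protected block faces the full (higher) market price.
print(energy_bill(2700.0, 3000.0, 0.20, 0.60))  # within the block -> 540.0
print(energy_bill(3000.0, 3000.0, 0.20, 0.60))  # 300 kWh at market price -> 720.0
```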
Stiglitz’s proposals may work for the United States, which is far less vulnerable to external disruption than European countries. With a quarter of global GDP, 14% of world trade, and 60% of the world’s currency reserves, the US can afford belligerence. But the European Union cannot, and the UK even less so.
While the UK has been almost as aggressive as the US in its response to Russia’s actions, Britain is far less prepared to manage a war economy than it was in 1940: it makes fewer things, grows less food, and is more dependent on imports. The UK is more vulnerable to external shocks than any major Western power, owing to decades of deindustrialization that have shrunk its manufacturing sector from 23% of gross value added in 1980 to roughly 10% today. While the UK produced 78% of the food it consumed in 1984, this figure had fallen to 64% by 2019. The British economy’s growing reliance on imported energy has made it even less self-sufficient.
For decades, the financial sector propped up the UK’s hollowed-out economy. Financial flows into the City of London allowed the country to neglect trade and artificially maintain higher living standards than its export capacity warranted. Britain’s current-account deficit is now 7% of GDP, compared to a current-account surplus of 1.3% of GDP in 1980. Until recently, the British formula had been to finance its external deficit by attracting speculative capital into London via the financial industry, which had been deregulated by the “big bang” of 1986.
This was brilliant but unstable financial engineering: foreigners sent the UK goods that it otherwise could not afford, Britain sent them sterling in return, and foreigners used the pound to buy British-domiciled assets. But this was a short-term fix for the long-term decline of manufacturing, enabling the UK to live beyond its means without improving its productivity.
In his 1930 Treatise on Money, Keynes distinguishes between “financial circulation” and “industrial circulation.” The former is mainly speculative in purpose. But an economy that depends on speculative inflows experiences financial booms and busts without any improvement to its underlying growth potential. The UK’s strategy echoed this observation: it did little to develop exportable goods that could improve the current-account balance, and its success depended on foreigners not dumping the pound.
But the speculator’s logic, as George Soros explains, is to make a quick buck and get out before the crash. Relying on speculators is like a narcotics addiction: a temporary high becomes a necessary crutch. The energy crisis brought on by the Russia-Ukraine war was the equivalent of cold-turkey withdrawal, blowing an even larger hole in the UK’s trade balance. The current-account deficit is expected to increase to 10% of GDP by the end of 2023, providing short-term investors with a strong incentive to sell their sterling-denominated bonds.
The pound’s ongoing decline will make UK imports more expensive. And since import prices will likely rise faster than export values, the decline in the sterling’s exchange rate will probably widen the current-account deficit, not least because the country’s diminished manufacturing sector depends heavily on imported inputs. As the pound depreciates, the price of these imports will increase, resulting in even greater erosion of living standards.
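A toy calculation makes the point. The elasticities below are assumptions chosen purely for illustration, not estimates: they show that when import volumes barely respond to higher sterling prices and export volumes respond only weakly, the sterling import bill rises faster than export revenue and the deficit widens.

```python
# Toy arithmetic for the claim above: with inelastic imports and weakly responsive
# exports, a depreciation of the pound widens the trade deficit measured in sterling.
# Elasticities and trade values are illustrative assumptions.

def sterling_trade_balance(depreciation: float,
                           import_elasticity: float,
                           export_elasticity: float,
                           imports0: float = 100.0,
                           exports0: float = 80.0) -> float:
    """Trade balance in sterling after a given fractional depreciation (0.10 = 10%)."""
    # Sterling import prices rise roughly one-for-one with the depreciation,
    # while import volumes fall only by the elasticity times the price rise.
    import_bill = imports0 * (1 + depreciation) * (1 - import_elasticity * depreciation)
    # Export volumes rise with the elasticity; sterling export prices assumed unchanged.
    export_revenue = exports0 * (1 + export_elasticity * depreciation)
    return export_revenue - import_bill

print(sterling_trade_balance(0.0, 0.3, 0.5))   # baseline deficit: 80 - 100 = -20.0
print(sterling_trade_balance(0.10, 0.3, 0.5))  # after a 10% depreciation: about -22.7
```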
This leaves policymakers with few good options. The Bank of England has already raised interest rates to maintain inflows of foreign capital, but high interest rates will likely crash housing and other asset markets that have become addicted to rock-bottom rates over the past 15 years. Taking steps to balance the budget may temporarily calm markets, but such measures would not address the British economy’s underlying weakness. Moreover, there is no evidence that fiscal consolidation leads to economic growth.
One possible remedy would be to revive government investment. UK public investment fell from an average of 47.3% of total investment between 1948 and 1976 to 18.4% between 1977 and 2007, leaving overall investment dependent on volatile short-term expectations.
The only way the UK could “pay for the war” is to implement an industrial strategy that aims to increase self-sufficiency in energy, raw materials, and food production. But such a policy will take years to bear fruit.
All European countries, not just Britain, face an energy crisis as a result of the disruption of oil and gas supplies from Russia, and policymakers are eager to increase energy inflows. But any deal with Russia as it wages its war on Ukraine with apparent disregard for human life would carry enormous moral and political costs.
One possible way forward may be to reach an agreement to ease economic sanctions in exchange for a resumption of gas flows. Given its special economic vulnerability, and following Brexit, Britain is well placed to explore this idea on behalf of – but independently from – the EU.
A limited agreement could ease Europe’s energy crisis while allowing continued military support for Ukraine. But it should be conditional on Russia reducing the intensity of its horrific “special military operation.” Negotiation of a limited energy-sanctions deal could, perhaps, open the door to a wider negotiation aimed at ending the war before it engulfs Europe.
As for the UK, in the short term it will remain dependent on City-generated financial inflows to prevent a catastrophic collapse of the pound, forcing the new British Chancellor, Jeremy Hunt, to scramble to “restore confidence” in the British economy. In lieu of Keynesian taxes or public investment, that will most likely mean drinking more of the austerity poison that caused Britain’s current malady.
Oct 19, 2022 ROBERT SKIDELSKY
Admired in the West but loathed by his countrymen as a harbinger of Russia’s post-Cold War misfortune, Mikhail Gorbachev fully grasped the immense challenges of reforming the ailing Soviet Union. Today’s Russia largely reflects the anti-Western grievances stemming from his failure.
LONDON – Mikhail Gorbachev, the Soviet Union’s last leader, was buried last month at the Novodevichy Cemetery in Moscow next to his wife Raisa and near fellow Soviet leader Nikita Khrushchev. To no one’s surprise, Russian President Vladimir Putin did not attend the funeral. Novodevichy, after all, is where “unsuccessful” Soviet leaders had been consigned to their final rest.
Putin’s snub reminded me of a conversation I had two decades ago during a midnight stroll through Red Square. On impulse, I asked the army officer stationed in front of Lenin’s Tomb who was buried in the Soviet Necropolis behind it, and he offered to show me. There, I saw a succession of graves and plinths for Soviet leaders: from Stalin to Leonid Brezhnev, Alexei Kosygin, and Yuri Andropov. The last plinth was unoccupied. “For Gorbachev, I suppose?” I asked. “No, his place is in Washington,” the officer replied.
Ironically, Gorbachev has been lionized in the West for accomplishing something he never set out to do: bringing about the end of the Soviet Union. He was awarded the Nobel Peace Prize in 1990, yet Russians widely regarded him as a traitor. In his ill-fated attempt at a political comeback in the 1996 Russian presidential election, he received just 0.5% of the popular vote.
Gorbachev remains a reviled figure in Russia. A 2012 survey by the state-owned pollster VTsIOM found that Gorbachev was the most unpopular of all Russian leaders. According to a 2021 poll, more than 70% of Russians believed that their country had moved in the wrong direction under his rule. Hardliners hate him for dismantling Soviet power, and liberals despise him for clinging to the impossible ideal of reforming the communist regime.
I became acquainted with Gorbachev in the early 2000s when I attended meetings of the World Political Forum, the think tank he founded in Turin. The organization was purportedly established to promote democracy and human rights. But in practice, its events were nostalgic reminiscences where Gorbachev held forth on “what might have been.” He was usually flanked by other has-beens of his era, including former Polish leaders Wojciech Jaruzelski and Lech Wałęsa, former Hungarian Prime Minister Gyula Horn, Russian diplomat Alexander Bessmertnykh, and a sprinkling of left-leaning academics.
Gorbachev’s idea of a “Third Way” between socialism and capitalism was briefly fashionable in the West but was soon swamped by neoliberal triumphalism. Nonetheless, I liked and respected this strangely visionary leader of the dying USSR, who refused to use force to resist change.
Today, most Russians cast Gorbachev and Boris Yeltsin as harbingers of Russia’s misfortune. Putin, on the other hand, is widely hailed as a paragon of order and prosperity who has reclaimed Russia’s leading role on the world stage. In September, 60% of Russians said they believe that their country is heading in the right direction, though this no doubt partly reflects the tight control Putin exercises over television news (the main source of information for most citizens).
In the eyes of most Russians, Gorbachev’s legacy is one of naivete and incompetence, if not outright betrayal. According to the prevailing narrative, Gorbachev allowed NATO to expand into East Germany in 1990 on the basis of a verbal commitment by then-US Secretary of State James Baker that the alliance would expand “not one inch eastwards.” Gorbachev, in this telling, relinquished Soviet control over Central and Eastern Europe without demanding a written assurance.
In reality, however, Baker was in no position to make such a promise in writing, and Gorbachev knew it. Moreover, Gorbachev repeatedly confirmed over many years that a serious promise not to expand NATO eastward was never made.
In any case, the truth is that the Soviet Union’s control over its European satellites had become untenable after it signed the 1975 Helsinki Final Act. The accord, signed by the United States, Canada, and most of Europe, included commitments to respect human rights, including freedom of information and movement. Communist governments’ ability to control their populations gradually eroded, culminating in the chain of mostly peaceful uprisings that ultimately led to the Soviet Union’s dissolution.
Yet there is a grain of truth to the myth of Gorbachev’s capitulation. After all, the USSR had not been defeated in battle, as Germany and Japan were in 1945, and the formidable Soviet military machine remained intact in 1990. In theory, Gorbachev could have used tanks to deal with the popular uprisings in Eastern Europe, as his predecessors did in East Germany in 1953, Hungary in 1956, and Czechoslovakia in 1968.
Gorbachev’s refusal to resort to violence to preserve the Soviet empire resulted in a bloodless defeat and a feeling of humiliation among Russians. This sense of grievance has fueled widespread distrust of NATO, which Putin used years later to mobilize popular support for his invasion of Ukraine.
Another common misconception is that Gorbachev dismantled a functioning economic system. In fact, far from fulfilling Khrushchev’s promise to “bury” the West economically, the Soviet economy had been declining for decades.
Gorbachev understood that the Soviet Union could not keep up with the US militarily while satisfying civilian demands for higher living standards. But while he rejected the Brezhnev era’s stagnation-inducing policies, he had nothing coherent to put in their place. Instead of facilitating a functioning market economy, his rushed abandonment of the central-planning system enriched the corrupt managerial class in the Soviet republics and led to a resurgence of ethnic nationalism.
To my mind, Gorbachev is a tragic figure. While he fully grasped the immense challenges facing Soviet communism, he had no control over the forces he helped unleash. Russia in the 1980s simply lacked the intellectual, spiritual, and political resources to overcome its underlying problems. But while the Soviet empire has been extinct for 30 years, many of the dysfunctions that contributed to its demise now threaten to engulf the entire world.
Sep 12, 2022 ROBERT SKIDELSKY
Since World War II, Britain’s influence in the world has relied on its “special relationship” with the United States, its position as head of the Commonwealth (the British Empire’s successor), and its position in Europe. The Americans are still there, but Europe isn’t, and now the head of the Commonwealth isn’t, either.
LONDON – Amid the many, and deserved, tributes to Queen Elizabeth II, one aspect of her 70-year reign remained in the background: her role as monarch of 15 realms, including Australia, New Zealand, and Canada. She was also the head of the Commonwealth, a grouping of 56 countries, mainly republics.
This community of independent states, nearly all of them former territories of the British Empire, has been crucial in conserving a “British connection” around the world in the post-imperial age. Whether this link is simply a historical reminiscence, whether it stands for something substantial in world affairs, and whether and for how long it can survive the Queen’s passing, have become matters of great interest, especially in light of Britain’s withdrawal from the European Union.
In the nineteenth-century era of Pax Britannica, Britain exercised global power on its own. The sun never set on the British Empire: the British navy ruled the waves, British finance dominated world markets, and Britain maintained the European balance of power. This era of “splendid isolation” – never as splendid or isolated as history textbooks used to suggest – ended with World War I, which gravely wounded Britain’s status as a world power and correspondingly strengthened other claimants to that role.
As the results of WWI were confirmed by World War II, British foreign policy came to center on the doctrine of the “three circles.” Britain’s influence in the world would rely on its “special relationship” with the United States, its position as head of the Commonwealth (the empire’s successor), and its position in Europe. By its membership of these overlapping and mutually reinforcing circles, Britain might hope to maximize its hard and soft power and mitigate the effects of its military and economic “dwarfing.”
Different British governments attached different weights to the three roles in which Britain was cast. The most continuously important was the relationship with the US, which dates from WWII, when the Americans underwrote Britain’s military and economic survival. The lesson was never forgotten. Britain would be the faithful partner of the US in all its global enterprises; in return, Britain could draw on an American surplus of goodwill possessed by no other foreign country. For all the pragmatic sense it made, one cannot conceive of such a connection forged or enduring without a common language and a shared imperial history.
Imperial history was also central to the second circle. The British Empire of 1914 became the British Commonwealth in 1931, and finally just The Commonwealth, with the Queen as its titular head. Its influence lay in its global reach. Following the contours of the British Empire, it was the only world organization (apart from the United Nations and its agencies) which spanned every continent.
The Commonwealth conserved the British connection in two main ways. First, it functioned as an economic bloc through the imperial preference system of 1932 and the sterling area that was formalized in 1939, both of which survived into the 1970s. Second, and possibly more durably, its explicitly multiracial character, so ardently supported by the Queen, served to soften both global tensions arising from ethnic nationalism, and ethnic chauvinism in the “mother country.” Multicultural Britain is a logical expression of the old multicultural empire.
The European link was the weakest and was the first to snap. This was because Britain’s historic role in Europe was negative: to prevent things from happening there which might endanger its military security and economic livelihood. To this end, it opposed all attempts to create a continental power capable of bridging the Channel. Europe was just 20 miles away, and British policy needed to be ever watchful that nasty things did not happen “over there.”
John Maynard Keynes expressed this permanent sense of British estrangement from the Continent. “England still stands outside Europe,” he wrote in 1919. “Europe’s voiceless tremors do not reach her: Europe is apart and England is not of her flesh and body.” The Labour leader, Hugh Gaitskell, famously evoked this sense of separation when he played the Commonwealth card in 1962, urging his party not to abandon “a thousand years of history” by joining the European Economic Community.
Britain’s policy towards Europe has always been to prevent the emergence of a Third Force independent of US-led NATO. Charles de Gaulle saw this clearly, vetoing Britain’s first application to join the EEC in 1963 in order to prevent an American “Trojan Horse” in Europe.
Although Prime Minister Tony Blair wanted Britain to be at “the heart” of Europe, Britain pursued the same game inside the EU from 1974 until 2021. The only really European-minded prime minister in this period was Edward Heath. Otherwise, British governments have sought to maximize the benefits to Britain of trade and tourism, while minimizing the dangers of political contamination. Today, it is not surprising that Britain joins the US to project NATO power in Eastern Europe over the stricken torso of the EU itself.
So, Britain is left with just two circles. In the wake of Brexit, the Queen’s legacy is clear. Through her official position and personal qualities, she preserved the Commonwealth as a possible vehicle for projecting what remains of Britain’s hard power, such as military alliances in the South Pacific. And whatever one may think of Britain’s hard power, its soft power – reflecting its trading relationships, its cultural prestige in Asia and Africa, and its multicultural ideal – is a global public good in an age of growing ethnic, religious, and geopolitical conflict.
I doubt whether the two remaining circles can compensate for Britain’s absence from the third. The question that remains to be answered is how much the Commonwealth’s durability depended on the sheer longevity of the late monarch, and how much of it can be preserved by her successor.
Aug 22, 2022 ROBERT SKIDELSKY
The widening gaps in policy formation nowadays reflect the division of labor and increasing specialization that has taken us from the sixteenth-century ideal of the Renaissance man. And today’s biggest policymaking gap has grown so large that it threatens global catastrophe.
LONDON – Just as the insistent demand for more “transparency” is a sure sign of increasing opacity, the current clamor for “joined-up thinking” indicates that the need for it far outstrips the supply. With its recent report on energy security, the UK House of Lords Economic Affairs Committee has added its voice to the chorus.
The report’s language is restrained, but its message is clear: Without a “joined-up” energy policy, the United Kingdom’s transition to net zero by 2050 will be “disorderly” (read: “will not happen”). For example, the policy aimed at improving home insulation is at odds with local authorities’ listed-building regulations.
In April, the government called on the Bank of England and financial regulators to “have regard to” energy security. What does this mean? Which institution is responsible for which bits of energy security? How does energy security relate to the net-zero goal? Never mind gaps in the data: The real problem is yawning chasms in thinking.
In a masterly understatement, the committee’s report says that “The Russian invasion of Ukraine has created global energy supply issues.” In fact, economic sanctions against the invader have contributed significantly to a massive energy and food crisis that threatens the sanctioning countries with stagflation and many people in developing economies with starvation.
China provides key minerals for renewable-energy technologies, including wind turbines and solar cells, and supplies 66% of finished lithium-ion batteries. The report concludes that the UK must “not become reliant on strategic competitors, notably China, for critical minerals and components.” Moreover, the government “will need to ensure that its foreign and trade policies […] and its policy on net zero are aligned.” Quite a bit more joining up to do, then.
The widening gaps in policy formation reflect the increasing division of labor resulting from the relentless march of complexity. Today’s policymakers and their advisers know more and more about less and less, recalling Adam Smith’s description in The Wealth of Nations of a pin factory worker:
“The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same, or very nearly the same, has no occasion to exert his understanding, or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become.”
No one in Smith’s factory would need to know how to make a pin, or even what the purpose of producing one was. They would know only how to make a part of a pin. Likewise, the world is becoming full of “experts” who know only little bits of their subject.
The ideal of the “Renaissance man,” who could do a lot of joined-up thinking, did not survive the growing division of labor. By the eighteenth century, knowledge was being split up into “disciplines.” Now, subdisciplines sprout uncontrollably, and communication of their findings to the public is left to journalists who know almost nothing about everything.
Today’s biggest gap, one so large that it threatens catastrophe, is between geopolitics and economics. Foreign ministries and Treasuries no longer talk to each other. They inhabit different worlds, use different conceptual languages, and think about different problems.
The geopolitical world is divided up into “strategic partners” and “strategic rivals.” Borders are alive and well. States have conflicting national interests and pursue national-security policies. Economics, in contrast, is the science of a single market: Its ideal is economic integration across frontiers and a global price mechanism that automatically harmonizes conflicting preferences. Economics also tells us that commerce softens the asperities of politics, creating over time a single community of learning and culture.
The eighteenth-century English historian Edward Gibbon described history as “little more than the register of the crimes, follies, and misfortunes of mankind.” But the end of the Cold War led to hopes that the world was at last growing up; in 1989, the American academic Francis Fukuyama proclaimed the “end of history.”
Today, however, geopolitics is back in the saddle. The West regards Russia as a pariah because of its invasion of Ukraine; China, if not yet quite meriting that status, is a strategic rival with which trade in many areas should be shunned. At the same time, Western governments also insist on the need for global cooperation to tackle potentially lethal climate change and other existential dangers like nuclear proliferation and pandemics.
In 2012, for example, the University of Cambridge established the Centre for the Study of Existential Risk to help mitigate threats that could lead to humanity’s extinction. Sixty-five years earlier, atomic scientists from the Manhattan Project founded a bulletin to warn humanity of the possible negative consequences arising from accelerating technological advances. They started a Doomsday Clock, the time of which is announced each January. In 1947, the clock was set at seven minutes to midnight. Since January 2020, it has stood at 100 seconds to midnight, marking the closest humanity has come to Armageddon in 75 years.
The danger of nuclear proliferation, which prompted the clock’s creation, is still very much with us. As the bulletin points out, “Development of hypersonic glide vehicles, ballistic missile defenses, and weapons-delivery systems that can flexibly use conventional or nuclear warheads may raise the probability of miscalculation in times of tension.” Do those who urge the expulsion of Russian forces from Ukraine, by economic or military means, consider this risk? Or is this another “gap” in their knowledge of the world?
It is a tragedy that the economic basis of today’s world order, such as it is, is now being put at risk. That risk is the result of actions for which all the great powers share responsibility. The ironically named United Nations, which exists to achieve planetary security, is completely marginalized. Its virtual absence from the big scenes of international conflict is the biggest gap of all, and one which jeopardizes our common future.
Jul 19, 2022 ROBERT SKIDELSKY
Although words like “unprincipled,” “amoral,” and “serial liar” seem to describe the outgoing British prime minister accurately, they also describe more successful political leaders. To explain Johnson’s fall, we need to consider two factors specific to our times.
LONDON – Nearly all political careers end in failure, but Boris Johnson is the first British prime minister to be toppled for scandalous behavior. That should worry us.
The three most notable downfalls of twentieth-century British leaders were caused by political factors. Neville Chamberlain was undone by his failed appeasement policy. The Suez fiasco forced Anthony Eden to resign in 1957. And Margaret Thatcher fell in 1990 because popular resistance to the poll tax persuaded Tory MPs that they could not win again with her as leader.
True, Harold Macmillan was undone in 1963 by the Profumo sex scandal, but this involved a secretary of state for war and possible breaches of national security. Election defeats following economic failure brought down Edward Heath and James Callaghan in the 1970s. Tony Blair was forced to resign by the Iraq debacle and Gordon Brown’s impatience to succeed him. David Cameron was skewered by Brexit, and Theresa May by her failure to deliver Brexit.
No such events explain Johnson’s fall.
David Lloyd George, a much greater leader than Johnson, is his only serious rival in sleaze. But though the sale of seats in the House of Lords, slipshod administrative methods, and dishonesty had weakened Lloyd George, the immediate cause of his fall (exactly a century ago) was his mishandling of the Chanak crisis, which brought Britain and Turkey to the brink of war.
The more familiar comparison is with US President Richard Nixon. Every Johnson misdemeanor is routinely labeled a “gate,” after the Watergate break-in that ended Nixon’s presidency.
John Maynard Keynes called Lloyd George a “crook”; Nixon famously denied that he was one. Neither they nor Johnson were crooks in the technical sense (of being convicted of crimes), but Nixon would have been impeached in 1974 had he not resigned, and Johnson was fined £50 for breaking lockdown rules. Moreover, all three showed contempt for the laws they were elected to uphold, and for the norms of conduct expected from public officials.
We struggle to describe their character flaws: “unprincipled,” “amoral,” and “serial liar” seem to capture Johnson. But they describe more successful political leaders as well. To explain his fall, we need to consider two factors specific to our times.
The first is that we no longer distinguish personal qualities from political qualities. Nowadays, the personal really is political: personal failings are ipso facto political failings. Gone is the distinction between the private and the public, between subjective feeling and objective reality, and between moral and religious matters and those that government must address.
Politics has crossed into the realm previously occupied by psychiatry. This was bound to happen once affluence undermined the old class basis of politics. Questions of personal identity arising from race, gender, sexual preference, and so on now dominate the spaces vacated by the politics of distribution. Redressing discrimination, not addressing inequality, became the task of politics.
Johnson is both a creature and a victim of identity politics. His rhetoric was about “leveling up” and “our National Health Service.” But, in practice, he made his personality the content of his politics. No previous British leaders would have squandered their moral capital on trivial misdemeanors and attempted cover-ups, because they knew that it had to be kept in reserve for momentous events. But momentous events are now about oneself, so when a personality is seen as flawed, there is no other story to tell.
Johnson’s personality-as-politics was also the creation of the media. In the past, newspapers, by and large, reported the news; now, focusing on personalities, they create it. This change has given rise to a corrupt relationship: personalities use the media to promote themselves, and the media expose their frailties to sell copy.
There has always been a large market for sexual and financial gossip. But even in the old “yellow press,” there was a recognized sphere of public events that took priority. Now the gossip stories are the public events.
This development has radically transformed public perceptions about the qualities a political leader should have. Previous generations of political leaders were by no means all prudes. They lied, drank, fornicated, and took bribes. But everyone concerned with politics recognized that it was important to protect the public sphere. Leaders’ moral failings were largely shielded from scrutiny, unless they became egregious. And even when the public became aware of them, they were forgiven, provided the leaders delivered the goods politically.
Most of the offenses that led to Johnson’s resignation would never have been reported in the past. But today the doctrine of personal accountability justifies stripping political leaders naked. Every peccadillo, every lapse from correct expression, becomes a credibility-destroying “disgrace” or “shame.” People’s ability to operate in the public sphere depends on privacy. Once that is gone, their ability to act effectively when they need to vanishes.
The other new factor is that politics is no longer viewed as a vocation so much as a stepping stone to money. Media obsession with what a political career is worth, rather than whether politicians are worthy of their jobs, is bound to affect what politically ambitious people expect to achieve and the public’s view of what to expect from them. Blair is reported to have amassed millions in speaking engagements and consultancies since leaving office. In keeping with the times, The Times has estimated how much money Johnson could earn from speaking fees and book deals, and how much more he is worth than May.
In his resignation speech, Johnson sought to defend the “best job in the world” in traditional terms, while criticizing the “eccentricity” of being removed in mid-delivery of his promises. But this defense of his premiership sounded insincere, because his career was not a testimony to his words. The cause of his fall was not just his perceived lack of morality, but also his perceived lack of a political compass. For Johnson, the personal simply exposed the hollowness of the political.