The Monetarist Fantasy Is Over

Feb 17, 2020 ROBERT SKIDELSKY
UK Prime Minister Boris Johnson, determined to overcome Treasury resistance to his vast spending ambitions, has ousted Chancellor of the Exchequer Sajid Javid. But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy.
LONDON – The forced resignation of the United Kingdom’s Chancellor of the Exchequer, Sajid Javid, is the latest sign that macroeconomic policy is being upended, and not only in the UK. In addition to completing the ritual burial of the austerity policies pursued by UK governments since 2010, Javid’s departure on February 13 has broader significance.
Prime Minister Boris Johnson is determined to overcome Treasury resistance to his vast spending ambitions. The last time a UK prime minister tried to open the government spending taps to such an extent was in 1964, when Labour’s Harold Wilson established the Department of Economic Affairs to counter Treasury hostility to public investment. Following the 1966 sterling crisis, however, the hawk-eyed Treasury re-established control, and the DEA was soon abolished. The Treasury, the oldest and most cynical department of government, knows how to bide its time.
But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy. After World War II, stabilization policy, the brainchild of John Maynard Keynes, started off as strongly fiscal. The government’s budget, the argument went, should be used to balance an unstable economy at full employment.
In the 1970s, however, came the monetarist counter-revolution, led by Milton Friedman. The only stabilizing that a capitalist market economy needed, Friedman said, was of the price level. Provided that inflation was controlled by independent central banks and government budgets were kept “balanced,” economies would normally be stable at their “natural rate of unemployment.” From the 1980s until the 2008 global financial crisis, macroeconomic policy was conducted in Friedman’s shadow.
But now the pendulum has swung back. The reason is clear enough: monetary policy failed to anticipate, and therefore prevent, the Great Recession of 2008-09, and failed to bring about a full recovery from it. In many countries, including the UK, average real incomes are still lower than they were 12 years ago.
Disenchantment with monetary policy is running in parallel with a much more positive reading of US President Barack Obama’s 2008-09 fiscal boost, and a much more negative view of Europe’s post-slump fiscal austerity programs. A notable turning point was the 2013 rehabilitation of fiscal multipliers by the International Monetary Fund’s then-chief economist Olivier Blanchard and his colleague Daniel Leigh. As Blanchard recently put it, fiscal policy “has been underused as a cyclical tool.” Now, even prominent central bankers are calling for help from fiscal policy.
The theoretical case against relying on monetary policy for stabilization goes back to Keynes. “If, however, we are tempted to assert that money is the drink which stimulates the system to activity,” he wrote, “we must remind ourselves that there may be several slips between the cup and the lip.” More prosaically, the monetary pump is too leaky. Too much money ends up in the financial system, and not enough in the real economy.
Mark Carney, the outgoing governor of the Bank of England, recently admitted as much, saying that commercial banks had been “useless” for the real economy after the slump started, despite having had huge amounts of money thrown at them by central banks. In fact, orthodox theory still struggles to explain why trillions of dollars’ worth of quantitative easing, or QE, remains stuck in assets offering a negative real rate of interest.
Kenneth Rogoff of Harvard recently argued that fiscal stabilization policy “is far too politicized to substitute consistently for modern independent technocratic central banks.” But instead of considering how this defect might be overcome, Rogoff sees no alternative to continuing with the prevailing monetary-policy regime – despite the overwhelming evidence that central banks are unable to play their assigned role. At least fiscal policy might in principle be up to the task of economic stabilization; there is no chance that central banks will be.
The reason is technical, and its validity was demonstrated both before and after the collapse of 2008. Simply put, central banks cannot control the aggregate level of spending in the economy, which means that they cannot control the price level or the aggregate level of output and employment.
A less skeptical observer than Rogoff would have looked more closely at proposals to strengthen automatic fiscal stabilizers, rather than dismissing them on the grounds that they will have (bad) “incentive effects” and that policymakers will override them on occasion. For example, a fair observer would at least be open to the idea of a public-sector job guarantee of the sort envisaged by the 1978 Humphrey-Hawkins Act in the US, which authorized the federal government to create “reservoirs of public employment” to balance fluctuations in private spending.
Those reservoirs would automatically be depleted and refilled as the economy waned and waxed, thus creating an automatic stabilizer. The Humphrey-Hawkins Act, had it been implemented, would have greatly reduced politicians’ discretion over counter-cyclical policy, while creating a much more powerful stabilizer than the social-security systems on which governments now rely.
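The logic of such a reservoir is simple enough to sketch in a few lines of code. The toy simulation below is a hypothetical illustration only (the workforce figures, the employment path, and the buffer_jobs function are invented for exposition, not drawn from the Humphrey-Hawkins Act or any actual scheme); it shows how a public buffer that expands exactly when private hiring contracts keeps total employment steady without any discretionary decision.
```python
# Hypothetical illustration of a public-sector job "reservoir" acting as an
# automatic stabilizer. All figures and names are invented for exposition.

LABOR_FORCE = 1000  # stylized size of the workforce to be kept fully employed

def buffer_jobs(private_jobs: int, labor_force: int = LABOR_FORCE) -> int:
    """The public buffer absorbs whatever labor the private sector sheds."""
    return max(0, labor_force - private_jobs)

# Private-sector employment over a stylized boom-slump-recovery cycle.
private_path = [980, 940, 880, 900, 960, 990]

for year, private_jobs in enumerate(private_path):
    public_jobs = buffer_jobs(private_jobs)
    print(f"Year {year}: private={private_jobs}, "
          f"public buffer={public_jobs}, total={private_jobs + public_jobs}")

# Total employment stays at 1,000 throughout: the reservoir is depleted and
# refilled automatically as the economy waxes and wanes, with no minister
# exercising discretion over the counter-cyclical response.
```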
To be sure, both the design and implementation of such a job guarantee would give rise to problems. But for both political and economic reasons, one should try to tackle them rather than concluding, as Rogoff does, that, “with monetary policy hampered and fiscal policy the main game in town, we should expect more volatile business cycles.” We have the intelligence to do better than that.

The Terrorism Paradox

There was, all too predictably, no shortage of political profiteering in the wake of November’s London Bridge terror attack, in which Usman Khan fatally stabbed two people before being shot dead by police. In particular, the United Kingdom’s prime minister, Boris Johnson, swiftly called for longer prison sentences and an end to “automatic early release” for convicted terrorists.

In the two decades since the September 11, 2001, terror attacks in the United States, terrorism has become the archetypal moral panic in the Western world. The fear that terrorists lurk behind every corner, plotting the wholesale destruction of Western civilization, has been used by successive British and US governments to introduce stricter sentencing laws and much broader surveillance powers – and, of course, to wage war.

In fact, terrorism in Western Europe has been waning since the late 1970s. According to the Global Terrorism Database (GTD), there were 996 deaths from terrorism in Western Europe between 2000 and 2017, compared to 1,833 deaths in the 17-year period from 1987-2004, and 4,351 between 1970 (when the GTD dataset begins) and 1987. Historical amnesia has increasingly blotted out the memory of Europe’s homegrown terrorism: the Baader-Meinhof gang in Germany, the Red Brigades in Italy, the IRA in the UK, Basque and Catalan terrorism in Spain, and Kosovar terrorism in the former Yugoslavia.

The situation is clearly different in the US – not least because the data are massively skewed by the 9/11 attacks, in which 2,996 people died. But even if we ignore this anomaly, it is clear that, since 2012, deaths from terrorism in America have been rising steadily, reversing the previous trend. Much of this “terrorism,” however, is simply a consequence of having so many guns in civilian circulation.

To be sure, Islamist terrorism is a real threat, chiefly in the Middle East. But two points need to be emphasized. First, Islamist terrorism – like the refugee crisis – was largely a result of the West’s efforts, whether hidden or overt, to achieve “regime change.” Second, Europe is in fact much safer than it used to be, partly because of the influence of the European Union on governments’ behavior, and partly because of improved anti-terrorist technology.

Yet, as the number of deaths from terrorism declines (at least in Europe), alarm about it grows, offering governments a justification for introducing more security measures. This phenomenon, whereby our collective reaction to a social problem intensifies as the problem itself diminishes, is known as the “Tocqueville effect.” In his 1840 book Democracy in America, Alexis de Tocqueville noted that, “it is natural that the love of equality should constantly increase together with equality itself, and that it should grow by what it feeds on.”

Moreover, there is a related phenomenon that we can call the Baader-Meinhof effect: once your attention is drawn to something, you begin to see it all the time. These two effects explain how our subjective estimates of risk have come to diverge so sharply from the actual risks we face.

In fact, the West has become the most risk-averse civilization in history. The word itself comes from the Latin risicum, which was used in the Middle Ages only in very specific contexts, usually relating to seafaring trades and the emerging maritime insurance business. In the courts of the sixteenth-century Italian city-states, rischio referred to the lives and careers of courtiers and princes, and their ensuing risks. But the word was not frequently used. It was far more common to attribute successes or failures to an external source: fortune, or fortuna. Fortune was unpredictability’s avatar. Its human counterpart was prudence, or the Machiavellian virtù.

In the early modern period, nature acted upon humans, whose only rational response was to choose between reasonable expectations. Only with the scientific revolution did the modern discourse of risk begin to flower. Modern humanity acts upon and controls the natural world, and therefore calculates the degree of danger it poses. As a result, tragedy need no longer be a normal feature of life.

The German sociologist Niklas Luhmann argued that, once individual actions came to be seen to have calculable, predictable, and avoidable consequences, there was no hope of returning to that pre-modern state of blissful ignorance, wherein the course of future events was left to the fates. As Luhmann cryptically put it, “The gate to paradise remains sealed by the term risk.”

Economists, too, believe that all risk is measurable and therefore controllable. In that respect, they are bedfellows with those who tell us that security risks can be minimized by extending surveillance powers and enhancing the techniques by which we gather information about potential terror threats. A risk, after all, is the degree to which future events are uncertain, and – as Claude Shannon, the founder of information theory, wrote – “information is the resolution of uncertainty.”

There is a clear benefit to being safer, but it comes at the price of an unprecedented intrusion into our private lives. Our right to information privacy, now guaranteed by the EU’s General Data Protection Regulation, is increasingly in direct conflict with our demand for security. Omnipresent devices that see, hear, read, and record our behavior produce a glut of data from which inferences, predictions, and recommendations can be made about our past, present, and future actions. In the face of the adage “knowledge is power,” the right to privacy withers.

Furthermore, there is a conflict between safety and wellbeing. To be perfectly safe is to eliminate the cardinal human virtues of resilience and prudence. The perfectly safe human is therefore a diminished person.

For both these reasons, we should stick to the facts and not give governments the tools they increasingly demand to win the “battle” against terrorism, crime, or any other technically avoidable misfortune that life throws up. A measured response is needed. And when it comes to the chaos and mess of human history, we should recall Heraclitus’s observation that “a thunderbolt steers the course of all things.”

For a public sector job guarantee

My Lords, I think I am the only macroeconomist contributing to this debate, which is perhaps rather odd as it is a debate on economic affairs. As instructive and important as the other contributions have been, I want to talk about economic policy, because unless the economy works a lot better than it has in the last 10 years, none of the spending pledges, to be quite honest, will be worth the paper that they are written on, and how well it works will largely depend on economic policy.

The good news is that fiscal policy is back. The gracious Speech said: “My Government will invest in the country’s public services … My Government will prioritise investment in infrastructure and world-leading science research and skills”.

That is good. Governments everywhere have started to inch back to fiscal policy. The retiring president of the European Central Bank, Mario Draghi, admitted that monetary policy “needs help from fiscal policy.”

Evidently the Chancellor agrees. That agreement is indicated by the figures of extra spending that he promises over the next five years. Austerity is over.

Why the turnabout? First is the realisation that monetary policy cannot deliver the required boost to spending. We are told that central banks have run out of ammunition. The truth is that they never had enough ammunition to bring a sick economy back to health. The reason was the liquidity trap: most of the extra money pumped out by central banks simply was not spent in the real economy; it got locked up in financial assets.

Second is the realisation that fiscal policy was pointing the wrong way. There is a lot of myth-making going on here. It is claimed that, thanks to years of austerity, the Chancellor now has the “fiscal space” to boost investment, but the logic of that is all wrong. Trying to balance the budget when the economy was depressed did enormous damage to millions of people; making the economy smaller made the budget more difficult to balance. The result has been missed targets, less investment and rising national debt. To say that the nation had to sacrifice itself for 10 years in order to enable the Government to spend more on the health service or infrastructure now is simply a terrible fraud. There has been no mea culpa from the perpetrator of that fraud: George Osborne.

The Government promise to increase spending while maintaining the sustainability of the public finances. It is just possible that the Chancellor will meet his much-revised fiscal targets; it really depends on what happens to the economy, and most people are expecting a recession. If or when that happens, the Chancellor will have to talk about “headwinds” rather than “headroom”.

Now that fiscal policy is back in fashion, can we do better than the current hit-and-miss strategy? Former Fed chairs Ben Bernanke and Janet Yellen have called for more powerful automatic stabilisers. It is a slightly technical phrase but, in this connection, I urge the Government to seriously consider a public sector job guarantee. Its purpose would be to balance fluctuations in private sector employment in a non-discretionary way. The reservoir of public sector jobs would deplete or fill up automatically as the economy waxed or waned. Not only would this be a much more powerful automatic stabiliser than trying to balance the economy by paying out more on unemployment benefits, but it would remove the discretionary element from tax and spending policies that did so much to discredit fiscal policy in the past.

Finally, I am encouraged by the promise in the gracious Speech to give communities more control over how investment is spent, so that they can decide what is best for themselves. John Maynard Keynes long ago emphasised the importance of rightly distributed demand—that is, investment channelled to underheating, not overheating regions. The Government’s pledge to prioritise investment in poorer regions will give communities more control over how money is spent. It would also dovetail neatly into a job guarantee programme.

It would be tragic if the second coming of fiscal policy were to be wrecked on the same inattention to the need for a fiscal constitution as the last one. As Paul Johnson, director of the IFS, recently said: “The trouble is that setting supposedly binding fiscal rules, missing them, abandoning them and replacing them with something new” is not a fiscal constitution; it is a return to the bad old days of the political business cycle. We must do better than that this time.

Economic Possibilities for Ourselves

The most depressing feature of the current explosion in robot-apocalypse literature is that it rarely transcends the world of work. Almost every day, news articles appear detailing some new round of layoffs. In the broader debate, there are apparently only two camps: those who believe that automation will usher in a world of enriched jobs for all, and those who fear it will make most of the workforce redundant.

This bifurcation reflects the fact that “working for a living” has been the main occupation of humankind throughout history. The thought of a cessation of work fills people with dread, for which the only antidote seems to be the promise of better work. Few have been willing to take the cheerful view of Bertrand Russell’s provocative 1932 essay In Praise of Idleness. Why is it so difficult for people to accept that the end of necessary labor could mean barely imaginable opportunities to live, in John Maynard Keynes’s words, “wisely, agreeably, and well”?

The fear of labor-saving technology dates back to the start of the Industrial Revolution, but two factors in our own time have heightened it. The first is that the new generation of machines seems poised to replace not only human muscles but also human brains. Owing to advances in machine learning and artificial intelligence, we are said to be entering an era of thinking robots; and those robots will soon be able to think even better than we do. The worry is that teaching machines to perform most of the tasks previously carried out by humans will make most human labor redundant. In that scenario, what will humans do?

The other fear factor is the increasing precariousness of wage labor – though this concern is seemingly belied by headline statistics suggesting that unemployment is at a historic low. The problem is that an economy at “full employment” now contains a large penumbra of what economist Guy Standing calls the “precariat”: under-employed people who work less and for lower pay than they would like. A growing number of workers, seeming to lack any kind of job (and pay) security, are thus forced to work well below their ability.

It is natural that one would interpret the onset of precariousness as the first stage in a broader trend toward workforce redundancy, especially if one pays attention to alarmist predictions of the next category of “jobs at risk.” But this conclusion is premature. The penetration of robotics into the world of work has not yet been sufficient to explain the rise of the precariat. So far, “cost cutting” in the West has largely taken the form of offshoring to the East, where labor is cheaper, rather than replacing humans with machines. But “onshoring” work that was previously offshored will offer cold comfort to workers if machines get most of the jobs.

ROBO-RAPTURE

According to the first view – let us call it “job enrichment” – technology will eventually create more, better human jobs than it destroys, as has always been the case in the past. Simple, mundane tasks may increasingly be automated, but human labor will then be freed up for more “interesting” and “creative” cognitive work.

In late 2017, the McKinsey Global Institute (MGI) published Jobs Lost, Jobs Gained, which claimed that as much as 50% of working hours in the global economy could theoretically be automated; the authors suggested, however, that not more than 30% actually would be. Further, they estimated that less than 5% of occupations could be fully automated; but that in 60% of occupations, at least 30% of the required tasks could be.

In line with the usual mainstream assessment, MGI believes that while there will be no net loss of jobs in the long run, the “transition may include a period of higher unemployment and wage adjustments.” It all depends, the authors say, on the rate at which displaced workers are re-employed: a low re-employment rate will lead to a higher medium-term unemployment rate, and vice versa.

MGI’s proposal for massive investment in education to lower the unemployment cost of the transition is also conventional. The faster the labor reabsorption, the higher the wage growth. Lower re-employment levels will cause wages to fall, with a greater share of the gains from automation accruing to capital, not labor. But the authors hasten to add:

“Even if the particulars of historical experience turn out to differ from conditions today, one lesson seems pertinent: although economies adjust to technological shocks, the transition period is measured in decades, not years, and the rising prosperity may not be shared by all.”

This assessment is typical, and it has led many to call on governments to invest heavily in so-called “upskilling” programs. In a commentary for Project Syndicate, Zia Qureshi of the Brookings Institution argues that, “with smart, forward-looking policies, we can … ensure that the future of work is a better job.” In this view, automation is simply the continuation of the move toward more, higher-quality jobs that has characterized capitalist growth since the Industrial Revolution.

History is on the optimists’ side. Mechanization has been the durable engine of productivity and wage growth as well as reductions in working hours, albeit usually with a considerable lag. Although the Roberts loom cost hundreds of thousands of handloom weavers their jobs in the nineteenth century, the broader wave of new industrial technologies enabled a much larger population to be maintained at a higher standard of living.

ROBO-REDUNDANCY

But, according to the second view – call it “job destruction” – this time is different. The programming of machines to perform ever more complex tasks with ever-increasing speed, accuracy, precision, and reliability will result in mass unemployment. In Rise of the Robots, author and entrepreneur Martin Ford addresses the techno-optimists head-on. “There is a widely held belief – based on historical evidence stretching back at least as far as the industrial revolution – that while technology may certainly destroy jobs, businesses, and even entire industries, it will also create entirely new occupations … often in areas that we can’t yet imagine.” The problem, Ford argues, is that information technology has now reached the point where it can be considered a true utility, much like electricity.

It stands to reason that the successful new industries that will emerge in the years ahead will have taken full advantage of this powerful new utility and the distributed machine intelligence that accompanies it. That means they will rarely – if ever – be highly labor-intensive. The threat is that as creative destruction unfolds, the “destruction” will fall primarily on labor-intensive businesses in traditional areas like retail and food preparation, whereas the “creation” will generate new industries that simply don’t employ many people.

On this view, the economy is heading for a tipping point where job creation will begin to fall consistently short of what is required to employ the workforce fully. We will soon reach the stage where the machine-driven destruction of existing human jobs far outpaces the creation of new human jobs, resulting in inexorably rising mass “technological unemployment.”

THE UPSKILLING MIRAGE

Optimists’ response to such concerns is that the workforce simply needs to be trained or upskilled in order to “race with the machines.” Typical of this outlook is the following headline on a commentary published by the World Economic Forum: “How new technologies can create huge numbers of meaningful jobs.” According to the author, concerns about “the looming devastation that self-driving technology will have on the 3.5 million truck drivers in the US” are “misdirected.” Augmented-reality technology, we are told, can create loads of new jobs by enabling people to work from home. All that will be needed is training of the kind offered by “Upskill, an augmented reality company in the manufacturing and field services sectors,” which “uses wearable technologies to provide step-by-step instructions to industrial workers.”

The author, himself the co-founder of an augmented-reality company, goes on to argue that, “With the pace of technological progress only accelerating and with increasing specialization becoming the norm in every industry, reducing the time necessary to retrain workers is pivotal to maintaining the competitiveness of industrialized economies.” There is no mention of the wages that will be offered to these “upskilled” workers in their “meaningful” new jobs. We are simply told that they will be relocated to “lower cost areas more in need of job creation.” Only at the very end of the commentary does the author acknowledge that, in fact, “Technology is a force that has the potential to eliminate entire industries through robotics and automation, and for that we should be concerned.”

The retraining argument should give us pause. In portraying upskilling as the solution to the labor displacement caused by new technologies, optimists rarely admit that if predictions about “thinking robots” turn out to be anywhere near true, workers would need to be trained in technical skills to an extent that is unprecedented in human history.

Moreover, the time it takes to upgrade the skills of the workforce will inevitably exceed the time it takes to automate the economy. This will be true even if claims about an imminent deluge of automation are greatly exaggerated. In the interval, there will be under- and unemployment. In fact, this has already been happening. Although automation is not yet bearing down on workers to the extent that has been predicted, it has nonetheless pushed more of them into less-skilled jobs; and its mere possibility may be exerting downward pressure on wages. There are already signs of the new class structure envisioned by the pessimists: “lovely jobs at the top, lousy jobs at the bottom.”

A more fundamental question is what we mean by upskilling, and what its consequences might be. Often, heavy emphasis is placed on the importance of better technological education at all levels of society, as if all people will need to succeed in the future is to be taught how to write and understand computer code.

As the technology writer James Bridle has shown, this line of argument has a number of limitations. While encouraging people to take up computer programming might be a good start, such training offers only a functional understanding of technological systems. It does not equip people to ask higher-level questions along the lines of, “Where did these systems come from, who designed them and what for, and which of these intentions still lurk within them today?” Bridle also points out that arguments for technological education and upskilling are usually offered in “nakedly pro-market terms,” following a simple equation: “the information economy needs more programmers, and young people need jobs in the future.”

THE MISSING DIMENSION

More to the point, the upskilling discourse totally ignores the possibility that automation could also allow people simply to work less. The reason for this neglect is twofold: it is commonly assumed that human wants are insatiable, and that we will thus work ad infinitum to satisfy them; and it is simply taken for granted that work is the primary source of meaning in human lives.

Historically, neither of these claims holds true. The consumption race is a rather recent phenomenon, dating no earlier than the late nineteenth century. And the possibility that we might one day liberate ourselves from the “curse of work” has fascinated thinkers from Aristotle to Russell. Many visions of Utopia betray a longing for leisure and liberation from toil. Even today, surveys show that people in most developed countries would prefer to work less, even in the workaholic United States, and might even accept less pay if it meant logging fewer hours on the clock.

The deeply economistic nature of the current debate excludes the possibility of a future in which we simply work less. Yet if we want to meet the challenges of the future, it is not enough to know how to code, analyze data, and invent algorithms. We need to start thinking seriously and at a systemic level about the operational logic of consumer capitalism and the possibility of de-growth.

In this process, we must abandon the false dichotomy between “jobs” and “idleness.” Full employment need not mean full-time employment, and leisure time need not be spent idly. (Education can play an important role in ensuring that it is not.) Above all, wealth and income will need to be distributed in such a way that machine-enabled productivity gains do not accrue disproportionately to a small minority of owners, managers, and technicians.

A Post-Election Reckoning for British Politics

Leaving the European Union on January 31, 2020, will be UK Prime Minister Boris Johnson’s repayment of the debt he owes to the many Labour supporters who “lent” his Conservatives their votes. But “getting Brexit done” won’t be enough for the Tories to hold on to their parliamentary seats.
LONDON – Speaking outside No. 10 Downing Street following his emphatic election victory, British Prime Minister Boris Johnson thanked long-time Labour supporters for having “lent” his Conservative Party their votes. It was a curious phrase, whose meaning depended entirely on context. The Tories had breached Labour’s strongholds in the Midlands and North East England on the promise of “getting Brexit done.” Leaving the European Union, as Britain will on January 31, 2020, will be Johnson’s repayment of the debt he owes these voters.
But “getting Brexit done” won’t be enough for the Tories to hold on to their parliamentary seats, as Johnson recognized. The Conservatives, he said, will need to turn themselves once again into a “one nation” party. For its part, if Labour is to regain its heartlands, it will need to find a way of reconnecting with its alienated supporters.
What this double reconfiguration entails is reasonably clear. The Conservatives will need to break with Thatcherite economics, and Labour will need to loosen its embrace of minorities and minority culture. Both will need to move back to a middle ground. The libertarian dream of a free market in both economics and morals does not resonate with an economically interventionist but socially conservative electorate.
Brexit was a reaction to economic betrayal, the British version of a European-wide revolt by what French President Emmanuel Macron called the “left-behinds.” This label is precisely right as a description, but overwhelmingly wrong as a prescription, for it suggests that the future is technologically determined, and that people simply will have to adapt to it. The state’s duty, according to this view, is to enable the left-behinds to board the cost-cutting, labor-shedding bullet express, whereas what most people want is a reasonably secure job that pays a decent wage and gives them a sense of worth.
No one would deny that governments have a vital role to play in providing people with the employment skills they need. But it is also governments’ task to manage the trade-off between security and efficiency so that no sizeable fraction of the population is left involuntarily unemployed.
Guaranteed full employment was the key point of consensus of the Keynesian economics of the 1950s and 1960s, embraced by right and left, with the political battle centered on questions of wealth and income distribution. This is the kind of dynamic center the Conservatives should try to regain.

Any Toryism that seeks to be genuinely “one nation” must acknowledge that the fiscal austerity that the Conservatives imposed on the country from 2010 to 2017 caused great and unnecessary harm to millions of people. The Tories must show that they understand why austerity was wrong in those circumstances, and that the purpose of the budget is not to balance the government’s accounts, but to balance the economy at full employment. Deficits and surpluses reflect the state of the economy. This means that no effort should be made to cut the deficit when the economy is shrinking or to expand it when the economy is growing, because that produces deflation in the downswing and inflation in the upswing – exactly the opposite of what is needed. George Osborne’s greatest contribution to Toryism now would be to explain where and how, as chancellor of the exchequer, he went wrong between 2010 and 2016.
A party pledged to govern from the center should implement policies to stabilize the labor market. These should include a permanent public investment program aimed at rebalancing the United Kingdom’s regions and “greening” its infrastructure, together with a buffer of guaranteed public-sector jobs that inflates and deflates automatically with economic downturns and upturns. The beauty of the second lies precisely in its automaticity, guarding it against the charge of being at the mercy of vote-hungry politicians.
Together, these policies would limit business fluctuations, rebalance the economy geographically, and lay the ecological foundations for future growth. What they imply is a deceleration of the rush to automate and globalize, regardless of social cost.
Labour, for its part, needs to recognize that most of its voters are culturally conservative, which became clear with respect to Brexit. The election result disclosed a culture gap between Remainers and Leavers, which for a subset of London and university-campus-based Remainers amounted to a culture war between a politically correct professional class and a swath of the population routinely dubbed stupid, backward, and undereducated, or, more generously, misinformed. One symptom of this gap was the common media depiction of Johnson as a “serial liar,” as though it was his mendacity that obscured from befuddled voters the truth of their situation.
Political correctness ramifies through contemporary culture. I first became aware of a cultural offensive against traditional values in the 1970s, when school history textbooks started to teach that Britain’s achievements were built on the exploitation of colonial peoples, and that people should learn to feel suitably apologetic for the behavior of their forebears. Granted that much history is myth-making, no community can live without a stock of myths in which it can take pride. And “normal” people don’t want to be continuously told that their beliefs, habits, and prejudices are obsolete.
In the continuous evolution of cultural norms, therefore, a new balance needs to be struck between the urge to overthrow prejudice and the need to preserve social cohesion. Moreover, whereas the phrase “left behind” may reasonably describe the situation of the economically precarious, it is quite wrong as a cultural description. There are too many cultural left-behinds, and their cultural “re-skilling” will take much longer than any economic re-skilling. But such re-skilling is not the right prescription. Metropolitan elites have no right to force their norms on the rest of the country. Labour will need to remember that “normal” people want a TransPennine railway much more than a transgender future.
In short, just as the right went wrong in forcing economic individualism down people’s throats, so the left has gone wrong in its contempt for majority culture. In the UK, the price for elite incapacity in both areas has been Brexit; in Europe and the United States generally, it has been the growth of populism.
Economic and cultural utopians alike are destroyers: they want to tear down what has been built in order to create something more perfect. The dream of perfection is the death of statesmanship. Politicians who aspire to govern on behalf of the whole community should aim not for the best possible result, but the best result possible.

China’s Quest for Legitimacy

December 3, 2019

The conventional Western view is that China faces the alternatives of integrating with the West, trying to destroy it, or succumbing to domestic violence and chaos. But the Chinese scholar Lanxin Xiang instead proposes a constitutional regime based on a modernized Confucianism.
LONDON – Liberal democracy faces a legitimacy crisis, or so we are repeatedly told. People distrust government by liberal elites, and increasingly believe that the democracy on offer is a sham. This sentiment is reflected in the success of populists in Europe and the United States, and in the authoritarian tilt of governments in Turkey, Brazil, the Philippines, and elsewhere. In fact, liberal democracy is not only being challenged in its European and American heartlands, but also has failed to ignite globally.
Democracies, it is still widely believed, do not go to war with each other. Speaking in Chicago in 1999, the United Kingdom’s then-prime minister, Tony Blair, averred that, “The spread of our values makes us safer,” prompting some to recall Francis Fukuyama’s earlier prediction that the global triumph of liberal democracy would spell the end of history. The subsequent failure of Russia and China to follow the Fukuyama script has unsurprisingly triggered fears of a new cold war. Specifically, the economic “rise of China” is interpreted as a “challenge” to the West.
On this reading, peaceful transfers of international power are possible only between states that share the same ideology. In the first half of the twentieth century, therefore, Britain could safely “hand over the torch” to the US, but not to Germany. Today, so the argument goes, China poses an ideological as well as a geopolitical challenge to a decaying Western hegemony.
This perspective, however, is vigorously contested by the Chinese scholar Lanxin Xiang. In his fascinating new book The Quest for Legitimacy in Chinese Politics, Xiang shifts the spotlight from the crisis of rule in the West to the crisis of rule in China.
In one sense, this is familiar territory. Western political scientists have long believed that constitutional democracy is the only stable form of government. They therefore argue that China’s one-party state, imported from Bolshevism, is doomed, with the current protests in Hong Kong foreshadowing the mainland’s fate.
Xiang’s contribution lies in challenging the conventional Western view that China faces the alternatives of integrating with the West, trying to destroy it, or succumbing to domestic violence and chaos. Instead, he proposes a constitutional regime with Chinese characteristics, based on a modernized Confucianism.