What Would Keynes Have Done?

In the long-run, Covid-19 may well change the way we work and live. It may – and should – lead us towards a greener, less consumption-driven economy. The question for now is what to do about the economic devastation it will bring in its wake. Around 730,000 UK jobs were lost between March and July, the biggest quarterly decline since 2009, and unemployment is forecast by the Office for Budget Responsibility to reach its highest level since 1984 (11.9 per cent). 

The coming downturn is as inevitable as the rain announced by blackening clouds. In this respect it is quite unlike the banking collapse of 2008, or even Covid-19 itself, both of which were unforeseen. Remember the Queen’s question in 2008 to a group of economists at the London School of Economics: “Why did no one see it coming?” The approaching unemployment crisis is an expected event, not an unexpected “shock”. Because it is fully anticipated, governments should be in a good position to offset its effects, if not fully, at least in large part, provided they know what to do. But the theoretical vacuum lying at the heart of current policymaking discourages any undue optimism that they might.

Admittedly, this will be a most unusual depression. As the New York Times columnist Paul Krugman has noted: “What’s happening now is that we’ve cut down both supply and demand for part of the economy because we think high-contact activities spread the coronavirus.” Businesses have been paid not to do business; their employees paid not to work. As a result, the UK’s GDP contracted by a cumulative 22.1 per cent in the first half of this year compared with the end of 2019 (the largest fall of any G7 country). It does not yet feel like a depression because millions of people’s incomes are being artificially maintained through the Job Retention Scheme. 

But the furlough scheme, as it has become known, is being wound down and will formally end on 31 October. The optimistic expectation is that as businesses reopen and workers return to work, the economy will naturally and speedily revert to its former size. This is called a “V-shaped” recovery. But in many cases there won’t be jobs to go back to, because firms will have folded or continue to be restricted in the amount of business they are allowed to do. Added to this, Britain is mainly a service economy, and one has to consider the effect on spending of compulsory social distancing, plus voluntary resistance to physical contact. In the absence of further measures to support incomes, total demand will soon start falling to the level of the reduced supply, with savage consequences for employment.

But the more fundamental reason for scepticism about future government policy is that public officials and their economic advisers still subscribe to models that assume economies normally do best without government help. Stimulus measures can be justified in an emergency, but they are not seen as part of the policy framework, any more than keeping people in intensive care is seen as a prescription for healthy living. As the Chicago economist Robert Lucas once observed, all governments are "Keynesians in the foxhole". The fact that the stimulus measures advocated by JM Keynes – such as higher public spending and tax cuts – are reserved for emergencies reflects the damage that the neoclassical (or free-market) economics of the 1980s and 1990s inflicted on his theory: damage that has never been repaired.

***

The fundamental feature of today’s neoclassical orthodoxy is a disbelief in the ability of governments permanently to improve the level and direction of economic activity or to alter the distribution of wealth and income. Markets, say mainstream economists, churn out results, which, if not always optimal, cannot be improved on without dire consequences for long-term prosperity. Since the 1980s Western governments have abandoned the full employment, growth and income-equalising targets of the Keynesian social-democratic era. 

Behind this rejection of the beneficial power of government are a number of specific theoretical and policy propositions: that market economies are normally stable; that with flexible wages and prices there can be no unwanted unemployment; that governments are less efficient in allocating capital than private firms; that public budgets should be balanced to prevent governments surreptitiously stealing resources from the private sector; that the only macroeconomic responsibility of government is to maintain "sound money"; and that this task should be outsourced to central banks, which alone can be trusted not to inflate the economy for electoral reasons.

So-called New Keynesians would add numerous qualifications. They would point to the existence of “market imperfections”, which allow for more short-term “policy space” than neoclassical orthodoxy permits. Nevertheless, they are hamstrung by their adherence to economic models that in principle deny the need for, and stress the baleful consequences of, government interference with market forces. Their common sense is stronger than their logic. 

Against this orthodoxy, juxtapose the key Keynesian propositions, which justify a much more robust economic role for the state: the instability of private investment due to uncertainty; the inability of flexible wages and prices to maintain full employment; the power of government policy to improve long-run and not just short-run outcomes; and the importance of the state’s budget for balancing the economy. 

Consider the Keynesian argument for denying that flexible wages will lead to a V-shaped recovery from an economic shock. Every producer, Keynesians argue, is also a consumer. A cut in production costs (wages) simultaneously cuts the community’s spending power and thus, far from hastening recovery, deepens the slump. By the same logic, cutting government spending in a slump makes matters worse, not better.  

For this reason, austerity is likely to fail on its own terms. The former chancellor George Osborne never succeeded in "balancing the budget" in six years of trying. The budget cannot be balanced without a recovery in government revenue; the way to increase government revenue is to increase government spending. This apparent paradox arises only because we think of governments as ordinary households, which cannot "afford" to spend more than their incomes. But the government is a super-household: in a slump its spending creates its own income by enlarging its tax take. That is why fears of a runaway explosion in the national debt are largely illusory. The debt only becomes an unsupportable burden if it grows faster than the economy. Starting from a position in which the economy is shrinking, an increase in government spending will cause the economy to grow faster than the debt.
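
To make this arithmetic concrete, here is a minimal sketch in Python of how the debt-to-GDP ratio evolves over time; the numbers are purely illustrative assumptions, not estimates for the UK. It shows that a larger deficit can coexist with a falling debt ratio, so long as the spending lifts nominal growth above the rate at which the debt itself accumulates.

```python
# Illustrative sketch of debt-to-GDP dynamics; all figures are hypothetical.

def debt_ratio_path(initial_ratio, deficit_share, nominal_growth, years):
    """Evolve the debt-to-GDP ratio with a constant deficit (as a share of
    GDP) and a constant nominal growth rate, for simplicity."""
    ratio = initial_ratio
    path = [round(ratio, 3)]
    for _ in range(years):
        # New borrowing is added; the ratio is then deflated by GDP growth.
        ratio = (ratio + deficit_share) / (1 + nominal_growth)
        path.append(round(ratio, 3))
    return path

# Austerity: a small deficit, but the economy stagnates.
print(debt_ratio_path(1.0, deficit_share=0.02, nominal_growth=0.00, years=5))
# -> the ratio climbs from 1.0 to 1.1

# Stimulus: a larger deficit, but nominal GDP grows faster than the debt.
print(debt_ratio_path(1.0, deficit_share=0.05, nominal_growth=0.06, years=5))
# -> the ratio falls below 1.0 despite the bigger deficit
```

The toy numbers matter less than the direction of travel: whether the burden grows depends on the gap between growth and new borrowing, which is the sense in which spending in a slump can pay for itself.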

And as for the neoclassical view that public investments are bound to be wasted, Keynes replied that even the most wasteful conceivable public investment is less wasteful than unemployment.

***

The traditional Keynesian response to a downturn in demand is to stimulate the economy through a mixture of fiscal and monetary measures: on the fiscal side by cutting taxes or by increasing public spending; on the monetary side by "printing" money. Such stimulus packages are intended to reverse the fall in total demand, leading to a recovery in economic output and employment. An example of fiscal stimulus from the UK government in 2009 was the car scrapping incentive scheme, whereby car owners received a £2,000 discount from the Treasury when trading in their old car for a newer model. The enlarged market for newer cars led to an increase in car production and sales, which led to increased employment in the motor car industry, and this helped sustain employment elsewhere. The Eat Out to Help Out scheme, which offered restaurant diners a discount of up to £10 per head, is a more recent example.
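
The chain described above, in which one person's spending becomes another's income and is partly re-spent, is the Keynesian multiplier. A minimal sketch, using an assumed (hypothetical) marginal propensity to consume rather than any estimated figure, shows how an initial outlay of £2,000 generates a larger total increase in demand.

```python
# Illustrative sketch of the spending multiplier; the MPC is an assumption.

def total_spending(initial_outlay, mpc, rounds=50):
    """Sum successive rounds of spending, where each round re-spends a
    fraction (the marginal propensity to consume) of the income received."""
    total, spending = 0.0, initial_outlay
    for _ in range(rounds):
        total += spending
        spending *= mpc
    return total

outlay = 2_000   # e.g. one scrappage-scheme discount, in pounds
mpc = 0.6        # hypothetical share of extra income that is re-spent

print(round(total_spending(outlay, mpc)))   # about 5,000 after 50 rounds
print(round(outlay / (1 - mpc)))            # 5,000: the textbook 1/(1 - MPC)
```

In practice the multiplier is smaller than the closed form suggests, because part of each round leaks into saving, taxes and imports; the sketch only illustrates why the initial outlay understates the total effect on demand.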

However, such is the fear that government spending equates to “socialism” (think of the phrase “socialised medicine”) that even today’s Keynesians would prefer to stimulate the economy through unconditional cash grants to private individuals, rather than direct government spending. But cutting taxes (equivalent to giving people extra cash) will not increase employment if people are reluctant to spend; new money issued by the central bank won’t increase spending if it goes straight into cash reserves. Even negative real interest rates won’t prompt businesses to borrow if their expectation of profit is zero. The truth is that indirect stimulus won’t stimulate anything much in the face of a widespread collapse in consumer and investor confidence. Only direct state spending will do the job.

I would frame my anti-slump measures around a robust Keynesian model. The war against the economic consequences of Covid-19 must be fought with the weapons of public investment and job creation.

One result of the discrediting of Keynesian theory has been the collapse of state investment: the UK government’s share of total investment fell from an average of 47.3 per cent in 1948-76 to 18.4 per cent in 1977-2007. This left the economy much more dependent on the variable expectations of the business community. More pertinently for today, it left the public health services denuded of capacity to cope with the pandemic, and unduly reliant on foreign supply chains for essential medical equipment. A sound principle in today’s world is that all the goods and services necessary to maintain the health and security of the nation should be produced within its own borders, or those of its close political allies. If that means curtailment of market-led globalisation, so be it. 

More generally, I would immediately expand and accelerate all public construction and procurement projects – infrastructure, social housing, schools, hospitals – taking the opportunity to make them energy efficient. However, not all public investment needs to be performed directly by the state. I would create a state-holding company to take equity shares in private firms that are needed in the national interest. In today’s insecure world, no country can afford to leave the direction of its economic life, especially its scientific and technological direction, to the vagaries of global market forces. 

Secondly, I would replace the furlough scheme with a public sector job and training guarantee. This would cut off the coming jobs crisis at its root. Ideally, it should be part of a permanent system for ending the unemployment that has scarred all economies since the Industrial Revolution. Every person of working age able and willing to work who cannot find work in the private sector at the minimum wage should be offered a public-sector job or training at the minimum wage. Such a scheme, by guaranteeing work for all those able and willing to work, would fulfil the old trade union demand of “work or maintenance”. 

If this system were in place there would be no need for minimum wage legislation, since anyone offered a private-sector job at below the minimum wage would have the alternative of a higher-paid public-sector job. Periodic upward adjustment of the public-sector minimum wage would substitute an upward for a downward pressure on wage levels throughout the economy.  

There are two further advantages of a public-sector job guarantee. First, it would be a much more powerful automatic stabiliser than unemployment benefit. At present, the government’s budget deficit expands automatically in a slump as state revenues fall and public spending on income support rises. This limits the fall in economic activity, but does not avoid it. Under the job guarantee scheme, although government spending would rise more than at present, private incomes and therefore public revenues would be better maintained, not only minimising the recession, but ensuring that much of the enlarged budget deficit would be self-liquidating. 
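
A back-of-the-envelope comparison, in which every figure is a hypothetical assumption, illustrates the sense in which the extra spending on a job guarantee would be partly self-liquidating compared with paying unemployment benefit.

```python
# Hypothetical net fiscal cost per displaced worker: benefit vs job guarantee.
# Every number is an assumption chosen only to illustrate the mechanism.

benefit = 6_000          # annual unemployment benefit paid (pounds)
guarantee_wage = 18_000  # annual minimum-wage public-sector job (pounds)
tax_share = 0.25         # share of the wage flowing straight back as tax
value_of_output = 8_000  # assumed value of the work actually performed

# Option 1: pay benefit; little tax returns and no output is produced.
net_cost_benefit = benefit

# Option 2: pay a wage; part returns as tax, and useful work gets done.
net_cost_guarantee = guarantee_wage * (1 - tax_share) - value_of_output

print(net_cost_benefit)     # 6000
print(net_cost_guarantee)   # 5500: gross spending triples, net cost does not
```

The sketch leaves out the further revenue recovered as the guaranteed wage is spent locally – the multiplier effect described earlier – which would narrow the gap further.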

A second advantage would be the stimulus that a job guarantee would provide for decentralisation. The programme would be funded nationally, but would be administered locally by a variety of agencies: local governments, NGOs and social enterprises. Each would be tasked with creating “on-the-spot” employment opportunities where they are most needed (environmental, civic, and human care), matching unfilled community needs with unemployed or underemployed people. Good models would be Franklin D Roosevelt’s Works Progress Administration and Civilian Conservation Corps, which provided millions of local jobs to the unemployed, often with a strong green slant. Local authorities might even offer prizes for residents who devise the boldest and most imaginative ideas.

A frequent criticism of such public work schemes is that they would simply “make work”. This is to take at face value Keynes’s off-the-cuff remark that if the unemployed were sent to dig up old bottles full of banknotes, there need be no more unemployment. People never quote the follow-up: “It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing.” Of course, there are many more sensible things that need doing in every community, which only a petrified imagination stops from being conceived and carried out.

If public money is to be spent – and much more will need to be spent in the years ahead – it is much better spent creating work than maintaining millions in idleness waiting for the private economy to heal itself. The big idea which needs to be grasped is that state-created work is itself part of the healing process. Not only does it add value to the community, but it expands the market for goods and services, which the private sector needs to return to health.

Keynes was convinced that if democracies failed to tackle mass unemployment, people would turn to dictatorships. He gave democracies a programme of action. We must build on it today. The economics profession has a special responsibility to show the way, which it has shamefully shirked.

Robert Skidelsky is the author of a three-volume biography of J M Keynes, a cross-bench peer and emeritus professor of political economy at the University of Warwick. His most recent book is Money and Government: The Past and Future of Economics

The Monetarist Fantasy Is Over

Feb 17, 2020 ROBERT SKIDELSKY
UK Prime Minister Boris Johnson, determined to overcome Treasury resistance to his vast spending ambitions, has ousted Chancellor of the Exchequer Sajid Javid. But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy.
LONDON – The forced resignation of the United Kingdom’s Chancellor of the Exchequer, Sajid Javid, is the latest sign that macroeconomic policy is being upended, and not only in the UK. In addition to completing the ritual burial of the austerity policies pursued by UK governments since 2010, Javid’s departure on February 13 has broader significance.
Prime Minister Boris Johnson is determined to overcome Treasury resistance to his vast spending ambitions. The last time a UK prime minister tried to open the government spending taps to such an extent was in 1964, when Labour’s Harold Wilson established the Department of Economic Affairs to counter Treasury hostility to public investment. Following the 1966 sterling crisis, however, the hawk-eyed Treasury re-established control, and the DEA was soon abolished. The Treasury, the oldest and most cynical department of government, knows how to bide its time.
But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy. After World War II, stabilization policy, the brainchild of John Maynard Keynes, started off as strongly fiscal. The government’s budget, the argument went, should be used to balance an unstable economy at full employment.
In the 1970s, however, came the monetarist counter-revolution, led by Milton Friedman. The only stabilizing that a capitalist market economy needed, Friedman said, was of the price level. Provided that inflation was controlled by independent central banks and government budgets were kept “balanced,” economies would normally be stable at their “natural rate of unemployment.” From the 1980s until the 2008 global financial crisis, macroeconomic policy was conducted in Friedman’s shadow.
But now the pendulum has swung back. The reason is clear enough: monetary policy failed to anticipate, and therefore prevent, the Great Recession of 2008-09, and failed to bring about a full recovery from it. In many countries, including the UK, average real incomes are still lower than they were 12 years ago.
Disenchantment with monetary policy is running in parallel with a much more positive reading of US President Barack Obama’s 2008-09 fiscal boost, and a much more negative view of Europe’s post-slump fiscal austerity programs. A notable turning point was the 2013 rehabilitation of fiscal multipliers by the International Monetary Fund’s then-chief economist Olivier Blanchard and his colleague Daniel Leigh. As Blanchard recently put it, fiscal policy “has been underused as a cyclical tool.” Now, even prominent central bankers are calling for help from fiscal policy.
The theoretical case against relying on monetary policy for stabilization goes back to Keynes. “If, however, we are tempted to assert that money is the drink which stimulates the system to activity,” he wrote, “we must remind ourselves that there may be several slips between the cup and the lip.” More prosaically, the monetary pump is too leaky. Too much money ends up in the financial system, and not enough in the real economy.
Mark Carney, the outgoing governor of the Bank of England, recently admitted as much, saying that commercial banks had been “useless” for the real economy after the slump started, despite having had huge amounts of money thrown at them by central banks. In fact, orthodox theory still struggles to explain why trillions of dollars’ worth of quantitative easing, or QE, remains stuck in assets offering a negative real rate of interest.
Kenneth Rogoff of Harvard recently argued that fiscal stabilization policy “is far too politicized to substitute consistently for modern independent technocratic central banks.” But instead of considering how this defect might be overcome, Rogoff sees no alternative to continuing with the prevailing monetary-policy regime – despite the overwhelming evidence that central banks are unable to play their assigned role. At least fiscal policy might in principle be up to the task of economic stabilization; there is no chance that central banks will be.
This is due to a technical reason, the validity of which was established both before and after the collapse of 2008. Simply put, central banks cannot control the aggregate level of spending in the economy, which means that they cannot control the price level and the aggregate level of output and employment.
A less skeptical observer than Rogoff would have looked more closely at proposals to strengthen automatic fiscal stabilizers, rather than dismissing them on the grounds that they will have (bad) “incentive effects” and that policymakers will override them on occasion. For example, a fair observer would at least be open to the idea of a public-sector job guarantee of the sort envisaged by the 1978 Humphrey-Hawkins Act in the US, which authorized the federal government to create “reservoirs of public employment” to balance fluctuations in private spending.
Those reservoirs would automatically be depleted and refilled as the economy waned and waxed, thus creating an automatic stabilizer. The Humphrey-Hawkins Act, had it been implemented, would have greatly reduced politicians’ discretion over counter-cyclical policy, while creating a much more powerful stabilizer than the social-security systems on which governments now rely.
To be sure, both the design and implementation of such a job guarantee would give rise to problems. But for both political and economic reasons, one should try to tackle them rather than concluding, as Rogoff does, that, “with monetary policy hampered and fiscal policy the main game in town, we should expect more volatile business cycles.” We have the intelligence to do better than that.

The Terrorism Paradox

There was, all too predictably, no shortage of political profiteering in the wake of November’s London Bridge terror attack, in which Usman Khan fatally stabbed two people before being shot dead by police. In particular, the United Kingdom’s prime minister, Boris Johnson, swiftly called for longer prison sentences and an end to “automatic early release” for convicted terrorists.

In the two decades since the September 11, 2001, terror attacks in the United States, terrorism has become the archetypal moral panic in the Western world. The fear that terrorists lurk behind every corner, plotting the wholesale destruction of Western civilization, has been used by successive British and US governments to introduce stricter sentencing laws and much broader surveillance powers – and, of course, to wage war.

In fact, terrorism in Western Europe has been waning since the late 1970s. According to the Global Terrorism Database (GTD), there were 996 deaths from terrorism in Western Europe between 2000 and 2017, compared to 1,833 deaths in the 17-year period from 1987-2004, and 4,351 between 1970 (when the GTD dataset begins) and 1987. Historical amnesia has increasingly blotted out the memory of Europe’s homegrown terrorism: the Baader-Meinhof gang in Germany, the Red Brigades in Italy, the IRA in the UK, Basque and Catalan terrorism in Spain, and Kosovar terrorism in the former Yugoslavia.

The situation is clearly different in the US – not least because the data are massively skewed by the 9/11 attacks, in which 2,996 people died. But even if we ignore this anomaly, it is clear that, since 2012, deaths from terrorism in America have been rising steadily, reversing the previous trend. Much of this “terrorism,” however, is simply a consequence of having so many guns in civilian circulation.

To be sure, Islamist terrorism is a real threat, chiefly in the Middle East. But two points need to be emphasized. First, Islamist terrorism – like the refugee crisis – was largely a result of the West’s efforts, whether hidden or overt, to achieve “regime change.” Second, Europe is in fact much safer than it used to be, partly because of the influence of the European Union on governments’ behavior, and partly because of improved anti-terrorist technology.

Yet, as the number of deaths from terrorism declines (at least in Europe), alarm about it grows, offering governments a justification for introducing more security measures. This phenomenon, whereby our collective reaction to a social problem intensifies as the problem itself diminishes, is known as the “Tocqueville effect.” In his 1840 book Democracy in America, Alexis de Tocqueville noted that, “it is natural that the love of equality should constantly increase together with equality itself, and that it should grow by what it feeds on.”

Moreover, there is a related phenomenon that we can call the Baader-Meinhof effect: once your attention is drawn to something, you begin to see it all the time. These two effects explain how our subjective estimates of risk have come to diverge so sharply from the actual risks we face.

In fact, the West has become the most risk-averse civilization in history. The word itself comes from the Latin risicum, which was used in the Middle Ages only in very specific contexts, usually relating to seafaring trades and the emerging maritime insurance business. In the courts of the sixteenth-century Italian city-states, rischio referred to the lives and careers of courtiers and princes, and their ensuing risks. But the word was not frequently used. It was far more common to attribute successes or failures to an external source: fortune, or fortuna. Fortune was unpredictability’s avatar. Its human counterpart was prudence, or the Machiavellian virtu.

In the early modern period, nature acted upon humans, whose only rational response was to choose between reasonable expectations. Only with the scientific revolution did the modern discourse of risk begin to flower. Modern humanity acts upon and controls the natural world, and therefore calculates the degree of danger it poses. As a result, tragedy need no longer be a normal feature of life.

The German sociologist Niklas Luhmann argued that, once individual actions came to be seen to have calculable, predictable, and avoidable consequences, there was no hope of returning to that pre-modern state of blissful ignorance, wherein the course of future events was left to the fates. As Luhmann cryptically put it, “The gate to paradise remains sealed by the term risk.”

Economists, too, believe that all risk is measurable and therefore controllable. In that respect, they are bedfellows with those who tell us that security risks can be minimized by extending surveillance powers and enhancing the techniques by which we gather information about potential terror threats. A risk, after all, is the degree to which future events are uncertain, and – as Claude Shannon, the founder of information theory, wrote – “information is the resolution of uncertainty.”

There is a clear benefit to being safer, but it comes at the price of an unprecedented intrusion into our private lives. Our right to information privacy, now guaranteed by the EU’s General Data Protection Regulation, is increasingly in direct conflict with our demand for security. Omnipresent devices that see, hear, read, and record our behavior produce a glut of data from which inferences, predictions, and recommendations can be made about our past, present, and future actions. In the face of the adage “knowledge is power,” the right to privacy withers.

Furthermore, there is a conflict between safety and wellbeing. To be perfectly safe is to eliminate the cardinal human virtues of resilience and prudence. The perfectly safe human is therefore a diminished person.

For both these reasons, we should stick to the facts and not give governments the tools they increasingly demand to win the “battle” against terrorism, crime, or any other technically avoidable misfortune that life throws up. A measured response is needed. And when it comes to the chaos and mess of human history, we should recall Heraclitus’s observation that “a thunderbolt steers the course of all things.”

Economic Possibilities for Ourselves

The most depressing feature of the current explosion in robot-apocalypse literature is that it rarely transcends the world of work. Almost every day, news articles appear detailing some new round of layoffs. In the broader debate, there are apparently only two camps: those who believe that automation will usher in a world of enriched jobs for all, and those who fear it will make most of the workforce redundant.

This bifurcation reflects the fact that “working for a living” has been the main occupation of humankind throughout history. The thought of a cessation of work fills people with dread, for which the only antidote seems to be the promise of better work. Few have been willing to take the cheerful view of Bertrand Russell’s provocative 1932 essay In Praise of Idleness. Why is it so difficult for people to accept that the end of necessary labor could mean barely imaginable opportunities to live, in John Maynard Keynes’s words, “wisely, agreeably, and well”?

The fear of labor-saving technology dates back to the start of the Industrial Revolution, but two factors in our own time have heightened it. The first is that the new generation of machines seems poised to replace not only human muscles but also human brains. Owing to advances in machine learning and artificial intelligence, we are said to be entering an era of thinking robots; and those robots will soon be able to think even better than we do. The worry is that teaching machines to perform most of the tasks previously carried out by humans will make most human labor redundant. In that scenario, what will humans do?

The other fear factor is the increasing precariousness of wage labor – though this concern is seemingly belied by headline statistics suggesting that unemployment is at a historic low. The problem is that an economy at “full employment” now contains a large penumbra of what economist Guy Standing calls the “precariat”: under-employed people who work less and for lower pay than they would like. A growing number of workers, seeming to lack any kind of job (and pay) security, are thus forced to work well below their ability.

It is natural that one would interpret the onset of precariousness as the first stage in a broader trend toward workforce redundancy, especially if one pays attention to alarmist predictions of the next category of “jobs at risk.” But this conclusion is premature. The penetration of robotics into the world of work has not yet been sufficient to explain the rise of the precariat. So far, “cost cutting” in the West has largely taken the form of offshoring to the East, where labor is cheaper, rather than replacing humans with machines. But “onshoring” work that was previously offshored will offer cold comfort to workers if machines get most of the jobs.

ROBO-RAPTURE

According to the first view – let us call it “job enrichment” – technology will eventually create more, better human jobs than it destroys, as has always been the case in the past. Simple, mundane tasks may increasingly be automated, but human labor will then be freed up for more “interesting” and “creative” cognitive work.

In late 2017, the McKinsey Global Institute (MGI) published Jobs Lost, Jobs Gained, which claimed that as much as 50% of working hours in the global economy could theoretically be automated; the authors suggested, however, that not more than 30% actually would be. Further, they estimated that less than 5% of occupations could be fully automated; but that in 60% of occupations, at least 30% of the required tasks could be.

In line with the usual mainstream assessment, MGI believes that while there will be no net loss of jobs in the long run, the “transition may include a period of higher unemployment and wage adjustments.” It all depends, the authors say, on the rate at which displaced workers are re-employed: a low re-employment rate will lead to a higher medium-term unemployment rate, and vice versa.

MGI’s proposal for massive investment in education to lower the unemployment cost of the transition is also conventional. The faster the labor reabsorption, the higher the wage growth. Lower re-employment levels will cause wages to fall, with a greater share of the gains from automation accruing to capital, not labor. But the authors hasten to add:

“Even if the particulars of historical experience turn out to differ from conditions today, one lesson seems pertinent: although economies adjust to technological shocks, the transition period is measured in decades, not years, and the rising prosperity may not be shared by all.”

This assessment is typical, and it has led many to call on governments to invest heavily in so-called “upskilling” programs. In a commentary for Project Syndicate, Zia Qureshi of the Brookings Institution argues that, “with smart, forward-looking policies, we can … ensure that the future of work is a better job.” In this view, automation is simply the continuation of the move toward more, higher-quality jobs that has characterized capitalist growth since the Industrial Revolution.

History is on the optimists’ side. Mechanization has been the durable engine of productivity and wage growth as well as reductions in working hours, albeit usually with a considerable lag. Although the Roberts loom cost hundreds of thousands of handloom weavers their jobs in the nineteenth century, the broader wave of new industrial technologies enabled a much larger population to be maintained at a higher standard of living.

ROBO-REDUNDANCY

But, according to the second view – call it “job destruction” – this time is different. The programming of machines to perform ever more complex tasks with ever-increasing speed, accuracy, precision, and reliability will result in mass unemployment. In Rise of the Robots, author and entrepreneur Martin Ford addresses the techno-optimists head-on. “There is a widely held belief – based on historical evidence stretching back at least as far as the industrial revolution – that while technology may certainly destroy jobs, businesses, and even entire industries, it will also create entirely new occupations … often in areas that we can’t yet imagine.” The problem, Ford argues, is that information technology has now reached the point where it can be considered a true utility, much like electricity.

It stands to reason that the successful new industries that will emerge in the years ahead will have taken full advantage of this powerful new utility and the distributed machine intelligence that accompanies it. That means they will rarely – if ever – be highly labor-intensive. The threat is that as creative destruction unfolds, the “destruction” will fall primarily on labor-intensive businesses in traditional areas like retail and food preparation, whereas the “creation” will generate new industries that simply don’t employ many people.

On this view, the economy is heading for a tipping point where job creation will begin to fall consistently short of what is required to employ the workforce fully. We will soon reach the stage where the machine-driven destruction of existing human jobs far outpaces the creation of new human jobs, resulting in inexorably rising mass “technological unemployment.”

THE UPSKILLING MIRAGE

Optimists’ response to such concerns is that the workforce simply needs to be trained or upskilled in order to “race with the machines.” Typical of this outlook is the following headline on a commentary published by the World Economic Forum: “How new technologies can create huge numbers of meaningful jobs.” According to the author, concerns about “the looming devastation that self-driving technology will have on the 3.5 million truck drivers in the US” are “misdirected.” Augmented-reality technology, we are told, can create loads of new jobs by enabling people to work from home. All that will be needed is training of the kind offered by “Upskill, an augmented reality company in the manufacturing and field services sectors,” which “uses wearable technologies to provide step-by-step instructions to industrial workers.”

The author, himself the co-founder of an augmented-reality company, goes on to argue that, “With the pace of technological progress only accelerating and with increasing specialization becoming the norm in every industry, reducing the time necessary to retrain workers is pivotal to maintaining the competitiveness of industrialized economies.” There is no mention of the wages that will be offered to these “upskilled” workers in their “meaningful” new jobs. We are simply told that they will be relocated to “lower cost areas more in need of job creation.” Only at the very end of the commentary does the author acknowledge that, in fact, “Technology is a force that has the potential to eliminate entire industries through robotics and automation, and for that we should be concerned.”

The retraining argument should give us pause. In portraying upskilling as the solution to the labor displacement caused by new technologies, optimists rarely admit that if predictions about “thinking robots” turn out to be anywhere near true, workers would need to be trained in technical skills to an extent that is unprecedented in human history.

Moreover, the time it takes to upgrade the skills of the workforce will inevitably exceed the time it takes to automate the economy. This will be true even if claims about an imminent deluge of automation are greatly exaggerated. In the interval, there will be under- and unemployment. In fact, this has already been happening. Although automation is not yet bearing down on workers to the extent that has been predicted, it has nonetheless pushed more of them into less-skilled jobs; and its mere possibility may be exerting downward pressure on wages. There are already signs of the new class structure envisioned by the pessimists: “lovely jobs at the top, lousy jobs at the bottom.”

A more fundamental question is what we mean by upskilling, and what its consequences might be. Often, heavy emphasis is placed on the importance of better technological education at all levels of society, as if all people will need to succeed in the future is to be taught how to write and understand computer code.

As the technology writer James Bridle has shown, this line of argument has a number of limitations. While encouraging people to take up computer programming might be a good start, such training offers only a functional understanding of technological systems. It does not equip people to ask higher-level questions along the lines of, “Where did these systems come from, who designed them and what for, and which of these intentions still lurk within them today?” Bridle also points out that arguments for technological education and upskilling are usually offered in “nakedly pro-market terms,” following a simple equation: “the information economy needs more programmers, and young people need jobs in the future.”

THE MISSING DIMENSION

More to the point, the upskilling discourse totally ignores the possibility that automation could also allow people simply to work less. The reason for this neglect is twofold: it is commonly assumed that human wants are insatiable, and that we will thus work ad infinitum to satisfy them; and it is simply taken for granted that work is the primary source of meaning in human lives.

Historically, neither of these claims holds true. The consumption race is a rather recent phenomenon, dating no earlier than the late nineteenth century. And the possibility that we might one day liberate ourselves from the “curse of work” has fascinated thinkers from Aristotle to Russell. Many visions of Utopia betray a longing for leisure and liberation from toil. Even today, surveys show that people in most developed countries would prefer to work less, even in the workaholic United States, and might even accept less pay if it meant logging fewer hours on the clock.

The deeply economistic nature of the current debate excludes this possibility. Yet if we want to meet the challenges of the future, it is not enough to know how to code, analyze data, and invent algorithms. We need to start thinking seriously and at a systemic level about the operational logic of consumer capitalism and the possibility of de-growth.

In this process, we must abandon the false dichotomy between “jobs” and “idleness.” Full employment need not mean full-time employment, and leisure time need not be spent idly. (Education can play an important role in ensuring that it is not.) Above all, wealth and income will need to be distributed in such a way that machine-enabled productivity gains do not accrue disproportionately to a small minority of owners, managers, and technicians.

A Post-Election Reckoning for British Politics

Leaving the European Union on January 31, 2020, will be UK Prime Minister Boris Johnson’s repayment of the debt he owes to the many Labour supporters who “lent” his Conservatives their votes. But “getting Brexit done” won’t be enough for the Tories to hold on to their parliamentary seats.
LONDON – Speaking outside No. 10 Downing Street following his emphatic election victory, British Prime Minister Boris Johnson thanked long-time Labour supporters for having “lent” his Conservative Party their votes. It was a curious phrase, whose meaning depended entirely on context. The Tories had breached Labour’s strongholds in the Midlands and North East England on the promise of “getting Brexit done.” Leaving the European Union, as Britain will on January 31, 2020, will be Johnson’s repayment of the debt he owes these voters.
But “getting Brexit done” won’t be enough for the Tories to hold on to their parliamentary seats, as Johnson recognized. The Conservatives, he said, will need to turn themselves once again into a “one nation” party. For its part, if Labour is to regain its heartlands, it will need to find a way of reconnecting with its alienated supporters.
What this double reconfiguration entails is reasonably clear. The Conservatives will need to break with Thatcherite economics, and Labour will need to loosen its embrace of minorities and minority culture. Both will need to move back to a middle ground. The libertarian dream of a free market in both economics and morals does not resonate with an economically interventionist but socially conservative electorate.
Brexit was a reaction to economic betrayal, the British version of a European-wide revolt by what French President Emmanuel Macron called the “left-behinds.” This label is precisely right as a description, but overwhelmingly wrong as a prescription, for it suggests that the future is technologically determined, and that people simply will have to adapt to it. The state’s duty, according to this view, is to enable the left-behinds to board the cost-cutting, labor-shedding bullet express, whereas what most people want is a reasonably secure job that pays a decent wage and gives them a sense of worth.
No one would deny that governments have a vital role to play in providing people with the employment skills they need. But it is also governments’ task to manage the trade-off between security and efficiency so that no sizeable fraction of the population is left involuntarily unemployed.
Guaranteed full employment was the key point of consensus of the Keynesian economics of the 1950s and 1960s, embraced by right and left, with the political battle centered on questions of wealth and income distribution. This is the kind of dynamic center the Conservatives should try to regain.

Any Toryism that seeks to be genuinely “one nation” must acknowledge that the fiscal austerity that the Conservatives imposed on the country from 2010 to 2017 caused great and unnecessary harm to millions of people. The Tories must show that they understand why austerity was wrong in those circumstances, and that the purpose of the budget is not to balance the government’s accounts, but to balance the economy at full employment. Deficits and surpluses reflect the state of the economy. This means that no effort should be made to cut the deficit when the economy is shrinking or to expand it when the economy is growing, because that produces deflation in the downswing and inflation in the upswing – exactly the opposite of what is needed. George Osborne’s greatest contribution to Toryism now would be to explain where and how, as chancellor of the exchequer, he went wrong between 2010 and 2016.
A party pledged to govern from the center should implement policies to stabilize the labor market. These should include a permanent public investment program aimed at rebalancing the United Kingdom’s regions and “greening” its infrastructure, together with a buffer of guaranteed public-sector jobs that inflates and deflates automatically with economic downturns and upturns. The beauty of the second lies precisely in its automaticity, guarding it against the charge of being at the mercy of vote-hungry politicians.
Together, these policies would limit business fluctuations, rebalance the economy geographically, and lay the ecological foundations for future growth. What they imply is a deceleration of the rush to automate and globalize, regardless of social cost.
Labour, for its part, needs to recognize that most of its voters are culturally conservative, which became clear with respect to Brexit. The election result disclosed a culture gap between Remainers and Leavers, which for a subset of London and university-campus-based Remainers amounted to a culture war between a politically correct professional class and a swath of the population routinely dubbed stupid, backward, and undereducated, or, more generously, misinformed. One symptom of this gap was the common media depiction of Johnson as a “serial liar,” as though it was his mendacity that obscured from befuddled voters the truth of their situation.
Political correctness ramifies through contemporary culture. I first became aware of a cultural offensive against traditional values in the 1970s, when school history textbooks started to teach that Britain’s achievements were built on the exploitation of colonial peoples, and that people should learn to feel suitably apologetic for the behavior of their forebears. Granted that much history is myth-making, no community can live without a stock of myths in which it can take pride. And “normal” people don’t want to be continuously told that their beliefs, habits, and prejudices are obsolete.
In the continuous evolution of cultural norms, therefore, a new balance needs to be struck between the urge to overthrow prejudice and the need to preserve social cohesion. Moreover, whereas the phrase “left behind” may reasonably describe the situation of the economically precarious, it is quite wrong as a cultural description. There are too many cultural left-behinds, and their cultural “re-skilling” will take much longer than any economic re-skilling. But such re-skilling is not the right prescription. Metropolitan elites have no right to force their norms on the rest of the country. Labour will need to remember that “normal” people want a TransPennine railway much more than a transgender future.
In short, just as the right went wrong in forcing economic individualism down people’s throats, so the left has gone wrong in its contempt for majority culture. In the UK, the price for elite incapacity in both areas has been Brexit; in Europe and the United States generally, it has been the growth of populism.
Economic and cultural utopians alike are destroyers: they want to tear down what has been built in order to create something more perfect. The dream of perfection is the death of statesmanship. Politicians who aspire to govern on behalf of the whole community should aim not for the best possible result, but the best result possible.

China’s Quest for Legitimacy

December 3, 2019

The conventional Western view is that China faces the alternatives of integrating with the West, trying to destroy it, or succumbing to domestic violence and chaos. But the Chinese scholar Lanxin Xiang instead proposes a constitutional regime based on a modernized Confucianism.
LONDON – Liberal democracy faces a legitimacy crisis, or so we are repeatedly told. People distrust government by liberal elites, and increasingly believe that the democracy on offer is a sham. This sentiment is reflected in the success of populists in Europe and the United States, and in the authoritarian tilt of governments in Turkey, Brazil, the Philippines, and elsewhere. In fact, liberal democracy is not only being challenged in its European and American heartlands, but also has failed to ignite globally.
Democracies, it is still widely believed, do not go to war with each other. Speaking in Chicago in 1999, the United Kingdom’s then-prime minister, Tony Blair, averred that, “The spread of our values makes us safer,” prompting some to recall Francis Fukuyama’s earlier prediction that the global triumph of liberal democracy would spell the end of history. The subsequent failure of Russia and China to follow the Fukuyama script has unsurprisingly triggered fears of a new cold war. Specifically, the economic “rise of China” is interpreted as a “challenge” to the West.
On this reading, peaceful transfers of international power are possible only between states that share the same ideology. In the first half of the twentieth century, therefore, Britain could safely “hand over the torch” to the US, but not to Germany. Today, so the argument goes, China poses an ideological as well as a geopolitical challenge to a decaying Western hegemony.
This perspective, however, is vigorously contested by the Chinese scholar Lanxin Xiang. In his fascinating new book The Quest for Legitimacy in Chinese Politics, Xiang shifts the spotlight from the crisis of rule in the West to the crisis of rule in China.
In one sense, this is familiar territory. Western political scientists have long believed that constitutional democracy is the only stable form of government. They therefore argue that China’s one-party state, imported from Bolshevism, is doomed, with the current protests in Hong Kong foreshadowing the mainland’s fate.
Xiang’s contribution lies in challenging the conventional Western view that China faces the alternatives of integrating with the West, trying to destroy it, or succumbing to domestic violence and chaos. Instead, he proposes a constitutional regime with Chinese characteristics, based on a modernized Confucianism.

Placido Domingo: cancel culture?

‘People who do really good stuff have flaws,’ said Barack Obama in a recent talk. About the same time I read: ‘Placido Domingo has withdrawn from all future engagements at New York’s Metropolitan Opera [after 51 consecutive years] following allegations of sexual harassment made by several women, including a soprano who said he reached down her robe and grabbed her bare breast’ [The Week, 5 October 2019]. Domingo’s burnished tenor and acting ability have thrilled generations of opera lovers. At 78 it was probably time he hung up his boots. But should he be driven off stage by allegations of sexual impropriety?


I reproduce below two comments I received from friends, the first from a man, the second from a woman, both of whom share my love of opera.

First,
‘In my view, the primary dilemma is between a deontological understanding of ethics, the standards of which are valid across time and space, and a more context-bounded one. Without embracing a radical ethical relativism, I wonder whether it is appropriate to totally ignore the context-boundedness of ethical behavior. I think we should take into account that ethical consciousness (i.e. what people consider ethical standards) changes over time, notwithstanding the fact that some core ethical principles remain unchanged. But even if we embrace a context-insensitive understanding of ethics, I wonder whether the accused persons have no rights at all. Anonymous accusations can destroy lives.’

Second,
‘Domingo has the following problems:
(A.) There are a lot of complainants;
(B.) He was in a position of real power in a business notorious for that power being abused; and, worst of all
(C.) The present atmosphere, especially in the US, is not far off a lynch mob …

I find differences of view are geographical and generational. Our generation – you and I … have an open mind and are wary of mass judgements. Our daughters’ generation can’t get enough of it.

In the USA, Australia and I suspect the UK, where ‘Me Too’ has serious traction, I doubt there is a future for PD … [But] I expect Milan and Berlin to carry on as usual.

Fifty years ago and even more recently such behaviour was accepted. It must be remembered that it works both ways and it would be foolish to believe he was not actively pursued by women working in the business. That should never be forgotten.

As with Karajan, who had a spotty background for other reasons, we keep watching genius at work and separate what may now be classified as ‘no go’.’

A number of interesting moral issues arise. Should we judge the past behaviour of individuals by present standards? My young (24-year-old) research assistant (male) is quite clear about this: ‘What Domingo did was as morally wrong then as it is now, and he knew it. The fact that it was socially acceptable then for men to grope women is no defence. Our generation is just not as hypocritical as yours.’

I find myself in an ambivalent position. On the one hand, Domingo’s behaviour was deplorable, and should not be excused on the ground of ‘customary’ standards.

Against this is the thought that we have created a culture of exploitable victimhood. If you’re not being sexist, you’re being racist. The politician Rory Stewart, campaigning for the mayoralty of London, made the mistake of referring approvingly to the mixed population of Brick Lane as the kind of area ‘where three sort of minor gangsters can come up to me and tell me I am an idiot’. As chance had it, the men who had called him an idiot accused him of racism and demanded that he apologise for ‘trying to take advantage of black boys when it’s convenient, then ridiculing them’. It turned out they objected to being called ‘minor’.

Curiously, for a society which has thrown off so many Puritan inhibitions, we seem to be relentlessly intent on spreading guilt. I prefer the Catholic doctrine of forgiveness. Opera lovers should forgive Placido his transgressions, and enjoy the one or two remaining years of his superb stagecraft.