Spending Review: Beyond Accountancy

The furlough and business support schemes, launched in March, were to end in October to coincide with the reopening of the economy. This meant that the UK economy would be much the same in 2021, give or take some minor “scarring”, as it was in 2019: a year’s growth lost, but that was the limit of the damage.

The expectation of a fourth-quarter bounce-back was always unrealistic: severely damaged economies never “bounce back” unaided. The Chancellor’s response to the “second wave” of infections and lockdowns in October and November was, in essence, to extend his March 2020 job retention measures until next March. The “V” was becoming more like a “U”, and the scarring would be worse, but the assumption seems to have been that very little extra support for businesses and jobs would be needed after March 2021. The £280bn spent “getting the country through COVID-19” would have been spent; the fiscal task ahead was to start paying it back. Revenue would automatically increase as the economy returned to “normal”, but most of the hugely expanded deficit would have to be reduced by raising taxes.

The main realisation underlying the Chancellor’s new Spending Review yesterday, 25 November, was that this second, less rosy, outcome was not going to happen either. The Office for Budget Responsibility (OBR) now forecasts that the UK economy will be 6% smaller, and unemployment twice as high, in 2021 than before the pandemic hit. It was therefore necessary to switch tack, from waiting for economic life to awaken from its frozen slumber to jolting it back into life by positive action. In addition to the money already spent or pledged, Rishi Sunak has promised an extra £55bn to fund those public services in the front line of the fight against COVID-19, especially the NHS.

More interesting, from the economic point of view, are the pledges around investment in future job creation. Capital spending next year will total £100bn, £27bn more than in 2019-20; and local authorities will be able to bid for projects to improve what Sunak called “the infrastructure of local life”: a new bypass, upgraded railway stations, less traffic, more libraries, museums, galleries, better high streets and town centres. The Chancellor has also promised a National Infrastructure Bank, headquartered in the North of England, and tasked with working with the private sector from next spring onwards to finance major new investment projects across the UK.

All of this represents a considerable revolution in thinking, but without the new language which would make this obvious. The new programmes are tagged onto the old programmes, with bits of extra money for them. This is partly because the Treasury’s thinking is still evolving; partly because a Spending Review is not the place to announce a new relationship between the state and the economy. But this is the logical consequence of the Chancellor’s announcement.

The government reacted to the pandemic by putting sticking plaster over the temporary wounds it inflicted on the private economy. What Sunak’s new measures implied was that, in future, the state is going to play a much more active role in maintaining the patient’s health. The accelerated and expanded capital investment programmes (together with the announcement of a National Infrastructure Bank) amount to a reassertion of the state’s investment function, which had withered away after the Thatcher revolution. The endorsement of local authority job creation schemes, together with the expansion of Kickstart, implies that the state cannot, and will not, in future be indifferent to the scourge of unemployment.

At the moment these are only tentative gropings towards a new economic philosophy. Thatcherite neoliberalism was all of a piece. It held that, provided inflation was avoided, the market system could be expected to produce the best outcomes, in the short run and the long run. This encompassed a global market; it included indifference to the distribution of wealth and income. In the neoliberal nirvana, everyone received the “right” rate for the job.

The new philosophy is only half formed. It lacks a language of public responsibility; and it has too many missing paragraphs. Crucially, what should be the role of the financial system, and what measures should be taken to chain it to responsibility? How can a system of national protection be made compatible with market-driven globalisation, which makes national prosperity dependent on global supply chains over which no one has any control? What steps can, or should, governments take to prevent cost-cutting automation from destroying jobs, livelihoods, and communities?

These are the questions which leaked out from the accountant’s words of 25 November. They will not go away.

Lord Skidelsky is emeritus professor of Political Economy at Warwick University; and author of a prize-winning biography of the economist John Maynard Keynes.

Why the West failed to contain COVID-19

The promise of a “final” end to lockdowns in the spring of 2021 is the kind of hyperbole we have come to expect about new products and policies. The Oxford University vaccine may work; it may even be delivered effectively. Meanwhile, Covid-19 is still around, the UK government is extending lockdown for large parts of the country and effective protections are still being ignored, at grave cost.

From the start of the pandemic, the policy choice in Europe has been presented as a trade-off between lives and livelihoods. Since priority was (rightly) attached to saving lives, the livelihoods of large sections of the population have been sacrificed, with income support for workers in the form of long paid holidays called furloughs, and loans and grants for businesses prevented from trading. As a consequence of widespread business distress and associated redundancies, European countries face a huge problem in reopening their economies in the wake of the pandemic. Forecasts suggest that the UK economy will be 6 per cent smaller in 2021 than in 2019, and unemployment at 7.5-8.0 per cent – roughly double its pre-crisis level.

The experience of East Asia shows the choice in Europe was, and continues to be, wrongly presented. Countries such as China, Japan, South Korea and Taiwan found a way of protecting both lives and livelihoods. Their death rates per head of population have been much lower than in Europe; their economies have barely contracted; and they are forecast to be larger, not smaller, next year. Their secret was an effective system of testing, tracking and quarantining. The question is why such a system was not adopted as the first line of defence in Europe. This is not just a historical question. If it were technically feasible in March it is even more so today. It is still not too late to avoid future economic and social damage, even though most of the damage already inflicted cannot be repaired.


Virologists identify the nature of an epidemic, while epidemiologists study the way it spreads. It was an almost-forgotten English doctor, Ronald Ross, who first developed a predictive model of malaria transmission, which was later generalised as the SIR (Susceptible, Infected and Recovered) model of contagious disease epidemics. His successors concluded that a viral infection ends when the virus runs out of hosts in which it can reproduce itself; that is, when the population develops “herd immunity”.

Central to the SIR model is the “R rate”: the average number of people each infected person goes on to infect. The statisticians work out a data-based prediction of the rate of infection in a susceptible population. Politicians, advised by medical scientists, as well as by health professionals who tell them about medical capacity, and economists who tell them about strains on the economy, decide policy. Their skill lies in assessing political reactions to their policies.

“Mass protection” and “focused protection” have been the two main political responses to the models produced by these experts. They have been presented as polar opposites, but are essentially two variants of the same response.

Think of them in terms of breaking the S → I chain. If a virus is circulating in a susceptible population, the effect of a mass lockdown (semi-isolation) is to reduce the transmission rate and thus the overload of medical services. As soon as the lockdown is eased, the spread rate picks up, but because the susceptible population has been reduced by recovery or death, R continues to decline (with smaller spikes) to the point when normal life can be resumed. The treatment is effective, but the economic cost is horrendous.
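The mechanics described above can be sketched in a minimal discrete-time SIR simulation. The parameters below are illustrative assumptions, not a calibrated epidemic model; the point is only that cutting the transmission rate (a “lockdown”) lowers the infection peak at the cost of prolonging the epidemic.

```python
# Minimal discrete-time SIR sketch (illustrative parameters only).
# beta is the daily transmission rate, gamma the daily recovery rate;
# beta / gamma is the familiar reproduction number R0.

def sir_peak(beta, gamma=0.1, n=1_000_000, i0=100, days=365):
    s, i, r = n - i0, i0, 0
    peak = 0
    for _ in range(days):
        new_inf = beta * s * i / n      # S -> I transitions today
        new_rec = gamma * i             # I -> R transitions today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

peak_open = sir_peak(beta=0.3)    # unmitigated spread (R0 = 3)
peak_lock = sir_peak(beta=0.12)   # suppressed transmission (R0 = 1.2)
assert peak_lock < peak_open      # lockdown flattens the peak
```

Running the two scenarios side by side reproduces the trade-off in the text: the lower-beta epidemic overwhelms medical capacity far less, but only for as long as the suppression is maintained.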

The alternative of focused protection, as proposed by the Great Barrington Declaration and others, is a restricted application of mass protection. It aims to remove the most susceptible population (the old and those with underlying health issues) only from the path of the virus. It thus makes possible a more normal life for most people and, in principle, reduces the pandemic’s economic costs. However, the Great Barrington Declaration’s signatories never made clear how a sizeable section of the population can be securely “shielded” from the rest. For this reason, it is supported by very few public health professionals. Sweden has shown that any exponential growth of the virus in a healthy population is bound to spread to the vulnerable. The country’s death rate was similar to that of mass protection countries; and the effects on its economy also similar.

The bird that never flew in most of Europe (Germany was a partial exception) was “targeted protection” based on digital testing, tracking and self-isolating. Such systems exist, and have been rolled out in the UK as well as in other European countries, but never as an alternative to mass or focused protection. Indeed, the UK government decided early in March to stop contact tracing and impose a mass lockdown.  

Digital or targeted protection works not by locking down pre-determined blocks of people but only those individuals and clusters who test positive. Every individual spreader and his or her contact group is isolated straight away. As a result, normal life, with a few sensible precautions (masking, distancing, hand washing) can continue as before. No lockdowns, total or partial, are needed. The South Korean economy contracted by 2.74 per cent between March and September; the UK economy by more than 9 per cent. (Taiwan’s economy actually grew.)

South Korea is the pin-up story for digital protection. Only 289 out of 51 million inhabitants are known to have died from Covid-19 between February and July 2020; in the UK it was 44,600 out of 66 million. South Korea avoided a national lockdown; the UK has had two.

South Korea’s success is generally attributed to early identification and management of cases, clusters and contacts. Every time an individual tests positive in the densely populated country, 100 contacts are identified by location and payment data and then tested. The country has also skilfully controlled its borders (as have Taiwan and China) and it has applied sensible precautions such as masking, selective social distancing and localised temporary lockdowns. Short of a vaccine, there is no single efficacious system of protection; the difference lies in the weight given by policymakers to the different elements of protection. The national lockdown method is by far the most expensive and, on the evidence, the least efficacious.

Electronic shielding was never central to the European policy response. The reasons are complicated, but essentially due to a mixture of unfamiliar technology, logistics and politics. Consider these in turn.


The technology behind digital shielding is quite simple. It consists of an app on every mobile phone. This is similar to the English NHS Covid-19 app, but without the need for a QR code check-in.

An effective system works by identifying those users who have been exposed to someone else who has tested positive. The system asks them to be tested within 48 hours and for them to quarantine in the meantime. In a lighter version of the scheme they can be asked if they have symptoms and only then will they have to isolate while they wait to get tested.

An efficient test and track system of this kind would obviate the need for either mass lockdowns or continuous mass testing. It would concentrate on the crucial infection-spreading interactions, allowing optimisation of testing and isolating to those who really are at risk. Thus the rate of transmission is curbed not by locking down whole populations for months or testing everyone, but by briefly quarantining small clusters. You would avoid the ludicrous need to lock down a million people because infection had spread in a cluster of a hundred. The economic effects are incomparably milder.
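The targeting logic described above can be sketched in a few lines. The names and contact data below are hypothetical; the sketch simply shows that isolating positives plus their first-degree contacts quarantines a small cluster, not the whole population.

```python
# Sketch of targeted quarantine: isolate only those who tested positive
# and their recent contacts. All names and pairings are hypothetical.

from collections import defaultdict

def quarantine_list(contacts, positives):
    """contacts: iterable of (person_a, person_b) meeting pairs.
    positives: set of people who tested positive.
    Returns the set of people asked to isolate."""
    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)
    to_isolate = set(positives)
    for p in positives:
        to_isolate |= graph[p]          # first-degree contacts only
    return to_isolate

contacts = [("ann", "bob"), ("bob", "cat"), ("dan", "eve")]
isolated = quarantine_list(contacts, positives={"bob"})
assert isolated == {"ann", "bob", "cat"}   # dan and eve carry on as normal
```

The contrast with a lockdown is exactly the one in the text: here two people in an untouched cluster continue normal life, where a blanket measure would have confined all five.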

Such a system was available from the start of the pandemic. But it was never central to stopping the spread of the virus in Europe. Indeed, it was presented as a minor separate measure, subordinate to lockdown. There was no effective lobby for its priority. At no point did those familiar with the technology say to politicians: “Why not make this your main weapon?” Nor did the media take up the cause. Two obstacles seemed insuperable to mass implementation: logistical and political.


To obviate or minimise the need for other measures everybody has to have the app. The scale of roll-out depends on medical capacity to test and track and user capacity to understand what is required of them. Lack of medical capacity seems not to have been the decisive constraint, except in the UK.   

A more important barrier to roll-out was the large fraction of the “digitally challenged” in the population – those who didn’t know how to install the app or who had obsolete telephones. (The same problem is faced when any service is put online.) This is where a big governmental effort was needed and was not forthcoming. The government might have provided free up-to-date phones and free help in setting them up. Along with video tutorials, FAQs and other online resources, staff in every technology shop could have been trained and subsidised to offer free assistance to the digitally disadvantaged, while ensuring that the app was designed according to user-centred techniques.

For those with an obsolete mobile phone, who refused to replace it, but still wanted to take part in the test and track scheme, an alternative device such as a digital bracelet or necklace could have been offered to track the user, as has been trialled and deployed in Hong Kong, Bulgaria and Bahrain. Mobile phones or bracelets would need to be with the user at all times when outdoors, and infringement would need to be pursued by law, just as lockdown rules, mask-wearing and social distancing now are.

Enriching the population technologically in double-quick time would have cost a significant amount of money, though a tiny fraction of the cost of the lockdowns. And it would have offered a valuable opportunity to use the crisis to achieve an enhanced overall level of digital competence, with favourable knock-on effects on people’s employability.

Nor was processing the data quickly the technical problem it has been made out to be. These are very simple and light data (a few bytes, probably not even kilobytes). The ghastly delays in processing information in the UK, revealed by the Panorama programme “Test and Trace Exposed”, were the result of organisational ineptitude, not technical difficulty.

The central point is that the technology was available from the start and the logistical problem of wiring everyone up and getting up-to-the-minute information could have been solved had attention and resources been directed to solving it. The reason they were not was political.


As European failures have demonstrated, if you opt for digital shielding you can’t be half-pregnant: failing to make sure that everyone follows the rules of the scheme would not be a partial success. It would be a complete failure.

This means two things: first, authorities need to make sure that the system covers the entire population; second, that every test returning a positive result is synced on to the system, and rapidly processed accordingly. Once this is done, all the new tests performed by the health authority are addressed to the close contacts (as identified above) of those who are positive. The colossal waste of mass lockdown is avoided.

Standing in the way were, and remain, serious concerns about privacy and enforcement.

The following incident in South Korea in May 2020, as reported by the Mail on Sunday, produced national headlines: “Night clubs in Seoul have been linked with 119 coronavirus cases nationwide after a ‘super-spreader’ visited a number of bars in the Itaewon district. [The] 29-year-old man, who is thought to be at the epicentre of the latest cluster of cases, was tracked by authorities… and tested positive for Covid-19.”

It turned out the bars were in the gay district, so, understandably, most of the client names and addresses registered by these clubs and available to the health and law enforcement authorities were false. And the fact that electronic tagging is widely used everywhere for tracing movement of criminals and that CCTV cameras might be used to track people’s movements increases the feeling that Big Brother is watching you.

The charity Privacy International argues persuasively: “Unprecedented levels of surveillance, data exploitation and misinformation are being tested across the world [in the light of Covid-19]. Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary and proportionate. It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

The key question is how to secure the efficient population protection offered by digital shielding while keeping the information it needs anonymous. 

The Italian app Immuni provides the answer. It relies neither on a centralised database to trace Covid-19 nor on venue check-ins, preferring closer-range Bluetooth connection with other devices. This is far more accurate (in that it drastically reduces the risk of false positives) and grants full anonymity. The system doesn’t need to know my name, nor where the interaction took place. It just knows the date and that the length of the interaction was long enough and the distance was close enough for people to be infected by me. My identity is not disclosed at any step of the process, nor the places I have been. My anonymity will only be breached if someone has to visit me because I have not followed the rule of disclosure.
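The anonymity property described above can be sketched with a simplified version of this decentralised approach (protocol details here are assumptions, not Immuni’s actual specification): phones broadcast rotating random identifiers, remember the identifiers they hear nearby, and a positive user publishes only their own broadcast identifiers; the exposure match happens locally on each device, with no names or locations shared.

```python
# Simplified sketch of a decentralised exposure check. The rolling-ID
# scheme below is an assumed, stripped-down stand-in for protocols of
# the Immuni type; matching is local, so identities stay private.

import secrets

def new_rolling_id():
    return secrets.token_hex(8)         # random; unlinkable to a person

# Phone A broadcasts rotating IDs; phone B records the ones it hears.
a_broadcast = [new_rolling_id() for _ in range(3)]
b_heard = set(a_broadcast[:2])          # B was near A for two intervals

# A tests positive and uploads only its own broadcast IDs.
published_positive_ids = set(a_broadcast)

# B checks locally: any overlap means B was exposed and should test.
exposed = bool(b_heard & published_positive_ids)
assert exposed
```

Note that the published list contains nothing but random tokens; a bystander who was never near phone A holds none of them, so the check comes back negative without anyone learning who A is or where the contact occurred.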

It is hard to evaluate the argument that Asian culture is more hospitable than European to tracing because Asians value personal liberty less, and the “public good”, more. Contemporary Europeans associate liberty with privacy, but as Hannah Arendt pointed out in her 1958 book The Human Condition, the right to be protected from the public gaze cannot ever be more than relative.

Enforcement concerns are easily addressed: anybody can opt out of the scheme and embrace the alternative solution in the form of a personal lockdown. The prospect of this would itself be an incentive to become digitally literate.

In our kind of society, people would probably need some financial incentive to join the scheme (such as free up-to-date systems or cash payments). Those who join the scheme but who don’t follow the rules would face fines or policing through phone calls and other checks.

Digital shielding presupposes some degree of scrutiny and enforcement of its rules. But this has to be weighed against the prohibitions of normal life (as well as enforcement of these prohibitions) entailed by mass lockdowns. The lesson of the lockdowns is that the liberty to do what I want requires some regulation of that liberty to ensure it does not harm others.


Angela Merkel said people “needed beds to be full before they would accept a lockdown”. A further implication might be that people had to experience a full lockdown before they would be ready to accept the degree of intrusion involved in digital shielding. This explains why the test, track, isolate scheme has been implemented at best as subsidiary to full or partial lockdowns. This political calculation may have been right for contemporary Europe; it was clearly not so in many Asian countries.

There are certainly legitimate concerns about limiting personal freedoms. It’s only in the light of much stricter limitations, such as those that come with a full lockdown, that one can put things in perspective and see digital shielding as a desirable alternative. In the end, the aim of this solution is to offer an alternative to general lockdown in order to avoid the ascending economic, social and medical costs for the community.

The world economy was hugely damaged; trillions of pounds, dollars and euros were spent protecting communities, and hundreds of thousands of unnecessary deaths have been caused by want of effective deployment of a system of testing, tracing and quarantining. The technical requirements for such a system existed from the start; its effective deployment would have taken some time in Europe, so that the earliest lockdowns were probably inevitable. But by May or June, there should have been no need for further lockdowns. The reason digital shielding was so slow in coming is not technical or epidemiological, but political. With some notable outliers, most European citizens have preferred the anonymity of lockdown to the individual scrutiny of testing and tracing. Given the absolute priority attached to protecting lives, the policy was therefore set. But, despite optimistic claims for a vaccine, Covid-19 is not over. It is not too late to change.

Robert Skidelsky is the author of a three-volume biography of JM Keynes, a crossbench peer and emeritus professor of political economy at the University of Warwick. His most recent book is “Money and Government: The Past and Future of Economics”.

Massimiliano Bolondi is a technology adviser.

Job Creation is the New Game in Town

November 13, 2020

Even if a successful rollout of a new COVID-19 vaccine causes the current health crisis to recede by next spring, the unemployment crisis will remain. That is especially true in the United Kingdom, where fiscal stimulus is urgently needed to avert a lost decade – if not a lost generation – of growth.

EDINBURGH/LONDON – In the wake of the COVID-19 pandemic, both the US and European economies are gearing up for large-scale job creation. US President-elect Joe Biden has pledged to invest $700 billion in manufacturing and innovation, plus $2 trillion in a “Biden Green Deal” to combat climate change and promote clean energy. Meanwhile, Germany has abandoned years of thrift by backing a €750 billion ($887 billion) European Union recovery fund and, like France, will maintain its own national job retention and job creation program throughout 2021.

By contrast, the United Kingdom’s chancellor of the exchequer, Rishi Sunak, has fallen behind the curve. Back in March, many expected that Britain would experience a V-shaped recovery. As this prospect faded, it became clear that Sunak’s rescue operation needed to be matched with a viable recovery plan.

The consensus view is that both the UK and the global economy will be smaller in 2021 than they were in 2019. The International Monetary Fund predicts that the global economy will be 6.5% smaller than was forecast before the COVID-19 crisis, with a legacy of unemployment at least double the pre-pandemic norm.

These gloomier forecasts have prompted international calls for the reinstatement of active fiscal policy, with the IMF urging rich-country governments to start large public investment programs. In its latest Fiscal Monitor, the Fund says that increasing public investment by 1% of GDP could boost GDP by 2.7%, private investment by 10%, and employment by 1.2%.

The IMF’s call to action is particularly important, because the Fund was a champion of fiscal retrenchment during the 2008-09 global financial crisis, despite the obvious need for stimulus. Its earlier macroeconomic model, like those of most other economists and policymakers at that time, was based on the flawed theory that market economies have a natural tendency to reach full employment. This ignored the truth, most persuasively articulated by John Maynard Keynes, that in the absence of government stimulus, economies can remain stuck in recession for a long time.

The Bank of England, too, has changed its tune. The BOE is about to inject an additional £150 billion ($198 billion) into the UK economy, on top of the more than £200 billion it has already pumped out in 2020, and now realizes that it cannot do all the heavy lifting. Businesses will not invest, no matter how low the cost of capital, until they see a market. That is why the BOE has now joined the US Federal Reserve and the European Central Bank in calling for fiscal stimulus.

Before COVID-19, monetary policy seemed to be the only game in town. Now, if we are to avoid mass unemployment and the consequent loss of demand in the economy, job creation must become the overriding priority after the lockdown.

To its credit, the UK government brought forward £8 billion in infrastructure spending this past summer. But that is a mere fraction of what is needed. The government is now frontloading its £40 billion, five-year investment plan into the next two and a half years, and giving priority to big environmental projects and social housing. Retrofitting homes and local amenities could quickly create many jobs, with immediate multiplier effects.

Regional and local job and training schemes are essential to the longer-term task of reallocating work and skills toward the labor market of the future. The lesson of the UK’s 1998 New Deal for Young People and the 2009 Future Jobs Fund is that such programs must offer not only training and work experience but also assistance with job searches and incentives for employers to hire people on a permanent basis.

We estimate that one million young Britons under the age of 25 are neither working nor in training or education. But the government’s Kickstart job-creation scheme, which was launched belatedly earlier this month, has offered job placements to young people only for six-month periods.

The government expected that Kickstart would secure placements for 300,000 young people, but perhaps only around 100,000 will be enrolled in the scheme by the end of 2020. Ministers assumed that 5% of UK employers would take on young people, but outside of the retail and logistics sectors, thousands of firms are instead planning redundancies and will almost certainly not offer employment on anything like the hoped-for scale in the coming months.

If we are to assist the other 900,000 or so under-25s in need of help and create the estimated 1.5 million youth placements that will be required over the next year, the public sector will have to become the employer of last resort. So, rather than passively responding to a rise in unemployment, fiscal policy should aim to replace Karl Marx’s “reserve army of the unemployed” with a buffer stock of state-supported jobs and training schemes that expands or contracts with the business cycle.

What we need above all from UK policymakers is an updated full-employment commitment in the spirit of Keynes and US President Franklin D. Roosevelt. An essential condition for this is the coordination of monetary and fiscal policy. The BOE should retain its anti-inflation mandate, but policymakers should not use this to cut off necessary fiscal stimulus.

Earlier this month, the BOE echoed then-ECB President Mario Draghi’s famous 2012 pledge to save the euro by stating that it “stands ready to take whatever additional action is necessary” to boost the economy. To boost the credibility of such forward guidance, the government could give the BOE a dual mandate to fight both inflation and unemployment, while the bank could state that it will not tighten monetary policy until unemployment falls below its pre-crisis level of 4%.

A successful rollout of Pfizer’s new COVID-19 vaccine (and possibly others) could return life to a semblance of normality by next spring. But even if the health crisis recedes, the unemployment crisis will remain. UK policymakers must act now to avert a lost decade – if not a lost generation – of growth.

Robert Skidelsky Speech on the Internal Market Bill

I will confine my remarks to Part 5 of this Bill. I find myself swayed by two completely opposite accusations of bad faith. The government accuses EU negotiators of bad faith in seeking to erect “unreasonable” customs barriers between Northern Ireland and the rest of the UK.

Opponents of the Bill say the bad faith is our own government’s. The Withdrawal Agreement set up a Joint Committee to resolve trade disputes; the government has chosen not to use it. So, as Ed Miliband argued in his powerful philippic in the other place, the government was proposing to breach international law for bogus reasons.

I cannot support the regret motion, and would like to explain why.

To my mind, international law is not the main issue. ‘Never before’, say Noble Lords, ‘has a British government sought to break international law.’ But never before has Britain faced the problem of extricating itself from as complex a political, economic, and legal structure as the EU. Law has to take account of political reality. As John Maynard Keynes said in answer to the legal fundamentalists of his day: ‘What I want from lawyers is to devise means by which it will be lawful for me to go on being sensible in unforeseen conditions.’ Noble Lords know very well that not every contingency can be foreseen.

So, My Lords, I ask you to judge the legislation before the House on three different grounds: sufficient reason, motive, and consequences.

On the first, I agree with the argument that sufficient reason has not been established for the override of Part 5 at the government’s discretion. But no noble Lord has mentioned Amendment 66, by which the government has agreed to obtain parliamentary approval before activating Part 5. I think that is a reasonable compromise between those who think Part 5 is essential and those who think it unnecessary.

Second, I sympathise with the argument that the government signed the Agreement in bad faith in order to meet the PM’s political requirements. However, most Noble Lords have ignored the argument that it was always going to require some bad faith, and some legal creativity, to make the Brexit decision consistent with the Good Friday Agreement. When Ed Miliband said ‘A competent government would never have entered into a binding agreement with provisions they could not live with’, I am afraid he was setting the bar of competence much too high. Contrary to Baroness Humphreys, deliberate ambiguity is the hallmark of statecraft.

Finally, what will the consequences be? The legal fundamentalists say it will damage our ability to get an agreement because it will damage trust in the government’s word; the pragmatists believe it will force the EU negotiators to come up with a workable exit formula. Time will tell whether the government has calculated the balance of risks properly.

My own feeling, contrary to much noble rhetoric, is that we are still largely in the world of posturing. That is the way EU and many other international negotiations work: public posturing, followed by a last-minute outbreak of common sense.

I think that’s the way it will turn out, and I don’t want us to do anything that would weaken the hand of our own negotiators.

Policing Truth in the Trump Era

Social-media companies’ only incentive to tackle the problem of fake news is to minimize the bad press that disseminating it has generated for them. But unless and until telling the truth serves the bottom line, it is futile to expect them to change course.

LONDON – On October 6, US President Donald Trump posted a tweet claiming that the common flu sometimes kills “over 100,000” Americans in a year. “Are we going to close down our Country?” he asked. “No, we have learned to live with it, just like we are learning to live with Covid, in most populations far less lethal!!!”

Trump’s first claim is true: the flu killed over 100,000 Americans in 1918 and 1957. “We have learned to live with it,” is a matter of opinion, while his claim that COVID-19 is “far less lethal” than flu in most populations is ambiguous (which populations, and where?).

There seemed nothing particularly unusual about the tweet: Trump’s fondness for the suggestio falsi is well known. But, soon after it was posted, Twitter hid the tweet behind a written warning, saying that it had violated the platform’s rules about “spreading misleading and potentially harmful information related to COVID-19.” Facebook went further, removing an identical post from its site entirely.

Such online controversies are becoming increasingly common. In 2018, the now defunct political consulting firm Cambridge Analytica was said to have willfully spread fake news on social media in order to persuade Americans to vote for Trump in the 2016 US presidential election. Since then, Facebook and Twitter have removed millions of fake accounts and “bots” that were propagating false stories. This weeding-out operation required the platforms themselves to use artificial-intelligence algorithms to find suspicious accounts.

Our reliance on firms that profit by allowing “disinformation” to take the lead in policing the truth reflects the bind in which digital technology has landed us. Facebook and Twitter have no incentive to ensure that only “true” information appears on their sites. On the contrary, these companies make their money by harvesting users’ data and using it to sell advertisements that can be individually targeted. The more time a user spends on Facebook and Twitter, and the more they “like,” click, and post, the more these platforms profit – regardless of the rising tide of misinformation and clickbait.

This rising tide is partly fueled by psychology. Researchers from the Massachusetts Institute of Technology found that from 2006 to 2017, false news stories on Twitter were 70% more likely to be retweeted than true ones. The most plausible explanation is that false news has greater novelty value compared to the truth, and provokes stronger reactions – especially surprise and disgust. So, how can companies that gain users and revenue from false news be reliable guardians of true news?

In addition, opportunities to spread disinformation have increased. Social media have vastly amplified the audience for stories of all kinds, thus continuing a process that started with Johannes Gutenberg’s invention of the movable-type printing press in the fifteenth century. Just as Gutenberg’s innovation helped to wrest control of knowledge production from the Roman Catholic Church, social media have decentralized the way we receive and interpret information. The Internet’s great democratizing promise was that it would enable communication without top-down hierarchical strictures. But the result has been to equalize the credibility of information, regardless of its source.

But the problem is more fundamental: “What is truth?” as the jesting Pontius Pilate said to Jesus. At one time, truth was God’s word. Later, it was the findings of science. Nowadays, even science has become suspect. We have put our faith in evidence as the royal road to truth. But facts can easily be manipulated. This has led postmodernists to claim that all truth is relative; worse, it is constructed by the powerful to maintain their power.

So, truth, like beauty, is in the eye of the beholder. This leaves plenty of latitude for each side to tell its own story, and not bother too much about its factual accuracy. More generally, these three factors – human psychology, technology-enabled amplification of the message, and postmodernist culture – are bound to expand the realm of credulity and conspiracy theory.

This is a serious problem, because it removes a common ground on which democratic debate and deliberation can take place. But I see no obvious answer. I have no faith in social-media companies’ willingness or ability to police their platforms. They know that “fake” information can have bad political consequences. But they also know that disseminating compelling stories, regardless of their truth or consequences, is highly profitable.

These companies’ only incentive to tackle the problem of fake news is to minimize the bad press it has generated for them. But unless and until telling the truth serves the bottom line, it is futile to expect them to change course. The best one can hope for is that they make visible efforts, however superficial, to remove misleading information or inferences from their sites. But performative acts of censorship like the removal of Trump’s tweet are window dressing that sends no larger signal. It serves only to irritate Trump’s supporters and soothe the troubled consciences of his liberal opponents.

The alternative – to leave the policing of opinion to state authorities – is equally unpalatable, because it would revive the untenable claim that there is a single source of truth, divine or secular, and that it should rule the Internet.

I have no solution to this dilemma. Perhaps the best approach would simply be to apply to social-media platforms the public-order principle that it is an offense to stir up racial hatred. Twitter, Facebook, and others would then be legally obliged to remove hate material. Any decision on their part would need to be testable in court.

I don’t know how effective such a move would be. But it would surely be better than continuing the sterile and interminable debate about what constitutes “fake” news.

International Law and Political Necessity

The UK government’s proposed “breach” of its Withdrawal Agreement with the European Union is purely a negotiating ploy. Critics of Prime Minister Boris Johnson’s tactics must argue their case on pragmatic rather than legal grounds.

LONDON – Whenever the great and the good unite in approval or condemnation of something, my impulse is to break ranks. So, I find it hard to join the chorus of moral indignation at the UK government’s recent decision to “break international law” by amending its Withdrawal Agreement (WA) with the European Union.

The “breach” of the WA is a calculated bluff based on the government’s belief that it can honor the result of the 2016 Brexit referendum only by deploying considerable chicanery. The main problem is reconciling the WA with the 1998 Good Friday Agreement, which brought peace to Northern Ireland and committed the UK government to maintaining an open border between Northern Ireland and the Republic of Ireland.

Prime Minister Boris Johnson negotiated and signed the WA, and must have been aware of the implicit risk of Northern Ireland remaining subject to EU customs regulations and most single-market rules. But in his determination to “get Brexit done,” Johnson ignored this little local difficulty, rushed the agreement through Parliament, and won the December 2019 general election. He now must backtrack furiously to preserve the UK’s economic and political unity, all the while blaming the EU for having to do so.

The fact that Johnson may have been the prime author of this legal mess does not alter the fact that the UK government pledged to honor the popular mandate to leave the EU, and had to find a political mechanism to make this happen. The Internal Market Bill now before parliament is both that mechanism and Johnson’s latest gambit to complete Brexit.

The bill gives the government the power, with Parliament’s consent, to change or ignore elements of the WA’s Northern Ireland Protocol, which ministers fear might result in “new trade barriers […] between Great Britain and Northern Ireland.”

The government has admitted that the bill breaches international law, but claims that its provisions to disallow elements of the protocol should “not be regarded as unlawful.” This is moot, and may still be tested in the courts. But it is the breaking of “international law” that has chiefly aroused the critics’ moral indignation.

In an op-ed in The Times, former UK attorney general Geoffrey Cox argued that it was “axiomatic” that the government must keep its word to other countries (my italics), “even if the consequences are unpalatable.” Failure to do so, Cox wrote, would diminish the UK’s “faith, honor, and credit.” Signing the WA with the EU obliged the government to accept “all the ordinary and foreseeable consequences of [its] implementation.”

But it is not “axiomatic” that a government must keep its word to other nation-states, even when this is codified in treaties. Doing so is desirable, but states frequently do not, for some obvious reasons.

First, no one can accurately foresee the full consequences of their actions. The erection of customs barriers in the Irish Sea is not an “inescapable implication” of signing the WA, as Cox now claims it is, because the agreement presupposed further negotiations on this point.

Second, Cox’s pronouncement implies that a government’s word to other governments is worth more than its word to its own people. But former Prime Minister David Cameron’s government, as well as the leaders of the main opposition parties, promised to respect the result of the Brexit referendum.

Third, Cox and others have argued that rather than breaking international law, the government should trigger the WA’s dispute-resolution mechanism to challenge the agreement’s disagreeable consequences as and when they occur. But having to suffer damage before doing anything about it is an odd doctrine.

Finally, Cox seems to treat international law as being on a par with domestic law, when in fact it is inherently less binding. This is because international law is less legitimate; there is no world government entitled to issue and enforce legislation.

International law is mainly a set of international treaty “obligations” between sovereign states. Breaking one is certainly a grave matter: it rightly carries a penalty in the form of lost reputation, and the United Kingdom may now end up with a less favorable trade agreement with the EU. Whether the UK should have risked its reputation in this particular case is not the issue. Now that it has, the case must be argued on the grounds of political necessity, not on the principle of legal obligation.

Governments and policymakers often violate or evade international law via both planned and improvised escape routes. This is because treaty instruments are necessarily static, whereas conditions change. It usually makes more sense to allow exceptional derogations than unravel a web of treaties.

For example, many governments have explicitly or implicitly repudiated national debts, the best-known example being the Bolsheviks’ repudiation in 1918 of Czarist Russia’s debts, owed mainly to French bondholders. More often, debtors “compound” with their creditors to render their debt wholly or partially fictitious (as Germany did with its reparation obligations in the 1920s).

Similarly, the European Central Bank is forbidden by Article 123 of the Treaty on the Functioning of the European Union to purchase its member governments’ debt instruments. But former ECB President Mario Draghi found a way around this to start quantitative easing in 2015.

I am much more sympathetic to the argument that Johnson signed the WA in bad faith, knowing that he would most likely try to override the Northern Ireland Protocol. What critics don’t seem to understand is that extricating the UK from the EU was always going to require a lot of legal skulduggery.

The legal mess was a consequence of the politics of withdrawal, and specifically the tension between Brexit and the Good Friday Agreement’s requirement of an open border between Northern Ireland and the Republic of Ireland (an EU member state). Prime Minister Theresa May tripped over this rock, while Johnson’s government shoved the problem into the post-Brexit transition period that ends on December 31, 2020.

With the deadline for concluding a UK-EU trade deal drawing closer, Johnson hopes that the Internal Market Bill will put pressure on the EU to devise a formula that ensures a customs-free border in the Irish Sea. It is a negotiating ploy, pure and simple.

Whether it is a good negotiating tactic is arguable. But critics must make their case in the context of the negotiating process as a whole, and without resorting to legal fetishism. That is why lawyers should never run a country.

In his closing statement at the Bretton Woods conference in 1944, John Maynard Keynes described the ideal lawyer: “I want him to tell me how to do what I think sensible, and, above all, to devise means by which it will be lawful for me to go on being sensible in unforeseen conditions some years hence.” We will soon know whether Johnson’s bluff meets this sensible standard.

What Would Keynes Have Done?

In the long run, Covid-19 may well change the way we work and live. It may – and should – lead us towards a greener, less consumption-driven economy. The question for now is what to do about the economic devastation it will bring in its wake. Around 730,000 UK jobs were lost between March and July, the biggest quarterly decline since 2009, and unemployment is forecast by the Office for Budget Responsibility to reach its highest level since 1984 (11.9 per cent).

The coming downturn is as inevitable as the rain announced by blackening clouds. In this respect it is quite unlike the banking collapse of 2008, or even Covid-19 itself, both of which were unforeseen. Remember the Queen’s question in 2008 to a group of economists at the London School of Economics: “Why did no one see it coming?” The approaching unemployment crisis is an expected event, not an unexpected “shock”. Because it is fully anticipated, governments should be in a good position to offset its effects, if not fully, at least in large part, provided they know what to do. But the theoretical vacuum lying at the heart of current policymaking discourages any undue optimism that they might.

Admittedly, this will be a most unusual depression. As the New York Times columnist Paul Krugman has noted: “What’s happening now is that we’ve cut down both supply and demand for part of the economy because we think high-contact activities spread the coronavirus.” Businesses have been paid not to do business; their employees paid not to work. As a result, the UK’s GDP contracted by a cumulative 22.1 per cent in the first half of this year compared with the end of 2019 (the largest fall of any G7 country). It does not yet feel like a depression because millions of people’s incomes are being artificially maintained through the Job Retention Scheme. 

But the furlough scheme, as it has become known, is being wound down and will formally end on 31 October. The optimistic expectation is that as businesses reopen and workers return to work, the economy will naturally and speedily revert to its former size. This is called a “V-shaped” recovery. But in many cases there won’t be jobs to go back to, because firms will have folded or continue to be restricted in the amount of business they are allowed to do. Added to this, Britain is mainly a service economy, and one has to consider the effect on spending of compulsory social distancing, plus voluntary resistance to physical contact. In the absence of further measures to support incomes, total demand will soon start falling to the level of the reduced supply, with savage consequences for employment.

But the more fundamental reason for scepticism about future government policy is that public officials and their economic advisers still subscribe to models that assume economies normally do best without government help. Stimulus measures can be justified in an emergency, but they are not seen as part of the policy framework, any more than keeping people in intensive care is seen as a prescription for healthy living. As the Chicago economist Robert Lucas once observed, all governments are “Keynesians in the foxhole”. The fact that the stimulus measures advocated by J M Keynes – such as higher public spending and tax cuts – are expected to be for emergencies only reflects the damage the neoclassical (or free-market) economics of the 1980s and 1990s inflicted on his theory: damage that has never been repaired.


The fundamental feature of today’s neoclassical orthodoxy is a disbelief in the ability of governments permanently to improve the level and direction of economic activity or to alter the distribution of wealth and income. Markets, say mainstream economists, churn out results, which, if not always optimal, cannot be improved on without dire consequences for long-term prosperity. Since the 1980s Western governments have abandoned the full employment, growth and income-equalising targets of the Keynesian social-democratic era. 

Behind this rejection of the beneficial power of government are a number of specific theoretical and policy propositions: that market economies are normally stable; that with flexible wages and prices there can be no unwanted unemployment; that governments are less efficient in allocating capital than private firms; that public budgets should be balanced to prevent governments surreptitiously stealing resources from the private sector; that the only macroeconomic responsibility of government is to maintain “sound money”; and that this task should be outsourced to central banks, which alone can be trusted not to inflate the economy for electoral reasons.

So-called New Keynesians would add numerous qualifications. They would point to the existence of “market imperfections”, which allow for more short-term “policy space” than neoclassical orthodoxy permits. Nevertheless, they are hamstrung by their adherence to economic models that in principle deny the need for, and stress the baleful consequences of, government interference with market forces. Their common sense is stronger than their logic. 

Against this orthodoxy, juxtapose the key Keynesian propositions, which justify a much more robust economic role for the state: the instability of private investment due to uncertainty; the inability of flexible wages and prices to maintain full employment; the power of government policy to improve long-run and not just short-run outcomes; and the importance of the state’s budget for balancing the economy. 

Consider the Keynesian argument for denying that flexible wages will lead to a V-shaped recovery from an economic shock. Every producer, Keynesians argue, is also a consumer. A cut in production costs (wages) simultaneously cuts the community’s spending power and thus, far from hastening recovery, deepens the slump. By the same logic, cutting government spending in a slump makes matters worse, not better.  

For this reason, austerity is likely to fail on its own terms. The former chancellor George Osborne never succeeded in “balancing the budget” in six years of trying. The budget cannot be balanced without a recovery in government revenue; the way to increase government revenue is to increase government spending. This apparent paradox arises only because we think of governments as ordinary households, which cannot “afford” to spend more than their incomes. But the government is a super-household: in a slump its spending creates its own income by enlarging its tax take. That is why fears of a runaway explosion in the national debt are largely illusory. The debt only becomes an unsupportable burden if it grows faster than the economy. Starting from a position in which the economy is shrinking, an increase in government spending will cause the economy to grow faster than the debt.
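A back-of-the-envelope sketch makes the arithmetic of this argument concrete. Every figure below is hypothetical, chosen only for illustration (a 1.5 spending multiplier and a 35 per cent effective tax take are assumptions, not estimates): what matters is the debt-to-GDP ratio, which can fall even as the absolute debt rises.

```python
# Toy illustration (all numbers hypothetical): stimulus raises the debt,
# but if it raises GDP by more, the debt *burden* - debt as a share of
# GDP - falls rather than rises.

def debt_to_gdp(debt, gdp):
    """Debt burden measured as a share of national income."""
    return debt / gdp

gdp, debt = 2000.0, 2000.0   # starting point: debt-to-GDP ratio of 100%
stimulus = 100.0             # extra borrowing for public spending
multiplier = 1.5             # assumed: each pound spent raises GDP by 1.5
tax_rate = 0.35              # assumed: share of extra GDP returned as revenue

debt += stimulus                          # debt rises by the full stimulus...
gdp += multiplier * stimulus              # ...but GDP rises by more,
debt -= tax_rate * multiplier * stimulus  # and extra revenue partly self-liquidates it

print(debt_to_gdp(debt, gdp))  # below the initial ratio of 1.0
```

Under these assumed numbers the economy (GDP) grows faster than the debt, so the burden falls, which is the sense in which the deficit is partly self-liquidating.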

And as for the neoclassical view that public investments are bound to be wasted, Keynes replied that even the most wasteful conceivable public investment is less wasteful than unemployment.


The traditional Keynesian response to a downturn in demand is to stimulate the economy through a mixture of fiscal and monetary measures: on the fiscal side by cutting taxes or by increasing public spending; on the monetary side by “printing” money. Such stimulus packages are intended to reverse the fall in total demand, leading to a recovery in economic output and employment. An example of fiscal stimulus from the UK government in 2009 was the car scrappage scheme, whereby car owners received a £2,000 discount from the Treasury when trading in their old car for a newer model. The enlarged market for newer cars led to an increase in car production and sales, which led to increased employment in the motor car industry, and this helped sustain employment elsewhere. The Eat Out to Help Out scheme, which offered restaurant diners up to a £10 discount per head, is a more recent example.

However, such is the fear that government spending equates to “socialism” (think of the phrase “socialised medicine”) that even today’s Keynesians would prefer to stimulate the economy through unconditional cash grants to private individuals, rather than direct government spending. But cutting taxes (equivalent to giving people extra cash) will not increase employment if people are reluctant to spend; new money issued by the central bank won’t increase spending if it goes straight into cash reserves. Even negative real interest rates won’t prompt businesses to borrow if their expectation of profit is zero. The truth is that indirect stimulus won’t stimulate anything much in the face of a widespread collapse in consumer and investor confidence. Only direct state spending will do the job.

I would frame my anti-slump measures around a robust Keynesian model. The war against the economic consequences of Covid-19 must be fought with the weapons of public investment and job creation.

One result of the discrediting of Keynesian theory has been the collapse of state investment: the UK government’s share of total investment fell from an average of 47.3 per cent in 1948-76 to 18.4 per cent in 1977-2007. This left the economy much more dependent on the variable expectations of the business community. More pertinently for today, it left the public health services denuded of capacity to cope with the pandemic, and unduly reliant on foreign supply chains for essential medical equipment. A sound principle in today’s world is that all the goods and services necessary to maintain the health and security of the nation should be produced within its own borders, or those of its close political allies. If that means curtailment of market-led globalisation, so be it. 

More generally, I would immediately expand and accelerate all public construction and procurement projects – infrastructure, social housing, schools, hospitals – taking the opportunity to make them energy efficient. However, not all public investment needs to be performed directly by the state. I would create a state-holding company to take equity shares in private firms that are needed in the national interest. In today’s insecure world, no country can afford to leave the direction of its economic life, especially its scientific and technological direction, to the vagaries of global market forces. 

Secondly, I would replace the furlough scheme with a public sector job and training guarantee. This would cut off the coming jobs crisis at its root. Ideally, it should be part of a permanent system for ending the unemployment that has scarred all economies since the Industrial Revolution. Every person of working age able and willing to work who cannot find work in the private sector at the minimum wage should be offered a public-sector job or training at the minimum wage. Such a scheme, by guaranteeing work for all those able and willing to work, would fulfil the old trade union demand of “work or maintenance”. 

If this system were in place there would be no need for minimum wage legislation, since anyone offered a private-sector job at below the minimum wage would have the alternative of a higher-paid public-sector job. Periodic upward adjustment of the public-sector minimum wage would substitute an upward for a downward pressure on wage levels throughout the economy.  

There are two further advantages of a public-sector job guarantee. First, it would be a much more powerful automatic stabiliser than unemployment benefit. At present, the government’s budget deficit expands automatically in a slump as state revenues fall and public spending on income support rises. This limits the fall in economic activity, but does not avoid it. Under the job guarantee scheme, although government spending would rise more than at present, private incomes and therefore public revenues would be better maintained, not only minimising the recession, but ensuring that much of the enlarged budget deficit would be self-liquidating. 
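The contrast between the two stabilisers can be sketched numerically. This is a toy comparison, not a costing: every figure below is invented for illustration, benefits are assumed untaxed, and the guarantee wage is assumed equal to the lost wage for simplicity.

```python
# Hypothetical comparison of two automatic stabilisers in a slump
# (all figures invented). Under unemployment benefit, the jobless
# receive a fraction of the lost wage; under a job guarantee they
# receive a full wage and stay inside the tax base, so private
# incomes and public revenues hold up better.

wage = 25_000.0          # assumed annual wage lost per job
jobs_lost = 1_000_000    # assumed jobs destroyed in the slump
benefit_rate = 0.4       # assumed: benefit replaces 40% of the lost wage
tax_rate = 0.3           # assumed tax rate on earned income

# Scenario A: unemployment benefit only
income_benefit = jobs_lost * wage * benefit_rate
revenue_benefit = 0.0    # benefits assumed untaxed in this sketch

# Scenario B: public-sector job guarantee at the same wage
income_guarantee = jobs_lost * wage
revenue_guarantee = income_guarantee * tax_rate  # wages remain taxable

net_cost_benefit = income_benefit - revenue_benefit
net_cost_guarantee = income_guarantee - revenue_guarantee

print(income_benefit, net_cost_benefit)
print(income_guarantee, net_cost_guarantee)
```

On these assumed numbers the guarantee's gross and net cost is higher, but private incomes are maintained at two and a half times the level that benefits alone would sustain, and a chunk of the outlay flows straight back as tax: the stabiliser is more powerful, and partly self-liquidating.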

A second advantage would be the stimulus that a job guarantee would provide for decentralisation. The programme would be funded nationally, but would be administered locally by a variety of agencies: local governments, NGOs and social enterprises. Each would be tasked with creating “on-the-spot” employment opportunities where they are most needed (environmental, civic, and human care), matching unfilled community needs with unemployed or underemployed people. Good models would be Franklin D Roosevelt’s Works Progress Administration and Civilian Conservation Corps, which provided millions of local jobs to the unemployed, often with a strong green slant. Local authorities might even offer prizes for residents who devise the boldest and most imaginative ideas.

A frequent criticism of such public work schemes is that they would simply “make work”. This is to take at face value Keynes’s off-the-cuff remark that if the unemployed were sent to dig up old bottles full of banknotes, there need be no more unemployment. People never quote the follow-up: “It would, indeed, be more sensible to build houses and the like; but if there are political and practical difficulties in the way of this, the above would be better than nothing.” Of course, there are many more sensible things that need doing in every community, which only a petrified imagination stops from being conceived and carried out.

If public money is to be spent – and much more will need to be spent in the years ahead – it is much better spent creating work than maintaining millions in idleness waiting for the private economy to heal itself. The big idea which needs to be grasped is that state-created work is itself part of the healing process. Not only does it add value to the community, but it expands the market for goods and services, which the private sector needs to return to health.

Keynes was convinced that if democracies failed to tackle mass unemployment, people would turn to dictatorships. He gave democracies a programme of action. We must build on it today. The economics profession has a special responsibility to show the way, which it has shamefully shirked.

Robert Skidelsky is the author of a three-volume biography of J M Keynes, a cross-bench peer and emeritus professor of political economy at the University of Warwick. His most recent book is Money and Government: The Past and Future of Economics

The Monetarist Fantasy Is Over

UK Prime Minister Boris Johnson, determined to overcome Treasury resistance to his vast spending ambitions, has ousted Chancellor of the Exchequer Sajid Javid. But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy.
LONDON – The forced resignation of the United Kingdom’s Chancellor of the Exchequer, Sajid Javid, is the latest sign that macroeconomic policy is being upended, and not only in the UK. In addition to completing the ritual burial of the austerity policies pursued by UK governments since 2010, Javid’s departure on February 13 has broader significance.
Prime Minister Boris Johnson is determined to overcome Treasury resistance to his vast spending ambitions. The last time a UK prime minister tried to open the government spending taps to such an extent was in 1964, when Labour’s Harold Wilson established the Department of Economic Affairs to counter Treasury hostility to public investment. Following the 1966 sterling crisis, however, the hawk-eyed Treasury re-established control, and the DEA was soon abolished. The Treasury, the oldest and most cynical department of government, knows how to bide its time.
But Johnson’s latest coup also is indicative of a global shift from monetary to fiscal policy. After World War II, stabilization policy, the brainchild of John Maynard Keynes, started off as strongly fiscal. The government’s budget, the argument went, should be used to balance an unstable economy at full employment.
In the 1970s, however, came the monetarist counter-revolution, led by Milton Friedman. The only stabilizing that a capitalist market economy needed, Friedman said, was of the price level. Provided that inflation was controlled by independent central banks and government budgets were kept “balanced,” economies would normally be stable at their “natural rate of unemployment.” From the 1980s until the 2008 global financial crisis, macroeconomic policy was conducted in Friedman’s shadow.
But now the pendulum has swung back. The reason is clear enough: monetary policy failed to anticipate, and therefore prevent, the Great Recession of 2008-09, and failed to bring about a full recovery from it. In many countries, including the UK, average real incomes are still lower than they were 12 years ago.
Disenchantment with monetary policy is running in parallel with a much more positive reading of US President Barack Obama’s 2008-09 fiscal boost, and a much more negative view of Europe’s post-slump fiscal austerity programs. A notable turning point was the 2013 rehabilitation of fiscal multipliers by the International Monetary Fund’s then-chief economist Olivier Blanchard and his colleague Daniel Leigh. As Blanchard recently put it, fiscal policy “has been underused as a cyclical tool.” Now, even prominent central bankers are calling for help from fiscal policy.
The theoretical case against relying on monetary policy for stabilization goes back to Keynes. “If, however, we are tempted to assert that money is the drink which stimulates the system to activity,” he wrote, “we must remind ourselves that there may be several slips between the cup and the lip.” More prosaically, the monetary pump is too leaky. Too much money ends up in the financial system, and not enough in the real economy.
Mark Carney, the outgoing governor of the Bank of England, recently admitted as much, saying that commercial banks had been “useless” for the real economy after the slump started, despite having had huge amounts of money thrown at them by central banks. In fact, orthodox theory still struggles to explain why trillions of dollars’ worth of quantitative easing, or QE, remains stuck in assets offering a negative real rate of interest.
Kenneth Rogoff of Harvard recently argued that fiscal stabilization policy “is far too politicized to substitute consistently for modern independent technocratic central banks.” But instead of considering how this defect might be overcome, Rogoff sees no alternative to continuing with the prevailing monetary-policy regime – despite the overwhelming evidence that central banks are unable to play their assigned role. At least fiscal policy might in principle be up to the task of economic stabilization; there is no chance that central banks will be.
This is due to a technical reason, the validity of which was established both before and after the collapse of 2008. Simply put, central banks cannot control the aggregate level of spending in the economy, which means that they cannot control the price level and the aggregate level of output and employment.
A less skeptical observer than Rogoff would have looked more closely at proposals to strengthen automatic fiscal stabilizers, rather than dismissing them on the grounds that they will have (bad) “incentive effects” and that policymakers will override them on occasion. For example, a fair observer would at least be open to the idea of a public-sector job guarantee of the sort envisaged by the 1978 Humphrey-Hawkins Act in the US, which authorized the federal government to create “reservoirs of public employment” to balance fluctuations in private spending.
Those reservoirs would automatically be depleted and refilled as the economy waned and waxed, thus creating an automatic stabilizer. The Humphrey-Hawkins Act, had it been implemented, would have greatly reduced politicians’ discretion over counter-cyclical policy, while creating a much more powerful stabilizer than the social-security systems on which governments now rely.
To be sure, both the design and implementation of such a job guarantee would give rise to problems. But for both political and economic reasons, one should try to tackle them rather than concluding, as Rogoff does, that, “with monetary policy hampered and fiscal policy the main game in town, we should expect more volatile business cycles.” We have the intelligence to do better than that.