Macroeconomics, End of Work, Climate Change

Robert Skidelsky is emeritus professor of political economy at Warwick University. His numerous, award-winning books include Keynes: The Return of the Master (2010), a discussion of John Maynard Keynes and the urgent relevance of his ideas in the wake of the 2008 financial crisis, and How Much is Enough? The Love of Money and the Case for the Good Life (2012), co-written with his son Edward Skidelsky. A member of the House of Lords since 1991, Skidelsky was elected a Fellow of the British Academy in 1994. His most recent book is Money and Government (2018) in which he argues against the orthodoxy of small-state neoclassical economics in favour of Keynes’ “big idea”.


Interview by Masoud Golsorkhi

Masoud Golsorkhi You say in the preface to Money and Government that the whole of macroeconomics is up for grabs in view of the poor recovery after the 2008 crash, yet throughout, you draw a picture of economics as being gripped by ideology. In response to the pandemic the government has shown it is prepared to ignore its own best advice. Do you feel vindicated? Do you feel Keynes is vindicated?
Robert Skidelsky I think so, but it all depends how deep the conversion is. If it’s just thought of as an emergency, and it’s assumed that life will return to normal pretty quickly, then the chances are that we’ll try and get back to what was there before. If, on the other hand, economists start realising that there is a chronic condition in the capitalist system of today and it’s not just an emergency, then I think there’s some chance that the state will re-emerge as a major player in the management of economies. They’ll start rethinking the role of independent central banks as the only agents of stabilisation. They’ll start thinking more about inequality and the role it plays.

MG Do you suspect that we will have to bring the deficit back down again?
RS The deficit is one of those hugely irrational and irrelevant things that people talk about. The deficit depends on the state of the economy. If you have a programme that can bring the economy back to health, then the deficit will automatically come down. If, on the other hand, you say the deficit is important in itself and start cutting it, then you’re cutting the economy at the same time. The deficit is supporting the economy; if you cut it, you’re removing one of the economy’s main supports. So just to concentrate on the deficit is to put the cart before the horse. That was the mistake they made really in 2009 and 2010 – they prematurely started cutting the deficit in Britain and in the United States as well, and as a result, the recovery was seriously incomplete.

MG Do you think they’ve learned their lesson?
RS No, but there’s more weight behind the kind of things I’m saying now than there was in 2008 and 2009, because we’ve had two shocks. The first shock was endogenous to the economy; it was the shock to the financial system. The shock of the pandemic is external, but the effects have been fairly similar. More and more economists are saying, we must redo the economics profession. Paul Krugman in his latest book says this, and [Joseph] Stiglitz has been saying it. People who have been called heterodox like me now feel a bit more mainstream because so much has gone wrong with the new Keynesian-neoclassical synthesis.

MG Overall, do you think that the imbalance between the creditor and the debtor class, which is a political question, is likely to be addressed in the UK and elsewhere?
RS It’s being addressed a bit. What you’re doing is asking one of the oldest questions in economic life: who is mainly responsible for paying the debt? Who is responsible for incurring the debt and who’s responsible for paying it back? The orthodoxy is that you get into debt through voluntary decision, through over-optimism or just plain profligacy, and it’s your duty to pay it back. If you can’t pay it back, traditionally, you went to prison. They don’t do that now, but I think it’s a big issue because the other side is why do you make loans to people who are not creditworthy? This arose,
of course, in the subprime crisis. There were a lot of bad loans being taken out by people at teaser rates, which then went up – all because financial institutions saw that they could make a profit. So there’s always a tendency to overlend. On the other side, we live in a culture where you must have everything you want now, so there’s a tendency to get into debt. Getting into debt isn’t just a matter of greed on the part of the debtor, however; it’s also often a matter of necessity. When you have such a huge quantity of poor people in our kind of society, in the UK and elsewhere, then of course, they actually need to get into debt in order to survive. Then they’re saddled with a debt they can’t repay. It’s a larger question than just creditors and debtors; it’s about the distribution of income and wealth within the society.

MG Would you be able to explain one of the problems that potentially awaits us after this period, namely, stagflation? How did it come about at the end of the golden years between 1950 and 1975? Can it happen again?
RS Towards the end of the “golden period”, governments were trying to maintain full employment, which led to inflation. Governments weren’t the only cause of this inflation; there were also very, very powerful unions that felt they were secure because the government would never give up on them. So you had an inflationary tendency and periodic attempts to curb it, which would lead to rises in unemployment. The recoveries that followed would then show a greater increase in inflation than in employment. The result was called the “misery index”: simultaneous rises in inflation and unemployment. That’s when Milton Friedman steps in and says, “Well, the explanation of this situation is that governments are printing too much money and trying to keep unemployment below its natural rate.” Yet that was a very, very partial narrative of what went wrong; it’s ridiculous to say that inflation was simply the result of the government’s attempts to maintain too high a level of employment. What about the Vietnam War? What about the huge, huge inflation that was unleashed globally by the United States in the 1960s? What about supply? What about the quadrupling of oil prices? There was a combination of factors that would have defeated almost any kind of economic policy at that time. When you have four or five things going wrong simultaneously, it’s very, very hard just to pick out one and say this is the cause of everything that’s gone wrong. You have an interaction of events and policy becomes extremely difficult. Taking one as the cause of all that went wrong led to monetarism and the abandonment of any attempt to maintain full employment. You ended up with austerity, which did the reverse, and we’ve never really recovered properly from that. We’ve had periods when things haven’t been too bad, but we haven’t recovered properly from it either in terms of a demand for labour or in terms of equality, because those societies were also more equal than ours. The union push wasn’t just a push for full employment; it was an equalising force on income. Now we’ve had automation, but no push to share the fruits of it among the population at large. You have huge profits accruing to a very, very small group of people and an increase in poverty, which is now permanently established.

MG The point that Thomas Piketty makes is that big chunks of capital have vanished from the productive stage of the world economy. Would you agree with that?
RS They have vanished from the productive stage of the world economy, but they haven’t gone to sleep. They have gone into what Keynes called “financial circulation”, which is to say, assets are swapped, which then pushes up their price. One important feature of the capitalist economy as it now operates is the financial bubble. It could be any kind of asset, yet that doesn’t mean that the assets trickle down into the real economy. Some do, but many don’t. They just churn around until they crash. So unless we achieve some new way of stabilising economies at a proper level, we’re going to have more and more of these asset bubbles and their crashes, which could happen anywhere.

MG Is this a symptom of what they call the financialisation of the whole economy?
RS It is what’s called the financialisation of the economy quantitatively. It simply means that the financial sector relative to any other sector is now much larger than it was. That’s defended on the grounds of the services that the financial sector renders the rest of the economy. They like to present themselves as intermediaries that facilitate financial borrowing and lending, saving and investing and make it safer because the more diversified and the more securitised their lending, so it is argued, the less risky for everyone. Of course, all that is not true. I agree very much with something Adair Turner said soon after the last crash: a lot of financial activity is simply social waste. So how do you stop it? The old way was called financial repression and operated in the heyday of the Keynesian system: you would stop the export of capital and put restrictions on its movement. So it was no longer really free to roam the world in search of the largest profit and confined to your own country. Keynes said that the financial system shouldn’t be larger than the political system – that was the principle behind it. We need some new financial repression, but how do you do it? You can do it through taxes of one kind or another. The well-known Tobin tax, for example, named after an economist called James Tobin, taxed short-term financial transactions. In other words, you can make it illegal to hold assets too briefly. There’s also a question of ethics involved and a lot of people want to restore ethical banking. You can also stop bubbles by some sectoral allocation of capital. But remember, the central banks in all countries have been given control of what’s called financial stability. It’s become one of the responsibilities of central banks to maintain financial stability, which wasn’t the case before the crash of 2008 and 2009. Before that they only had a responsibility for inflation because the financial system was assumed to be fairly stable. Now they have a specific responsibility and they’re talking about how best to carry it out. There are all kinds of stress tests. There are lots of things you could do to make banking safer than it now is. Whether these will happen, I don’t know, because the banking lobby is very, very powerful. There’s also the huge question of shadow banking and how its financial institutions can escape the scrutiny of regulators. There are lots of things you could do, but you’ve got to have the political will and support to do them.

MG What do you think about the idea of growth? Keynes didn’t have sight of the melting ice caps, but we do. Is the idea of growth something that we can abandon or reframe like Mariana Mazzucato or Ann Pettifor propose, and can we recondition capitalism to deliver something other than growth?
RS They want green growth by cutting down on one type of growth, but not cutting down on growth itself. The more radical proposition is degrowth. Obviously some countries have to go on growing: they’re very poor. And even within the so-called abundant economies, many people don’t feel that they have reached a stage of abundance. The problem with the notion of abundance is that it’s a very macro term, and you always want to ask: abundance for whom? How much is enough? My son and I wrote a book together called How Much is Enough?, which tackles exactly this problem of abundance. How much do you need to lead a good life? And the answer is, of course, that it’s very hard to put a figure on it. First of all, you have to decide what a good life is and once you’ve got an idea of what a good life is or might look like, then you can answer the question of how many material resources you want. If you go back to the classical Greeks, people like Aristotle, they said enough is a sufficiency to enable you to develop fully your capacities, but we’ve become very greedy and we want more and more. And of course, the whole profit-making momentum is designed to feed our desire for more and more and more so that we can never cut it off at any point. That does bring one, as you rightly say, to the planetary limits. We’re simply depleting bit by bit the planet’s capacity to support us, and that has to be tackled. You can’t have a simple growth agenda anymore because you’ve got a supply constraint looming up – it’s as simple as that. ◉

Britain’s Benefit Madness

Work is the ultimate escape from poverty. But the futile sort demanded by the United Kingdom’s income-support scheme puts many of society’s weakest members on a path to nowhere, because it reflects a welfare ideology that fails to distinguish fantasy from reality.

LONDON – Mahatma Gandhi probably never said, “The greatness of a nation can be judged by how it treats its weakest member.” But that doesn’t make it any less true. And nowadays, the United Kingdom is in danger of receiving a failing grade.

According to the Joseph Rowntree Foundation, 14.5 million people, or 22% of the UK’s population of 65 million, live below the “poverty line” (defined as less than 60% of median income). Of a working-age population of 42 million, some 5-6 million, or about 12%, are either unemployed or underemployed (working less than they want to). About eight million working-age citizens, or 20% of the total, qualify for what the British call “benefit,” whereby all or part of their income is paid by the state.

These figures are approximate, and some of the details are disputed. But the broad picture is that, even setting aside COVID-19, the UK’s capitalist system normally cannot provide a living wage for about one-fifth of the country’s working-age population.

This represents a huge change from the late 1940s, when Britain established its redoubtable welfare state. The philosophy that inspired it, reflected in the 1942 Beveridge Report, held that the state would guarantee full employment, that work would provide the income for a decent life, and that the welfare system would deal with “interruptions” to work caused by unemployment, sickness, and maternity.

By the 1960s, the interruptions had become much more frequent, not because unemployment had risen, but because the number of claims for so-called national assistance (benefits not covered by insurance) rose faster than the working-age population. The initial growth stemmed largely from an increase in the number of single mothers and an additional entitlement to disability benefits. Later increases in the number of claimants, including in the early 1980s, were fueled by a rise in unemployment and precarious work.

The current situation, with about 20% of the working-age population “living on the state,” has existed since the 1990s. The growing numbers inevitably resulted in the spread of means-testing and conditionality, which, together with demands to simplify an increasingly fragmented system, led to the introduction of the current Universal Credit regime, whose long rollout began back in 2011. The new scheme consolidated six benefits for working-age people, in or out of work, into a single monthly payment.

But the key move had come earlier, in 1995, when the UK’s then-Conservative government replaced the unemployment benefit with a Jobseeker’s Allowance. In contrast to the era of Keynesian full-employment commitments, claimants would receive the allowance in return for undertaking a mandatory “job search,” defined as “work activity.” Every claimant had to prove that they were spending 35 hours a week – the equivalent of a full-time job – looking for work. Failure to engage in the necessary “work activity” would result in their allowance, or “wages,” being docked or cut off.

The philosophy behind this parody of the work contract was clearly explained by Neil Couling, a senior civil servant at the UK’s Department for Work and Pensions (DWP), in his evidence to the House of Lords Select Committee on Economic Affairs in March 2021. “The system does require the 2.5 million people on universal credit to engage with work search as a condition of receiving universal credit,” Couling said. “You have to look for a job if you are going to get a job.”

As the DWP explained, “deliberately mirroring a contract of employment, the claimant commitment makes clear that welfare is no different from work itself.” This means that “just as those in work have obligations to their employer, so too claimants have a responsibility to the taxpayer.”

Pronouncements like this one reveal that insanity – the inability to distinguish fantasy from reality – has taken over a system. It is true that you have to look for a job in order to get one. But you will not find one, even if you search overtime, if there are none available. The fantasy behind the scheme (which also underpins neoclassical economics) is the assumption of full employment, with unemployment being simply a consequence of able-bodied workers’ preference for leisure.

Likewise, the UK’s benefit system assumes, insanely, that all claimants are digitally literate. The moving film I, Daniel Blake, about an unemployed carpenter who had recently had a heart attack, portrays Blake’s increasingly desperate efforts to submit a benefit claim online. Although his cardiologist has said he is unfit for work, the authorities say he lacks enough “points” to qualify for disability benefits. So, Blake has to apply for a Jobseeker’s Allowance, which means he is forced to attend a CV workshop and be coached to apply for jobs that he is medically unfit to do.

Blake, who is digitally illiterate, goes to a public library to use the computer there. When the librarian tells him to “run the mouse up the screen,” he takes the mouse and moves it across the monitor.

He then writes a CV by hand and gives it to various employers, who tell him that there is no work to be had. But the officials at the Jobseeker’s Allowance office are unimpressed. “That’s not good enough, Mr. Blake – how do I know you’ve actually been in contact with all these employers?” says one. “Prove it.” This is pure Kafka, the algorithmic grinding of a senseless machine.

There is of course a method in the madness: Universal Credit can be seen as a deliberate tool to shape a currently redundant segment of the workforce into the forms required by low-skilled labor markets. But the disease is misdiagnosed: the problem is aggregate under-demand for labor, not a surplus supply of the wrong kind of labor.

The only escape from such a system is to replace fantasy with reality. If the UK’s private sector cannot in normal times provide decently paid jobs for all those willing and able to work, the state should step in with a public-sector job guarantee. That would immediately halve the number of Universal Credit claimants “searching for work” and, by eliminating Marx’s “reserve army of the unemployed,” substitute upward for downward pressure on wages.

Community-provided work, however dire, is more rewarding than a soul-destroying slog from firm to firm in search of nonexistent jobs. Work is the ultimate escape from poverty, but the futile sort demanded by the UK’s benefit contract puts many of society’s weakest members on a path to nowhere.

Sequencing the Post-COVID Recovery

As countries emerge from the COVID-19 pandemic, John Maynard Keynes’s emphasis on the need to implement post-crisis economic policies in the right order is highly relevant. But sustainability considerations mean that the distinction between recovery and reform is less clear cut than it seemed in the 1930s.

LONDON – John Maynard Keynes was a staunch champion of US President Franklin D. Roosevelt’s New Deal. The road to a civilized future, he wrote, went through Washington, not Moscow – a direct rejoinder to those idealists, including some of his students, who put their faith in communism.

But Keynes was not uncritical of FDR. Specifically, he faulted Roosevelt for mixing up recovery and reform. Recovery from the slump was the first priority; social reforms, “even wise and necessary,” might impede recovery by destroying business confidence. Presaging today’s debates about post-pandemic economic-policy priorities, Keynes argued that proper sequencing would be the key to the New Deal’s success.

The advisers in FDR’s “brain trust” were reformers, not Keynesians, and had a different view. Attributing the Great Depression to excessive corporate power, they thought that the route to recovery lay in institutional change. As a result, so-called Keynesian stimulus was a minor component of the New Deal – emergency treatment pending longer-run cures.

Keynes himself repeatedly argued that the New Deal’s extra federal spending was insufficient to bring about full recovery. FDR’s total stimulus package of $42 billion – mostly spent in the first three years of his presidency, 1933-35 – amounted to about 5-6% of US GDP at the time. Keynes, taking a rosy view of the fiscal multiplier, thought it should be double that.

The Nobel laureate economist Paul Krugman said much the same about President Barack Obama’s 2009 stimulus of $787 billion, which came to 5.5% of GDP. On the basis of such uncertain reckonings, President Joe Biden’s $1.9 trillion economic rescue plan, equivalent to 9% of current GDP, seems about right.
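These shares are simple ratios of stimulus to national income, as a quick sketch makes plain. In the snippet below the GDP levels are round figures of my own, roughly right for the years in question rather than official statistics; treat the output as illustrative arithmetic only.

```python
# Back-of-the-envelope check of the percentages quoted above.
# GDP levels are assumed round numbers, not official data.
packages = {
    "Obama 2009": (787e9, 14.4e12),   # (stimulus, approx. US GDP, dollars)
    "Biden 2021": (1.9e12, 21e12),
}
for name, (stimulus, gdp) in packages.items():
    print(f"{name}: stimulus = {stimulus / gdp:.1%} of GDP")
# Obama 2009: stimulus = 5.5% of GDP
# Biden 2021: stimulus = 9.0% of GDP
```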

Keynes was talking about fiscal stimulus. He was famously skeptical of the monetary stimulus attempted by both President Herbert Hoover in 1932 and FDR in 1933 – now called “unconventional monetary measures,” or, more simply, quantitative easing (QE). Then, like now, the goal was to bring about a recovery of prices by printing money.

The most controversial of these schemes, Roosevelt’s gold-buying spree, was designed to offset the collapse in commodity prices. As FDR explained in one of his famous fireside chats, higher hog prices meant higher farm wages and buying power. In fact, large-scale gold buying by the US Treasury and the Reconstruction Finance Corporation failed to move the price of hogs or anything else.

Keynes’s reaction was scathing. Rising prices are an effect of recovery, not a cause of it, he argued, adding that trying to raise output by increasing the quantity of money was like “trying to get fat by buying a larger belt.” All that FDR’s gold-buying program did was to replace gold hoarding with currency hoarding. And yet economists continually reinvent the wrong wheel. The 2009-16 QE programs embodied the same misguided theory and similarly failed to boost the price level.

Likewise, Keynes criticized those provisions of FDR’s National Recovery Administration that tried to engineer recovery by strengthening the position of labor. This, too, he thought, was the wrong way round: the time to saddle business with extra costs was after recovery was secure, not before. And while Keynes never challenged FDR’s promise to drive the money changers out of the temple, he must have wondered about how this would affect the confidence of a paralyzed financial system.

Finally, Keynes worried that mixing up recovery and reform was giving FDR’s administration “too much to think about all at once.” This observation should serve as a warning to those who see in an economic crisis the chance to push all their favorite schemes, regardless of temporal consistency.

Keynes’s stress on the importance of proper policy sequencing is highly relevant today. But, as we emerge from the COVID-19 pandemic, the distinction between recovery and reform – and consequently between macro and micro policy and the short and long run – is less clear cut than it seemed to Keynes (and others) in the 1930s.

For starters, full-employment policy is now obviously linked to employability, which was simply not the case in the 1930s. The reason so many people were out of work back then was not that they lacked the skills required by industry, but rather that aggregate demand was insufficient.

Keynes thus wrote in December 1934 that the purpose of the government spending a “small sum of money” was to get “private individuals and corporations to spend a much larger sum.” What they spent it on was of no further concern to policymakers.

But in today’s age of automation, no government can afford to take such a cavalier attitude to the sustainability of employment. As early as 1930, in fact, Keynes foresaw technological unemployment as a problem that would be outside the scope of demand management.

Since then, the accelerating threat of job redundancy has enlarged what Keynes called the “agenda” of government. In particular, the state must be centrally concerned with the speed of technological innovation, the choice of technologies, and the distribution of the productivity gains that technology enables.

In the coming years, the uncomplicated Keynesian full-employment policy will need to give way not just to a training guarantee, but also to an income guarantee as the character of work changes and the quantity of necessary human labor falls. Sustainable employment may thus be very different from what we now think of as full employment.

Then there is environmental sustainability. Although Keynes understood that the state would need to account for a much larger share of investment, this was mainly a matter of smoothing out fluctuations in the business cycle, not plotting a sustainable ecological future. (Conferences on nutrition always bored him.) He was too much of a liberal, or perhaps simply too much of his time, to believe that the state’s agenda should include deliberately shaping the future through its choice of investment and consumption projects.

Today, economic reform shadows recovery to a far greater extent than it did when Keynes distinguished between the two. But his way of setting out the relationship is a clear starting point from which to build both better.

Why the West failed to contain COVID-19

The promise of a “final” end to lockdowns in the spring of 2021 is the kind of hyperbole we have come to expect about new products and policies. The Oxford University vaccine may work; it may even be delivered effectively. Meanwhile, Covid-19 is still around, the UK government is extending lockdown for large parts of the country and effective protections are still being ignored, at grave cost.

From the start of the pandemic, the policy choice in Europe has been presented as a trade-off between lives and livelihoods. Since priority was (rightly) attached to saving lives, the livelihoods of large sections of the population have been sacrificed, with income support for workers in the form of long paid holidays called furloughs, and loans and grants for businesses prevented from trading. As a consequence of widespread business distress and associated redundancies, European countries face a huge problem in reopening their economies in the wake of the pandemic. Forecasts suggest that the UK economy will be 6 per cent smaller in 2021 than in 2019, and that unemployment will stand at 7.5-8.0 per cent – roughly double its pre-crisis level.

The experience of East Asia shows the choice in Europe was, and continues to be, wrongly presented. Countries such as China, Japan, South Korea and Taiwan found a way of protecting both lives and livelihoods. Their death rates per head of population have been much lower than in Europe; their economies have barely contracted; and they are forecast to be larger, not smaller, next year. Their secret was an effective system of testing, tracking and quarantining. The question is why such a system was not adopted as the first line of defence in Europe. This is not just a historical question. If it were technically feasible in March it is even more so today. It is still not too late to avoid future economic and social damage, even though most of the damage already inflicted cannot be repaired.

***

Virologists identify the nature of an epidemic, while epidemiologists study the way it spreads. It was an almost-forgotten English doctor, Ronald Ross, who first developed a predictive model of malaria transmission, which was later generalised as the SIR (Susceptible, Infected and Recovered) model of contagious disease epidemics. His successors concluded that a viral infection ends when the virus runs out of hosts in which it can reproduce itself; that is, when the population develops “herd immunity”.

Central to the SIR model is the “R rate”: the average number of people each infected person goes on to infect. Statisticians work out a data-based prediction of this rate in a susceptible population. Politicians, advised by medical scientists, as well as by health professionals who tell them about medical capacity, and economists who tell them about strains on the economy, decide policy. Their skill lies in assessing political reactions to their policies.

“Mass protection” and “focused protection” have been the two main political responses to the models produced by these experts. They have been presented as polar opposites, but are essentially two variants of the same response.

Think of them in terms of breaking the S → I chain. If a virus is circulating in a susceptible population, the effect of a mass lockdown (semi-isolation) is to reduce the transmission rate and thus the overload of medical services. As soon as the lockdown is eased, the spread rate picks up, but because the susceptible population has been reduced by recovery or death, R continues to decline (with smaller spikes) to the point when normal life can be resumed. The treatment is effective, but the economic cost is horrendous.
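The mechanics can be made concrete with a toy simulation. The sketch below is purely illustrative – the function and all parameter values are my assumptions, not estimates fitted to Covid-19 – but it reproduces the trade-off just described: temporarily cutting the transmission rate flattens the infection peak, at the price of suspending normal economic life for the duration.

```python
# A toy SIR simulation (illustrative parameters, not fitted to Covid-19).
# A temporary cut in the transmission rate ("lockdown") lowers the peak
# share of the population infected at any one time.

def simulate_sir(beta=0.3, gamma=0.1, days=300, lockdown=None):
    """Discrete-time SIR on a population normalised to 1.
    lockdown = (start_day, end_day, reduced_beta) or None."""
    s, i, r = 0.999, 0.001, 0.0          # susceptible, infected, recovered
    peak_i, peak_day = i, 0
    for day in range(days):
        b = lockdown[2] if lockdown and lockdown[0] <= day < lockdown[1] else beta
        new_infections = b * s * i       # S -> I transitions
        recoveries = gamma * i           # I -> R transitions
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        if i > peak_i:
            peak_i, peak_day = i, day
    return peak_i, peak_day

for label, lockdown in [("no lockdown", None),
                        ("60-day lockdown", (30, 90, 0.09))]:
    peak_i, peak_day = simulate_sir(lockdown=lockdown)
    print(f"{label}: peak infected {peak_i:.1%} on day {peak_day}")
```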

The alternative of focused protection, as proposed by the Great Barrington Declaration and others, is a restricted application of mass protection. It aims to remove only the most susceptible population (the old and those with underlying health issues) from the path of the virus. It thus makes possible a more normal life for most people and, in principle, reduces the pandemic’s economic costs. However, the Great Barrington Declaration’s signatories never made clear how a sizeable section of the population can be securely “shielded” from the rest. For this reason, it is supported by very few public health professionals. Sweden has shown that any exponential growth of the virus in a healthy population is bound to spread to the vulnerable. The country’s death rate was similar to that of mass protection countries, and the effects on its economy were similar too.

The bird that never flew in most of Europe (Germany was a partial exception) was “targeted protection” based on digital testing, tracking and self-isolating. Such systems exist, and have been rolled out in the UK as well as in other European countries, but never as an alternative to mass or focused protection. Indeed, the UK government decided early in March to stop contact tracing and impose a mass lockdown.  

Digital or targeted protection works not by locking down pre-determined blocks of people but by isolating only those individuals and clusters who test positive. Every individual spreader and his or her contact group is isolated straight away. As a result, normal life, with a few sensible precautions (masking, distancing, hand washing) can continue as before. No lockdowns, total or partial, are needed. The South Korean economy contracted by 2.74 per cent between March and September; the UK economy by more than 9 per cent. (Taiwan’s economy actually grew.)

South Korea is the pin-up story for digital protection. Only 289 out of 51 million inhabitants are known to have died from Covid-19 between February and July 2020; in the UK it was 44,600 out of 66 million. South Korea avoided a national lockdown; the UK has had two.

South Korea’s success is generally attributed to early identification and management of cases, clusters and contacts. Every time an individual tests positive in the densely populated country, 100 contacts are identified by location and payment data and then tested. The country has also skilfully controlled its borders (as have Taiwan and China) and taken sensible precautions such as masking, selective social distancing and localised temporary lockdowns. Short of a vaccine, there is no single efficacious system of protection; the difference lies in the weight given by policymakers to the different elements of protection. The national lockdown method is by far the most expensive and, on the evidence, the least efficacious.

Electronic shielding was never central to the European policy response. The reasons are complicated, but essentially come down to a mixture of unfamiliar technology, logistics and politics. Consider these in turn.

***

The technology behind digital shielding is quite simple. It consists of an app on every mobile phone. This is similar to the English NHS Covid-19 app, but without the need for a QR code check-in.

An effective system works by identifying those users who have been exposed to someone else who has tested positive. The system asks them to be tested within 48 hours and to quarantine in the meantime. In a lighter version of the scheme they can be asked if they have symptoms, and only then will they have to isolate while they wait to get tested.
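The decision rule itself is almost trivial to state in code. The sketch below is hypothetical – the contact log and names are mine, not the NHS app’s or any real system’s – but it captures the logic just described: one positive test triggers a quarantine-and-test notice, with a 48-hour testing window, to every logged contact.

```python
# Hypothetical sketch of the notification rule: one positive test triggers
# a quarantine-and-test-within-48-hours notice to every logged contact.

from datetime import datetime, timedelta

# contact log: user -> set of users their phone has recently been near
contacts = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
    "dan": set(),
}

def notify_exposed(positive_user: str) -> dict:
    """Return a notice for everyone logged as a contact of the case."""
    deadline = datetime.now() + timedelta(hours=48)
    return {
        user: f"exposed to a confirmed case: quarantine now, "
              f"get tested by {deadline:%d %b %H:%M}"
        for user, near in contacts.items()
        if positive_user in near
    }

# Alice tests positive; Bob and Carol are told to quarantine, Dan is not.
for user, notice in notify_exposed("alice").items():
    print(user, "->", notice)
```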

An efficient test and track system of this kind would obviate the need for either mass lockdowns or continuous mass testing. It would concentrate on the crucial infection-spreading interactions, allowing testing and isolation to be targeted at those who really are at risk. Thus the rate of transmission is curbed not by locking down whole populations for months or testing everyone, but by briefly quarantining small clusters. You would avoid the ludicrous need to lock down a million people because infection had spread in a cluster of a hundred. The economic effects are incomparably milder.

Such a system was available from the start of the pandemic. But it was never central to stopping the spread of the virus in Europe. Indeed, it was presented as a minor separate measure, subordinate to lockdown. There was no effective lobby for its priority. At no point did those familiar with the technology say to politicians: “Why not make this your main weapon?” Nor did the media take up the cause. Two obstacles seemed insuperable to mass implementation: logistical and political.

***

To obviate or minimise the need for other measures, everybody has to have the app. The scale of roll-out depends on medical capacity to test and track and user capacity to understand what is required of them. Lack of medical capacity seems not to have been the decisive constraint, except in the UK.

A more important barrier to roll-out was the large fraction of the “digitally challenged” in the population – those who didn’t know how to install the app or who had obsolete telephones. (The same problem is faced when any service is put online.) This is where a big governmental effort was needed and was not forthcoming. The government might have provided free up-to-date phones and free help in setting them up. Along with video tutorials, FAQs and other online resources, staff in every technology shop could have been trained and subsidised to offer free assistance to the digitally disadvantaged, while ensuring that the app was designed according to user-centred techniques.

For those with an obsolete mobile phone who refused to replace it but still wanted to take part in the test and track scheme, an alternative device such as a digital bracelet or necklace could have been offered to track the user, as trialled and deployed in Hong Kong, Bulgaria and Bahrain. Mobile phones or bracelets would need to be with the user at all times when outdoors, and infringement would need to be pursued by law, just as lockdown rules, mask-wearing and social distancing now are.

Enriching the population technologically in double-quick time would have cost a significant amount of money, though a tiny fraction of the cost of the lockdowns. And it would have offered a valuable opportunity to use the crisis to achieve an enhanced overall level of digital competence, with favourable knock-on effects on people’s employability.

Nor was the rapid technical processing of data the problem it has been made out to be. These are very simple and light data (a few bytes, probably not even kilobytes). The ghastly delays in processing information in the UK, revealed by the Panorama programme “Test and Trace Exposed”, were the result of organisational ineptitude, not technical difficulty.

The central point is that the technology was available from the start and the logistical problem of wiring everyone up and getting up-to-the-minute information could have been solved had attention and resources been directed to solving it. The reason they were not was political.

***

As European failures have demonstrated, if you opt for digital shielding you can’t be half-pregnant: failing to make sure that everyone follows the rules of the scheme would not be a partial success. It would be a complete failure.

This means two things: first, authorities need to make sure that the system covers the entire population; second, every test returning a positive result must be synced to the system and rapidly processed accordingly. Once this is done, all the new tests performed by the health authority are addressed to the close contacts (as identified above) of those who are positive. The colossal waste of mass lockdown is avoided.

Standing in the way were, and remain, serious concerns about privacy and enforcement.

The following incident in South Korea in May 2020, as reported by the Mail on Sunday, produced national headlines: “Night clubs in Seoul have been linked with 119 coronavirus cases nationwide after a ‘super-spreader’ visited a number of bars in the Itaewon district. [The] 29-year-old man, who is thought to be at the epicentre of the latest cluster of cases, was tracked by authorities… and tested positive for Covid-19.”

It turned out the bars were in the gay district, so, understandably, most of the client names and addresses registered by these clubs and available to the health and law enforcement authorities were false. And the fact that electronic tagging is widely used everywhere for tracing movement of criminals and that CCTV cameras might be used to track people’s movements increases the feeling that Big Brother is watching you.

The charity Privacy International argues persuasively: “Unprecedented levels of surveillance, data exploitation and misinformation are being tested across the world [in the light of Covid-19]. Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary and proportionate. It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

The key question is how to secure the efficient population protection offered by digital shielding while keeping the information it needs anonymous. 

The Italian app Immuni provides the answer. Not only does it not rely on a centralised database to trace Covid-19; it also does without venue check-ins, preferring close-range Bluetooth connections between devices. This is far more accurate (in that it drastically reduces the risk of false positives) and grants full anonymity. The system doesn’t need to know my name, nor where the interaction took place. It just knows the date, and that the interaction was long enough and the distance close enough for people to be infected by me. My identity is not disclosed at any step of the process, nor are the places I have been. My anonymity will only be breached if someone has to visit me because I have not followed the rule of disclosure.
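The privacy mechanics are worth spelling out. The sketch below is in the spirit of the decentralised designs to which Immuni belongs (the DP-3T family), not Immuni’s actual code, and every name in it is mine: phones broadcast short-lived random tokens and remember the tokens they hear; when someone tests positive, only the tokens they broadcast are published, and each phone checks locally whether it heard them. No names, no locations, no central contact graph.

```python
# Sketch of a decentralised, DP-3T-style exposure design (not Immuni's
# actual code). Phones swap random tokens; matching happens on-device.

import secrets

class Phone:
    def __init__(self):
        self.sent = []       # tokens this phone has broadcast
        self.heard = set()   # tokens overheard from nearby phones

    def broadcast(self) -> bytes:
        token = secrets.token_bytes(16)   # fresh, unlinkable identifier
        self.sent.append(token)
        return token

    def listen(self, other: "Phone"):
        self.heard.add(other.broadcast())

    def at_risk(self, published: set) -> bool:
        # No server ever learns who met whom; the check is local.
        return not self.heard.isdisjoint(published)

alice, bob, carol = Phone(), Phone(), Phone()
bob.listen(alice)                 # Bob's phone hears one of Alice's tokens
published = set(alice.sent)       # Alice tests positive; her tokens go public
print("Bob at risk:", bob.at_risk(published))      # True
print("Carol at risk:", carol.at_risk(published))  # False
```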

It is hard to evaluate the argument that Asian culture is more hospitable than European culture to tracing because Asians value personal liberty less and the “public good” more. Contemporary Europeans associate liberty with privacy, but as Hannah Arendt pointed out in her 1958 book The Human Condition, the right to be protected from the public gaze cannot ever be more than relative.

Enforcement concerns are easily addressed: anybody can opt out of the scheme and embrace the alternative solution in the form of a personal lockdown. The prospect of this would itself be an incentive to become digitally literate.

In our kind of society, people would probably need some financial incentive to join the scheme (such as free up-to-date systems or cash payments). Those who join the scheme but who don’t follow the rules would face fines or policing through phone calls and other checks.

Digital shielding presupposes some degree of scrutiny and enforcement of its rules. But this has to be weighed against the prohibitions of normal life (as well as enforcement of these prohibitions) entailed by mass lockdowns. The lesson of the lockdowns is that the liberty to do what I want requires some regulation of that liberty to ensure it does not harm others.

***

Angela Merkel said people “needed beds to be full before they would accept a lockdown”. A further implication might be that people had to experience a full lockdown before they would be ready to accept the degree of intrusion involved in digital shielding. This explains why the test, track, isolate scheme has been implemented at best as subsidiary to full or partial lockdowns. This political calculation may have been right for contemporary Europe; it was clearly not so in many Asian countries.

There are certainly legitimate concerns about limiting personal freedoms. It’s only in the light of much stricter limitations, such as those that come with a full lockdown, that one can put things in perspective and see digital shielding as a desirable alternative. In the end, the aim of this solution is to offer an alternative to general lockdown in order to avoid the ascending economic, social and medical costs for the community.

The world economy was hugely damaged; trillions of pounds, dollars and euros were spent protecting communities, and hundreds of thousands of unnecessary deaths have been caused by want of effective deployment of a system of testing, tracing and quarantining. The technical requirements for such a system existed from the start; its effective deployment would have taken some time in Europe, so that the earliest lockdowns were probably inevitable. But by May or June, there should have been no need for further lockdowns. The reason digital shielding was so slow in coming is not technical or epidemiological, but political. With some notable outliers, most European citizens have preferred the anonymity of lockdown to the individual scrutiny of testing and tracing. Given the absolute priority attached to protecting lives, the policy was therefore set. But, despite optimistic claims for a vaccine, Covid-19 is not over. It is not too late to change.

Robert Skidelsky is the author of a three-volume biography of JM Keynes, a crossbench peer and emeritus professor of political economy at the University of Warwick. His most recent book is “Money and Government: The Past and Future of Economics”.

Massimiliano Bolondi is a technology adviser.

Spending Review: Beyond Accountancy

The furlough and business support schemes, started in March, would end in October to coincide with the reopening of the economy. This meant that the UK economy would be much the same – give or take some minor “scarring” – in 2021 as it was in 2019: a year’s growth lost, but that was the limit of the damage.

The expectation of a fourth quarter bounce-back was always unrealistic: severely damaged economies never “bounce back” unaided. The Chancellor’s response to the “second wave” of infections and lockdowns in October/November was, in essence, to extend his March 2020 job retention measures until next March. The “V” was becoming more like a “U”, the scarring would be worse, but the assumption seems to have been that very little extra support for businesses and jobs would be needed after March 2021. The £280bn spent “getting the country through COVID-19” would have been spent; the fiscal task ahead was to start paying it back. Revenue would automatically increase as the economy returned to “normal”, but most of the hugely expanded deficit would have to be reduced by raising taxes.

The main realisation underlying the Chancellor’s new spending review yesterday, 25 November, was that this second, less rosy, outcome was not going to happen either. The Office for Budget Responsibility (OBR) now forecasts that the UK economy will be 6% smaller in 2021 than it was before the pandemic hit, with unemployment twice as high. It was therefore necessary to switch tack from waiting for economic life to awaken from its frozen slumber to jolting it back into life by positive action. In addition to the money already spent or pledged, Rishi Sunak has promised an extra £55bn to fund those public services in the front line of the fight against COVID-19, especially the NHS.

More interesting, from the economic point of view, are the pledges around investment in future job creation. Capital spending next year will total £100bn, £27bn more than in 2019-20; and local authorities will be able to bid for projects to improve what Sunak called “the infrastructure of local life”: a new bypass, upgraded railway stations, less traffic, more libraries, museums and galleries, better high streets and town centres. The Chancellor has also promised a National Infrastructure Bank, headquartered in the North of England and tasked with working with the private sector from next spring onwards to finance major new investment projects across the UK.

All of this represents a considerable revolution in thinking, but without the new language which would make this obvious. The new programmes are tagged onto the old programmes, with bits of extra money for them. This is partly because the Treasury’s thinking is still evolving; partly because a Spending Review is not the place to announce a new relationship between the state and the economy. But this is the logical consequence of the Chancellor’s announcement.

The government reacted to the pandemic by putting sticking plaster over the temporary wounds it inflicted on the private economy. What Sunak’s new measures implied was that, in future, the state is going to play a much more active role in maintaining the patient’s health. The accelerated and expanded capital investment programmes (together with the announcement of a National Infrastructure Bank) amount to a reassertion of the state’s investment function, which had withered away after the Thatcher revolution. The endorsement of local authority job creation schemes, together with the expansion of Kickstart, implies that the state cannot, and will not, in future be indifferent to the scourge of unemployment.

At the moment these are only tentative gropings towards a new economic philosophy. Thatcherite neoliberalism was all of a piece. It held that, provided inflation was avoided, the market system could be expected to produce the best outcomes, in the short run and the long run. This encompassed a global market; it included indifference to the distribution of wealth and income. In the neoliberal nirvana, everyone received the “right” rate for the job.

The new philosophy is only half formed. It lacks a language of public responsibility; and it has too many missing paragraphs. Crucially, what should be the role of the financial system, and what measures should be taken to chain it to responsibility? How can a system of national protection be made compatible with market-driven globalisation, which makes national prosperity dependent on global supply chains over which no one has any control? What steps can, or should, governments take to prevent cost-cutting automation from destroying jobs, livelihoods, and communities?

These are the questions which leaked out from the accountant’s words of 25 November. They will not go away.

Lord Skidelsky is emeritus professor of Political Economy at Warwick University; and author of a prize-winning biography of the economist John Maynard Keynes.

Job Creation is the New Game in Town

November 13, 2020

Even if a successful rollout of a new COVID-19 vaccine causes the current health crisis to recede by next spring, the unemployment crisis will remain. That is especially true in the United Kingdom, where fiscal stimulus is urgently needed to avert a lost decade – if not a lost generation – of growth.

EDINBURGH/LONDON – In the wake of the COVID-19 pandemic, both the US and European economies are gearing up for large-scale job creation. US President-elect Joe Biden has pledged to invest $700 billion in manufacturing and innovation, plus $2 trillion in a “Biden Green Deal” to combat climate change and promote clean energy. Meanwhile, Germany has abandoned years of thrift by backing a €750 billion ($887 billion) European Union recovery fund and, like France, will maintain its own national job retention and job creation program throughout 2021.

By contrast, the United Kingdom’s chancellor of the exchequer, Rishi Sunak, has fallen behind the curve. Back in March, many expected that Britain would experience a V-shaped recovery. As this prospect faded, it became clear that Sunak’s rescue operation needed to be matched with a viable recovery plan.

The consensus view is that both the UK and the global economy will be smaller in 2021 than they were in 2019. The International Monetary Fund predicts that the global economy will be 6.5% smaller than was forecast before the COVID-19 crisis, with a legacy of unemployment at least double the pre-pandemic norm.

These gloomier forecasts have prompted international calls for the reinstatement of active fiscal policy, with the IMF urging rich-country governments to start large public investment programs. In its latest Fiscal Monitor, the Fund says that increasing public investment by 1% of GDP could boost GDP by 2.7%, private investment by 10%, and employment by 1.2%.
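Taken at face value, the Fund’s headline number implies a fiscal multiplier of 2.7: each unit of extra public investment raises output by 2.7 units. A minimal sketch of the arithmetic, assuming a round pre-pandemic UK GDP of roughly £2.2 trillion (my figure, not the IMF’s):

```python
# Illustrative arithmetic applying the IMF's elasticities to an assumed
# UK GDP of about 2.2 trillion pounds (a round figure, not official data).
gdp = 2.2e12                   # assumed UK GDP, pounds
extra_investment = 0.01 * gdp  # public investment up by 1% of GDP
gdp_boost = 0.027 * gdp        # IMF: output up by 2.7% of GDP
print(f"extra public investment: £{extra_investment / 1e9:.0f}bn")
print(f"implied rise in GDP:     £{gdp_boost / 1e9:.0f}bn"
      f"  (multiplier = {gdp_boost / extra_investment:.1f})")
```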

The IMF’s call to action is particularly important, because the Fund was a champion of fiscal retrenchment during the 2008-09 global financial crisis, despite the obvious need for stimulus. Its earlier macroeconomic model, like those of most other economists and policymakers at that time, was based on the flawed theory that market economies have a natural tendency to reach full employment. This ignored the truth, most persuasively articulated by John Maynard Keynes, that in the absence of government stimulus, economies can remain naturally stuck in recession for a long time.

The Bank of England, too, has changed its tune. The BOE is about to inject an additional £150 billion ($198 billion) into the UK economy, in addition to the more than £200 billion it already has pumped out in 2020, and now realizes that it cannot do all the heavy lifting. Businesses will not invest, no matter how low the cost of capital, until they see a market. That is why the BOE has now joined the US Federal Reserve and the European Central Bank in calling for fiscal stimulus.

Before COVID-19, monetary policy seemed to be the only game in town. Now, if we are to avoid mass unemployment and the consequent loss of demand in the economy, job creation must become the overriding priority after the lockdown.

To its credit, the UK government brought forward £8 billion in infrastructure spending this past summer. But that is a mere fraction of what is needed. The government is now frontloading its £40 billion, five-year investment plan into the next two and a half years, and giving priority to big environmental projects and social housing. Retrofitting homes and local amenities could quickly create many jobs, with immediate multiplier effects.

Regional and local job and training schemes are essential to the longer-term task of reallocating work and skills toward the labor market of the future. The lesson of the UK’s 1998 New Deal for Young People and the 2009 Future Jobs Fund is that such programs must offer not only training and work experience but also assistance with job searches and incentives for employers to hire people on a permanent basis.

We estimate that one million young Britons under the age of 25 are neither working nor in training or education. But the government’s Kickstart job-creation scheme, which was launched belatedly earlier this month, has offered job placements to young people only for six-month periods.

The government expected that Kickstart would secure placements for 300,000 young people, but perhaps only around 100,000 will be enrolled in the scheme by the end of 2020. Ministers assumed that 5% of UK employers would take on young people, but outside of the retail and logistics sectors, thousands of firms are instead planning redundancies and will almost certainly not offer employment on anything like the hoped-for scale in the coming months.

If we are to assist the other 900,000 or so under-25s in need of help and create the estimated 1.5 million youth placements that will be required over the next year, the public sector will have to become the employer of last resort. So, rather than passively responding to a rise in unemployment, fiscal policy should aim to replace Karl Marx’s “reserve army of the unemployed” with a buffer stock of state-supported jobs and training schemes that expands or contracts with the business cycle.

What we need above all from UK policymakers is an updated full-employment commitment in the spirit of Keynes and US President Franklin D. Roosevelt. An essential condition for this is the coordination of monetary and fiscal policy. The BOE should retain its anti-inflation mandate, but policymakers should not use this to cut off necessary fiscal stimulus.

Earlier this month, the BOE echoed then-ECB President Mario Draghi’s famous 2012 pledge to save the euro by stating that it “stands ready to take whatever additional action is necessary” to boost the economy. To boost the credibility of such forward guidance, the government could give the BOE a dual mandate to fight both inflation and unemployment, while the bank could state that it will not tighten monetary policy until unemployment falls below its pre-crisis level of 4%.

A successful rollout of Pfizer’s new COVID-19 vaccine (and possibly others) could return life to a semblance of normality by next spring. But even if the health crisis recedes, the unemployment crisis will remain. UK policymakers must act now to avert a lost decade – if not a lost generation – of growth.

Robert Skidelsky Speech on the Internal Market Bill

I will confine my remarks to Part 5 of this Bill. I find myself swayed by two completely opposite accusations of bad faith. The government accuses EU negotiators of bad faith in seeking to erect ‘unreasonable’ customs barriers between Northern Ireland and the rest of the UK.

Opponents of the Bill say the bad faith is our own government’s. The Withdrawal Agreement set up a Joint Committee to resolve trade disputes; the government has chosen not to use it. So, as Ed Miliband argued in his powerful philippic in the other place, the government was proposing to breach international law for bogus reasons.

I cannot support the regret motion, and would like to explain why.

To my mind, international law is not the main issue. ‘Never before’, say Noble Lords, ‘has a British government sought to break international law.’ But never before has Britain faced the problem of extricating itself from as complex a political, economic, and legal structure as the EU. Law has to take account of political reality. As John Maynard Keynes said in answer to the legal fundamentalists of his day: ‘What I want from lawyers is to devise means by which it will be lawful for me to go on being sensible in unforeseen conditions.’ Noble Lords know very well that not every contingency can be foreseen.

So, My Lords, I ask you to judge the legislation before the House on three different grounds: sufficient reason, motive, and consequences.

On the first, I agree with the argument that sufficient reason has not been established for the override of Part 5 at the government’s discretion. But no noble Lord has mentioned Amendment 66, by which the government has agreed to obtain parliamentary approval before activating Part 5. I think that’s a reasonable compromise between those who think Part 5 is essential and those who think it unnecessary.

Second, I sympathise with the argument that the government signed the Agreement in bad faith in order to meet the PM’s political requirements. However, most noble Lords have ignored the argument that it was always going to require some bad faith, and some legal creativity, to make the Brexit decision consistent with the Good Friday Agreement. When Ed Miliband said ‘a competent government would never have entered into a binding agreement with provisions they could not live with’, I’m afraid he was setting the bar of competence much too high. Contrary to Baroness Humphreys, deliberate ambiguity is the hallmark of statecraft.

Finally, what will the consequences be? The legal fundamentalists say it will damage our ability to get an agreement because it will damage trust in the government’s word; the pragmatists believe it will force the EU negotiators to come up with a workable exit formula. Time will tell whether the government has calculated the balance of risks properly.

My own feeling, contrary to much noble rhetoric, is that we are still largely in the world of posturing. That is the way EU and many other international negotiations work: public posturing, followed by a last-minute outbreak of common sense.

I think that’s the way it will turn out, and I don’t want us to do anything which will weaken the hands of our own negotiators.

Policing Truth in the Trump Era

Social-media companies’ only incentive to tackle the problem of fake news is to minimize the bad press that disseminating it has generated for them. But unless and until telling the truth serves the bottom line, it is futile to expect them to change course.

LONDON – On October 6, US President Donald Trump posted a tweet claiming that the common flu sometimes kills “over 100,000” Americans in a year. “Are we going to close down our Country?” he asked. “No, we have learned to live with it, just like we are learning to live with Covid, in most populations far less lethal!!!”

Trump’s first claim is true: the flu killed over 100,000 Americans in 1918 and 1957. His second claim, that “we have learned to live with it”, is a matter of opinion, while his third, that COVID-19 is “far less lethal” than flu in most populations, is ambiguous (which populations, and where?).

There seemed nothing particularly unusual about the tweet: Trump’s fondness for the suggestio falsi is well known. But, soon after it was posted, Twitter hid the tweet behind a written warning, saying that it had violated the platform’s rules about “spreading misleading and potentially harmful information related to COVID-19.” Facebook went further, removing an identical post from its site entirely.

Such online controversies are becoming increasingly common. In 2018, the now defunct political consulting firm Cambridge Analytica was said to have willfully spread fake news on social media in order to persuade Americans to vote for Trump in the 2016 US presidential election. Since then, Facebook and Twitter have removed millions of fake accounts and “bots” that were propagating false stories. This weeding-out operation required the platforms themselves to use artificial-intelligence algorithms to find suspicious accounts.

Our reliance on firms that profit from “disinformation” to take the lead in policing the truth reflects the bind in which digital technology has landed us. Facebook and Twitter have no incentive to ensure that only “true” information appears on their sites. On the contrary, these companies make their money by harvesting users’ data and using it to sell individually targeted advertisements. The more time a user spends on Facebook and Twitter, and the more they “like,” click, and post, the more these platforms profit – regardless of the rising tide of misinformation and clickbait.

This rising tide is partly fueled by psychology. Researchers at the Massachusetts Institute of Technology found that, from 2006 to 2017, false news stories on Twitter were 70% more likely to be retweeted than true ones. The most plausible explanation is that false news has greater novelty value than the truth, and provokes stronger reactions – especially surprise and disgust. So, how can companies that gain users and revenue from false news be reliable guardians of true news?

In addition, opportunities to spread disinformation have increased. Social media have vastly amplified the audience for stories of all kinds, thus continuing a process that started with Johannes Gutenberg’s invention of the movable-type printing press in the fifteenth century. Just as Gutenberg’s innovation helped to wrest control of knowledge production from the Roman Catholic Church, social media have decentralized the way we receive and interpret information. The Internet’s great democratizing promise was that it would enable communication without top-down hierarchical strictures. But the result has been to equalize the credibility of information, regardless of its source.

But the problem is more fundamental: “What is truth?” as the jesting Pontius Pilate said to Jesus. At one time, truth was God’s word. Later, it was the findings of science. Nowadays, even science has become suspect. We have put our faith in evidence as the royal road to truth. But facts can easily be manipulated. This has led postmodernists to claim that all truth is relative; worse, that it is constructed by the powerful to maintain their power.

So, truth, like beauty, is in the eye of the beholder. This leaves plenty of latitude for each side to tell its own story, and not bother too much about its factual accuracy. More generally, these three factors – human psychology, technology-enabled amplification of the message, and postmodernist culture – are bound to expand the realm of credulity and conspiracy theory.

This is a serious problem, because it removes a common ground on which democratic debate and deliberation can take place. But I see no obvious answer. I have no faith in social-media companies’ willingness or ability to police their platforms. They know that “fake” information can have bad political consequences. But they also know that disseminating compelling stories, regardless of their truth or consequences, is highly profitable.

These companies’ only incentive to tackle the problem of fake news is to minimize the bad press it has generated for them. But unless and until telling the truth serves the bottom line, it is futile to expect them to change course. The best one can hope for is that they make visible efforts, however superficial, to remove misleading information or inferences from their sites. But performative acts of censorship like the removal of Trump’s tweet are window dressing that sends no larger signal. They serve only to irritate Trump’s supporters and soothe the troubled consciences of his liberal opponents.

The alternative – to leave the policing of opinion to state authorities – is equally unpalatable, because it would revive the untenable claim that there is a single source of truth, divine or secular, and that it should rule the Internet.

I have no solution to this dilemma. Perhaps the best approach would simply be to apply to social-media platforms the public-order principle that it is an offense to stir up racial hatred. Twitter, Facebook, and others would then be legally obliged to remove hate material. Any decision on their part would need to be testable in court.

I don’t know how effective such a move would be. But it would surely be better than continuing the sterile and interminable debate about what constitutes “fake” news.

International Law and Political Necessity

The UK government’s proposed “breach” of its Withdrawal Agreement with the European Union is purely a negotiating ploy. Critics of Prime Minister Boris Johnson’s tactics must argue their case on pragmatic rather than legal grounds.

LONDON – Whenever the great and the good unite in approval or condemnation of something, my impulse is to break ranks. So, I find it hard to join the chorus of moral indignation at the UK government’s recent decision to “break international law” by amending its Withdrawal Agreement (WA) with the European Union.

The “breach” of the WA is a calculated bluff based on the government’s belief that it can honor the result of the 2016 Brexit referendum only by deploying considerable chicanery. The main problem is reconciling the WA with the 1998 Good Friday Agreement, which brought peace to Northern Ireland and committed the UK government to maintaining an open border between Northern Ireland and the Republic of Ireland.

Prime Minister Boris Johnson negotiated and signed the WA, and must have been aware of the implicit risk of Northern Ireland remaining subject to EU customs regulations and most single-market rules. But in his determination to “get Brexit done,” Johnson ignored this little local difficulty, rushed the agreement through Parliament, and won the December 2019 general election. He now must backtrack furiously to preserve the UK’s economic and political unity, all the while blaming the EU for having to do so.

That Johnson may have been the prime author of this legal mess does not alter the fact that the UK government pledged to honor the popular mandate to leave the EU, and had to find a political mechanism to make this happen. The Internal Market Bill now before Parliament is both that mechanism and Johnson’s latest gambit to complete Brexit.

The bill gives the government the power, with Parliament’s consent, to change or ignore elements of the WA’s Northern Ireland Protocol, which ministers fear might result in “new trade barriers […] between Great Britain and Northern Ireland.”

The government has admitted that the bill breaches international law, but claims that its provisions to disallow elements of the protocol should “not be regarded as unlawful.” This is moot, and may still be tested in the courts. But it is the breaking of “international law” that has chiefly aroused the critics’ moral indignation.

In an op-ed in The Times, former UK attorney general Geoffrey Cox argued that it was “axiomatic” that the government must keep its word to other countries (my italics), “even if the consequences are unpalatable.” Failure to do so, Cox wrote, would diminish the UK’s “faith, honor, and credit.” Signing the WA with the EU obliged the government to accept “all the ordinary and foreseeable consequences of [its] implementation.”

But it is not “axiomatic” that a government must keep its word to other nation-states, even when this is codified in treaties. Doing so is desirable, but states frequently do not, for some obvious reasons.

First, no one can accurately foresee the full consequences of their actions. The erection of customs barriers in the Irish Sea is not an “inescapable implication” of signing the WA, as Cox now claims it is, because the agreement presupposed further negotiations on this point.

Second, Cox’s pronouncement implies that a government’s word to other governments is worth more than its word to its own people. But former Prime Minister David Cameron’s government, as well as the leaders of the main opposition parties, promised to respect the result of the Brexit referendum.

Third, Cox and others have argued that rather than breaking international law, the government should trigger the WA’s dispute-resolution mechanism to challenge the agreement’s disagreeable consequences as and when they occur. But having to suffer damage before doing anything about it is an odd doctrine.

Finally, Cox seems to treat international law as being on a par with domestic law, when in fact it is inherently less binding. This is because international law is less legitimate; there is no world government entitled to issue and enforce legislation.

International law is mainly a set of international treaty “obligations” between sovereign states. Breaking one is certainly a grave matter: it rightly carries a penalty in the form of lost reputation, and the United Kingdom may now end up with a less favorable trade agreement with the EU. Whether the UK should have risked its reputation in this particular case is not the issue. Now that it has, the case must be argued on the grounds of political necessity, not on the principle of legal obligation.

Governments and policymakers often violate or evade international law via both planned and improvised escape routes. This is because treaty instruments are necessarily static, whereas conditions change. It usually makes more sense to allow exceptional derogations than to unravel a web of treaties.

For example, many governments have explicitly or implicitly repudiated national debts, the best-known example being the Bolsheviks’ repudiation in 1918 of Czarist Russia’s debts, owed mainly to French bondholders. More often, debtors “compound” with their creditors to render their debt wholly or partially fictitious (as Germany did with its reparation obligations in the 1920s).

Similarly, Article 123 of the Treaty on the Functioning of the European Union forbids the European Central Bank from purchasing its member governments’ debt instruments. But former ECB President Mario Draghi found a way around this to start quantitative easing in 2015.

I am much more sympathetic to the argument that Johnson signed the WA in bad faith, knowing that he would most likely try to override the Northern Ireland Protocol. What critics don’t seem to understand is that extricating the UK from the EU was always going to require a lot of legal skulduggery.

The legal mess was a consequence of the politics of withdrawal, and specifically the tension between Brexit and the Good Friday Agreement’s requirement of an open border between Northern Ireland and the Republic of Ireland (an EU member state). Prime Minister Theresa May tripped over this rock, while Johnson’s government shoved the problem into the post-Brexit transition period that ends on December 31, 2020.

With the deadline for concluding a UK-EU trade deal drawing closer, Johnson hopes that the Internal Market Bill will put pressure on the EU to devise a formula that ensures a customs-free border in the Irish Sea. It is a negotiating ploy, pure and simple.

Whether it is a good negotiating tactic is arguable. But critics must make their case in the context of the negotiating process as a whole, and without resorting to legal fetishism. That is why lawyers should never run a country.

In his closing statement at the Bretton Woods conference in 1944, John Maynard Keynes described the ideal lawyer: “I want him to tell me how to do what I think sensible, and, above all, to devise means by which it will be lawful for me to go on being sensible in unforeseen conditions some years hence.” We will soon know whether Johnson’s bluff meets this sensible standard.