20 March 2019

The Boeing plane crashes show the power of capitalism to cause death by automation

By Paul Mason

The more we learn about the catastrophic failure of an automated system on two Boeing planes, the more the story looks like a microcosm of late capitalism: a case study in our collective inability to imagine failure and quantify risk.

Let’s start with what we know. When a Boeing 737 Max 8 crashed minutes after taking off from Jakarta last October, initial investigations showed that something had probably gone wrong with an automated system, known as MCAS, which pushes the plane’s nose down to prevent a stall if it climbs too steeply.

This month, when the same kind of aircraft crashed shortly after taking off from Addis Ababa, investigators quickly found the same problem: the tail mechanism controlled by the automated system was set to force the plane’s nose downwards, even as its pilots were trying to make it climb.

Satellite data shows both planes making rollercoaster movements as their crews grappled with the MCAS system, which experts believe was acting on faulty sensor data and overriding the pilots’ commands. All Boeing’s Max 8s are currently grounded and the company’s share price has slumped.

But this was no mere engineering flaw. Documents unearthed by the Seattle Times this week reveal failures in corporate culture and in regulation that pose big questions about safety standards for automated systems far beyond the aerospace industry, and, indeed, existential questions about capitalism in the age of automation.

According to the Seattle Times, Boeing’s safety analysis of the MCAS system was flawed in three fundamental ways. First, it drastically underestimated the system’s power. When the planes first entered service, the automated system could move the horizontal stabiliser on the tail four times further than the safety analysis stated. Imagine your car’s power steering turning the wheels four times further than the manual says it can, and you can begin to understand why that’s a bad idea.

Secondly, Boeing failed to understand that the system could keep resetting itself, triggering again each time the pilots countered it. Imagine your steering wheel trying to jerk your car off the road, then overriding every attempt you make to steer it straight.

Finally, Boeing underestimated the consequences of the system failing. At worst, its analysis said, a failure could cause death or injury to a few passengers, but not the catastrophic loss of the plane. As a result, Boeing allowed the system to rely on a single angle-of-attack sensor, creating a single point of failure, rather than requiring it to cross-check two (see the sketch below).
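To make the engineering point concrete, here is a minimal sketch of the difference between the two designs. It is illustrative only: the function names, thresholds and sensor readings are invented, and this is not Boeing’s code or architecture.

```python
# Illustrative sketch only: invented names, thresholds and readings,
# not Boeing's actual code or architecture.

AOA_TRIGGER_DEG = 15.0      # hypothetical angle-of-attack trigger
MAX_DISAGREEMENT_DEG = 5.5  # hypothetical cross-check tolerance

def nose_down_single(aoa_left: float) -> bool:
    """One sensor decides: a single faulty vane can command nose-down trim."""
    return aoa_left > AOA_TRIGGER_DEG

def nose_down_redundant(aoa_left: float, aoa_right: float) -> bool:
    """Cross-check two sensors; if they disagree, stand down."""
    if abs(aoa_left - aoa_right) > MAX_DISAGREEMENT_DEG:
        return False  # conflicting data: disable the automation instead of acting
    return min(aoa_left, aoa_right) > AOA_TRIGGER_DEG

# A faulty left vane reads an impossible nose-up angle; the right vane is normal.
faulty_left, healthy_right = 40.0, 5.0
assert nose_down_single(faulty_left) is True                     # pushes the nose down
assert nose_down_redundant(faulty_left, healthy_right) is False  # refuses to act
```

The redundant version fails safe: when the two readings conflict, it hands control back to the pilots rather than acting on bad data.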

Why did they make this catalogue of errors? According to the insiders quoted by the Seattle Times, the answer is commercial pressure.

Boeing’s new version of the 737 had to be certified as safe by the US Federal Aviation Administration (FAA). Because the FAA does not have the resources to do all the certification work itself, it routinely divides up the process, with its own experts assessing the most critical systems and Boeing’s engineers doing the rest. But as the company was racing rival Airbus to produce its upgraded jet, the FAA’s managers reportedly began to pressure their own experts to let Boeing do more of the job. “There was constant pressure to re-evaluate our initial decisions,” one former engineer told the Seattle Times.

So now we have two kinds of failure: an engineering failure and a regulatory failure. On top of those we have to add a management failure. Because it had decided MCAS would be triggered only in extreme situations, Boeing concluded pilots didn’t need to know about the new feature, or to undergo training beyond an hour or two on an iPad. This, in turn, allowed Boeing to pitch its new plane as a low-cost alternative to the ones already in service, with minimal transition costs.

If you stand back from the detail and consider the general pattern, there are striking similarities between the emerging picture of Boeing’s Max 8 fiasco, the failure of Lehman Brothers in 2008, and the failed levees that killed 1,833 people during Hurricane Katrina in 2005.

At Lehman Brothers there was clear malfeasance, as the bank moved unsustainable liabilities on and off its balance sheet every three months to avoid reporting them. But the underlying problem was the misunderstanding of risk. Complex financial instruments were deemed safe by self-regulating experts, in this case the credit rating agencies. The regulator was weak, and effectively captured by the industry it was supposed to be regulating.

With Katrina, the facts were even simpler. Engineers knew the city’s system of drains and levees could withstand a Category 3 hurricane but not a bigger one. Political decision-makers gambled that a bigger one would never come. In addition, poor maintenance had left the city’s defences, in the words of one army engineer, “a system in name only”.

What’s at work here is something more than the raw pursuit of profit (though it is always there). Neoliberal capitalism seems to create, spontaneously, a kind of performative behaviour between safety-critical industries and their regulators, whereby both sides go through the motions, tick all the required boxes, but miss – or ignore – vital clues that things are about to go haywire.

Neoliberal ideology treats all agents in a system – from the regulator to individual banks, companies and building contractors – as if they are pursuing enlightened self-interest within a market relationship. In the 2000s it became common to hear economists claim that “the best regulator of a deal is the participants themselves”. This was the ideology that, by the admission of one of its main proponents, Alan Greenspan, blew up spectacularly.

In the ensuing years, it has blown up again and again. Facebook lost and manipulated our data – yet we, the customers, had no power even to know what was happening, let alone to insist on a change in the terms of the deal. The airlines that bought the Max 8 were likewise initially clueless as to the increased risk. Meanwhile, millions of drivers bought Volkswagen vehicles that emitted four times the permitted level of nitrogen oxides, thanks to a software tweak nobody knew about.

While engineers are trained to test systems and materials to destruction, the recurrent pattern in catastrophic failures of regulation and risk management is the refusal of politicians and regulators to test human systems to destruction. Instead, the illusion is fostered that, because nothing has gone wrong so far, the sheer complexity and interconnectedness of the system creates a new, unbreakable safety net.

So far this century, this tendency to assume the best, and to “perform” risk management procedures without actually doing them, has produced one catastrophic breakdown after another.

But in technological terms, the century of automation has barely begun. Within our lifetimes the roads will likely fill with self-driving cars. Employee recruitment and the sentencing of criminals are already shaped by algorithms, some of which turn out to replicate bias against black people – and which were deployed without anyone being required to consider that risk. Both Israel and South Korea have developed robotic sentries that can kill an “intruder” at 2,000 metres and, if permitted, do so without reference to a human being.

At present, the backlash against automation consists of the justified fear that it will eradicate many jobs, or workplace tasks, and that it will subject the behaviour of human beings to machine control. Boeing’s Max 8 catastrophe, once it is understood, will add the next layer to the story. It will probably go down as the first globally recognised case study of death by automation.

Yet automation, combined with machine learning that needs no human guidance, has the potential to transform life on the planet radically for the better. It will reduce the hours of work needed to produce the necessities of life for the human race and – in a way no previous technological revolution ever did – deliver on the promise that increased productivity brings increased leisure time.

But it’s a transition that has to be managed. Right now, even the most timid and obvious transition measures are the subject of huge dispute and fear. Universal basic income, which purposefully decouples work from wages, provokes opposition from conservative politicians and trade union leaders alike.

In the coming century, we will see the rise of complex, all-embracing automated systems – integrated city transport networks, for example, or automated diagnosis and treatment in healthcare systems that draw on the data of entire populations. Human decision-makers will move from being “in the loop” to merely “on the loop” – just as the pilots of the Lion Air and Ethiopian Airlines Max 8s were.

Though such systems will fail rarely, their failures will be spectacular. If a city transport system, an energy grid or an entire health service were ever to shut the controlling humans out of the decision loop, as MCAS did on the doomed planes, consent for large-scale automation projects would be lost.

So to minimise the risk, and contain the backlash, we have to address the social problems, not just the technical ones, that lie at the root of all such catastrophes. The market does not self-regulate. The self-interest of deal participants is not enough. Formal transparency always hides systemic opacity. And big means dangerous: Boeing, Lehman, Facebook and Volkswagen leveraged their huge social power as monopolies to neutralise the pressure of regulators.

Above all, nobody is prepared to test the socio-economic system to failure because it will always fail. At a deeper level, few are prepared to link the failure of capitalist risk management systems to capitalism itself, because nobody is prepared to imagine its complete failure.

The Brazilian philosopher Roberto Unger told a London audience this week: “Imagination does the work of crisis before the crisis hits.” Modern capitalism has deprived an entire generation – engineers, entrepreneurs and politicians – of the ability to imagine a crisis worse than the one they have already created.
