Giles Merritt recalls that Europe’s belated AI catch-up efforts must be seen in the context of 75 years of bungled micro-electronic breakthroughs.
The more AI (artificial intelligence) makes headlines, the clearer it is that Europe has failed disastrously to keep up in the global race for digital technologies.
Some bedraggled chickens are now coming home to roost in a barnyard that’s already crowded with lost opportunities in precisely those industries that will reshape the 21st century world.
This article is the first in a two-part look at the reasons for these failures, the economic and social implications, and Europe’s chances of catching up in a technological sector prone to dramatic leap-frogging.
If unpalatable lessons can be learned, could Europe still make up its lost ground?
Un peu d’histoire, as the Michelin Green Guides say. We tend to forget that Europe has a long history of scientific breakthroughs accompanied by disastrous industrial ventures.
Britain introduced the world’s first commercially available mainframe computer, the Ferranti Mk1, in 1951; it is usually known as the Manchester Ferranti after the university boffins who developed it.
They included the parents of Tim Berners-Lee, inventor of the World Wide Web.
The Ferranti computer was bulky and slow, but no more so than America’s 13-ton Univac, unveiled shortly afterwards to help with the US census.
Once harnessed to military requirements and NASA’s moonshot programmes, Univac’s sales soon overtook its UK competitor.
Back in the laboratories, the Europeans continued to outstrip the US with their inventiveness.
In 1965, Italy’s Olivetti launched the first programmable personal computer, the P101.
In the same year, General de Gaulle initiated France’s “Plan Calcul”, centred on the ill-fated Machines Bull company.
The French aimed to lead the world with this, but Bull was so starved of capital that Giscard d’Estaing eventually agreed to its sale across the Atlantic to General Electric.
IBM’s PCs didn’t begin to transform businesses everywhere until 1981, by which time the Americans had firmly established their lead in electronics.
Apple, Hewlett-Packard and Texas Instruments, to name but a few, had marketing skills that Europeans could only envy, backed by open-handed R&D spending.
Not only in computers did Europe throw away its early lead by neglecting to back projects with taxpayers’ money.
As early as 1953, Siemens was a pioneering developer of the ultrapure silicon needed for semiconductors.
By the 1980s it had built a state-of-the-art facility at Neuperlach, outside Munich, but microchips had already become a Japanese fiefdom, with the US not far behind and Taiwan and South Korea looming on the horizon.
In the 1990s, Siemens named its semiconductor business Infineon, explaining that this neologism represented “unlimited possibility, perseverance, innovation and reliability.”
What the EU’s leading engineering giant lacked, however, was deep enough pockets.
By 2006, all the shares in its Infineon subsidiary had been sold off through Wall Street’s Goldman Sachs in a financial operation akin to a fire sale.
Throughout these years, the European Commission had been warning about the consequences of competing national efforts within the EU.
Unlike the US, where military contracts had been essential to the growth of IBM and others, Europe’s national governments had been too stingy to offer adequate support for digital initiatives, as well as reluctant to join cooperative pan-European projects.
It wasn’t just in hardware – and to a slightly lesser extent software – that Europe was falling behind.
EU member states’ scientific research and education systems were proving stubbornly hard to streamline.
Brussels’ eurocrats had long been warning that the shortfall in specialist digital workers would hamstring companies’ ability to introduce productivity-boosting technologies.
They are now forecasting that by 2030 Europe’s shortage will have risen to eight million of these key workers.
Media coverage of AI’s coming revolution tends to focus on job losses and the possible misuse of surveillance technology and fake information.
In fact, Europe should be worrying as much if not more about likely missed opportunities.
EU countries’ record on introducing automation and innovative systems is poor, and the growing ICT labour shortage suggests it may soon be poorer still.
Over the first two decades of this century, only half a million or so European jobs have fallen victim to digital transformation, and those chiefly in large corporations.
This slow pace of innovation is thought to have been the chief reason that Europe has lost its lead on productivity growth to America.
From 1975 to 2000, Europe’s productivity improved by 2.7 per cent a year, far better than the US rate of 1.3 per cent.
But Americans’ embrace of computerisation has since turned the tables; those figures have almost exactly reversed, and of late productivity growth has been flatlining in much of Europe.
The implications of Europe’s digital laggardness are still unclear, although there seems little ground for optimism.
But it may be that AI will be so transformative that who owns it matters less than who uses it to greatest advantage.
The second part of this article, appearing on 28 November, will address the factors involved, focusing on the twin AI challenges of boosting productivity while adhering to tough transparency rules.
*The views expressed in this Frankly Speaking op-ed reflect those of the author and not of Friends of Europe.
*This article first appeared on the Friends of Europe website and is reproduced with kind permission.
*The views expressed by the author of this article, Giles Merritt, are not necessarily those of The Bulrushes