It's been a busy year in AI and robotics, with ongoing debates around ethics and the societal impact of automation sitting alongside rapid developments in the technology itself. Here are my top stories from the year.
Why? The ethics of artificial intelligence are never far from the news, with bias in historical data and algorithm design risking the automation of societal prejudices. In December, Neil Raden cast his eye over proposals to counter this from the Stanford Human-Centered Artificial Intelligence conference – and found them wanting. Preserving jobs is not enough, he said:
Why limit it to workers? Are we not concerned with the welfare of the young, the old, the infirm, oppressed minorities? Wouldn't a scheme like this, even if it didn't displace human workers (presumably those making $7.25/hour), merely reinforce the status quo? I'd put this in a class of proposals that are aspirational, but without any foundation for success.
Why? The UK is getting to grips with these issues too. In December, Derek du Preez looked at a new standard for algorithmic transparency, led by the Cabinet Office in the wake of the National AI and National Data strategies this year. Welcoming the move, he wrote:
Transparency when deploying algorithms is essential. I welcome the government's efforts to lead in this area. However, what I would like more detail on is how this is going to be governed and where information will be held on the projects being deployed. Transparency is essential, but so is accountability.
Why? For one AI innovator, GainX CEO and anthropologist Angelique Mohring, diversity is critical to removing the risk of harm from biased systems. In September, she told me:
It is a concern of mine that AI is built upon, and then built upon again, without checkpoints in place. In anthropology we’ve moved on, over the centuries, from where bias was built into early anthropological studies. A hundred years later we're still unfolding some of those biases. Introduce women, for example, into anthropology, and suddenly assumptions that were made and built upon for years [by men] have been undone and turned on their head.
With AI, my concern is what's the damage that's going to be done today, because it's moving at such a pace that policy is having a very difficult time keeping up with it.
Why? For the UK, the big story in this field was publication of the AI Strategy in September – after years in which Whitehall gave the impression of already having one. The 35-page document promised a 10-year plan to turn the UK into an “AI superpower”, but largely covered the next 12 months. This suggested “real strategic urgency”, I wrote, adding:
Can the UK really catch up with the US and China? In research, ideas, and innovative applications, no doubt. But the troubling facts are these: first, the UK has never backed its rhetorical superlatives with much more than piecemeal central investments.
Why? This point was rammed home in the summer by Professor Dame Wendy Hall, co-author of the AI Review that kickstarted much of this process. In July, Hall said of the incoming Strategy:
All the things that we managed to get the government to pull out the bag […] are a drop in the ocean compared to what we actually need. It's a drop in the ocean of what’s needed to keep ourselves at the forefront of the AI agenda in the world. […] The job is not done. And we have to increase funding in this area.
Why? The prevailing political mood runs counter to many of the UK’s bold ambitions in AI, said a report that pulled no punches. But it’s not all doom and gloom, I wrote in November.
The Terminator cliché of technology turning on its human masters rather ignores the AI systems that speed the development of vaccines, help in the early detection of diseases and pandemics, and minimize heat, waste, and carbon emissions in power grids and data centers – all things that seek to avert apocalypse. By speeding up analysis and spotting hidden patterns in data, AI might be the thing that saves the planet.
Why? Strategy after strategy arrived this year, falling over themselves like Krug-crazed guests at a Downing Street Christmas party. The Innovation Strategy, announced on 22 July as another pillar of the Plan for Growth, was a welcome statement of the UK’s aim to be a world leader in science and tech. While supporting its ambitions, I wrote:
[The Strategy] does little to fix the UK’s core problems, which include a longstanding lack of skills and persistently underpowered central investment. The world’s top five markets for industrial robots remain China, Japan, the US, South Korea (the world’s most automated nation), and Germany, with two out of every three new installations being in Asia – which is already the world’s manufacturing hub. To compete with that, the UK urgently needs to stop navel-gazing and modernize.
Why? The most recent data from the International Federation of Robotics (IFR.org) shows that industrial robot installations in the UK remain stubbornly low compared to its peers. Just 2,500 industrial robots were installed in the UK in 2020 (0.5% of an estimated world total of 520,900). To underline the point, the World Robot Conference took place in China in September, prompting diginomica to cast its eye over the market there:
According to China’s National Bureau of Statistics, industrial robot sales in China increased by over 194 percent between 2016 and 2020, from 72,000 units to 212,000. Meanwhile, its service robotics companies earned roughly $8.2 billion in 2020 alone, a year-on-year uptick of 41%.
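The figures in the two passages above are easy to sanity-check. A quick back-of-the-envelope calculation, using only the numbers as quoted (rounding is mine):

```python
# Sanity-check the robot figures quoted above.

# UK share of world installations in 2020 (IFR figures as quoted).
uk_installs = 2_500
world_installs = 520_900
uk_share = uk_installs / world_installs * 100  # ~0.48%, i.e. roughly 0.5%

# Growth in China's industrial robot sales, 2016 to 2020
# (National Bureau of Statistics figures as quoted).
china_2016 = 72_000
china_2020 = 212_000
growth = (china_2020 - china_2016) / china_2016 * 100  # ~194%

print(f"UK share of world installs: {uk_share:.1f}%")
print(f"China sales growth 2016-2020: {growth:.0f}%")
```

Both results line up with the quoted figures: the UK's 2,500 installations are about half a percent of the world total, and China's jump from 72,000 to 212,000 units is an increase of just over 194 percent.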
Why? By 2035, the uptake of robotics and autonomous systems (RAS) could mean a boost of £6.4 billion ($8.7 billion) in value added to the UK economy. That was according to the government’s own estimates in a 110-page report from the Department for Business, Energy and Industrial Strategy (BEIS) in October. This prompted me to ask the obvious question:
Why is the UK only poised to gain a paltry 0.19% to 0.5% of the predicted added global value from robotics ($8.7 billion expressed as a percentage of McKinsey’s $4.5 trillion and $1.7 trillion estimates)? And in a 14-year rather than four-year timescale? That’s hardly the good news the government thinks it is, when the UK is, by 2021 nominal GDP estimates, still the world’s fifth largest economy. For answers, read my report.
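For readers who want to check that percentage range, it follows directly from the figures quoted above: $8.7 billion set against McKinsey's two global estimates.

```python
# The quoted 0.19%-0.5% range, derived from the figures above.
uk_value_added = 8.7e9   # $8.7 billion (BEIS estimate for the UK, by 2035)
mckinsey_high = 4.5e12   # McKinsey's upper global estimate: $4.5 trillion
mckinsey_low = 1.7e12    # McKinsey's lower global estimate: $1.7 trillion

share_of_high = uk_value_added / mckinsey_high * 100  # ~0.19%
share_of_low = uk_value_added / mckinsey_low * 100    # ~0.51%

print(f"UK share: {share_of_high:.2f}% to {share_of_low:.2f}%")
```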
Why? Because the UK robotics community itself remains as upbeat, innovative, and inspiring as ever, hosting a summer of robotics events: a three-day Robotics and AI Showcase (RAI21) in May, and the UK Festival of Robotics in June.
Why? One of the biggest challenges in Industry 4.0’s (ahem) collision of technologies, such as robotics, AI, drones, and driverless vehicles, is the lack of common sense among some innovators. In March, I exposed the fallacy of the futurist view that autonomy is a cure-all for our global infestation of lethal tin boxes – aka cars:
The reality is that AVs will massively increase the infestation of tin boxes, it’s just that humans won’t be driving them. […] There would indeed be fewer cars in existence… but a huge increase in traffic: by nearly fifty percent. This counter-intuitive outcome is because of the inconvenient truth about private car ownership: the average driver-owned tin box is stationary in driveways and car parks nearly all the time [90%]. But on-demand, autonomous vehicles would be in constant use.
Why? My COVID-era grumpiness extended to one application of a great technology, aerial robotics. Delivery drones have been proposed as the ideal solution for sustainable, green, last-mile urban deliveries. But, as I suggested in September, the reality of mass uptake of drones to deliver small packages in cities would create new types of environmental nightmare.
Just one example:
If only one percent of the packages delivered by just three firms in the US arrived by drone, there would be three times more drones in America’s skies daily than there are passenger planes flying in the entire world. And that would still leave 99 percent of those companies’ packets being delivered by road, barely denting the problems of urban traffic congestion, air pollution, and greenhouse gases.
Why? To prevent robots making human life a misery, perhaps what the world needs is an update to Isaac Asimov’s Three Laws of Robotics. Cometh the hour, cometh the man: in July, Oded Karev, Head of RPA at automation software company NICE (a reassuring name), published his ‘Robo-Ethical Framework’: five new ‘laws’ for the software robot age.
Their aim is not so much to prevent bots from harming humans as to stop humans from harming others with robots, or automating corporate crime.
Why? The pandemic has created a new machine age, according to various 2021 research papers, but that transformation may be short-lived. So said Jefferson Flanders, CEO of ‘edtech’ provider MindEdge Learning, in February.
Some restaurants, retailers, banks, and other businesses that have ploughed money into automation and self-service may capitalise on the pendulum swinging back towards human contact, a year after lockdowns stopped the clock on normal operations.
“We’re facing a low-tech baby boom,” he added. Something to think about as we batten down the hatches for the festive period and new year. Chin up, everyone. Merry Christmas and a prosperous, healthy new year.