
“AI is invisible – that’s part of the problem,” says Wendy Hall

The leading computer scientist and acting chair of the AI Council on challenging perceptions around artificial intelligence.

By Sarah Dawood

Despite its many mundane applications, age-old stereotypes about artificial intelligence persist. Decades’ worth of films and literature have imbued our subconscious with the spectre of frightening autonomous robots. While these predictions seem unlikely to come true, the rapid advance of AI technology is certainly worthy of some anxiety. This week, a Google engineer was suspended after he publicly claimed that a computer chatbot he was testing was thinking and feeling like a human child.

Indeed, developing intelligent computer systems has ethical implications. Concerns centre not only on autonomy, but also on the displacement of workers, data privacy and AI perpetuating unconscious human bias. The decision-making of both AI technology and the people who run and develop it needs oversight.

But the many ways we already use machine learning are often overlooked – from finding alternative routes to avoid traffic on Google Maps to receiving faster customer service via chatbots.

There is a significant perception problem surrounding AI, says Wendy Hall, acting chair of the UK’s AI Council. “In the media, AI is often [depicted as] a ‘terminator’-type robot that’s male and aggressive,” she tells Spotlight. “But a lot of AI is digital. You don’t see it, it just works, and that’s part of the problem – it’s invisible.”

The AI Council was set up in 2019 as an independent body tasked with advising government. Its members span industry, the public sector and academia. Alongside boosting public confidence in and understanding of AI, its other main focuses are training more people in AI skills, improving diversity in the industry, and helping the government develop fair and ethical regulation.

Its AI Roadmap, published in January last year, laid out 16 core recommendations for government around these priorities. This was used to inform the government’s long-awaited National AI Strategy, which sets out a ten-year plan to make the UK a “global AI superpower”. This strategy is implemented by the Office for AI – a joint government unit sitting across the departments for Digital, Culture, Media and Sport (DCMS) and Business, Energy and Industrial Strategy (BEIS).

Hall is one of the UK’s leading AI experts, having worked at the forefront of the sector for nearly 40 years. The AI Council was born out of an independent review of AI that she co-led in 2017 alongside Jérôme Pesenti, now vice president of AI at Facebook-owner Meta.


Originally a maths graduate, Hall first became interested in programming in the 1980s when she tried out a Commodore PET – one of the first-ever personal computers – and taught herself the early programming language Basic (Beginners’ All-purpose Symbolic Instruction Code).

She took her first job in the fledgling field of computer science research in 1984 at the University of Southampton. “That was my turning point,” she says. “I had no idea that it would lead me to where I am today.”

She was fascinated by what would later become “multimedia” – combining pictures, video, sound and text on computers – and by how information could be linked across a global network to help people search for things. She went on to co-create the Microcosm hypermedia system, a forerunner of the world wide web, with her team at Southampton.

What now seems pioneering was dismissed at the time, however. “I was told that what I was doing wasn’t computer science, and that there was no future for me in [the field] if I didn’t stop doing stupid things with pictures.

“But I could see that when computers and networks got bigger, faster and more powerful, they were going to change the world,” she says. “I didn’t really think that it would happen within my lifetime, that it would all be available in smartphones in our pockets.”

Between 1981 and 1986, when she took a part-time postgraduate conversion degree in computer science at City University London, she was one of only two women out of 80 students. “I remember, we both sat at the back, and we said to each other, ‘I’ll stay if you’ll stay’,” she says. Little has changed since then, she adds, which is why diversifying the workforce is now one of her main priorities.

Like most of the tech industry, the AI sector is overwhelmingly white and male. Research from the AI Now Institute in New York found that women make up 15 per cent of AI researchers at Facebook and 10 per cent at Google, while less than 5 per cent of the staff at Facebook, Google and Microsoft in the US are black. In the UK, less than a fifth – 16 per cent – of computer science graduates are women.

This has implications not only for individuals but for the wider public. “If it’s not diverse, it’s not ethical,” Hall says. “If you don’t have a diverse workforce, there’s more chance that what you’re producing is biased and won’t work for a large section of the population.”

A number of government projects, backed by the AI Council, aim to tackle this inequality. In 2019, the government funded a £13.5m upskilling programme to help people from non-science backgrounds access AI and data science conversion courses, including 1,000 scholarships for under-represented groups such as women, ethnic minorities and disabled people. A further £23m for 2,000 more scholarships was announced earlier this year. “This is what I’m most excited about,” says Hall. “That [first funding round] was a game changer in increasing the diversity of the AI pipeline.”

Attracting enough people to the sector in general is an issue in itself. The Alan Turing Institute’s AI ecosystem survey found a significant AI skills gap in the UK – four-fifths of businesses reported struggling to recruit and retain talent. There is a need for both developers and non-technical people, says Hall, who likens creating an AI system to “building a new city”.

“The first thing you need are the people who can build houses, fit the drainpipes and put electricity in them,” she says. “You need the basic infrastructure.” In other words, you need the machine learning programmers. Then you need to “put people in the new city” – non-coders who are nonetheless integral to the system’s functioning, such as auditors, ethicists and algorithm bias detectors.

Building the talent pool starts in school, she says – digital literacy programmes and information about science, technology, engineering and maths (STEM) A-levels are crucial. The AI Council recommended the establishment of an AI “online academy” in its AI Roadmap.

But some critics argue that the council has not yet done enough to boost the diversity and skills pipeline. One policy expert, who wished to remain anonymous, tells Spotlight that the council has been “largely invisible” since laying out its AI Roadmap report in January 2021. The council’s gov.uk page indicates that the group last met in April.*

It should be pushing for a more coherent diversity plan, says the source: “We need a national strategy for diversity in STEM. This should sit alongside the AI strategy and the council should help to deliver it.”

Others say that the council’s membership list should be more inclusive, rather than centring on Russell Group universities and tech giants such as DeepMind. Kamal Bechkoum, head of the school of computing and engineering at the University of Gloucestershire, says there should be “a greater representation of SMEs [small to medium-sized enterprises] and the education sector”. The council’s membership is appointed by government and is due to be reshuffled soon.

Lee Howells, head of AI at PA Consulting, adds that more work should be done to address AI’s image problem, and that the council is “well positioned” to become the authoritative voice for that.

“[The fact] that much of everyday life is positively impacted by AI through things such as search engine results, medical discovery and traffic routing needs to be simply and consistently communicated to help educate society and allay fears,” he says.

He adds that the council has a challenge on its hands in establishing a “clear role” for itself amid a complex landscape of government AI bodies and independent organisations, including the Alan Turing Institute, the Office for AI and the Centre for Data Ethics and Innovation (another government body, also within DCMS, that is supported by an advisory board and leads on enabling ethical data use in the UK).

Some have questioned what that role can be, given it is neither a legislator nor a regulator with powers of scrutiny. But, Hall says, “the council never was and won’t be a regulatory body”.

“We don’t know how we regulate AI yet,” she says. “It’s a work in progress globally. The council could eventually evolve to advise the regulatory body.”

As part of the National AI Strategy, the government is currently working on the UK’s approach to AI regulation. “We don’t want to over-regulate, or regulate too soon,” Hall says. “It’s a very nascent industry. Whilst there are huge issues around bias and ethics, we don’t want to impose regulation that stops innovation.”

Despite AI’s image problem, Hall believes there has been a change in attitude since she co-authored her independent AI review five years ago. The economic turbulence of the pandemic and the war in Ukraine have led to mass resignations and to people working more to cope with the rising cost of living. Rather than fears of robots taking people’s jobs, the question now is whether robots can be used to do the boring tasks so that people can do more interesting work.

“In 2017, the rhetoric was all about how AI was going to displace jobs,” she says. “It seems madness today when we’re so short of staff. Wouldn’t you love some AI to fill those gaps now?”

For Hall, AI is ever-changing: there are no final answers, and it needs to stay at the top of the agenda. “Looking at AI is rather like looking at the climate,” she says. “It’s going to be here forever. It’s not something that we’re ever going to finish managing. It has to remain a priority.”

Wendy Hall is acting chair of the AI Council, covering for Tabitha Goldstaub while she is on maternity leave. Hall spoke in a personal capacity, not on behalf of the AI Council or government.

*This article was updated on 21 June 2022 to clarify that the AI Council last met in April 2022, rather than December 2021 as was previously stated.
