Artificial intelligence (AI) is the next disruptive revolution to hit the planet. It will match, if not surpass, the impact of the industrial, agricultural and, more recently, the dot-com revolutions. But its ultimate success hinges not on the technology that underpins it but on people’s relationship with it. The development of skills, trust and diversity is vital to making the UK a leader in AI.
AI is already all around us and outperforming us in many areas, from gaming to language to healthcare, where it is used to predict and detect cancer and where AI-developed drugs have already entered clinical trials. At BT, it runs through our business and is present in many of our day-to-day operations. We rely on AI to manage network traffic for greater efficiency; our cyber security experts use it to predict and identify threats; and we are using AI tools to constantly improve our customer experience.
However, three factors are set to rapidly accelerate the power of AI and make it the most disruptive trend in technology over the next five years: the explosion of data and our ability to store it more effectively; dramatic increases in computational power; and advances in machine and deep learning.
Research from Microsoft shows that organisations that are already using AI at scale perform, on average, 11.5 per cent better than peers that have not embraced the technology. If these are the results being achieved today, the expected explosion in AI over the next five years becomes even more exciting.
The choice of where, how and to what extent AI is used, however, remains a matter for debate. To date, this debate has been polarised between utopian and dystopian views, and both often miss the point. AI’s potential to unlock transformational business and societal value (including tackling climate change) depends on building and maintaining trust among the public and in boardrooms.
Government, academia and the private sector all have a role to play here. Collaboration around education and investment in skills development at all levels will help the public and decision makers discern fact from fiction and build trust. Recent developments in encryption and machine learning are driving huge advances in protecting personal data, but there are two other core drivers of trust in AI: jobs and human bias.
The debate around jobs and AI remains binary, while the reality is more nuanced. The first thing to be clear on is that, yes, automation will make some jobs obsolete, particularly those built around repetitive and mechanistic tasks. In total, however, fewer than 10 per cent of jobs will be entirely automated. The flip side is that, because all jobs will be affected at the task level, automation will free up time for people to focus on the higher-value, more fulfilling and enjoyable elements of their work. Finally, just as with all previous technological revolutions (industrial, agricultural, dot-com), the growth of AI will fuel the creation of new roles globally, including roles outside core technology in the social sciences, ethics and other areas.
One of AI’s biggest risks, and a major contributor to the lack of trust, is human bias, both in how systems are coded and in the selection of data for training and testing. Notable examples include systems that produce gender-biased recruitment outcomes and algorithms that discriminate against certain demographics, notably black women.
Bias can present itself in AI in three areas: first, and often the biggest contributing factor, in the data; second, in the way algorithms are developed, managed and governed; and third, in the coders themselves. We would see far fewer examples of gender and ethnic bias in recruitment and elsewhere if teams were significantly more diverse. Diversity and inclusion in the industry are simply not good enough.
Diversity is crucial to ensuring we’re not training systems to propagate societal biases. That’s why BT is supporting research at Cambridge University that examines how AI may entrench or accentuate existing inequalities, who makes AI, and how the demographics of the workforce may affect AI technologies.
Besides diversity, there is also a skills gap in AI that we need to face. The AI Council recently advised the government to “double down” on efforts in this area. Its “roadmap” recommended that the government focus on sustained investment in skills, advocating a long-term programme of research fellowships, AI-relevant PhDs, industry-led master’s degrees and level 7 apprenticeships. It also strongly recommended that under-represented groups be given equal opportunity and be included in all programmes.
Seven in ten firms struggle to fill AI roles. The UK ranks fourth in the world for global talent, but there are not enough qualified professionals to meet demand. And the talent that does exist isn’t diverse enough: in the UK, just one in seven AI researchers are women. Businesses and the government must put more resources into developing skills and into ensuring the diversity of the AI creators of the future. Tackling these two issues together is the best way to achieve the right outcomes.
AI will fuel the next technological revolution, but exactly what that looks like depends on the success of businesses and other organisations in clearly incorporating ethics and human-centric AI into their strategies. If we get this wrong, regulators will step in to protect the public good, potentially entrenching fears and mistrust. But if we get it right and successfully build trust and understanding of AI’s economic and societal value, the opportunities are immense.
AI’s winning formula is people plus machines; it’s not people or machines. Now’s the moment to address how we leverage advanced digital and data-driven technologies to empower people, grow the economy and build a bright sustainable future.
Adrian Joseph OBE is chief data and AI officer at BT Group, and a member of the UK AI Council.