The Policy Ask with Kriti Sharma: “AI fear-mongering pains me”

The founder of AI For Good on technology’s capacity to solve social problems, building robots as a child, and digitising the justice system.

By Spotlight

Kriti Sharma is the founder of the social enterprise AI for Good and chief product officer for legal technology at Thomson Reuters. She started AI for Good in 2018 to help address humanitarian issues through artificial intelligence. Sharma has led the development of chatbots and AI-powered tools to support people experiencing domestic violence in South Africa, and to educate young people about sex and reproductive health in India. Prior to this she worked at the technology firm Sage Group and at Barclays.

How do you start your working day?

I like to be inspired by people using technology to solve social problems. On my commute (I am fortunate enough to walk to work!) I like to listen to an uplifting podcast about young innovators or read a story about people using technology in a positive way. It’s particularly important given that news around technology tends to focus on the risks, rather than the benefits.

What has been your career high?

Helping underserved communities by using my skills in technology. As chief product officer for legal technology at Thomson Reuters, this means leveraging technology to support legal systems that promote a fair and just society. In parallel, in founding AI for Good, it means leveraging AI to tackle some of the biggest global issues – for example, we have supported over a million consultations for people at risk of domestic violence or in need of trusted health information.

What has been the most challenging moment of your career?

The most challenging moment is right now. I see tremendous potential in using AI for social good. But it pains me to see the rhetoric around AI being skewed towards existential risks and fear-mongering. We have to get this right, focus on the real challenges we face today and move with a sense of urgency. I believe we can apply this technology in a way that doesn’t create further inequality and benefits underserved communities.


If you could give your younger self career advice, what would it be?

I grew up in Rajasthan building my own computers and robots, and was always curious about using my tech skills to address social needs. I then went on to study computer science and engineering. My advice would be to stay curious about other disciplines and collaborate with experts from all fields – linguistics, policy, law, art, design. I'd also tell my younger self to keep building those machines!

What policy or fund is the UK government getting right?

The UK is well placed in encouraging higher-risk, longer-term bets in innovation, and Innovate UK is an example of getting things right. It takes a holistic approach to innovation, linking academia with businesses of all sizes, civil society and other partners throughout the innovation life cycle. I am excited to see how the £31m Responsible AI fund will shape the development of AI to benefit people, communities and society.


I certainly think the UK government is digitally forward thinking. For example, the Covid-19 pandemic accelerated digitisation of the legal system, such as in the form of virtual courts and evidence management. The increased use of technology to reach the margins of society where the real problems exist has significantly moved the needle on fairer access to justice.

What upcoming UK policy or law are you most looking forward to?

While we may not have all the answers right now, we must expedite putting some fundamental AI guard rails in place, in the form of a globally aligned, cross-industry regulatory framework. This will help us to address key areas of concern around AI such as transparency, bias and accuracy, while also helping to unlock the possible benefits in a transparent and trusted way.

And what policy should the UK government scrap?

I’d like to see more emphasis on AI skills training to ensure the technology can be properly accessed and utilised. For example, generative AI will not replace highly trained lawyers, but in the very near future a lawyer using generative AI will certainly replace one who isn’t. I would argue we need to move past teaching everyone to code, towards teaching young people how to thrive with human skills and augment these with AI.

What piece of international government policy could the UK learn from?

The European Parliament’s landmark EU AI Act. This risk-based approach to regulation takes concrete steps to manage AI’s impact on society, while exploring the benefits of the technology in a transparent and responsible way, and taking a tough stance on high-risk applications to prevent harm. There is now a powerful global opportunity for other regulators to follow suit, in partnership with industry.

If you could pass one law this year, what would it be?

I would love to see action taken to help instil public trust in AI. Otherwise, we aren’t truly able to drive the adoption needed to harness its true potential. For example, a law or regulation which helps consumers understand the sources of information upon which an AI is making a recommendation, or which makes it clear when they are communicating with a machine or a human.

