
Advertorial feature sponsored by Leidos
Spotlight on Policy
26 April 2021

How AI can enable an efficient border

Trust in artificial intelligence is key to its successful uptake, says Shannon Kerrigan, business development manager at Leidos UK. 

By Shannon Kerrigan

As the UK government pushes to advance citizen centricity and digitisation at the border, the use of artificial intelligence (AI) and the algorithms that drive it remains controversial. Concerns around immigration processing bias, the way data is collected and used, and the algorithms supporting border processes mean that governments must focus on balancing innovative technology with reliability, resilience and security to protect the privacy and safety of citizens.

Recognising this challenge, Leidos has combined the cutting-edge technology that makes AI and machine learning (ML) algorithms reliable, resilient and secure with tools that increase trust by ensuring transparency and eliminating bias. Our approach distinguishes our solutions and enables us to deliver holistic end-to-end transformation with integrity.

AI and ML have been used for everything from improving the British Army’s supply chains to forecasting major cyber threats by analysing tweets for mentions of software vulnerabilities. The most successful use cases, however, have required a comprehensive understanding of the risks and benefits posed by the technology, and a methodology that builds stakeholder trust. Trust is incredibly important. As users and owners of systems keep pace with ambitious transformation and efficiency agendas, problems often arise during deployment and maintenance because of fundamental barriers to confidence. The central question remains: can humans trust AI to do what it is supposed to do, when it is supposed to do it, in the circumstances it is supposed to do it in? As we support our UK national security and defence clients in turning complex data into new opportunities for insight and efficiency, the Leidos approach is to deliver Trusted AI.
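To make the vulnerability-forecasting example concrete, here is a deliberately simple sketch of scanning short posts for mentions of software vulnerabilities. It is a toy illustration under our own assumptions, not the system referenced above; the pattern, keywords and posts are invented.

```python
# Toy illustration only: scanning short posts for mentions of software
# vulnerabilities (CVE identifiers plus a few risk keywords). The real
# threat-forecasting work referenced above is far more sophisticated.
import re

CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,7}", re.IGNORECASE)
KEYWORDS = ("exploit", "zero-day", "proof of concept", "rce")

def flag_post(text: str) -> dict:
    """Return any vulnerability identifiers and risk keywords found in a post."""
    return {
        "cves": CVE_PATTERN.findall(text),
        "keywords": [k for k in KEYWORDS if k in text.lower()],
    }

posts = [
    "New proof of concept for CVE-2021-26855 circulating, patch now",
    "Lovely weather at the border today",
]
for p in posts:
    print(flag_post(p))
```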

[Read our exclusive research: What could Single Trade Windows mean for the growth of UK PLC?]

What do we mean by Trusted AI? It’s AI that acts as a valued partner and works with humans to solve problems; that is predictable and resilient across changing environments; that increases security rather than introducing new vulnerabilities; that does not put humans or missions at additional risk; and that behaves ethically.

We believe that AI is a true partnership between technology and people, with each stakeholder playing a critical role in how the other works. With this in mind, the methodology of implementation is critical. We know that building human trust requires more than tools and technology. It requires a robust test and evaluation process that assures these technologies are effective and safe to use. And most importantly, trust in AI requires a tailored process where the level of automation increases gradually through stages, reducing risk and assuring human control and governance. This is why we developed our 4A Methodology.

Leidos uses this incremental approach to minimise the inherent biases that typically impact the design and development of automated solutions. Think of biases as our digital fingerprints that we leave on data, the algorithms we develop, and even the way we interact with technology. Leidos’ AI experts help reduce biases in a variety of ways, including analysis of all data types (structured and unstructured), combining both AI and human-led algorithms, and offering visibility into how AI arrives at decisions.
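As a loose, hypothetical illustration of what "offering visibility into how AI arrives at decisions" can look like in practice, the sketch below trains a deliberately simple model and breaks one of its decisions into per-feature contributions. The data and field names are invented for illustration; this is not Leidos tooling.

```python
# Illustrative sketch only: a transparent, inspectable model whose individual
# decisions can be broken down into per-feature contributions. The field
# names and data are hypothetical, not real border or Leidos data.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["document_mismatches", "route_risk_score", "prior_referrals"]

# Tiny synthetic training set: rows are cases, columns follow feature_names.
X = np.array([
    [0, 0.1, 0],
    [1, 0.4, 0],
    [2, 0.8, 1],
    [0, 0.2, 0],
    [3, 0.9, 2],
    [1, 0.3, 0],
])
y = np.array([0, 0, 1, 0, 1, 0])  # 1 = refer the case for human review

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(case: np.ndarray) -> None:
    """Print the linear contribution each feature makes to the decision."""
    contributions = model.coef_[0] * case
    score = contributions.sum() + model.intercept_[0]
    for name, value in zip(feature_names, contributions):
        print(f"{name:>22}: {value:+.3f}")
    print(f"{'intercept':>22}: {model.intercept_[0]:+.3f}")
    print(f"decision score: {score:+.3f} -> "
          f"{'refer to officer' if score > 0 else 'no action'}")

explain(np.array([2, 0.7, 1]))
```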


As the UK continues to mature in the way it collects, assures and uses data, implementation methodology is as critical (if not more so) to achieving success as the technology itself. For example, the Cabinet Office's 2025 UK Border Strategy commitment to developing a Single Trade Window, acting as the entry and exit point for all trade, requires handling an overwhelming volume of data. This data currently sits in diverse formats that need to be harmonised, standardised and integrated into a single view. To take the first step in this journey, confidence must be built among users by demonstrating the potential value of applying AI to increase automation within this massive transformation programme.
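As a hypothetical sketch of that harmonisation challenge, the example below maps trade declarations arriving in two different formats onto one common record shape. The field names and formats are invented and do not reflect the actual Single Trade Window data model.

```python
# Minimal sketch: harmonising trade declarations supplied in different
# formats into one common record shape. All field names are hypothetical
# and do not reflect the actual Single Trade Window schema.
from dataclasses import dataclass

@dataclass
class TradeRecord:
    consignment_id: str
    commodity_code: str
    declared_value_gbp: float

def from_customs_csv(row: dict) -> TradeRecord:
    """Source A: a CSV export with its own column names and values in pence."""
    return TradeRecord(
        consignment_id=row["ConsignmentRef"],
        commodity_code=row["HSCode"],
        declared_value_gbp=int(row["ValuePence"]) / 100,
    )

def from_carrier_api(payload: dict) -> TradeRecord:
    """Source B: a carrier's JSON payload with nested fields."""
    return TradeRecord(
        consignment_id=payload["shipment"]["id"],
        commodity_code=payload["goods"]["hs_code"],
        declared_value_gbp=float(payload["goods"]["value"]["gbp"]),
    )

# Two very different inputs resolve to the same single view.
records = [
    from_customs_csv({"ConsignmentRef": "C-001", "HSCode": "870323", "ValuePence": "1250000"}),
    from_carrier_api({"shipment": {"id": "C-002"},
                      "goods": {"hs_code": "090111", "value": {"gbp": 4300.0}}}),
]
for r in records:
    print(r)
```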

The Home Office’s Future Border and Immigration System Programme presents another clear need for a structured framework of implementation around technology initiatives. It is critical that system users and owners, and citizens, are also on the journey as we transform our border and accompanying processes to deliver a universal permission to travel, a person-centric approach, and increased use of automation.

Using a framework such as the Leidos 4A Methodology to deliver Trusted AI ensures that the pace of progress is synchronised with stakeholders’ confidence levels. Working in this type of close partnership empowers users to unlock the value of the technology and tools they have access to. Supporting front-line officers with faster access to accurate data that informs critical decision-making will save time and increase their effectiveness, building confidence.

The 4A Methodology

Analysis: First, we assist customers with data identification and organisation, helping reduce the overall timeline to introduce artificial intelligence (AI). Working in partnership, we analyse potential opportunities to generate value through AI, prioritising those where AI offers the greatest value.

Assistance: As we introduce AI, we focus initially on solutions where AI assists humans to do their work more efficiently, e.g. sifting through data to help improve decision-making.

Augmentation: As user trust grows and human feedback is incorporated into the AI, humans work as partners to inspect, analyse, and confirm potential options presented by the system.

Automation/autonomy: As more human feedback is incorporated, the AI is ready to handle tasks autonomously, although humans always remain in control of critical processes (see the sketch below).
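As one hypothetical reading of how this staged increase in automation might be expressed in software, the sketch below gates what the system may do by the stage stakeholders have agreed and by model confidence, keeping a human in control of critical decisions throughout. The stages, threshold and routing rules are illustrative assumptions, not Leidos's implementation.

```python
# Hypothetical sketch of staged automation: what the system may do depends on
# the stage stakeholders have signed off and on model confidence. Critical
# decisions always go to a human, whatever the stage.
from enum import Enum

class Stage(Enum):
    ASSISTANCE = 1    # AI surfaces relevant data; humans decide everything
    AUGMENTATION = 2  # AI proposes options; humans inspect and confirm
    AUTOMATION = 3    # AI acts on routine cases; humans keep critical ones

def route_case(stage: Stage, confidence: float, is_critical: bool) -> str:
    """Return who acts on a case under the current stage and confidence."""
    if is_critical:
        return "human decision (critical case, AI output shown as context)"
    if stage is Stage.ASSISTANCE:
        return "human decision (AI supplies supporting data only)"
    if stage is Stage.AUGMENTATION or confidence < 0.95:
        return "human confirms or rejects the AI's proposed option"
    return "AI handles the case automatically; outcome logged for audit"

print(route_case(Stage.AUGMENTATION, confidence=0.97, is_critical=False))
print(route_case(Stage.AUTOMATION, confidence=0.99, is_critical=True))
print(route_case(Stage.AUTOMATION, confidence=0.80, is_critical=False))
```

The point of such a design is that moving from one stage to the next is an explicit, human-approved configuration change rather than something the system decides for itself.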

4A Methodology advantages

Increased trust: Trust in AI is established gradually and organically, with feedback captured in models, features and user interfaces.

Contextual, human-driven control: Users select the level of AI based on context and confidence.

Better data: Human interaction with AI at early stages provides the rich data needed to build more complex, reliable capabilities.

Lower risk: Scope of AI deployment is increased based on proven success and trust gained through experience.

More robust: Reliable fallback modes with human control.

Better governance: Continuous transition and iterative capability development.

Mature tactics, techniques and procedures: The incremental approach provides time to train staff and develop policies.

Why partner with Leidos?

As a leading provider of data-rich solutions for the public and private sectors, we understand our customers and their data. By combining that understanding with our expertise in designing and building AI/ML systems, we’re able to accelerate solutions that provide immediate value for our customers’ most pressing problems. Our knowledge of the underlying landscape in the national security domain allows us to take a disciplined, science-based approach to AI/ML that distinguishes our solutions.

For more information visit www.leidos.com/uk-borders

Shannon Kerrigan is business development manager at Leidos UK.

