
Brian Sims, Editor
AI represents opportunity and challenge for UK intelligence community
29 April 2020
THE ROYAL United Services Institute (RUSI) was recently commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The resulting research finds that AI offers numerous opportunities for the UK's national security community to improve both the efficiency and effectiveness of existing processes.

The aim of the project was to establish an independent evidence base to inform future policy development regarding the national security uses of AI. The findings are based on in-depth consultations with stakeholders from across the UK's national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives.
The research itself was complemented by a targeted review of existing literature on the topic of AI and national security.
According to RUSI, AI methods can rapidly derive insights from large, disparate data sets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and Human Rights considerations which would need to be assessed within the existing legal and regulatory framework.
For this reason, enhanced policy and guidance are needed to ensure that the privacy and Human Rights implications of the national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
RUSI observes in the report that the UK's adversaries “will undoubtedly seek to use AI to attack the UK” and suggests these adversaries may well include not just hostile states, but criminals too.
Future threats
Future threats could include the use of AI to develop deep fakes, whereby a computer learns to generate convincing faked video of a real person in order to manipulate public opinion. AI might also be used to mutate malware for cyber attacks, making it far harder for conventional IT security systems to detect. The worry is that it might even be possible to re-purpose and control drones to carry out attacks. RUSI argues that, in these instances, AI will be needed to counter itself.
Alexander Babuta (research fellow in national security at RUSI and one of the report's authors) commented: “The adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload. It's highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures.”
Importantly, the researchers (ie Babuta and colleagues Marion Oswald and Ardi Janjeva) firmly believe that AI will only be of “limited value” when it comes to “predictive intelligence” in key areas such as counter-terrorism. Acts of terrorism occur far less often than other criminal acts and are therefore too infrequent to provide the large historical datasets needed to identify patterns.
Even within that limited data set, the backgrounds and ideologies of the perpetrators vary so widely that it's very difficult to build a reliable model of a terrorist profile. The report points out that there are too many variables to make prediction straightforward, with new events potentially being radically different from previous ones.
Further, any kind of profiling could also be discriminatory and lead to new Human Rights concerns.
When it comes to counter-terrorism, the engrossing 57-page report argues that “augmented” intelligence will be the norm, whereby technology assists dedicated human analysts in sifting through and prioritising increasingly large amounts of data, while the analysts themselves continue to reach their own judgements and conclusions.