Video Surveillance: Authenticity in an AI-Centric World
29 May 2020
BARELY ANY day goes by without Artificial Intelligence (AI) and video surveillance being front and centre in the public spotlight due to concerns about ethics and privacy. Even more so now given the rapid development of highly beneficial applications designed for safety in light of COVID-19. Pauline Norstrom explains why it has never been more important for stakeholders to remain properly informed about this ever-evolving set of technologies.
The search for solutions to the new issues created by the COVID-19 crisis has been catalysed by AI, which helps analysts who might otherwise be overwhelmed with data to understand, in real time, the impact of constant policy change, differing international developments and the effects on people, businesses and the economy. AI is even being developed to predict future scenarios and so improve preparedness.
The latest developments in contact tracing use AI to assess the risk of infection to an exposed individual, while other AI-centric solutions automatically track and report NHS resource availability in response to new COVID-19 cases. In bio-science, more than 100,000 academic papers were made available through the CORD-19 project to be analysed automatically using natural language processing and other AI techniques, finding similarities with other viruses and using the data to model outcomes and draw learnings from existing drugs.
AI is already used effectively in the healthcare and Government sectors and is also growing rapidly in popularity within the professional security sector, where it creates meaning from the data generated by a proliferation of cameras and other sensors.
Without doubt, AI has the potential to solve all sorts of problems, but every leading light has a dark side. Bad actors may try to use it to alter the perception of reality. Although we have not seen fake videos successfully passed off as valid evidence in court (yet), the need to verify a surveillance video's authenticity and the need to ensure AI 'explainability' are inexorably linked.
Generator of data
The video surveillance domain is a massive generator of content-rich data across multiple sectors including retail, logistics, education, sport and infrastructure. Without AI, it is nearly impossible to search video retrospectively to determine what happened after a complex event, or to predict whether a situation may arise. When used with cloud technology bolstered by strong cyber security measures, ever-improving communications and dedicated processing, AI is accessible to a much wider user base. Furthermore, processing does not have to happen at the edge or on a local server. Rather, it can happen anywhere. No longer does AI sit in the corner of the development labs. Now, it's being applied in the real world to solve Big Data information overload.
When technologies are distributed, it's important not to lose sight of traceability back to the source. The broadcast and media industries embedded digital rights management technology to prevent copyright theft. In contrast, the video surveillance industry has a steep hill to climb in applying authentication technologies in real-time, since they increase the processing and power overhead and slow systems down. Generally speaking, that's why, if check-sums are applied at all to verify authenticity, this happens at the point of export.
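As a rough illustration of that point-of-export approach, the Python sketch below hashes an exported clip and writes the checksum into a small manifest alongside it so the file can be re-verified later. The manifest layout and file naming are invented for this example; no particular product or standard is implied.

```python
import hashlib
import json
import time
from pathlib import Path

CHUNK = 1024 * 1024  # stream in 1 MB chunks so large clips never sit in memory

def file_sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            digest.update(chunk)
    return digest.hexdigest()

def export_with_checksum(video_path):
    """Record the clip's hash in a small manifest written alongside it."""
    manifest = {
        "file": Path(video_path).name,
        "sha256": file_sha256(video_path),
        "exported_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    manifest_path = Path(video_path).with_suffix(".manifest.json")
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path

def verify_export(video_path):
    """Re-hash the clip and compare against the checksum made at export."""
    manifest_path = Path(video_path).with_suffix(".manifest.json")
    manifest = json.loads(manifest_path.read_text())
    return file_sha256(video_path) == manifest["sha256"]
```

A scheme like this only proves the file is unchanged since export; it says nothing about what happened between camera and export, which is why the audit trail and physical security measures discussed later still matter.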
Nothing can halt progress as these technologies come of age. This is evidenced by massive investment into AI in the security industry for facial recognition and object detection. According to the 2019 Stanford AI Index, this investment topped $7 billion last year, representing nearly 10% of the total global investment into AI. AI is proving itself in live monitoring and the retrospective search of video recorded within discrete systems. However, to create video storage warehouses which span geographies and industries, and which are suitable for mass automatic analysis, the video has to be accessed through secure APIs and harmonised to a common format. This may mean that the original data structure could be lost.
Analysis through a neural network produces new data about the video which can be correlated with other data sets to build in the power to predict events. The AI brings great benefits by increasing accuracy and reducing human processing time but, in creating meaningful analysis, the outputs may become obfuscated from the original data source, potentially posing a challenge later if authenticity, 'explainability', algorithmic bias and ethical use are called into question.
Period of renaissance
AI is going through a period of renaissance as powerful dedicated GPUs and advanced open source and crowd-sourced coding frameworks become more widely accessible to developers. The dialogue around trust is also maturing as forward-thinking businesses, driven by market forces, see the benefit of starting to scope, develop and introduce AI into their products and business processes.
The demand for the use of AI in the security and safety realm is growing rapidly as Governments look to the private sector to help them come up with policies and solutions which enable the permanent safe return of people to public spaces.
With this as a backdrop, it's fully expected that the use of professional video surveillance cameras to monitor publicly accessible spaces will continue to grow, further supported by thermal cameras, automatic facial recognition, occupancy counters and social distancing detection (all of which continually monitor the physical workspace and create alerts if breaches in safety policies occur).
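To make the social distancing detection idea concrete, here is a minimal sketch of the kind of per-frame check such a system might run. It assumes a hypothetical person detector and camera calibration that yield ground-plane positions in metres; the two-metre threshold and the sample coordinates are illustrative only.

```python
from itertools import combinations
import math

MIN_DISTANCE_M = 2.0  # illustrative distancing threshold in metres

def distancing_breaches(positions_m):
    """Return index pairs of detected people standing closer together than
    the threshold. Positions are ground-plane (x, y) coordinates in metres,
    assumed to come from a person detector plus camera calibration."""
    return [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(positions_m), 2)
        if math.dist(a, b) < MIN_DISTANCE_M
    ]

# Example frame: three detections; the first two stand 1.5 m apart.
print(distancing_breaches([(0.0, 0.0), (1.5, 0.0), (6.0, 4.0)]))  # [(0, 1)]
```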
When AI is applied for purposes other than security, it opens up a much wider application to the safety of crowded spaces. This new use case assumes that people are a risk to each other just by going about their normal daily duties. That could be perceived as a wider and more immediate threat than the lone wolf terrorist. Not that the threat of terrorism is any lower than it was, but there is now a new terrorist on the street: the hidden bio-threat.
The increase in the application of AIs to analyse video and other associated data may result in a pressing need to re-examine whether video evidence is authentic. This is evidence which is most likely to be used to manage Health and Safety and, ultimately, insurance claims, leading to wider public scrutiny if employees become infected and seriously ill after attending a protected space.
Admissibility of digital evidence
Authenticity in the video surveillance domain hit a wall in the early 2000s when recording transitioned from tape-based media to computer hard drives. This was a hot topic sparking much debate around the admissibility of digital evidence in court. As digital recordings could be easily altered without detection, the industry provided a means by which video authenticity could be verified, especially so when the camera was the only witness.
In response, the British Security Industry Association created a Code of Practice which later became a British Standard, working hand-in-hand with the criminal justice system and the Home Office. The British Standard specified proof of authenticity through the application of check-sums to exported data, while also encouraging robust audit trails and physical security of the devices to the point at which video evidence entered police custody.
When systems became networked, the user could choose whether to capture and record in-camera, on-site or remotely. Images required for evidence could be downloaded anywhere an authorised user had access. Now, images can be captured and may be temporarily stored on-site then transcoded and stored in a different format to the original. AI engines create near real-time analysis, substantially enhancing the value of the system for stakeholders and, in many cases, saving lives as a result. The benefits outweigh the risks.
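One consequence of that workflow is worth spelling out: transcoding produces a new file whose bytes, and therefore checksum, no longer match the original, so any point-of-export check-sum cannot vouch for the harmonised copy. Sketched below is one possible mitigation, a provenance record linking the derived file back to the source hash; the file names and record layout are hypothetical.

```python
import hashlib

def file_sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: the original camera export and a harmonised copy.
source_hash = file_sha256("clip_cam04_original.h264")
derived_hash = file_sha256("clip_cam04_harmonised.mp4")

# Re-encoding always changes the bytes, so the two hashes will differ and
# the original checksum cannot vouch for the new file. Recording the link
# explicitly keeps the chain back to the source material intact.
provenance = {
    "source_file": "clip_cam04_original.h264",
    "source_sha256": source_hash,
    "derived_file": "clip_cam04_harmonised.mp4",
    "derived_sha256": derived_hash,
    "operation": "transcode H.264 elementary stream -> MP4 container",
}
```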
The perceived issues surrounding video authentication today seem moot but, as credible and reliable AIs continue to emerge, fuelled by the rapidly changing need to solve pandemic-focused problems, this matter will be back in the spotlight because, as technology becomes more sophisticated, so do those who intend harm or mischief to others. The open nature of the Internet and ever-expanding communications networks enable fast and efficient access to servers around the world. Never has it been more important to ensure that systems are managed effectively and to the highest possible standards.
Standards and certifications
Professional video surveillance in the UK is tightly controlled by the industry itself. It's held together by several demanding voluntary standards and certifications and by a Government Code of Practice which is focused on citizen privacy rather than on keeping systems secure and useful for forensic purposes.
Video evidence may be compelling and can seal a conviction in a criminal case if proven to be authentic. Furthermore, a successful guilty plea in a major case can save the criminal justice system millions in reduced legal costs, cementing its value to the public when properly managed.
However, the deep fakes created during the 2019 UK General Election campaign (the infamous example being the video produced by Future Advocacy showing Boris Johnson and Jeremy Corbyn endorsing each other) demonstrated how easily video can be manipulated. Can we trust what we are seeing? As native video surveillance data is harmonised into standard formats, and that video is then used as evidence, it becomes much easier to create a deep fake without detection. We may well see a resurgence of the legal debate around whether a digital image has (or could have) been altered.
Modern surveillance providers offer cloud and managed services which open up the ability to use AI. These systems are strictly controlled. However, the standards and audit trail frameworks need to catch up to ensure that bad actors are not successful in altering vital video evidence or creating a mass illusion to distract security analysts from a real incident.
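What might such an audit trail framework look like in practice? One common tamper-evidence technique, sketched below purely as an illustration rather than any existing standard or product, is a hash-chained log in which every entry commits to the hash of its predecessor, so that altering or deleting a past entry breaks the chain on verification.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log: each entry commits to the previous entry's hash,
    so any retrospective alteration breaks the chain on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, actor, action, detail):
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        # Hash a canonical (key-sorted) serialisation of the entry body.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev"] != prev or \
               hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

# Hypothetical usage: record who exported and viewed a clip.
trail = AuditTrail()
trail.append("operator_7", "export", "clip_cam04.mp4")
trail.append("analyst_2", "view", "clip_cam04.mp4")
print(trail.verify())  # True; editing any past entry would return False
```

In a production setting the chain head would itself need protecting (for instance by periodic anchoring to an external timestamping service), but the basic structure shows how tampering can be made evident rather than invisible.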
Video authentication in an AI-centric world needs to reclaim its place on the strategic agenda. In the meantime, it's the professional security industry's ongoing responsibility to successfully self-regulate, while also creating robust audit trails and explainable AI supported by super-secure, bank-grade cyber security measures that bring the best of these technologies to market for the greater good of society as a whole.
Pauline Norstrom is CEO and Founder of Anekanta Consulting
*For further information visit www.anekanta.co.uk or send an e-mail to: ask@anekanta.co.uk