AI in Physical Security

29 Feb, 2024

    It’s hard to ignore the impact that Artificial Intelligence (AI) is having on our world, from the comparatively mundane and ‘simplistic’ – your mobile phone making sensible (and very credible) suggestions for how to reply to a WhatsApp message – through to applications that were almost unimaginable just a few years ago, such as an AI interface accurately diagnosing patients at the point of entry to a healthcare setting.

    And while some AI applications have an almost Orwellian feel to them, if we can see past the bluster and fear, then AI clearly has a lot to offer.

    One of the fears surrounding AI, particularly in sensitive situations or industries, is that the technology is not able to fully replicate or replace the human element, despite pressure to do so. One such industry is physical security, and there are strong arguments for that point of view. This blog explores the pros and cons of AI in physical security applications and examines what the future might hold.

    What role could AI have in the physical security industry?

    AI has the potential to assist in both proactive and reactive elements of physical security.

    It is already widely used in other industries to carry out analytical tasks and make decisions or recommendations based on that analysis. The technology is highly suited to processing and analysing large amounts of diverse and often technical data, using it to identify patterns and, from there, predict possible outcomes. That is very much what a physical security assessment involves, so could AI take on that task at a much higher rate of output, and therefore more cost effectively?
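    To make that concrete, here is a deliberately simplified sketch of the kind of weighted pattern-scoring an AI-assisted assessment tool might perform. The risk factors, weights and interface are illustrative assumptions, not any real product’s method:

    ```python
    # A minimal sketch, not a production model: combining normalised site
    # observations into a single risk score. Factor names and weights are
    # invented for illustration only.
    RISK_WEIGHTS = {
        "perimeter_gaps": 0.30,
        "camera_blind_spots": 0.25,
        "unbadged_entries_per_day": 0.25,
        "incident_history_score": 0.20,
    }

    def risk_score(observations: dict[str, float]) -> float:
        """Combine 0-1 normalised observations into a single 0-1 risk score."""
        return sum(RISK_WEIGHTS[k] * observations.get(k, 0.0) for k in RISK_WEIGHTS)

    print(risk_score({
        "perimeter_gaps": 0.6,
        "camera_blind_spots": 0.4,
        "unbadged_entries_per_day": 0.2,
        "incident_history_score": 0.7,
    }))  # ≈ 0.47
    ```

    Even a toy like this shows both the appeal – the scoring is instant and repeatable – and the limitation this blog returns to below: someone still has to choose the factors and the weights.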

    Other proactive tasks that AI can assist with sit in the realm of monitoring, where the technology is well suited to many roles:

    1. Video Surveillance and Analysis: AI can identify and track objects and individuals, distinguishing between authorised personnel and potential threats. It can also spot anomalous behaviour or activities, for example people loitering in restricted areas (see the sketch after this list).
    2. Facial Recognition: AI-powered facial recognition systems can identify individuals on watchlists or unauthorised personnel attempting access.
    3. Drone Detection: AI systems can be used to establish no-fly zones and to identify and track drones flying in or near restricted areas.
    4. Access Control: Biometric systems are already being enhanced by the data processing power of AI, for example through adaptive processing that automatically updates a person’s profile as they age, or by combining multiple biometric features (voice, iris and facial features) into a single system.
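    To illustrate the first item above, here is a minimal sketch of rule-based loitering detection layered on top of an object tracker. The tracker itself, the zone names and the 60-second threshold are assumptions made for illustration:

    ```python
    import time

    # Sketch only: flag a tracked person who dwells too long in a restricted
    # zone. Detections are assumed to arrive as (track_id, zone) pairs from
    # an upstream tracker; a real system would also clear tracks that leave.
    LOITER_SECONDS = 60
    first_seen: dict[tuple[int, str], float] = {}

    def check_loitering(track_id: int, zone: str, now: float | None = None) -> bool:
        """Return True once a track has stayed in a zone beyond the threshold."""
        now = time.monotonic() if now is None else now
        start = first_seen.setdefault((track_id, zone), now)
        return now - start >= LOITER_SECONDS

    # The same track reported in the same zone 61 seconds apart:
    assert check_loitering(7, "loading_bay", now=0.0) is False
    assert check_loitering(7, "loading_bay", now=61.0) is True
    ```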

    In terms of reactive tasks, AI also has much to offer:

    1. Automated Response: AI-powered drones and robots can be deployed to investigate alarms, inspect perimeters, and respond to security breaches.
    2. Remote Monitoring: Security personnel can remotely control and monitor multiple security systems through AI interfaces, enabling quicker response times.
    3. Emergency Response: AI can automate emergency notifications and response procedures, supporting clear communication and coordination during incidents (a minimal notification sketch follows this list).
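    As a sketch of the third reactive task, the snippet below fans an incident out to notification channels chosen by severity. The channel names, the Incident shape and the print statement standing in for a real SMS/email/radio gateway are all assumptions:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Incident:
        site: str
        severity: str  # e.g. "low", "high", "critical"
        summary: str

    # Illustrative routing table: which channels each severity level alerts.
    CHANNELS_BY_SEVERITY = {
        "low": ["control_room"],
        "high": ["control_room", "site_manager"],
        "critical": ["control_room", "site_manager", "emergency_services"],
    }

    def notify(incident: Incident) -> list[str]:
        """Fan an incident out to every channel its severity requires."""
        recipients = CHANNELS_BY_SEVERITY.get(incident.severity, ["control_room"])
        for channel in recipients:
            # Stub: a real deployment would call a messaging gateway here.
            print(f"[{channel}] {incident.site}: {incident.summary}")
        return recipients

    notify(Incident("Warehouse 3", "critical", "Perimeter breach at gate B"))
    ```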

    There are clearly many tasks, roles and responsibilities where AI could play a part. It has the potential to be quicker, cheaper and more effective than a human carrying out the same task. But that doesn’t automatically mean it should.


    Risks of AI in physical security

    AI is far from infallible and there are some notable examples of mistakes being made by the technology. Some are simply comical, from inappropriate translations to a bald-headed referee being mistaken for a football on an automated webstream.

    Some are far more serious: self-driving cars causing fatal collisions, facial recognition systems leading to wrongful convictions, and examples of racial bias in government and corporate vetting procedures.

    It’s these mistakes that give rise to concerns about the application of AI in an industry where the stakes are so high – potentially even life or death.

    Take the task of a security assessment as an example. Is a machine really capable of spotting nuances when assessing a site? Can AI deal with truly novel scenarios in the same way that a human can? If we assume that a machine can only ever be as good as the quality and volume of data fed into it, then the honest answer to those questions must be no.

    There are comparatively few people in the world capable of carrying out a thorough, unbiased and independent assessment, and the judgements involved are very often not black and white. That makes it very hard to feed sufficient data into an algorithm in a meaningful way.

    Equally, if highly sensitive data is fed into an AI tool, that data then resides somewhere and must be kept safe from cyber attacks. By using AI as a tool, we are actually opening up a new avenue of attack. A granular record of how a security assessment is carried out is all well and good, but it opens up the possibility of that information falling into the wrong hands and being used against its intended purpose.

    The use of AI in physical security raises ethical, privacy and potentially legal issues about the mass processing of personal data. Not to mention of course the ramifications of a machine being entrusted with a task that is designed to safeguard human life. If it fails, who or what is ultimately responsible?


    Is there a right answer?

    AI is clearly a divisive technology. On the one hand, it has the potential to greatly speed up processes and improve monitoring and response capabilities, all at reduced cost. On the other, it is not without issues, and it will get things wrong. So can AI ever be used in such a high-stakes environment as physical security?

    The short answer is yes and no. After all, humans are equally capable of making mistakes, so just because something isn’t perfect does not mean it should be considered untenable. Instead, sufficient checks and balances should be in place to learn, grow and correct over time.

    Should AI ever be relied on to solely carry out a physical security risk assessment? Absolutely not. But it could be used in conjunction with human expertise to speed up data processing and even provide an additional perspective.
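    One way to picture that division of labour is a human-in-the-loop queue: the model proposes, the analyst disposes. The threshold and finding structure below are illustrative assumptions:

    ```python
    # Sketch of a human-in-the-loop pattern: AI-proposed findings are only
    # queued for review; nothing is acted on until a reviewer signs off.
    REVIEW_THRESHOLD = 0.5  # assumed cut-off for surfacing a finding

    def triage(findings: list[dict]) -> list[dict]:
        """Keep model findings worth a human's attention, most confident first."""
        queue = [f for f in findings if f["confidence"] >= REVIEW_THRESHOLD]
        return sorted(queue, key=lambda f: f["confidence"], reverse=True)

    for item in triage([
        {"issue": "CCTV blind spot, east fence", "confidence": 0.82},
        {"issue": "Door sensor flapping", "confidence": 0.31},
    ]):
        print(f"For human review: {item['issue']} ({item['confidence']:.0%})")
    ```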

    Equally, AI will surely have a place in monitoring and alerting functions, but handing over the keys to the castle is a step too far. Security experts must learn to use AI as part of their toolset, without letting hard-won human skills be lost along the way.

    The debate about AI in security consultancy will continue, and as the technology improves there will be more scope for using it to hasten data processing and analysis. However, the practical and ethical implications of relying on a non-human will always remain a limiting factor in the proliferation of technology in security consulting.

    With so much at stake, security and risk analysis will continue to face the same moral questions as other life-preservation industries, such as self-driving vehicle manufacturing, and where the line is drawn will remain blurry for the foreseeable future.