Artificial Intelligence and Cyber Security – Boon or Pain?

I wrote a piece back in May entitled ‘AI – Good or Evil? A Clear and Present Danger to Cyber Security?’, in which I discussed how AI could be used to generate code for insertion into a ransomware attack, perhaps heralding the re-emergence of the once-fabled ‘script kiddie’. Whilst there is no doubt that AI has great potential for good, with applications in just about every sphere of IT, it can also allow some very nasty people with very limited technical ability to introduce new and frightening scams. I also quoted a story from CNN about a lady in the US who received a call, allegedly from her daughter, which was very scary indeed: the ‘daughter’ was yelling and sobbing that she’d been kidnapped, and other voices could be heard in the background. Of course, the voice was generated by AI and the whole thing turned out to be totally untrue; the real daughter called her mother and assured her she was safe. This is an extreme case, but it does demonstrate the power of AI and its ability to be abused by unscrupulous and nasty people. If this is happening in the US, it’s only a matter of time before it arrives here.

Another scam, this time reported in The Washington Post, was an update on the well-reported CEO scam, in which someone impersonates the CEO of a company using spoofed email; this time, AI was used to spoof the CEO’s voice in an attempt to scam money from the company. It worked, and the company lost a lot of money.

Two bad examples of AI.  I’m not putting AI down or saying we shouldn’t be utilising it, but cyber security is my thing, and I can’t ignore the warning signs.  Cyber criminals are never slow in adopting new technologies and methods.  We need to be aware of that.

So why am I bringing this up again so soon? Well, AI continues to be reported in the news, sometimes with a good result and sometimes not so good. So, I thought I’d ask ChatGPT what it thought about AI and cyber security. I asked two questions; the first was, ‘Is AI a danger in terms of cybersecurity?’. The answer came back:

AI can be both a boon and a potential danger in terms of cybersecurity. On one hand, AI can enhance cybersecurity by detecting and mitigating threats more efficiently, analysing vast amounts of data for anomalies, and automating certain security tasks. On the other hand, AI can also pose risks if it falls into the wrong hands or is used maliciously. Sophisticated AI-powered attacks could exploit vulnerabilities, evade detection, or launch targeted attacks at an unprecedented scale. It is crucial to develop robust safeguards, ethical guidelines, and responsible AI practices to ensure AI remains a force for good in cybersecurity.

So even ChatGPT thinks this is a double-edged sword. The real problem for my target clientele, the SME space, is that affording AI-powered tools to detect and mitigate threats and analyse large amounts of data is almost certainly not going to be realistic. So, when we talk about developing robust safeguards, understanding the risks becomes more vital than ever. It also brings Zero Trust solutions sharply into focus; finding affordable ones is going to become crucial.
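For readers who haven’t met the term, Zero Trust boils down to ‘never trust, always verify’: every request is denied unless identity, device and policy all check out, regardless of where the request comes from. Here is a deliberately simplified sketch in Python to illustrate the principle; the user names, device IDs and policy table are hypothetical stand-ins for a real identity provider, device-health service and policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_id: str
    resource: str

# Hypothetical stand-ins for an identity provider, a device-health
# service and an access policy engine.
VERIFIED_USERS = {"alice"}
HEALTHY_DEVICES = {"laptop-042"}
POLICY = {("alice", "payroll-db")}

def authorise(req: Request) -> bool:
    """Zero Trust: deny by default. Every request must re-prove
    identity, device health and policy; no implicit trust is
    granted for being 'inside' the network."""
    if req.user not in VERIFIED_USERS:
        return False
    if req.device_id not in HEALTHY_DEVICES:
        return False
    if (req.user, req.resource) not in POLICY:
        return False
    return True

print(authorise(Request("alice", "laptop-042", "payroll-db")))  # True
print(authorise(Request("alice", "old-pc", "payroll-db")))      # False
```

The point is the deny-by-default shape, not the specific checks: real products wrap the same idea around single sign-on, device posture and per-resource policies.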

The next question I asked was, ‘How do we ensure that AI isn’t a cybersecurity threat?’.  The answer came back:

Ensuring that AI is not a cybersecurity threat requires a multi-faceted approach. Here are some key considerations: 

  1. Robust Security Measures: Implement strong security protocols, encryption, and access controls to safeguard AI systems and their underlying data from unauthorized access or manipulation.
  2. Rigorous Testing and Validation: Thoroughly test and validate AI algorithms and models to identify and fix vulnerabilities, including potential avenues for cyberattacks, before deployment.
  3. Adversarial Testing: Conduct adversarial testing to simulate potential attacks and evaluate the resilience of AI systems against different cybersecurity threats.
  4. Data Privacy and Governance: Establish clear policies and practices for data privacy, ensuring that sensitive information is properly handled, stored, and shared in compliance with relevant regulations.
  5. Ongoing Monitoring and Updates: Continuously monitor AI systems for potential security risks, promptly address vulnerabilities, and keep software and hardware up to date with the latest security patches.
  6. Ethical AI Development: Integrate ethical considerations into the design and deployment of AI systems, emphasizing transparency, fairness, and accountability to mitigate potential biases or unintended consequences.
  7. Collaboration and Information Sharing: Foster collaboration among industry, academia, and government entities to share knowledge, best practices, and emerging threats, enabling a collective effort to combat AI-related cybersecurity challenges.
  8. Education and Awareness: Promote cybersecurity awareness and provide training to AI developers, users, and stakeholders, emphasizing responsible AI usage and the identification of potential threats.

By following these guidelines and continuously adapting to emerging risks, we can enhance the security of AI systems and minimize the potential for them to be exploited as cybersecurity threats.

Again, looking at this from an SME perspective, some of it isn’t relevant; No. 6, for example: SMEs aren’t going to be developing their own AI solutions. But much of it I would have come up with on my own, without the aid of a machine; it would appear that AI uses some common sense, which is nice. Take No. 8: I bang on and on about this. It is low cost and easy to implement, and it’s staggering how many companies don’t do it. The list also shows the value of Zero Trust solutions and of encryption, which on its own vastly reduces the risk to data, particularly PII (personally identifiable information, as defined under UK GDPR).
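To make the encryption point concrete, here is a minimal sketch of encrypting data at rest using Python and the widely used cryptography library; the file names and key handling are illustrative assumptions, not a recommendation of any particular product. The point is simply that a stolen laptop or an exfiltrated file yields nothing readable without the key.

```python
# Minimal sketch: encrypting a file of PII at rest with symmetric
# encryption, using the third-party 'cryptography' package
# (pip install cryptography). File names here are illustrative only.
from cryptography.fernet import Fernet

# Create a small sample file standing in for real customer data.
with open("customer_records.csv", "wb") as f:
    f.write(b"name,email\nJane Doe,jane@example.com\n")

# Generate a key once and keep it safe; losing it means losing the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the records before they sit on disk.
with open("customer_records.csv", "rb") as f:
    plaintext = f.read()
with open("customer_records.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Decrypt only when the data is actually needed.
with open("customer_records.csv.enc", "rb") as f:
    recovered = fernet.decrypt(f.read())
assert recovered == plaintext
```

In practice the key would live in a password manager or secrets vault, well away from the data it protects; encrypting data with a key stored alongside it defeats the object.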
