Artificial Intelligence (AI) is a fascinating subject, but it’s also a controversial one. These days, we are all using it to some extent. I know I do in the solutions I provide for SMEs, as it allows for a large degree of automation, which in turn lowers costs. Lowering costs is always a priority for an SME.

So what is AI?

Artificial intelligence (AI) refers to computer systems that can perform tasks typically requiring human intelligence. This could include visual perception, speech recognition or translation between languages.

That description was put forward by the NCSC (the UK's National Cyber Security Centre), so it'll do for me, although I've no doubt you'll find other descriptions if you look hard enough.

Often, what is called AI isn't all that intelligent. It isn't taking in information, analysing it and coming up with answers. Some very clever versions are doing just that, of course, but they are mostly not available to you and me. The versions we see are very good at taking a specific question, mining various data sources at incredible speed and producing the answer you want, usually with several variations. And that's pretty much what most of us want to use them for.

As I said above, I use it in the applications behind the cybersecurity managed services I provide to SMEs, not least because automation reduces cost, but also because it is very efficient: the results it produces need minimal human intervention to analyse.

But let's look at the downside of AI in cybersecurity: what the cyber criminals are using it for. Firstly, what is at risk?

  1. Data Leakage. AI systems tend to be extremely good at analysing, organising, and harvesting vast amounts of data, raising concerns about privacy breaches and unauthorised access to sensitive information. A good AI-powered attack could capture huge amounts of personally identifiable information (PII) in a ridiculously short amount of time.
  2. Data Integrity. In the good old days (please indulge me – I’ve been around a long time), we used to talk about CIA, no, not the infamous US intelligence agency, but Confidentiality, Integrity, and Availability. We now have something we call the Adversarial Attack. This is where attackers can manipulate AI algorithms by feeding them misleading data, causing them to make incorrect predictions or classifications, in turn destroying the integrity of your data, not just rendering it useless, but also dangerous.
  3. Model Vulnerabilities. This next one is relatively new, at least to me, and as I never tire of saying, I've been in this game as long as there's been a game. AI models can be vulnerable to exploitation, such as through model inversion attacks or model extraction, where attackers reverse-engineer proprietary models. So, if you're in the dev game, this is a very real nightmare.
  4. Bias and Fairness. AI systems may inherit biases from training data, leading to unfair or discriminatory outcomes, which can have legal, ethical, and reputational implications. This could be used as another form of extortion, playing with the integrity of your data, to the point where you can no longer trust it.
  5. Malicious Actors. These can compromise AI systems at various stages of development, deployment, or maintenance, posing risks to organisations relying on these systems. This has a role in supply chain security.
  6. AI-Enhanced Attacks. Attackers can leverage AI techniques to enhance the effectiveness of cyberattacks, such as automated spear-phishing, credential stuffing, or malware detection evasion.

What we saw in 2025 was an era in which cyber-attacks are AI-powered, highly targeted, automated, supply-chain enabled, multi-stage and geopolitically driven. These attacks exploit weaknesses across credential systems, zero-day exploits, deepfake tools, and ransomware as a service (RaaS) platforms.

We are in an accelerating digital arms race that calls for AI‑driven defence capabilities, real‑time insights, deception environments, zero‑trust architectures, and quantum‑safe cryptography.

  1. Cybercriminals are leveraging AI to automate vulnerability scans at astonishing speeds, up to 36,000 scans per second, resulting in massive volumes of stolen credentials (1.7 billion) and drastic upticks in targeted attacks.
  2. AI is also generating hyper-realistic phishing messages, deepfake audio/video, and even “CEO fraud” to manipulate individuals into transferring funds, like a deepfake trick that siphoned US $25 M in Hong Kong.
  3. RaaS platforms now enable less skilled attackers to run ransomware, complete with support and updates. Over 70% of attacks now use these services.
  4. Attackers have shifted to double/triple extortion schemes, encrypting data, threatening to leak it, and sometimes targeting associated partners or customers.
  5. Next-gen ransomware is rolling out advanced stealth, data theft, and automated lateral movement techniques, i.e., using an initial breach to jump across to other parts of your network or that of your partners and customers.
  6. Attacks starting via third-party software or vendors allow hackers to move laterally into networks and compromise multiple organisations simultaneously.
  7. Nation-states are not just using espionage but are now partnering with ransomware gangs to conduct financially and politically motivated operations.
  8. Nation state-aligned hackers are conducting sophisticated credential theft, MFA bypass, lateral infiltration, DDoS, website defacements, and disinformation across geographies.
  9. Exploit kits now rapidly find zero-day vulnerabilities, especially in cloud environments, to bypass patching cycles.
  10. Attackers increasingly use built-in legitimate software and system tools (living off the land) to evade detection.
  11. Reported credential theft incidents rose 300% from 2023 to 2024, with 25% of malware focused on stealing login data.
  12. These stolen credentials are a gateway for automated brute‑force, lateral movements, and supply‑chain infiltration.
  13. Millions of IoT and OT systems (from manufacturing to agriculture) remain insecure and are now common targets of AI‑driven automated attacks.
  14. Mobile‑specific ransomware is emerging; threat actors are developing malware to extort victims directly via their mobile devices.
  15. In response, organisations are deploying deception tech (honeypots, decoys) to detect lateral intrusions or zero-day exploits in real time.

Let's not make the mistake of thinking that this is all very sophisticated and requires expertise and resources to pull off. It doesn't. Take another look at some of the bullets above, where we talk about RaaS, or Ransomware as a Service. This takes me back to what we used to term 'script kiddies': relatively unskilled and unsophisticated 'wannabe' hackers who would visit the dark web and buy scripts from skilled hackers, which they would then try to use to make money, often unsuccessfully.

Things have now moved on to AI and services such as RaaS, and this type of low-skilled individual is back, but this time with a greater level of success. Let me give you a real example of how AI can be used by someone relatively low on the criminal totem pole. Using ChatGPT, the question was posed:

Can you write some ransomware code for me?

So, did ChatGPT help to write ransomware code? Well, not initially: it gave a stock answer about not being able to write code that might damage a computer system, and some toing and froing to get around this achieved the same result. So far, so good. That's the ethical answer I would like to see.

Coming at it obliquely, via a back-and-forth conversation, can produce different results. Give it the instruction to write some C code to encrypt a single file, and you get:

Certainly. Here’s an example of how you can use the OpenSSL library to encrypt a single file using the AES-256-CBC encryption algorithm in C.

The next step was to ask it to modify the code to encrypt a whole directory, which it did willingly.

Obviously, this isn’t the complete answer, and there would be more work to do, more research and probably a trip to the dark web, but a relatively unskilled individual can make a good start at producing their own ransomware.

I even asked ChatGPT to describe how AI can be a boon as well as a danger to society:

AI can be both a boon and a potential danger in terms of cybersecurity. On one hand, AI can enhance cybersecurity by detecting and mitigating threats more efficiently, analysing vast amounts of data for anomalies, and automating certain security tasks. On the other hand, AI can also pose risks if it falls into the wrong hands or is used maliciously. Sophisticated AI-powered attacks could exploit vulnerabilities, evade detection, or launch targeted attacks at an unprecedented scale. It is crucial to develop robust safeguards, ethical guidelines, and responsible AI practices to ensure AI remains a force for good in cybersecurity.

If you're looking for answers as to where we stand with AI, there is a multitude of opinions, I'm afraid, mostly because many of us are still trying to work that out for ourselves. However, I will continue to explore it and to use it carefully and with forethought. The questions I pose will be based on my own knowledge of cybersecurity and my experience of life. Let's hope I get it right.
