Bots have been around for a long time now, and most people have at least a basic idea of what they are. For those who aren't sure, a bot is a software application programmed to carry out certain tasks. Bots are automated, meaning they run according to their instructions without a human user needing to start them up manually every time. Bots often imitate or replace a human user's behaviour. Typically, they do repetitive tasks, and they can do them much faster than human users could.
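To make that concrete, here is a minimal sketch of a benign bot in Python. The URL is just a placeholder; the point is that once started, the program repeats one simple task on a schedule with no human involvement.

```python
import time
import urllib.request

# Placeholder address -- any site you are allowed to monitor would do.
URL = "https://example.com/status"

def site_is_up(url: str) -> bool:
    """Return True if the page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

# The "bot" part: once started, it repeats the task on its own,
# tirelessly and far faster than a human checking by hand.
while True:
    print("site up:", site_is_up(URL))
    time.sleep(60)  # wait one minute, then check again
```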

They sound benign, don't they? And to be fair, many are, carrying out functions that lend themselves to automation. But bots are also used for purposes that aren't so great. There are many examples of malicious bots that scrape content, spread spam, or carry out credential stuffing attacks.

Malicious bots often operate as part of a botnet, short for robot network: a collection of computers that malware has compromised. These infected machines, individually known as bots, are remotely controlled by an attacker. Such networks can and do run synchronised, large-scale attacks on targeted systems or networks.

That is one reason why attacks such as ransomware, perpetrated against SMEs, are profitable for cyber criminals. By using a botnet, an attacker can hit hundreds of targets at the same time, and only a small percentage of victims need to pay up to produce a return on a very small investment.
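A quick back-of-the-envelope calculation shows why. Every figure below is an invented assumption, not real data, but it captures the economics just described:

```python
# Illustrative botnet ransomware economics. All figures are assumptions
# chosen to show the shape of the problem, not real data.
targets = 500          # SMEs hit in one synchronised campaign
payment_rate = 0.02    # only 2% of victims actually pay
avg_ransom = 5_000     # average ransom demanded, in pounds
campaign_cost = 1_000  # botnet rental plus malware kit, in pounds

revenue = targets * payment_rate * avg_ransom
print(f"revenue: £{revenue:,.0f}")                       # £50,000
print(f"return: {revenue / campaign_cost:.0f}x outlay")  # 50x
```

Even if only one victim in fifty pays, the attacker comes out well ahead, which is exactly why botnets make these campaigns so attractive.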

Bot activity is expected to increase even further this year, researchers at Imperva claimed, due to the arrival of generative AI tools like OpenAI's ChatGPT and Google's Bard.

“Bots have evolved rapidly since 2013, but with the advent of generative artificial intelligence, the technology will evolve at an even greater, more concerning pace over the next 10 years,” said Karl Triebes, a senior vice president at Imperva.

“Cyber criminals will increase their focus on attacking API endpoints and application business logic with sophisticated automation. As a result, the business disruption and financial impact associated with bad bots will become even more significant in the coming years.”

This is something I have talked about before.  AI can be both a boon and a potential danger in terms of cybersecurity. On one hand, AI can enhance cybersecurity by detecting and mitigating threats more efficiently, analysing vast amounts of data for anomalies, and automating certain security tasks. On the other hand, AI can also pose risks if it falls into the wrong hands or is used maliciously. Sophisticated AI-powered attacks could exploit vulnerabilities, evade detection, or launch targeted attacks at an unprecedented scale. It is crucial to develop robust safeguards, ethical guidelines, and responsible AI practices to ensure AI remains a force for good in cybersecurity.
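As a small illustration of the "analysing vast amounts of data for anomalies" point, the sketch below flags a traffic spike using a simple statistical test. Real AI-driven security tools are far more sophisticated; the sample data and threshold here are invented for the example.

```python
import statistics

# Invented sample: requests per minute to some service; minute 6 is a spike.
requests_per_minute = [102, 98, 110, 95, 105, 99, 940, 101]

mean = statistics.mean(requests_per_minute)
stdev = statistics.stdev(requests_per_minute)

for minute, count in enumerate(requests_per_minute):
    z_score = (count - mean) / stdev
    if abs(z_score) > 2:  # over two standard deviations from the mean
        print(f"minute {minute}: {count} requests looks anomalous "
              f"(z = {z_score:.1f})")
```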

We have nothing to fear from ethical AI development, which integrates ethical considerations into the design and deployment of AI systems, emphasizing transparency, fairness, and accountability to mitigate potential biases or unintended consequences. Sadly, we are already seeing signs of AI being used in cyber-attacks. Some of you may remember that at one time we had what was known as the 'script kiddy'. These were budding criminals who lacked deep skills but downloaded, and often purchased, scripts on the dark web written by skilled hackers who made a good living selling them online. The script kiddy would then attempt to use these scripts to hack, also taking all the risk.

The script kiddy has all but disappeared in recent years, but AI is allowing them to make a comeback, in spades. They can now use AI to generate code that allows them to produce their own malware, which is, in turn, creating an upsurge in cyber-attacks and threats.

So don't be complacent: 2024 could become even more of a problem than 2023 in terms of cyber-attacks. It's time to take some action now to protect yourself.
