Cybercriminals have always been ahead of the curve when it comes to adopting new technologies, as they are not bound by rules, laws or ethics, and rarely have to pay with their own money. Recently, artificial intelligence (AI) has been in the news a lot, specifically generative AI, in which a little input effort can produce a huge output.
One generative AI system you may have heard of is ChatGPT, which has become the bane of many schoolteachers setting homework essays: give it a few short sentences outlining what you want to write, and it completes a detailed essay on almost any subject in seconds. Generative AI is not limited to text-based outputs, though text generators are one of the four main types; the other three are visual generators, audio generators and code generators. These can be combined to create videos and much more.
The problem we face is how criminals are making use of these tools: imitating someone’s voice, creating incriminating fake photos and videos, even performing real-time face swaps on a conference call. Code generators also allow them to build impersonation websites, as well as scripts and bots that scan for vulnerabilities and log in to hundreds of online systems and services using credentials from data breaches. The key point is that criminals have their own generative AI systems, so they are not limited to the mainstream ones with their built-in (but limited) safeguards.
Can you tell which one of these social media profile pictures is fake? Actually, both are fake, but by the time you read this, it may be impossible to tell a real photo from a fake one.
There is even a word for an AI-generated fake photo, audio clip or video: deepfake. The name comes from the type of artificial intelligence used to create them, deep learning, rather than from how deeply convincing they are. A deepfake video can even be created in real time, mapping a person’s face and mouth movements so it can be used for a face swap to impersonate someone on a video conference call. Cybercriminals can easily use deepfakes for extortion, for example by creating fake sexually explicit images or videos, faked evidence of a romantic affair or criminal activity, or compromising audio recordings. There have even been deepfake virtual kidnappings, where family members were convinced that a loved one was in trouble and paid a ransom.
Other generative AI systems can enhance existing phishing and extortion techniques, allowing highly personalised attacks against specific people or targeted systems. Finding the needle in a haystack of information is no longer an arduous task, nor is spotting exploitable patterns and details in social media. Stolen breach data and compromised systems can be analysed in mere seconds with only a little computer knowledge, identifying potential victims and enabling improved automated attacks.
Extortion is not the only criminal area enhanced by generative AI; fraud is as well, especially romance scams. Fake social media profiles no longer need to rely on stolen pictures, and so are not susceptible to a reverse image search, as the pictures can all be AI generated. Deepfake audio can be created in practically any language, together with pre-recorded deepfake videos and voice cloning. Deepfake in-situ photos have even been used to ‘prove’ that a fake person is real when challenged by suspicious online daters, showing them holding today’s newspaper or a sign they were only just asked to write. To combat this, ask the person to video chat while walking outdoors, such as in a local park, as this limits the technology available to them to a mobile phone.
Deepfakes can also aid many other types of fraud, including celebrities endorsing investment scams, faked government and banking documents for identity theft, and fake audio and video of senior management. The international engineering and design firm Arup recently fell victim to a highly sophisticated and targeted fraud to the tune of £20 million, featuring a deepfake of the chief financial officer (CFO) on a video conference call. Be warned: well-crafted deepfakes are largely impossible to detect on the small screen of a smartphone, so for anything involving money, use systems that are familiar to you, such as Apple FaceTime or a WhatsApp video call, and query anything out of the ordinary.
Overall, the outlook is bleak, as generative AI lowers the barrier to entry for cybercriminals even further. That said, following the good cyber hygiene practices in this eBook, together with a healthy dose of scepticism, can protect you from what, less than a decade ago, was pure science fiction.
It is also important to understand that when you use generative AI systems yourself, such as ChatGPT, Google Gemini or Microsoft Copilot, data protection and privacy may be an issue. Especially with free systems, anything you enter or create may go into the training dataset for all other users, so avoid entering credentials or any information you would not want to end up in the hands of others. The general rule is: if you are not paying for the system with money, you are paying for it with your data.