ChatGPT could also prove useful to cybercriminals, who are reportedly already using it to design cyberattacks. Cybercrime is becoming more sophisticated, and security teams will need to stay on their toes in 2023.
At the end of November 2022, the start-up OpenAI released ChatGPT, a new intelligent text-assistant tool that instantly sparked a wave of interest in its revolutionary artificial intelligence. According to some sources, Microsoft even plans to invest $10 billion in OpenAI.
Unfortunately, this craze has not gone unnoticed by cybercriminals, who never miss an opportunity to exploit new technologies for malicious purposes, paving the way for more impactful and disruptive attacks.
Very recently, it has become clear that ChatGPT may well be useful to cybercriminals. According to a recent Check Point Research article, cybercriminals have realized the power of this tool and are reportedly already using it to design cyberattacks. What exactly are we talking about?
First, cybercriminals could use ChatGPT to generate automated content, such as phishing emails, CEO fraud messages, or spam designed to lure your employees.
Unlike emails that are easy to spot (e.g. with telltale signs or mistakes indicating that they are not legitimate), ChatGPT allows emails to be written in more natural, more convincing language. The danger? Phishing is one of the most popular entry points for distributing ransomware or delivering malware: attackers entice victims to download a document or enter their passwords, then carry out their damaging and disruptive attacks. Many of these successful social engineering attacks therefore result in substantial productivity losses for the target organization, which, in turn, can harm its reputation.
Second, cybercriminals could use ChatGPT as an automatic translation tool. ChatGPT is available in many languages, making it easier to translate malicious communications (e.g. phishing emails) so that they are harder to detect. Compared with other methods of achieving the same objectives, attacks can thus multiply with little effort. A boon for attackers.
Just as ChatGPT can be put to good use in helping developers write code, it could also help cybercriminals write malicious code. According to Check Point Research, posts on underground hacking forums indicate that some members have already developed malware, such as information-stealing software (“info stealers”), using code written by ChatGPT. Note, however, that when asked to write “dubious code”, ChatGPT replied: “I am also programmed to comply with ethical and safety standards in terms of programming, and I cannot generate code that could cause damage or compromise the safety of users”.
Raise awareness and train employees
Although it is difficult to predict precisely how ChatGPT and other AI technologies will be used in the future, they are likely to play an important role in the evolution of cybercrime. It is therefore important that organizations understand the potential hazards and take steps to protect themselves. This may include raising employee awareness and training staff to recognize and avoid potential attacks, implementing appropriate security measures and protocols, and continuously improving and updating your protection software.
Tom De Cordier
Associate lawyer at CMS Belgium, specialist in new technologies law and cybersecurity law
Senior lawyer at CMS Belgium, specialist in new technologies law and cybersecurity law