Think cyberattacks are dangerous? Wait until hackers use artificial intelligence

A team of cybersecurity researchers predicts the arrival of AI-based malware, and larger-scale attacks, by 2028.

Artificial intelligence is already used to detect cybersecurity threats. It is only a matter of time before it is also used by hackers. Cybersecurity researchers from WithSecure, working with a Finnish government agency, predict in a report published on December 14 that the first such attacks should appear within about five years.

Currently, hackers exploit known flaws in interfaces to trap their victims and drop their malware. Artificial intelligence requires resources to develop, and therefore specialists capable of programming it. For the moment, ransomware is quite effective and earns hackers millions, but cybersecurity solutions are also multiplying, and the most sensitive sectors, which hold the most valuable data, are investing in their protection.

In this game of cat and mouse, artificial intelligence will give hackers a significant boost. Since AI learns from data, developers will need to program their software to find a solution across a variety of scenarios.

The many dangers of AI

Researchers from WithSecure and the Finnish agency have listed the advantages AI-based malware would offer:

  • Speed of execution: AI can be used to automate tasks that are done manually today, such as extracting information or discovering vulnerabilities in software. These tasks can run on a machine at a much faster pace.
  • Better preparation: AI-based cyberattacks can analyze and reason over larger amounts of data, and therefore explore more potential vulnerabilities. Where a human might miss information, the program can optimize the search and leave no lead unexplored.
  • Sophisticated infiltration: targeted phishing can be personalized for each victim, adapt to their behavior, and blend in with normal use of the system. It will also be able to adjust to different interfaces and learn during the infiltration. If the first attempt fails, the second will be more effective.
An email written by ChatGPT that could be used as phishing to impersonate a company’s HR department. // Source: Numerama
  • A wider attack: an attacker will be able to launch automated operations against many targets simultaneously. The attack can immediately be scaled up, with many customizations and different targets.
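To see why the last two points worry defenders, note that per-target personalization at scale amounts to little more than filling a template for each victim. The following is a purely illustrative sketch; the names, fields, and template are all invented for this example and do not come from the report:

```python
# Hypothetical illustration: per-target message personalization scales trivially.
# All data below is invented for this sketch.
TEMPLATE = (
    "Hi {name},\n\n"
    "The {department} team has updated the {service} policy. "
    "Please review the attached document before {deadline}.\n"
)

targets = [
    {"name": "Alice", "department": "HR", "service": "payroll", "deadline": "Friday"},
    {"name": "Bob", "department": "IT", "service": "VPN", "deadline": "Monday"},
]

def personalize(template: str, target: dict) -> str:
    """Fill the template with one target's details."""
    return template.format(**target)

# One tailored message per target, produced in a single pass.
emails = [personalize(TEMPLATE, t) for t in targets]
print(len(emails))
```

The point of the sketch is only that the marginal cost of each additional tailored message is near zero; an AI-assisted attacker would replace the static template with generated text, but the scaling logic stays the same.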

In many cases, AI can help a hacker trap a victim without necessarily breaking into a system. Deepfakes and vishing, voice phishing over the phone, are methods already in use and are expected to grow in the future. Whatever the technological advance, humans always end up finding a malicious use for it.

