Destructive AI cyberattacks in 5 years? The experts are sounding the alarm!

Powered by artificial intelligence, devastating cyberattacks of a new kind could appear within 5 years, according to a disturbing report from Finland. Ultra-realistic deepfakes, automated malware deployment… here is a glimpse of the future of cybercrime!

Artificial intelligence is the source of many innovations. Over the past few months, AI has notably transformed the art world thanks to generative models such as DALL-E 2 or Midjourney. More recently, the ChatGPT chatbot has been creating a sensation by answering any question instantly.

However, this technology also hides a dark side. In the near future, AI could enable hackers to launch devastating, potentially fatal cyberattacks.

This is revealed in a report published by the cybersecurity company WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency.

The report looks at current trends and advances in the field of AI, in cyberattacks, and in the areas where the two intersect.

According to this alarming study, artificial intelligence will prove extremely effective at identity theft, a technique frequently used for phishing.

How AI will boost the strike power of hackers

According to WithSecure researcher Andy Patel, "although AI-generated content has already been used for social engineering, AI techniques designed to direct campaigns, perform attack steps, or control malware have not yet been observed in the wild".

However, the expert believes that it will not be long. According to him, such techniques "will first be developed by highly skilled and well-resourced cybercriminals, such as government-affiliated groups".

Afterwards, "after new AI techniques are developed by these sophisticated cybercriminals, it is likely that some will become accessible to less skilled attackers and become prevalent in the field of cybercrime".

Currently, AI-based attacks remain infrequent and are mainly used for social engineering. The reason is simple: current AIs do not come close to human intelligence and cannot plan or carry out cyberattacks autonomously.

However, cybercriminals are likely to create new AI capable of identifying vulnerabilities, planning and carrying out attacks, using stealth to circumvent defenses, and aggregating data from infected systems.

According to the report, such AI could see the light of day in just 5 years. The document explains that "AI attacks can execute faster, target more victims, and find more attack vectors than conventional attacks because of the nature of intelligent automation and the fact that they can replace traditionally manual tasks".

The advent of the era of deepfakes

AI cyberattacks will particularly excel at identity theft, phishing, and vishing. The authors of the report explain that "identity theft based on deepfakes is an example of a new capability that AI brings to social engineering attacks".

Until now, "no previous technology has made it possible to mimic the voice, gestures and image of a human target convincingly enough to deceive victims". Deepfakes could thus well and truly become the biggest cybersecurity threat.

And for good reason: most modern security and authentication systems rely on biometric technologies. This concerns passports, bank accounts, and even our smartphones, which are unlocked by facial recognition or a fingerprint sensor.

Given the speed at which deepfakes are developing, security systems that rely mainly on these technologies are at high risk.

These attacks could in particular use synthetic content to thwart biometric authentication systems. New methods will therefore be required to combat AI-based hacking, and the study suggests that adopting preventive measures is the key to dealing with these threats.
