Cybersecurity: AI can cause harmful situations

Artificial intelligence is so ubiquitous that we sometimes forget its presence and the risks associated with its use. (Photo: 123RF)

GUEST BLOG. Let’s start with the elephant in the room: artificial intelligence (AI). Right now, AI is used in just about every sphere of our lives: in banks, when you apply for a loan without having to speak to anyone; in call centers; in online advertising, where targeted ads follow you as you browse the web; in medical diagnostic aid tools; in certain computer security solutions; and so on.

The use of AI is becoming more and more widespread and greatly improves the efficiency of many companies. It has become so ubiquitous that we sometimes forget its presence and the risks associated with its use.

This is a common cognitive phenomenon, and it’s partly why phishing attacks work. We are so used to receiving hundreds of emails that we sometimes forget the risks of opening the attachments they contain.

What is Artificial Intelligence?

Artificial intelligence is a growing field globally. It makes it possible to simulate human intelligence using technology. For those who would like to see an interesting example, I encourage you to try OpenAI’s chatbot. I was very impressed.

An AI-based algorithm may, for example, be able to sort through the applications a company receives to fill a position. The advantage of using such an algorithm is that it reduces the workload of the person analyzing the applications, since only those that meet all the required criteria are forwarded. An immense amount of time is therefore saved.

Moreover, according to the Global AI Index, which compares countries based on their level of investment, innovation and implementation of artificial intelligence, Canada is currently in 4th place.

As for Quebec, artificial intelligence is also growing there. Indeed, the “Portrait of AI in the greater Quebec City region” reports nearly sixty companies that currently offer an artificial intelligence solution.

Montreal is even considered a “world center for artificial intelligence” because of its large concentration of organizations, experts, researchers and students in the field. It is for this reason in particular that large companies that develop artificial intelligence, such as Google, Meta and Microsoft, have opened offices there.

Obviously, the rapid evolution of technologies goes hand in hand with the advent of new risks and that is why the federal government is currently considering the question of artificial intelligence with its Bill C-27.

Federal Bill C-27

In mid-June, Bill C-27, also called the Digital Charter Implementation Act, 2022, was tabled in the House of Commons, where it received its first reading. More recently, in November, the second reading of the bill began.

Bill C-27 seeks to reform federal privacy laws. The objective of this reform is to adapt current laws to the evolution of technology. It is divided into three new acts, including the Artificial Intelligence and Data Act, which would be the first law in the country to govern AI.

Briefly explained, the objective is to regulate artificial intelligence systems used in international and interprovincial trade and commerce. The bill also requires that measures be taken to limit the risks of harm and biased output that may arise from the operation of these systems.

This point is important, because although the use of artificial intelligence has many advantages, it can also cause unwanted and harmful situations.

Consider, for example, an algorithm that makes automated decisions but systematically penalizes certain people for unforeseen reasons, thereby causing discrimination. This can happen when the algorithm is poorly trained, and it can unfortunately take a long time before anyone notices.
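
To make this concrete, here is a minimal, hypothetical sketch of one way such a problem can be surfaced: comparing selection rates across groups and flagging large gaps using the common “four-fifths” rule of thumb. The group names, decisions and threshold below are invented for illustration and are not from the article or the bill.

```python
# Hypothetical sketch: basic disparate-impact check on automated decisions.
# Groups, decisions and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
# A ratio well below 0.8 suggests the algorithm's decisions deserve review.
print(rates, "ratio:", round(ratio, 2), "flag:", ratio < 0.8)
```

A check like this does not prove discrimination, but it gives a simple signal that a poorly trained algorithm may be penalizing a group.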

The same is true of data poisoning, a popular type of cyberattack that consists of tampering with an algorithm’s training data so that it makes bad decisions.

This type of attack is often carried out against filters that detect malicious emails. If the attacker manages to get enough legitimate emails labeled as malicious, the algorithm’s decision criteria become distorted and it gives less weight to the indicators that previously allowed it to identify an email as malicious.

By sabotaging the integrity of the training data, the attacker changes the behavior of the email classification algorithm and can hope that their malicious emails will be classified as legitimate.
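
As an illustration only, here is a minimal sketch of this kind of label-flipping poisoning against a toy naive Bayes spam filter. It assumes scikit-learn is available; the emails, labels and the choice of “urgent” as a spam indicator are invented, and both real filters and real attacks are far more sophisticated.

```python
# Toy sketch (illustrative assumption, not a real attack or a real filter):
# flooding the "malicious" training class with legitimate-looking emails
# dilutes the weight of genuine spam indicators.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_emails = [
    "urgent invoice attached open the link now",   # malicious
    "urgent password reset click the link",        # malicious
    "meeting agenda for tomorrow attached",        # legitimate
    "quarterly report attached as discussed",      # legitimate
]
clean_labels = [1, 1, 0, 0]  # 1 = malicious, 0 = legitimate

# Poisoning step: legitimate-looking emails mislabeled as malicious.
poisoned_emails = clean_emails + [
    "meeting notes and agenda attached",
    "report for the quarterly meeting attached",
]
poisoned_labels = clean_labels + [1, 1]

def indicator_weight(texts, labels, word="urgent"):
    """Probability the filter assigns to `word` given the malicious class."""
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    return np.exp(clf.feature_log_prob_[1, vec.vocabulary_[word]])

print("clean   :", indicator_weight(clean_emails, clean_labels))
print("poisoned:", indicator_weight(poisoned_emails, poisoned_labels))
# The spam indicator's weight in the malicious class drops after poisoning,
# so genuinely malicious emails look less suspicious to the retrained filter.
```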

An attacker might want to sabotage decision support algorithms to harm a competitor, a group of individuals, or even a nation.

Unfortunately, once data poisoning is discovered, it is often too late, because identifying the corrupted data is difficult. Implementing adequate security measures early on makes it possible to detect anomalous behavior in the training data and greatly helps prevent this type of cyberattack.
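
As a sketch of what the simplest such measure might look like (an assumption on my part, not a recipe from the bill), the check below flags a new batch of training labels whose malicious-label rate drifts far from the historical baseline, so a human can review it before retraining.

```python
# Hypothetical, minimal check: flag suspicious shifts in training labels
# before retraining. The baseline rate and tolerance are invented values.
def flag_label_drift(new_labels, baseline_rate, tolerance=0.15):
    """Return True if the malicious-label rate drifts beyond the tolerance."""
    rate = sum(new_labels) / len(new_labels)
    return abs(rate - baseline_rate) > tolerance

# Example: historically ~20% of labeled emails are malicious; a batch where
# 70% are suddenly labeled malicious deserves a human look before retraining.
batch = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(flag_label_drift(batch, baseline_rate=0.20))  # True -> investigate
```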

Bill C-27 also provides that organizations may be ordered to produce certain documents related to their artificial intelligence systems. In addition, organizations will not be able to use illegally obtained personal information in their artificial intelligence systems if that use poses a risk of serious harm to individuals.

Innovation brings new cyber risks

Like it or not, artificial intelligence will probably replace part of the workforce in a nearer future than we would like. Labor shortages and rising wages are only accelerating the adoption of replacement technologies like AI.

At the same time, companies here are still struggling to adopt basic cybersecurity measures. I still regularly come across business leaders who put cybersecurity issues on the same level as simple technical problems, like a broken printer.

We saw it with Bill 25, and we are seeing it now with Bill C-27: businesses are playing catch-up. Even governments are one step ahead of them. Is this good news?

Cybersecurity innovation is an important success factor for several reasons. First, cybersecurity threats are constantly evolving, which means that companies must be able to adapt and develop new approaches to protect their systems and data. Second, consumers and customers place increasing importance on the security of their data and expect companies to implement effective security measures. Finally, cybersecurity innovation can also allow companies to differentiate themselves from their competitors and position themselves as leaders in their field.

*This article was written in collaboration with Me Ariane Ohl-Berthiaume, Head of Legal Affairs at Mondata.
