Will artificial intelligence revolutionize the search for information?

Before answering the question of artificial intelligence's contribution to the search for information, let us look back at the major phenomena that have shaken up this discipline.

The first search tools were directories. Sites were listed there by humans in a tree structure. These tools provided the first Internet users with large catalogues of sources, classified by category. But Internet users, looking for simplicity and speed, abandoned the directories. Gone are Lycos, Nomade, and even the Open Directory, whose contributors were volunteers on the participatory model of Wikipedia. The first-born of them, Yahoo!, finally stopped maintaining its directory in 2014, after 25 years of loyal service. Yet the practice of directories lives on among those who search for information in open sources. One of the qualities required in this profession is to maintain and update a directory of sources on which to carry out investigations. So, if the general directories are dead, any good "OSINTeur" has a personal directory of sources, and in particular of databases whose results are not indexed by search engines.

With the active help of Internet users, it was algorithms that got the better of generalist directories. Search engines such as AltaVista, then Google, Bing, Yandex, or Baidu offered faster and simpler solutions: a robot crawls the web and indexes millions of pages, and the Internet user queries that index through an interface that allows advanced searches and sorting of results. These tools are criticized for the ability to configure their algorithm to limit or steer results (towards those who have paid, towards those who defend one ideology to the detriment of another, or even towards those the Internet user would tend to prefer based on his habits). Despite everything, advanced use of search engines saves time when searching for information in open sources. But while the vast majority of Internet users are content with these engines, and most often with just one, such tools are not sufficient for open-source research. The key to effective research of relevant information in open sources lies in the construction of a research plan, a method of investigation. While it may share a common core across many searches, it must be adapted to the subject, the sources, and the first results obtained. In OSINT we talk a lot about the rebound strategy: knowing how to bounce from a piece of information found, or from an absence of information, to a new line of investigation.
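The crawl-index-query cycle described above can be illustrated with a minimal sketch. The pages and their texts here are invented stand-ins for real crawled web pages; a real engine would of course add ranking, stemming and much more.

```python
# Minimal sketch of the crawl-index-query cycle: build an inverted index
# from a small in-memory "corpus", then answer a multi-term AND query.
def build_index(pages):
    """Map each word to the set of page ids containing it (an inverted index)."""
    index = {}
    for page_id, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(page_id)
    return index

def query(index, *terms):
    """Return the pages containing all query terms (a simple AND search)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Invented corpus standing in for crawled pages.
pages = {
    "p1": "open source intelligence methods",
    "p2": "search engine indexing explained",
    "p3": "open data and search tools",
}
index = build_index(pages)
print(sorted(query(index, "open", "search")))  # → ['p3']
```

The index is built once at crawl time; each query then only intersects a few precomputed sets, which is what makes engines fast at search time.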

In recent years, we no longer speak of searching for information in open sources, or of cyber-documentation, but of OSINT (Open Source Intelligence). While this new name did not really upend the trade, it contributed greatly to democratizing the practice and making it more attractive. Without going so far as to call it a fad, we have seen a proliferation of directories of OSINT sources and examples of applications, often more in the realm of leisure than of business. This dynamic has led to the development of apps or simple programs to automate a task, run queries, compile information, and so on. These programs, most often available on GitHub, are not accessible to the general public, because they require some knowledge of computing or of development, in Python for example.
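As an illustration of the kind of simple program mentioned above, here is a hypothetical Python sketch that compiles results gathered from several sources, deduplicates them by URL, and notes which sources corroborate each finding. The source names and URLs are invented for the example.

```python
# Hypothetical OSINT helper sketch: merge per-source result lists,
# deduplicate URLs, and record which sources corroborate each one.
from collections import defaultdict

def compile_results(results_by_source):
    """Merge {source: [urls]} into {url: sorted list of corroborating sources}."""
    merged = defaultdict(set)
    for source, urls in results_by_source.items():
        for url in urls:
            merged[url.strip().lower()].add(source)
    return {url: sorted(sources) for url, sources in merged.items()}

# Invented sample data standing in for results collected from two engines.
collected = {
    "engine_a": ["https://example.org/report", "https://example.com/page"],
    "engine_b": ["https://example.org/report"],
}

compiled = compile_results(collected)
for url, sources in sorted(compiled.items()):
    print(f"{url} <- {', '.join(sources)}")
```

Listing corroborating sources per URL supports the rebound strategy: a result confirmed by several sources, or found by only one, each suggests a different next line of investigation.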

In parallel with algorithmic search, a social approach grew on the web. It is fundamentally different in its access to information: it is no longer a matter of looking for specific information but of consulting what one's entourage (one's "friends") offers or shares. Some search engines have tried to integrate a social layer into the consultation of their algorithmic results. Thus, Google tried Google+, which favored results already liked or shared by those around the user. Others have experimented with improving access to social information by integrating search engines, but have had to back down quickly, for regulatory reasons in particular. Facebook Graph Search, for example, had to considerably reduce its ambitions so as not to reveal too much about its users.

For some time now, artificial intelligence has been able to answer simple questions. Its use is improving to the point of generating increasingly sophisticated texts in response to sometimes complex questions. It thus participates in the search for information by proposing an alternative to the research plan: an answer (and often only one) to a question asked in natural language. Most of these chatbots are not able to browse the Internet to feed their responses: their knowledge is limited to what is contained in their corpus and depends on their configuration and their "training". But the challenge for the players investing in AI is precisely to train their robots on the web in order to broaden their fields of knowledge. Does this mean that search engines will disappear tomorrow in favor of conversational robots that directly provide answers to Internet users' questions? If the Internet user succumbs to the ease and simplicity of use, this is a possible, even probable, outcome. And since there is unfortunately a tendency to want only a simple, even binary, answer to a sometimes complex problem, the Internet user does risk giving in to the easy use of these robots. It is a safe bet that their use will continue to grow. Remember that the success of the first search engines was already based on Internet users' pursuit of simplicity, almost a refusal to think, leaving this "intelligence" to the tool.

But what if the Internet user wanted to keep thinking, to stay in control of his research, to compare and contrast results, analyze the relevant sources, cross-check information, and so on? Then he would know how to use each of the logics of information retrieval listed in this article wisely: a good directory of sources, algorithms to facilitate investigations, applications and programs to automate them, and artificial intelligence to sketch out some more complex mechanics. But he would remain at the controls, orchestrating. As stated previously, the key to an effective and successful search for information lies in the creation of an ever-new investigation methodology, certainly based on those already carried out in the past, but also on the results obtained. Today, however, algorithms, programs and robots only repeat, compile and concatenate, admittedly over a large quantity of collected data and acquired experience, but they do not know how to create. They are not endowed with imagination… Yet the best quality for excelling in the search for information is to be creative, imaginative, with as few limits and frameworks as possible; in short, to think differently!

François Jeanne-Beylot
