MedTech: Can ChatGPT save our healthcare system?

Text generation has taken a surprising, sometimes stunning turn in recent weeks with the release of OpenAI's ChatGPT. This famous artificial intelligence, capable of chatting with you "like a human" according to some, opens up prospects for many industries. Even "intellectual" professions are no longer safe from being assisted, or completely replaced, by these algorithms in the near future. It is true that ChatGPT can write a coherent, grammatically sound blog post on many topics and in many languages, but how do we judge the relevance of its recommendations on subjects we do not master?

Health is a sensitive domain, and a good one for assessing the relevance and robustness of this type of solution. For many years, researchers and innovators have been working on understanding what patients say, so as to recognize ailments through their words and offer advice or possible solutions more quickly. "Dr. Google" can play tricks on you: you have probably tried typing a few symptoms into the search engine yourself, only to land on a result telling you that it was, without a doubt, cancer.

Is a model as powerful as ChatGPT able to replace doctors in answering health questions? We did the test for you and spoke with health professionals to analyze the answers.

Is the smart doctor ready to take the Turing test?

Natural language understanding and generation tools have come a long way with the advent of Transformers: models pre-trained on datasets that are sometimes questionable and rarely transparent. The resulting biases exist by construction; they are inherent to the technology, which relies on large-scale mathematical calculation to associate words and infer rules from them.
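To see what that means in practice, here is a minimal sketch of this kind of text generation, using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model (a cousin of the models behind ChatGPT, which is not itself downloadable). Note what the code does not contain: any check that the continuation is medically true.

```python
# Minimal sketch: text generation with a pre-trained Transformer.
# Uses the small public GPT-2 model, not ChatGPT itself.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt with statistically plausible words
# learned from its training data; nothing validates the medical content.
result = generator(
    "After a tooth extraction, if the bleeding does not stop you should",
    max_length=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```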

The most telling example is probably the "smart doctor" that a startup tried to build on GPT-3 technology (you recognized the three letters shared with ChatGPT; that is no coincidence). After two exchanges, this benevolent doctor advised the patient a rather radical solution: suicide.

So, what does ChatGPT think?

[Screenshot: ChatGPT's answer]

Medical advice, self-medication, and not knowing the patient

To go further, I looked at a sector I know well: dentistry. I have been working on post-operative patient support for many months, in tandem with an oral surgeon. Our intelligent assistant is trained on data we control, drawn from real exchanges with real patients, and it responds according to established processes with medically validated advice. Is a general-purpose conversation tool, however powerful, able to handle these topics?

Here is the answer in pictures, analyzed with the support of health professionals.

[Screenshot: ChatGPT's answer to a post-operative bleeding question]

You will notice quite quickly that the answers are always formulated the same way, so as to remain fairly general. That is true here, and also for more trivial questions. When a patient is looking for quick, simple advice, these answers can seem convoluted and inappropriate. Beyond that, for Dr. Jean-David Wolfeler, oral surgeon and co-founder of ASISPO, this answer is dangerous: "Bleeding is not necessarily associated with pain, which is the case here. Advice to take ibuprofen normally follows a more thorough diagnosis. Moreover, in the case of an infectious process, it is not the recommended medication." We know that self-medication is rarely a good idea, yet ChatGPT rushes into the breach without being aware of it. Prescribing falls under the regulations on medical devices, with a CE marking that is far from within OpenAI's reach today.

For Marilyn Michel, dental assistant and trainer, "ChatGPT's advice seems fairly consistent, but for issues like bleeding you also have to think about the patient's emotional state. A purely factual answer that takes a long time to read and understand is unsuitable for a patient in pain. For an automated system to be a good relay with our patients after a procedure at the office, it must take into account the dynamic of the patient/health professional relationship; that is essential." Indeed, the way advice is formulated and delivered, beyond the medical issue itself, can also make this type of tool dangerous on such sensitive subjects.

A lack of context, and of human guarantee

The ChatGPT algorithm has "learned" from generalist data, so it uses fairly long strings (lengthy answers, polished turns of phrase) to give the impression that the answer is valid and verifiable. Obviously the advice cannot be personalized, since no patient information is taken into account; yet a piece of medical history or an operative report can change the advice entirely, or trigger far more complex alerts.
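To illustrate the difference context makes, here is a purely hypothetical sketch (not ASISPO's actual logic, and not medical advice): a few lines of routing code in which a patient's history changes the answer or triggers an alert. Every field name and rule is invented for the example.

```python
# Hypothetical sketch: patient context changes the answer.
# Field names and rules are invented for illustration; this is NOT
# medical advice and NOT the logic of any real assistant.

def answer_bleeding_question(patient: dict) -> str:
    """Route a post-operative bleeding question using patient context."""
    # A generic chatbot has none of this context, so it answers every
    # patient the same way.
    if "anticoagulant" in patient.get("medications", []):
        # A detail from the medical history changes everything.
        return "ALERT: contact your surgeon immediately."
    if patient.get("days_since_surgery", 99) <= 1:
        return "Light bleeding can be normal on day one; apply the validated protocol."
    # Persistent bleeding later on goes to a human for review.
    return "Flagged for review by the care team."

print(answer_bleeding_question(
    {"medications": ["anticoagulant"], "days_since_surgery": 0}
))
```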

To frame this problem on subjects as sensitive as health, the European regulator has decided to enshrine in law the principle of Human Guarantee for artificial intelligence systems. Very concretely, this means there must be traceability and quality control by professionals throughout the construction and operation of an AI system of this type.
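In practice, traceability means keeping an auditable trail of every automated answer so a professional can review it. Here is a minimal, hypothetical sketch of what such a record could contain; the field names are illustrative, not prescribed by any regulation.

```python
# Hypothetical sketch of a traceability record for the Human Guarantee
# principle: every automated answer is logged and can be reviewed by a
# professional. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    question: str
    answer: str
    model_version: str                 # which model produced the answer
    reviewed_by: Optional[str] = None  # professional who signed off, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    question="My gum is bleeding, what should I do?",
    answer="Flagged for review by the care team.",
    model_version="assistant-v1",
)
print(record)
```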

How was the dataset used to train the algorithm built? How are the answers evaluated in a specific case? Can an algorithm like ChatGPT undermine the integrity of its user?

It is clearly indicated, in small print, in the ChatGPT chat window: this is a research preview, not meant to be put into everyone's hands to do anything and everything. Yet the enthusiasm of the millions of users who have had OpenAI's algorithm generate text over the past two weeks seems to forget this caveat.

As you can see, in both substance and form, we are very far from an intelligent assistant capable of supporting your daily life. The technology itself has made sensational progress, let's not be afraid of words, but the ethical issues raised by consumer use arise more than ever. The feeling of an "almost human" interaction is all the more dangerous because it further erodes the boundary of what one can, or should, believe. It is a philosophical subject well known to science-fiction enthusiasts, and one that must be tackled today to avoid abuses.

The contributor:

Thomas Gouritin supports SMEs and large companies in their digital transformations. Producer of the Regards Connectés series (YouTube channel and podcasts), he explores our technological future to popularize complex subjects such as artificial intelligence and to convey pragmatic messages that can be applied in business. The subject of chatbots is unavoidable today; Thomas approaches it pragmatically with project support, conferences aimed at demystifying the subject without "bullshit", and workshops that let everyone get hands-on with design in order to understand, learn, and do.
