A CardioCloud electrocardiogram recorder is on display during the 2021 World Artificial Intelligence Conference at the Shanghai World Expo Center on July 10, 2021, in Shanghai, China.
Visual China Group

Recently, two major news stories broke in the technology world. The first was a call by big names in the technology sector to pause the development of artificial intelligence. The second was about the use of large language models, or LLMs, in health care, following a recent interview in which OpenAI CEO Sam Altman said that ChatGPT and other applications based on LLMs will allow us to “have medical advice for everybody.”

Some technology leaders are calling for pausing AI development altogether, while others suggest that we should integrate AI into one of the most vital sectors of society: namely, health care.

If this seems confusing, it is — because both ideas are radical and can be seen as the opposite ends of a spectrum regarding technology. But I think we can find a middle ground.

The idea of pausing the development of technology reflects misconceptions about how science and technology evolve. Technological developments arise organically when social demand, enthusiastic investors and a vision for harnessing innovation converge. Investments can be loosely supervised, but the other two factors cannot be paused. Indeed, pausing is not only unfeasible; it is also dangerous, because it deters transparent development and open communication about recent advances.

Technology is like an unstoppable train that runs on tracks we’ve laid. To minimize the risk of harm, it is essential to be proactive and strategically guide technology development by steering it away from areas with high potential for harm and directing it toward small-scale experiments.

Let’s take health care, for instance. Pausing AI development completely would mean losing out on potential benefits that time-strapped clinicians could use to improve care. For instance, clinicians might one day use LLMs to write letters to insurance companies and investigate medical notes for tracing liabilities.

That said, as an ethics expert, I cannot ignore my responsibility to warn society about the risks and trade-offs of a hurried approach to integrating LLMs in health care. These efforts could pave the way for collecting patients’ health data, including medical notes, test results and all kinds of other information.

Thanks to OpenAI’s newly released GPT-4, which can understand and analyze images in addition to text, scans and X-rays could be among the collected health data. Data collection efforts often start with applications that promise efficiency. For example, analyzing notes to summarize a patient’s history, which would be a major help for overworked clinicians, could be the pretext needed to collect a patient’s historical data.

So what is the middle ground?

Whether we like it or not, vital sectors including health care use technologies that collect our data. While we are not even remotely prepared for the integration of LLMs in health care, pausing their development is not the solution. Like other technologies, LLMs will eventually be integrated into sectors such as health care, so small-scale experiments open up space for reflection on and evaluation of their strengths, weaknesses, opportunities and threats. Furthermore, experimenting with LLMs allows their developers to identify and address concerns around privacy, data security, accuracy, bias and accountability, among others.

Ethical issues aside, incorporating LLMs into existing health care systems, while also navigating legal issues, is not only extremely challenging; it also takes time and requires hands-on implementation rather than a pause to explore potential legal obstacles. The medical landscape is heavily regulated and has all kinds of checks and balances to protect patients, clinicians, health care providers and wider society.

In some scenarios involving AI that are, for now, hypothetical, such as drawing wrong conclusions from available data, misleading or interfering with a diagnosis, or sharing health data with third parties, our legal systems and notions of liability could be pushed to their limits. They were not designed to deal with these challenges and thus are not ready for such an enormous shift.

In the case of loosely regulated data, much like the web browser cookies that keep revealing information about us, we don’t know what information LLMs can collect and transfer. We have yet to learn for what purposes our data could be analyzed, where it will be stored, who will have access, how well it will be protected and many other unknowns. Pausing resolves none of these questions, but a hasty adoption could be catastrophic.

So the middle ground involves cautious experimentation with small-scale LLMs and evaluating their performance, while observing what their developers will do with our information and trust.

Instead of pausing AI, we should collectively negotiate with AI developers, demanding good faith and transparency to ensure that technology will not make us vulnerable in the future.

Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the preventive medicine department at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy and an associate editor of the journal Accountability in Research.

Submit a letter, of no more than 400 words, to the editor at letters@chicagotribune.com.