Mohammad Hosseini – Chicago Tribune

Mohammad Hosseini: What will happen to generative AI after November’s election? (June 10, 2024)

We’ve seen it play out time and again. Industries that lack sufficient government oversight prioritize short-term profits over long-term sustainability and can cause significant harm to the economy, society and the environment. Consider social media, Big Oil and real estate.

We’re now staring down the barrel of the next industry in dire need of stronger government guardrails: generative artificial intelligence, or GenAI. 

Over the past year and a half, federal agencies, among many others, have used GenAI models such as ChatGPT to generate text, images, audio and video, making GenAI a priority concern for the U.S. government. And while we may assume President Joe Biden and former President Donald Trump are at opposite ends of the political spectrum on this issue, as they are on practically every issue, their approach to AI has actually been very similar. They have pampered AI developers with significant funding and deregulation, giving them global leverage, credit and visibility. 

While Biden and Trump have expressed concerns about citizens’ privacy, safety and security, the way they have regulated AI shows they’re actually on the developers’ side. Where they do diverge is in their climate policies. Given the current stage of GenAI development and adoption, and data centers’ dire need for more energy, it is the next president’s climate policy that will affect GenAI developers the most. It is worth exploring how GenAI has expanded under Biden and how it might fare if Trump wins this year’s election.

GenAI developers have grown significantly during the Biden years, partly thanks to his administration’s various favorable policies. Last July, the Biden administration announced that it had secured voluntary commitments from seven AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — under the guise of underscoring “safety, security and trust.” But in reality, these so-called commitments were more like gifts because their scope is limited to GenAI tools that are more powerful than existing ones. For example, the commitments require public reporting of capabilities, limitations, areas of appropriate and inappropriate use, societal risks, and effects on fairness and bias, but only for more powerful AI models, thus offering carte blanche for existing ones.

In September, the Biden administration announced new voluntary commitments with identical stipulations and scope for eight other AI companies — Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability. One month later, Biden signed an executive order on AI, which offered GenAI developers even more favors, such as ordering federal agencies to support AI development and promote its use.

Perhaps the most compelling aspect of the executive order for GenAI developers is in Section 10.1(f)(i), which discourages federal agencies from “imposing broad general bans or blocks on agency use of generative AI.” This means that even in cases when an oversight agency has reason to believe that using GenAI is harmful, it cannot ban its use. Given this support, it is no surprise that GenAI developers have grown significantly during the Biden years. But if Trump is reelected, developers may have to adapt to a new regulatory and investment environment.

Trump is no stranger to AI. Like Biden, Trump emphasized that federal agencies should consider limiting regulatory overreach, thereby reducing barriers to innovation and growth regardless of the risks. In February 2019, Trump signed an executive order to maintain American leadership in AI and launched the American AI Initiative. Among other things, this executive order directed federal agencies to prioritize research and development in AI, enhance access to high-quality federal data and computing resources for AI researchers, set AI governance standards and build the AI workforce. In February 2020, Trump committed to doubling nondefense research and development investment in AI over two years. In December 2020, he signed an executive order on promoting the use of AI in the federal government.

These examples show that Trump’s strategy toward AI was very similar to Biden’s, and one could even argue that Biden’s policies mirrored and continued those started by Trump, except Biden seems more lenient and has offered more specific favors. But other differences between Biden and Trump could affect AI developers, with the most notable being their disagreement regarding climate change. 

Just as Biden brought the U.S. back into the Paris climate accord on his first day in office, Trump could withdraw from the agreement after being reelected. Such an enormous shift in international commitments could alter the business environment for GenAI developers. Recall that in 2017, when Trump announced his plans to withdraw from the Paris agreement, 25 companies, including Apple, Google, Meta and Microsoft, published an open letter in newspapers urging the administration not to exit the agreement. 

The letter highlighted the negative impact a withdrawal would have on competitiveness, jobs and economic growth, as well as risks over time, including damage to facilities and operations and a competitive imbalance for American companies. However, at the current pace of development and deployment of GenAI — and the expected trajectory of a growing need for energy and, concurrently, a growing environmental footprint — remaining in the agreement could slow the growth and uptake of GenAI.

GenAI developers may secretly hope for Trump to win to benefit from more lenient climate policies. Also, military use of GenAI may be affected by whether Biden or Trump wins. With the unrestricted use of AI during the Israel-Hamas conflict and successful applications by the Department of Defense, demand for these capabilities will grow.

If international conventions are drafted to regulate military use, the president’s endorsement, or lack thereof, will significantly affect developers’ global business.

Mohammad Hosseini is an assistant professor at Northwestern University. His research focuses on topics related to technology ethics, including artificial intelligence.


Mohammad Hosseini and Kristi Holmes: AI tools benefit developers and can be a liability for the rest of us (Nov. 27, 2023)

A year ago this month, OpenAI released ChatGPT — a free generative artificial intelligence chatbot that creates text in response to user prompts.

With its launch, millions of people started using ChatGPT for tasks such as writing school essays, drafting emails and personal greetings, and retrieving information. Increasingly, more people and public offices are using ChatGPT to improve productivity and efficiency, instantaneously conducting sophisticated tasks that are typically beyond human abilities.

Publicly available reports show that in this year alone, 21 federal departments and agencies have used ChatGPT or similar systems to serve Americans, with the Departments of Energy, Health and Human Services, and Commerce being the top three users. Governmental uses of these systems may benefit the public by reducing costs or improving services. For example, U.S. Customs and Border Protection has improved the speed and trustworthiness of data entry and analysis, and the Department of Veterans Affairs has developed physical therapy support tools.

ChatGPT and similar systems can change work processes and human interactions across many domains — and, in doing so, create ethical, legal, social and practical challenges. One challenge involves an unequal distribution of benefits and burdens.

Companies such as the Silicon Valley giants that develop these systems, or integrate them into existing workflows, continue to benefit most of all. Even in the case of users whose daily tasks can be completed faster, their employers benefit more from ChatGPT in the long run. This is because with more efficiency comes lower labor costs. More importantly, when these technologies are fully incorporated, they can even replace workers with cheap and reliable robots, which is already happening in spaces such as Amazon warehouses.

A typical response to these changes is that new technologies have disrupted work throughout history and humans have always adapted. This is a straw man argument that doesn’t directly address the nuances of the current situation. Furthermore, it distracts us from better understanding the impacts on people and holding responsible those who contribute to and will benefit the most from this new dynamic.

Speaking of benefits, by helping us write clearer and faster or offering some assistance with digital tasks, ChatGPT and similar tools afford more time for interesting tasks such as ideation and innovation. Here, the short-term impact seems to be positive.

However, the very use of this technology creates ethical challenges. Ultimately, the companies advancing these technologies are disproportionately better off because they not only collect valuable user data — which can make users vulnerable in the future — but also, their systems are trained with free labor.

These gains will allow them to offer specialized secondary services, for example, to companies that employ workers for office-based jobs. This lucrative future should explain why shares of Microsoft, a major investor in OpenAI, increased 52.26% over the last year — from $247.49 on Nov. 25, 2022, to $377.44 on Nov. 20, 2023 — and were not affected even after OpenAI’s bitter power struggle that led to the firing of its CEO, Sam Altman, on Nov. 17. In another twist, Altman is returning as CEO and will answer to a new board.

In the past year, domestic and international organizations have made moves toward encouraging responsible use of these systems. For example, the White House recently issued an executive order on AI with directives to guide AI use. Likewise, the Paris-based Organization for Economic Cooperation and Development recently launched the OECD AI Policy Observatory. This observatory offers information about trustworthy AI policies and data-driven analysis through linked resources and country-level dashboards.

However, governments’ inability to mandate that developers disclose the data used to train these systems clearly demonstrates that policies can go only so far. Transparency and equity are key to improving generative AI tools: How can we know how to improve these models without knowing their inputs?

Indeed, training data should be inclusive and come from reputable and reliable sources, and algorithms should be unbiased and continually scrutinized. Such measures are essential for future development of these tools and can be achieved only with more transparency from all involved parties.

Furthermore, if developers of these tools hope to make major and meaningful social impacts, they should carefully consider the influence of these systems on current and future affairs. Some of our biggest global challenges include climate change, pandemics, immigration and conflicts in Africa, the Middle East and Europe, to name but a few. Although ChatGPT can evaluate existing data or generate new textual content to help us analyze and communicate about these critical issues, it can also be used nefariously.

Use of ChatGPT to create misinformation is challenging democratic values. The deluge of misinformation has already compromised our ability to understand and engage meaningfully with global challenges and will likely grow in severity.

For example, AI-generated images and troves of false texts have contributed to misinformation about global conflicts, which has influenced the public’s perception of events and contributed to polarizing opinions. When misinformation and fake news are created about climate change, pandemics, immigration, health topics and more, it becomes increasingly difficult to unite people and mobilize them to address these issues.

Whether or not we personally use ChatGPT and other AI systems, our lives will be affected by them. We may wonder what we can do to be informed in this new age of AI. We can begin by advocating for information and media literacy and using technologies such as ChatGPT critically while keeping in mind the inherent biases and inequities that exist in these tools as well as the data used to train them.

Generative AI is here to stay, and to realize the full promise of these systems, we must leverage them safely and responsibly.

Mohammad Hosseini, Ph.D., is an assistant professor in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine. Kristi Holmes, Ph.D., is a professor of preventive medicine and the director of Galter Health Sciences Library at Northwestern’s Feinberg School of Medicine.


Mohammad Hosseini: Big Tech’s voluntary commitment to responsible AI offers us false assurances (Aug. 15, 2023)

In response to widespread concerns over artificial intelligence, representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI met last month with President Joe Biden’s administration to help move toward safe and responsible AI technology.

As part of that meeting, the White House released a document that underscores the principles of safety, security and trust as fundamental to AI’s future. It is, however, quite surprising that the document available online does not mention the White House; it is devoid of any official markers such as standardized formatting, letterhead, authorship or source, a signature, a reference number or even a date.

These issues aside, when dealing with high-tech giants such as Amazon, Google and Microsoft, the notion of voluntary commitment is more like a joke than anything serious. Indeed, even in cases in which established laws exist and commitments are mandatory, these companies often have the financial and legal resources to navigate around them, sometimes pushing the boundaries of what’s permissible and bending the rules to their benefit.

Google and Amazon’s union-busting efforts, Facebook’s Cambridge Analytica scandal, and Microsoft and OpenAI’s copyright violations are only the tip of the iceberg, demonstrating that regulators already have a hard time enforcing existing mandatory laws. So how can voluntary commitments be expected to yield better results?

Regarding the challenges with implementation and effectiveness, the variability in interpretation stands out because companies could interpret and implement these commitments in dissimilar ways. Particularly in cases of potential competitive disadvantage — such as when abiding by these commitments could be against the financial bottom line — flouting or interpreting the guidelines in a lax manner could be a matter of life and death for these companies. This is likely given the international nature and implications of AI development.

The White House’s consultations with a broad spectrum of countries — Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the United Arab Emirates, and the United Kingdom — indicate an appetite for creating an international front to address these issues effectively. But history has shown that aligning on international agreements, especially on issues with deep economic implications, often yields more rhetoric than action. (Climate change and the Paris climate accord are a shining example.) Essentially, creating a cohesive and enforceable international framework clashes with the reality of geopolitical competition.

In the case of AI, the U.S.’ major technological competitor, China, is not listed among the countries the White House consulted with. Will the U.S. or its allies forgo a competitive edge by committing to guidelines and moral principles that other nations choose not to adopt? Again, the example of climate change is illuminating. The U.S. and China are the world’s top two carbon emitters, and despite showing good intentions to curb their emissions, they compete in developing solar energy equipment, so their interests keep clashing, and consequently, global efforts to combat climate change are negatively affected.

Lastly, it is also important to note that voluntary commitments tend to offer false assurance, creating an illusion of safety and security, even when underlying issues persist and nothing changes in practice. What makes the notion of voluntary commitments in the case of AI more troublesome is that it puts users’ safety, security and trust at the mercy of companies that already have bad records and can use technological competition with China as a convenient excuse to disregard their voluntary commitments.

Thus, it’s vital to be critical toward soft measures such as voluntary commitments and advocate for devising more comprehensive and internationally inclusive solutions that include monitoring and evaluation regimes as well as sanctions for the companies and countries that don’t comply.

Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy and an associate editor of the journal Accountability in Research.


Mohammad Hosseini: How much should AI concern us? We need real guidance, not vague alarmism. (May 22, 2023)

Recently, Geoffrey Hinton, the visionary expert who was at the heart of so much innovation in artificial intelligence and machine learning, left Google. In an interview with CNN, he said, “I’m just a scientist who suddenly realized that these things are getting smarter than us. I want to sort of blow the whistle and say we should worry seriously about trying to stop these things getting control over us.”

Hinton has been called the “Godfather of AI” because he was one of the seminal figures in the 1980s who worked on techniques, such as backpropagation, that have been pivotal in creating today’s large language models and generative AI like ChatGPT.

If the surge of generative AI is creating a battlefield between social and corporate values, and its pioneers are scientists like Hinton who started the fight and decided to leave as things started getting dirty, what values are we teaching our next generation of scientists? Hinton said he is blowing the whistle, but this looks nothing like whistleblowing. If he really wants to blow the whistle, he should tell us what is happening behind Google’s doors.

This is what other computer scientists such as Timnit Gebru, a leader in AI ethics research at Google, did. She was sacked after co-writing a paper that critically explored Google’s search engine and its use of large language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

When the CNN interviewer asked about Gebru’s critique, Hinton said, “They were rather different concerns from mine. I think it’s easier to voice concerns if you leave the company first. And their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Among other things, this could be an instance of undermining the courageous act of an African woman who raised ethical issues much earlier than the godfather or an indication that Hinton knows and will disclose something far beyond what Gebru warned us about. I suspect the former to be much more likely. Just for context, Gebru’s paper was published in March 2021, long before ChatGPT’s release and the subsequent avalanche of publications about social, legal and ethical concerns related to large language models.

Geoffrey Hinton, an artificial intelligence pioneer, at his home in Toronto on April 24, 2023. Hinton decided to leave Google to freely share his concern that AI could cause the world serious harm.

Among other issues, Gebru and her colleagues highlighted the dangerous risks and biases of large language models and their environmental and financial costs, inscrutability, illusion of meaning, and potential for language manipulation and misleading the public.

Are these different from Hinton’s concern? Yes, because unlike Hinton’s vague and science fiction-y claim that AI is going to take over, Gebru’s concerns were unambiguous and specific. Furthermore, unlike Hinton, who followed his “stop these things from getting control over us” with “it’s not clear to me that we can solve this problem,” Gebru and her co-authors had very specific recommendations: “Weighing the environmental and financial costs first, investing resources into curating and carefully documenting data sets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models,” Gebru and her co-authors wrote.

Now that Hinton has left Google, will he really blow the whistle? His current position does not suggest that because he thinks “the tech companies are the people most likely to see how to keep this stuff under control.”

This could imply many things, one of which is that tech companies could charge us directly or indirectly to keep this technology under control, much like antivirus software, or that they may use this technology to blackmail citizens when needed. Would they do these things, though? Maybe, maybe not, but they certainly could.

What we can hope for, though, is for Hinton to act like a responsible scientist and prioritize social over commercial interests. He can act like a true whistleblower and disclose meaningful and specific information about what is happening in the tech industry beyond punchy or dramatic lines: “These things are going to take over.” Perhaps this way, he could leave a legacy better than a godfather, allowing him to protect the real family and show loyalty — to us the people and not Google.

Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the preventive medicine department at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy and an associate editor of the journal Accountability in Research.


Mohammad Hosseini: Should we bring AI into hospitals? Let’s find the middle ground. (April 10, 2023)

Recently, two major news stories broke in the technology world. The first was about a call by big names in the technology sector to pause the development of artificial intelligence. The second was about the use of large language models, or LLMs, in health care. That followed a recent interview with OpenAI CEO Sam Altman, who revealed that ChatGPT and other applications based on LLMs will allow us to “have medical advice for everybody.”

Some technology leaders are calling for pausing AI development altogether, while others suggest that we should integrate AI into one of the most vital sectors of society: namely, health care.

If this seems confusing, it is — because both ideas are radical and can be seen as the opposite ends of a spectrum regarding technology. But I think we can find a middle ground.

The idea of pausing the development of technology indicates misconceptions about how science and technology evolve. Technological developments arise organically when a combination of social demands, enthusiastic investors and a vision for harnessing innovation is present. Investments can be loosely supervised, but the other two factors cannot be paused. Indeed, pausing is not only unfeasible, but it is also dangerous because it deters transparent development and communication about recent improvements.

Technology is like an unstoppable train that runs on tracks we’ve laid. To minimize the risk of harm, it is essential to be proactive and strategically guide technology’s development, steering it away from areas with high potential for harm and directing it toward small-scale experiments.

Let’s take health care, for instance. Pausing AI development completely would mean losing out on potential benefits that time-strapped clinicians could use to improve care. For instance, clinicians might one day use LLMs to write letters to insurance companies and investigate medical notes for tracing liabilities.

That said, as an ethics expert, I cannot ignore my responsibility to warn society about the risks and trade-offs of a hurried approach regarding the integration of LLMs in health care. These efforts could pave the path to collecting patients’ health data, which might include medical notes, test results and all kinds of information.

Thanks to OpenAI’s newly released GPT-4, which can understand and analyze images in addition to text, scans and X-rays could be among the collected health data. Data collection efforts often start with offering applications that facilitate efficiency. For example, analyzing notes to summarize a patient’s history, which could be a major help for overworked clinicians, could be the pretext needed to collect a patient’s historical data.

So what is the middle ground?

Whether we like it or not, vital sectors including health care use technologies that collect our data. While we are not even remotely prepared for the integration of LLMs in health care, pausing their development is not the solution. Like other technologies, LLMs will eventually be integrated into sectors like health care, and so small-scale experiments open up space for reflection and evaluation of their strengths, weaknesses, opportunities and threats. Furthermore, experimenting with LLMs allows their developers to collate and address concerns around privacy, data security, accuracy, biases and accountabilities, among others.

Ethical issues aside, incorporating LLMs into existing health care systems — while also navigating legal issues — is not only extremely challenging, but it also takes time and requires implementation, rather than a pause, to surface potential legal obstacles. The medical landscape is heavily regulated and has all kinds of checks and balances to protect patients, clinicians, health care providers and wider society.

In some scenarios involving AI that are, for now, hypothetical — for example, drawing wrong conclusions from available data, misleading or interfering with a diagnosis, or sharing health data with third parties — our legal systems and the notion of liability could be pushed to their limits because they were not designed to deal with these challenges and thus are not ready for such an enormous shift.

In the case of loosely regulated data — similar to a web browser’s cookies that keep revealing information about us — we don’t know what information can be collected and transferred by LLMs. We have yet to learn about the purposes for which our data could be analyzed, where it will be stored, who will have access, how well it will be protected and so many other unknowns. While pausing is not helping any of these, a cursory adoption could be catastrophic.

So the middle ground involves cautious experimentation with small-scale LLMs and evaluating their performance, while observing what their developers will do with our information and trust.

Instead of pausing AI, we should collectively negotiate with AI developers, demanding good faith and transparency to ensure that technology will not make us vulnerable in the future.

Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the preventive medicine department at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy and an associate editor of the journal Accountability in Research.

