(Photo: Jeff Chiu/AP) When she co-led Google's ethical artificial intelligence team, Timnit Gebru was a prominent insider voice questioning the tech industry's approach to AI. Now Gebru is trying to make change from the outside as the founder of the Distributed Artificial Intelligence Research Institute.

Recently, Geoffrey Hinton, the visionary expert who was at the heart of so much innovation in artificial intelligence and machine learning, left Google. In an interview with CNN, he said, “I’m just a scientist who suddenly realized that these things are getting smarter than us. I want to sort of blow the whistle and say we should worry seriously about trying to stop these things getting control over us.”

Hinton has been called the “Godfather of AI” because he was one of the seminal figures in the 1980s who worked on techniques, such as backpropagation, that have been pivotal in creating today’s large language models and generative AI like ChatGPT.

If the surge of generative AI is creating a battlefield between social and corporate values, and its pioneers are scientists like Hinton who started the fight and decided to leave as things got dirty, what values are we teaching our next generation of scientists? Hinton said he is blowing the whistle, but this looks nothing like whistleblowing. If he really wants to blow the whistle, he should tell us what is happening behind Google’s doors.

This is what other computer scientists, such as Timnit Gebru, then a co-lead of Google’s ethical AI team, did. She was fired after co-writing a paper that critically examined Google’s search engine and its use of large language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

When the CNN interviewer asked about Gebru’s critique, Hinton said, “They were rather different concerns from mine. I think it’s easier to voice concerns if you leave the company first. And their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Among other things, this could be an instance of undermining the courageous act of an African woman who raised ethical issues much earlier than the godfather did, or an indication that Hinton knows and will disclose something far beyond what Gebru warned us about. I suspect the former is much more likely. For context, Gebru’s paper was published in March 2021, long before ChatGPT’s release and the subsequent avalanche of publications about the social, legal and ethical concerns raised by large language models.

(Photo: Chloe Ellingson/The New York Times) Geoffrey Hinton, an artificial intelligence pioneer, at his home in Toronto on April 24, 2023. Hinton decided to leave Google to freely share his concern that AI could cause the world serious harm.

Among other issues, Gebru and her colleagues highlighted the risks and biases of large language models, as well as their environmental and financial costs, inscrutability, illusion of meaning, and potential to manipulate language and mislead the public.

Are these different from Hinton’s concern? Yes, because unlike Hinton’s vague and science fiction-y claim that AI is going to take over, Gebru’s concerns were unambiguous and specific. Furthermore, unlike Hinton, who followed his “stop these things from getting control over us” with “it’s not clear to me that we can solve this problem,” Gebru and her co-authors offered very specific recommendations: “Weighing the environmental and financial costs first, investing resources into curating and carefully documenting data sets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models,” they wrote.

Now that Hinton has left Google, will he really blow the whistle? His current position does not suggest so, because he thinks “the tech companies are the people most likely to see how to keep this stuff under control.”

This could imply many things, one of which is that tech companies could charge us, directly or indirectly, to keep this technology under control, like antivirus software, or that they may use this technology to blackmail citizens when needed. Would they do these things, though? Maybe, maybe not, but they certainly could.

What we can hope for, though, is for Hinton to act like a responsible scientist and prioritize social interests over commercial ones. He can act like a true whistleblower and disclose meaningful, specific information about what is happening in the tech industry, beyond punchy, dramatic lines such as “These things are going to take over.” Perhaps this way, he could leave a legacy better than a godfather’s, protecting the real family and showing loyalty to us, the people, and not to Google.

Mohammad Hosseini, Ph.D., is a postdoctoral scholar in the preventive medicine department at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy and an associate editor of the journal Accountability in Research.
