Links on Artificial Intelligence

An evolving list of mostly skeptical takes on Generative AI
Author

Carwil Bjork-James

Published

March 20, 2025

Introductions to Generative AI

  • A jargon-free explanation of how AI large language models work (ArsTechnica article): a particularly eloquent primer on the underlying technology.
  • Possible Minds: Twenty-Five Ways of Looking at AI: If you want to understand both the existential-risk perspective and why human four-year-olds maintain intuitive knowledge that AI has yet to grasp, and especially how the people doing this work see it, this is a thoughtful book filled with long-term (but not “long-termist”) perspectives.
  • ChatGPT Is Not Intelligent (podcast episode) w/ Emily M. Bender.
  • Big picture argument (podcast episode) from Gary Marcus that the current big-data approach won’t produce Artificial General Intelligence, but could contribute to it.
  • On how (even buggy and hallucinatory) LLM output can help you with programming, if you’re a programmer: https://simonwillison.net/2023/Apr/8/llms-break-the-internet/
  • AI and the Illusion of Intelligence (Coursera course): a short, smart online course that takes you from Turing’s article inventing his famous test through the development of LLMs, with a thoughtful argument about intelligence augmentation through technology.

Artificial intelligence machine synthesized by Adobe Firefly

AI news sources

  • Changelog and its sub-podcast Practical AI — Developers talking about technology. Example episodes: an especially clear take on DeepSeek; an interview with a National Institute of Standards and Technology staffer working on AI regulation.
  • TED AI Show podcast — Definitely credulous about AI’s potential, but lots of in-person interviews with AI players, giving insight into how companies are envisioning their plans. Example: this episode taught me more about Google than I knew before.
  • Tech Won’t Save Us podcast — Combines pessimism on the tech hype cycle with political critique of the direction of Big Tech.
  • For deeply technical discussion, there is also the Machine Learning Street Talk podcast.

Persistent critical voices:

  • Emily Bender (Wikipedia | Google Scholar), a linguist who entered the Generative AI discussion as an expert on large language models. Co-author, with Timnit Gebru, of the research paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” Bender’s take is that hype is driving the AI discussion, and that there are substantive ethical problems with current models. More than other researchers, Bender inclines towards detailed collaborative critiques on questions like: Can AIs that pass human benchmark tests be said to have the capacities those tests measure? What are the dangers of using AI for search?
  • Gary Marcus (Wikipedia | Substack blog) — researcher and proponent of a structured approach to building artificial intelligence, who holds that hallucinations and failure to understand edge cases are invariable traits of neural network architectures. Marcus has emerged as a sharp critic of the current generative AI wave, arguing that it is unlikely to ever generate sophisticated intelligence and risks diverting hundreds of billions of dollars, and public trust, in an unproductive direction.
  • Ed Zitron — Technology critic and writer. Focused on declining tech user experience and risks of a financial bubble around generative AI. His general take is that generative AI has yet to show a vital use case and could dramatically underperform market expectations.
  • Melanie Mitchell (Wikipedia | Substack blog) — A very hands-on research expert on AI development and benchmarking who asks hard questions.
  • Jaron Lanier, whose You Are Not a Gadget and Who Owns the Future? raise important questions about the power and economics of technology, while accepting that technology will solve whatever problems it creates. Who Owns the Future? also provides a smart typology of perspectives on technology.
  • Cory Doctorow, futurist sci-fi writer with a hard economic critique of Big Tech in Chokepoint Capitalism.
  • Shoshana Zuboff. It’s worth taking a moment to imagine AI chatbots and on-the-horizon agents not as potentially intelligent machines, but rather as data collectors for the corporations she thoroughly described in The Age of Surveillance Capitalism.

AI establishment voices:

  • Fei-Fei Li, Co-Director of Stanford’s Human-Centered AI Institute, a crossover figure between AI science and the humanities, whose institute coined the term “foundation models.”
  • Geoffrey Hinton, researcher into neural-network-based machine learning. His experience creating vision models whose internal workings are inscrutable has convinced him that knowing how AI works isn’t essential to believing that it works. For good insight into this view, and its contrasts with Li’s, listen to Geoffrey Hinton in conversation with Fei-Fei Li.

A notable moment in the Hinton–Li conversation comes at 1h18, where they are asked whether “we are at the point where we can say [LLMs/foundational models] have understanding and intelligence?” Hinton’s answer is at 1h30. Li responds at 1h35.

Also, the core of Hinton’s perspective may be this account of how knowledge is exchanged: “So the way we exchange knowledge, roughly speaking, this is something of a simplification, but I produce a sentence and you figure out what you have to change in your brain, so you might have said that, that is if you trust me.” “What I want to claim is that these millions of features and billions of interactions between features are understanding. … If you ask, how do we understand, this is the best model of how we understand.” (“Will digital intelligence replace biological intelligence?”, Romanes Lecture, at 14m28s)

Visions of AI entrepreneurs and business leaders

On limitations of implementing AI in workplace settings

Upwork research “surveying 2,500 global C-suite executives, full-time employees, and freelancers in the U.S., UK, Australia, and Canada”:

The majority of AI use appears to be emerging bottoms up, with workers leading the charge. Now, leaders are eager to channel this enthusiasm. Among the increased demands executives have placed on workers in the past year, requesting they use AI tools to increase their output tops the list (37%). Already 39% of companies require employees to use AI tools, with an additional 46% encouraging employees to use the tools without mandating that they do so.

However, this new technology has not yet fully delivered on this productivity promise: Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workload.

For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), invest more time learning to use these tools (23%), and are now being asked to do more work (21%).

Technology author Mayo Olshin: “If, however, you simply ‘trust’ the AI outputs due to lack of knowledge, skill, or willingness to review results, the long term damage will outweigh the initial productivity gains you got so ‘hyped’ about.” A similar experience from David Chisnall on Copilot: “It has cost me more time than it has saved.”