What you should know about Artificial Intelligence
Introductions to Generative AI
- A jargon-free explanation of how AI large language models work (ArsTechnica article): a particularly eloquent primer on the underlying technology.
- Possible Minds: Twenty-Five Ways of Looking at AI:1 If you want both the existential-risk perspective and the case for why human four-year-olds hold intuitive knowledge still ungrasped by AI, and especially how the people doing this work see it, this is a thoughtful book filled with long-term (but not “long-termist”) perspectives.
- Artificial Intelligence: A Guide for Thinking Humans:2 Despite being published in 2019, this book by Melanie Mitchell (more on her below) is so technically smart and rigorous that it anticipates many of the limitations, quandaries, and debates that surround generative AI.
- Machines of Loving Grace:3 John Markoff’s deep history of modern computer design and engineering traces how designers of human–computer interactions have been pulled either towards Artificial Intelligence (duplicating human intelligence in software systems) or Intelligence Augmentation (building tools that extend human capacities).
- ChatGPT Is Not Intelligent (podcast episode) with Emily M. Bender.
- Big picture argument (podcast episode) from Gary Marcus that the current big-data approach won’t produce Artificial General Intelligence, but could contribute to it.
- On how (even buggy and hallucinatory) LLM output can help you with programming, if you’re a programmer: https://simonwillison.net/2023/Apr/8/llms-break-the-internet/
- AI and the Illusion of Intelligence (Coursera course): a short, smart online course that takes you from Turing’s article inventing his famous test through the development of LLMs, with a thoughtful argument about intelligence augmentation through technology.
1 John Brockman, ed., Possible Minds: Twenty-Five Ways of Looking at AI (Penguin Press, 2019).
2 Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, first edition (Farrar, Straus and Giroux, 2019).
3 John Markoff, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, first edition (Ecco, 2015), http://books.google.com/books?vid=isbn9780062266682.

AI news sources
- Changelog and its sub-podcast Practical AI — Developers talking about technology. Example episodes: an especially clear take on DeepSeek; an interview with a National Institute of Standards and Technology staffer who regulates AI.
- TED AI Show podcast — Definitely credulous about AI’s potential, but lots of in-person interviews with AI players, giving insight into how companies are envisioning their plans. Example: one episode taught me more about Google than I knew before.
- Tech Won’t Save Us podcast — Combines pessimism on the tech hype cycle with political critique of the direction of Big Tech.
- For an often technical discussion, there is the Machine Learning Street Talk podcast — Recent, more accessible episodes include an interview with the Apple researcher who “exposed deep cracks in LLMs’ ‘reasoning’ capabilities” and a profile of the ARC v2 challenge on human-like reasoning capabilities. You can see the challenge itself in this NYT interactive.
Persistent critical voices:
- Emily Bender (Wikipedia | Google Scholar), a linguist who entered the generative AI discussion as an expert on large language models. Co-author, with Timnit Gebru, of the research paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”. Bender’s take is that hype is driving the AI discussion, and that there are substantive ethical problems with current models. More than other researchers, Bender inclines towards detailed collaborative critiques on questions like: Can AIs that pass human benchmark tests be said to have the capacities those tests measure? What are the dangers of using AI for search? Bender has a new book out in 2025, The AI Con,4 making a practical case that LLMs are being oversold and an ethical case that they aren’t building the future we want.
- Gary Marcus (Wikipedia | Substack blog) — researcher and proponent of a structured approach to building artificial intelligence, who nevertheless thinks that hallucinations and failures on edge cases are invariable traits of neural network architectures. Marcus has emerged as a sharp critic of the current generative AI wave, arguing that it is unlikely to ever produce sophisticated intelligence and that it risks diverting hundreds of billions of dollars, and society’s trust, in an unproductive direction. Marcus published his views in Taming Silicon Valley (2024).5
- Ed Zitron — Technology critic and writer. Focused on declining tech user experience and risks of a financial bubble around generative AI. His general take is that generative AI has yet to show a vital use case and could dramatically underperform market expectations.
- Jaron Lanier, whose You Are Not a Gadget and Who Owns the Future? raise important questions about the power dynamics and economics of technology, while accepting that technology will solve whatever problems it creates. Who Owns the Future? also provides a smart typology of perspectives on technology.
- Cory Doctorow, futurist sci-fi writer with a hard economic critique of Big Tech in Chokepoint Capitalism.
- Shoshana Zuboff. It’s worth taking a moment to imagine AI chatbots and on-the-horizon agents not as potentially intelligent machines, but rather as data collectors for the corporations she so thoroughly mapped in The Age of Surveillance Capitalism.
4 Emily M. Bender and Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (Harper, 2025).
5 Gary F. Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us (The MIT Press, 2024).
Outspoken AI scientists:
- Fei-Fei Li, Co-Director of Stanford’s Human-Centered AI Institute, a crossover from AI science to the humanities who helped coin the term “foundation models.”
- Geoffrey Hinton, researcher into neural-network-based machine learning. His experience creating vision models whose internal workings are inscrutable has convinced him that knowing how AI works isn’t essential to believing that it works. For good insight into this view, and contrasts with Li, listen to Geoffrey Hinton in conversation with Fei-Fei Li.
A notable moment in the Hinton–Li conversation comes at 1h18, where they are asked whether “we are at the point where we can say [LLMs/foundational models] have understanding and intelligence?” Hinton’s answer is at 1h30. Li responds at 1h35.
Also, the core of Hinton’s perspective may be this account of how knowledge is exchanged: “So the way we exchange knowledge, roughly speaking, this is something of a simplification, but I produce a sentence and you figure out what you have to change in your brain, so you might have said that, that is if you trust me.” “What I want to claim is that these millions of features and billions of interactions between features are understanding. … If you ask, how do we understand, this is the best model of how we understand.” (“Will digital intelligence replace biological intelligence?”, Romanes Lecture, at 14m28s)
- Melanie Mitchell (Wikipedia | Substack blog) — A very hands-on research expert on AI development and benchmarking who asks hard questions. She’s the author of Artificial Intelligence: A Guide for Thinking Humans, recommended above.
Melanie Mitchell’s recent research examines the cognitive processes of LLM-based AI systems, including through experiments. In a recent talk, “Evaluating Cognitive Capacities in AI Systems,” at the 2025 Natural Philosophy Symposium at Johns Hopkins, she lays out this work. Mitchell argues that the rising performance of AI systems seems grounded in multiplying numbers of heuristic associations rather than in the “abstract reasoning” claimed by the labs creating current foundation models. She and her collaborators have developed experiments to test this, in part by demonstrating that AI model performance declines when letters are swapped out in a simple sequence task (an experiment much like the well-publicized Apple paper on distractor information interfering with reasoning). She also offers one of the clearest explanations of the ARC abstract reasoning tests and describes how her lab has been creating simpler versions of these tests to probe just how LLMs handle abstractions, when they can.
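To make the shape of such a perturbation experiment concrete, here is a minimal, hypothetical Python sketch in the spirit of Mitchell’s letter-string analogy work. This is an illustration, not her lab’s actual code; the `make_task` helper and the scoring setup are my assumptions. The same successor rule is posed over the standard alphabet and over a shuffled “counterfactual” alphabet, and a real experiment would compare a model’s accuracy across the two conditions.

```python
import random
import string

# Illustrative sketch (not Mitchell's lab code): pose the same letter-string
# analogy over the standard alphabet and over a shuffled "counterfactual"
# alphabet. If a model's accuracy drops sharply in the shuffled condition,
# its success in the standard one likely reflects memorized surface patterns
# rather than an abstract "successor" rule.

def make_task(alphabet: str, start: int = 0):
    """Return (prompt, answer) for a simple successor-rule analogy."""
    a, b, c, d = alphabet[start:start + 4]
    source = f"{a}{b}{c} -> {a}{b}{d}"           # last letter -> its successor
    p, q, r, s = alphabet[start + 8:start + 12]  # a second, disjoint triple
    prompt = f"If {source}, then {p}{q}{r} -> ?"
    answer = f"{p}{q}{s}"
    return prompt, answer

standard = string.ascii_lowercase
random.seed(0)
letters = list(standard)
random.shuffle(letters)            # counterfactual alphabet ordering
shuffled = "".join(letters)

for label, alphabet in [("standard", standard), ("shuffled", shuffled)]:
    prompt, answer = make_task(alphabet)
    print(f"[{label}] {prompt}   (expected: {answer})")
    # A real experiment would state the alphabet ordering in the prompt,
    # send it to the model under test, and score the reply against `answer`,
    # comparing accuracy across the two conditions.
```

The design point is that both conditions demand the identical abstraction; only the familiarity of the surface symbols changes, so any accuracy gap measures reliance on memorized patterns.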
On the B-side, if you will, of Mitchell’s lecture video come comments from Alison Gopnik. Her focus is reframing AI not as a self-conscious (or eventually self-conscious) reasoning technology, but rather as a novel form of social aggregation, best thought of as a successor to the library or the market, or even to the state and regulation. She argues that it’s pointless to ask who is smarter, a scientist or a library; indeed, this is a kind of “category error” that misses the more important point: what can one scientist, or many, do with a library that they couldn’t do without one? When AI is thought of as a social aggregator that lends power to new actions, we end up a lot closer to the notion of “Intelligence Augmentation,” a converse goal to Artificial Intelligence that’s explored in Possible Minds and in Machines of Loving Grace.
(For what it’s worth, I find the social aggregation metaphor a much clearer description of the text that pours out of the AI chatbot firehose than the intelligent agent metaphor. It explains how fully formed units of code that appeared elsewhere sometimes show up, and how interpolated summary segments show up too. As Melanie Mitchell writes elsewhere, and I’m paraphrasing here, “like Soylent Green, AI is people.”)
Visions of AI entrepreneurs and business leaders
Mustafa Suleyman, CEO of Microsoft AI. His talk, “What Is an AI Anyway?,” characterizes AI as “a new digital species.”
Dario Amodei, CEO of Anthropic. February interview on Hard Fork; November appearance on NYT DealBook (video). In essence, Amodei, who has prophesied the near-term arrival of AGI, continues to believe that scaling up LLMs is the way to get there.
Sam Altman, CEO of OpenAI. Instead of listening to Sam Altman, let me suggest reading one of the two up-close books on Altman and OpenAI’s rise published this year. Karen Hao’s Empire of AI6 describes the AI industry as not just about its product, but about the acquisition of creative labor (through training data) and the coordination of human labor and energy (through its overseas workforce and data centers) in ways that echo colonial enterprises. Keach Hagey’s The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future7 is more, well, optimistic. As a comparative review put it, Hagey “stands Altman as the secular prophet preaching human progress and boundless optimism.”
This interview quote from Karen Hao on Altman is informative, suggesting that his ability to shift narratives makes him something of an unreliable narrator on even his own views:
“He’s a once-in-a-generation fundraising talent. That is his particular skill. And he’s also a once-in-a-generation storytelling talent, which is effectively why he’s so good at fundraising. He is able to paint these extremely persuasive visions of the future. He was already prominent within Silicon Valley, and Silicon Valley very much runs on stories and telling stories about the future. And one of the things that I sort of concluded through the reporting of my book is that when he says something to someone, what he’s saying is more tightly correlated with what he thinks they need to hear than what he actually believes or the ground truth of the thing. He’s able to say the things that really provoke people to kind of rally towards a general, broad, sweeping mission that he paints.”8
6 Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin Press, 2025).
7 Keach Hagey, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, first edition (W. W. Norton & Company, 2025).
8 Steve Inskeep, “Journalist Karen Hao Discusses Her Book ‘Empire of AI’,” NPR, May 20, 2025, https://www.npr.org/2025/05/20/nx-s1-5334670/journalist-karen-hao-discusses-her-book-empire-of-ai.
A surprising, one could also say disturbing, share of AI visions are wrapped up in some peculiar ideas about the future that Timnit Gebru and Émile P. Torres have christened the TESCREAL bundle.9 (Gebru is a former Google researcher, fired over her critical views. Torres is a philosopher who was once a transhumanist.)
The bundle consists of: “Transhumanism, Extropianism, Singularitarianism, (modern) Cosmism, Rationalists (the internet community), Effective Altruism, and Longtermism” (as summarized and linked in Wikipedia). In essence, all of these ideologies envision a future “beyond humanity” itself, though with different variations. Many argue that cultivating a distant sci-fi future into existence is a supreme ethical goal, outweighing, say, the ecological survival of planet Earth or the near-term wellbeing of most humans.
If you want a readable summary of these ideas and their influence, More Everything Forever10 is your guided tour. Torres and comedian Kate Willett’s podcast Dystopia Now (Apple | Spotify) is a regular look at this world.
9 Timnit Gebru and Émile P. Torres, “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence,” First Monday, ahead of print, April 14, 2024, https://doi.org/10.5210/fm.v29i4.13636.
10 Adam Becker, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity (Basic Books, 2025).
Limitations of implementing AI in workplace settings
Upwork research “surveying 2,500 global C-suite executives, full-time employees, and freelancers in the U.S., UK, Australia, and Canada”:
The majority of AI use appears to be emerging bottoms up, with workers leading the charge. Now, leaders are eager to channel this enthusiasm. Among the increased demands executives have placed on workers in the past year, requesting they use AI tools to increase their output tops the list (37%). Already 39% of companies require employees to use AI tools, with an additional 46% encouraging employees to use the tools without mandating that they do so.
However, this new technology has not yet fully delivered on this productivity promise: Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workload.
For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), investing more time learning to use these tools (23%), and are now being asked to do more work (21%).
Technology author Mayo Olshin: “If, however, you simply ‘trust’ the AI outputs due to lack of knowledge, skill, or willingness to review results, the long-term damage will outweigh the initial productivity gains you got so ‘hyped’ about.” A similar experience from David Chisnall on Copilot: “It has cost me more time than it has saved.”
Recent books
Many of the perspectives described above have been articulated in new books published in the past year or so, including:
- Becker, Adam. More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. Basic Books, 2025.
- Bender, Emily M., and Alex Hanna. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper, 2025.
- Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press, 2025.
- Marcus, Gary F. Taming Silicon Valley: How We Can Ensure That AI Works for Us. The MIT Press, 2024.