An Unserious Book
Sal Khan brings an infomercial to a (supposed) revolution with "Brave New Words."
(Note: This post was previously shared in slightly different form at my other newsletter.)

The title of Sal Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing), makes two implicit claims:
AI will revolutionize education.
The AI revolution is “a good thing.”
It is strange, then, that the book makes no real attempt to grapple with the implications of its own argument. Rather than walking his audience through the method and manner of this supposed revolution, Khan simply asserts that it will happen because…AI! As to whether or not this is “a good thing,” we’re then treated to a series of unsupported leaps from the initial premise, because…AI.
The result is a 270-page infomercial for Khan’s tutor-bot, Khanmigo, which he has recently made free to educators thanks to the backing of Microsoft, which is also the primary source of funding behind OpenAI whose ChatGPT powers the Khanmigo platform.
It is difficult to even grapple with Khan’s book as an argument or vision because there is no real argument and no vision beyond an almost childlike faith in the awesomeness of technocratic approaches to teaching.
If we are facing a revolution of the scale Khan is promising, we deserve a serious book about how this revolution will work. This is not that book.
This book is filled with bullshit. (Sorry Mom, it’s the right word.)
Turning a blind eye to the dark side of AI development
The B.S. starts with his characterization of OpenAI, the developer of ChatGPT. OpenAI president Greg Brockman and CEO Sam Altman invited Khan inside the tent in the summer of 2022, months before the public release of ChatGPT and well before the release of GPT-4, which powers the Khanmigo tutoring platform. In the book, Khan characterizes OpenAI this way: “One of the groundbreaking research laboratories working in the field of friendly, or socially positive, artificial intelligence.”
Perhaps this was a plausible description of OpenAI in the summer of 2022, but subsequent events have revealed a rather different side of the company. Recent reporting about the brief ouster of CEO Sam Altman shows that the company’s head has engaged in a serial campaign of deceit, even with his own governing board, using the company’s non-profit origins as cover for a vision that is either expansive, if you want to put it nicely, or rapacious, if you consider that Altman recently declared that OpenAI needs $7 trillion, roughly the combined annual GDP of the UK and Germany, in order to continue its AI development.
This is a company that is literally asking us to bet our collective future on their quest to create a god-like super intelligence that will be able to solve all of our problems.
In a book that is purportedly concerned with providing paths to prosperity for the world’s young people, Khan expends exactly zero words considering the dubious ethical origins of generative AI, which was trained on the unauthorized and uncompensated use of other people’s text, images, video, and audio.
Khan briefly acknowledges, then hand-waves away, the known problems of algorithmic bias in AI.
Khan says nothing about the environmental threat of AI development. He does not grapple with the fact that as GPT-4 was being trained, a data center in West Des Moines, Iowa, “used 6% of the district’s water” all by itself, or that 30-50 queries to ChatGPT require the use of half a liter of water to cool the servers tasked with responding.
Khan does not mention how coal-fired power plants that were set to be retired have been kept in use solely for the purpose of powering energy-hungry AI.
Khan ignores the fact that workers in Kenya paid $2 an hour to “train” ChatGPT had their mental health “destroyed” after being exposed to explicit content as part of the training process.
Reading the opening of the book helped shed some light on the book’s title, an obvious reference to Aldous Huxley’s dystopian novel, Brave New World, in which individuals are sorted by IQ in a genetically engineered society and then kept docile through doses of mind-control drugs. I previously wrote about how bizarre it seemed to invoke a dystopia as you’re trying to sell a utopia, but Khan appears to have a special knack for ignoring anything that doesn’t fit his preferred vision.
In Brave New Words he invokes Orson Scott Card’s Ender’s Game as one of the inspirations for his digital tutor. This is apparently a longstanding vision; when the New York Times asked him in 2012 what he was reading, he said:
I’m a fan of hard science fiction, which is science fiction that is possible. The science fiction books I like tend to relate to what we’re doing at Khan Academy, like Orson Scott Card’s “Ender’s Game” series and Isaac Asimov’s “Foundation” series. What all these books are about is how humans can transcend what we think of traditionally as being human — how species hit transition points and can become even more elevated. Very epic ideas are at play here. Not the everyday pay-the-bills, take-out-the-trash kind of stuff.
Ender’s Game (spoiler alert) is a novel about children who are manipulated into executing a preemptive war conducted through virtual combat in which they consign (likely) millions of their own soldiers to death while wiping out an entire alien species in an act of “xenocide.” Ender and his companions believe they are training in a simulation, only being told the combat was real after the conclusion of the final battle.
They were kept in the dark because the military leaders were concerned about the children hesitating or showing mercy if they knew what they were doing had real-world consequences.
This is, quite frankly, a bizarre model for educating people, and yet it appears to be one of Khan’s core inspirations, “What all these books are about is how humans can transcend what we think of as traditionally being human - how species hit transition points and can become even more elevated.”
These are the words of a fanatic.
Sal Khan has no interest in teaching
Sal Khan has no apparent genuine interest in teaching and teachers.
Oh, on the surface, it seems he cares deeply about teaching and teachers, even including several chapters purportedly directly addressing the fate and treatment of teachers, but his sole concern for teachers involves making sure it’s easier for them to use his technology. He touts how quickly and efficiently Khanmigo can make a lesson plan, or “customize” a lesson with content that is personalized to the student.
But as Dan Meyer shows, “customizing” lesson content has never been shown to work as an aid to student learning. Khan is in the business of solving the problems he perceives rather than truly engaging and collaborating with teachers on the actual work of teaching. He turns teaching into an abstract problem, one that just so happens to align with the capabilities of his Khanmigo tutor-bot.
Teaching is the most difficult, most rewarding work I have ever done. It is an ever evolving challenge to engage the individual intelligences of students in experiences that will foster their social, emotional, and intellectual growth.
Teaching is a practice that requires skills, knowledge, attitudes, and habits of mind. Developing these aspects of one’s practice requires constant attention both to the particulars of the moment in the classroom and to a longer view of how these moments aggregate into learning.
If you would like to see what this looks like in practical terms, I highly recommend the newsletter of a teacher who writes with great insight about his experiences teaching 5th grade. This past year has been a challenging one, and his recent post wondering if he’s become “Richard Vernon,” the cynical principal from The Breakfast Club, illustrates what it means to think, feel, and act as a teacher.

I know I have a lot of teachers reading this newsletter, and I would encourage anyone so moved to try to describe what teaching is and how it works in the comments.
At his core, Sal Khan has never exhibited any interest in education per se. He has been focused on the problem of the delivery of educational content, first through Khan Academy, and now Khanmigo. To be sure, good content is an important component of achieving learning, but in truth it is a relatively small component.
In Brave New Words he declares, “With Khanmigo, I think we have an artificial intelligence that is hard to distinguish from a strong human tutor.” Khan believes this because Khanmigo can engage students “in Socratic questioning throughout the learning process.”
Khanmigo cannot reason, feel, or communicate with intention. It cannot smile or frown. It does not read non-verbal cues. It does not joke around or make intuitive leaps. To believe that Khanmigo is hard to distinguish from a strong human tutor, one needs to ignore that when we interact with other human beings we bring our human selves to the experience.
Khan wants us to marvel that after reading an assignment, Khanmigo may engage a student by asking, “What is your opinion of this essay?”
I can testify that this is not an effective way to engage students in a learning experience because this is what I was doing in my earliest days as a TA in graduate school when I knew nothing and had no experience with teaching. My students would look back at me, blank-faced until one of them had mercy on me, raising their hand and saying, “Uhh, it was alright.”
Throughout the book Khan takes what Khanmigo is capable of doing and asserts that this is an example of effective teaching. One of the claims he and others make about the benefits of tutor-bots is their “patience,” even their “infinite” patience, but is infinite patience truly a component of effective teaching?
I think not and said as much at my other newsletter.
Honestly, Khan’s treatment of teaching is insulting as much as anything. He claims to want to provide technology to teachers and schools that will help them without bothering to understand what teachers do or how teaching works.
Motivation, engagement, and relationships have no salience in Khan’s vision of teaching. This is not a vision of teaching that will work for most students. Another education newsletter recently highlighted research that shows the limits of technological intervention in teaching spaces: when ed tech companies do research on the efficacy of their products, they “excluded roughly 95% of students from their studies for not meeting arbitrary thresholds for usage.”

Should we be embracing an approach that only 5% of students are willing to engage with?
This is another question that Khan lets go begging throughout the entire book because he just doesn’t care, even though the problem of engagement is the central challenge of learning.
Sal Khan really don’t know about learnin’ writin’
As I explored in Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, Sal Khan is not unique in confusing the production of written texts for the purposes of schooling with learning to write, but he for sure falls into the trap.
He argues, “The most successful students will be those who use artificial intelligence applications to make their writing smoother, their prose clearer, and their long-form answers to complex questions more succinct.”
Here we see the values of smoothness, clarity, and succinctness held up as tantamount to quality writing. This is a cramped vision focused on very narrow criteria. There is no consideration of depth, or elegance, or entertainment and engagement. There is no consideration of audience or the rhetorical situation.
Just as Khan is uninterested in what it means to teach, he is unconcerned with how writing is learned. To him - and again, he is not alone in this - the benefit of generative AI is in streamlining the production of a text product.
The real power of generative AI is to solve what I call ‘the blank-paper problem,’ where oftentimes the hardest thing to do is start writing. Early classroom adopters in this new reality are finding success allowing their students to use generative AI to help compose the first draft.
If the goal is to help students to learn to write as opposed to engage in academic cosplay for the purpose of schooling, this is the exact opposite of what we should be doing and is a form of educational malpractice.
Khan also latches on to the canard of the benefit of instant feedback on writing, as though this is an obvious benefit. He starts with a bad analogy.
I want to dwell on the value of providing rapid feedback. For example, it would be very hard to get better at basketball free throws if you didn’t know whether or not you made the basket for several days or weeks. As ridiculous as this sounds, this is exactly what happens with writing practice. Before generative AI came on the scene, it could take days or weeks before students got feedback on their papers. By that point, they may have forgotten much of what they had written, and there wouldn’t be a chance for them to refine their work. Contrast this to the vision in which students receive immediate feedback on every dimension of their writing from the AI. They will have the chance to practice, iterate, and improve much faster.
A cognitively complex and challenging process like writing is nothing like the mechanical process of shooting free throws. The comparison is off the rails from the beginning.
To be sure, providing students with feedback on their writing is an ongoing challenge, but it is a problem primarily caused by giving teachers too many students and too little time to respond to student writing. But even with that being true, the idea that immediate feedback on writing is a benefit to learning to write is simply wrong.
I covered the limited utility of real-time feedback on student writing back in 2018 at Inside Higher Ed. I think I make a pretty persuasive case for why real-time feedback has very few benefits to learning to write. Maybe some folks would differ on that front, but Khan does not even attempt to grapple with the possibility that learning to write may be more complicated than he believes.
Writing is learned through writing experiences. The way Khan discusses the integration of generative AI into the writing process distorts those experiences into something that is not writing. With learning, as others have pointed out, the friction is the whole point of the exercise.

Outsourcing writing to something that cannot think or feel, that has no experience of the world, that has no memory (in the human sense), is a willing abandonment of our own humanity.
Maybe Sal Khan would argue that this is an example of species “elevation” through interacting with AI, but I think that’s bullshit.
Sal Khan wants you to doubt your own humanity
There are literally dozens of head-slapping claims in this book. Because Brave New Words is an infomercial, not an argument, Khan allows himself to fire off mini thought experiments that fall apart under even a moment’s scrutiny.
Sleeping on a problem is the same as what large language models do.
What are our brains doing subconsciously while our consciousness waits for an answer? Clearly, when you “sleep on a problem,” some part of your brain continues to work even though “you” are not aware of it. Neurons activate, which then activate the neurons depending on the strength of the synapses between them. This happens trillions of times overnight, a process mechanically analogous to what happens in a large language model.
Uh…what now? I couldn’t tell you whether Khan is correct about what happens in our brains as we sleep, but on the LLM side of the analogy he is simply wrong. LLMs work through a probabilistic next-token prediction process. They are syntax-fetching machines. This is nothing like what is happening in our subconscious brains. Pure nonsense.
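Since Khan’s analogy turns on what large language models actually do, it may help to make “probabilistic next-token prediction” concrete. The sketch below is purely illustrative, not Khan’s method or OpenAI’s: the vocabulary and probabilities are invented toy values, and a real model conditions on far more context using billions of learned parameters. But the loop captures the essential shape of the thing: score possible next tokens, sample one, append it, repeat.

```python
import random

# Toy "language model": for a given context (here just the previous token),
# a made-up probability distribution over possible next tokens. A real LLM
# conditions on the entire context and computes these probabilities with
# billions of learned parameters, but the generation loop is the same idea.
TOY_MODEL = {
    "the":     {"student": 0.5, "tutor": 0.3, "answer": 0.2},
    "student": {"writes": 0.6, "reads": 0.4},
    "tutor":   {"asks": 0.7, "waits": 0.3},
    "writes":  {"an": 0.6, "the": 0.4},
    "reads":   {"the": 1.0},
    "asks":    {"the": 1.0},
    "an":      {"essay": 1.0},
    "answer":  {"is": 1.0},
}

def generate(prompt: str, max_new_tokens: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        dist = TOY_MODEL.get(tokens[-1])
        if dist is None:  # no known continuation; stop generating
            break
        # Pick the next token in proportion to its assigned probability.
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the tutor asks the student writes an essay"
```

That is the whole mechanism: a weighted roll of the dice over plausible continuations, repeated until the text ends. Nothing in it consolidates memories or mulls a problem overnight.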
Generative AI is creative just like humans
Some would also argue that generative AI’s “creativity” is just derivative from all the data it has been exposed to. But isn’t that very human as well? Even the large leaps in human creativity have been closely correlated to things that the creator has been exposed to. Would Einstein have made the leap to special relativity if he hadn’t already read the work of Lorentz and countless other physicists?
This is how I can tell Khan didn’t anticipate any critical reads of his work, and perhaps wrote for an audience far more naive about how generative AI works than the one he actually has, because this is transparently wrong. Thinking and making intuitive leaps based on prior knowledge and experience are fundamentally different from next-token prediction. B.S. all the way down.
At times, it’s hard to take Khan’s book seriously because it really seems to have been written in such a fundamentally unserious way. He refuses to grapple with any kind of complexity around what it might mean to integrate this technology into our educational systems. If you want to read people who are genuinely grappling with these questions, there are writers in the midst of ongoing series of posts discussing all of the subjects Khan’s book raises.

There is a particular irony in Khan’s concluding chapter calling for a spirit of “educated bravery” in considering AI in education. He wants us to be bold experimenters pushing the bounds of possibility. That he concludes a book devoid of genuinely educative content with this call to action is a bigger irony than the book’s title. The idea that embracing AI is an act of bravery while questioning it is something like the opposite is an insult.
While Brave New Words is an unserious book, we should take the vision for education it promotes very seriously given how much money is being put behind it and the power of the people who are pushing this vision. The book’s endorsements primarily come from billionaires with longstanding histories of shaping education systems in the United States and beyond. Some of the non-billionaires who blurbed it should be ashamed of themselves, but it’s tough to stop the TED Talk logrolling train once it starts heading down the tracks.
Integrating Khanmigo or other generative AI tools into schools would be to engage in a massive, unregulated, untested, possibly deeply harmful experiment. Sal Khan’s book-length infomercial is meant to grease the wheels for that journey.
I think the book’s utter unseriousness should be read as a warning to the rest of us.
The people in charge have not thought these things through.
Thank you for taking the time to read and review Sal Khan's book. You've gone into great depth. On the surface, this looks like a book I'd like to read, but your review has made me decide to skip it.
Thanks for saving me a couple weeks of reading time!
Could you clarify this sentence?
"Sal Khan has no apparent genuine interest in teaching and teachers."
Khan Academy has 8,000-plus free videos on YouTube. That is a huge investment of resources to help out students around the world.
In your view, what constitutes a "genuine interest in teaching and teachers"?
Was Khan once someone who had a "genuine interest in teaching and teachers", but has since lost it?
The way you review Khan's book certainly makes it sound like he's transitioned to a salesman interested in selling his AI chat bot more than anything else.
The section of your review about writing is interesting. Writing is thinking. Thinking is hard.
As you pointed out, writing is a lot more than just being clear and concise. Khan has "no consideration of depth, or elegance, or entertainment and engagement. There is no consideration of audience or the rhetorical situation."
I was a teacher for 15 years, and I think writing is the most difficult skill to teach.
To write well, you must have some degree of subject mastery, interesting insights, use interesting words, organize your ideas coherently, avoid too much repetition, etc. There's a lot going on.
It's likely that large language models (LLMs) can help with some aspects of the writing process, but I'm dubious that an LLM can teach writing in its totality.
Even if LLMs can teach us all how to write, won't we all end up sounding the same? Wouldn't it be boring if we all write the same and, by extension, essentially think the same?
As far as I can tell, AI is this amorphous buzzword that gets thrown around without anyone really understanding what AI is. I certainly don't know what AI is.
As you noted at the beginning of your review, "[i]t is difficult to even grapple with Khan’s book as an argument or vision because there is no real argument and no vision beyond an almost childlike faith in the awesomeness of technocratic approaches to teaching."
It's somewhat reassuring to know that even Khan doesn't know what AI is!
Finally, thank you for noting the connections between Khanmigo, Microsoft, and OpenAI. I wasn't aware of those threads.
Cheers!