In the comments of a terrifically insightful post by Marc Watkins (AI is Unavoidable, Not Inevitable), I had a very generative discussion with another reader, Steve Fitzpatrick, who teaches high school history.

On the whole, we have similar views of the potential and pitfalls of generative AI technology in education, though Steve is on the optimist side of the fence, while I’m more pessimistic. The conversation was helpful because, in the back and forth, it forced me to consider more deeply what I think students should learn, how they should learn it, and perhaps most importantly, what kinds of conditions would allow for this learning.
I am of the view, and have been for quite some time (as evidenced in Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities), that we need to fundamentally rethink the kinds of activities students do when it comes to writing in school, as well as how we assess and value those activities. I have now seen a couple of generations of students asked to perform writing-related simulations, rather than being given experiences that help them learn to write, and I think the consequences of this are very bad for both those students and society.
The risk is even higher now with the ubiquity of generative AI and large language model tools. According to a rather breathless report from Axios, OpenAI CEO Sam Altman, on the heels of a seven-figure donation to Trump’s inauguration, will be giving Congress a closed-door briefing on “a next-level breakthrough that unleashes Ph.D.-level super-agents to do complex human tasks.”
It’s difficult to separate the hype from reality, particularly when it comes to OpenAI, which is in the habit of giving short demos of technological breakthroughs that do not result in short-term, or as of yet even medium-term, transformations. In fact, Altman himself walked back the hype the day after the Axios report.
But who knows? One of the realities I had to wrestle with while writing More Than Words: How to Think About Writing in the Age of AI was that the technology that may be in the world at the time of the book’s publishing would not be the same as the technology at the time of the book’s writing. For that reason, I tried to focus my inquiry on the things that I think are enduring, no matter the capabilities of artificial intelligence. My view is that reading and writing are not just of value because they are important precursors to thinking, which then allows us to join in a broader conversation (as Steve Fitzpatrick and I engaged in), but because they are worthwhile experiences, in and of themselves.
But, as much as I wish it were so, school isn’t just about experiences, so it’s important to articulate how the experiences I envision map to particular kinds of learning, and where that learning can take students. To Marc Watkins’s point that AI is “unavoidable,” I think we also have to consider what it would mean to utilize generative AI tools along the way.
Let’s see what we can figure out here.
There are two questions that underpin how I see any course design:
What do I want students to know?
What do I want students to be able to do?
As a writing teacher, it’s not all that hard for me to answer these questions:
How to write.
Write.
Okay, it’s definitely more complicated than that, which is why I developed my framework of “the writer’s practice” - the skills, knowledge, attitudes, and habits of mind of writers. Depending on the focus of the class, what students need to know to be able to practice their practice may shift. For example, in a fiction writing class, students will need to learn some stuff about point-of-view and perspective, a subject I explored at my newsletter this past weekend.

Let’s imagine some different levels of acquiring and demonstrating knowledge about point-of-view as it’s used in fiction writing.
I could ask a question about an isolated fact related to point-of-view: What pronoun is used in first-person singular point of view?
I could ask a definitional question perhaps as a multiple choice: Which of the following best describes the concept of “point-of-view” as it applies to fiction?
I could ask for students to apply a definition to an example: Read the following passages and identify the point-of-view (e.g., third-person, limited) for each.
I don’t know that any of these questions have much of a purpose as part of a graded assessment. One reason is that a student could plug any of these questions verbatim into a large language model and get a correct answer back without even bothering to read something like a Wikipedia entry.
But more importantly, while they may have some surface-level relationship to my first course design question (What do I want students to know?) they move the ball not at all on question two (What do I want students to be able to do?).
With writing, some element of knowing is necessary before getting to the doing, but in truth, I think it’s less than we might imagine. In terms of direct instruction, over the years my preference was to give students just enough to allow them to wrestle with an actual writing experience, and after that wrestling I’d try to go back and fill in some of the missing knowledge based on their experiences.
To get at both of my core pedagogical questions, I need to help students not just know stuff, but to have ideas of their own.
I don’t just need students to recall and parrot information. I need them to apply and think while using information. I have a number of different exercises I use when it comes to point-of-view, for example having them pick an opening from a favorite story by another writer, rewrite it from a different perspective (different POV, different character, tense change, etc.), and then write a short analysis of how they think the change alters the reader’s perception of the narrative.
This is not easy to do, as students discover that it involves more than changing a few pronouns around. The exercise’s goal is to show that a change in perspective fundamentally changes the kinds of information and modes of expression that a narrative entity has access to. It works great.
But of course, even this activity is potentially vulnerable to outsourcing to a large language model. Using ChatGPT, I ran an experiment on the opening paragraph of William Faulkner’s “Barn Burning,” changing the point-of-view from third-person limited (focused on “the boy”) to first person from the boy’s perspective.
Here’s Faulkner’s original:
The store in which the Justice of the Peace’s court was sitting smelled of cheese. The boy, crouched on his nail keg at the back of the crowded room, knew he smelled cheese, and more: from where he sat he could see the ranked shelves close-packed with the solid, squat, dynamic shapes of tin cans whose labels his stomach read, not from the lettering which meant nothing to his mind but from the scarlet devils and the silver curve of fish–this, the cheese which he knew he smelled and the hermetic meat which his intestines believed he smelled coming in intermittent gusts momentary and brief between the other constant one, the smell and sense just a little of fear because mostly of despair and grief, the old fierce pull of blood. He could not see the table where the Justice sat and before which his father and his father’s enemy (our enemy he thought in that despair; ourn! mine and hisn both! He’s my father!) stood, but he could hear them, the two of them that is, because his father had said no word yet:
And here’s ChatGPT’s rewrite:
The store smelled like cheese. Sitting there, crouched on my nail keg at the back of the room packed tight with people, I couldn’t stop smelling it—cheese, and more than cheese. From where I was, I could see the shelves, rows and rows of squat tin cans, their labels shouting at my stomach in bright pictures—scarlet devils and the silver flash of fish—talking louder than the letters I couldn’t read. The cheese smell was strong, but there was something else, too, a smell my gut swore was meat, sealed up in those cans, coming at me in little whiffs. And under all of that was another smell, one I didn’t need to think about to know: fear, mixed up with something else, something worse. Grief. Despair. It was the same feeling that always came when the pull of blood—our blood—got its hold on me.
I couldn’t see the table where the Justice sat, couldn’t see my father or the man standing there with him, the man who was as much my enemy as he was Pa’s. Our enemy. Ourn. Mine and hisn both! He’s my father! But I could hear them, the two of them, or just the one really, because Pa hadn’t said a word yet.
I mean, it’s almost entirely denuded of interesting style, but if this were genuinely the product of student work, it would absolutely hit the target of what I’d be looking for.
Even more interestingly, ChatGPT can generate a very passable analysis of the difference between the two approaches. In fact, it’s probably better as judged against standard grading criteria than what most students do because it has the syntactical polish that’s native to the models, which is something students tend to struggle with. It’s too long to post the whole thing here, but here’s an excerpt:
As we think about what we want students to know, and what we want students to be able to do, there are a couple of things to consider here.

1. While the ChatGPT rewrite and analysis are “good” when measured against traditional grading criteria, I submit that they are not interesting. They have not surfaced any of the unique, spiky intelligence of a human being.

2. If a student outsourced my exercise to the LLM out of frustration, or boredom, or lack of time, even if they gained some additional knowledge about point-of-view from reading the LLM-generated text, they have not had the experience of doing it for themselves.
Some measure of struggle is necessary to build our capacities. LLMs can satisfy many (if not most) aspects of schooling without asking anything of students.
If we want students to learn to do stuff, they have to do stuff. When it comes to writing, I’m of the belief that the vast majority of that stuff must be done in the absence of LLM assistance or engagement.
But what about reading? I’m going to do some more thinking and dig into that next time.
Love this: "My view is that reading and writing are not just of value because they are important precursors to thinking, which then allows us to join in a broader conversation (as Steve Fitzpatrick and I engaged in), but because they are worthwhile experiences, in and of themselves."
I'd love to hear your thoughts on my new post on The Scholarly Kitchen:
https://scholarlykitchen.sspnet.org/2025/01/28/guest-post-finding-your-voice-in-a-ventriloquists-world-ai-and-writing/?informz=1&nbd=&nbd_source=informz
I allude to the point you made about losing personality, style, and instead producing uninteresting writing.
For me, it's very much about doubling down on the value of "doing" in the classroom—and screaming (to the rooftops) that something is lost when a student (or teacher) takes a shortcut around or past that "doing."
The problem, though, is that this proliferation of shortcuts is colliding head on with an educational landscape that has become increasingly transactional and outcome-oriented, where students are incentivized not to value the "doing" but to find the most efficient path to the outcome that meets their expectations.
As you've noted before, AI isn't the original sin—it's just pouring kerosene on the culture that already exists.