I have been encouraged by the reflexive rejection of a Google ad, fairly ubiquitous during the Olympics television coverage, that touts the use of generative AI as a writing aid.
The ad, titled “Dear Sydney,” features a voiceover of a father touting the sprinting prowess of his young daughter over what look like home movies and family photos. His daughter idolizes U.S. Olympian and gold-medal-winning hurdler Sydney McLaughlin-Levrone. The commercial cuts to a shot of Google search using Gemini AI, a large language model that distills an answer to queries rather than merely linking to outside sources. We see searches for hurdling technique and the Gemini-delivered answers.
The father says his daughter “wants to show Sydney some love,” adding, “I’m pretty good with words, but this has to be just right.” As the father speaks, his words are superimposed over scenes of his daughter running, and we see a small error – an extra space between “just” and “right” – automatically corrected.
A Gemini prompt box appears as the father dictates: “Help my daughter write a letter telling Sydney how inspiring she is and be sure to mention that my daughter plans on breaking her world record one day. (She says sorry, not sorry.)”
The ad concludes with images of the AI-generated text as Eve’s “Who’s That Girl” plays.
The negative response was rounded up by Matt Stieb, writing at New York magazine’s Intelligencer blog.
“Like many things about AI itself, this is something seemingly nobody wants. People were quite upset with the ad, which kept playing during prime time. ‘I flatly reject the future that Google is advertising,’ wrote Syracuse media professor Shelly Palmer. ‘I want to live in a culturally diverse world where billions of individuals use AI to amplify their human skills, not in a world where we are used by AI pretending to be human.’ Brand strategist Michael Miraflor wrote that the ad was quite similar to the Apple iPad commercial from May that was widely reviled. ‘They both give the same feeling that something is very off, a sort of tone-deafness to the valid concerns and fears of the majority,’ he wrote, adding that both were developed in-house.”
Washington Post columnist Alexandra Petri declared, “I hate the Gemini ‘Dear Sydney’ ad more with every passing moment.”
She says:
This is an ad for the people who think that replacing meals with pills is a prospect that fills everyone with delight, that if we can ever do away with sleep and music entirely, it will be a grand triumph. Instead of wastefully spending hours picking out yarn and knitting your baby a blanket by hand, Grandma can now push a button!
The encouraging part is how quickly and reflexively viewers noted the extreme disconnect between the experience Google Gemini was promoting (automating what is meant to be a heartfelt message from a young girl to her hero) and what we think is appropriate.
I’m encouraged because what I think is the correct sentiment – this is just wrong – is so clear, and so widely held.
The core message of my forthcoming book, More Than Words: How to Think About Writing in the Age of AI is that writing is an inherently human activity, and that “writing” is not synonymous or interchangeable with text generation. The act of writing is an experience during which we think, feel, and communicate, and any time we outsource writing to a text generator, we are denying ourselves that experience.
I believe we should bring this lens of experience to any situation that brings humans together with generative AI.
I think the reason many teachers are feeling despair about some students automatically turning to LLMs to complete writing assignments is that it feels like – perhaps because it is – “cheating.”
But cheating is not just a circumvention of norms around academic dishonesty; it is also the denial of an experience. From that lens, we might say that doing an end run around a learning opportunity means students are only “cheating themselves,” so to speak, but when students talk about their choice to use LLMs as a shortcut to producing an academic artifact, they rarely consider that they may be missing out on something of value.
The idea that learning is rooted in experiences, or that those experiences could be meaningful, often fails to register from a student’s perspective. Why this is the case is really the subject of another of my books, Why They Can’t Write.
But I also want to ask: How often do teachers think of their work through this lens of experience? Is this a luxury? If so, what does it mean for the conditions we’ve created for teaching and learning?
What happens to the experience of teachers when some of the work of teaching is outsourced to automation using generative AI tools? Personally, I would never use generative AI to evaluate or respond to student writing. One reason is that I think it is wrong to evaluate writing using technology that cannot read. Even if an LLM could produce feedback or generate a score similar to that of a human, it just doesn’t sit right with me to subject students to evaluation by a tool of automation.
But the more important reason is that the work of teaching writing requires me to read student work in order to be properly engaged with the work of teaching.
Yes, sometimes – too often – the material conditions under which we work make it seem impossible to engage at that level. But as one guest put it on Luther’s podcast, teachers should be very cautious about outsourcing their labor to generative AI. One reason is that ceding one’s human labor to automation encourages further automation, and at some point teachers may find themselves out of work, because systems predicated on efficiency, productivity, and reduced cost will sacrifice quality and human well-being in exchange for those gains.
But I also ask: What is the endgame for teachers as people when we cease to experience the work of teaching?
I try not to let myself slide down the slippery slope of where these things could go if we are not careful about preserving our experiences, but when I do, I begin to get awfully worried.
The disconnect represented in this ad, and in others like Apple’s iPad “Crush!” ad, is an interesting manifestation of just how removed big tech is from how AI looks to the rest of us. It is bad enough when the execs open their mouths, but even their marketers can’t figure out how to sell it without making everyone mad.
At least we can love that everyone hated it.