It is very hard to quantify how much a student has learned when it comes to writing.
I stressed the word “quantify” there because it’s actually quite easy to gather qualitative evidence of student improvement. Every writing teacher has witnessed this in reading a student’s work over the course of the semester and simply knowing that learning has happened. How much learning? That’s a different question.
To be honest, speaking as someone who teaches writing and whose primary relationship is with students, I don't really care how much students have learned as long as they've learned. Using my framework of "The Writer's Practice" (the skills, knowledge, attitudes, and habits of mind of writers), I put a lot of weight on student reflections regarding their own learning. I ask them to articulate what they believe has changed within their writing practices over the course of the semester, and I trust those testimonies because ultimately, the students' learning belongs more to them than to me.
Much of what students learn is just impossible to quantify. For example, when “The Writer’s Practice” was used in a two-week summer enrichment program for some rising 8th graders in Chicago Public Schools, one of the main things that students learned was that writing could actually be interesting and even fun. This shift in attitudes may or may not show up in the writing they completed, but the evidence linking the change in attitudes to an improved writing product would be speculative at best.
While individual instructors may not have to worry about quantifying learning, education systems are veritably obsessed with it, often to the detriment of students and teachers alike. The way this obsession distorts the work students do and how that work is assessed often drives us further away from learning, rather than towards it, as Campbell's Law kicks in.
Please don't try to say that grades are a reliable indicator of learning. We all have experienced courses where we've learned little and received an A, or learned a lot and gotten a much lower grade than an A. We also have decades of research on how emphasizing grades incentivizes behavior that's counterproductive to enduring learning and knowledge, e.g., cramming for a test where the material will be almost immediately forgotten. Sometimes grades mean you've learned a lot, sometimes not.
I am sympathetic to the desires of those outside of the teacher/student loop to have a window into how much has been learned. My wish is that we did a better job contextualizing the tools we have to convey these things. Grades are not really up to the task.
We also have to take a hard look at which level of aggregation the available data is truly meaningful. Standardized assessments were originally conceived as a way to see what was going on at the school level. Attempting to drill that down to the individual student distorted the related activities every step of the way. As a teacher, my goal was to assess the overall effectiveness of my pedagogy. This meant that I aggregated student behavior up to the class or course level. Each level up meant that weirdness and outlier experiences had less weight on the overall measurement. There's never going to be a perfect indicator for all students, all classes, all schools. It requires a willful ignoring of the incredible variance in humans to believe otherwise.
This has me thinking about what kinds of things we could measure that indicate a student is learning. It is a difficult thought experiment. Maybe some folks could help think about this with me. There’s a comment button at the bottom of the post.
Since writing is my field, I offer these thoughts with writing courses in mind. I'd have to think even more deeply about how these ideas would translate to different contexts.
Measurable things
The number of words written over the course of a semester. When I was teaching fiction writing classes using a traditional workshop model, I was frustrated with how little students were writing, so when I shifted away from that model, I decided to measure whether or not students wrote more. As it turns out, students, on average, wrote twice as much after the shift. I could not prove that more writing increased learning, but it seemed like a good thing in and of itself.
Independent time on task. It is relatively easy to track how much time is spent on producing a piece of writing, and if we’re tracking time, we may be able to tease out different inferences. For example, if someone who has been dashing off and turning in first drafts now seems to literally be taking more time, we could quantify that difference and then see if it correlates to improved products.
Or, we could decide to measure efficiency against a common, repeated task. When I first started blogging at Inside Higher Ed a dozen years ago, it would take me days to write a single post. Now, provided I've done my program of pre-thinking, it's no more than a couple of hours from first words on the page to finished product. I've become far faster as a writer of that kind of content while - in my view - maintaining consistent quality. That's an interesting, measurable sign of my learning.

Student confidence in ability to execute task. Here I'm envisioning some kind of pre-writing occasion survey where students are given a capsule description of the experience and then asked to rate their confidence in being able to complete that experience to satisfaction. Over time, should confidence increase, provided the student is not self-deluding, we could track learning.
Various features of the text. We have the tools to measure things like vocabulary, sentence sophistication, sentence variety, sentence correctness, etc. Because these things can be measured, they can be tracked. I find these things more interesting than useful when it comes to responding to writing - I don't experience writing through metrics - but these are all measurable.
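To show how mechanical this kind of counting is, here is a minimal sketch of computing a few surface features of a text. The particular metrics (word count, type-token ratio as a stand-in for vocabulary range, sentence-length variation as a stand-in for sentence variety) and the function name are illustrative assumptions on my part, not a validated instrument or anyone's recommended approach.

```python
# Minimal sketch: countable surface features of a piece of writing.
# The metric choices here are illustrative, not a recommended assessment tool.
import re
import statistics

def surface_features(text: str) -> dict:
    """Compute a handful of countable features for one piece of writing."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]

    return {
        "word_count": len(words),
        # Type-token ratio: one rough proxy for vocabulary range.
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        "avg_sentence_length": statistics.mean(sentence_lengths) if sentence_lengths else 0.0,
        # Spread of sentence lengths: one rough proxy for sentence variety.
        "sentence_length_variation": statistics.pstdev(sentence_lengths) if sentence_lengths else 0.0,
    }

# Tracked across drafts or across a semester, the change in these numbers,
# not the raw numbers themselves, would be the potentially interesting signal.
print(surface_features("I wrote this fast. Then I revised it, slowly and with much more care."))
```

Whether any change in those numbers tracks actual learning is, of course, the open question; the counting itself is the easy part.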
There's obviously a Campbell's Law problem with these measures should they come to stand in for the behavior itself. For example, if students start writing a bunch of blah blah because their grade depends on how many words they've written, that's not productive. In regards to various features of the text, if we settle for what we can count, rather than what we should count (an approach which is unfortunately already endemic in assessment), we're off on an unproductive path.
My instinct is that all of these things have to be happening in the background, largely hidden from students, maybe not even shared with them so as not to disturb their natural progression as they engage with the writing experiences.
I know I must be missing some things. What else could we measure if we wanted to measure “learning?”
My instinct is to be worried about your instinct "that all of these things have to be happening in the background, largely hidden from students, maybe not even shared with them so as not to disturb their natural progression as they engage with the writing experiences."
The impulse to quantify learning is often demanded by, and always implicated in, the bureaucratic machinery of schooling as it does its work of ranking and ordering students. Even if we hide the quantification from our students or from the bureaucracy, the question of what potential use a measure will be put to is worth considering.
Measures of learning often begin as benign and student-centered. Think of Alfred Binet and Théodore Simon, who developed what many regard as the first IQ test to determine how best to help students with learning disabilities. Or, think of the potential use of the measure of your own time spent writing blog posts. For you, it was enlightening and useful. In the hands of an editor with a stable of bloggers, it could be used as part of a system for the efficient production of content according to a fixed schedule.
I completely agree that Campbell's law is an important context for thinking about educational measurement. With that context in mind, I think we are obliged to let our students in on how this works, including the risk that any measure we develop in partnership with them may end up used in unintended and damaging ways.
We could borrow from the corpus studies used in linguistics: we could measure the use of different sentence structures, use of particular markers (verbs used in narrative citation, citations per 1000 words, use of hedges or markers of certainty, etc.), range of vocabulary, use of the first person, etc. All of those are quite revealing when seeking to understand writing. And good writers usually use a large array of strategies, depending on their rhetorical aim, so students who read widely and have learned to experiment with their writing are able to be flexible in their use of these strategies.
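If it's helpful, here's a rough sketch of what tracking a few of those markers, normalized per 1,000 words, might look like. The hedge and certainty word lists and the citation pattern are deliberately tiny placeholders; a real corpus study would rely on established taxonomies and actual parsing rather than a handful of regular expressions.

```python
# Rough sketch: corpus-style markers normalized per 1,000 words.
# The word lists and citation pattern below are placeholders for illustration.
import re

HEDGES = {"may", "might", "could", "perhaps", "possibly", "seems", "suggests"}
CERTAINTY = {"clearly", "obviously", "certainly", "undoubtedly", "must"}
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}
# Placeholder pattern for parenthetical citations like "(Smith, 2020)".
CITATION = re.compile(r"\([A-Z][A-Za-z]+,\s*\d{4}\)")

def marker_rates(text: str) -> dict:
    """Count a few rhetorical markers and scale them per 1,000 words."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    per_k = 1000 / len(words) if words else 0.0
    return {
        "hedges_per_1000": sum(w in HEDGES for w in words) * per_k,
        "certainty_per_1000": sum(w in CERTAINTY for w in words) * per_k,
        "first_person_per_1000": sum(w in FIRST_PERSON for w in words) * per_k,
        "citations_per_1000": len(CITATION.findall(text)) * per_k,
    }

sample = "I think this finding may matter (Smith, 2020). Clearly, we could test it further."
print(marker_rates(sample))
```

The interesting part, as with all of these measures, is less any single number than how a student's profile of choices shifts as their rhetorical aims and flexibility grow.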