It is very hard to quantify how much a student has learned when it comes to writing.
I stressed the word “quantify” there because it’s actually quite easy to gather qualitative evidence of student improvement. Every writing teacher has witnessed this in reading a student’s work over the course of the semester and simply knowing that learning has happened. How much learning? That’s a different question.
To be honest, speaking from the point of view of someone who teaches writing and whose primary relationship is with students, I don’t really care how much students have learned as long as they’ve learned. Using my framework of “The Writer’s Practice” (the skills, knowledge, attitudes, and habits of mind of writers) I put a lot of weight on student reflections regarding their own learning. I ask them to articulate what they believe has changed within their writing practices over the course of the semester, and I trust those testimonies because ultimately, the students’ learning belongs more to them than to me.
Much of what students learn is just impossible to quantify. For example, when “The Writer’s Practice” was used in a two-week summer enrichment program for some rising 8th graders in Chicago Public Schools, one of the main things that students learned was that writing could actually be interesting and even fun. This shift in attitudes may or may not show up in the writing they completed, but the evidence linking the change in attitudes to an improved writing product would be speculative at best.
While individual instructors may not have to worry about quantifying learning, education systems are veritably obsessed with it, often to the detriment of students and teachers alike. The way this obsession distorts the work students do and how that work is assessed often drives us further away from learning, rather than toward it, as Campbell’s Law kicks in.
Please don’t try to say that grades are a reliable indicator of learning. We all have experienced courses where we’ve learned little and received an A, or learned a lot and gotten a much lower grade than A. We also have decades of research on how emphasizing grades incentivizes behavior that’s counterproductive to enduring learning and knowledge, e.g., cramming for a test where the material will be almost immediately forgotten. Sometimes grades mean you’ve learned a lot, sometimes not.
I am sympathetic to the desires of those outside of the teacher/student loop to have a window into how much has been learned. My wish is that we did a better job contextualizing the tools we have to convey these things. Grades are not really up to the task.
We also have to take a hard look at the level at which the available data is truly meaningful. Standardized assessments were originally conceived as a way to see what was going on at the school level. Attempting to drill that down to the individual student distorted the related activities every step of the way. As a teacher, my goal was to assess the overall effectiveness of my pedagogy. This meant that I aggregated student behavior up to the class or course level. Each level up meant that weirdness and outlier experiences had less weight on the overall measurement. There’s never going to be a perfect indicator for all students, all classes, all schools. It requires a willful ignoring of the incredible variance in humans to believe otherwise.
This has me thinking about what kinds of things we could measure that indicate a student is learning. It is a difficult thought experiment. Maybe some folks could help think about this with me. There’s a comment button at the bottom of the post.
Since writing is my field, I offer these thoughts with writing courses in mind. I’d have to think even more deeply about how these ideas would translate to different contexts.
Measurable things
The number of words written over the course of a semester. When I was teaching fiction writing classes using a traditional workshop model, I was frustrated with how little students were writing, so when I shifted away from that model, I decided to measure whether or not students wrote more. As it turns out, students, on average, wrote twice as much after the shift. I could not prove that more writing increased learning, but it seemed like a good thing in and of itself.
Independent time on task. It is relatively easy to track how much time is spent on producing a piece of writing, and if we’re tracking time, we may be able to tease out different inferences. For example, if someone who has been dashing off and turning in first drafts now seems to be taking more time, we could quantify that difference and then see if it correlates to improved products.
Or, we could decide to measure efficiency against a common, repeated task. When I first started blogging at Inside Higher Ed a dozen years ago, it would take me days to write a single post. Now, provided I’ve done my program of pre-thinking, it’s no more than a couple of hours from first words on the page to finished product. I’ve become far faster as a writer of that kind of content while - in my view - maintaining consistent quality. That’s an interesting, measurable sign of my learning.

Student confidence in ability to execute task. Here I’m envisioning some kind of pre-writing occasion survey where students are given a capsule description of the experience and then asked to rate their confidence in being able to complete that experience to satisfaction. Over time, should confidence increase, provided the student is not self-deluding, we could track learning.
Various features of the text. We have the tools to measure things like vocabulary, sentence sophistication, sentence variety, sentence correctness, and so on. Because these things can be measured, they can be tracked. I find these things more interesting than useful when it comes to responding to writing - I don’t experience writing through metrics - but these are all measurable.
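As a toy illustration of how trackable such features are - this is a sketch I’m offering, not any particular assessment tool - a short script can pull out word count, rough vocabulary size, and sentence length from a draft:

```python
import re

def text_metrics(text: str) -> dict:
    """Compute a few simple, trackable features of a piece of writing.

    These are crude stand-ins for the kinds of measurable features
    discussed above (word count, vocabulary, sentence length); real
    instruments would be considerably more sophisticated.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Naive tokenization: runs of letters/apostrophes, lowercased.
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "word_count": len(words),
        "unique_words": len(set(words)),  # rough vocabulary size
        "sentence_count": len(sentences),
        "avg_sentence_length": sum(lengths) / len(lengths) if lengths else 0.0,
    }

draft = "I wrote a draft. Then I revised it. Revision made the draft much better!"
print(text_metrics(draft))
```

Run over every submission in a semester, numbers like these could be tracked longitudinally - which is exactly where the Campbell’s Law worry below comes in.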
There’s obviously a Campbell’s Law problem with these, should these measurements come to stand in for the behavior. For example, if students start writing a bunch of blah blah because their grade depends on how many words they’ve written, that’s not productive. In regards to various features of the text, if we settle for what we can count, rather than what we should count (an approach which is unfortunately already endemic in assessment), we’re off on an unproductive path.
My instinct is that all of these things have to be happening in the background, largely hidden from students, maybe not even shared with them so as not to disturb their natural progression as they engage with the writing experiences.
I know I must be missing some things. What else could we measure if we wanted to measure “learning?”
Yes and no.
Amazing, John. You are teaching young kids. Well, I read your blog post. You are already familiar with various quantification tools. Yes, grades really don’t speak to learning outcomes. And you are familiar with the survey method. Yes, you conclude that you want the assessment hidden from the students so that they don’t have the perception and pressure of being monitored in addition to formal exams.
Well, I think the best suggestion I can give, one foundational to quantification, is that the teacher has to work as the control variable. What I mean is that it is you who best knows what learning outcomes you want from the course. We practiced this intensively at the university where I was a professor. First we prepare a course outline and define the learning outcomes for each topic and each lecture. Then we link them to the course learning outcomes. However, I concede that the university prepares a university-wide survey that is given to the students after the conclusion of the course to judge teacher performance. First, the survey is not course-specific or specialization-specific, and second, students are not really trained to provide the most accurate information. For example, if I grade them generously, my student evaluations would be really good. However, the course assessments need to follow the normal curve, which means only a few would get an A, most would get a B, and only a few would fail the course. Though I would personally not fail the students.
The most amazing part of your blog post is that you want to quantify student learning at the school level. That is really unheard of. And the impression I get from the post is that you are the only one thinking along these lines.
My suggestion is that you prepare a survey questionnaire for students. You can make it interesting for them, especially since you are teaching them the English language, so that they wouldn’t notice it is actually an assessment of their learning. For each question you define certain codes. For example, a code for whether a student has linked class learning to practical aspects of daily life. It could be student sensitization to multiculturalism (which is so important in today’s US). The quantification codes stay with you, unlike in a regular survey. What students would have on a piece of paper is questions disguised as really interesting prompts and activities.
It’s similar to computer programming, where you define codes behind the scenes. Then you can further quantify student learning. Though my suggestion is just an idea about quantification through a pseudo-survey method, the real tip is that it is up to you to decide what controls you want to introduce based on what learning outcomes you expect from your course.
I must say you are a really great teacher. Best wishes.