14 Comments

Yes and no.

Okay, what's the yes, and what's the no?

I think about this a lot. In school we set up these experiences so that we can measure progress based on specially tailored metrics, and in some sense, that is learning. I shifted to teaching through an outcomes-based model this year. It went well; there was a lot of learning by this definition. And yet, does that exclusively describe what learning is? At another point in my career I trained as a Montessori middle school guide. That theoretical lens defines learning much more expansively. But does it exclusively define what learning is? Probably not. And so I keep searching and teaching and learning.

Amazing, John. You are teaching young kids. I read your blog post, and you are already familiar with various quantification tools. Yes, grades really don't speak to learning outcomes. You are also familiar with the survey method. And yes, you conclude that you want the assessment hidden from the students, so that they don't feel the perception and pressure of being monitored on top of formal exams.

Well, I think the best suggestion I can give, one that is foundational to quantification, is that the teacher has to act as the control variable. What I mean is that it is you who best knows what learning outcomes you want from the course. We practiced this intensively at the university where I was a professor. First we prepare a course outline and define the learning outcomes for each topic and each lecture; then we link them to the course-level learning outcomes. My university also prepared a university-wide survey, given to students after the conclusion of the course, to judge teacher performance. But first, that survey is not specific to the course or the area of specialization, and second, students are not really trained to provide the most accurate information. For example, if I grade generously, my student evaluations will be really good. However, the course assessments need to follow the normal curve, which means only a few would get an A, most would get a B, and only a few would fail the course (though I would personally not fail students).

The most amazing part of your blog post is that you want to quantify student learning at the school level. That is really unheard of, and the impression I get from the post is that you alone are thinking along these lines.

My suggestion is that you prepare a survey questionnaire for students. You can make it interesting for them, especially since you are teaching them the English language, so that they wouldn't notice it is actually an assessment of their learning. For each question you define certain codes: for example, whether a student has linked class learning to any practical aspect of daily life, or to sensitization about multiculturalism (which is so important in today's US). The quantification codes also stay with you, unlike in a regular survey. What students would have on a piece of paper are questions disguised as genuinely interesting prompts and activities.

It's similar to computer programming, where you define the codes behind a paywall. From there you can further quantify student learning. Though my suggestion is just one idea for quantification through a pseudo-survey method, the real tip is that it is up to you to decide what controls you want to introduce, based on the learning outcomes you expect from your course.

I must say you are a really great teacher. Best wishes.

We could borrow from the corpus studies used in linguistics: we could measure the use of different sentence structures, particular markers (verbs used in narrative citation, citations per 1,000 words, hedges or markers of certainty, etc.), range of vocabulary, use of the first person, and so on. All of these are quite revealing when seeking to understand writing. And good writers usually draw on a large array of strategies, depending on their rhetorical aim, so students who read widely and have learned to experiment with their writing are able to be flexible in their use of these strategies.
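To make the idea concrete, here is a minimal Python sketch of what such corpus-style metrics could look like. The marker word lists (`HEDGES`, `CERTAINTY`, `FIRST_PERSON`) are purely illustrative placeholders, not established inventories, and the regex tokenizer is deliberately crude; a real study would use validated marker lists and a proper NLP tokenizer.

```python
import re

# Illustrative (hypothetical) marker lists -- a real corpus study would
# use established inventories of hedges, boosters, etc.
HEDGES = {"might", "may", "could", "perhaps", "possibly", "seems", "suggests"}
CERTAINTY = {"clearly", "certainly", "definitely", "obviously", "undoubtedly"}
FIRST_PERSON = {"i", "we", "my", "our", "me", "us"}


def per_1000(count, total):
    """Normalize a raw count to occurrences per 1,000 words."""
    return 1000 * count / total if total else 0.0


def writing_metrics(text):
    """Compute a few simple corpus-style metrics for a piece of writing."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(words)
    return {
        "words": n,
        "avg_sentence_length": n / len(sentences) if sentences else 0.0,
        # Type-token ratio: a rough proxy for range of vocabulary.
        "type_token_ratio": len(set(words)) / n if n else 0.0,
        "hedges_per_1000": per_1000(sum(w in HEDGES for w in words), n),
        "certainty_per_1000": per_1000(sum(w in CERTAINTY for w in words), n),
        "first_person_per_1000": per_1000(sum(w in FIRST_PERSON for w in words), n),
    }


sample = "We might argue that this clearly works. Perhaps it does. I think it may."
print(writing_metrics(sample))
```

Tracked over a series of drafts, these numbers would show whether a student's repertoire is widening, which is the flexibility the comment above describes.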

I know you’ve already said this - your point 4! - but I thought it was worth expanding on why these measurements might be a valid assessment of a certain type of learning.

What if teachers measured students qualitatively instead of quantitatively? I'm a big fan of student reflections, empathy interviews, and individual student conferences. It would be cool if teachers were given time (and training) to code these interviews for themes that could be used to measure student learning (e.g., Grounded Theory).

Your thoughts resonate with me. In my experience with online learning, I believe that all the quant could (should?) be measured mechanically, without interrupting or distracting the students. This would leave the tutors free to focus on teaching and on the qualitative assessment, bringing the human touch and genuine value. Once the quant and qual are combined, we should see genuine, balanced assessment.

What an incredible idea. Next thought: what even is learning? I think about this a lot, as I have the flexibility and desire to tailor lessons to my own child as her homeschool teacher. I have removed her from the field of "assessment" for the time being, yet I am constantly monitoring her learning. I want to know what works and what doesn't, and I am highly motivated to find the paths of least resistance, for both of us. I have a set of values that are my main aim in directing this ship, and she has her own journey of learning that is ultimately the most important part of this endeavor. Observation and asking her what HER goals are have proven to be the most important parts of the equation. We cannot measure someone else's learning based solely on our (the facilitator's) goals for their learning. Have they accomplished what they wanted out of this class, out of this practice? Have they learned something more or less valuable than they set out to learn? What learning have they done, and do they value it now and in the future as their life unfolds? I have no idea if this is quantifiable, but it is an incredibly important factor of learning.

My instinct is to be worried about your instinct "that all of these things have to be happening in the background, largely hidden from students, maybe not even shared with them so as not to disturb their natural progression as they engage with the writing experiences."

The impulse to quantify learning is often demanded and always implicated in the bureaucratic machinery of schooling as it does its work of ranking and ordering students. Even if we hide the quantification from our students or from the bureaucracy, the question of what potential use a measure will be put to is worth considering.

Measures of learning often begin as benign and student-centered. Think of Alfred Binet and Théodore Simon, who developed what many regard as the first IQ test in order to determine how best to help students with learning disabilities. Or think of the potential uses of the measure of your own time spent writing blog posts. For you, it was enlightening and useful. In the hands of an editor with a stable of bloggers, it could become part of a system for the efficient production of content on a fixed schedule.

I completely agree that Campbell's law is an important context for thinking about educational measurement. With that context in mind, I think we are obliged to let our students in on how this works, including the risk that any measure we develop in partnership with them may end up used in unintended and damaging ways.

My default mode with students is to attempt to practice maximum transparency and attempt to provide them maximum agency over their own work, so I don't want to give the impression that I think we should actively hide anything from them, but in terms of data there's also various ways of framing that data in ways that are more or less useful. I don't need to know the specifics of something like my annual bloodwork on a physical, but it's a great comfort to know that everything is in the normal range. I also think a primary goal should be to help students develop their own indicators of progress so they're rooted in what's meaningful to them. This is a key part of how I use my framework of "the writer's practice." I want them to be thoughtful about how their own practices are developing over time, partly so they can address what they perceive as needs, but also because it's good for them to understand that they're making progress.

The progress I made on my blog post speed was something I only noted literally years after it happened, because my goal was never to get faster at writing blog posts. If someone had incentivized faster blog posts, I feel like I would've both allowed quality to decline and increased my anxiety over how long it was taking. So in a way, that metric was hidden from me, or at least deprioritized in a way that allowed me to develop that skill organically, in the absence of actively caring about it.

Couldn't agree more that we have to let them know how this stuff works, but if we're going to do that, we also have to give them the space to shape or change the game. For me, alternative grading helped a lot there, but this is all at the class level. I don't really know how to solve the system-level issues.

The system-level issues are where my head is at right now. I'm halfway through The Ordinal Society, which is helping me understand what's changed in the past 20 years and how LLMs may change things some more. It is also calling my attention to how the digital information economy produces bureaucratic systems with a different logic than those produced by earlier information economies, so bureaucracies find ways to use data in new ways, some harmful to the interests of the people who may have given it away freely.

Even without the Dewey quote on your about page, I would have the sense from your essays that you are student-centered in your approach. As I get ready to jump back into the classroom this fall, I need to get my head into questions of teaching practice.

These different levels and types of analysis are interesting to me. When I published my book of curriculum (The Writer's Practice), an academic acquaintance told me that if I wanted to see it REALLY sell, I'd get busy designing a study that proved it "worked." I already knew it "worked" because I'd witnessed it and collected testimony from hundreds of students, but because I was slow on the uptake at the time, he explained how systems weren't going to care about all of my fuzzy qualitative findings. He understood that qualitative data is still data, but he also knew that the system couldn't handle qualitative data as proof that it "works."

I spent some time thinking about what kind of study I might design, but everything I came up with was rooted in student self-reporting rather than any way to measure efficacy through their written artifacts, because any attempt to standardize the artifacts corrupted the development of their writing practices (which are invariably rooted in individuals). I was working in market research in the late '90s as big data approaches to analyzing consumer sentiment (based on shopping behaviors rather than surveys) were getting rolling, and I remember how people who understood more about the math and stats than I did thought we might be able to create truly predictive models. And we did, for aggregate behavior, but a guy I worked with had a mantra, "Individuals are not averages," as a caution against drawing conclusions down to the individual consumer level.

I think about that a lot in education, because ultimately I think a student's education belongs to them, but this leaves the challenge of organizing systems around those individual needs.

"Individuals are not averages" is an excellent mantra. In case you missed it, Alfie Kohn had a great blog post a few months back on the topic of "evidence-based" instruction: https://www.alfiekohn.org/blogs/evidence-based/
