There’s something I need to get off my chest. There is no such thing as the “science of writing.” There isn’t such a thing as the “science of reading” either. When you see the construction the “science of…” applied to education, you are looking at a marketing term, not a reflection of anything resembling what most of us think of as scientific inquiry, with its standards of proof for claims.
Hello John, scientist here. I certainly concur that the "science of writing" is a nonsense, particularly as it involves human cognition and emotions, processes for which neurobiologists have very little understanding. That being said, it certainly is feasible to use approaches used in science (making a hypothesis [eg role of phonics] and then empirically testing it [harder in a classroom than in a test tube]) to investigate various forms of pedagogy. In many scientific fields such as mine (ecology), the 'rules' are probabilistic rather than deterministic and from your discourse that seems likely for pedagogy here.
As I say, this is what I do in my own teaching: I experiment by hypothesizing and changing variables. But using these methods does not translate de facto into a science. The difficult part of measuring these things in education is that there is very little agreement on what we actually should be measuring in terms of outcomes. Even the broad purpose of an education is a contested space. Now, apply that down to a specific discipline or individual course and things get even more complicated.
Outside policy people complain that we don't have enough answers about teaching, but that's because the measurements are impossible. I wrote this piece a few years ago in response to some pontificating by two figures who have far more influence than most over the direction of education in the US to try to illustrate why the questions they're asking are the wrong ones. https://www.insidehighered.com/blogs/just-visiting/why-measuring-teaching-success-so-complicated
Yes, I saw that re your methodology. Recognize that philosophers still wrist-wrestle over what constitutes a science (NB I am not at all arguing that writing is). And your comment on outcomes resonates re my discipline of ecology. Many of the most fundamental questions lack definitive (which often means mechanistic) answers.
Yes, to me, ultimately science is a process as much as anything. The "science of..." frame as a marketing term wants to suggest things are "settled," but lots of things are never settled.
Indeed. And in biology (as I would tell my students), as one layer is elucidated, new deeper layers of complexity are revealed.
I did cringe during the pandemic every time I heard "trust/follow the science." That experience blew away supposedly 'settled' matters and illuminated many new ones concerning our immune response and its side effects.
I was in grad school in the late 90s when the whole field of English studies, including the youthful composition studies, shifted to a scientific (read: quantitative) model from a more nuanced and textual (read: qualitative) model. Huge disappointment then and now for me.
I understand the pull of wanting to figure something out with some definitiveness, but it does surprise me how far down that quantitative road some folks were/are willing to travel. I don't want to work entirely in the dark, but maintaining some measure of mystery is part of the fun.
Experimentation is key. However, to some administrators, doubting yourself and trying new things screams incompetence.
I remember my third year of teaching, when I shifted towards a writing workshop model. My administrators held several "you're in trouble" sit-down-style meetings. My sin? If you're experimenting, you don't know what you're doing. So do "what works." Less writing, more multiple choice, they explained. More standardization. (Teaching basic email writing? That's a *business* standard. Out of bounds! That earned me a verbal reprimand.) So I quit teaching after that year, but later came back.
On a side note, I openly balk when writing is scientifically "tested" with multiple choice. Let's keep a straight face and analyze football using hockey metrics. Goodness gracious, this profession is riddled with category errors!
This is a testimony to the really pernicious effects of a system that fails to respect teacher autonomy and creativity and has all of the wrong incentives. It sort of breaks my heart to hear. I wrote "Why They Can't Write" for those administrators as much as anyone, but I just don't know how to break that cycle that you experienced.
For what it's worth, when I returned seven years ago, I had great administrators who have been nothing but supportive. Switching environments helped. As I look over, "Why They Can't Write" and "The Writer's Practice" sit on my shelves, and have been great influences. (I won't lie--author and reader dialogue on Substack still blows my mind!)
As for my old bosses, some true believers function on sheer appeal to authority. They say "best practice" but follow fads. I've got a post jostling around about the topic, but out of respect *for* those folks, even if it was nearly ten years ago, I'd like to write in the most respectful way possible.
In the meantime, when you write about having the right classroom composition, many don't understand that point. Even if we know what we need--regardless of level--we're supposed to work miracles with far less than perfect starting points. Learning never happens in a vacuum!
Interesting points here, especially noting the use of "science" as a marketing term.
AMEN! Thank you!