Formative Pain in the Assessment

 

This month marks a special anniversary for me.  It’s been four years since I started using the Developmental Reading Assessment (from Pearson) in my classroom.  Back then I was bright-eyed and full of undirected/misdirected teaching energy.  I accidentally signed up for a PD day that “trained” me on how to administer the DRA.  That accident led to piloting this assessment for my district, followed by the adoption of the tool district-wide.  As is so very often the case in my district, the few of us who had gotten any sort of training were then put in the position of passing it on to many colleagues in whatever pale imitation we could manage of a day-long training (condensed to 90 minutes). 

And here we are.  The district requires all upper-el teachers to use this assessment tool in the Fall and Spring.  Everyone knows how to administer the testing.  Everyone is required to dutifully record scores in each student’s literacy file and report scores via spreadsheet to district administration.  We’ve all been “on board” about two years now.  Here’s the interesting part–most teachers seem to have no idea what to do with the data collected using this assessment.  I frequently hear comments to the effect that this assessment is useless and that teachers get all the information they need to guide instruction through informal assessment.  My only response to that is:  who the hell do you think you’re fooling, you big fakers?

Don’t get me wrong–I deeply value the informal assessments we are always making with students in every conversation, the perusal of each written piece of work, sometimes just watching surreptitiously as their little faces screw up in frustration/concentration/dazedness during independent work.  But since I’ve yet to hear a single colleague or administrator try to discount the value of wide and varied assessments, this isn’t really at issue.  The issue is whether differentiated reading assessment can help inform and focus instruction for individual students.  Or maybe the issue is giving people a tool, telling them to use it, but not HOW to use it.  Here’s a hammer, build a Frank Lloyd Wright house…

Four years.  It took me four years to go from staring with glossy eyes at the rubrics and “Focus for Instruction” sheets to using the data to inform the scope, sequencing and grouping in my Readers Workshop in a way that very specifically acknowledges on a daily basis the needs of the students I am charged to teach.  To recognize who would need more conferring versus guided reading instruction, what was worth a series of focus lessons versus what could be handled with one or two reminders.  Teaching is slowly turning me into a systems person, and I had to slowly (and painfully at times) figure out ways to analyze and organize this assessment data in a way that spoke to me in clear and resonant tones. 

The upside is I know what I’m doing and why, and I have evidence to base my decisions on, not just instinct and bravado, which is how I now regard my previous “I’ve got it all up in the ol’ noggin” position.  DRA data often confirms what I’m assessing in other ways, formally and informally.  Hooray for me.  More importantly, this data holds me accountable to each student for providing the instruction they actually need, instead of what I most enjoy teaching or find easy to teach.  I hate teaching word attack skills–one of the many reasons I love fifth grade, because most kids come in pretty competent in decoding-type strategies.  But not all, and DRA data shoves that in my face every time I plan my week of Readers Workshop.  Little Susie needs her word attack help, and it’s my gosh darn job to give it.  (It’s a giant snore-fest for me, but that must be why I’m being paid and not volunteering.)

So to the “I’ve got it all ‘up here'” reading teachers, I say–what if that was the attitude and response of your cardiologist, your lawyer, your financial advisor?  Professionals need to embrace accountability in reasonably transparent ways.  These assessments take a LONG time to administer, so we’ve got to make the most of what they tell us, use every drop of data to understand our students, to find those places to nudge them along, and measure their success/failure in some part as our own.   Data like this enhances the art of teaching.  It is one piece of a very complicated endeavor–making a successful reader.

As part of my anniversary celebration, I’ll be sharing my strategies for interpreting and using DRA data with my colleagues at a meeting next week.  I don’t know if I’m doing it “right,” but I know we can’t keep regarding good assessments as the sort of onerous obligation that we (fairly, in that case) hang on the high-stakes state-level tests (in Michigan we call this the MEAP).  We are all so tired, so inundated with work that takes us away from our vocation of teaching children, that I think maybe some really worthy tools, DRA included, got caught in the net of suspicion and acrimony that most of us understandably cast over the constant rainfall of “reform” initiatives. 

Wish me luck as I try to salvage something good from the wreckage.

 


Becoming one-teacher-at-a-time-ish…

Happy FIFTH Snow Day to me…

I finished One Teacher at a Time last week (first post here), while at the same time dipping my toe into the criterion-based assessment waters in both Math (a unit on fractions) and Science (a unit on force and motion).  I started by setting up class lists with the benchmarks heading the top row of a table, leaving room for assessment over multiple activities connected to each benchmark.  I also set up a student self-evaluation sheet and started each unit having students rate their own feelings of proficiency relative to each benchmark.

What I’ve liked so far is the more specific way I’m pushed to introduce content to my class–rather than just a statement on how we’ll be studying force and motion, I gave the specific content we’ll focus on–contact and noncontact forces, balanced and unbalanced forces, etc.  With only a few science lessons completed, it seems like the students have a stronger sense of focus, and so do I. 

One of the worries a colleague shared with me is that all this observational assessment would suck away time to help students.  I’ve found the opposite so far.  As I travel around the room observing students working through, for example, simplifying fractions, I can quickly determine who seems proficient and who does not.  From there, I can pull a quick small group to re-teach, pair more-proficient with less-proficient students for peer teaching, or access materials (like a fraction strip kit) that might help one or two particular students with comprehending the process and content of the benchmark.  And no one gets left out because I’m caught up with the first kid who needs help or the kid who needs constant reassurance about every–single–problem. 

The big issue is the one I predicted–I’m having to re-assess how I’ve constructed lessons, chosen materials and assignments/activities, and sequenced the instruction.  I don’t mean to be whiny and lazy, but this is a LOT of work.  I can already picture what a long summer it’s going to be.  Worthwhile, sure–but lots of work.  Setting up a new gradebook online with all the benchmarks is just the beginning, really.  Re-conceptualizing every unit of study I’ve carefully crafted over the last four years in fifth grade is the real work. Once that is done, implementing will be rocky at first, but given what I’ve read and even the small bit I’ve begun to experience in my classroom, it will have a major payoff in student outcomes.  And I’ll get there, but from my cozy deep-mid-winter blogging chair, it seems like I should take a nap first.


Shifty Paradigms


Last Wednesday I spent a few hours at my ISD with other 3-5 teachers listening to Jane E. Pollock, author of, most recently, One Teacher at a Time.  I thought I was just going to hear a little about standards-based report cards.  Instead, I had my entire practice challenged.  Yes, it was one of those sorts of presentations.  Jane didn’t want to just tell me about a possibility–she wanted me to shift how I teach and assess, basing my practice on criterion-based scoring, where every student is made to focus on their proficiency in each benchmark of the subjects I teach.  She walked in with what to her seemed the unshakable assertion that this paradigm is clearly the one in which I as a teacher can really push the learning curve of my students. 

I held my ground, but have always known it was shaky at best.  It doesn’t take a lot of reflection to know that the traditional grading system is not a framework that promotes learning, or even reflects learning.  The grades in my book tell a lot of stories, many of which aren’t really useful.  I often explain to my students and their parents that grades do not necessarily reflect ability or even learning–they reflect performance on assignments.  Have I always been assiduous in my choice of assignments to be sure they strongly reflect the precise benchmarks my students are responsible for gaining proficiency in?  Heck no.  And I’m not alone by a long shot.  So Ms. Jane comes along and causes tectonic plate movement under my shaky ground. 

I want to cry, partly in frustration over yet another BIG IDEA I have to grapple with in my practice.  But partly in some relief–I knew this grade stuff was kind of bogus all along, but didn’t know what my alternatives were.  So now I’m reading her book, which reflects some of what I heard her say in person.  I just read something Jane mentioned during her talk–that in the ’50s, maybe half of school-aged children graduated from high school.  Jane talked about what that meant for teachers and education–many classrooms were populated by students who “got it”–the wheat had already been separated from the chaff.  Those students who tear us up with frustration and grief were largely not in the room anymore, and teachers were instructing the motivated, resourced kids.  I’m not saying I’d want that, but it does re-frame the idea that kids were smarter “in the old days” and that somehow today’s teachers and students are falling short of their predecessors.  They weren’t counting all the kids–cheaters.

Back to criterion-based assessment–so I see myself with this gradebook in which each benchmark is listed with a number of activities attached, and the students get scored on their level of proficiency (or lack thereof) in each activity/reflection of the benchmark.  And the students are aware of the precise benchmarks we are striving toward, self-assessing as well as being assessed by me as we move through various and sundry compelling opportunities to learn and grow (that part doesn’t change, thank God), and the students’ focus is on the proficiency goal as opposed to the grade stuff.  At the end of a unit, they (and I) know exactly where they stand in relation to each benchmark of learning.  And this magical online gradebook helps me spit out subject grades using all these criterion-based assessments at the end of each marking period (don’t ask about the magic grading program, I don’t know enough to speak to that as of yet).

And the way I find time to do all that business?  By throwing away some of the stuff I was doing before that was not as geared toward benchmark-y proficiency.  I am teacher-enough to admit this stuff exists, and scared enough to admit I don’t want to let go.  But I can’t have it both ways, of this much I am convinced.  More deliberate pedagogical choices must be made for this brave new classroom.  That queasy feeling could be dread or excitement at the prospect.  I’ll let you know.

 

Assessment That Doesn’t Suck

I’m happily plopped on a friend’s couch in Seattle, trying to enjoy the last few days of winter break, but needing to look ahead to the next six months at the same time.  I hate multitasking.  I pulled out my work bag early in the a.m. and started scoring this groovy little assessment I had my students complete before the holidays.  The whole thing is based on work of some other great NWP TCs whose presentation I watched last month in NYC. 

First I had my students draw a picture of themselves as writers, then had them write a supplement to the picture giving any explanation they wanted and asking them to articulate how they feel about themselves as writers.  The purpose was to get a look at their perceptions of what writing is, their attitudes toward writing in general and their writing identities in particular, their metacognition of what happens in their writing processes, etc.  I used a simplified version of the protocol developed by smarter NWP minds than mine.

As time goes on, I see some of what I’ve learned helping me to understand and differentiate instruction more in writers’ workshop, but my first-blush reflection centers on what I learned about which areas I’m teaching well and where I clearly need to make greater efforts.  My students overall have a strong sense of the tools writers use and the need for planning, envisioning, and revising their writing.  The best news for me was that nearly all of my students expressed very positive attitudes toward writing (and nobody thinks it stinks, hooray!).  The expressions of how successful they feel as writers were less clear–I’ve had a sense of needing to do more reflection on writing pieces and processes, and I think a little of that will tell me more about this.

Staring me right in the face is a universal lack of mention of either purpose or audience from my writers.  I know this doesn’t mean they are entirely unaware of these, but given how important knowing purpose and audience is for writers, I see a clear need to do more modelling and instruction and conferring that helps my students increase their awareness of purpose and audience as well as their abilities to articulate purpose and audience in their own writing pieces.

There’s a lot more I have gleaned from my students’ drawings and written reflections, but what I’ve mentioned is enough to keep me very busy for the next while!  I hope to tweak this tool and use it in a few months to see how my students’ attitudes have grown or shifted.  So much of the written assessment I do (voluntarily or not) with my students tends to be a drag for them, but this was enjoyable for most and has provided me with lots of food for thought about their learning and my teaching.  It even helps get at some of those rather ridiculous Michigan GLCEs that insist that students will love writing.   I hate to end ’07 on a cynical note, but it is sadly the case that most imposed assessments are not only a bit painful for the students, but lack much that I find useful in teaching.  Maybe ’08 will find me with more useful, rich assessments that have student learning at the core.