Education

Feedback in the “Real World”

Like many classroom teachers, I’ve worked in a variety of non-educational jobs over the years.  From pub trivia host to tour guide, each role gave me an opportunity to develop skills that made me a better educator.  Working at the Apple Store is probably the best example of this: being a part of a huge, modern, progressive technology company has shown me what the workplace of the 21st century will look like for many of my students.  It constantly reminds me of the difference between what we teach in public schools and what employers seek.  The most important lesson was the critical role of feedback.

Historically, feedback has been something provided by managers to workers, flowing downhill as if pulled by gravity.  Schools have mimicked this flow: teachers evaluating students and delivering suggestions for improvement.  In contrast, at Apple there is an intentional and pervasive climate of feedback by and to everyone.  Employees at all levels and with any amount of experience are required to approach one another, ask for permission, and use a structured protocol to describe what they have observed and the impact that it has had.  The result is a powerful climate of constructive criticism, meaningful praise, and eager self-improvement.

Returning to my classroom after spending time in that environment, I was faced with the hard fact that my students resist feedback.  They see it as evaluation more than as an opportunity to improve.  They cringe at criticism and respond with reflexive words of defense, like “Yeah, but…” and “I tried that”.  Feedback from peers is met with even more pushback: like most adults, students see criticism as something provided by the “ones who know” to the “ones who don’t know”.  Assessment is something that experts do.

Over time, I began to see that the importance of learning to give and receive feedback trumped the challenges of changing student perceptions.  As with so many much-needed changes to grading and assessment, students and parents have been conditioned to think a certain way… and they are wrong.  We cannot simply acknowledge their resistance to change and give up.  We must push forward to practices that improve learning and develop responsible citizens.

What is the role of feedback in your classroom?

image from Smashing Magazine, used with permission

Education

Locus of Control

“Better policy would focus on school and teacher inputs. For example, we should agree on a set of clear and specific best teaching practices (with the caveat that they’d have to be sufficiently flexible to allow for different teaching styles) on which to base teacher evaluations. Similarly, college counselors should provide college applicants with guidance about the components of good applications. Football coaches should likewise focus on their players’ decision-making and execution of blocking, tackling, route-running, and other techniques.”

From a recent post by Ben Spielberg (hat tip to Larry Ferlazzo for sharing it) in which he uses quotes from Nate Silver’s “The Signal and the Noise” to destroy the idea that teacher evaluation should be based on short-term data like test scores.  I loved Silver’s book and I enjoyed the connection here.

Education

Quantifying for Improvement [CROSS-POST]

For the past six months, I have been maintaining the Seize the Data! blog that I created as part of a team of amazing teachers.  The blog is a forum for discussing all things #eddata and #assessment, and also a voice for the powerful data literacy curriculum that we developed with the help of NCCAT, NC DPI, and a data literacy expert.  The post below is a “reprint” of my most recent entry on that blog, because I thought that the Scripted Spontaneity audience might enjoy the topic. 

from Fitbit.com

Personal data collection is becoming a major industry in America.  My wife just bought me a Fitbit Force–a Bluetooth-enabled wristband that tracks movement and captures exercise–to encourage me to be more active.  When I first used it, however, I feared the same effect that happens in our classrooms around data collection.

Sometimes, schools are so narrowly focused on creating and administering assessments that the tests themselves become the goal.  So much attention and effort goes into writing and deploying the tests that, by the time teachers receive data, the results are largely summative and irrelevant to their instruction.

My Fitbit Force, however, continuously communicates with my iPhone to share current data, and then it shares those data with other services like MyFitnessPal.com.  On MyFitnessPal, I can log my diet using an amazing database of foods–commercial and homemade.  I can even scan the barcodes of store-bought foods to enter them into the system.  MyFitnessPal develops a weight-loss plan for me based on calories per day, and then shows me progress during the day.  I can adjust my meals and exercise as my day goes on to ensure that I stay below my daily maximum.
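The adjust-as-you-go loop I describe is, at its core, just running arithmetic.  Here is a minimal sketch of that idea in Python — the numbers and function name are mine, not MyFitnessPal’s actual model:

```python
# A hypothetical version of the daily-budget feedback loop described above.
# The 2,000-calorie budget and meal values are illustrative only.

def remaining_calories(daily_max, meals, exercise_burned):
    """Calories still available today: budget minus food logged, plus exercise."""
    consumed = sum(meals)
    return daily_max - consumed + exercise_burned

# A mid-day check: two meals logged and a 300-calorie walk completed.
left = remaining_calories(2000, [450, 700], 300)
print(left)  # 1150 -- plenty of room to adjust dinner and stay under the max
```

The point of the sketch is the timing, not the math: because the check runs continuously, there is still time to change behavior before the day (or, in the classroom analogy, the marking period) is over.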

Imagine if we could do the same for students and teachers: assess them continually and unobtrusively, compare the data to goals that we helped to create, and then provide meaningful feedback to improve mastery (or, in the case of teachers, instruction).  The technology to do this is within our grasp, and should become much more common as the cost comes down.  But, ultimately, this is what good, thoughtful teachers do already.  They observe and monitor student learning, adjust lessons to meet their needs, and aren’t surprised by the results of big tests.

That’s why Seize the Data! is a great first step for teachers who want to know more about how to collect the right data and make the best use of it.  The Winter Semester of our online course is underway, but we are constantly scheduling blended learning and workshop-style sessions for schools and districts who want to bring this critical knowledge to their teachers, free of charge.  You can sign up at our Registration page, or come back in February to join an online cohort as they form for our Spring Semester.

Education

Grades as measurements

The following post was originally published on SmartBlogs Education on January 23, 2013:

I have gone to great lengths in my classroom over the past few years to teach my students everything I know about grading and assessment. Why? Because I am trying to dispel the notion that a grade (all by itself) is an accomplishment. I want them to understand that learning is the goal. Grades exist simply to communicate the amount of learning.

Convincing my students, however, is easier than convincing their parents, other teachers, administrators and community members. It seems that everyone has bought into the idea that a good grade is an achievement that should be rewarded. It’s common sense, right? To earn an “A”, students must have worked hard and sacrificed, and we want to encourage that kind of character. We compensate students with sports eligibility, scholarships and plaques for academic excellence. In some families, there is even a financial reward.

Why do we do this? Well, the answer is simple. We learned in our Psych 101 courses that if you want a behavior to occur more often there must be a positive consequence when it does. Put aside for a moment the findings of Daniel Pink and others that this sort of operant conditioning only works for simple tasks. The underlying problem is that a grade is not an accomplishment. It’s a measurement.

Consider this: would you give your daughter a prize for being an inch taller at her annual check-up? Would you clap a student on the back and praise him for having a body temperature of 98.6 degrees? Of course not, because these measurements are seen as important information that a medical expert will use to diagnose and treat problems. So, why don’t grades work the same way?

The easy answer is that we have created this monster. As parents, we have incentivized our children to earn better grades. As teachers, we publicly recognize the best scores. As school leaders, we herald the honor roll. We create intense pressure among nearly all of our students to earn the highest marks.

This pressure breeds negative behaviors. We see students so focused on earning an “A” that they stop thinking in creative ways. Students begin to undermine each other to improve their rank, rather than developing collaborative skills. Cheating becomes rampant in a world where all that matters is the letter on the report card.

All of this can be seen in a typical classroom, especially near the end of a marking period. Students who slacked off for weeks beg for extra credit. Those who have not demonstrated superior content mastery try desperately to find a way to excel. Unintended lessons supersede the important ones: Effort is more important than mastery, appeasing the teacher is better than studying, and if I can’t turn my “F” into an “A” there is no reason to try anymore.

So, what’s the solution? In my classroom, it comes down to re-education. I train my students to understand the value of assessment. They know that formative assessments help me (and them) to understand their weaknesses and address them. They see the value of improvement over the absolute mastery level. They begin to see each test as a check-up, not a challenge.

Obviously, I can’t change a system that values letter grades so highly. But, I can help my students value my feedback and their own growth over the fleeting thrill of an “A”. And I can look on with satisfaction when they begin to care about their own progress without rewards or consequences from anyone else.

photo credit: timsamoff via photopin cc

Education

Efficient Assessment, Part 3: The Decisions

In previous blog posts, I’ve laid out the questions I ask myself when designing an assessment and the ways in which I collect student mastery data.  This month, I answer the question “What do I do with these data?”

Few in the standardized-test-driven galaxy in which public education lives these days would argue that we don’t have enough information about our students.  Those who aren’t crazy about the data from state exams can always collect their own information directly from their students.  Personally, I prefer these data because they are more targeted at the knowledge and skills that I know my students need to have and that I have been teaching in my classroom.

But, getting the data isn’t the most important part, at least in my opinion.  The magic, of course, is in what we do with the information we have.

I am fortunate to be a member of a pilot program to train North Carolina teachers in Data Literacy.  My cohort is helping to develop the lessons that will be used to help classroom teachers all over the state better understand data and assessment.  This has got me thinking a lot about this difficult issue.  I mean, in a perfect world, I would have plenty of time (plus smaller classes and extra teaching help) to provide the remediation that my assessments indicate is needed.

In the real world of public education, however, we are constantly asked to “do more with less”.  We are responsible for the learning of every single student no matter how differently they are prepared, how differently they learn, or how hard they work.  How we use our assessment data is one of the few things we can control.

So, here is what I do with mine.  First, I group students using my data.  This usually translates to a Masters group (on or above grade-level) and a Developing group (below grade-level proficiency).  I provide self-paced enrichment for the Masters, which sometimes involves preparing review materials for others or tutoring members of the Developing group, and I personally provide extra instruction to the Developing group.  This can mean Study Island review activities or hands-on experiences with topics that the whole class didn’t get the first time.  Above all, I seek to provide another way for these students to experience the concepts and another chance for them to understand them.
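The grouping step above is simple enough to sketch in a few lines of Python.  The 80% cutoff and the student scores here are hypothetical; any mastery threshold would work the same way:

```python
# A sketch of splitting a class into Masters and Developing groups by
# mastery score. Names, scores, and the 0.80 cutoff are illustrative only.

def group_students(scores, cutoff=0.80):
    """Return (masters, developing): students at/above vs. below the cutoff."""
    masters = [name for name, score in scores.items() if score >= cutoff]
    developing = [name for name, score in scores.items() if score < cutoff]
    return masters, developing

scores = {"Avery": 0.92, "Blake": 0.65, "Casey": 0.81, "Drew": 0.74}
masters, developing = group_students(scores)
print(masters)     # ['Avery', 'Casey'] -> self-paced enrichment
print(developing)  # ['Blake', 'Drew']  -> extra instruction
```

In practice the cutoff would come from the grade-level proficiency standard rather than a fixed number, but the mechanic is the same: one pass over the data, two instructional paths out.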

The second way that I use my assessment data is in planning further whole-class activities.  If the data show me that there is a misconception or gap in understanding that is widespread, I can focus on clearing up that confusion.  Often, this is best achieved by providing a demonstration or video that makes the point in a way that is surprising.  Students will frequently remember a visual experience that has changed their thinking.

The last way that I use the data from my assessment is in determining a student’s grade in my Science class.  Summative data are my least favorite, as I’m sure they are for many teachers, and I resist the urge to stamp a final grade on any assignment until deadlines beyond my control force my hand.  But, ultimately, it is important for students, teachers, and parents to know what level of mastery students have achieved.

For me, that is what assessment data lead to: differentiation, better lesson planning, and summative grades.  The beauty of good classroom assessment, however, is that there is so much more that a skilled teacher can do with this information.  The key is to collect it, share it, and use it.  Don’t give a test just because you’ve finished teaching something.  Assess early and often, and use the data with students to help them improve their understanding.

What do you do with your student assessment data?

photo credit: prayitno via photopin cc

Education

Efficient Assessment, Part 2: The Tools

Last month, I posted a short explanation of the thinking that I put into constructing classroom assessments.  This month, I take those ideas into practice.

I am definitely an “early adopter” when it comes to new technological tools, especially those that my students can make use of.  When you fuse my tech eagerness with my obsession over grading and assessment, you get a single-minded drive to explore all sorts of digital assessment tools.  If there is an online gradebook application, student response device, or new-fangled student data product that was developed over the past decade, I have tried it.

In my last post, I mentioned Validity, Reliability, Authenticity, and Efficiency.  In evaluating tools that either students or I can use for the purpose of classroom assessment, I look for Value.  I want to know what a tool offers me, in terms of saving me time/effort, or what it offers my students, in terms of access, ease of use, and accuracy.  What does this tool cost my school (or, more often, me) per year?  In short, how is this better than a pencil and paper quiz in class?

I’ve warmed up to several digital tools that I use on a regular basis.  As you will see, some of them have been in my toolbox for years and others are brand-new.  All of them have passed the test of providing value and each offers a different advantage.


Education

Efficient Assessment, Part 1: The Questions

Assessment is one of those aspects of education that even good teachers can take for granted.  We prepare our students for state and local high-stakes tests, but we also write our own tests and quizzes.  We use student work to gauge mastery, and we question students during class to ascertain their weaknesses.

We do all of this on a daily or weekly basis, but how often do we really think about what we are doing and why?

Along with my obsession for grading practices, I have a little “problem” with assessment.  While some teachers reuse their tests from year to year, I always start with a blank page and create them anew.  I perseverate over the minutiae of each quiz.  I write and rewrite and re-rewrite each question.  It’s not healthy.

Would you like a glimpse inside my crazy head?  Let’s assume you’re nodding your head.

When I’m planning an assessment, I try to consider several factors:

  1. Validity: How closely does this match what I want students to learn?
  2. Reliability: Would all students with the same level of mastery get the same score?
  3. Authenticity: Are students asked to demonstrate the actual skills that I want them to master?  Will the results really tell me what they know?
  4. Efficiency: How much time and effort will it take to capture these data relative to their utility?

For me, validity is the biggest reason that I create my own assessments.  I know what is on the state curriculum and I know how I have interpreted it for my class.  I can ensure that my assessment provides useful information about my students’ mastery.  Reliability comes from removing bias and making the questions as clear as they can be.  Assessment isn’t about playing “gotcha” and rewarding those who can decipher the clues.  It’s about measuring curriculum mastery.

Authenticity is a spectrum that extends from super-simple, multiple-choice quizzes on one end to performance assessments (like lab practicals or oral questions) on the other.  In my experience, authenticity and efficiency are constantly in opposition.  The most authentic assessment that a classroom teacher can reasonably use is going to consist of short-answer or essay questions in which a student must demonstrate their understanding (with no lucky guesses possible).  But, this is exactly the type of assessment that is incredibly labor-intensive to score.  Ask any Language Arts teacher and you’ll hear horror stories of grading essays that take 10-15 minutes each.

However, on the other end of the spectrum, the easiest type of assessment to grade requires no teacher judgment at all: the multiple-choice test.  This assessment provides quick data, but at what cost?  How much can you trust the results of an assessment like this?  How many responses were just the result of good luck, not true understanding?

These questions and more run through my head whenever I plan lessons.  Is it any wonder that assessment keeps me up at night?  In my next post, I’ll explain how new assessment tools like edmodo, MasteryConnect, student responders, and Socrative fit into my assessment strategy.

photo credit: Marco Bellucci via photopin cc