Tuesday, October 30, 2012

Making Accountability Count


Redefining American High Schools for the 21st Century - Part 2

In the first installment of this series, I talked about how high schools have a two-track system: advanced courses for the top students, and a one-size-fits-all program for everyone else.

The public is currently screaming for increased accountability, so what measures do we use to hold our education systems accountable? The fact that the answer to that question is nearly always multiple choice tests is another testament to the true nature of our educational philosophy. There is no doubt that some multiple choice tests can, to a degree, measure higher levels of thinking, but these tests are reductionist in nature, sacrificing subtlety of interpretation for ease of scoring. Worse, basing accountability on multiple choice tests encourages, perhaps even forces, a reductionist style of instruction that focuses on distinguishing among concrete possible answers rather than on formulating solutions from disparate information.

This has come about because, in response to the call for greater accountability, we have put the cart before the horse. The proper sequence would have been to determine what students need to know and be able to do when they go out into the real world, and then, based on that information, to design assessments that appropriately measure the degree to which students can demonstrate that knowledge and those skills. Instead, we took the kinds of assessments commonly in use, then formulated standards that described what students needed to know in order to score well on those tests.

Of course the thousands of educators who sat on those hundreds of committees and formulated standards will disagree with this description. I, however, find it impossible to look at the standards we’ve been saddled with and not be reminded of the old adage that if the only tool you have is a hammer, everything looks like a nail.

In fairness, these committees probably did try to start from what students needed to know and be able to do. However, it was, and is, all too easy to let the tool we already have in the toolbox–the multiple choice test–confuse us into believing that what this kind of test measures, because it is what we are accustomed to measuring, is what students need to know and do. 

So, to get the cart back behind the horse, what do we want students to know and be able to do?

If we believe that what students need to succeed in life is the ability to recognize the correct solution among a limited number of options, then multiple choice instruction and accountability form the right system.

If, on the other hand, we believe that students will succeed in their future careers because they can process information from a variety of sources, recognize and connect patterns, predict future trends and applications, and communicate results in an appropriate format, then we ought to recognize that whatever improvement current accountability systems might produce, the results are illusory. In an increasingly complex world, it is more important than ever to teach students to think, then measure their thinking with methods that ask them to make their thought processes tangible through some kind of performance.

There are plenty of models, in use now or in the recent past, for performance-based assessment of many different types. For example, the California High School Exit Examination currently includes a writing sample as part of the graduation requirement. Students respond to a task drawn from one of five types of writing. Their responses are scored by teams of teachers: the scorers meet, review a rubric that specifically describes student performance at several levels, then calibrate by reading a common set of sample papers, assigning scores, and discussing why they assigned them. Each student paper is then read by two teachers, with a third scorer weighing in if the first two scores are too far apart. This process, despite its subjectivity, produces remarkably consistent scores.

However, it is time consuming and more expensive than multiple choice assessment. 

It is a human tendency, I think, to be attracted to simple answers to complex problems. Those simple answers mislead us into thinking we have solved the problem, which is almost always not the case. 

Complex measures, like performance assessment, can be messy, but will produce much more nuanced results that tell us more about the real-world capabilities of students. We just have to be willing to pay for them.

Future posts will discuss:
  • Does Every Student Need College?
  • Re-energizing Career and Technical Education
  • High Level Thinking Isn’t Just for College
