ToDaY's MeNu - Ted

Friday, January 22, 2010

Fix the Test

Over the years I have read in newspapers across the country, from Florida to California to New York, from Texas to Minnesota, about the mistakes made by test makers and scorers that have affected the graduation or promotion of students. How do we know these tests are valid and useful for our schools? Because these tests are proprietary, we don't have easy access to them before or after they have been given. Tests are given, scored, and then destroyed. Something is terribly wrong when the impact of these tests matters so much to so many people and they are still flawed. If we are going to use these tests for important decisions, one decision we should make is to reexamine the instrument(s) we use for assessment.

Let's start with the recent firefighters' case in New York involving the "biased" test. I think education may see a court case similar to it, because this case raises the issue of testing bias. "In his ruling on Wednesday, the judge found that the city intentionally discriminated against blacks in using those tests and in ignoring calls over the years to change the testing procedure. The suit was brought by three people who took the test and by the Vulcan Society, a fraternal organization of black city firefighters." This is the crux of the matter: the test was biased and therefore bad. I have heard complaints over the years about test bias in schools. I have seen test questions eliminated because of bias. I have always been conscious of bias in my own tests: if the results are bad, then something is wrong with the test. The courts in NYC decided this and did something about it. Perhaps educators should follow the firefighters' lead. As a nation we have never looked at the test as the problem. Instead we blame the schools, the teachers, the parents, the students. Never have we blamed the test or questioned its validity. Perhaps we should examine the tests as we rethink education in this country.

An example of mistakes being made can be found in Kansas: a simple mistake, but we have seen bigger ones over the years. The point is that we are placing far too much reliance on a test instead of on portfolios. If we are going to use these tests, we need much better oversight by the states.

New York's ELA exam has changed radically over the years, and it is about to change again in January 2011. Here is a chance to reexamine the test. Race to the Top has added much pressure on the states and very little on the test makers. We need to change this way of thinking. Students at Drexel University are using a real bridge to help in their assessment. Real-world situations are the best forms of assessment, not tests created by disinterested parties.

Perhaps the final nail in this discussion has to come from Edweek's annual "Quality Counts 2010" issue. Discussions about national standards and methods of assessment dot the issue, with arguments from E.D. Hirsch Jr., Alfie Kohn, Diane Ravitch, and Nel Noddings. Testing doesn't work; the tests are bad, wrong, and proprietary. We need to reexamine the test and our sole reliance on one instrument of assessment.

The matter of education in this country has always been a personal one since it involves all of us one way or another. Since it is personal, perhaps we should take a more personal, subjective approach to assessment rather than the failing objective one. First, I would recommend a better assessment of the test itself; second, I'd recommend using efolios of student work in assessment, either in conjunction with the test or alone. We know the test alone is inadequate. Let's think digits, not atoms.

For further reading: From Edweek, "Quality of Questions on Common Tests at Issue"

1 comment:

TeachMoore said...

"Fixing" the tests is the very least we could do if we are going to continue to use them or expand their use for high stakes decisions. Better, as you suggest, is to reduce our dependence on them in favor of more comprehensive evaluations of student growth and performance. But both these options presume we actually care more about the students than those who are profiting from the tests.