Testing New Standards for Standardized Testing

Can diagnosing students’ misconceptions lead to personalized teaching?

By Joe Levine

What if standardized tests tried to diagnose specific skills and misconceptions instead of simply ranking kids based on their overall ability or performance? Would new tests of this kind provide teachers with the necessary information to personalize instruction based on what an individual student “gets,” and more specifically, on where his or her understanding is breaking down?

That’s the basic premise of cognitive diagnostic assessment, an approach that tries to identify the specific knowledge and skills a student does and doesn’t have. The field of cognitive diagnostic assessment was pioneered by Kikumi Tatsuoka, who retired a few years ago after serving as Distinguished Research Professor in TC’s Department of Human Development. Research on cognitive diagnostic testing continues in the department, conducted by various faculty members, including Young-Sun Lee, Matthew Johnson, Larry DeCarlo and Jim Corter, along with many student collaborators.

Taking cognitive diagnosis a step further, one can ask why a skill has not been mastered—that is, has it simply not been learned, or has it been learned incorrectly? An approach called cognitive error analysis assumes that students sometimes develop fixed, stable misunderstandings of how to perform certain intellectual tasks—say, of how to borrow numbers in subtraction or of the sum of the three angles in a triangle. If a kid really does have such a fixed misconception or “bug,” the theory goes, tests will reveal a consistent pattern of errors that are all of the same type.
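
The “bug” idea is concrete enough to simulate. The sketch below is a hypothetical illustration, not code from the research itself: it implements the classic “smaller-from-larger” subtraction bug, in which a student subtracts the smaller digit from the larger in every column instead of borrowing. The result is exactly the kind of consistent, same-type error pattern the theory predicts.

```python
# Hypothetical illustration of a fixed subtraction "bug": the student
# always subtracts the smaller digit from the larger in each column,
# never borrowing. (Assumes the minuend has at least as many digits
# as the subtrahend.)

def buggy_subtract(a: int, b: int) -> int:
    """Column-by-column subtraction with the smaller-from-larger bug."""
    a_digits = [int(d) for d in str(a)]
    b_digits = [int(d) for d in str(b).rjust(len(str(a)), "0")]
    # abs() takes the smaller digit from the larger in every column,
    # so no column ever triggers a borrow.
    columns = [abs(x - y) for x, y in zip(a_digits, b_digits)]
    return int("".join(map(str, columns)))

for a, b in [(52, 38), (63, 27), (41, 19)]:
    print(f"{a} - {b}: correct = {a - b}, buggy = {buggy_subtract(a, b)}")
# 52 - 38: correct = 14, buggy = 26
# 63 - 27: correct = 36, buggy = 44
# 41 - 19: correct = 22, buggy = 38
```

Every item that requires borrowing comes out wrong, and always in the same characteristic way, while items that require no borrowing come out right. That is why the pattern of errors, not the raw score, is what carries the diagnostic information.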

Corter, Professor of Statistics and Education, and his student collaborators are investigating new ways to construct and analyze tests that can reveal such fixed misconceptions and at the same time identify missing subskills that are critical to performing specific intellectual tasks.

“Standardized unidimensional tests like the GRE mathematics subtest are fine for making high-stakes admissions decisions, awarding certification or serving as the criterion for a pass or fail grade,” says Corter, “but they don’t generally provide a fine-grained assessment of what a kid does or doesn’t know. They don’t tell you how someone arrived at a right answer—whether by guessing, memorizing, cheating or using the targeted problem-solving methodology.”

In a recent paper in the journal Applied Psychological Measurement, Corter and his former student Jihyun Lee use a tool called Bayesian networks to test causal links between the lack of particular subskills and persistent misconceptions or “bugs.” They find that the lack of certain subskills, together with the conceptual misunderstandings inferred to result from them, strongly predicts particular patterns of wrong answers to subtraction problems.
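
The flavor of that analysis can be sketched with a toy model. In the sketch below, the network structure and all probabilities are invented for illustration and are not values from the paper: a missing subskill raises the probability of a bug, the bug raises the probability of a wrong answer, and enumerating the joint distribution lets us work backward from an observed wrong answer to the probability that the bug is present.

```python
# Toy three-node chain: subskill missing -> bug present -> wrong answer.
# All probabilities are invented for illustration only.

from itertools import product

p_missing = 0.3                              # P(subskill missing)
p_bug_given = {True: 0.8, False: 0.05}       # P(bug | subskill missing?)
p_wrong_given = {True: 0.9, False: 0.2}      # P(wrong answer | bug?)

def joint(missing: bool, bug: bool, wrong: bool) -> float:
    """Joint probability of one full assignment, factored along the chain."""
    p = p_missing if missing else 1 - p_missing
    p *= p_bug_given[missing] if bug else 1 - p_bug_given[missing]
    p *= p_wrong_given[bug] if wrong else 1 - p_wrong_given[bug]
    return p

# Posterior P(bug | wrong answer), by brute-force enumeration.
num = sum(joint(m, True, True) for m in (True, False))
den = sum(joint(m, b, True) for m, b in product((True, False), repeat=2))
print(f"P(bug | wrong answer) = {num / den:.3f}")   # -> 0.631
```

Real diagnostic networks are larger, with many subskills, bugs and items, but the inference runs in the same direction: from observed answer patterns back to the latent skills and misconceptions that best explain them.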

Using this methodology, Corter and Lee show that students reveal more of their thinking when they are asked to “construct” their responses, writing answers to open-ended questions rather than choosing from a set of multiple-choice options. But Corter and Lee also argue that multiple-choice items can be re-engineered so that the incorrect alternatives are designed to reveal specific misunderstandings or bugs.
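
What might such a re-engineered item look like? The sketch below is hypothetical; the item, options and bug labels are invented, not drawn from the paper. Each distractor is generated by applying a specific known bug, so the option a student chooses doubles as a diagnosis.

```python
# Hypothetical diagnostically engineered multiple-choice item: each
# incorrect option corresponds to a specific, known misconception.

item = {
    "stem": "52 - 38 = ?",
    "options": {
        "A": (14, None),                            # correct answer
        "B": (26, "smaller-from-larger bug"),       # subtracts 2 from 8 columnwise
        "C": (24, "borrows without decrementing"),  # forgets to reduce the tens digit
        "D": (90, "adds instead of subtracting"),
    },
}

def diagnose(choice: str) -> str:
    """Map a chosen option to its diagnostic interpretation."""
    _, bug = item["options"][choice]
    return "consistent with mastery" if bug is None else f"possible misconception: {bug}"

print(diagnose("B"))   # -> possible misconception: smaller-from-larger bug
```

A single response is only weak evidence, of course; the diagnosis firms up when the same bug-generated distractor is chosen consistently across several items.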

Much more work needs to be done to develop and implement such tests at scale across different subjects, including devising fast and accurate scoring methods. Another open question is how best to use fine-grained diagnostic profiles of individual students to design effective individualized instruction. But Corter and Lee are optimistic that their research can improve the evaluation of student achievement and program effectiveness. At the conclusion of their paper, they write:

Diagnosis of which sub-skills a student has mastered, and whether the student has a specific misunderstanding...should enable more focused and informative assessments of student achievement and the effectiveness of instructional programs, perhaps leading to better individualized instruction.



Published Wednesday, Dec. 15, 2010
