By Geoff Irvine, CEO on Thursday, November 14, 2013

Albert Einstein was asked about his basic axiom for science in his quest for a ‘unified field theory’, following his staggering, earth-changing, but simple derivation of the formula E=mc². He replied, “Everything should be made as simple as possible, but not simpler.”

This is an axiom that has been misinterpreted with worrisome frequency in the field of outcome assessment, especially where compliance or accreditation has become a factor. What has happened is that the last three words have been discarded. Most software providers have ignored the comma, and the cautionary part of the statement (“but not simpler”) has dropped off the radar. The science has been lost in favor of making everything as simple as possible by abbreviating the essential workflows that underpin sound assessment system design. The result: a national data train wreck and deepening doubt that students are learning.

What happened? We figure it began with the challenge of managing standards in K-12 in the mid-1990s. Anyone who has read those standards and their ‘higher education cousins’ knows they were not meant to be read by real humans. They are invariably made up of sets of thousands of broad policy statements, each expressed in terms that defy valid, consistent observation by different people assessing the same thing. Lesson-planning software developers were among the first to try to make managing standards easier (“simpler”). These tools were really aimed at checking whether teachers were covering what they were supposed to, by reporting out the links between standards and every lesson. That was tedious. The solution: give the teacher access to all standards, enable a rapid search, and then add bulk linkages to each lesson. You could link lots of things in no time. The gamble was that this would look easy enough to entice people into buying what appeared to be a turnkey process. The gamble paid off.
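To make that “rapid search, then bulk link” step concrete, here is a minimal sketch in Python. The standard codes and descriptions are invented for illustration (loosely modeled on the style of K-12 writing standards), and search_standards is a hypothetical helper, not any vendor’s actual API.

```python
# Hypothetical sketch of the "rapid search, then bulk link" workflow.
# The standards below are invented for illustration.
standards = {
    "ELA.W.9-10.1": "Write arguments to support claims with clear reasons and relevant evidence.",
    "ELA.W.9-10.4": "Produce clear and coherent writing appropriate to task, purpose, and audience.",
    "ELA.SL.9-10.4": "Present information clearly, concisely, and logically to listeners.",
    "MATH.HSN.Q.A.1": "Use units as a way to understand problems and to guide the solution.",
}

def search_standards(keyword: str) -> list[str]:
    """Return the code of every standard whose text mentions the keyword."""
    return [code for code, text in standards.items() if keyword.lower() in text.lower()]

# One keyword, one click: every match gets linked to the lesson,
# with no judgment about whether the lesson actually assesses it.
lesson_links = {"unit_3_essay": search_standards("clear")}
print(lesson_links)
# {'unit_3_essay': ['ELA.W.9-10.1', 'ELA.W.9-10.4', 'ELA.SL.9-10.4']}
```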

Many providers retooled their software, turning unit plans into ePortfolios. Seemed sensible at the time. Just link the standards to your key assessments, link a rubric to the work (usually the one used to generate a course grade), and assess. Magically, the resultant score propagated to all those linked standards. Presto, you had scads of data, divisible by level of performance. Case closed.
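Here is a minimal sketch of that propagation step, again with invented data rather than any vendor’s actual code: one holistic rubric score is copied onto every standard linked to the assessment, so a couple of real judgments fan out into many apparent data points.

```python
# Standards bulk-linked to one key assessment (codes invented for illustration).
links = {
    "unit_3_essay": ["ELA.W.9-10.1", "ELA.W.9-10.4", "ELA.L.9-10.2", "ELA.RI.9-10.8"],
}

# One holistic rubric score per student per assessment -- the same score
# that drives the course grade.
rubric_scores = {
    ("student_a", "unit_3_essay"): 3,  # "proficient"
    ("student_b", "unit_3_essay"): 2,  # "developing"
}

# "Propagation": record the identical score against every linked standard.
standard_scores = []
for (student, assessment), score in rubric_scores.items():
    for standard in links[assessment]:
        standard_scores.append({"student": student, "standard": standard, "score": score})

print(len(standard_scores))  # 8 rows of "evidence" generated from only 2 actual judgments
```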

Does anyone see where this breaks down? Here is the kill question: “So, how did you aggregate that?” In the ‘big buckets’ of links with numbers attached to them, there was (and is) no way to discern what specifically students can or cannot do. A lack of agreement about the meaning of the standards makes matters even murkier. Institutions waste millions every year on data that does not accurately, reliably, and validly measure what it claims to, AND so cannot drive improvement. Many already know this. Losing “but not simpler” has been fatal. But that’s the way it’s done, right? For what it’s worth, confronting this is not hard. It just needs to be systematic… and it has nothing to do with software.
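To put the kill question in concrete terms, here is a self-contained continuation of the sketch above (still invented scores): aggregate the propagated rows by standard, the way a compliance report would, and see what the numbers can actually say.

```python
from collections import defaultdict
from statistics import mean

# The propagated rows from the sketch above: two holistic essay scores,
# each copied onto the same four linked standards.
standard_codes = ["ELA.W.9-10.1", "ELA.W.9-10.4", "ELA.L.9-10.2", "ELA.RI.9-10.8"]
standard_scores = (
    [{"student": "student_a", "standard": code, "score": 3} for code in standard_codes]
    + [{"student": "student_b", "standard": code, "score": 2} for code in standard_codes]
)

# Aggregate by standard, as a report to an accreditor would.
by_standard = defaultdict(list)
for row in standard_scores:
    by_standard[row["standard"]].append(row["score"])

for code in standard_codes:
    print(code, mean(by_standard[code]))
# Every standard reports an identical 2.5, because each bucket holds copies of
# the same two holistic judgments. The aggregate cannot say what students can
# or cannot do on any specific standard.
```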

