Careful not to mix metaphors

This week I learned from the Association of American Colleges and Universities that I’ll be presenting at their January meeting on the Bologna process of “tuning.”  I proposed the session with Adina O’Hara, who works on general education and transfer for the state of Kentucky.  Joining us will be a counterpart from the Utah system office.

The idea of “tuning” is to get different institutions and whole states (or in Europe, whole countries) to agree on the student learning outcomes that define a certain credential — a diploma, a certificate, or as they’re working on it in Utah and Kentucky, completion of a GE transfer package.

This approach, developed in Europe over the last dozen years, has all kinds of benefits for higher ed.  Cliff Adelman is the American who’s made the strongest, most consistent case.

From the perspective of GE and transfer in California, a focus on outcomes like student learning, instead of inputs like a list of courses, would free up the sending institutions to educate in whatever way works for the local context.  Faculty could play to their strengths.  Students could participate in educational experiences with more texture, customized to the geography, cultural quirks, and demographics of the region.  Literally anything would go, so long as at the end of the experience students could demonstrate communication and quantitative reasoning, knowledge of the world’s human cultures through the lenses of social science, the arts, and the humanities, and scientific literacy.

That’s the dream, anyway.

“Tuning” GE in California would rely on a stronger relationship between the public universities and public community colleges than we have now.  Today the best of those inter-system relationships center on work in specific disciplines — for example business faculty, or health sciences faculty, reaching agreement on appropriate lower-division preparation for the major.

Kentucky and Utah are already assembling interdisciplinary groups of faculty to figure out their learning goals for the GE transfer package.  I’ll be standing next to Adina in January saying that from California’s perspective, we see the upside but have a long way to go.

Instead, for the last two years in California we’ve been working to put more “high-impact practices” into our GE curriculum.  By organizing our efforts with an AAC&U project called “Give Students a Compass” we’ve been able to approach this systematically, and preliminary data indicate these practices have a disproportionate benefit for the traditionally underserved — students of color, the economically disadvantaged, those whose parents didn’t go to college.

These are related strands of GE reform.  Tuning would, in theory, take the cardboard box of the GE package and remove the little dividers inside that separate it into three-unit lectures.  Then the box might accommodate more high-impact practices — educational experiences that last longer than a single term, for example, or rack up many units at once over a summer immersion.

But one step at a time.  Attending the same January conference where I’ll be talking about tuning will be colleagues from the community colleges who — for very good reasons — want nothing to do with tuning.  They like the high-impact practices, and like the implications for student success.  But in their system of higher ed a focus on outcomes looks more like harm than good, a euphemism for No College Child Left Behind, and teaching to the test.

So, first comes Compass, and high-impact practices, and a collaborative relationship that permits alternate certification of GE for transfer.

Later, if we’re really lucky and things go well in Utah and Kentucky, tuning.


One thought on “Careful not to mix metaphors”

  1. Community college colleagues are well advised to worry about higher education’s “tuning” to learning outcomes, given what has happened in the K-12 system. In my view, the lasting solution isn’t a focus on learning outcomes, which I see as a preliminary step, but careful attention to how those outcomes are assessed. In the K-12 system, the “test” has become a proxy for the outcome; so instead of teaching to the outcome, teachers teach to the test (to make sure they do, policymakers are connecting job security to test scores). In higher ed feeder schools, teaching literary response as an integral part of human development, for example, is transformed into teaching plot summary and analysis of text structures because summary writing and a-b-c-d-none-of-the-above are operationally defined as literary response. Regardless of the value and nature of the outcome, if it is poorly assessed, and if the assessment has teeth, educational opportunities become distorted. Could be better to have chaos with the possibility of opportunity for some than surveillance with consistent, uniform distortion for all, eh?

    Although some institutions have made great progress individually in terms of full bodied assessment, the higher education community generally is transitioning from an early stage of outcomes-identifying work where learning outcomes themselves have been clarified and to some degree negotiated (cf. the LEAP Framework and the VALUE rubrics) to a search for data collection and analysis methods. The next stage is crucial, i.e., developing valid and reliable assessment strategies that minimize distortion, maximize opportunities for individual and institutional growth, and lead to confidence in data among institutions. Here is where the K-12 system went wrong. Instead of coming together as a profession to construct and validate shared assessment strategies, this crucial work was farmed out to commercial test companies who did it the old-fashioned way—a, b, c, d, or none-of-the-above. It is tragic that the K-12 system had its golden moment, its time when good, defensible, valid, reliable, useful, meaningful, cross-institutional assessments were being built. But nobody bothered to keep the public (read: parents) up to speed, the politicians had no patience for performance assessment, and in the early to mid-1990s the door slammed shut. I’ve been wondering for some time now how much longer state and national policymakers will wait to get what they understand to be real information (read: test scores) about the impact of this expensive, important, yet inscrutable higher education system we now have? I can hear door hinges squeaking….

    My advice to the higher education community, for what it’s worth, is this: Begin immediately and vigorously to collaborate across institutions on ways to assess that are pragmatic, useful, transparent, and valid. Publicize the work. Speak about it everywhere. Start this work with the full awareness that it will face political turbulence, practical challenges, and philosophical tensions. Build on design principles that prevent a simple view of knowledge as static, discrete, bundled, complete, portable, and handed down from on high that leads to a simple view of learning (searching for right answers, relying on one source of information, giving up in the face of complexity, acting as if learning is synonymous with memorizing, and the like). Put a premium on a view of knowledge as dynamic, reasoned, defended, messy and partial and contestable if durable, interrelated and malleable. We now have a functional handle on learning outcomes that imply knowledge (though not much of a handle on what we collectively mean by knowledge itself)—from information literacy to intercultural competence and the wide expanse between—but are woefully deficient in assessment methods and strategies.
