
how to scale

August 21, 2014

In the 1990s the popular business press had a few recurring themes, such as growth in demand for PCs, anxiety over Asian economic strength and Latin American debt, and the hazards for American businesses of expanding too rapidly. On that third point the example was so often Boston Market that for lazy writers the name became a kind of shorthand for how not to scale. The chain of fast-casual restaurants began in 1985, grew explosively, borrowed too much, and by the end of the ’90s was bankrupt.

There’s more to take from that story than the hazards of leverage; with lower interest rates the Boston Market parable might have been one of nerviness rewarded. It’s also about judgment and perspective. Whenever you’re on a roll the temptation is to push harder: momentum itself is a resource to take advantage of.

Here in the system office we spend some time trolling for ideas to scale, wondering when good practice at one state university is ready to carry over to another. We can do a lot of good when we guess right.

So imagine my surprise on finding a compelling argument against scaling at all. The authors of this paper, faculty at San Luis Obispo, begin with an observation about the prevailing metaphor for higher education, which is industrial and thus prioritizes standardization:

Industrial era manufacturing methods attempt to minimize diversity and its sources through quality control. In these metaphors, profit is assumed to be maximized through economies of scale, where variation accrues as a loss in profit.

If you replace the word “profit” with “efficiency,” you get a pretty good account of my job managing transfer credit. Uniformity is good. Idiosyncrasy produces waste. The paper continues:

Using the metaphor of complex systems instead opens possibilities for a plurality of valid “truths” to simultaneously exist, since the underlying premise is that systems are more than the sum of their parts.

For the last couple of years I’ve been fishing around for good ways to propagate ideas across communities, something less ham-fisted than policy and credit articulation. So I liked reading this paper, which brought to mind other models for group work: the “practitioner network,” exemplified in the California Community Colleges by the RP Group, for example, or the Networked Improvement Communities supported by the Carnegie Foundation, or the model called “communities of practice,” now a couple of decades old.

What they have in common is a loose-knit, organic property that feels truer to human interaction. People join or leave as time permits. Once in the group, listening and turn-taking are as important as any other contribution. Roles and obligations are fluid.

As an administrator, one part of me loves this while the other is reaching for the Xanax. Gone are the rigid to-do list and reporting deadlines; you can’t even maintain a decent listserv. I mean, for the love of God.

Intuiting my interest in such things, last year the now-dean of undergraduate education at San Francisco State gave me a book called Scaling Up Excellence. It describes this continuum in religious terms, running from Catholic (“mandating that new people and places become perfect clones of some original model”) to Buddhist (“encouraging local variation, experimentation, and customization”).

Somewhere between those theologies is the mix likeliest to work for universities like ours, thriving on local shared governance, while chained together on the state’s higher ed road crew.


A couple of weeks ago I joined administrators at a meeting of the National Association of System Heads. One of our advance readings was an article on Collective Impact, which included these five Conditions for Collective Success:

  • common agenda
  • shared measurement systems
  • mutually reinforcing activities
  • continuous communication
  • backbone support organizations

This list nicely captures what I’ve seen working around the CSU. But I’d emphasize that these are not only the minimum requirements, but also the most you’d want to do. In other words, once everyone has agreed to the goals and metrics, and you’ve provided a means of communication, shared activity, and support, get out of the way.

The recent interest in scaling two innovations in particular has made it suddenly important for us to get this right.

  1. HIPs. In our efforts to institutionalize CSU offerings of high-impact educational practices, like learning communities, undergraduate research, and community engagement, we’re just about done with four of the five points, but the shared measurement systems remain a work in progress. Here the challenge is to do only that: create the metrics in a way that recognizes common ground where it really exists, without homogenizing everything else.

The clock is ticking: my boss in the corner office keeps upping the ante with dedicated student success money. My campus colleagues and I have successfully argued that it belongs here, with the learning: make it visibly valuable and relevant, and students will persist and graduate.

For now we’re winning, but if we don’t back our claims up soon with some research, the fad will pass, our moment of momentum missed. We’ll be the next, you know, Boston Market.

  2. Linked Learning. Last month I was witness to a bigger, splashier meeting around scaling right. For many years, grantees of the James Irvine Foundation have been quietly building a coalition to blur the boundaries between liberal education and job training. The ideas of this movement – Linked Learning – go back further than this iteration, and they will persist beyond it, too. The state of California is putting half a billion dollars into the Career Pathways Trust, a development most observers attribute to Irvine’s successes.

In both of these cases, good work is getting money and support at unprecedented levels. And so both leadership groups face hard questions about how to scale – what is and isn’t in scope, what the templates must include, which details can be sacrificed while still maintaining “fidelity to the model.”

I worry about falling into that same industrial-era paradigm, and excessive homogenization.

There’s another serious risk here, and that’s simply antagonizing others who do good work. In the words of one of my colleagues in this work:

As a field we must guard against sounding ‘holier than thou,’ — that is, sounding like we are the sole guardians of quality, the only initiative that really knows what it is, and everyone else is just going through the motions. There are many other people out there who care about quality as deeply as we do.

In other words, as we draft these papal bulls on high-impact practices and linked learning, we’d better leaven the orthodoxy with a little Buddhism, and remember what we really want to scale.

 

the attention economy, part 1

July 28, 2014


An article in the New York Times a couple of months ago is still lodged in my brain. Prompted by the protests of fast food workers over the minimum wage, it amounted to a deeper consideration of what wealth still gets you in an age of cheap technology.

As the article’s opening question put it, “Is a family with a car in the driveway, a flat-screen television and a computer with an Internet connection poor?”

It echoes something Ken Follett wrote in the afterword of his historical novel The Pillars of the Earth. In researching the Middle Ages, he was struck by the way standards of living evolve, and observed that today’s incarcerated felon enjoys a higher quality of life, better health care and more security than the wealthiest nobility of the 14th century.

So say, hypothetically speaking, we want to address the widening gap between the rich and the poor. What exactly are we filling in, if everyone already has all their teeth and a smart phone?

The NYT article would argue human services: good health care, access to education, a lawyer when you need one. This makes sense to me, because it lines up with other long-term trends of comparative value. A thousand years ago, the cheapest way to copy a book was to hire a monk for a year. Today it’s a Kindle download equivalent to about an hour at minimum wage, or, if the book is old enough, free.

In other words, what’s expensive today isn’t the book but the monk.

That growth in the relative premium on human attention is accelerating, as we race each other to make everything else cheaper. I first encountered this idea around ten years ago, in a book called The Attention Economy. By now its examples and arguments are dated – the only edition was published in 2002 – but the conclusions are surprisingly fresh. The businesses that thrive will be those that value uniquely human input, that attend to attention.


This has a few implications for us in higher education:

1. The higher standard of living we peddle to newcomers is for services, not stuff. See the New York Times article for this. We have always chafed somewhat at the crassest motive for college learning, to earn more money. Yet that incremental income is looking less materialistic all the time, and more like the means to the kind of full life that we’re more comfortable promoting.

2. The most valuable proficiencies we develop may be interpersonal. Engineering faculty around the CSU are especially vocal advocates of general education, understanding that cross-cultural competence, clear communication, and persuasion are some of their graduates’ most marketable skills. That is, in an attention economy, the spoils go not only to those who make what people want, but also to those who can understand, anticipate, and responsibly lead the focus of others.

3. The most important dispositional learning may be executive function. I think of this third one as the flipside of the second: just as we want our graduates able to steer the attention of others, they will need increasingly to protect their own. And so far, as near as we can tell, they’re not only bad at it but getting worse.

Last October I was among the presenters at a CSU Northridge event on the future of the CSU, and appearing with me was Clifford Nass of Stanford University. His remarks on the erosion of our students’ executive function, their multi-tasker’s vulnerability to distraction from all quarters, were chilling.

He went through detailed testing results on the different ways we manage attention:

  • maintain focus in an appropriate area
  • ignore irrelevant information
  • manage working memory
  • switch between tasks

In every case, students categorized as “high multi-taskers” – the vast majority of subjects, and a growing share – scored measurably worse. Yet these are looking like the same skills they’ll rely on most after they graduate – both in what they purchase with that higher standard of living afforded by a college degree, and in what they sell as their contribution to the workforce.

That alone should tell us something urgent about what to emphasize with our curriculum.

But for those of us working in higher education, there’s another, bigger consequence of this shift in valuation, and it plays out not for individuals but collectively.

But it’s too surprising for me to write about yet.  Maybe in another post or two.

news from the Crooked Road

July 7, 2014

Last month my wife and I spent a few days driving around the southwestern corner of Virginia, a stretch of U.S. 58 that boosters call the “Crooked Road,” musician slang for an unpredictable solo.

There’s a lot here to like, including the road itself, winding as it does through small towns and gorgeous scenery. There’s good food and authentic regional accents, and other reminders we’re not in California anymore: an abundance of churches, and designated smoking areas bigger than a doorway. (In fact, cigarettes are everywhere: you can drive up to a window and order them like tacos.)

At the eastern end of the road is a countercultural outpost called Floyd, with left-leaning bookstores and a reminder that even on the mountainous side, Virginia runs purple, not red.

And then there’s the reason we came, which was to hear the music.  It’s abundant, sometimes playing in the corners as an afterthought. At the welcome center in Abingdon hundreds of locally produced CDs are on sale – hundreds – and they’re all good, available to sample for free, recorded by people you’ve never heard of. The depth of talent and material, from a place that’s not all that populous, is astounding.

We were there early in the week – a Sunday, Monday, and Tuesday – and many of the venues were dark, either until later in the week or for good, replaced by an Applebee’s or a Red Lobster. But those were the exceptions: the Crooked Road falls only a little short of its promise.

Which had us wondering, what would take it the rest of the way?

How much of “dark on Sundays” is the Old Dominion keeping the faith, and how much is because there’s not enough demand for more? Could venues along the Crooked Road cultivate a bigger market by spreading their evening shows across more of the week? Or would that kill the very culture they’re set up to share?

And if there were interest in any build-out, is there a role for Virginia’s public universities?

 


The whole state has a stake in the answer. As April Trivett explains from the Heartwood Cultural Center in Abingdon, the Crooked Road has been a boon, and gives the state’s poorest region hope for an economic base less problematic than tobacco or coal.

Her comment reminded me of other states looking to public higher education to help diversify into cultural and intellectual economies, goals I hear via my counterparts in Louisiana, Nebraska, Kentucky, and Pennsylvania. Michigan has made its commitment especially clear, launching a public-private partnership with its universities expressly to reverse the state’s business fortunes.

The benefits will cut both ways.  I’ve often thought (and written once or twice on this blog) that as higher ed gets more comfortable with virtual delivery, our brick-and-mortar operations will need to emphasize their local roots to justify themselves. Real-world campus life has always had unique benefits, including the physical facilities, the face time with experts, the community interaction, and the sheer serendipity you get with proximity. What’s new is that now we have to emphasize them.

As we do, we’ll be sounding a little like the people who promote the Crooked Road: some places are still worth the trip.

fMRI and learning outcomes assessment

June 4, 2014

When you think often about the same thing, does it take up more of your brain? That assumption informs our cartoons and tee shirts:

[cartoon: inside the male psyche]

But images of brain activity suggest the opposite. The colored sections below show active regions of the brain performing a complicated task for the first time, and then after an hour of practice:

[image: brain activity on a first attempt vs. after an hour of practice]

As you get the hang of something, it takes less mental effort to continue it. We knew this: it’s why for centuries we’ve drilled our soldiers in reloading rifles, so they can conserve their working attention for other purposes, like staying alive and shooting.

As a skill or movement shrinks to its long-term minimum footprint, we can locate it precisely in a person’s brain:


I think about this sometimes. When my wife Cyndi first taught jazzercise, learning a routine took her days, the same pop song booming around the house while she recited its choreography. Fifteen years later, she picks up steps to the latest Macklemore & Ryan Lewis in about ten minutes, usually while playing Candy Crush on the other screen. Somewhere in her brain I picture a synapse she didn’t have before, whose only job is to represent a grapevine right and two pliés.

Food for thought: what if, along with locating neural networks for body parts and dance steps, you could locate them for ideas? Last year a Kyoto-based research team made news by reading dreams using the same technology — functional Magnetic Resonance Imaging (fMRI) — that produced the pictures here. They did it by first building up a glossary of visual associations, showing their waking subjects images from the web, then using those as references to read their minds while asleep.

This drew from work by a Berkeley team two years prior, which studied waking subjects as they watched movie trailers. Here’s a side-by-side clip of the actual trailer and what the fMRI guessed the subjects were seeing:

So, a blurry mess, but a start, like color TV circa 1939.
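For the technically curious, the core of the decoding idea fits in a few lines: learn which scan patterns go with which labeled images while subjects are awake, then ask the fitted model to label new scans. The sketch below is mine, not either lab’s pipeline; random numbers stand in for real voxel data, and the category names and generic classifier are placeholders. It shows the shape of the approach, nothing more.

    # Toy sketch of fMRI decoding (not the Kyoto or Berkeley pipeline):
    # random numbers stand in for preprocessed voxel data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_voxels = 500
    categories = ["face", "building", "text"]      # the "glossary" of associations

    # Waking phase: scans recorded while each labeled image was on the screen.
    X_waking = rng.normal(size=(300, n_voxels))    # 300 scans x 500 voxels (fake)
    y_waking = rng.choice(categories, size=300)    # what the subject was viewing

    decoder = LogisticRegression(max_iter=1000)
    decoder.fit(X_waking, y_waking)                # map voxel patterns to categories

    # Sleep phase: new, unlabeled scans; the decoder guesses the imagery.
    X_sleep = rng.normal(size=(10, n_voxels))
    print(decoder.predict(X_sleep))                # e.g. ['face' 'text' ...]
    print(decoder.predict_proba(X_sleep)[0])       # confidence across categories

Real studies add heavy preprocessing, cross-validation, and far subtler models, but the train-on-waking, apply-to-dreaming structure is the same.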

Since the brain activity varies from person to person, it evidently represents ideas that each brain constructs as it learns over time.  That makes research like this potentially useful to educators.

That is, the day may come when we don’t need testing, transcripts, and samples of student work to see if someone knows something. We can just look.

Which raises a question we can start answering right away: what would we look for?

[image: Sumerian tablet]

When we off-loaded memory to writing, around five millennia ago, we changed the relative value of different intellectual skills. Memorization fell back in the sweepstakes, leaving room at the front for things like problem-solving and persuasion.

Our priorities shift again whenever we outsource some brainwork to technology. When my parents were in high school in the early 1950s, they learned to take square roots by hand. When I was that age my textbook listed two steps to derive a square root: (1) find a calculator and (2) press the √ key. An appendix at the back explained the manual procedure for the curious.
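For today’s curious, one classic paper-and-pencil method is Heron’s iteration: make a guess, average it with the number divided by the guess, and repeat until it settles. Whether that is the procedure my textbook’s appendix actually showed, I can’t say; the little function below (name and tolerance are mine) is just a sketch of the idea.

    # Heron's method, a classic by-hand way to approximate a square root
    # (not necessarily the procedure that textbook appendix described).
    def manual_sqrt(n: float, tolerance: float = 1e-10) -> float:
        """Approximate the square root of a non-negative number n."""
        guess = n / 2.0 or 1.0                 # crude start; use 1.0 when n == 0
        while abs(guess * guess - n) > tolerance:
            guess = (guess + n / guess) / 2.0  # average the guess with n / guess
        return guess

    print(manual_sqrt(2))    # about 1.41421356 (vs. pressing the √ key)
    print(manual_sqrt(144))  # about 12.0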

Today’s shifts put a new premium on collaboration, persistence, cross-cultural facility, and other ineffable capacities to productively make your way in a connected world.

But about that word “ineffable”: is what we want really so impossible to describe? If I can recognize the neural fingerprint of a chassé left or watching a fight scene, then can’t I also see whose brain is cooperating? And whose is still learning how?

Katharine Stewart, my counterpart in the University of North Carolina System, has a disciplinary background in medical psychology. She’s convinced these capacities aren’t ineffable at all, and that in fact we have been usefully effing them for quite some time. They appear increasingly in the higher ed lit as “non-cognitive” skills: resilience, grit, determination. She cites longstanding parallels from other realms of human development: K-12, social work, and corrections. Indeed, aren’t we just talking about variations on impulse control? Anger management? Deferred gratification?

Those who study education as a discipline may object to my casting this as breaking news, but it’s a fact that hardly any college faculty and administrators know this stuff. We were trained in our separate disciplines, not in learning.

Watching these separate strands of work – in fMRI and in the precision other fields have used to describe non‑cognitive learning – I think we can anticipate the day when they’ll connect. So if the question is “what would we look for?”, then part of the answer is these discrete parcels of unambiguous, identifiable dispositional learning.

Looking ahead to that day, there’s a third strand that needs to catch up, and that’s our relatively primitive approach to assessing learning in the disciplines. Because along with impulse control, cooperation, and a visibly frugal use of attention on practiced tasks, we also expect our college graduates to know a subject well. They still pick a major, and on that part of the new ground we’ve barely set foot.

Since we weren’t trained for any of this, faculty in departments tend to define learning (at least initially) in the simplest way possible, as recall. After that they grope, saying things like “we want our physics majors to think like physicists.” With continued effort, these groups eventually define what they mean in smaller units, the “tells” of physicist-style thinking that reveal proficiency beyond content knowledge. They may look for signs a student can pose a relevant question and then suggest a hypothesis and experiment to answer it – depending on the specialty within physics, maybe by using specific math or pieces of lab equipment.

The trick here is to come up with small, unambiguous signs of such proficiency, indicators that can be recognized as meaningful increments of learning, and recorded for academic credit. As I’ve written before, I think the Threshold Project has promise here.

At some future point the frontiers of these three kinds of work should touch each other, and we’ll have a clearer sense of our students’ learning in a variety of domains, and our institutions’ educational effectiveness:

[figure: convergence]

 

That may seem unlikely, the idea that we’ll know if you’ve mastered, say, writing movie dialogue, by boiling it down to discrete chunks of timing, verisimilitude, characterization, and wit, and checking whether the performance of such work takes up an appropriately small and efficient corner of your brain.

[image: labeled fMRI scan]

But to me, that’s no more reductive – or far-fetched – than evaluating illness with x-rays and blood tests. As educators we’re a lot like 18th century physicians, who diagnosed and prescribed with semi‑mystical assertions of “humors” and the like, telling patients they might feel better if they breathed smoke or bled into a bucket. Developing a handful of key understandings by hurling content knowledge at freshmen seems no more enlightened.

Or, in the words of my Texas colleague Marni Baker-Stein, we’re like amateur astronomers, on the eve of the discovery of the telescope.

I can’t wait to take a look.

 

 

why four years

May 16, 2014

Here’s a response I got from the post before last: “I have a question that I have posed to colleagues who can only say ‘because that’s the way we’ve always done it.’ My question is: Where does the concept of the four-year college degree come from? Why four years? Why not two, or five, or seventeen? Why is four the magic number? I know it predates Carnegie contact hours and whatnot, but I can’t find a good answer anywhere. Can you shed some light on this?”


Well, I looked this up in my favorite reference book on higher ed history (yes, I have one), American Higher Education by Christopher J. Lucas. I didn’t find a particular origin, but a couple of references suggest this has been with us for a very long time, and certainly – as you point out – predates the credit hour, an innovation of the 1910s.  This is from his chapter on the 13th and 14th centuries:

A composite of university life indicates upwards of four or five years elapsing between a student’s initial admission and the series of academic trials required for his obtaining the medieval equivalent of a bachelor’s degree. (p. 50)

In the U.S. we adopted the four-year curriculum wholesale and uncritically:

The course of study offered by the typical colonial college very much reflected the earliest settlers’ resolve to effect a translatio studii – a direct transfer of higher learning from ancient seats of learning at Queen’s College in Oxford and Emmanuel College at Cambridge to the frontier outposts of the American wilderness . . .

During the first year of study, Greek, Hebrew, logic and rhetoric were curricular staples. In the second year to them were added logic and “natural philosophy.” The third year brought moral philosophy (ethics) and Aristotelian metaphysics, followed in a fourth year by mathematics and advanced philological studies in classical languages, supplemented by a smattering of Syriac and Aramaic. (p. 109)

So, four years.  But get a load of that course list — and we thought our GE was musty.  I think it’s telling that in the centuries since then, we’ve changed almost everything about this curriculum except its duration.


We can explain its durability over the centuries in a couple of ways:

1.  We never got around to changing it, because measuring learning in ways other than seat time is so hard.

2.  We’ve deliberately held it to four years because experience indicates it’s the right period of time.

I suspect it’s both.  People get something valuable from time on task, so there’s more to this structure’s longevity than just habit.  But that doesn’t let us off the hook:  if we still expect people to set aside all that time, then we should be better at defining what that something valuable is.

More on that next time.

porous membranes

April 25, 2014

This week’s Supreme Court decision on affirmative action has me thinking about lines, and how some are harder to see the closer we get to them.

For example, until recently it was pretty clear whether a person was dead or alive; you don’t hear Hamlet equivocating over to be, not to be, or some third choice in between. But as medicine advances the distinction blurs. It was 1968 when we moved the locus of life from the heartbeat to the electrical activity of the brain; these days we’re likelier to define death as “the moment after which we can’t revive you anymore.” But technology being what it is, that moment keeps coming later. Within my lifetime, the once headline-making decision of whether to unplug a loved one has become a common way to say good-bye. (See Janet Radcliffe Richards on the ramifications for organ donors.)

It gets wackier. Even for those of us still breathing unassisted, it’s hard to pin down which symptoms of life distinguish us from other dynamic systems. Is it that we enjoy intake, metabolism, and exhaust? Process materials? Spawn? The harder we look for the line the fainter it gets, and today there are people (e.g. Jason Silva) who will tell you with a straight face that all the identifiable properties of life are present in the universe, or a city.

These are dark days for binary thinking. Even trusty gender is on more of a sliding scale than we thought. An Olympics cheating scandal (think cross-dressing to medal) led to blood-testing in the 1990s, and turned up such surprising variability in testosterone levels in both sexes that a recent New York Times op-ed argued we should quit testing altogether, and just let athletes “self-define” their sex.

It reminded me of a similar decision made by the U.S. Census, when it gave up asking people to identify with a single race. It continues to struggle with how exactly to categorize us.

[image: George H. Perkins, 1910 United States Federal Census]

I think there’s a lesson in here for us in the knowledge biz. Like most human understanding, these paradigms take root not because they’re valid, but because they’re useful. No sane person would look at a random spray of stars and see a crab, hunter, or bear without a whole lot of coaching. But narrating the randomness made it easier to remember – and so to predict harvest season, or find your way home.

But then we get carried away. From seeing comic strips in the firmament it’s a short step to ascribing gangrene to angry Gods, dead children to angels impatient for their company, and autism to vaccines. And that’s not even counting all the good accidents we like crediting to, well, ourselves.  (See former investment trader Nassim Nicholas Taleb on our awesome capacity for self-congratulation.)

Whether assigning credit or blame, we get so eager to rationalize our experience that it’s hard to stop when we want to really figure something out.

I wish we could anticipate that moment of inflection, when a boundary that we mostly made up starts doing more harm than good. We’re probably there already with some of the discipline names in our colleges.

Yet there’s also an irony here, especially when you get back to the question of race. Recent court cases notwithstanding, we can’t prematurely dissolve the borders we’ve defined and go post-racial, seeing each other on an ill-defined sliding scale, however accurate and desirable that would be.  Because, as advocates of action research will tell you, the only way to surface problems is by “disaggregating” our data along ethnic lines, insisting on race identification, because otherwise the inequities are invisible.  I get that, and agree.

But it’s telling that no grouping seems to work everywhere.  For example, in states other than California it may be enough to put all your Asian students into one category, and some of our reform efforts at home follow the national lead.  But here that glosses over differences between significant numbers of people:  current higher ed delivery works a lot better for Chinese and Japanese students than it does for Hmong and Samoan, for example.  In California, progress won’t come from fewer lines and categories, but from more.
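To make that concrete, here is a minimal sketch of what disaggregation looks like in practice. The table, subgroup labels, and graduation flags below are invented for illustration; they are not CSU data.

    # Minimal sketch of disaggregation, with invented data (not CSU numbers):
    # the same student records summarized two ways.
    import pandas as pd

    students = pd.DataFrame({
        "broad_category": ["Asian"] * 6,
        "subgroup":  ["Chinese", "Chinese", "Japanese", "Hmong", "Hmong", "Samoan"],
        "graduated": [1, 1, 1, 0, 1, 0],
    })

    # Aggregated view: one apparently healthy completion rate.
    print(students.groupby("broad_category")["graduated"].mean())

    # Disaggregated view: the gaps between subgroups become visible.
    print(students.groupby("subgroup")["graduated"].mean())

The rollup reports a single two-thirds rate; only the subgroup view shows who that average is hiding.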

So we aren’t yet at that inflection point.  But in naming the races, as in naming the constellations, or the criteria for life, or our academic departments, we want to remember what we’re doing, and keep watching for when to rearrange the borders, or simply move on.  The day will come when we conclude that socioeconomic status is more meaningful than ethnicity, or that organizing campuses by interdisciplinary problem makes more sense than the current department roster.

Because our groupings are only ever artifice, and the membranes between them porous.

food fights and thresholds

April 2, 2014

High school diplomas and college degrees suggest that something as incremental as learning can be subdivided by broad markers: that, for example, you gradually improve your reading until Dickens makes sense to you, you remember some history and science and a smattering of civics, and then the world attaches to you a high school diploma, while you go on about your learning from there.  Your growing proficiency was gradual and invisible, but its certification is explicit and punctuated.

The implied precision is laughable, this sense that suddenly in your 18th spring, you and everyone else your age – all over the country – have achieved the same thing.  No wonder adolescents are cynical.

The cognitive dissonance rings louder up here in postsecondary, amplified by our gigantic differences in majors, academic calendars, and institutional rigor.  Even from the same college, two baccalaureate degrees can mean very different things.  So is there a same thing that they also mean?

This question is coming up now in the California State University, where administrators and faculty are looking for some shared way to determine the right number of academic credits to put into a degree.  In general the faculty pulls toward more units, the administration toward fewer.

The effort is complicated by conflicting senses of how the decision should be shared.  Both sides agree that “faculty own the curriculum,” but neither sees a full role for the other in setting the size of that curriculum.  It’s been a trying discussion, made harder in part because each behaves as if it’s somehow magnanimous to discuss it with the other.

For reasons that are unclear to me, some subjects have defined the bachelor’s degree more expansively than others.  My own undergraduate major was in French literature.  (A safe fallback, in case I couldn’t find a job as a state bureaucrat.)  I blew through the requirements in about three years and took up the slack in arts and other humanities, especially film.  Had I instead taken a bachelor’s in metallurgical engineering or jazz piano, the same credential would have taken almost twice as long.

Ask the pianists and metallurgists why this should be and you get one of two answers, neither of them very satisfying:

  a. because that’s how our profession defines a baccalaureate.  (Yes.  And that’s how the rest of us define a tautology.)

  b. because that’s the minimum learning you need to practice the profession as a graduate.

That second one gets more traction among my colleagues.  My devastating rejoinder:  so what?  Then say the entry-level degree for your job is more than a baccalaureate.  Say that to get a job you need a master’s, or a doctorate, or something else.  This solution works fine for California’s attorneys, dentists, second-grade teachers, architects, and for that matter college professors.  The big difference:  all of them have a fully credentialed off-ramp after four years of college, while the engineers and pianists do not.

In earlier internecine CSU food fights I’ve thrown some spaghetti, but on this one I’m politely sitting out.  It’s just too hard for me to see both sides, so I can’t participate constructively.

But it does have me wondering whether these degrees are set at the wrong size altogether.  If we leave behind credit hours and go to outcomes, will they need to cover a broader swath than you see with the baccalaureate?  It would be easier for me to tolerate wide differences in time to degree if the degrees really were signifiers of entry-level proficiency, instead of just time served.

But the more I think about it, the more I think we need to count outcomes in smaller units, not bigger ones.

There’s an effort in the California Community Colleges along these lines, called the Threshold Project.  The tacit premise is that it’s hard to say when someone’s done learning a whole domain, but easier to know when they understand just one critical piece of it, a “threshold concept.”  Like a doorway, it’s a visible marker you cross on your way to understanding the rest of the field, with a clear before and after.  (I’m oversimplifying a bit.  Friends who lead the project point out that sometimes students backslide to the pre-understanding side of the doorway.  For the full story see this two-page explication, highly recommended.)

A couple of examples:  as you first learn chemistry and understand that molecules are composed of atoms, you need to internalize the truth that the combinations follow rules, based on numbers of electrons.  No one gets this instinctively; you have to work at it, and until it makes sense to you, the rest of chemistry never will.  So you balance equations and write out atomic structures, and recognize the same compounds over and over, and eventually you grasp the threshold concept of covalent bonding.  Now — and only now — you can move on to the next one.

Another example comes from my home discipline, screenwriting (not French).  Aspiring screenwriters are often readers, and as fans of fiction they like narrating.  But for most movies the point is to make it look like the story told itself, as if we just happen to be watching people interact in a compelling sequence.  It’s very hard to write that way, and you can’t even begin until you first internalize the truth that movies only look spontaneous, because the narrative is deliberately self-effacing, another threshold concept.

In any discipline, as you identify these little comet tails of visible learning a couple of good things happen.  First, faculty can agree on a shared set of assignments – student work samples – that tell anyone in the field whether a student has successfully stepped through a given doorway.  For learning outcomes assessment, ePortfolio construction, and the recognition of credit this is priceless.  And second, administrators can start to see something portable, interchangeable, and akin to currency in higher ed, only more meaningful than a credit hour.

Put our attention there, and disputes over the size of the degree would be on better footing.  We might even holster our spaghetti.
