There’s a lot here to like, including the road itself, winding as it does through small towns and gorgeous scenery. There’s good food and authentic regional accents, and other reminders we’re not in California anymore: an abundance of churches, and designated smoking areas bigger than a doorway. (In fact, cigarettes are everywhere: you can drive up to a window and order them like tacos.)
At the eastern end of the road is a countercultural outpost called Floyd, with left-leaning bookstores and a reminder that even on the mountainous side, Virginia runs purple, not red.
And then there’s the reason we came, which was to hear the music. It’s abundant, sometimes playing in the corners as an afterthought. At the welcome center in Abingdon, hundreds of locally produced CDs are on sale – hundreds – and they’re all good, available to sample for free, recorded by people you’ve never heard of. The depth of talent and material, from a place that’s not all that populous, is astounding.
We were there early in the week – a Sunday, Monday, and Tuesday – and many of the venues were dark, either until later in the week or for good, replaced by an Applebee’s or a Red Lobster. But those were the exceptions: the Crooked Road falls only a little short of its promise.
Which had us wondering, what would take it the rest of the way?
How much of “dark on Sundays” is the Old Dominion keeping the faith, and how much is because there’s not enough demand for more? Could venues along the Crooked Road cultivate a bigger market by spreading their evening shows across more of the week? Or would that kill the very culture they’re set up to share?
And if there was interest in any build-out, then is there a role for Virginia’s public universities?
The whole state has a stake in the answer. As April Trivett explains from the Heartwood Cultural Center in Abingdon, the Crooked Road has been a boon, and gives the state’s poorest region hope for an economic base less problematic than tobacco or coal.
Her comment reminded me of other states looking to public higher education to help diversify into cultural and intellectual economies, goals I hear via my counterparts in Louisiana, Nebraska, Kentucky, and Pennsylvania. Michigan has made its commitment especially clear, launching a public-private partnership with its universities expressly to revive the state’s business fortunes.
The benefits will cut both ways. I’ve often thought (and written once or twice on this blog) that as higher ed gets more comfortable with virtual delivery, our brick-and-mortar operations will need to emphasize their local roots to justify themselves. Real-world campus life has always had unique benefits, including the physical facilities, the face time with experts, the community interaction, and the sheer serendipity you get with proximity. What’s new is that now we have to emphasize them.
As we do, we’ll be sounding a little like the people who promote the Crooked Road: some places are still worth the trip.
When you think often about the same thing, does it take up more of your brain? That assumption informs our cartoons and tee shirts:
But images of brain activity suggest the opposite. The colored sections below show active regions of the brain performing a complicated task for the first time, and then after an hour of practice:
As you get the hang of something, it takes less mental effort to continue it. We knew this: it’s why for centuries we’ve drilled our soldiers in reloading rifles, so they can conserve their working attention for other purposes, like staying alive and shooting.
As a skill or movement shrinks to its long-term minimum footprint, we can locate it precisely in a person’s brain:
I think about this sometimes. When my wife Cyndi first taught jazzercise, learning a routine took her days, the same pop song booming around the house while she recited its choreography. Fifteen years later, she picks up steps to the latest Macklemore & Ryan Lewis in about ten minutes, usually while playing Candy Crush on the other screen. Somewhere in her brain I picture a synapse she didn’t have before, whose only job is to represent a grapevine right and two pliés.
Food for thought: what if, along with locating neural networks for body parts and dance steps, you could locate them for ideas? Last year a Kyoto-based research team made news by reading dreams using the same technology – functional Magnetic Resonance Imaging (fMRI) – that produced the pictures here. They did it by first building up a glossary of visual associations, showing their waking subjects images from the web, and then using those as references to read their minds while they slept.
This drew from work by a Berkeley team two years prior, which studied waking subjects as they watched movie trailers. Here’s a side-by-side clip of the actual trailer and what the fMRI guessed the subjects were seeing:
So, a blurry mess, but a start, like color TV circa 1939.
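Strip away the signal processing, and the decoding idea underneath is a nearest-neighbor lookup: compare a new activity pattern against the glossary built while subjects were awake, and report the closest match. Here’s a toy sketch in Python – the labels, vectors, and similarity measure are all invented for illustration, not the teams’ actual pipeline:

```python
import numpy as np

# Toy "glossary": voxel-activity vectors recorded while awake subjects
# viewed labeled images. (Values invented for illustration.)
glossary = {
    "face":     np.array([0.9, 0.1, 0.3, 0.7]),
    "building": np.array([0.2, 0.8, 0.6, 0.1]),
    "text":     np.array([0.5, 0.4, 0.9, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the patterns point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def decode(pattern):
    """Guess what a new activity pattern 'means' by finding the
    closest glossary entry (nearest neighbor by cosine similarity)."""
    return max(glossary, key=lambda label: cosine(glossary[label], pattern))

# A pattern recorded during sleep, close to the "face" signature:
print(decode(np.array([0.85, 0.15, 0.35, 0.65])))  # face
```

The real studies work with tens of thousands of voxels and trained statistical models, but the shape of the trick – label patterns while awake, match patterns while asleep – is the same.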
Since the activity patterns vary from person to person, they reflect ideas each brain has constructed for itself as it learns over time. That makes research like this potentially useful to educators.
That is, the day may come when we don’t need testing, transcripts, and samples of student work to see if someone knows something. We can just look.
Which raises a question we can start answering right away: what would we look for?
When we off-loaded memory to writing, around five millennia ago, we changed the relative value of different intellectual skills. Memorization fell back in the sweepstakes, leaving room at the front for things like problem-solving and persuasion.
Our priorities shift again whenever we outsource some brainwork to technology. When my parents were in high school in the early 1950s, they learned to take square roots by hand. When I was that age my textbook listed two steps to derive a square root: (1) find a calculator and (2) press the √ key. An appendix at the back explained the manual procedure for the curious.
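One classic paper-and-pencil method – “divide and average,” known since the Babylonians – is simple enough to sketch in a few lines of Python (whether it’s the procedure my textbook’s appendix described, I can’t say):

```python
def sqrt_by_hand(n, steps=6):
    """The 'divide and average' (Babylonian) method: make a guess, then
    repeatedly average the guess with n divided by the guess. Each pass
    roughly doubles the number of correct digits."""
    guess = n / 2.0
    for _ in range(steps):
        guess = (guess + n / guess) / 2.0
    return guess

print(round(sqrt_by_hand(2), 6))  # 1.414214
```

Six rounds of that, doable at a school desk, gets you more digits than the calculator displays – which is exactly the kind of brainwork we decided was no longer worth carrying.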
Today’s shifts put a new premium on collaboration, persistence, cross-cultural facility, and other ineffable capacities to productively make your way in a connected world.
But about that word “ineffable”: is what we want really so impossible to describe? If I can recognize the neural fingerprint of a chassé left or watching a fight scene, then can’t I also see whose brain is cooperating? And whose is still learning how?
Katharine Stewart, my counterpart in the University of North Carolina System, has a disciplinary background in medical psychology. She’s convinced these capacities aren’t ineffable at all, and that in fact we have been usefully effing them for quite some time. They appear increasingly in the higher ed lit as “non-cognitive” skills: resilience, grit, determination. She cites longstanding parallels from other realms of human development: K-12, social work, and corrections. Indeed, aren’t we just talking about variations on impulse control? Anger management? Deferred gratification?
Those who study education as a discipline may object to my casting this as breaking news, but it’s a fact that hardly any college faculty and administrators know this stuff. We were trained in our separate disciplines, not in learning.
Watching these separate strands of work – in fMRI and in the precision other fields have used to describe non‑cognitive learning – I think we can anticipate the day when they’ll connect. So if the question is “what would we look for?”, then part of the answer is these discrete parcels of unambiguous, identifiable dispositional learning.
Looking ahead to that day, there’s a third strand that needs to catch up, and that’s our relatively primitive approach to assessing learning in the disciplines. Because along with impulse control, cooperation, and a visibly frugal use of attention on practiced tasks, we also expect our college graduates to know a subject well. They still pick a major, and on that part of the new ground we’ve barely set foot.
Since we weren’t trained for any of this, faculty in departments tend to define learning (at least initially) in the simplest way possible, as recall. After that they grope, saying things like “we want our physics majors to think like physicists.” With continued effort, these groups eventually define what they mean in smaller units, the “tells” of physicist-style thinking that reveal proficiency beyond content knowledge. They may look for signs a student can pose a relevant question and then suggest a hypothesis and experiment to answer it – depending on the specialty within physics, maybe by using specific math or pieces of lab equipment.
The trick here is to come up with small, unambiguous signs of such proficiency, indicators that can be recognized as meaningful increments of learning, and recorded for academic credit. As I’ve written before, I think the Threshold Project has promise here.
At some future point the frontiers of these three kinds of work should touch each other, and we’ll have a clearer sense of our students’ learning in a variety of domains, and our institutions’ educational effectiveness:
That may seem unlikely, the idea that we’ll know if you’ve mastered, say, writing movie dialogue, by boiling it down to discrete chunks of timing, verisimilitude, characterization, and wit, and checking whether the performance of such work takes up an appropriately small and efficient corner of your brain.
But to me, that’s no more reductive – or far-fetched – than evaluating illness with x-rays and blood tests. As educators we’re a lot like 18th-century physicians, who diagnosed and prescribed with semi-mystical assertions of “humors” and the like, telling patients they might feel better if they breathed smoke or bled into a bucket. Developing a handful of key understandings by hurling content knowledge at freshmen seems no more enlightened.
Or, in the words of my Texas colleague Marni Baker-Stein, we’re like amateur astronomers, on the eve of the discovery of the telescope.
I can’t wait to take a look.
Here’s a response I got from the post before last: “I have a question that I have posed to colleagues who can only say ‘because that’s the way we’ve always done it.’ My question is: Where does the concept of the four-year college degree come from? Why four years? Why not two, or five, or seventeen? Why is four the magic number? I know it predates Carnegie contact hours and whatnot, but I can’t find a good answer anywhere. Can you shed some light on this?”
Well, I looked this up in my favorite reference book on higher ed history (yes, I have one), American Higher Education by Christopher J. Lucas. I didn’t find a particular origin, but a couple of references suggest this has been with us for a very long time, and certainly – as you point out – predates the credit hour, an innovation of the 1910s. This is from his chapter on the 13th and 14th centuries:
A composite of university life indicates upwards of four or five years elapsing between a student’s initial admission and the series of academic trials required for his obtaining the medieval equivalent of a bachelor’s degree. (p. 50)
In the U.S. we adopted the four-year curriculum wholesale and uncritically:
The course of study offered by the typical colonial college very much reflected the earliest settlers’ resolve to effect a translatio studii – a direct transfer of higher learning from ancient seats of learning at Queen’s College in Oxford and Emmanuel College at Cambridge to the frontier outposts of the American wilderness . . .
During the first year of study, Greek, Hebrew, logic and rhetoric were curricular staples. In the second year to them were added logic and “natural philosophy.” The third year brought moral philosophy (ethics) and Aristotelian metaphysics, followed in a fourth year by mathematics and advanced philological studies in classical languages, supplemented by a smattering of Syriac and Aramaic. (p. 109)
So, four years. But get a load of that course list – and we thought our GE was musty. I think it’s telling that in the centuries since then, we’ve changed almost everything about this curriculum except its duration.
We can attribute its durability over the centuries in a couple of ways:
1. We never got around to changing it, because measuring learning in ways other than seat time is so hard.
2. We’ve deliberately held it to four years because experience indicates it’s the right period of time.
I suspect it’s both. People get something valuable from time on task, so there’s more to this structure’s longevity than just habit. But that doesn’t let us off the hook: if we still expect people to set aside all that time, then we should be better at defining what that something valuable is.
More on that next time.
This week’s Supreme Court decision on affirmative action has me thinking about lines, and how some are harder to see the closer we get to them.
For example, until recently it was pretty clear whether a person was dead or alive; you don’t hear Hamlet equivocating over to be, not to be, or some third choice in between. But as medicine advances the distinction blurs. It was 1968 when we moved the locus of life from the heartbeat to the electrical activity of the brain; these days we’re likelier to define death as “the moment after which we can’t revive you anymore.” But technology being what it is, that moment keeps coming later. Within my lifetime, the once headline-making decision of whether to unplug a loved one has become a common way to say good-bye. (See Janet Radcliffe Richards on the ramifications for organ donors.)
It gets wackier. Even for those of us still breathing unassisted, it’s hard to pin down which symptoms of life distinguish us from other dynamic systems. Is it that we enjoy intake, metabolisms, and exhaust? Process materials? Spawn? The harder we look for the line the fainter it gets, and today there are people (e.g. Jason Silva) who will tell you with a straight face that all the identifiable properties of life are present in the universe, or a city.
These are dark days for binary thinking. Even trusty gender is on more of a sliding scale than we thought. An Olympics cheating scandal (think cross-dressing to medal) led to blood-testing in the 1990s, and turned up such surprising variability in testosterone levels in both sexes that a recent New York Times op-ed argued we should quit testing altogether, and just let athletes “self-define” their sex.
It reminded me of a similar decision made by the U.S. Census, when it gave up asking people to identify with a single race. It continues to struggle with how exactly to categorize us.
I think there’s a lesson in here for us in the knowledge biz. Like most human understanding, these paradigms take root not because they’re valid, but because they’re useful. No sane person would look at a random spray of stars and see a crab, hunter, or bear without a whole lot of coaching. But narrating the randomness made it easier to remember – and so to predict harvest season, or find your way home.
But then we get carried away. From seeing comic strips in the firmament it’s a short step to ascribing gangrene to angry gods, dead children to angels impatient for their company, and autism to vaccines. And that’s not even counting all the good accidents we like crediting to, well, ourselves. (See former investment trader Nassim Nicholas Taleb on our awesome capacity for self-congratulation.)
Whether assigning credit or blame, we get so eager to rationalize our experience that it’s hard to stop when we want to really figure something out.
I wish we could anticipate that moment of inflection, when a boundary that we mostly made up starts doing more harm than good. We’re probably there already with some of the discipline names in our colleges.
Yet there’s also an irony here, especially when you get back to the question of race. Recent court cases notwithstanding, we can’t prematurely dissolve the borders we’ve defined and go post-racial, seeing each other on an ill-defined sliding scale, however accurate and desirable that would be. Because, as advocates of action research will tell you, the only way to surface problems is by “disaggregating” our data along ethnic lines, insisting on race identification, because otherwise the inequities are invisible. I get that, and agree.
But it’s telling that no grouping seems to work everywhere. For example, in states other than California it may be enough to put all your Asian students into one category, and some of our reform efforts at home follow the national lead. But here that glosses over differences between significant numbers of people: current higher ed delivery works a lot better for Chinese and Japanese students than it does for Hmong and Samoan, for example. In California, progress won’t come from fewer lines and categories, but from more.
So we aren’t yet at that inflection point. But in naming the races, as in naming the constellations, or the criteria for life, or our academic departments, we want to remember what we’re doing, and keep watching for when to rearrange the borders, or simply move on. The day will come when we conclude that socioeconomic status is more meaningful than ethnicity, or that organizing campuses by interdisciplinary problem makes more sense than the current department roster.
Because our groupings are only ever artifice, and the membranes between them porous.
High school diplomas and college degrees suggest that something as incremental as learning can be subdivided by broad markers: you gradually improve your reading until Dickens makes sense to you, you remember some history and science and a smattering of civics, and then the world attaches a high school diploma to you, while you go on about your learning from there. Your growing proficiency was gradual and invisible, but its certification is explicit and punctuated.
The implied precision is laughable, this sense that suddenly in your 18th spring, you and everyone else your age – all over the country – have achieved the same thing. No wonder adolescents are cynical.
The cognitive dissonance rings louder up here in postsecondary, amplified by our gigantic differences in majors, academic calendars, institutional rigor. Even from the same college, two baccalaureate degrees can mean very different things. So is there a same thing that they also mean?
This question is coming up now in the California State University, where administrators and faculty are looking for some shared way to determine the right number of academic credits to put into a degree. In general the faculty pulls toward more units, the administration toward fewer.
The effort is complicated by conflicting senses of how the decision should be shared. Both sides agree that “faculty own the curriculum,” but neither sees a full role for the other in setting the size of that curriculum. It’s been a trying discussion, made harder in part because each behaves as if it’s somehow magnanimous to discuss it with the other.
For reasons that are unclear to me, some subjects have defined the bachelor’s degree more expansively than others. My own undergraduate major was in French literature. (A safe fallback, in case I couldn’t find a job as a state bureaucrat.) I blew through the requirements in about three years and took up the slack in arts and other humanities, especially film. Had I instead taken a bachelor’s in metallurgical engineering or jazz piano, the same credential would have taken almost twice as long.
Ask the pianists and metallurgists why this should be and you get one of two answers, neither of them very satisfying:
a. because that’s how our profession defines a baccalaureate. (Yes. And that’s how the rest of us define a tautology.)
b. because that’s the minimum learning you need to practice the profession as a graduate.
That second one gets more traction among my colleagues. My devastating rejoinder: so what? Then say the entry-level degree for your job is more than a baccalaureate. Say that to get a job you need a master’s, or a doctorate, or something else. This solution works fine for California’s attorneys, dentists, second-grade teachers, architects, and for that matter college professors. The big difference: all of them have a fully credentialed off-ramp after four years of college, while the engineers and pianists do not.
In earlier internecine CSU food fights I’ve thrown some spaghetti, but on this one I’m politely sitting out. It’s just too hard for me to see both sides, so I can’t participate constructively.
But it does have me wondering whether these degrees are set at the wrong size altogether. If we leave behind credit hours and go to outcomes, will they need to cover a broader swath than you see with the baccalaureate? It would be easier for me to tolerate wide differences in time to degree if the degrees really were signifiers of entry-level proficiency, instead of just time served.
There’s an effort in the California Community Colleges along these lines, called the Threshold Project. The tacit premise is that it’s hard to say when someone’s done learning a whole domain, but easier to know when they understand just one critical piece of it, a “threshold concept.” Like a doorway, it’s a visible marker you cross on your way to understanding the rest of the field, with a clear before and after. (I’m oversimplifying a bit. Friends who lead the project point out that sometimes students backslide to the pre-understanding side of the doorway. For the full story see this two-page explication, highly recommended.)
A couple of examples: as you first learn chemistry and understand that molecules are composed of atoms, you need to internalize the truth that the combinations follow rules, based on numbers of electrons. No one gets this instinctively; you have to work at it, and until it makes sense to you, the rest of chemistry never will. So you balance equations and write out atomic structures, and recognize the same compounds over and over, and eventually you grasp the threshold concept of covalent bonding. Now – and only now – can you move on to the next one.
Another example comes from my home discipline, screenwriting (not French). Aspiring screenwriters are often readers, and as fans of fiction they like narrating. But for most movies the point is to make it look like the story told itself, as if we just happen to be watching people interact in a compelling sequence. It’s very hard to write that way, and you can’t even begin until you first internalize the truth that movies only look spontaneous, because the narrative is deliberately self-effacing, another threshold concept.
In any discipline, as you identify these little comet tails of visible learning a couple of good things happen. First, faculty can agree on a shared set of assignments – student work samples – that tell anyone in the field whether a student has successfully stepped through a given doorway. For learning outcomes assessment, ePortfolio construction, and the recognition of credit this is priceless. And second, administrators can start to see something portable, interchangeable, and akin to currency in higher ed, only more meaningful than a credit hour.
Put our attention there, and disputes over the size of the degree would be on better footing. We might even holster our spaghetti.
Details and an application are available here, from the CSU Office of the Chancellor. Let’s make music together. But hurry: the deadline is soon, and the metronome’s ticking.
My home state of California clings to its boomtown roots. Our public sector had some stable funding in the mid-20th century, but it never really took; with Proposition 13 in the late 1970s we capped the stabilizing influence of property taxes, tightening the link between state revenue and income taxes, which are more vulnerable to the business cycle.
As a result, riding the state’s public sector fortunes, as I do, is, uh, exciting. Two years ago my office was only two-thirds occupied, and we shared stogies and warmed our hands over trashcans. Now the lights are back on, the copiers are new, and I miss the old railroad songs. And so help me we are crowded for space – and the recovery is still new.
I look around and wonder, Who are these people? Got me. I’ve added some, but on grant money and not the public dime. I guess if you multiply that position creation over enough of my colleagues, you get a space crunch. And now we’re wondering where to put the newbies.
On a campus the decision would be easy: clear out the social gathering places that make education meaningful and engaging and put in a bunch of offices, preferably for promoting student success. But here there are fewer options: offices are all we have.
Indulge me: I’m more paperless than most, and don’t use file cabinets at all. Partly it’s because I travel so much, and paperless means I can pack whatever I need into a flash drive, or the cloud.
But I don’t see much traffic in our file rooms from others, either, including the ones who stay put. It’s just a drag to make and label physical folders all the time.
So to me, the way to create some offices would be by putting windows and desks into these rooms, and relegating our old memos to Iron Mountain. But when I suggested it I just got funny looks.
Still, it got me thinking. The file cabinet’s emerging obsolescence may be a sign of the changing nature of work, information, and the way we interact to get things done.
When I first worked in offices, in the ‘80s and ‘90s, the relationship of one organization to the rest of the world was clearly hub and spoke. You had the parties you dealt with, and then radiating out from those were the different things you did with them. Some typical examples:
This construction of relationships implies a worldview, and shares some qualities with 20th century universities and their course catalogs. It prizes separateness and categorization. It eschews intermingling, ambiguity, and conflicts of interest. And it can impede multi-party collaboration.
Yet ours is a century in love with connecting — with ad hoc teams, multinational collaborations, and one-off projects and relationships that defy the tree-branch-twig hierarchy of the file cabinet.
So what I’m wondering: is the decline of the file cabinet a symptom of this change, or in part responsible for it?
There are precedents in nature for developments working as both a cause and an effect. For example, paleoanthropologists will tell you that our propensity for language, mental acuity, and dexterity all evolved together, and probably drove each other: speech and manipulation seem to have improved our thinking at least as much as they were improved by it.
In other words, the tools can catalyze as much as the ideas. We may forge nimbler alliances when we don’t have to label the folder they’re in.
Something about throwing everything together with metadata and tags is freeing, but I’ll also admit it has its limits. (I’d be pretty frustrated if L.A.’s Central Library stopped sorting its books.)
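The difference is easy to see in data-structure terms: a folder hierarchy gives each document exactly one home, while tags let a document join as many ad hoc groupings as it earns. A toy sketch in Python – the documents, paths, and labels are all invented:

```python
from collections import defaultdict

# File-cabinet worldview: one path per document, fixed hub-and-spoke hierarchy.
cabinet = {
    "Acme Corp/contracts/2013_renewal.doc": "...",
    "Acme Corp/invoices/inv_0417.doc": "...",
}

# Tag worldview: any document can carry any number of labels.
tags = defaultdict(set)

def tag(doc, *labels):
    """File one document under several cross-cutting groupings at once."""
    for label in labels:
        tags[label].add(doc)

tag("2013_renewal.doc", "acme", "contracts", "joint-venture")
tag("inv_0417.doc", "acme", "billing", "joint-venture")

# An ad hoc grouping the cabinet's hierarchy never anticipated:
print(sorted(tags["joint-venture"]))  # ['2013_renewal.doc', 'inv_0417.doc']
```

In the cabinet, “everything touching the joint venture” means rummaging through several drawers; with tags, the grouping exists the moment someone thinks to name it.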
But as we rebuild our ways of doing business here in the public sector – as the employees, projects, and paradigms come drifting back into the shanty town – we have an opportunity to leave out an assumption or two.