workload, incentives, and Academically Adrift

I had the chance at lunch just now to talk to our faculty chair, Jim Postma, about this question of workload and deep learning.  In the “Academically Adrift” zeitgeist, one could wonder whether the CSU even has the capacity at this point to do more than skim.  Sure, we want the experiences of deep learning, of high-impact practices, of meaningful faculty-student interaction toward shared educational goals.  But at the sheer volume of students we serve, is that remotely within reach?

He took the point, and added that the CSU’s large section caps (the maximum enrollment allowed in one scheduled offering of a course) are a recent development.  In one budget downturn, faculty are paid more to teach extra students, realizing efficiencies for the physical plant; in the next downturn, the extra pay goes away but the classes stay big.

He added another wrinkle, connected in particular to our hopes to embed deeper learning and “high-impact” educational practices into lower-division GE:  those courses are disproportionately taught by adjuncts, who are spread even thinner.  It’s hard to imagine them having any more time-per-student to devote.

And so, the observation in Academically Adrift goes, that reduced bandwidth per student creates an incentive for an inadvertent, unspoken deal between teacher and student:  don’t expect too much of me, and I’ll return the favor.  So that’s grim.  If we’re going to go all-out for high-impact practices, do we first need to re-engineer the business model?

The silver lining came with a discussion I had yesterday on this with Wayne Tikkanen, who directs the CSU’s Institute for Teaching and Learning.  He knows of a CSU campus that wanted to adopt “writing across the curriculum,” embedding routine writing assignments (which must then be read and graded) in courses beyond the English department.  To incentivize that, the campus created prestigious certificates for faculty trained in writing-intensive teaching.  Certification qualifies you for courses with lower section caps.  Cool, huh?

Jim liked that idea.  I was wondering if we might also eventually certify in other high-impact practices, like undergraduate research and service learning.  Among other benefits, the act of certification could provide structure for ongoing faculty development.  Say it expires every two years.  That gives you the chance to check in, maybe norm some rubrics, have faculty across disciplines talk about what they’re doing.

That’s the update on the workload and incentives front.


2 thoughts on “workload, incentives, and Academically Adrift”

  1. I would want to see different evidence for “certification.” Simply getting “trained” in writing instruction doesn’t mean writing instruction happens and students learn and improve. These assumptions must be warranted. (Same is true, for example, in service learning or undergraduate research.)

    For me, the biggest implication from your idea is this: There is a way to sort courses that require caps from courses that may be ok with larger groups. Large classes in themselves are not always bad. However, when pedagogies that are centrally linked to particular learning outcomes can demonstrably not be implemented effectively in large groups, then such courses built on such pedagogies clearly need lower numbers. Even then, the magic number isn’t always “15” or “25.” The system has to do a better job of applying resources appropriately in relation to desired learning outcomes. This, of course, requires good assessment.

    I would add also that this need to understand what is really happening in terms of student learning is crucial in online environments. The class size issue is muddied in this context, but it is no less important.

    Personally, I see the cap on writing-intensive courses as a bit of a gamble. Yes, if you could “certify” that students in those classes really are getting writing instruction and are improving, then the cap is justified. However, when “certification” means students are assigned to write a certain number of pages or words or the like, or when WI certification results from a cursory review of instructor training with a look at the syllabus, I don’t think the problem is solved.

    WAC people I know will argue that this certification strategy is a way to get profs trained in some constructivist pedagogy, a good in itself, and I agree. But I don’t think it will have the impact it could have if it is not followed up with some substantive reflective analysis grounded in evidence of student learning with opportunities for students to learn also explored (“closing the loop” in WASCesque terms).

    1. Thank you so much for this thoughtful response, Terry. As usual it’s spot on, in point after point. Appropriate caps depend entirely on the context, and there’s no magic number, or even rule of thumb, to apply without doing some genuine assessment of learning outcomes, to see if we’re even on the right track. It’s one more way assessment could be made more consequential than it currently is. I wonder, is anyone trying this? Could we?
