TITLE: A Seven-Year Itch
AUTHOR: Eugene Wallingford
DATE: November 03, 2006 6:27 PM
Academic departments at universities can sometimes be
quite agile, introducing new courses and new approaches
into curricula in response to feedback from students and
the world. But, like large corporations and almost any
organization that reaches a certain size, the modern
university also tends to calcify certain processes to
the point that they become almost useless. Academic
program review is an example.
Every seven years, my university conducts an academic
program review of each academic department, on a rolling
schedule. My department was last reviewed in 1999, so
we were on the schedule for Fall 2006. As a part of the
review, the department conducts a self-study of each of
its programs, reviewing curriculum, student outcomes,
faculty, facilities and resources, budget and finance,
and program strengths and weaknesses. Then a set of
external auditors come to campus to conduct an independent
review, informed by the self-study reports. Finally, the
dean uses the results of the internal and external reviews
to help the department plan for improvement and maintenance.
Periodically examining one's practice, gathering feedback
from independent reviewers, and then feeding what one learns
back into process improvement seems like a good idea, a
natural way for an organization to monitor itself. So why
don't faculty take to it enthusiastically? Instead, they tend
to treat it as a bureaucratic burden to be endured.
Any agile software developer knows one part of the answer.
Waiting seven years to gather feedback and adjust course
is simply a bad idea. Imagine how far off track a department
can go in seven years!
Of course, it's not really that bad. To some extent, faculty,
department head, and dean are all in a continual process of
monitoring the state of the department and making changes.
In the seven years since our last review, we have hired
three faculty, changed department heads twice, launched two
new majors, and moved into a new building. All of these
changes resulted from collective discussion or managerial decision.
Part of the problem is the documentation process that accompanies
the review. Since returning from OOPSLA, I have spent much
of my time encouraging faculty to find time to finish their
work on the review and finding time myself to assemble and
complete the reports for our undergraduate and graduate programs.
None of us has a lot of unencumbered time to devote to such
a big task, and the result is process thrashing and delays.
This is my first time leading a review (last time around I
wrote a big part of the M.S. program self-study but had another
department head to do the encouraging, assembling, and
completing), and I think I've learned something about how to
make this work better in the future. Most of my ideas are
inspired by agile methods.
- Data collection and analysis should be an ongoing
process. As department head, I am inundated with
institutional data, and I have begun to seek out other
data that we need to make good decisions. I need to
make a standard part of my work week the assimilation
of this data into a meaningful "running total" that
is the current snapshot of our department's state.
- Documentation should also be ongoing. Not the
onerous over-documentation required by university
policy, but the sort of reporting that faculty and dean
can use to make decisions. From these reports, assembling
the self-study should be a straightforward matter.
- Whenever possible, our plans and actions should
include "unit tests". These quantifiable metrics will
help us know whether we have accomplished our goals;
a small sketch after this list shows the kind of check
I have in mind. We academics are like anyone else in
being content to make a decision and then assume that
we are doing the right thing and that all is working
out fine. That may work most of the time when we teach
courses in our specialty areas for many years, but even
then it is a dangerous attitude. In matters of programmatic
and departmental direction, it really is a head-in-the-sand
attitude.
- Talk to our stakeholders. We need to communicate
as much as possible with our students, alumni, and
corporate supporters, to find out what they need and
how our programs fit into the world. We've always done
that informally with our students, but we need to document
the feedback we get. Our department has never had much
of an alumni outreach effort, but we are doing more and
more. I also plan this semester to form an advisory
board of folks from outside the university to help us
monitor our performance.
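To make the "unit test" analogy concrete, here is a minimal
sketch in Python of what such a check might look like. The
metric names, values, and targets are hypothetical placeholders
rather than real departmental data; the point is only that each
goal comes with a quantifiable check we could rerun every
semester.

    # A toy "unit test" for departmental goals. All names and
    # numbers are hypothetical placeholders, not actual data.

    # Snapshot of metrics, assumed to come from the ongoing data
    # collection described in the first item above.
    snapshot = {
        "new_majors_this_year": 42,        # hypothetical count
        "graduate_placement_rate": 0.87,   # hypothetical fraction
        "intro_course_dfw_rate": 0.22,     # hypothetical fraction
    }

    # Each goal pairs a metric with a quantifiable pass/fail check.
    goals = [
        ("attract new majors", "new_majors_this_year",
         lambda x: x >= 40),
        ("place our graduates", "graduate_placement_rate",
         lambda x: x >= 0.85),
        ("retain intro students", "intro_course_dfw_rate",
         lambda x: x <= 0.25),
    ]

    for goal, metric, passes in goals:
        value = snapshot[metric]
        status = "PASS" if passes(value) else "FAIL"
        print(f"{status}: {goal} ({metric} = {value})")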
To be fair to the university and its policy, we are already
charged with doing some of this by the university itself,
in the form of a Student Outcomes Assessment plan. This
plan is supposed to monitor student outcomes throughout
students' time on campus and then into their alumni years.
Unfortunately, our department -- and many others, I
suspect -- has never taken these plans seriously. Some
faculty view this process as an unnecessary bureaucratic
burden, and others think that we are all too busy to do
it right. I think this means we need to develop a better
plan, one in which data collection is manageable under the
supervision of instructors and staff and immediately useful
in evaluating our progress. (Because many of us didn't take
student outcomes assessment seriously when we wrote the plan,
we wrote a plan that was unrealistic and aimed more at
satisfying the committee charged with approving the plan
than at satisfying our own needs!)
There are practical reasons for doing some elements of
program review only every seven years. For example, getting
on-campus feedback from good external reviewers is difficult.
They are busy people, in demand, and bringing them to campus
is costly. But an advisory board can provide a lighter-weight
feedback loop in the years between formal reviews.
Of course, who knows who will be our department head in 2013,
so what I have learned may or may not have an effect on how
we do our next academic program review. But I will proceed
with some of these ideas now in an effort to help us improve
as a department.
Our self-study reports were due today at 5 PM, and we haven't
submitted them yet. You know what I'll be doing this weekend.