TITLE: Agile Moments: Accountability and Continuous Feedback in Higher Ed
AUTHOR: Eugene Wallingford
DATE: April 12, 2007 6:54 PM
DESC:
-----
BODY:
It's all talk until the tests run.
-- Ward Cunningham
A couple of years ago, I wrote about what I call my
Agile Moments,
and soon after wrote about
another.
If I were teaching an agile software development course, or
some other course with an agile development bent, I'd probably
have more such posts. (I teach a compiler development course
this fall...) But I had an Agile Moment yesterday afternoon in
an un-software-like place: a talk on program assessment at
universities.
Student outcomes assessment
is one of those trendy educational movements that come and go
like the seasons or the weather. Most faculty in the trenches
view it with unrelenting cynicism, because they've been there
before. Some legislative body or accrediting agency or
university administrator decides that assessment is essential,
and they deem it Our Highest Priority. The result is an
unfunded mandate on departments and faculty to create an assessment
plan and implement the plan. The content and structure of the
plans are defined from above, and these are almost always
onerous -- they look good from above but they look like unhelpful
busy work to professors and students who just want to do
computer science, or history, or accounting.
But as a software developer, and especially as someone with an
agile bent, I see the idea of outcomes assessment as a no-brainer.
It's all about continuous feedback and accountability.
Let's start with accountability. We don't set out to write
software without a specification or a set of stories that tell
us what our goal is. Why do we think we should start to teach
a course -- or a four-year computer science degree program! --
without having a spec in hand? Without some public document
that details what we are trying to achieve, we probably won't
know if we are delivering value. And even if we
know, we will probably have a hard time convincing anyone
else.
The trouble is, most university educators think that they know
what an education looks like, and they expect the rest of the
world to trust them. For most of the history of universities,
that's how things worked. Within the university, faculty shared
a common vision of what to do when, and outside the students
and the funders trusted them. The relationship worked out fine
on both ends, and everyone was mostly happy.
Someone at the talk commented that the call for student and program
outcomes assessment "breaks the social contract" between a university
and its "users". I disagree and think that the call merely
recognizes that the social contract is already broken. For whatever
reason, students and parents and state governments now want the
university to demonstrate that it is accountable.
While this may be unsettling, it really shouldn't surprise us.
In the software world, most anyone would find it strange if the
developers were not held accountable to deliver a particular
product. (That is even more true in the rest of the economy, and
this difference is the source of much consternation among folks
outside the software world -- or the university.) One of the
things I love about what Kent Beck has been teaching for the last
few years is the notion of accountability, and the sort of honest
communication that aims at working fairly with the people who hire
us to build software. I don't expect less of my university.
In the agile software world, we often think about to whom we
are accountable, and even focus on the best word to use, to send
the right message: client, customer, user, stakeholder, ....
Who is my client when I teach a CS course? My customer? My
stakeholders? These are complex questions, with many answers
depending on the type of school and the level at which we ask
them. Certainly students, parents, the companies who hire our
graduates, the local community, the state government, and the
citizens of the state are all partial stakeholders and thus
potential answers as client or customer.
Outcomes assessment forces an academic department to specify
what it intends to deliver, in a way that communicates the
end product more effectively to others. This offers better
accountability. It also opens the door to feedback and
improvement.
When most people talk about outcomes assessment, they are
thinking of the feedback component. As an agile software
developer, I know that continuous feedback is essential to
keeping me on track and to helping me improve as a developer.
Yet we teach courses at universities and offer degrees to
graduates while collecting little or no data as we go along.
This is the data that we might use to improve our course or
our degree programs.
The speaker yesterday quoted someone as saying that universities
"systematically deprive themselves" of input from their
customers. We sometimes collect data, but usually at the end of
the semester, when we ask students to evaluate the course and the
instructor using a form that often doesn't tell us what we need to
know. Besides, the end of the semester is too late to improve the
course while teaching the students giving the feedback!
From whom should I as instructor collect data? How do I use that
data to improve a course? How do I use that data to improve my
teaching more generally? To whom must I provide an accounting of
my performance?
We should do assessment because we want to know something --
because we want to learn how to do our jobs better. External
mandates to do outcomes assessment demotivate, not motivate.
Does this sound anything like the world of software development?
Ultimately, outcomes assessment comes down to assessing student
learning. We need to know whether students are learning what
we want them to learn. This is one of those issues that goes
back to the old social contract and common understanding of
the university's goal. Many faculty define what they want
students to know simply as "what our program expects of them"
and whether they have learned it as "have they passed our
courses?" But such circular definitions offer no room for
accountability and no systematic way for departments to get
better at what they do.
The part of assessment everyone seems to understand is grading,
the assessment of students. Grades are offered by many professors
as the primary indicator that we are meeting our curricular
goals: students who pass my course have learned the requisite
content. Yet even in this area most of us do an insufficient
job. What does an A in a course mean? Or an 87%?
When a student moves on to the next course in the program with
a 72% (a C in most of my courses) in the prerequisite course,
does that mean the student knows 72% of the material 100% of the
way, 100% of the material 72% of the way, some mixture of the two,
or something altogether different? And do we want such a
student writing the software on which we will depend tomorrow?
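The ambiguity is easy to see with a toy example (the scores below are entirely invented): two students can earn identical 72% averages by very different routes, and the single number hides the difference.

```python
# Two hypothetical students, each averaging 72% in a course.
scores_uniform = [72, 72, 72, 72, 72]    # knows every topic ~72% of the way
scores_patchy  = [100, 100, 100, 60, 0]  # mastered some topics, missed one entirely

def average(scores):
    """Mean score across all assessments."""
    return sum(scores) / len(scores)

print(average(scores_uniform))  # 72.0
print(average(scores_patchy))   # 72.0 -- same grade, very different knowledge
```

The grade alone cannot tell us which student we have, which is part of why grades make a weak assessment instrument.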
Grades are of little use to students except perhaps as carrots
and sticks. What students really need is feedback that helps them
improve. They need feedback that places the content and process
they are learning into the context of doing something.
More and more I am convinced that we need to think about how to
use the idea of course competencies that
West and Rostal
implemented in their apprenticeship-based CS curriculum as a way
to define for students and instructors alike what success in a
course or curriculum means.
My mind made what it thinks is one last connection to agile
software development. One author suggests that we think of
"assessment as narrative", as a way of telling our story.
Collecting the right data at the right times can help us to
improve. But it can also help us tell our story better. I think
a big part of agile development is telling our story: to other
developers on the team, to new people we hire, to our clients
and customers and stakeholders, and to our potential future clients
and customers. The continuous feedback and integration that we do
-- both on our software and on our teams -- is an essential cog in
defining and telling that story. But maybe my mind was simply
in overdrive when it made this connection.
It was at the end of this talk that I read the quote which led
me to think of Kurt Vonnegut, coincidental to his passing
yesterday, and which led me to write
this entry.
So it goes.
-----