TITLE: Always Start With A Test
AUTHOR: Eugene Wallingford
DATE: September 24, 2009 8:07 PM
DESC:
-----
BODY:
... AKA Test-Driven X: Teaching
Writing a test before writing code provides a wonderful
level of accountability, both to myself and to the world.
The test helps me know what code to write and when I am
done. I am often weak, and I like being able to keep myself
honest. Tests also enable me to show my colleagues and
my boss what I have accomplished.
These days, I usually think in test-first terms whenever
I am creating something. More and more, I find myself
wondering whether a test-driven approach might work even
better. In a recent blog entry, Ron Jeffries asked for
analogues of test-driven development
outside the software world. My first thought was, what
can't be done test-first or even test-driven?
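In code, the discipline is simple: write a failing test that states what "done" means, then write just enough code to make it pass. A minimal sketch in Python (the function name and behavior here are my illustration, not from the entry):

```python
import unittest

# Written first: the tests define what "done" means
# before any implementation exists.
class TestWordCount(unittest.TestCase):
    def test_counts_words_in_a_sentence(self):
        self.assertEqual(word_count("it's all talk until the tests run"), 7)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# Written second: just enough code to make the tests pass.
def word_count(text):
    return len(text.split())
```

Run before `word_count` exists, the tests fail; making them pass, not finishing a spec document, is what tells us we are done.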
Jeffries is, in many ways, better grounded than I am, so
rather than talk about accountability, he writes about
clarity and concreteness as virtues of TDD. Clarity,
concreteness, and accountability seem like good features
to build into most processes that create useful artifacts.
I once wrote about student outcomes assessment as a source
of
accountability and continuous feedback
in the university. I quoted Ward Cunningham at the top
of that entry, "It's all talk until the tests run,"
to suggest to myself a connection to test-first and
test-driven development.
Tests are often used to measure student outcomes from
courses that we teach at all levels of education. Many
people worry about placing too much emphasis on a specific
test as a way to evaluate student learning. Among other
things, they worry about "teaching to the test". The
implication is that we will focus all of our instruction
and learning efforts on that test and miss out
on genuine learning. Done poorly, teaching to the test
limits learning in the way people worry it will. But we
can make a similar mistake when using tests to drive our
programming, by never generalizing our code beyond a
specific set of input values. We don't want to do that
in TDD, and we don't want to do that when teaching. The
point of the test is to hold us accountable: Can our
students actually do what we claim to teach them?
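The programming analogue of teaching to the test is code that merely memorizes the tested inputs instead of capturing the idea the tests were probing. A hypothetical sketch (the names and the squaring example are mine):

```python
# Overfit to the tests: passes for the tested values,
# but has "learned" nothing general.
def square_overfit(n):
    return {2: 4, 3: 9}[n]   # raises KeyError for any untested input

# Generalized: captures the idea the tests were really after.
def square(n):
    return n * n
```

Both versions pass tests for 2 and 3, but only the general one can handle a value it has never seen, which is the same distinction we want between cramming for an exam and genuine learning.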
Before the student learning outcomes craze, the common
syllabus was the closest thing most departments had to
a set of tests for a course. The department could
enumerate a set of topics and maybe even a set of skills
expected of students in the course. Faculty new to the course could
learn a lot about what to teach by studying the syllabus.
Many departments create common final exams for courses
with many students spread across many sections and many
instructors. The common final isn't exactly like our
software tests, though. An instructor may well have done
a great job teaching the course, but students have to
invest time and energy to pass the test. Conversely,
students may well work hard to make sense of what they
are taught in class, but the instructor may have done a
poor or incomplete job of covering the assigned topics.
I thought a lot about TDD as I was designing what is for
me a new course this semester, Software Engineering. My
department does not have common syllabi for courses (yet),
so I worked from a big binder of material given to me by
the person who has taught the course for the last few
years. The material was quite useful, but it stopped
short of enumerating the specific outcomes of the course
as it has been taught. Besides, I wanted to put my
stamp on the course, too... I thought about what the
outcomes should be and how I might help students reach
them. I didn't get much farther than identifying a set
of skills for students to begin learning and a set of
tools with which they should be familiar, if not facile.
Greg Wilson has done a very nice job of designing his
Software Carpentry course in the open
and using
user stories
and other target outcomes to drive his course design.
In modern parlance, my efforts in this regard can be
tagged #fail. I'm not too surprised, though.
For me, teaching a course the first time is more akin
to an
architectural spike
than a first iteration. I have to scope out the
neighborhood before I know how to build.
Ideally, perhaps I should have done the spike prior to
this semester, but the world is not ideal, and neither am I.
Doing the course this way doesn't work all that badly
for the students, and usually no worse than taking a
course that has been designed up front by someone who
hasn't benefitted from the feedback of teaching the
course. In the latter case, both the expected outcomes
and the tests for knowing they have been met will be imperfect.
I tend to be like other faculty in expecting too much
from a course the first few times I teach it. If I
design the new course all the way to the bottom, either
the course is painful for students in expecting too
much too fast, or it is painful for me as I
un-design and re-design huge portions of the course.
Ultimately, writing and running tests come back to
accountability. Accountability is in short supply in
many circles, university curricula included, and tests
help us to have it. We owe it to our users, and we
owe it to ourselves.
-----