TITLE: Names and Jargon in CS1
AUTHOR: Eugene Wallingford
DATE: August 31, 2006 5:12 PM
DESC:
-----
BODY:
As I've mentioned a few times in recent months, I am
teaching our intro course this semester for the first time
in a decade or so. After only two weeks, I have realized
this: Teaching CS1 every so often is a very good idea
for a computer science professor!
With students who have no programming background, I cannot
take anything for granted: values, types, variables, ...
expressions, statements, functions/methods ... reference,
binding ... so many fundamental concepts to build! In one
sense, we have to build them from scratch, because students
have no programming knowledge to which we can connect.
But they do have a lot of knowledge of the world, and they
know something about computers and files as users, and we
can sometimes get leverage from this understanding. So far,
I remain of the view that we can build the concepts in many
different orders. The context in which students learn guides
the ordering of concepts, by making some concepts relatively
more or less "fundamental". The media computation approach
of Guzdial and Ericson has been a refreshing change for me
-- the idea of a method "happened" naturally quite early, as
a group of operations that we have applied repeatedly from
Dr. Java's interactions pane. Growing ideas and programs
this way lets students learn bottom-up but see useful ideas
as soon as they become useful.
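To make that concrete, here is a minimal sketch of the pattern. It does not use the media computation library itself; the brightness array and method name are invented stand-ins for illustration. The idea is that a student might type the loop's body repeatedly in the interactions pane before it "happens" as a named method:

```java
// A self-contained sketch: the pixel-brightness array and the method
// name are invented for illustration, not from the media computation
// library. It shows a repeated operation abstracted into a method.
public class Brighten {
    // What was typed over and over interactively becomes one name.
    static int[] brighten(int[] brightness, int amount) {
        int[] result = new int[brightness.length];
        for (int i = 0; i < brightness.length; i++) {
            // Cap at 255, the maximum channel value.
            result[i] = Math.min(255, brightness[i] + amount);
        }
        return result;
    }

    public static void main(String[] args) {
        int[] pixels = {10, 100, 250};
        System.out.println(java.util.Arrays.toString(brighten(pixels, 20)));
    }
}
```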
I've spent a lot of time so far talking about the many
different kinds of names that we use and define when
thinking computationally. So much of what we do in computing
is combining parts, abstracting from them an idea, and then
giving the idea a name. We name values (constant),
arbitrary or changing values (variable), kinds of values
(type, class), processes (function, method)... Then we
have arguments and parameters, which are special kinds of
arbitrary values, and even files -- whose names are, in an
important way, outside of our programs. I hope that my
students are appreciating this Big Idea already.
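A short sketch of how many kinds of names show up in even a tiny class. Every identifier here is my own illustrative invention, but each one names something we built by combining parts and abstracting an idea:

```java
// Illustrative only: each identifier names a different kind of thing.
public class Circle {                    // a name for a kind of value (a class/type)
    static final double PI = 3.14159;    // a name for a fixed value (a constant)

    double radius;                       // a name for a changing value (a variable)

    Circle(double radius) {              // "radius" here is a parameter:
        this.radius = radius;            // a special kind of arbitrary value
    }

    double area() {                      // a name for a process (a method)
        return PI * radius * radius;
    }

    public static void main(String[] args) {
        Circle c = new Circle(2.0);      // "c" names one particular value
        System.out.println(c.area());
    }
}
```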
And then there is all of the jargon that we computer folks
use. I have to assume that my students don't know what
any of that jargon means, which means that (1) I can't use
much, for fear of making the class sound like the Tower
of Babel, and (2) I have to define what I use. Today, for
example, I found myself wanting to say "hard-coded", as
in a constant hard-coded into a method. I caught
myself and tried to relate it to what we were doing, so
that students would know what I meant, both now and later.
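The contrast might look something like this; the method names and the repeated greeting are my own hypothetical example, not from class:

```java
// Illustrative contrast between a hard-coded value and a parameter.
public class Greeting {
    // "Hard-coded": the count 3 is fixed inside the method itself,
    // so callers cannot change it without editing the code.
    static String repeatHardCoded(String s) {
        return s.repeat(3);
    }

    // The flexible alternative: the count arrives as a parameter.
    static String repeat(String s, int times) {
        return s.repeat(times);
    }

    public static void main(String[] args) {
        System.out.println(repeatHardCoded("hi"));
        System.out.println(repeat("hi", 2));
    }
}
```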
I often speak with friends and colleagues who teach a lot
of CS as trainers in industry. I wonder if they ever get
a chance to teach a CS1 course or something like it. The
experience is quite different for me from teaching even
a new programming style to sophomores and juniors. There,
I can take so much for granted, and focus on differences.
But for my intro students the difference isn't between two
somethings, but between something and nothing.
However, I also think that we have overstated how
difficult it is to learn to program. I am not saying that
learning to program is easy; it is tough,
with ideas and abstractions that go beyond what many
students encounter. But I think that we sometimes lure
ourselves into something of a
Zeno's paradox:
"This concept is so difficult to learn; let's break it down
into parts..." Well, then that part is so difficult to learn
that we break it down into parts. Do this recursively,
ad infinitum, and soon we have made things more difficult than
they really are -- and worse, we've made them incredibly boring
and devoid of context. If we just work from a simple context,
such as media computation, we can use the environment to guide
us a bit, and when we reach a short leap, we make it, and trust
our students to follow. Answer questions and provide support,
but don't shy away from the idea.
That's what I'm thinking this afternoon at least. Then again,
it's only the end of our second week of classes!
-----