I recently ran across an old blog posting called Students learn what they need, not what is assigned, in which Ted Dunning describes a different sort of "flipped course" than is usually meant: he gave his students the final exam on Day 1, passed out the raw materials for the course, and told them to "get to work". They decided what they needed to learn, and when, and asked for instruction and guidance on their own schedule.
Dunning was happy with the results and concluded that...
... these students could learn vastly more than was expected of them if they just wanted to.
When students like what they are doing, they can surprise most everyone with what they will do to learn. Doing something cool like building a robot (as Dunning's students did) can be all the motivation some students need.
I'm sometimes surprised by just what catches my students' fancy. A few weeks ago, I asked my sophomore- and junior-level OOP class to build the infrastructure for a Twitter-like app. It engaged them like only graphical apps usually do. They've really dug into the specs to figure out what they mean. Many of them don't use Twitter, which has been good, because it frees them of too many preconceived limitations on where they can take their program.
They are asking good questions, too, about design: Should this object talk to that one? The way I divided up the task led to code that feels fragile; is there a better way? It's so nice not to still be answering Java questions. I suspect that some are still encountering problems at the language level, but they are solving them on their own and spending more time thinking about the program at a higher level.
I made this a multi-part project. They submitted Iteration 1 last weekend, will submit Iteration 2 tomorrow, and will work on Iteration 3 next week. That's a crucial element, I think, in getting students to begin taking their designs more seriously. It matters how hard or easy it is to change the code, because they have to change it now -- and tomorrow!
The point of Dunning's blog is that students have to discover the need to know something before they are really interested in learning it. This is especially true if the learning process is difficult or tedious. You can apply this idea to a lot of software development, and even more broadly to CS.
I'm not sure when I'll try the give-the-final-exam-first strategy. My compiler course already sort of works that way, since we assign the term project upfront and then go about learning what we need to build the compiler. But I don't make my students request lectures; I still lay the course out in advance and take only occasional detours.
I think I will go at least that far next semester in my programming languages course, too: show them a language on day one and explain that our goal is to build an interpreter for it by the end of the semester, along with a few variations that explore the range of possibilities that programming languages offer. That may create a different focus in my mind as I go through the semester. I'm curious to see.
... in which the author seeks pointers to interactive Scheme materials on-line.
Last summer, I fiddled around a bit with Scribble, a program for writing documentation in (and for) Racket. I considered using it to write the lecture notes and website for my fall OOP course, but for a variety of reasons set it aside.
In the spring I'll be teaching Programming Languages again, and using Racket with my students. This seems like the perfect time to dive in and use Scribble and Slideshow to create all my course materials. This will create a synergy between what I do in class and how I prep, which will be good for me. Using Racket tools will also set a good example for my students.
After seeing The Racket Way, Matthew Flatt's talk at StrangeLoop, I am inspired to do more than simply use Racket tools to create text and slides and web pages. I'd like to re-immerse myself in a world where everything is a program, or nearly so. This would set an even more important example for my students, and perhaps help them to see more clearly that they don't ever have to settle for the programs, the tools, or the languages that people give them. That is the Computer Science way as well as the Racket way.
I've also been inspired recently by the idea of an interactive textbook à la Miller and Ranum. I have a pretty good set of lecture notes for Programming Languages, but the class website should be more than a 21st-century rendition of a 19th-century presentation. I think that using Scribble and Slideshow is a step in the right direction.
So, a request: I am looking for examples of people using the Racket presentation tools to create web pages that have embedded Scheme REPLs, perhaps even a code stepper of the sort Miller and Ranum use for Python. Any pointers you might have are welcome.
Obligation as constraint
When learning a new principle, try to apply it everywhere. That way, you'll learn more quickly where it does and doesn't work well, even if your initial intuitions about it are wrong.
Actually, you don't learn in spite of your initial intuitions being wrong. You learn because your initial intuitions were wrong. That's when learning happens best.
Blindness as constraint
... what you really mean is "constrained by rules that we've stopped thinking about".
Free jazz isn't entirely free, because you are constrained by what your muscles can do. Free markets aren't entirely free, because there are limits we simply choose not to talk about. Perhaps we once did talk about them and have chosen not to any more. Perhaps we never talked about them and don't even recognize that they are present.
I can't help but think of computer science faculty who claim we shouldn't be teaching OO programming in the first course, or any other "paradigm"; we should just teach basic programming first. They may be right about not teaching OOP first, but not because their approach is paradigm-free. It isn't.
I am thankful for human beings' capacity to waste time.
We waste it in the most creative ways. My life is immeasurably better because other people have wasted time and created art and literature. Even much of the science and technology I enjoy came from people noodling around in their free time. The universe has blessed me, and us.
At my house, Thanksgiving lasts the whole weekend. I don't mind writing a Thanksgiving blog the day after, even though the rest of the world has already moved on to Black Friday and the next season on the calendar. My family is, I suppose, wasting time.
This note of gratitude was prompted by reading a recent joint interview with Brian Eno and Ha-Joon Chang, oddities in their respective disciplines of music and economics. I am thankful for oddities such as Eno and Chang, who add to the world in ways that I cannot. I am also thankful that I live in a world that provides me access to so much wonderful information with such ease. I feel a deep sense of obligation to use my time in a way that repays these gifts I have been given.
The research paper I discussed in a recent blog entry on student use of a new kind of textbook has not been published yet. It was rejected by ICER 2012, a CS education conference, for what are surely good reasons from the reviewers' perspective. The paper neither describes the results of an experiment nor puts the evaluation in the context of previous work. As the first study of this sort, though, that would be difficult to do.
That said, I did not hesitate to read the paper and try to put its findings to use. The authors have a solid reputation for doing good work, and I trust them to have done reasonable work and to have written about it honestly. Were there substantial flaws with the study or the paper, I trusted myself to take them into account as I interpreted and used the results.
I realize that this sort of thing happens every day, and has for a long time: academics reading technical reports and informal papers to learn from the work of their colleagues. But given the state of publishing these days, both academic and non-academic, I couldn't help but think about how the dissemination of information is changing.
Guzdial's blog is a perfect example. He has developed a solid reputation as a researcher and as an interpreter of other people's work. Now, nearly every day, we can all read his thoughts about his work, the work of others, and the state of the world. Whether the work is published in a journal or conference or not, it will reach an eager audience. He probably still needs to publish in traditional venues occasionally in order to please his employer and to maintain a certain stature, but I suspect that he no longer depends upon that sort of publication in the way researchers did ten or thirty years ago.
True, Guzdial developed his reputation in part by publishing in journals and conferences, and they can still play that role for new researchers who are just developing their reputations. But there are other ways for the community to discover new work and recognize the quality of researchers and writers. Likewise, journals and conferences still can play a role in archiving work for posterity. But as the internet and web reach more and more people, and as we learn to do a better job of archiving what we publish there, that role will begin to fade.
The gates really are coming down.
... comes from Gilad Bracha:
I firmly believe that a time traveling debugger is worth more than a boatload of language features[.]
This passage comes as part of a discussion of what it would take to make Bret Victor's vision of programming a reality. Victor demonstrates powerful ideas using "hand crafted illustrations of how such a tool might behave". Bracha, whose work on Smalltalk and Newspeak has long inspired me, reflects on what it would take to offer Victor's powerful ideas in a general purpose programming environment.
Smalltalk as a language and environment works at a level where we can conceive of providing the support Victor and Bracha envision, but most of the language tools people use today are too far removed from the dynamic behavior of the programs being written. The debugger is the most notable example.
Bracha suggests that we free the debugger from the constraints of time and make it a tool for guiding the evolution of the program. He acknowledges that he is not the first person to propose such an idea, pointing specifically to Bill Lewis's proposal for an omniscient debugger. What remains is the hard work needed to take the idea farther and provide programmers more transparent support for understanding dynamic behavior while still writing the code.
Paul Klipp wrote a nice piece recently on emic and etic approaches to explaining team behavior. He explains what emic and etic approaches are and then shows how they apply to the consultant's job. For example:
Let's look at an example closer to home for us software folks. You're an agile coach, arriving in a new environment with a mission from management to "make this team more agile". If you, like so many consultants in most every field, favor an etic approach, you will begin by doing a gap analysis between the behaviors and artifacts that you see and those with which you are most familiar. That's useful, and practically inevitable. The next natural step, however, may be less helpful. That is to judge the gaps between what this team is doing and what you consider to be normal as wrong.... By deciding, as a consultant or coach, to now attempt to prepare an emic description of the team's behaviors, you force yourself to set aside your preconceptions and engage in meaningful conversations with the team in order to understand how they see themselves. Now you have two tools in your kit, where you might before have had one, and more tools prepares you for more situations.
When I speak to HS students and their parents, and when I advise freshmen, I suggest that they consider picking up a minor or a second major. I tell them that it almost doesn't matter which other discipline they choose. College is a good time to broaden oneself, to enjoy learning for its own sake. Some minors and second majors may seem more directly relevant to a CS grad's career interests, but you never know what domain or company you will end up working in. You never know when having studied a seemingly unrelated discipline will turn out to be useful.
Many students are surprised when I recommend social sciences such as psychology, sociology, and anthropology as great partners for CS. Their parents are, too. Understanding people, both individually and in groups, is important in any profession, but it is perhaps more important for CS grads than many. We build software -- for people. We teach new languages and techniques -- to people. We contract out our services to organizations -- of people. We introduce new practices and methodologies to organizations -- of people. Ethnography may be more important to a software consultant's success than any set of business classes.
I had my first experience with this when I was a graduate student working in the area of knowledge-based systems. We built systems that aimed to capture the knowledge of human experts, often teams of experts. We found that they relied a lot on tacit knowledge, both in their individual expertise and in the fabric of their teams. It wasn't until I read some papers from John McDermott's research group at Carnegie Mellon that I realized we were all engaged in ethnographic studies. It would have been so useful to have had some background in anthropology!
Bryan Helmkamp recently posted Why Ruby Class Methods Resist Refactoring on the Code Climate blog. It shows with a nice example how using class methods makes it harder to refactor in a way that makes you feel like you have actually improved the code.
This is a hard lesson for beginners to learn, but even experienced Ruby and Java programmers sometimes succumb to the urge for simplicity that "just" writing a method offers. It doesn't take long for a simple class method to grow into something more complex, which gets in the way of growing the program.
As Helmkamp points out, class methods have the insidious feature of coupling your code to a particular class. Even when we program with instances rather than classes, we like to avoid that sort of coupling. Thus was born the factory method.
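The factory idea can be sketched in a few lines of Python (all names here are hypothetical, not from Helmkamp's post): the client holds whatever factory it was given, so no concrete class name ever appears in its code.

```python
# Hypothetical example: AlertService never names a concrete notifier class.
class SmtpNotifier:
    def send(self, msg):
        return f"smtp: {msg}"

class SmsNotifier:
    def send(self, msg):
        return f"sms: {msg}"

class AlertService:
    def __init__(self, notifier_factory):
        # the factory hides which concrete class gets instantiated
        self.notifier = notifier_factory()

    def alert(self, msg):
        return self.notifier.send(msg)

service = AlertService(SmsNotifier)  # swap in SmtpNotifier without touching AlertService
print(service.alert("disk full"))    # sms: disk full
```

In Python a class object is itself callable, so the class can serve as its own factory; the point is that only the caller who wires things together commits to a name.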
Deep in the process of teaching OO to undergraduates, I was drawn to an important truth found deep in the post, just after mention of the class-name coupling:
You can't easily swap in a new class, but you can easily swap in a new instance.
One of the great advantages of OOP is being able to plug a different object into a program and thus change or extend the program's behavior without modifying its code. Dynamic polymorphism via substitutable objects is in many ways the first principle of object-oriented programming.
That's why the first refactoring I usually apply whenever I encounter a class method is Introduce Object.
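Here is a minimal sketch of that refactoring in Python (hypothetical names; Helmkamp's examples are in Ruby): the class method pins every caller to one name, while the instance version lets any compatible object be plugged in.

```python
# Before: a class method couples every caller to the name TaxUtil.
class TaxUtil:
    @classmethod
    def tax_for(cls, amount):
        return round(amount * 0.07, 2)

# After Introduce Object: callers work with whatever object they are
# handed, so a different rate -- or a test double -- can be swapped in.
class SalesTax:
    def __init__(self, rate=0.07):
        self.rate = rate

    def tax_for(self, amount):
        return round(amount * self.rate, 2)

def checkout(total, tax):
    return total + tax.tax_for(total)

print(checkout(100, SalesTax()))      # 107.0
print(checkout(100, SalesTax(0.20)))  # 120.0
```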
Mark Guzdial's How students use an electronic book reports on the research paper "Performance and Use Evaluation of an Electronic Book for Introductory Python Programming" [ pdf ]. In this paper, Alvarado et al. evaluate how students used the interactive textbook How to Think Like a Computer Scientist by Ranum and Miller in an intro CS course. The textbook integrates traditional text with embedded video, "active" examples using an embedded Python interpreter, and empirical examples using a code stepper à la a debugger.
The researchers were surprised to find how little some students used the book's interactive features:
One possible explanation for the less-than-anticipated use of the unique features may be student study skills. The survey results tend to suggest that students "study" by "reading". Few students mention coding or tracing programs as a way of "studying" computer science.
I am not using an interactive textbook in my course this semester, but I have encountered the implicit connection in many students' minds between studying and reading. It caught me off-guard, too.
After lengthy searching and some thought, I decided to teach my sophomore-level OOP course without a required text. I gave students links to two on-line books they could use as Python references, but neither covers the programming principles and techniques that are at the heart of the course. In lieu of a traditional text, I have been giving my students notes for each session, written up carefully in a style that resembles a textbook, and source code -- lots and lots of source code.
Realizing that this would be an unusual way for students to study for a CS class, at least compared to their first-year courses, I have been pretty consistent in encouraging them to work this way. Daily I suggest that they unpack the code, read it, compile it, and tinker with it. The session notes often include little exercises they can do to test or extend their understanding of a topic we have covered in class. In later sessions, I often refer back to an example or use it as the basis for something new.
I figured that, without a textbook to bog them down, they would use my session notes as a map and spend most of their time in the code spelunking, learning to read and write code, and seeing the ideas we encounter in class alive in the code.
Like the results reported in the Alvarado paper, my experiences have been mixed, and in many ways not what I expected. Some students read very little, and many of those who do read the lecture notes spend relatively little time playing with the code. They will spend plenty of time on our homework assignments, but little or no time on code for the purposes of studying. My data is anecdotal, based on conversations with the subset of students who visit office hours and e-mail exchanges with students who ask questions late at night. But performance on the midterm exam and on some of the programming assignments is consistent with my inference.
OO programs are the literature of this course. Textbooks are like commentaries and (really long) Cliff Notes. If indeed the goal is to get students to read and write code, how should we proceed? I have been imagining an even more extreme approach.
A decade or so ago, I taught a course that mixed topics in user interfaces and professional ethics using a similar approach. It didn't provide magic results, but I did notice that once students got used to the unusual rhythm of the course they generally bought in to the approach. The new element here is the emphasis on code as the primary literature to read and study.
Teaching a course in a way that subverts student expectations and experience creates a new pedagogical need: teaching new study skills and helping students develop new work habits. Alvarado et al. recognize that this applies to using a radically different sort of textbook, too:
Might students have learned more if we encouraged them to use codelens more? We may need to teach students new study skills to take advantage of new learning resources and opportunities.
Another interesting step would be to add some meta-instruction. Can we teach students new study skills, to take advantage of the unique resources of the book? New media may demand a change in how students use the media.
I think those of us who teach at the university level underestimate how important meta-level instruction of this sort is to most students. We tend to assume that students will figure it out on their own. That's a dangerous assumption to make, at least for a discipline that tends to lose too many good students on the way to graduation.
In The Poetry of Function Naming, Stephen Wolfram captures something that all programmers eventually learn:
[Naming functions is] an unforgiving and humbling activity. And the issue is almost always the same. The reason you can't find a good name is because you don't really understand with complete and ultimate clarity what the function does.
Sometimes we can't come up with the perfect name for a function or a variable until after we have written code that uses it. The act of writing the program helps us to learn about the program's content.
Later in the same blog entry, Wolfram says something that made me think of my previous blog, on how some questions presuppose how they are to be answered:
But in a computer language, each function name ultimately refers to a particular piece of functionality that is defined in an absolute way, and can be implemented by a specific precise program.
When we write OO programs, a name doesn't always refer to a specific function. With polymorphic variables, we don't usually know which method will be executed when we use a name. Any object that provides the protocol required by the variable's type, or implements the interface so named, may be stored in the variable. It may even be an instance of a class I know nothing about.
For this reason, when I teach OO programming, I am careful to talk about sending a message rather than "invoking a method" or "calling a function". The receiver of the message interprets the message, not the sender.
This doesn't invalidate what Wolfram says, though it does point to a way in which we might rephrase it more generally. The name of a good method isn't about specific functionality so much as about expectation. It's about the core idea associated with an interaction, not any particular implementation of that idea.
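That distinction is easy to see in a short sketch (hypothetical shape classes): the sender names the message, and each receiver decides for itself what code runs in response.

```python
import math

# Hypothetical receivers: each interprets the "area" message its own way.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # the sender names the same message for every object;
    # which method executes depends on the receiver, not this code
    return sum(s.area() for s in shapes)

print(total_area([Circle(1), Square(2)]))  # math.pi + 4
```

`total_area` would work just as well with an instance of a class written long after it, so long as that object responds to `area` -- the expectation, not any particular implementation.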
John Cook wrote about times in mathematics when maybe you don't need to do what you were asked to do. As one example, he used remainder from division. In many cases, you don't need to do division, because you can find the answer using a different, often simpler, method.
We see a variation of John's theme in programming, too. Sometimes, a client will ask for a result in a way that presupposes the method that will be used to produce it. For example, "Use a stack to evaluate these nested expressions." We professors do this to students a lot, because we want the students to learn the particular technique specified. But you see subtle versions of this kind of request more often than you might expect outside the classroom.
An important part of learning to design software is learning to tease apart the subtle conflation of interface and implementation in the code we write. Students who learn OO programming after a traditional data structures course usually "get" the idea of data abstraction, yet still approach large problems in ways that let implementations leak out of their abstractions in the form of method names and return values. Kent Beck talked about how this problem afflicts even experienced programmers in his blog entry Naming From the Outside In.
Primitive Obsession is another symptom of conflating what we need with how we produce it. For beginners, it's natural to use base types to implement almost any behavior. Hey, the extreme programming principle You Ain't Gonna Need It encourages even us more experienced developers not to create abstractions too soon, until we know we need them and in what form. The convenience offered by hashes, featured so prominently in the scripting languages that many of us use these days, makes it easy to program for a long time without having to code a collection of any sort.
But learning to model domain objects as objects -- interfaces that do not presuppose implementation -- is one of the powerful stepping stones on the way to writing supple code, extendible and adaptable in the face of reasonable changes in the spec.
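The progression can be sketched with a hypothetical Money example in Python: the domain object's interface (adding, formatting) does not presuppose that the amount is an integer count of cents, while the primitive-laden version leaks that decision to every caller.

```python
# Primitive-obsessed: every caller must know the keys and the cents convention.
price = {"amount": 1999, "currency": "USD"}  # amount in cents

# A small domain object: the interface hides the representation,
# so changing it (say, to a Decimal) touches only this class.
class Money:
    def __init__(self, cents, currency):
        self._cents = cents
        self.currency = currency

    def add(self, other):
        assert self.currency == other.currency, "currency mismatch"
        return Money(self._cents + other._cents, self.currency)

    def __str__(self):
        return f"{self._cents / 100:.2f} {self.currency}"

total = Money(1999, "USD").add(Money(501, "USD"))
print(total)  # 25.00 USD
```

Following You Ain't Gonna Need It, the dict is a fine place to start; the object earns its keep once arithmetic and formatting rules begin to scatter across the program.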