December 30, 2016 12:42 PM

Looking Closer at My Course is Hard -- and Helpful

The CS faculty has decided to create common syllabi for all courses in the department. The motivation to do this came from several directions, including a need to meet university requirements regarding "outcomes assessment" and a desire to make sections taught by different instructors more consistent within and across semesters. But it will also help instructors improve their courses. Faculty teaching upper-division courses will have a better sense of what students coming into their courses should already know, and faculty teaching lower-division courses will have a better sense of what their students will need to be able to do in their upcoming courses.

Our first pass at this is for each faculty member to use a common format to describe one of his or her courses. The common format requires us to define in some detail the purpose, goals, outcomes, and content of the course. Of course, we all have syllabi for our courses that cover some or all of these things, but now we are expected to make all the elements concrete enough that the syllabus can be used by other faculty to teach the course.

For my first attempt, I decided to write an extended syllabus for my Programming Languages and Paradigms course, which I am teaching again in the spring. I have been teaching this course for years, have detailed lecture notes for every session (plus many more), and already give students a syllabus that tries to explain in a useful way the purpose, goals, outcomes, and content of the course. That should give me a running start, right?

I've been working on putting my current syllabus's content into the extended syllabus format for several hours now. At this point, I have concluded that the process is three things: instructive, likely to be very helpful in the long run, and very hard to do.

Defining detailed goals and outcomes is instructive because it causes me to think about the course both at the highest level of abstraction (the goal of the course both in our curriculum and in our students' CS education) and at the lowest (what we want our students to know and be able to do when they leave the course). After teaching the course for many years, I tend to think big-picture thoughts about the course only at the beginning of the semester and only in an effort to make general modifications to the direction it takes. Then I think about finer details on a unit-by-unit and session-by-session basis, slowly evolving the course content in reaction to specific stimuli. Looking at the course across multiple levels of abstraction at the same time is teaching me a lot about what my course is and does in a way that I don't usually see when I'm planning only at the high level or executing only at the day-to-day level.

One specific lesson I've learned is really a stark reminder of something I've always known: some of my sessions are chock-full of stuff: ideas, techniques, code, examples, .... That is great for exposing students to a wide swath of the programming languages world, but it is also a recipe for cognitive overload.

This process is helpful because it causes me to think about concrete ways I can make the course better. I am finding holes in my coverage of certain topics and leaps from one concept to another that are intuitive in my mind but not documented anywhere.

I've been taking notes as I go along, detailing specific changes I can make this spring:

  • to session materials, so that they give more examples of new concepts before I ask students to use the concepts in their work
  • to homework assignments, so that they emphasize specific goals of the course more clearly and address goals whose coverage seems to have slipped over time
  • to exams, so that they assess the outcomes we hope to achieve in the course

Designing class sessions, homework, and exams in terms of goals and outcomes creates a virtuous cycle in which the different elements of the course build on and reinforce one another. This is perhaps obvious to all you teachers out there, as it is to me, but it's easy to lose sight of over time.

But my most salient conclusion at this moment is that this is hard. It is difficult to explain the course in enough detail that faculty outside the area can grok the course as a part of our curriculum. It's difficult to explain the course in enough detail that other faculty could, at least in principle, teach it as designed. It's difficult to design a course carefully enough to be justifiably confident that it meets your goals for the course. That sounds a little like programming.

But I'm glad I'm doing it. It's worth the effort to design a course this carefully, and to re-visit the design periodically. That sounds a little like programming, too.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 28, 2016 1:01 PM

Unclear on the Concept, or How Ken Griffey, Jr., is like James Monroe

Today I read an article on faithless electors, those members of the Electoral College who over the years did not vote for the candidate to whom they were pledged. One story from 1820 made me think of baseball's Hall of Fame!

William Plummer, Sr. was pledged to vote for Democratic-Republican candidate James Monroe. Instead, he cast his vote for John Quincy Adams, also of the Democratic-Republican Party, although Adams was not a candidate in the 1820 election.
Supposedly, Plummer did not feel that the Electoral College should unanimously elect any president other than George Washington.

Every year, there are Hall of Fame voters who practice Mr. Plummer's ostentatious electoral purity. They refuse to vote for any player on his first ballot, preserving "the legacy of their predecessors", none of whom -- neither Cobb nor Ruth, Mays nor Aaron -- was elected unanimously.

(Some leave a great player off the ballot for a more admirable reason: to use the vote to support a player they believe deserves entry but who does not receive many other votes. They hope to give such a player more time to attract a sufficient body of voters. I cut these voters a lot more slack than I cut the Plummers of the world.)

It was a silly idea in the case of President Monroe, whose unanimous election would have done nothing to diminish Washington's greatness or legacy, and it's a silly idea in the case of baseball greats like Ken Griffey, Junior.


Posted by Eugene Wallingford | Permalink | Categories: General

December 27, 2016 8:36 AM

There's No Right Way To Write All Programs

In the Paris Review's The Art of Fiction No. 183, the interviewer asks Tobias Wolff for some advice. Wolff demurs:

Writers often give advice they don't follow to the letter themselves. And so I tend to receive their commandments warily.

This was refreshing. I, too, tend to treat advice from successful people with caution.

Wolff is willing, however, to share stories about what has worked for him. He just doesn't think what works for him will necessarily work for anyone else. He doesn't even think that what works for him on one story will work for him on the next. Eventually, he sums up his advice with this:

There's no right way to tell all stories, only the right way to tell a particular story.

Wolff follows a few core practices that keep him moving forward every day, but he isn't dogmatic about them. He does whatever he needs to do to get the current story written -- even if it means moving to Italy for several months.

Wolff is talking about short stories and novels, but this sentiment applies to more than writing. It captures what is, for me, the fundamental attitude of agile software developers: There is no right way to write all programs, only a good way to write each particular program. We find that certain programming practices -- taking small steps, writing tests early, refactoring mercilessly, pairing -- apply to most tasks. These practices are so powerful precisely because they give us feedback frequently and help us adjust course quickly.

But when conditions change around us, we must be willing to adapt. (Even if that means moving to Italy for several months.) This is what it means to be agile.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

December 26, 2016 8:38 AM

Learn By Programming

The latest edition of my compiler course has wrapped, with grades submitted and now a few days' distance between us and the work. The course was successful in many ways, even though not all of the teams were able to implement the entire compiler. That mutes the students' sense of accomplishment sometimes, but it's not unusual for at least some of the teams to have trouble implementing a complete code generator. A compiler is a big project. Fifteen weeks is not a lot of time. In that time, students learn a lot about compilers, and also about how to work as a team to build a big program using some of the tools of modern software development. In general, I was quite proud of the students' efforts and progress. I hope they were proud of themselves.

One of the meta-lessons students tend to learn in this course is one of the big lessons of any project-centered course:

... making something is a different learning experience from remembering something.

I think that a course like this one also helps most of them learn something else even more personal:

... the discipline in art-making is exercised from within rather than without. You quickly realize that it's your own laziness, ignorance, and sloppiness, not somebody else's bad advice, that are getting in your way. No one can write your [program] for you. You have to figure out a way to write it yourself. You have to make a something where there was a nothing.

"Laziness", "ignorance", and "sloppiness" seem like harsh words, but really they aren't. They are simply labels for weaknesses that almost all of us face when we first learn to create things on our own. Anyone who has written a big program has probably encountered them in some form.

I learned these lessons as a senior, too, in my university's two-term project course. It's never fun to come up short of our hopes or expectations. But most of us do it occasionally, and never more reliably than when we are first learning how to make something significant. It is good for us to realize early on our own responsibility for how we work and what we make. It empowers us to take charge of our behavior.

Black Mountain College's Lake Eden campus

The quoted passages are, with the exception of the word "program", taken from Learn by Painting, a New Yorker article about "Leap Before You Look: Black Mountain College, 1933-1957", an exhibit at the Institute of Contemporary Art in Boston. Black Mountain was a liberal arts college with a curriculum built on top of an unusual foundation: making art. Though the college lasted less than a quarter century, its effects were felt across most of the art disciplines in the twentieth century. But its mission was bigger: to educate citizens, not artists, through the making of art. Making something is a different learning experience from remembering something, and BMC wanted all of its graduates to have this experience.

The article was a good read throughout. It closes with a comment on Black Mountain's vision that touches on computer science and reflects my own thinking about programming. This final paragraph begins with a slight indignity to us in CS but turns quickly into admiration:

People who teach in the traditional liberal-arts fields today are sometimes aghast at the avidity with which undergraduates flock to courses in tech fields, like computer science. Maybe those students see dollar signs in coding. Why shouldn't they? Right now, tech is where value is being created, as they say. But maybe students are also excited to take courses in which knowing and making are part of the same learning process. Those tech courses are hands-on, collaborative, materials-based (well, virtual materials), and experimental -- a digital Black Mountain curriculum.

When I meet with prospective students and their parents, I stress that, while computer science is technical, it is not vocational. It's more. Many high school students sense this already. What attracts them to the major is a desire to make things: games and apps and websites and .... Earning potential appeals to some of them, of course, but students and parents alike seem more interested in something else that CS offers them: the ability to make things that matter in the modern world. They want to create.

The good news suggested in "Learn by Painting", drawing on the Black Mountain College experiment, is that learning by making things is more than just a way to learn a subject. It is a different and, in most ways, more meaningful way to learn about the world. It also teaches you a lot about yourself.

I hope that at least a few of my students got that out of their project course with me, in addition to whatever they learned about compilers.

~~~~

IMAGE. The main building of the former Black Mountain College, on the grounds of Camp Rockmont, a summer camp for boys. Courtesy of Wikipedia. Public domain.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 21, 2016 2:31 PM

Retaining a Sense of Wonder

A friend of mine recently shared a link to Radio Garden on a mailing list (remember those?), and in the ensuing conversation, another friend wrote:

I remember when I was a kid playing with my Dad's shortwave radio and just being flabbergasted when late one night I tuned in a station from Peru. Today you can get on your computer and communicate instantly with any spot on the globe, and that engenders no sense of wonder at all.

Such is the nature of advancing technology. Everyone becomes acclimated to amazing new things, and pretty soon they aren't even things any more.

Teachers face a particularly troublesome version of this phenomenon. Teach a subject for a few years, and pretty soon it loses its magic for you. It's all new to your students, though, and if you can let them help you see it through their eyes, you can stay fresh. The danger is that the subject starts to look pretty ordinary to you, even boring, and you have a hard time helping them feel the magic.

If you read this blog much, you know that I'm pretty easy to amuse and pretty easy to make happy. Even so, I have to guard against taking life and computer science for granted.

Earlier this week, I was reading one of the newer tutorials in Matthew Butterick's Beautiful Racket, Imagine a language: wires. In it, he builds a DSL to solve one of the problems in the 2015 edition of Advent of Code, Some Assembly Required. The problem is fun, specifying a circuit in terms of a small set of operations for wires and gates. Butterick's approach to solving it is fun, too: creating a DSL that treats the specification of a circuit as a program to interpret.

This is no big deal to a jaded old computer scientist, but remember -- or imagine -- what this solution must seem like to a non-computer scientist or to a CS student encountering the study of programming languages for the first time. With a suitable interpreter, every dataset is a program. If that isn't amazing enough, some wires datasets introduce sequencing problems, because the inputs to a gate are defined in the program after the gate. Butterick uses a simple little trick: define wires and gates as functions, not data. This simple little trick is really a big idea in disguise: Functions defer computation. Now circuit programs can be written in any order and executed on demand.
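
To make the trick concrete, here is a tiny hand-rolled sketch in plain Racket -- my own toy version of the idea, not Butterick's DSL -- wired up with signals along the lines of the puzzle's small sample circuit. Because every wire and gate is a zero-argument function, a gate can be defined before the wires that feed it, and no signal is computed until someone asks for it.

    #lang racket

    ;; Wires and gates as zero-argument functions: a sketch of the idea,
    ;; not Butterick's implementation. Note that d, g, and h are defined
    ;; before x and y; their bodies run only when they are called.
    (define (d) (bitwise-and (x) (y)))                   ; x AND y -> d
    (define (g) (arithmetic-shift (y) -2))               ; y RSHIFT 2 -> g
    (define (h) (bitwise-and (bitwise-not (x)) #xFFFF))  ; NOT x -> h (16-bit)

    (define (x) 123)                                     ; 123 -> x
    (define (y) 456)                                     ; 456 -> y

    (d)   ; => 72    -- computed only when demanded
    (g)   ; => 114
    (h)   ; => 65412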

Even after all these years, computing's most mundane ideas can still astonish me sometimes. I am trying to keep my sense of wonder high and to renew it whenever it starts to flag. This is good for me, and good for my students.

~~~~

P.S. As always, I recommend Beautiful Racket, and Matthew Butterick's work more generally, quite highly. He has a nice way of teaching useful ideas in a way that appreciates their beauty.

P.P.S. The working title of this entry was "Paging Louis C.K., Paging Louis C.K." That reference may be a bit dated by now, but still it made me smile.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 19, 2016 3:04 PM

Higher Education Has Become A Buyer's Market

... as last week's Friday Fragments reminds us.

Much of higher education is based on the premise of a seller's market. In a seller's market, the institution can decide the terms on which it will accept students. At the very elite, exclusive places, that's still largely true. Swarthmore turns away far more than it admits, and it does so on its own terms. But most of us aren't Swarthmore.

The effects of this change are numerous. It's hard to set prices, let alone correlate price and quality. University administrations are full of people confused by the shifting market. They are also full of people frantic at the thought of a drop in enrollment or retention. There are easy ways to keep these numbers up, of course, but most folks aren't willing to pay the associated price.

Interesting times, indeed.


Posted by Eugene Wallingford | Permalink | Categories: General

December 16, 2016 2:14 PM

Language and Thinking

Earlier this week, Rands tweeted:

Tinkering is a deceptively high value activity.

... to which I followed up:

Which is why a language that enables tinkering is a deceptively high value tool.

I thought about these ideas a couple of days later when I read The Running Conversation in Your Head and came across this paragraph:

The idea is not that you need language for thinking but that when language comes along, it sure is useful. It changes the way you think, it allows you to operate in different ways because you can use the words as tools.

This is how I think about programming in general and about new, and better, programming languages in particular. A programmer can think quite well in just about any language. Many of us cut our teeth in BASIC, and simply learning how to think computationally allowed us to think differently than we did before. But then we learn a radically different or more powerful language, and suddenly we are able to think new thoughts, thoughts we didn't even conceive of in quite the same way before.

It's not that we need the new language in order to think, but when it comes along, it allows us to operate in different ways. New concepts become new tools.

I am looking forward to introducing Racket and functional programming to a new group of students this spring semester. First-class functions and higher-order functions can change how students think about the most basic computations such as loops and about higher-level techniques such as OOP. I hope to do a better job this time around helping them see the ways in which it really is different.
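
Here is a small illustration of the shift I have in mind -- my own example, not one taken from the course materials: the same computation written first as an explicit accumulator loop over a list and then as a pipeline of higher-order functions.

    #lang racket

    ;; Sum the squares of the even numbers in a list.

    (define (square n) (* n n))

    ;; Loop-style: walk the list, carrying a running total.
    (define (sum-even-squares/loop lst)
      (let loop ([lst lst] [total 0])
        (cond [(null? lst) total]
              [(even? (first lst))
               (loop (rest lst) (+ total (square (first lst))))]
              [else (loop (rest lst) total)])))

    ;; Higher-order style: say what to compute -- filter, map, fold.
    (define (sum-even-squares lst)
      (foldr + 0 (map square (filter even? lst))))

    (sum-even-squares/loop '(1 2 3 4 5 6))   ; => 56
    (sum-even-squares      '(1 2 3 4 5 6))   ; => 56

Once students see the second version, a loop stops being the only way to think about "do something to every element"; it becomes one implementation choice among several.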

To echo the Running Conversation article again, when we learn a new programming style or language, "Something really special is created. And the thing that is created might well be unique in the universe."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 12, 2016 3:15 PM

Computer Science Is Not That Special

I'm reminded of a student I met with once who told me that he planned to go to law school, and then a few minutes later, when going over a draft of a lab report, said "Yeah... Grammar isn't really my thing." Explaining why I busted up laughing took a while.

When I ask prospective students why they decided not to pursue a CS degree, they often say things to the effect of "Computer science seemed cool, but I heard getting a degree in CS was a lot of work." or "A buddy of mine told me that programming is tedious." Sometimes, I meet these students as they return to the university to get a second degree -- in computer science. Their reasons for returning vary from the economic (a desire for better career opportunities) to personal (a desire to do something that they have always wanted to do, or to pursue a newfound creative interest).

After you've been in the working world a while, a little hard work and some occasional tedium don't seem like deal breakers any more.

Such conversations were on my mind as I read physicist Chad Orzel's recent Science Is Not THAT Special. In this article, Orzel responds to the conventional wisdom that becoming a scientist and doing science involve a lot of hard work that is unlike the exciting stuff that draws kids to science in the first place. Then, when kids encounter the drudgery and hard work, they turn away from science as a potential career.

Orzel's takedown of this idea is spot on. (The quoted passage above is one of the article's lighter moments in confronting the stereotype.) Sure, doing science involves a lot of tedium, but this problem is not unique to science. Getting good at anything requires a lot of hard work and tedious attention to detail. Every job, every area of expertise, has its moments of drudgery. Even the rare few who become professional athletes and artists, with careers generally thought of as dreams that enable people to earn a living doing the thing they love, spend endless hours engaged in the drudgery of practicing technique and automatizing physical actions that become their professional vocabulary.

Why do we act as if science is any different, or should be?

Computer science gets this rap, too. What could be worse than fighting to get a compiler to accept a program while you are learning to code? Or plowing through reams of poorly documented API descriptions to plug your code into someone's e-commerce system?

Personally, I can think of lots of things that are worse. I am under no illusion, however, that other professionals are somehow shielded from such negative experiences. I just prefer my pains to theirs.

Maybe some people don't like certain kinds of drudgery. That's fair. Sometimes we gravitate toward the things whose drudgery we don't mind, and sometimes we come to accept the drudgery of the things we love to do. I'm not sure which explains my fascination with programming. I certainly enjoy the drudgery of computer science more than that of most other activities -- or at least I suffer it more gladly.

I'm with Orzel. Let's be honest with ourselves and our students that getting good at anything takes a lot of hard work and, once you master something, you'll occasionally face some tedium in the trenches. Science, and computer science in particular, is not that much different from anything else.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 09, 2016 1:54 PM

Two Quick Hits with a Mathematical Flavor

I've been wanting to write a blog entry or two lately about my compiler course and about papers I've read recently, but I've not managed to free up much time as the semester winds down. That's one of the problems with having Big Ideas to write about: they seem daunting and, at the very least, take time to work through.

So instead here are two brief notes about articles that crossed my newsfeed recently and planted themselves in my mind. Perhaps you will enjoy them even without much commentary from me.

A Student's Unusual Proof Might Be A Better Proof

I asked a student to show that between any two rationals is a rational.
She did the following: if x < y are rational then take δ << y-x and rational and use x+δ.

I love the two proofs discussed in the article! Student programmers are similarly creative. Their unusual solutions often expose biases in my thinking and give me new ways to think about a problem. If nothing else, they help me understand better how students think about ideas that I take for granted.
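
For anyone who wants the terse notation in the quote unpacked, here is my reading of the student's argument, written out (a reconstruction, not the article's own wording):

    \[
      x < y \in \mathbb{Q}, \quad \text{choose a rational } \delta \text{ with } 0 < \delta < y - x
      \ \bigl(\text{e.g., } \delta = \tfrac{y-x}{2}\bigr)
    \]
    \[
      \Longrightarrow\quad x \;<\; x + \delta \;<\; x + (y - x) \;=\; y,
      \qquad x + \delta \in \mathbb{Q} \ \text{(closure under addition)}.
    \]

The textbook proof takes the midpoint (x + y)/2 directly; the student's version works with any small enough rational step.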

Numberless Word Problems

Some girls entered a school art competition. Fewer boys than girls entered the competition.
She projected her screen and asked, "What math do you see in this problem?"
Pregnant pause.
"There isn't any math. There aren't any numbers."

I am fascinated by the possibility of adapting this idea to teaching students to think like a programmer. In an intro course, for example, students struggle with computational ideas such as loops and functions even though they have a lot of experience with these ideas embodied in their daily lives. Perhaps the language we use gets in the way of them developing their own algorithmic skills. Maybe I could use computationless word problems to get them started?

I'm giving serious thought to ways I might use this approach to help students learn functional programming in my Programming Languages course this spring. The author describes how to write numberless word problems, and I'm wondering how I might bring the same philosophy to computer science. If you have any ideas, please share!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

December 05, 2016 2:42 PM

Copying Interfaces, Copying Code

Khoi Vinh wrote a short blog entry called The Underestimated Merits of Copying Someone Else's Work that reminds us how valuable copying others' work, a standard practice in the arts, can be. The most basic form is copying at the level of the text itself. Sometimes, the value is mental or mechanical:

Hunter S. Thompson famously re-typed, word for word, F. Scott Fitzgerald's "The Great Gatsby" just to learn how it was done.

This made me think back to the days when people typed up code they found in Byte magazine and other periodicals. Of course, typing a program gave you more than practice typing or a sense of what it was like to type that much; it also gave you a working program that you could use and tinker with. I don't know if anyone would ever copy a short story or novel by hand so that they could morph it into something new, but we can do that meaningfully with code.

I missed the "copy code from Byte" phase of computing. My family never had a home computer, and by the time I got to college and changed my major to CS, I had plenty of new programs to write. I pulled ideas about chess-playing programs and other programs I wanted to write from books and magazines, but I never typed up an entire program's source code. (I mention one of my first personal projects in an old OOPSLA workshop report.)

I don't hear much these days about people copying code keystroke for keystroke, though Zed Shaw has championed the idea in a series of introductory programming books such as Learn Python The Hard Way. There is probably something to be learned by copying code Hunter Thompson-style, feeling the rhythm of syntax and format through repetition, and soaking up semantics along the way.

Vinh has a more interesting sort of copying in mind, though: copying the interface of a software product:

It's odd then to realize that copying product interfaces is such an uncommon learning technique in design. ... it's even easier to re-create designs than it is to re-create other forms of art. With a painting or sculpture, it's often difficult to get access to the historically accurate tools and materials that were used to create the original. With today's product design, the tools are readily available; most of us already own the exact same software employed to create any of the most prominent product designs you could name.

This idea generalizes beyond interfaces to any program for which we don't have source code. We often talk about reverse engineering a program, but in my experience this usually refers to creating a program that behaves "just like" the original. Copying an interface pixel by pixel, like copying a program or novel character by character, requires the artist to attend to the smallest details -- to create an exact replica, not a similar work.

We cannot reverse engineer a program and arrive at identical source code, of course, but we can try to replicate behavior and interface exactly. Doing so might help a person appreciate the details of code more. Such a practice might even help a programmer learn the craft of programming in a different way.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning