November 30, 2011 7:07 PM

A Definition of Design from Charles Eames

Paul Rand's logo for NeXT

Our council of department heads meets in the dean's conference room, in the same building that houses the Departments of Theater and Art, among others. Before this morning's meeting, I noticed an Edward Tufte poster on the wall and went out to take a look. It turns out that the graphic design students were exhibiting posters they had made in one of their classes, while studying accomplished designers such as Tufte and Paul Rand, the creator of the NeXT logo for Steve Jobs.

As I browsed the gallery, I came across a couple of posters on the work of Charles and Ray Eames. One of them prominently featured this quote from Charles:

Design is a plan for arranging elements in such a way as best to accomplish a particular purpose.

This definition works just as well for software design as it does for graphic design. It is good to be reminded occasionally how universal the idea of design is to the human condition.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

November 30, 2011 12:07 PM

Being Less Helpful

I've noticed myself being less helpful in the last couple of weeks. Students have come to me with questions -- about homework problems, quizzes from earlier in the semester, and topics we've covered in lecture. Advisees have come to me with questions about degree requirements and spring registration. In most cases, they expect to receive answers: direct, specific, concrete solutions to their problems.

But lately, that's not what I've been giving most of them.

I've been asking a lot of questions myself. I've pointed them to specific resources: lecture notes, department web pages, and the university catalog. All of these resources are available on-line. In the case of my course, detailed lecture notes, all assignments, and all source code are posted on the course web site. Answers to many questions are in there, sometimes lurking, other times screaming in the headlines.

But mostly, I've been asking a lot of questions.

I am not trying to be unhelpful. I'm trying to be more helpful in the longer run. Often, students have forgotten that we had covered a topic in detail in class. They will be better off re-reading the material, engaging it again, than being given a rote answer now. If they've read the material and still have questions, the questions are likely to be more focused. We will be able to discuss more specific misunderstandings. I will usually be able to diagnose a misconception more accurately and address it specifically.

In general, being less helpful is essential to helping students learn to be self-sufficient, to learn better study skills, and to develop better study habits. It's just as important a part of their education as any lecture on higher-order procedures.

(Dan Meyer has been agitating around this idea for a long time in how we teach K-12 mathematics. I often find neat ways to think about being less helpful on his blog, especially in the design of the problems I give my students.)

However, I have to be careful not to be cavalier about being less helpful. First of all, it's easy for being less helpful to morph into being unhelpful. Good habits can become bad habits when left untended.

Second, and more important, trends in the questions that students ask can indicate larger issues. Sometimes, they can do something better to improve their learning, and sometimes I can do something better. For example, from my recent run of being less helpful, I've learned that...

  • Students are not receiving the same kind of information they used to receive about scheduling and programs of study. To address this, I'll be communicating more information to my advisees earlier in the process.
  • Students in Programming Languages are struggling with a certain class of homework problems. To fix this, I plan to revamp several class sessions in the middle of the semester to help them learn some of these ideas better and sooner. I think I'll also provide more concrete pointers on a couple of homework assignments.

Being less helpful now is a good strategy only if both the students and I have done the right kind of work earlier.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 18, 2011 2:10 PM

Teachers Working Themselves Out of a Job

Dave Rooney wrote a recent entry on what he does when coaching in the realm of agile software development. He summarizes his five main tasks as:

  • Listen
  • Ask boatloads of questions
  • Challenge assumptions
  • Teach/coach Agile practices
  • Work myself out of a job

All but the last are a part of my job every day. Listening, asking questions, and challenging assumptions are essential parts of helping people to learn, whatever one's discipline or level of instruction. As a CS prof, I teach a lot of courses that instruct or require programming, and I look for opportunities to inject pragmatic programming skills and agile development practices.

What of working myself out of a job? For consultants like Rooney, this is indeed the goal: help an organization get on a track where they don't need his advice, where they can provide coaching from inside, and where they become self-sufficient in their own practices.

In a literal sense, this is not part of my job. If I do my job well, I will remain employed by the same organization, or at least have that option available to me.

But in another sense, my goals with respect to working myself out of a job are the same as a consultant's, only at the level of individual students. I want to help students reach a point where they can learn effectively on their own. As much as possible, I hope for them to become more self-sufficient, able to learn as an equal member of the larger community of programmers and computer scientists.

A teacher's goal is, in part, to prepare students to move on to a new kind of learning, where they don't need us to structure the learning environment or organize the stream of ideas and information they learn from. Many students come to us pretty well able to do this already; they need only to realize that they don't need me!

With most universities structured more around courses than one-on-one tutorials, I don't get to see the process through with every student I teach. One of the great joys is to have the chance to work with the same student many times over the years, through multiple courses and one-on-one through projects and research.

In any case, I think it's healthy for teachers to approach their jobs from the perspective of working themselves out of a job. Not to worry; there is always another class of students coming along.

Of course, universities as we know them may be disappearing. But the teachers among us will always find people who want to learn.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 16, 2011 2:38 PM

Memories of a Programming Assignment

Last Friday afternoon, the CS faculty met with the department's advisory board. The most recent addition to the board is an alumnus who graduated a decade ago or so. At one point in the meeting, he said that a particular programming project from his undergraduate days stands out in his mind after all these years. It was Cat and Mouse, a series of assignments from my Object-Oriented Programming course.

I can't take credit for the basic assignment. I got the idea from Mike Clancy in the mid 1990s. (He later presented his version of the assignment on the original Nifty Assignments panel at SIGCSE 1999.) The problem includes so many cool features, from coordinate systems to stopping conditions. Whatever one's programming style or language, it is a fun challenge. When done in an OO style in Java, with its great support for graphics, it is even more fun.

But those properties aren't what he remembers best about the project. He recalls that the project took place over several weeks and that each week, I changed the requirements of the assignment. Sometimes, I added a new feature. Other times, I generalized an existing feature.

What stands out in his mind after all these years is getting the new assignment each week, going home, reading it, and thinking,

@#$%#^. I have to rewrite my entire program.

You see, he had hard-coded assumptions throughout his code. Concerns were commingled, not separated. Objects were buried inside larger pieces of code. Model was tangled up with view.

So, he started from scratch. Over the course of several weeks, he built an object-oriented system. He came to understand dynamic polymorphism and patterns such as MVC and decorator, and found ways to use them effectively in his code.

He remembers the dread, but also that this experience helped him learn how to write software.

I never know exactly what part of what I'm doing in class will stick most with students. From semester to semester and student to student, it probably varies quite a bit. But the experience of growing a piece of software over time in the face of growing and changing requirements is almost always a powerful teacher.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 14, 2011 3:43 PM

Using Null Object to Get Around Truthiness

Avdi Grimm recently posted a short but thorough introduction to a common problem encountered by Ruby programmers: nullifying an action when the required object is not present. In a mixed-paradigm language, this becomes an issue when we use if-statements to guard the action. Ruby, like many languages, treats a couple of distinguished values -- nil and false -- as false and all other values as true. Thus, unless an object is "falsey", it will be seen as true by the if-statement. This requires us to do extra tests or wrap our objects to ensure that tests pass and fail at the appropriate times. Grimm explains the problem and walks us through several approaches to solving it in Ruby. I recommend you read it.
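
Here is a small, made-up example of the kind of code Grimm is writing about; the Coupon and Order classes are my own invention, not taken from his article. An optional collaborator forces an if-test into every method that touches it:

    # An optional collaborator, guarded by a truthiness test.
    class Coupon
      def initialize(rate)
        @rate = rate
      end

      def discount(amount)
        amount * @rate
      end
    end

    class Order
      def initialize(coupon = nil)
        @coupon = coupon
      end

      def total(subtotal)
        if @coupon                                  # guard against nil
          subtotal - @coupon.discount(subtotal)
        else
          subtotal
        end
      end
    end

    puts Order.new(Coupon.new(0.10)).total(100.0)   # => 90.0
    puts Order.new.total(100.0)                     # => 100.0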

Some of my functional programming friends teased me about this article, along the lines of a pithy tweet:

Rubyists have discovered Maybe/Option.

Maybe is a wrapper type in Haskell that allows programmers to indicate that an expected value may not be available; Option is the Scala analog. Wrapper types are the natural way to handle the problem of falsey-ness in functional programming, especially in statically-typed languages. (Clojure, a popular dynamically-typed functional language in the Lisp tradition, doesn't make use of this idea.)
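
For the curious, here is a loose sketch of the idea in Ruby. The names Just and Nothing come from Haskell; everything else is invented for illustration:

    class Just
      def initialize(value)
        @value = value
      end

      def map
        Just.new(yield @value)
      end

      def get_or_else(default)
        @value
      end
    end

    class Nothing
      def map
        self          # ignore the block; there is nothing to transform
      end

      def get_or_else(default)
        default
      end
    end

    # A hypothetical lookup that returns a wrapped value instead of nil.
    def find_user(id)
      id == 1 ? Just.new("alice") : Nothing.new
    end

    puts find_user(1).map { |name| name.upcase }.get_or_else("unknown")   # => ALICE
    puts find_user(2).map { |name| name.upcase }.get_or_else("unknown")   # => unknown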

When I write Haskell code, as rare as that is, I use Maybe as a natural part of typing my programs. As a functional programmer more generally, I see its great value in writing compact code. The alternative in Scheme is to use if expressions to switch on a value, separating (usually null) base cases from inductive cases.

However, when I work as an OO programmer, I don't miss Maybe or even falsey objects more generally. Indeed, I think the best advice in Grimm's article comes in his conclusion:

If we're trying to coerce a homemade object into acting falsey, we may be chasing a vain ideal. With a little thought, it is almost always possible to transform code from typecasing conditionals to duck-typed polymorphic method calls. All we have to do is remember to represent the special cases as objects in their own right, whether that be a Null Object or something else.

My pithy tweet-like conclusion is this: If you are checking for truthiness, you're doing OO wrong.

To do it right, use the Null Object pattern.

Ruby is an OO language, but it gives us great freedom to write code in other styles as well. Many programmers in the Ruby community, mostly those without prior OO experience, miss out on the beauty and compactness one can achieve using standard OO patterns.

I see a lot of articles on the web about the Null Object pattern, but most of them involve extending the class NilClass or its lone instance, nil. That is fine if you are trying to add generic null object behavior to a system, often in service of truthiness and falsey-ness. A better approach in most contexts is to implement an object that behaves like a "nothing" in your application. If you are writing a program that consists of users, create an unassigned user. If you are creating a logging facility and need the ability for a service not to use a log, create a null log. If you are creating an MVC application and need a read-only controller, create a null controller.
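
As a sketch of the null log case -- with class names invented for illustration -- the service writes to its log unconditionally, and the NullLog simply swallows the messages:

    class SimpleLog
      def initialize(io = $stdout)
        @io = io
      end

      def write(message)
        @io.puts(message)
      end
    end

    class NullLog
      def write(message)
        # do nothing, on purpose
      end
    end

    class Service
      def initialize(log = NullLog.new)
        @log = log
      end

      def run
        @log.write("service started")   # no 'if @log' guard anywhere
        # ... the real work ...
      end
    end

    Service.new(SimpleLog.new).run   # logs to standard output
    Service.new.run                  # runs silently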

In some applications, the idea of the null object disappears into another primitive object. When we implement a binary tree, we create an object that has references to two other tree objects. If all tree nodes behave similarly except that some do not have children, then we can create a NullTree object to serve as the values of the instance variables in the actual leaves. If leaf nodes behave differently than interior nodes, then we can create a Leaf object to serve as the values of the interior nodes' instance variables. Leaf subsumes any need for a null object.
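
Here is what the Leaf version might look like in Ruby, using a sum operation invented for illustration. The base case lives in the Leaf object, so no node ever tests whether its children exist:

    class Node
      def initialize(value, left, right)
        @value, @left, @right = value, left, right
      end

      def sum
        @value + @left.sum + @right.sum
      end
    end

    class Leaf
      def sum
        0
      end
    end

    tree = Node.new(1, Node.new(2, Leaf.new, Leaf.new), Leaf.new)
    puts tree.sum   # => 3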

One of the consequences of using the Null Object pattern is the elimination of if statements that switch on the type of object present. Such if statements are a form of ad hoc polymorphism. Programmers using if statements while trying to write OO code should not be surprised that their lives are more difficult than they need be. The problem isn't with OOP; it's with not taking advantage of dynamic polymorphism, one of the essential benefits of OOP.

If you would like to learn more about the Null Object pattern, I suggest you read its two canonical write-ups.

If you are struggling with making the jump to OOP, away from typecasing switch statements and explicit loops over data structures, take up the programming challenge of writing increasingly large programs with no if statements and no for statements. Sometimes, you have to go to extremes before you feel comfortable in the middle.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 12, 2011 10:40 AM

Tools, Software Development, and Teaching

Last week, Bret Victor published a provocative essay on the future of interaction design that reminds us we should be more ambitious in our vision of human-computer interaction. I think it also reminds us that we can and should be more ambitious in our vision of most of our pursuits.

I couldn't help but think of how Victor's particular argument applies to software development. First he defines "tool":

Before we think about how we should interact with our Tools Of The Future, let's consider what a tool is in the first place.

I like this definition: A tool addresses human needs by amplifying human capabilities.

That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.

The key point of the essay is that our hands have much more consequential capabilities than our current interfaces use. They feel. They participate with our brains in numerous tactile assessments of the objects we hold and manipulate: "texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them". Indeed, this tactile sense is more powerful than the touch-and-slide interfaces we have now and, in many ways, is more powerful than even sight. These tactile senses are real, not metaphorical.

As I read the essay, I thought of the software tools we use, from language to text editors to development processes. When I am working on a program, especially a big one, I feel much more than I see. At various times, I experience discomfort, dread, relief, and joy.

Some of my colleagues tell me that these "feelings" are metaphorical, but I don't think so. A big part of my affinity for so-called agile approaches is how these sensations come into play. When I am afraid to change the code, it often means that I need to write more or better unit tests. When I am reluctant to add a new feature, it often means that I need to refactor the code to be more hospitable. When I come across a "code smell", I need to clean up, even if I only have time for a small fix. YAGNI and doing the simplest thing that can possibly work are ways that I feel my way along the path to a more complete program, staying in tune with the code as I go. Pair programming is a social practice that engages more of my mind than programming alone.

Victor closes with some inspiration for inspiration:

In 1968 -- three years before the invention of the microprocessor -- Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels -- an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

We can think bigger about so much of what we do. The challenge I take from Victor's essay is to think about the tools I use to teach: what needs do they fulfill, and how well do they amplify my own capabilities? Just as important are the tools we give our students as they learn: what needs do they fulfill, and how well do they amplify our students' capabilities?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

November 09, 2011 4:29 PM

Sentences of the Day: Sheer Fun

From Averia, The Average Font:

Having found a simple process to use, I was ready to start. And after about a month of part-time slaving away (sheer fun! Better than any computer game) -- in the process of which I learned lots about bezier curves and font metrics -- I had a result.

Programmers love to slave away in their free time on projects that put fires in their bellies, with no slave driver other than their own passion.

The story of Averia is a worthy one to read, even if you are not particularly a font person. It's really about how the seed of an idea can grow as our initial efforts pull us deeper into the beautiful intricacies of a problem. It also reminds us how programs make nice testbeds for our experiments.


Posted by Eugene Wallingford | Permalink | Categories: Computing

November 05, 2011 10:09 AM

Is Computing Too Hard, Too Foreign, or Too Disconnected?

A lot of people are discussing a piece published in the New York Times yesterday, Why Science Majors Change Their Minds (It's Just So Darn Hard). It considers many factors that may be contributing to the phenomenon, such as low grades and insufficient work habits.

Grades are typically much lower in STEM departments, and students aren't used to getting that kind of mark. Ben Deaton argues that this sort of rigor is essential, quoting his undergrad steel design prof: "If you earn an A in this course, I am giving you a license to kill." Still, many students think that a low grade -- even a B! -- is a sign that they are not suited for the major, or for the specialty area. (I've had students drop their specialty in AI after getting a B in the foundations course.)

Most of the students who drop under such circumstances are more than capable of succeeding. Unfortunately, they have not usually developed the disciplined work habits they need to succeed in such challenging majors. It's a lot easier to switch to a different major where their current skills suffice.

I think there are two more important factors at play. On the first, the Times article paraphrases Peter Kilpatrick, Notre Dame's Dean of Engineering:

... it's inevitable that students will be lost. Some new students do not have a good feel for how deeply technical engineering is.

In computer science, our challenge is even bigger: most HS students don't have any clue at all what computer science is. My university is nearing the end of its fall visit days for prospective students, who are in the process of choosing a college and a major. The most common question I am asked is, "What is computer science?", or its cousin, "What do computer scientists do?". This question comes from even the brightest students, ones already considering math or physics. Even more students walk by the CS table with their parents with blank looks on their faces. I'm sure some are thinking, "Why consider a major I have no clue about?"

This issue also plagues students who decide to major in CS and then change their minds, which is the topic of the Times article. Students begin the major not really knowing what CS is, they find out that they don't like it as much as they thought they might, and they change. Given what they know coming into the university, it really is inevitable that a lot of students will start and leave CS before finishing.

On the second factor I think most important, here is the money paragraph from the Times piece:

But as Mr. Moniz (a student exceedingly well prepared to study engineering) sat in his mechanics class in 2009, he realized he had already had enough. "I was trying to memorize equations, and engineering's all about the application, which they really didn't teach too well," he says. "It was just like, 'Do these practice problems, then you're on your own.'" And as he looked ahead at the curriculum, he did not see much relief on the horizon.

I have written many times here about the importance of building instruction around problems, beginning with Problems Are The Thing. Students like to work on problems, especially problems that matter to someone in the world. Taken to the next level, as many engineering schools are trying to do, courses should -- whenever possible -- be built around projects. Projects ground theory and motivate students, who will put in a surprising effort on a project they care about or think matters in the world. Projects are also often the best way to help students understand why they are working so hard to memorize and practice tough material.

In closing, I can take heart that schools like mine are doing a better job retaining majors:

But if you take two students who have the same high school grade-point average and SAT scores, and you put one in a highly selective school like Berkeley and the other in a school with lower average scores like Cal State, that Berkeley student is at least 13 percent less likely than the one at Cal State to finish a STEM degree.

Schools like mine tend to teach less abstractly than our research-focused sister schools. We tend to provide more opportunities early in the curriculum to work on projects and to do real research with professors. I think the other public universities in my state do a good job, but if a student is considering an undergrad STEM major, they will be much better served at my university.

There is one more reason for the better retention rate at the "less selective" schools: pressure. The students at the more selective schools are likely to be more competitive about grades and success than the students at the less selective schools. This creates an environment less conducive to learning for most students. In my department, we try not to "treat the freshman year as a 'sink or swim' experience and accept attrition as inevitable" for reasons of Darwinian competition. As the Times article says, this is both unfair to students and wasteful of resources.

By changing our curricula and focusing more on student learning than on how we want to teach, universities can address the problem of motivation and relevance. But that will leave us with the problem of students not really knowing what CS or engineering are, or just how technical and rigorous they need to be. This is an education problem of another sort, one situated in the broader population and in our HS students. We need to find ways to both share the thrill and help more people see just what the STEM disciplines are and what they entail.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 02, 2011 7:39 AM

Programming for All: will.i.am

While reading a bit about the recent flap over racism in the tech start-up world, I found this passage in a piece by Michael Arrington:

will.i.am was proposing an ambitious new idea to help get inner city youth (mostly minorities) to begin to see superstar entrepreneurs as the new role models, instead of NBA stars. He believes that we can effect real societal change by getting young people to learn how to program, and realize that they can start businesses that will change the world.

Cool. will.i.am is a pop star who has had the ear of a lot of kids over the last few years. I hope they listen to this message from him as much as they do to his music.


Posted by Eugene Wallingford | Permalink | Categories: Computing