August 28, 2008 4:22 PM

The Universe is a Visualization

I'm not a physicist and don't keep up on the latest -- or even much of the not-so-latest -- in string theory. But recently a colleague pointed me toward Leonard Susskind's 2008 book, The Black Hole War. Before tracking down the book, I read a review from the Los Angeles Times. The review introduced me to Susskind's "holographic principle", which holds that

... our universe is a three-dimensional projection of information stored in two dimensions at the boundary of space ...

Suddenly, data and algorithm visualizations seem so much more important than just ways to make pretty graphs. When we tell the world that computation is everywhere, we may be more right than I ever realized. Add in the principle of the conservation of information that lies at the center of the dispute between Susskind's work and Hawking's, and "computing for everyone" takes on a whole new meaning.

Note to self: read this book.


Posted by Eugene Wallingford | Permalink | Categories: Computing

August 27, 2008 12:25 PM

What Grades Mean

My younger daughter entered seventh grade this year, and at the orientation session last week the teachers made a point of saying that timeliness matters. If a student turns in late work, they will be "docked". My mind ended up wandering away as I thought about what this means for her grade. If she ends up with a B, what does that say about her mastery of the material? About her timeliness?

Perhaps I was primed for this daydream by a conversation I had had recently with a colleague who teaches one of our CS1 sections. Traditionally, he has had a very lax policy on late work: get it done, even late, and he would grade it straight up. His thinking was that this would encourage students to stick with assignments and get the practice they need. In past years, this policy has worked all right for him, but in the last year or so he has noticed more students putting off more assignments, with many turning in several or all of their assignments at the end of the semester. Not surprisingly, these students do poorly on the exams for lack of practice and so do poorly in the course overall.

He and I contrasted his policy with mine, which is that late work is not accepted for grading. I'm always willing to look at a student program after the deadline, but it will not count for credit. This is one of the few ways in which I draw a hard line with students, but I find that it encourages students to take assignments seriously and to get practice regularly throughout the semester.

Until I heard my daughter's teachers talk about their policy, I'm not sure I had realized quite so clearly: My late work policy conflates mastery of content with professional work habits. A student can learn everything I want him to learn and more, yet earn a low grade by not submitting assignments on time.

To be honest, that's probably not a problem. In our current system, it is not entirely clear what a grade means anyway. Across universities, across departments at the same university, and even across faculty within the same department, grades can signify very different results. Conflating the evaluations of knowledge and behavior is only one source of variation, and almost certainly not the most significant.

Employers who hire our graduates want employees who know their discipline and who deliver results in a professional manner. Still, I can't help but think what it would be like to offer two grades for a course, one for content and one for all that other stuff: timeliness, teamwork, neatness, etc. Instructor: "Johnny, you get a B for your understanding of operating systems, and a D for behavior, because you don't color within the lines." Employer: "We really need someone with the right professional skills for this position; let's teach him what he needs to know after he gets here."

Increasingly, I am drawn to a competency-based scheme for grading what students know. West and Rostal have been advocating this idea for a while, as part of a larger overhaul of CS education. It takes some work to do right, but the effect on what we expect of our students might be worth it. Unfortunately, within the broader university culture of grades and effort and time-delimited courses carved out of a discipline's body of knowledge, moving in this direction creates logistical costs that may be larger than the pedagogical ones.

In any case, I've been thinking of ways I might change my grading scheme. I'm not likely to change the "no late work" policy, at least not for upper-division courses, and to be honest I find that very few students have a problem getting their work in on time in the face of the policy. (Whether the work is complete is another matter...) Still, I might consider changing how the homework grade figures into the overall grade. Perhaps instead of counting homework as 30% of the grade, I could count it for "up to 30%" and let each student select the percentage. Students who would rather not bother with the falderol of assignment requirements could stake more, or even all, of their grade on exams; students who worry about exams could stick with 30%. Perhaps having that be their choice and not mine would motivate them even more to make a good faith effort at completing the entire assignment on time.

I suppose that my real concern in all this thinking is with my seventh-grader. She, my wife, and I already pay close attention to her work behavior, trying to help her develop good habits. She's already a conscientious student who just needs to learn how to manage her own time. We also pay close attention to her understanding of the content in her classes, but her assignment and test grades are a big part of how we track that progress. As the grades she receives begin to include both elements, we'll want to pay closer attention to her understanding of the material in other ways. I guess I'm in the same position as the employers who hire my students now!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 26, 2008 3:58 PM

The Start of the Semester

I taught my first session of Programming Languages today, so the semester is officially underway. Right now, my mind is a swirl of Scheme, closures, continuations, MapReduce, and lazy evaluation. I've been teaching this course for a dozen years based on functional programming (a style new to our students at this point) and writing interpreters in Scheme. This makes me very comfortable with the material. Over the years I have watched ideas work their way from the niche of PL courses into mainstream languages. The resurgence of scripting languages has been both a result of this change and a trigger. The discussion of true closures in languages such as Ruby and Java is one example.

This evolution is fun to watch, even if it moves haltingly and perhaps slower than I'd prefer. To keep my course current, I need to incorporate some of these changes. This time around, I find myself thinking about what ideas beyond the "edge" of practical languages I should highlight. I'd like for my students to learn about some of the coolest ideas that will be appearing in their professional practice in the near future. For some reason, lazy evaluation seems ripe for deeper consideration. Working it into my class more significantly will be a project for me this semester.
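
Just to give a flavor of what I mean -- this is a sketch of my own, not something from the course materials -- here is how little Scheme it takes to start playing with lazy evaluation, using delay and force:

    ;; An "infinite" stream of natural numbers, built lazily with delay/force.
    (define (naturals-from n)
      (cons n (delay (naturals-from (+ n 1)))))

    ;; Force only as much of the stream as we actually need.
    (define (stream-take s k)
      (if (zero? k)
          '()
          (cons (car s) (stream-take (force (cdr s)) (- k 1)))))

    (stream-take (naturals-from 0) 5)    ; => (0 1 2 3 4)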

Diving headlong into a new semester's teaching makes Jorge Cham's recent cartoon seem all the more true:

How Professors Spend Their Time -- Jorge Cham

For faculty at a "teaching university", the numbers are often skewed even further. Of course, I am an administrator now, so I teach but one course a semester, not three. Yet the feeling is the same, and the desire to spend more time on real CS -- teaching and research -- is just as strong. Maybe I can add a few hours to each day?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 22, 2008 4:39 PM

Unexpected Computer Science Reference

Today I'm listening to the August 21 episode of The Bob and Tom Show, a syndicated morning radio show. One of the guests on this episode is comedian Dwayne Perkins. There I am, working on my previous entry, when I hear Perkins say "I was a computer science student in college...". Surprise! Of course, the reference was part of a skit on his dating life and ended up mentioning a preponderance of Asian-American males in his classes, but still.

I don't know if CS plays any role in his comedy more generally, but he at least is tech-savvy enough to have a blog with both discursive entries and short videos.

I'm guessing that it's a good thing to have a performer say "computer science" every so often as part of his work. Whatever he says about CS, people aren't likely to remember much of the content, but they might remember hearing the words "computer science". Something that puts our discipline in the mainstream may well help demythologize it.

Coincidentally, I heard this episode on a day when we did final registration for fall courses, and every introductory programming course we offer is full. This includes our intro course for majors, which is a good 50% larger than last year, and non-majors courses in VB and C++. Perhaps the tide has turned. Or perhaps some of the efforts we have made in the last three years are beginning to pay off.


Posted by Eugene Wallingford | Permalink | Categories: Computing

August 22, 2008 4:00 PM

Lawyers Read My Blog

Not really. But they do protect their marks.

A couple of years ago, I received a polite request from the lawyers of a well-known retired cartoonist, asking that I not use one of his cartoons. Today, I received a polite request from the lawyers of a well-known business author and speaker, asking:

Please make sure to include [a statement acknowledging our registered trademark] on each page that our trademarked term appears. Additionally, we respectfully request that every time you use our mark in the body of your work of commentary that you capitalize the first letter of each word in the mark and directly follow the mark with the ® symbol so that it reads as "... ®"

Google does change the landscape for many, many things. This is a good thing; it reduces friction in the market and the law.

One result for me is that I now know that the ® symbol is produced by HTML entity number 174 (&#174;) or entity name reg (&reg;). I've used © occasionally, but rarely ®.

That said, I'm not too keen on having to capitalize two common words every time I use them in an article. I think I either need to write those articles without using the trademarked phrase, or simply stop quoting books that are likely to trademark simple phrases. The latter rules out most thin business trade books, especially on management and marketing. That's not much of a loss, I suppose.


Posted by Eugene Wallingford | Permalink | Categories: General

August 20, 2008 2:19 PM

Stalking the Wily Misconception

Recently, someone sent me a link to Clifford Stoll's TED talk from February 2006, and yesterday I finally watched. Actually, I listened more than I watched, for two reasons. First, because I was multitasking in several other windows, as I always am at the machine. Second, because Stoll's manic style of jumping around the stage isn't much to my liking.

As a university professor and a parent, I enjoyed the talk for its message about science and education. It's worth listening to simply for the epigram he gives in the first minute or so, about science, engineering, and technology, and for the quote he recites to close the talk. (Academic buildings have some of the coolest quotes engraved right on their walls.) But the real meat of the talk doesn't start until midway through.

Prodded by schoolteachers to whom he was talking about science in the schools, Stoll decided that he should put his money where his mouth is: he became a science teacher. Not just giving a guest lecture at a high school, but teaching a real junior-high science class four days a week. He doesn't do the "turn to Chapter 7 and do all the odd problems" kind of teaching either, but real physics. For example, his students measure the speed of light. They may be off by 25%, but they measured the speed of light, using experiments they helped design and real tools. This isn't the baking soda volcano, folks. Good stuff. And I'll bet that junior-high kids love his style; he's much better suited for that audience than I!

One remark irked me, even if he didn't mean it the way I heard it. At about 1:38, he makes a short riff on his belief that computers don't belong in schools. "No! Keep them out of schools", he says.

In one sense, he is right. Educators, school administrators, and school boards have made "integrating technology" so big a deal that computers are put into classrooms for their own sake. They become devices for delivering lame games and ineffective simulations. We teach Apple Keynote, and students think they have learned "computers" -- and so do most teachers and parents. When we consider what "computers in schools" means to most people, we probably should keep kids away from them, or at least cut back their use.

At first, I thought I was irked at Stoll for saying this, but now I realize that I should be irked at my profession for not having done a better job both educating everyone about what computers really mean for education and producing the tools that capitalize on this opportunity.

Once again I am shamed by Alan Kay's vision. The teachers working with Alan have their students do real experiments, too, such as measuring the speed of gravity. Then they use computers to build executable models that help students formalize the mathematics that describes the phenomenon. Programming is one of their tools.
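
To make that concrete, here is the sort of tiny executable model I have in mind -- my own sketch, not one of the models Alan's teachers actually use -- for a ball dropped from rest:

    ;; A toy model of a falling ball: step time forward in small increments,
    ;; updating velocity and position, until the ball reaches the ground.
    (define g 9.8)     ; m/s^2
    (define dt 0.01)   ; seconds per step

    (define (time-to-fall t v y)
      (if (<= y 0)
          t
          (time-to-fall (+ t dt) (+ v (* g dt)) (- y (* v dt)))))

    (time-to-fall 0 0 10)    ; dropped from 10 meters => about 1.4 seconds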

Imagine saying that we should keep pencils and paper out of our schools, returning to the days of chalk slates. People would laugh, scoff, and revolt. Saying we should keep computers out of schools should elicit the same kind of response. And not because kids wouldn't have access to e-mail, the web, and GarageBand.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 19, 2008 1:21 PM

A Lost Summer

For the last five years, mid-August has meant more than getting ready for fall semester. It has meant 50+ mile weeks. It has meant once or twice weekly track workouts. It has meant hours each week running, before dawn in cool, moist air; in newly-risen sunlight; in the rain.

A month ago I was still figuring things out about my running, hoping to get well all the while. Unfortunately, I haven't gotten well. At times I have gotten better, but never well, and every couple of weeks my progress is punctuated by a return of the same symptoms that have dogged me since May 2. I've been running since early June, because I wasn't getting better anyway. The last month or two, I have managed between 24 and 29 miles each week, with one 31.5-mile week that left me tired for a week afterwards. Many people think that 25-30 mile weeks are awesome, but for me they aren't, and all the while I'm looking to get better.

My doctor is baffled. He has run every test he can imagine, and all he and his nurses can say is, man, you are healthy. That's good news! ... except for the part of not being well.

We'll keep looking, and I'll keep plodding along. But I really miss the summer of running I didn't have. August isn't quite the same.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Running

August 18, 2008 5:27 PM

Inquisitive Computing

I've written here a lot in the last year or so about ideas in the vein of "programming for everyone", so I feel I should point you toward Calculemus!, Brian Hayes's latest column in The American Scientist. This short paper is something of a hodgepodge in celebration of the 25th anniversary of Hayes writing monthly articles on the joys of computation.

First, he talks about the history of his columns, which explore the realm of inquisitive computing -- writing programs as a way to explore ideas that interest him and his readers. This isn't software development, with "requirements" and "process" and "lifecycle management". This is asking a cool question (say, "Is there any pattern in the sequence of numbers that are perfect medians?") and writing some code in search of answers, and more questions. This is exactly what I have in mind when I say that programming will be an essential intellectual skill of the future. I don't imagine that most people will ask themselves such mathematical questions (though they might), but they might be inquisitive at work or at home in their own areas of interest.

They may be sitting on a plane talking with a fellow passenger and have a question about football overtimes. I lived that story once many years ago, talking about sudden-death overtime in the NFL. My seatmate was a pilot who liked to follow football, and after we discussed some of the day's scores he asked out loud, "I wonder how frequently the team that wins the coin toss wins the game?" He figured that was the end of the question, because how could we answer it? I whipped out my laptop, fired up Dr. Scheme, and built a little model. We experimented with several parameters, including a percentage we had read for how often the first team scores on its opening drive, until we had a good sense of how much of an advantage winning the coin toss is. He was amazed that we could do what we did. I could only say, it would be great if more people knew this was possible, and learned the little bit of programming they need to do it. I'm not sure he believed that.
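
The model we built that day is long gone, but a sketch along these lines -- with made-up numbers, purely for illustration -- captures the spirit of it:

    ;; A crude model of sudden-death overtime. The team that wins the toss
    ;; scores on its opening drive with probability p-first; otherwise we
    ;; treat the rest of the game as a coin flip. (Runs in DrScheme/Racket,
    ;; where (random) returns a real number in [0, 1).)
    (define (toss-winner-wins? p-first)
      (cond ((< (random) p-first) #t)     ; scores on the opening drive
            ((< (random) 0.5)     #t)     ; wins later anyway
            (else #f)))

    (define (simulate n p-first)
      (let loop ((i 0) (wins 0))
        (if (= i n)
            (exact->inexact (/ wins n))
            (loop (+ i 1) (if (toss-winner-wins? p-first) (+ wins 1) wins)))))

    (simulate 100000 0.3)    ; => roughly 0.65 with these made-up numbers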

Hayes then gives three examples of the kinds of problem he likes to explore and how programs can help. I'm tempted to elaborate on his examples, but that would make this post as long as the paper. Just read it. I can say that all three were fun for me to play with, and two of them admit almost trivial implementations for getting started in the search for answers. (I was surprised to learn that what he calls abc-hits has implications for number theory.)

Finally, Hayes closes with a discussion of what sort of programming environments and languages we need to support inquisitive programming by the masses. He laments the passing of Dartmouth BASIC into the bowels of structured programming, object-oriented programming, and, dare I add, VB.NET -- from a language for everyone to a language for a small group of professionals writing serious programs in a serious setting. (He also laments that "GUI-coddled computer users have forgotten how to install an executable in /usr/local/bin and add it to their $PATH", so he's not completely consistent in aiming at the computing non-professional!)

He hopes to be language-agnostic, though he confesses to being a Lisp weenie and suggests that Python may be the language best positioned to fill the needs of inquisitive programmers, with its avid user community in the sciences. He is probably right, as I have noted Python's ascendancy in CS ed and the physics community before. Most encounters I have with Python leave me thinking "Boy, I like Ruby", so I would love to see Ruby grow into this area, too. For that to happen, we need more books like this one to introduce real people to Ruby in their own contexts. I'm looking forward to seeing an exam copy of this title to see whether it or one like it can be useful among students and professionals working in the life sciences.

I've long enjoyed Hayes's column and still follow it on-line after having dropped my subscription to The American Scientist a few years ago. (It's a fine publication, but I have only so many hours in my day to read!) You can find many of his articles on-line now at bit-player. If you can make time to explore a bit, I encourage you to look there...


Posted by Eugene Wallingford | Permalink | Categories: Computing

August 15, 2008 2:35 PM

Less, Sooner

Fall semester is just around the corner. Students will begin to arrive on campus next week, and classes start a week from Monday. I haven't been able to spend much time on my class yet and am looking forward to next week, when I can.

What I have been doing is clearing a backlog of to-dos from the summer and handling standing tasks that come with the start of a new semester and especially a new academic year. This means managing several different to-do lists, crossing priorities, and generally trying to get things done.

As I look at this mound of things to do I can't help being reminded of something Jeff Patton blogged a month or so ago: two secrets of success in software development, courtesy of agile methods pioneer Jim Highsmith: start sooner, and do less.

Time ain't some magical quantity that I can conjure out of the air. It is finite, fixed, and flowing relentlessly by. If I can't seem to get done on time, I need to start sooner. If I can't seem to get it all done, I need to do less. Nifty procedures and good tools can help only so much.

I need to keep this in mind every day of the year.

Oh, and to you students out there: You may not be able to do less work in my class, but you can start sooner. You may have said so yourself at the end of last semester. Heck, you may even want to do more, like read the book...


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 12, 2008 4:24 PM

TDD and GTD: Instances of a Pattern

I once wrote that extreme programming is a self-help system. This generalizes pretty well to other software methodologies, too. As we step away from developing software to personal hygiene, there is an entire ecosystem around the notion of life hacks, self-help for managing information and combatting data overload. Programmers and techies are active players in the lifehacking community because, well, we love to make tools to solve our problems and we love self-help systems. In the end, sometimes, we spend more time making tools and playing with them than actually solving our problems.

One of the popular lifehacking systems among techies is David Allen's Getting Things Done, or GTD. I've never read the book or adopted the system, but I've read about it and borrowed some of its practices in trying to treat my own case of information overload. The practices I have borrowed feel a lot like XP and especially test-driven development. Maybe that's why they appeal to me.

Consider this post on the basic concepts of GTD. Here is why GTD makes me think of TDD:

  1. think in terms of outcomes: write a test
  2. take the next action: write the simplest code that passes
  3. review your circumstances regularly: refactor

This is not a perfect match. In GTD, a goal from Step 1 may require many next actions, executed in sequence. In TDD, we decompose such big goals into smaller steps so that we can define a very clear next action to perform. And in GTD, Step 3 isn't really refactoring of a system. It's more a global check of where you are and how your lists of projects and next actions need to be revised or pruned. What resonates, though, is its discipline of regular review of where you are headed and how well your current 'design' can get you there.
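
To make the TDD side of the analogy concrete, here is a minimal sketch of one cycle in Scheme; the function and its test are hypothetical examples of my own:

    ;; Step 1: think in terms of outcomes -- write the test first.
    (define (check actual expected)
      (if (equal? actual expected) 'pass (list 'fail actual 'expected expected)))

    ;; Step 2: take the next action -- the simplest definition that could pass.
    (define (sum-of-squares lst)
      (apply + (map (lambda (x) (* x x)) lst)))

    (check (sum-of-squares '(1 2 3)) 14)    ; => pass

    ;; Step 3: review regularly -- when a new test exposes a weakness,
    ;; refactor before moving on.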

No metaphor is a perfect match, yet the vibe feels undeniably similar to me. Each has a mindset of short-term accountability through tests, small steps to achieve simple, clear goals, and regular review and clean-up of the system. The lifehackers who play with GTD even like to build tools to automate as much as they can, so that they stay in the flow of getting things done as much as possible and trust their tools to help them manage performance and progress.

Successful patterns recur. I shouldn't be surprised to find these similarities.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 11, 2008 2:38 PM

Side Effects and Types in Refactoring

Greg Wilson relates an observation by Michael Feathers: "refactoring pure functional code is a lot easier than refactoring imperative code". In one sense, this ought not to surprise us. When we eliminate side effects from our code, dependencies among functions flow through parameters, which makes individual functions more predictably independent of one another. Without side effects, we don't have sequences of statements, which encourages smaller functions and makes them easier to understand.

(A function call does involve sequencing, because arguments are evaluated before the function is invoked. But this encourages small functions, too: Deeply-nested expressions can be quite hard to read.)
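
A tiny, hypothetical example of my own shows why. Extracting a helper from a pure Scheme function cannot change its behavior, because no caller depends on hidden state or on the order in which effects happen:

    ;; Before: one function with a nested computation.
    (define (report-line scores)
      (string-append "average: "
                     (number->string (/ (apply + scores) (length scores)))))

    ;; After: extract the nested expression into a named helper.
    ;; (In practice this definition would replace the one above.)
    (define (average scores)
      (/ (apply + scores) (length scores)))

    (define (report-line-v2 scores)
      (string-append "average: " (number->string (average scores))))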

There is another force counteracting this one, though. Feathers has been playing a lot with Haskell, which is statically typed, with type inference. Many functional languages are dynamically typed, and dynamic typing makes it harder to refactor functional programs -- at least to guarantee that a particular refactoring does not change the program's behavior.

I'm a Scheme programmer when I use a functional language, so I encounter the conflict between these two forces. My suspicion from personal experience is that functional programmers need less support, or at least different kinds of support, when it comes to refactoring tools. The first key step is to identify refactorings from FP practice. From there, we can find ways to automate support for these refactorings. This is a longstanding interest of mine. One downside to my current position is a lack of time to devote to this research project...


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 08, 2008 4:11 PM

SIGCSE Day 2 -- This and That

[A transcript of the SIGCSE 2008 conference: Table of Contents]

(Okay, so I am over four months behind posting my last couple of entries from SIGCSE. Two things I've read in the last week or so jolted my memory about one of these items. I'll risk that they are no longer of much interest and try to finish off my SIGCSE reports before classes start.)

A Discipline, Not Just a Job

During his talk, I think, Owen Astrachan said:

Don't talk about where the jobs are. We do not need to kowtow to the market. CS is ideas, a discipline.

We do, of course, need to keep in mind that the first motivation for many of our students is to get a job. But Owen is right. To the extent that we "sell" anything, let's sell that CS is a beautiful and powerful set of ideas. We can broaden the minds of our job-seeking students -- and also attract thinking students who are looking for powerful ideas.

When Good Students Are Too Good

Rich Pattis tossed out an apparently old saw I had never heard: Don't give your spec to the best programmer in the room. She will make it work, even if the spec isn't what you want and doesn't make sense. Give it to a mediocre programmer. If the spec is bad, he will fail and come back with questions.

This applies to homework assignments, too. Good students can make anything work, and most will. That good students solved your problem is not evidence of a well-written spec.

Context Complicates

I've talked a lot here about giving students problems in context, whether in the context of large projects or in the context of "real" problems. As I was listening to Marissa Mayer's talk and lunchtable conversation, I was reminded that context complicates matters, for both teacher and students. We have to be careful when designing instruction to be sure that students are able to attend to what we want them to learn, and not be constantly distracted by details in the backstory. Otherwise, a task being in context hurts more than it helps.

The solution: Start with problems in context, then simplify to a model that captures the essence of the context and eliminates unnecessary complexity and distraction. Joe Bergin has probably already written a pedagogical pattern for this, but I don't see it after a quick glance at some of his papers. I've heard teachers like Owen, Nick Parlante, and Julie Zelensky talk about this problem in a variety of settings, and they have some neat approaches to solving it.

Overshooting Your Mark in the Classroom

It is easy for teachers to dream bigger than they can deliver when they lose touch with the reality of teaching a course. I see this all the time when people talk about first-year CS courses -- including myself. In my piece on the Nifty Assignments session, I expressed disappointment that one of the assignments had a write-up of four pages and suggested that I might be able to get away with giving students only the motivating story and a five-line assignment statement. Right. It is more likely that the assignment's creator knows what he is doing from the experience of actually using the assignment in class. From the easy chairs of the Oregon Convention Center, everything looks easier. (I call this the Jeopardy! Effect.)

The risk of overshooting is even bigger when the instructor has not been in the trenches, ever or even for a long while. Mark Guzdial recently told the story of Richard Feynman's freshman physics course, which is a classic example of this phenomenon. Feynman wrote a great set of lectures, but they don't really work as a freshman text, except perhaps with the most elite students.

I recently ran across a link to a new CS1 textbook for C++ straight from Bjarne Stroustrup himself. Stroustrup has moved from industry to academia and has had the opportunity to develop a new course for freshmen. "We need to improve the education of our software developers," he says. When one of my more acerbic colleagues saw this, his response was sharp and fast: "Gee, that quick! Seems those of us in 'academia' don't catch on as well as the newbies."

For all I know, Stroustrup's text will be just what every school that wants to teach C++ in CS1 needs, but I am also skeptical. A lot of smart guys with extensive teaching experience -- several of them my friends -- have been working on this problem for a long time, and it's hard. I look forward to seeing a copy of the book and to hearing how it works for the early adopters.

Joe, is there a pedagogical pattern called "In the Trenches"? If not, there should be. Let's write it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 07, 2008 2:57 PM

Design Ideas Lying in Wait

Ralph Johnson pointed me to a design idea for very large databases called a shard. This is a neat little essay for several reasons. First, its author, Todd Hoff, explains an architecture for massive, distributed databases that has grown up in support of several well-known, high-performance web sites, including Flickr, Google, and LiveJournal. Second, Hoff also wrote articles that describe the architectures of Flickr, Google, and LiveJournal. Third, all four pages point to external articles that are the source of the information summarized. Collectively, these pages make a wonderful text on building scalable data-based web systems.

I've posted this entry in my Patterns category because this recurring architecture has all the hallmarks of a design pattern. It even has a great name and satisfies the Rule of Three, something I've mentioned before -- and what a fine three it is. Each implementation uses the idea of a shard slightly differently, in keeping with the particular forces at play in the three companies' systems.
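
Stripped of all the engineering detail in Hoff's write-ups, the core idea fits in a few lines. This is just my own toy sketch of a shard lookup, with made-up server names:

    ;; Each user's data lives on exactly one of a fixed set of database
    ;; servers, chosen by a simple, stable function of the user's key.
    (define shards '("db1.example.com" "db2.example.com" "db3.example.com"))

    (define (shard-for user-id)
      (list-ref shards (modulo user-id (length shards))))

    (shard-for 1042)    ; => "db2.example.com", and always the same server

Every query for that user touches only the one server holding the data, which is how the load spreads across many machines.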

Buried near the bullet list on the Google page was an item worth repeating:

Don't ignore the Academy. Academia has a lot of good ideas that don't get translated into production environments. Most of what Google has done has prior art, just not prior large scale deployment.

This advice is a bit different from some advice I once shared for entrepreneurs, looking for Unix commands that haven't been implemented on the web yet, but the spirit is similar. Sometimes I hear envious people remark that Google hasn't done anything special; they just used a bunch of ideas others created to build a big system. Now, I don't think that is strictly true, but I do think that many of the ideas they used existed before in the database and networking worlds. And to the extent that is true, good for them! They paid attention in school, read beyond their assignments, and found some cool ideas that they could try in practice. Isn't that the right thing to do?

In any case, I recommend this article and encourage others to write more like it.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development