I remember learning in courses on simulation, operating systems, and networking that, for a given period, the number of events such as cars arriving at an intersection or processes arriving at a scheduler is often best modeled using the Poisson distribution. Mostly, I recall being surprised that these events often occur in clumps, rather than uniformly distributed over a larger time period. Sometimes, it feels like ideas work this way... When I encounter an idea once during the day, I often seem to bump into it again and again. I'm sure that it's just that my mind is sensitized to the idea and recognizes -- or projects -- it more easily, much as magic books affect us. In any case, yesterday was such a day.
At 3:30 PM I attended a department seminar on bioinformatics by a colleague. I asked him what sort of questions he and his students could ask about bacteriophages in a data-rich environment that they could not ask before. He said that they could now quantify the notions of similarity and difference between phages in ways inaccessible to them before and write programs to apply their metrics. Eventually, he talked about how digital processing of large data sets imposed a more disciplined approach to problems, in order to battle complexity. Now, they convert big questions into a sequence of smaller, well-defined steps that can be tackled in a clear way. For him as a biologist, this was a surprising and wonderful phenomenon.
I stayed in the same room for a 5:00 PM class taught by one of our adjuncts, whose teaching I was to evaluate. He was teaching a "skills and concepts" course for non-majors, and the day's topic was databases. They talked about the similarities and differences between spreadsheets and databases, especially how the structural integrity of a database makes it possible to formulate concise queries that can find useful answers. He demonstrated some of the ideas using an Access database, first using a wizard to query the system and then looking at a raw SQL query. For many queries, he told them, the wizard does all you need. But there will be times when you want to ask a question the wizard doesn't support, and then the ability to write your own select statements in SQL becomes a valuable skill.
After class, I caught up on some paperwork in my office until 7:00 PM, when I attended a panel presentation entitled "Visual Art, the Big Screen, and Orchestral Performance". (Here is a poster for the talk, in PDF.) Three local artists -- illustrator Gary Kelley, conductor Jason Weinberger, and videographer Scott Smith -- shared parts of their recent multimedia presentation of Gustav Holst's The Planets and discussed the creative forces that drove them individually and collectively to produce the work. I learned that multimedia presentations of The Planets are relatively common but that this show differed in significant ways from the usual, not the least of which was Kelley's creation of thirty new paintings and monotypes for the show.
(You may recall Smith's name from an earlier post... He had a small acting role in the play I did last winter!)
The panel ended with a discussion of how changes in technology were fundamentally changing how artists' work is created and distributed. Not long ago, Hollywood and other media centers produced the entertainment that we all consumed, but now it is possible for folks in the middle of nowhere -- Iowa! -- to create and export their work to a global audience. This is, of course, nothing new in the age of the Internet and YouTube, but it is still cause for marvel to artists who recently lived and worked in a different world.
One of the central themes of the panel was the level of trust and surrender that this kind of presentation required, especially of the symphony members and conductor Weinberger. The timing of the video required the orchestra to hit certain marks in the music on the dot, and Weinberger, who usually controls tempo and shapes the sound of the performance, had to give up control to the artwork produced by Kelley and Smith. The visual artists expressed a willingness to turn the tables and find a way to cede control to Weinberger in a future collaboration.
This set me to thinking... The reason that the musicians had to surrender control was essentially technological. Once a video is produced, it is set. Performance of the music was the more malleable medium, as the players could speed up or slow down in real-time to stay in sync. Ideally, of course, they would play at a steady, predefined pace, but that is quite difficult. But these days, "video" is much more malleable because it is digital. Why not let the musicians play however they and the conductor see fit, and adjust the pace of the video playback to keep in sync with the music? I don't know if such a digital tool exists already, but if not, what fun it would be to write! Then in performance, the videographer could "play" the video by reacting in real-time to the music.
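The core of such a tool could be quite simple. Here is a minimal sketch of the idea in Ruby -- everything in it is hypothetical (the cue representation, the function name, the assumption that one rate suffices between cues), since no such tool is described in the source:

```ruby
# Sketch of tempo-following video playback. Given the timestamps of
# musical cues as authored into the video and the times at which the
# orchestra actually reaches them in performance, estimate the playback
# rate needed to land the next cue together with the music.

def playback_rate(video_cues, live_times, video_pos, now)
  # Find the next cue we have not yet reached in the video.
  i = video_cues.index { |t| t > video_pos }
  return 1.0 if i.nil? || live_times[i].nil?

  video_remaining = video_cues[i] - video_pos   # seconds of video left to play
  live_remaining  = live_times[i] - now         # seconds until the cue arrives
  return 1.0 if live_remaining <= 0

  video_remaining / live_remaining              # > 1 speeds up, < 1 slows down
end

# If the orchestra is running slow -- a cue authored at 10.0s of video
# won't arrive until 12.0s of performance time -- slow the video down.
rate = playback_rate([10.0, 20.0], [12.0, 24.0], 8.0, 8.0)   # => 0.5
```

In a real implementation the live cue times would arrive from a human operator or a beat tracker, and the rate would be smoothed to avoid visible jumps; this sketch only shows the arithmetic at its heart.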
All three of these stories had me thinking the same thing: "Now there's programming." I know well the feeling my biologist colleague expressed, because both of his answers come down to programming as discipline and medium. When our adjunct instructor told his non-CS students from all over campus about the power of knowing a little SQL, I smiled at the thought of non-programmers writing programs, albeit small ones, to scratch their own itches. Likewise, the ability to imagine how the orchestra might turn the tables on the visual artists in their multimedia collaborations, and then implement the vision in a working tool, is nothing more or less than programming.
The Small Doses pattern I wrote up in my previous entry was triggered almost exclusively by the story I heard from Carl Page. The trigger lives on in the text that runs from "Often times, the value of Small Doses..." to the end, and in the paragraph beginning "There is value in distributing...". The story was light and humorous, just the sort of story that will stick with a person for twenty or more years.
As I finally wrote the pattern, it grew. That happens all the time when I write. It grew both in size and in seriousness. At first I resisted getting too serious, but increasingly I realized that the more serious kernel of truth needed telling. So I gave it a shot.
This change in tone and scope means that the pattern you read is not yet ready for prime time. Rather than wait until it was ready, though, I decided to let the pattern be a self-illustration. I have put it out now, in its rough form. It is rough both in completeness and in quality. Perhaps my readers will help me improve it. Perhaps I will have time and inspiration soon to tackle the next version.
In my fantasies, I have time to write more patterns in a Graduate Student pattern language (code name: Chrysalis), even a complete language, and cross-reference it with other pattern languages such as XP. Fantasies are what they are.
Update: Added a known use contributed by Joe Bergin. -- 05/19/08.
Also Known As: Frequent Releases
From Pattern Language: Graduate Student
You are writing a large document that one or more other people must read and provide feedback on before it is distributed externally.
The archetype is a master's thesis or doctoral dissertation, which must be approved by a faculty committee.
You want to give your reviewers the best document possible. You want them to give you feedback and feel good about approving the document for external distribution.
You want to publish high-quality work. You would also like to have your reviewers see only your best work, for a variety of reasons. First, you respect them and are thankful for their willingness to help you. Second, they often have the ability to help you further in the future, in the form of jobs or recommendations. Good work will impress your reviewers more than weaker work. A more complete and mature document is more likely to resemble the final result than a rougher, earlier version.
In order to produce a document of the highest quality, you need time -- time to research, write, and revise. This delays your opportunity to distribute it to your reviewers. It also delays their opportunity to give you feedback.
Your reviewers are busy. They have their own work to do and are often volunteering their time to serve you.
Big tasks require a lot of time. If a task takes a lot of time to do, some people face a psychological barrier to starting the task.
A big task decomposed into several smaller tasks may take as much time as the big task, or more, but each smaller task takes less time. It is often easier to find time in small chunks, which makes it easier to start on the task in the first place.
The sooner your reviewers are able to read parts of the document, the sooner they will be able to give you feedback. This feedback helps you to improve the document, both in the large (topic, organization) and the small (examples, voice, illustrations, and so on).
Distribute your document to reviewers periodically over a relatively drawn-out period. Early versions can be complete parts of the document, or rough versions of the entire document.
In the archetypal thesis, you might give the reviewers one chapter per week, or you might give them a whole thesis whose parts are in various stages of completeness.
There is certainly a trade-off between the quality of a document and the timeliness of delivery. Don't worry; this is just a draft. You are always free to improve and extend your work. Keep in mind that there is also a trade-off between the quality of a document and the amount of useful feedback you are able to incorporate.
There is value in distributing even a very rough or incomplete document at regular intervals. If reviewers read the relatively weak version and make suggestions, they will feel valuable. If they don't read it, they won't know or mind that you have made changes in later versions. Furthermore, they may feel wise for not having wasted their time on the earlier draft!
With the widespread availability of networks, we can give our reviewers real-time access to an evolving document in the form of an on-line repository. In such a case, Small Doses may take the form of comments recorded as each change is committed to the repository. It is often better for your reviewers if you give them periodic descriptions of changes made to the document, so that they don't have to wade through minutiae and can focus on discrete meaningful jumps in the document.
I have seen Small Doses work effectively in a variety of academic contexts, from graduate students writing theses to instructors writing lecture notes and textbooks for students. I've seen it work for master's students and doctoral students, for text and code and web sites. Joe Bergin says that this pattern is an essential feature of the Doctor of Professional Studies program at Pace University. Joe has documented patterns for succeeding in the DPS program. (If you know of particularly salient examples of Small Doses, I'd love to hear them.)
Often times, the value of Small Doses is demonstrated best in the results of applying its antipattern, Avalanche. Suppose you are nearing a landmark date, say, the deadline for graduation or the end of spring semester, when faculty will scatter for the summer. You dump a 50-, 70-, or 100+-page document on your thesis committee. In the busy time before the magic date, committee members have difficulty finding time to read the whole document. Naturally, they put off reading it. Soon they fall way behind and have a hard time meeting the deadline -- and they blame you for the predicament, for unrealistic expectations. Of course, had you given the committee a chapter at a time for 5 or 6 weeks leading up to the magic date, some committee members would still have fallen behind reading it, because they are distracted with their own work. But then you could blame them for not getting done!
Related Patterns: Extreme Programming and other agile software methodologies encourage Short Iterations and Frequent Releases. Such frequent releases are the Small Doses of the software development world. They enable more continuous feedback and allow the programmers to improve the quality of the code based on what they learn. The Avalanche antipattern is the source of many woes in the world of software, in large part due to the lack of feedback it affords from users and clients.
I learned the Small Doses pattern from Carl Page, my Artificial Intelligence II professor at Michigan State University in 1987. Professor Page was a bright guy and very good computer scientist, with an often dark sense of humor. He mentored his students and advisees with sardonic stories like Small Doses. He was also father to another famous Page, Larry.
They can say:
"You! Off the bus!"
The key task, then, becomes getting people into the right seats. That can be difficult when folks are happy with their current seats and have an almost inalienable right to stay on the bus. That's part of the challenge...
But when he started to play the piano, it could have been 1998 in the arena. Or 1988. Or 1978. The music flowing from his hands and his dancing feet filled me. Throughout the night I was 19 again, then 14, 10, and 25. I was lying on my parents' living room floor; sitting in the hand-me-down recliner that filled my college dorm room; dancing in Market Square Arena with an old girlfriend. I was a rebellious teen, wistful adult, and mesmerized child.
There are moments when time seems more illusion than reality. Last night I felt like Billy Pilgrim, living two-plus hours unstuck in time.
Oh, and the music. There are not many artists who can, in the course of an evening, give you so many different kinds of music. From the pounding rock of "You May Be Right" to the gentle, plaintive "She's Always A Woman", and everything between. The Latin rhythms of "Don't Ask Me Why" extended with an intro of Beethoven's "Ode to Joy", and a "Root Beer Rag" worthy of Joplin.
Last night, my daughters aged 15 and 11 attended the concert with me. Music lives on, and time folds back on itself yet again.
Of the 20 greatest engineering achievements of the 20th century, two lie within computing: computers (#8) and the Internet (#13). Those are broad categories defined around general-purpose tools that have affected the lives of almost every person and the practice of almost every job.
In 2004, human beings harvested 10 quintillion grains of rice. In 2004, human beings fabricated 10 quintillion transistors. 10 quintillion is a big number: 10,000,000,000,000,000,000.
Ed Lazowska of the University of Washington opened his Saturday luncheon talk at SIGCSE with these facts, as a way to illustrate the broad effect that our discipline has had on the world and the magnitude of the discipline today. He followed by putting the computational power available today into historical context. The amount of computational power that was available in the mainstream in the 1950s is roughly equivalent to an electronic greeting card today. Jump forward to the Apollo mission to the moon, and that computational power is now available in a Furby. Lazowska didn't give sources for these claims or data to substantiate them, but they sound reasonable to me to within an order of magnitude.
The title of Lazowska's talk was "Computer Science: Past, Present, and Future", and it was intended to send conference attendees home energized about our discipline. He energized folks with cool facts about computer science's growth and effect. Then he looked to the future, at some of the challenges and some of the steps being taken to address them.
One of the active steps being taken within computing is the Computing Community Consortium, a joint venture of the National Science Foundation and the Computing Research Association, which "supports the computing research community in creating compelling research visions and the mechanisms to realize these visions". According to Lazowska, the CCC hopes to inspire "audacious and inspiring research" while at the same time articulating visions of the discipline to the rest of the world. Lazowska is one of the leaders of the group. The group's twin goals are both worth the attention of our discipline's biggest thinkers.
As I listened to Lazowska describe the CCC's initiatives, I was reminded of our discipline's revolutionary effect on other disciplines and industries. Lazowska reported that two, or perhaps two and a half, of the 20th century's greatest engineering achievements were computing, but take a look at the rest of the list. Over the last half century, computers and the Internet have played an increasingly important role in many of these greatest achievements, from embedded computers in automobiles, airplanes, and spacecraft to the software that has opened new horizons in radio and television, telephones, health technologies, and most of the top 20.
Now take a look at the Grand Challenges for Engineering in the 21st Century, which Lazowska pointed us to. Many of these challenges depend crucially upon our discipline. Here are seven:
But imagine doing any of the other seven without involving computing in an intimate way!
I've written a few times about how science has come to be a computational endeavor. Lazowska gave an example of what the next generation of science looks like: databases. A database makes it possible to answer questions that you think of next year, not just the ones you thought of five years ago, when you wrote your proposal to NSF and when you later defined the format of your flat text file. He illustrated his idea with examples of projects at the Ocean Observatories Initiative and the Quality of Life Technology Center. He also mentioned the idea of prosthetics as the "future of interfaces", which is a natural research and entrepreneurial opportunity for CS students. You may recall having read about this entrepreneurial connection in this blog way back!
For his part, Lazowska suggested advancing personalized learning as an area in which computing could have an immeasurable effect. Adaptive one-on-one tutoring is something that could reach an enormous unserved population and help develop the human capital that could revolutionize the world. This is actually the area into which I was evolving back when I was doing AI research, intelligent tutoring systems. I remain immensely interested in the area and what it could mean for the world. Many folks are uncomfortable with the idea of "computers teaching our children", but I think it's simply a part of the evolution of communication that computer science embodies. The book is a means of educating, communicating, and sharing information, but it is a one-track medium. The computer is a multiple-track medium, a way to deliver interactive and dynamic content to a wide audience. A "dynabook"... I wonder if anyone has been promoting this idea for say, oh, thirty years?
Fear of computers playing a human-like role in human interaction is nothing new. It reminds me of another story Lazowska told, from Time Magazine's article on the computer as the 1982 Machine of the Year. The article mentions CADUCEUS, one of the medical expert systems that was at the forefront of AI's focus on intelligent systems in the '70s and '80s. Here's the best passage:
... while it is possible that a family doctor would recognize 4,000 different symptoms, CADUCEUS is more likely to see patterns in what patients report and can then suggest a diagnosis. The process may sound dehumanized, but in one hospital where the computer specializes in peptic ulcers, a survey of patients showed that they found the machine "more friendly, polite, relaxing and comprehensible" than the average physician.
There are days when I am certain that we can create an adaptive tutoring system that is more relaxing and comprehensible than I am as a teacher, and probably friendlier and politer to boot.
Lazowska closed with an exhortation that computer scientists adopt the stance of the myth buster in trying to educate the general population, whether myths about programming (e.g., "Programming is a solitary activity"), employment ("Computing jobs will all go overseas."), or intrinsic joy ("There are no challenges left."). He certainly gave his audience plenty of raw material for busting one of the myths about the discipline not being interesting: "Computer science lacks opportunities to change the world." Not only do we change the world directly in the form of things like the Internet; these days, when almost anyone changes the world, they do so by using computing!
Lazowska's talk was perhaps too long, trying to pack more information into an hour than we could comfortably digest. But it was a good way to close out SIGCSE, given that one of its explicit themes seemed to be engaging the world and that the buzz everywhere I went at the conference was about how we need to reach out more and communicate more effectively.
Yesterday I was listening to comedian Rodney Laney do a bit called "Old Jobs". He explained that the best kind of job to have is a one-word job that everyone understands. Manager, accountant, lawyer -- that's good. But if you have to explain what you do, then you don't have a good job. "You know the inside of the pin has these springs? I put the springs on the inside of the pins." And then you have to explain why you matter... "Without me, the pins wouldn't go click." Bad job.
Okay, computer scientists and software developers, raise your hands if you've had to explain what you do to a new acquaintance at a party. To a curious relative? I should say "tried to explain", because my own attempts come up short far too often.
I think Rodney has nailed a major flaw in being a computer scientist.
Sadly, going with the one-word job title of "programmer" doesn't help, and the people who think they know what a programmer is often don't really know.
Even still, I like what I do and know why it's a great job.
(Thanks to the wonders of the web, you can watch another version of Laney's routine, Good Jobs, on-line at Comedy Central. I offer no assurance that you'll like it, but I did.)
Recently, both Lance Fortnow and Michael Mitzenmacher wrote entries on how often a prof can miss a class during the semester. This is an issue for any instructor who has a professional life. Between conferences to attend and professional service duties, there will always be a conflict sometime.
I have a standing solution for myself, developed through many years of teaching, going to conferences, reviewing for NSF, and serving on program committees. I teach a Tuesday/Thursday schedule in a fifteen-week semester, so a full set of class meetings is thirty. Every semester, I just plan for twenty-eight.
In the fall, I have standing plans to attend OOPSLA and, until a couple of years ago, PLoP. In the spring come SIGCSE, ChiliPLoP, or both. I can usually cover 28 sessions both semesters with a little calendar help. Until the 2007-2008 academic year, we always had two class days during Thanksgiving week, which gave my courses a 29th meeting day. The spring has Spring Break. When my conference schedule falls just wrong and leaves me a day short, I will ask someone to guest lecture.
My students don't seem to mind. I usually leave them with a good project to work on while I'm gone, sometimes larger than the usual project given that they have more time to spend on it. In the end, what students learn is less about what I do in class than about what they do with the course material, and a good-sized project is usually well worth the time allotted. When I get back, we can debrief the project and, when appropriate, discuss what I learned while I was away. Later, I can fold what I learned into future courses, which makes the two class days missed an investment in the experience I can offer.
This semester I faced an unusual choice. Instead of one 15-week course, I am teaching three 5-week courses. My away time, for SIGCSE, all fell during one of the 5-week sessions. 28 out of 30 seems reasonable, but 8 out of 10 did not. So I arranged to meet my students for a couple of "make-up sessions". We held one the day before I left for Portland. After we had completed sessions 8 and 9 after break, we decided that 9 out of 10 had been enough, and we called it a wrap. I was willing to do a tenth session if students were interested, but they seemed ready to move on, so we did.
The choices we face at a primarily undergraduate "teaching university" are probably different from those faced at bigger research schools. First, I suspect that some if not all of Lance's and Michael's teaching is done in graduate classes. Grad students are a different audience, one perhaps better able to use time away from class productively while still learning new material. Second, at the bigger schools, teaching a class for undergrads often means having one or more graduate TAs to help. These folks are often more than capable of pinch-hitting for an extra absence or two during the semester, with no apparent loss in quality to the students. (If you believe some of the stereotypes about research-oriented faculty, then you might think that the students could be better off with a TA filling in. But I think that stereotype is overblown and often just wrong.)
Another option available to us these days is videocasting. One of my colleagues who travels a lot in-semester sometimes records a lecture for his students in a classroom that supports showing the professor and the projected image in the video. This takes time, if only because there is a tendency for an instructor to want not to leave blemishes in a videocast recorded for posterity -- even little glitches that are normal in any in-person presentation. I've not tried this yet, but I might one day soon when the conditions are right. Done well, this could be better than even a well-prepared set of lecture notes and questions.
A while back, I wondered out loud how I might be able to improve my "topics in languages" courses over the course of the semester, now that they run as three 5-week iterations instead of the usual 15-week course. The languages in the three courses have been different -- bash, PHP, and now Ruby -- which influences how and when I teach what, but all of the courses are about scripting, so there is a common mindset. This has made it possible, for example, to reuse several in-class examples and homework exercises, as a way for students participating in more than one of the iterations to compare and contrast how the languages work.
I have noticed one unexpected phenomenon from the vantage point of closer-than-usual iterations: How I teach a language I know is different from how I teach a language I don't know. Or perhaps I should say, how I want to teach a language I know is different from how I want to teach a language I don't know.
Going into the bash section, I knew a fair amount of shell scripting already and felt a desire to delve into the areas that I didn't know as well. But the shell was new enough to most of the students, and different enough from the languages they knew, that I felt comfortable organizing the course around the basic principles of the Unix Way, especially pipes. Using a good secondary text, Classic Shell Scripting, helped, too.
Then came PHP, about which I knew relatively little. I found myself paying close attention to low-level syntactic issues as I learned the language myself. "Look at this cool thing I just learned about variables..." was a typical expression in class. Some of the examples I used were rather un-PHP-like as I explored the boundaries of the language.
I am now teaching Ruby, a language I know and like pretty well. I find myself wanting to jump past the low-level stuff like variables and classes right to the application level, where students can see Ruby in action. This makes sense for me, since I know most of that low-level stuff cold, but perhaps not so much for a student seeing Ruby for the first time. Fortunately, many of the ideas in Ruby are similar enough to the ones they have seen in the previous scripting languages and in other programming languages that they can pick them up quickly. Ruby also is pretty easy to read, as was PHP, which makes code approachable. Still, I am pushing myself to be sure that the applications I show them progress in a reasonable way from simpler language features to more complex ones, so that students can grow in their understanding smoothly. This week, I evolved a simple diff script from Everyday Scripting with Ruby as our first example and wrote a script for finding popular pages in a server log based on Tim Bray's chapter in Beautiful Code.
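A script of that kind can be quite short. Here is a minimal sketch in the spirit of the log-analysis example described above -- the log format, field layout, and function name are my assumptions, not the actual class code:

```ruby
# Count hits per requested path in a web server log and report the most
# popular pages. Assumes common-log-format-style lines, where the request
# appears as a quoted field: "GET /path HTTP/1.1".

def popular_pages(lines, top = 3)
  counts = Hash.new(0)
  lines.each do |line|
    # The requested path is the second token of the quoted request.
    counts[$1] += 1 if line =~ %r{"GET (\S+) }
  end
  counts.sort_by { |_path, n| -n }.first(top)
end

log = [
  '127.0.0.1 - - [x] "GET /index.html HTTP/1.1" 200 512',
  '127.0.0.1 - - [x] "GET /blog/ HTTP/1.1" 200 1024',
  '127.0.0.1 - - [x] "GET /index.html HTTP/1.1" 200 512',
]
popular_pages(log)   # => [["/index.html", 2], ["/blog/", 1]]
```

In a course, the fun comes from pointing the same dozen lines at a real log with millions of entries -- a nice instance of making the data matter.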
I'll have to ask the students who have been in all three sections whether they noticed differences in how I approached the languages and what, if any, difference it made to them.
Owen is reputed to have said something like "Don't give as a programming assignment something the student could just as easily do by hand." (I am still doing penance, even though Lent ended two weeks ago.) This has been dubbed Astrachan's Law, perhaps by Nick Parlante. In the linked paper, Parlante says that showmanship is the key to the Law, that
A trivial bit of code is fine for the introductory in-lecture example, but such simplicity can take the fun out of an assignment. As jaded old programmers, it's too easy to forget the magical quality that software can have, especially when it's churning out an unimaginable result. Astrachan's Law reminds us to do a little showing off with our computation. A program with impressive output is more fun to work on.
I think of Astrachan's Law in a particular way. First, I think that it reaches beyond showmanship: Not only do students have less fun working on trivial programs, they don't think that trivial programs are worth doing at all -- which means they may not practice enough, or at all. Second, I most often think of Astrachan's Law as talking about data. When we ask students to convert Fahrenheit to Celsius, or to sum ten numbers entered at the keyboard, we waste the value of a program on something that can be done faster with a calculator or -- gasp! -- a pencil and paper. Even if students want to know the answer to our trivial assignment, they won't see a need to master Java syntax to find it. You don't have to go all the way to data-intensive computing, but we really should use data sets that matter.
Yesterday, I encountered what might be a variant or extension of Astrachan's Law.
John Zelle of Wartburg College gave a seminar for our department on how to do virtual reality "on a shoestring" -- for $2000 or less. He demonstrated some of his equipment, some of the software he and his students have written, and some of the programs written by students in his classes. His presentation impressed me immensely. The quality of the experience produced by a couple of standard projectors, a couple of polarizing filters, and a dollar pair of paper 3D glasses was remarkable. On top of that, John and his students wrote much of the code driving the VR, including the VR-savvy presentation software.
Toward the end of his talk, John was saying something about the quality of the VR and student motivation. He commented that it was hard to motivate many students when it came to 3D animation and filmmaking these days because (I paraphrase) "they grow up accustomed to Pixar, and nothing we do can approach that quality". In response to another question, he said that a particular something they had done in class had been quite successful, at least in part because it was something students could not have done with off-the-shelf software.
These comments made me think about how, in the typical media computation programming course, students spend a lot of time writing code to imitate what programs such as Photoshop and Audacity do. To me, this seems empowering: the idea that a freshman can write code for a common Photoshop filter in a few lines of Java or Python, at passable quality, tells me how powerful being able to write programs makes us.
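To make the claim concrete, here is a sketch of such a filter -- in Ruby rather than Java or Python, to match the course language above, and with an image modeled simply as an array of [r, g, b] pixels (a simplification; a media-computation course would use a real image library):

```ruby
# A grayscale "Photoshop filter" in a few lines. Each pixel's gray level
# is computed with the standard luminance weights for RGB.

def grayscale(pixels)
  pixels.map do |r, g, b|
    y = (0.299 * r + 0.587 * g + 0.114 * b).round
    [y, y, y]
  end
end

# A pure red pixel and a black pixel:
grayscale([[255, 0, 0], [0, 0, 0]])   # => [[76, 76, 76], [0, 0, 0]]
```

The whole trick fits in one map over the pixels, which is exactly the point: a freshman can own this code.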
But maybe to my students, Photoshop filters have been done, so that problem is solved and not worthy of being done again. Like so much of computing, such programs are so much a part of the background noise of their lives that learning how to make them work is as appealing to them as making a ball-point pen is to people of my age. I'd hope that some CS-leaning students do want to learn such trivialities, on the way to learning more and pushing the boundaries, but there may not be enough folks of that bent any more.
On only one day's thought, this is merely a conjecture in search of supporting evidence. I'd love to hear what you think, whether pro, con, or other.
I do have some anecdotal experience that is consistent in part with my conjecture, in the world of 2D graphics. When we first started teaching Java in a third-semester object-oriented programming course, some of the faculty were excited by what we could do graphically in that course. It was so much more interesting than some of our other courses! But many students yawned. Even back in 1997 or 1998, college students came to us having experienced graphics much cooler than what they could do in a first Java course. Over time, fewer and fewer students found the examples knock-out compelling; the graphics became just another example.
If this holds, I suppose that we might view it as a new law, but it seems to me a natural extension of Astrachan's Law, a corollary, if you will, that applies the basic idea to the realm of applications, rather than data.
My working title for this conjecture is the Pixar Effect, from the Zelle comment that crystallized it in my mind. However, I am open to someone else dubbing it the Wallingford Conjecture or the Wallingford Corollary. My humility is at battle with my ego.
Is anybody home? After a flurry of writing from SIGCSE, I returned home to family time and plenty of work at the office. The result has been one entry in ten days. I look forward to finishing up my SIGCSE reports, but they appear to lie a bit further out, as the next week or so are busy. I have a few new topics in the hopper waiting for a few minutes to write as well.
One bit of good news is that part of my busy-ness this week and next is launching the third iteration of my language topics course. We've done bash and PHP and are now moving on to Ruby, one of my favorite languages. Shell scripting is great, but its tools are too limited to make writing bigger programs fun. PHP was better than I expected, but in the end it is really about building web sites, not writing more general programs. (And after a few weeks of using the language, PHP's warts started to grate on me.)
Ruby is... sublime. It isn't perfect, of course, but even its idiosyncrasies seem to get out of my way when I am deep in code. I looked up the definition of 'sublime', as I sometimes do when I use a word which is outside my daily working vocabulary or is misused enough in conversation that I worry about misusing it myself. The first set of definitions has a subtlety reminiscent of Ruby. To "vaporize and then condense right back again" sounds just like Ruby getting out of my way, only for me to find that I've just written a substantial program in a few lines. (My favorite, though, is "well-meaning ineptitude that rises to empyreal absurdity"!)
This is my first time to teach Ruby formally in a course. I hope to use this new course beginning as a prompt to write a few entries on Ruby and what teaching it is like.
There are many wonderful resources for learning about and programming in Ruby. I've suggested that my students use the pickaxe book as a primary reference, even if they use the first edition, a complete version of which is available on-line. In today's class, though, I used a simple evolutionary example from Brian Marick's book Everyday Scripting with Ruby. I hesitated to use this book as the students' primary source because it was originally written for testers without any programming background, and my course is for upper-division CS majors with several languages under their belts. But Brian works through several examples in a way that I find compelling, and I think I can base a few solid sessions on one or two of them.
This book makes me wonder how easy it would be to re-target a book from an audience like non-programming testers to an audience of scripting-savvy programmers who want to learn Ruby's particular yumminess. I know that in the course of writing the book Brian generalized his target audience from testers to the union of three different audiences (testers, business analysts, and programmers). Maybe after I've lived with the book and an audience of student programmers I'll have a better sense of how well the re-targeting worked. If it works for my class, then I'll be inclined to adopt it for the next offering of this course.
Anyway, today we evolved a script for diffing two directories of files for a tester. I liked the flow of development and the simple script that resulted. Now we will move on to explore language features and their use in greater depth. One example I hope to work through soon, perhaps in conjunction with Ruby's regular expressions, is "Finding Things", Tim Bray's chapter in Beautiful Code.
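For readers who haven't seen Marick's example, here is a minimal sketch of what such a directory-diffing script might look like. The `diff_dirs` method and its report format are my own illustration, not the code from the book or from class; the essence is the same, though: report files unique to each directory and files whose contents differ.

```ruby
# Compare two directories: which files exist only in one,
# and which shared files have different contents?
def diff_dirs(dir_a, dir_b)
  a = Dir.children(dir_a).sort
  b = Dir.children(dir_b).sort
  report = { only_a: a - b, only_b: b - a, changed: [] }
  (a & b).each do |name|
    path_a = File.join(dir_a, name)
    path_b = File.join(dir_b, name)
    next unless File.file?(path_a) && File.file?(path_b)
    report[:changed] << name if File.read(path_a) != File.read(path_b)
  end
  report
end
```

Much of the appeal in class was watching a script like this grow one small, testable decision at a time; Ruby's core `Dir` and `File` classes do most of the work.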
Oh, and I must say that this is the first time that one of my courses has a theme song -- and a fine theme song, indeed. Now, if only someone would create a new programming language called "Angie", I would be in heaven.