February 28, 2006 11:19 PM

DNA, Ideas, and the CS Curriculum

Today is the anniversary of Watson and Crick's piecing together the three-dimensional structure of DNA, the famed double helix. As with many great discoveries, Watson and Crick were not working in isolation. Many folks were working on this problem, racing to be the first to "unlock the secret of life". And Watson and Crick's discovery itself depended crucially on data collected by chemist Rosalind Franklin, who died before the importance of her contribution became widely known outside of the inner circle of scientists working on the problem.

Ironically, this is also the birthday of one of the men against whom Watson and Crick were racing: Linus Pauling. Pauling is the author of one of my favorite quotes:

The best way to have a good idea is to have a lot of ideas...

Perhaps the great geniuses have only good ideas, but most of us have to work harder. If we can free ourselves to Think Different, we may actually come across the good ideas we need to make progress.

Of course, you can't stop at generating ideas. You then have to examine your candidates critically, exposing them to the light of theory and known facts. Whereas one's inner critic is an enemy during idea generation, it is an essential player during the sifting phase.

Pauling knew this, too. The oft-forgotten second half of Pauling's quote is:

... and throw the bad ones away.

Artists work this way, and so do scientists.

This isn't a "round" anniversary of Watson and Crick's discovery; they found the double helix in 1953. It's not even a round anniversary of Pauling's birth, as he would be 105 today. (Well, that's sort of round.) But I heard about the anniversaries on the radio this morning, and the story grabbed my attention. Coincidentally, DNA has been on my mind for a couple of reasons lately. First, my last blog entry talked about a paper by Bernard Chazelle that uses DNA as an example of duality, one of the big ideas that computer science has helped us to understand. Then, on the plane today, I read a paper by a group of folks at Duke University, including my friend Owen Astrachan, on an attempt to broaden interest in computing, especially among women.

Most research shows that women become interested in computing when they see how it can be used to solve real problems in the world. The Duke folks are exploring how to use the science of networks as a thematic motivation for computing, but another possible domain of application is bioinformatics. Women who major in science and technology are far more likely to major in biology than in any other discipline. Showing the fundamental role that computing plays in the modern biosciences might be a way to give women students a chance to get excited about our discipline, before we misdirect them into thinking that computerScience.equals( programming ).

My department launched a new undergraduate major in bioinformatics last fall. So we have a mechanism for using the connection between biology and computing to demonstrate computing's utility. Unfortunately, we have made a mistake so far in the structure of our program: all students start by taking two semesters of traditional programming courses before they see any bioinformatics! I think we need to do some work on our first courses. Perhaps Astrachan and his crew can teach us something.

I'm in Houston for SIGCSE this week, and the Duke paper will be presented here on Saturday. Sadly, I have to leave town on Friday... If I want to learn more about the initiative than I can learn just from the paper, I will need to take advantage of my own social network to make a connection.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 24, 2006 2:30 PM

iPods and Big Ideas

Last summer I blogged on a CACM article called "The Thrill Is Gone?", by Sanjeev Arora and Bernard Chazelle, which suggested that "we in computing have done ourselves and the world a disservice in failing to communicate effectively the thrill of computer science -- not technology -- to the general public." Apparently, Chazelle is carrying the flag for effective communication into battle by trying to spread the word beyond traditional CS audiences. The theoryCS guys -- Suresh, Lance, and Ernie among them -- have all commented on his latest public works: an interview he gave prior to giving a talk called "Why Computer Science Theory Matters?" at the recent AAAS annual meeting, the talk itself, and a nice little article to appear in an upcoming issue of Math Horizons. Be sure to read the Horizons piece; the buzz is well-deserved.

Chazelle tries to convince his audience that computing is the nexus of three Big Ideas:

  • universality - the idea that any digital computer can, in principle, do what any other does. Your iPod can grow up to be anything any other computer can be.
  • duality - the idea that data and program are in principle interchangeable, that perspective determines whether something is data or program.
  • self-reference - the idea that a program can refer to itself, or to data that looks just like it. This, along with the related concept of self-replication, ties the intellectual ground of computing to that of biology.

... and the source of two new, exceedingly powerful ideas that cement the importance of computing to the residents of the 21st century: tractability and algorithm.
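
To make the duality idea concrete for a programmer, here is a minimal sketch of my own -- not Chazelle's example -- in which a tiny arithmetic "program" is written down as ordinary nested lists, and a short interpreter then treats that same data as a program by evaluating it:

    import java.util.Arrays;
    import java.util.List;

    // A toy illustration of duality: the same nested list is plain data until
    // we hand it to eval(), at which point perspective turns it into a program.
    public class Duality {

        // Evaluate either a literal number or a list of the form (operator, arg, arg).
        static int eval(Object expr) {
            if (expr instanceof Integer) {
                return (Integer) expr;             // plain data: a literal value
            }
            List<?> form = (List<?>) expr;         // data about to be run as a program
            String op = (String) form.get(0);
            int left  = eval(form.get(1));
            int right = eval(form.get(2));
            if (op.equals("+")) return left + right;
            if (op.equals("*")) return left * right;
            throw new IllegalArgumentException("unknown operator: " + op);
        }

        public static void main(String[] args) {
            // (2 + 3) * 4, written down as data
            Object program = Arrays.asList("*", Arrays.asList("+", 2, 3), 4);
            System.out.println(eval(program));     // prints 20
        }
    }

The nested list is just a value to pass around until we hand it to eval(), at which point perspective turns it into a computation to run. That shift of perspective is the whole point.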

Chazelle has a nice way of explaining tractability to a non-technical audience, in terms of the time it takes to answer questions. We have identified classes of questions characterized by their "time signatures", or more generally, their consumption of any resource we care about. This is a Big Idea, too:

Just as modern physics shattered the platonic view of a reality amenable to noninvasive observation, tractability clobbers classical notions of knowledge, trust, persuasion, and belief. No less.

Chazelle's examples, including e-commerce and nuclear non-proliferation policy, are accessible to any educated person.
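
To get a back-of-the-envelope feel for what a "time signature" means, here is a toy calculation of my own, with made-up but representative numbers -- assume a machine that executes a billion steps per second:

    // A toy comparison of two "time signatures" on the same problem size.
    public class TimeSignatures {
        public static void main(String[] args) {
            double stepsPerSecond = 1e9;           // assume a billion steps per second
            int n = 100;                           // a modest problem size

            double quadratic   = Math.pow(n, 2);   // an n^2 algorithm: 10,000 steps
            double exponential = Math.pow(2, n);   // a 2^n algorithm: ~1.3e30 steps

            System.out.printf("n^2 time: %.1e seconds%n", quadratic / stepsPerSecond);
            System.out.printf("2^n time: %.1e seconds, about %.1e years%n",
                              exponential / stepsPerSecond,
                              exponential / stepsPerSecond / (3600.0 * 24 * 365));
        }
    }

The n-squared question is answered before your finger leaves the Enter key; the 2-to-the-n question takes far longer than the age of the universe. That difference in time signature, not any shortcoming of our hardware, is what separates the tractable from the intractable.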

The algorithm is the "human side" of the program, an abstract description of a process. The whole world is defined by processes, which means that in the largest sense computer science gives us tools for studying just about everything that interests us. Some take the extreme view that all science is computer science now. That may be extreme, but in one way it isn't extreme enough! Computer science doesn't revolutionize only how we study the sciences; it also changes how we study the social sciences, literature, and art. I think that the greatest untapped reservoir of CS's influence lies in the realm of economics and political science.

Chazelle makes his case that CS ultimately will supplant mathematics as the primary vehicle for writing down our science:

Physics, astronomy, and chemistry are all sciences of formulae. Chaos theory moved the algorithmic zinger to center stage. The quantitative sciences of the 21st century (e.g., genomics, neurobiology) will complete the dethronement of the formula by placing the algorithm at the core of their modus operandi.

This view is why I started my life as a computer scientist by studying AI: it offered me the widest vista on the idea of modeling the world in programs.

I will be teaching CS1 this fall for the first time in ten years or so. I am always excited at the prospect of a new course and kind of audience, but I'm especially excited at the prospect of working with freshmen who are beginning our major -- or who might, or who might not but will take a little bit of computing with them off to their other majors. Learning to program (perhaps in Ruby or Python?) is still essential to that course, but I also want my students to see the beauty and importance of CS. If my students can leave CS1 next December with an appreciation of the ideas that Chazelle describes, and the role computing plays in understanding them and bringing them to the rest of the world, then I will have succeeded in some small measure.

Of course, that isn't enough. We need to take these ideas to the rest of our students, especially those in the sciences -- and to their faculty!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 20, 2006 6:48 PM

Changing How People Think

Pascal Van Cauwenberghe writes a bit about agile development, lean production, and other views of software engineering. He recently quoted the Toyota Way Fieldbook as inspiration for how to introduce lean manufacturing as change. I think that educators can learn from Pascal's excerpt, too.

... we're more likely to change what people think by changing what they do, rather than changing what people do by changing what they think.

I can teach students about object-oriented programming, functional programming, or agile software development. I can teach vocabulary, definitions, and even practices and methodologies. But this content does not change learners' "deeply held values and assumptions". When they get back into the trenches, under the pressure of new problems and time, old habits of thought take over. No one should be surprised that this is true for people who are not looking to change, and that is most people. But even when programmers want to practice the new skill, their old habits kick in with regularity and unconscious force.

The Toyota Way folks use this truth as motivation to "remake the structure and processes of organizations", with changes in thought becoming a result, not a cause. This can work in a software development firm, and maybe across a CS department's curriculum, but within a single classroom this truth tells us something more: how to orient our instruction. As an old pragmatist, I believe that knowledge is habit of thought, and that the best way to create new knowledge is to create new habits. This means that we need to construct learning environments in which people change what they do in practical, repeatable ways. Once students develop habits of practice, they have at their disposal experiences that enable them to think differently about problems. The ideas are no longer abstract and demanded by an outside agent; they are real, grounded in concrete experiences. People are more open to change when it is driven from within than from without, so this model increases the chance that learners will seriously entertain the new ideas that we would like them to learn.

In my experience, folks who try XP practices -- writing and running tests all the time, refactoring, sharing code, all supported by pair programming and a shared culture of tools and communication -- are more likely to "get" the agile methods than are folks to whom the wonderfulness of agile methods is explained. In the end, I think that this is true for nearly all learning.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 16, 2006 4:18 PM

Death by Risk Aversion, University Edition

Death by risk aversion, courtesy of Creating Passionate Users

In When Committees Suck the Life Out of Great Ideas, Jeremy Zawodny wrote:

A few days ago, I saw a nice graphic on the Creating Passionate Users blog which was intended to illustrate Death by risk-aversion: [at right]

I contend that you can make a very similar graphic to illustrate what happens when too many people get involved in designing a product.

I saw that blog entry, too, but my first thought wasn't of software products and committees. It was of academic curriculum. I have mentioned this post on academic conservatism so often that I probably sound like a broken record, but real change in how we teach computer science courses is hard to effect at most schools. This difficulty stems in large part from the "fear occurs here" syndrome that Kathy Sierra identifies and Jeremy Zawodny recognizes in product development. Faculties are understandably reluctant to change what they've always done, especially if in their own minds things are going pretty well. Unfortunately, that comfort often results from a lack of circumspection... Maybe things aren't going as well as they always have, and we might see that if we'd only pay more attention to the signals our students are sending. Maybe things are going fine, but the world around us has changed and so we are solving a problem that no longer exists while the real problem sits ingloriously at the back of the room.

The result of the "fear occurs here" syndrome is that we keep doing the same old, same old, while opportunities to get better drift by -- and, sometimes, while some competitor sneaks in and eats our lunch. The world rarely stays the same for very long.

There are always folks who push the boundaries, trying new things, working them out, and sharing the results with the rest of us. The annual SIGCSE conference and Educators Symposium at OOPSLA are places I can reliably learn from teachers who are trying new ideas. And there are many... Stephen Edwards on testing early in the curriculum, Mark Guzdial on multimedia programming in CS1, Owen Astrachan on just about anything. I would love to see my own department consider Mark's media computation approach in CS1. Short of that, I plan to consider something like Owen's science-of-networks approach for next fall. (You can see the SIGCSE 2006 paper on the latter via the conference's on-line program. Go to Saturday, 8:30 AM, and follow the "Recruitment and Retention" link.) Indeed, I am looking forward to SIGCSE in a couple of weeks.

But I've had ChiliPLoP'06 on my mind, too. I am co-chairing ChiliPLoP again this year, and we have two relevant hot topics: a reprise of last year's elementary patterns project, and Dave West and Pam Rostal expanding their OOPSLA Educators Symposium presentation into a software development curriculum built on pedagogical and apprenticeship patterns. But these ideas are emblematic of how hard it is to effect real change in curriculum: it is hard to do a complete job, at least complete enough to attract a large body of adopters, and some ideas are simply so unlike anything that people currently do as to be unthinkable by the vast majority of practitioners. We know that revolutionary change can happen, with the right group of people leading the way, working hard, and having a little luck along the way. But such revolutions are a low-probability proposition.

[As an aside... After reading this article on productivity patterns and "life hacks" over at 43 Folders, I boldly took a flyer and sent site guru Merlin Mann an invitation to consider coming to ChiliPLoP some time to work on a productivity pattern language with his GTD buddies and a few patterns people. Productivity patterns don't fit the narrow definition of Pattern Languages of Programs, but ChiliPLoP has always been about pattern languages more broadly. Besides, I'd love to swap Mac hacks with a few more fellow junkies.]


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 16, 2006 3:23 PM

Eat *That* Dog Food

Eric Sink tells one of the best stories ever to illustrate the idea of eating your own dog food. Go read the whole paper and story, but I can set up the punchline pretty quickly: Table saws are powerful, dangerous tools. Many woodworkers lose fingers every year using table saws. But...

A guy named Stephen Gass has come up with an amazing solution to this problem. He is a woodworker, but he also has a PhD in physics. His technology is called Sawstop. It consists of two basic inventions:
  • He has a sensor which can detect the difference in capacitance between a finger and a piece of wood.
  • He has a way to stop a spinning table saw blade within 1/100 of a second, less than a quarter turn of rotation.


The videos of this product are amazing. Slide a piece of wood into the spinning blade, and it cuts the board just like it should. Slide a hot dog into the spinning blade, and it stops instantly, leaving the frankfurter with nothing more than a nick.

Here's the spooky part: Stephen Gass tested his product on his own finger! This is a guy who really wanted to close the distance between him and his customers.

Kinda takes the swagger out of your step for using your own blogging tool.

Eric's paper is really about software developers and their distance from users. His title, Yours, Mine and Ours, identifies three relationships developers can have with the software they write vis-à-vis the other users of the product. Many of his best points come in the section on UsWare, which is software intended for use by both end users and the developers themselves. Eric is well-positioned to comment on this class of programs, as his company develops a version control tool used by his own developers.

It's easy for developers to forget that they are not like other users. I know this danger well; as a university faculty member, I need to remind myself daily that I am not like my students, either in profile or in my daily engagement with the course material.

I like his final paragraph, which summarizes his only advice for solving the ThemWare/UsWare problems:

Your users have things to say. Stop telling them how great your software is and listen to them tell you how to make it better.

We all have to remind ourselves of this every once in a while. Sadly, some folks never seem to. Many faculty assume that they have nothing to learn from what their students are saying, but that is almost always because they aren't really listening. Many universities spend so much time telling students why they should come there that they don't have the time or inclination to listen to students say what would make them want to come.

I also learned an interesting factoid about State Farm Insurance, the corporate headquarters for which are located down I-74 from Eric's home base of Urbana, Illinois. State Farm is also a major corporate partner of the IT-related departments at my university, including the CS department. They work hard to recruit our students, and they've been working hard to help us with resources when possible. The factoid: State Farm is Microsoft's largest non-government customer. [In my best Johnny Carson imitation:] I did not know that. As a result of this fact, Microsoft has an office in the unlikely location of Bloomington, Illinois.

Despite an obvious interest in hiring folks with experience using Microsoft tools, State Farm has never pressured us to teach .NET or C# or VisualStudio or any particular technology. I'm sure they would be happy if we addressed their more immediate needs, but I am glad to know that they have left decisions about curriculum to us.

That said, we are beginning to hear buzz from other insurance companies and banks, most located in Des Moines, about the need for student exposure to .NET. We probably need to find a way to give our students an opportunity to get experience here beyond VB.NET and Office. Where is that link to Mono...


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 13, 2006 6:09 PM

Doing What You Love, University Edition

Several folks have commented already on Paul Graham's How to Do What You Love. As always, this essay is chock full of quotable quotes. Ernie's 3D Pancakes highlights one paragraph on pain and graduate school. My grad school experience was mostly enjoyable, but I know for certain that I was doing part of it wrong.

Again as always, Graham expresses a couple of very good ideas in an easy-going style that sometimes makes the ideas seem easier to live than they are. I am thinking about how I as a parent can help my daughters choose a path that fulfills them rather than the world's goals for them, or mine.

Among the quotable quotes that hit close to home for me were these. First, on prestige as siren:

Prestige is especially dangerous to the ambitious. If you want to make ambitious people waste their time on errands, the way to do it is to bait the hook with prestige. That's the recipe for getting people to give talks, write forewords, serve on committees, be department heads, and so on. It might be a good rule simply to avoid any prestigious task. If it didn't suck, they wouldn't have had to make it prestigious.

Ouch. But in my defense I can say that in the previous fourteen years my department had beaten all of the prestige out of being our head. When I came around to considering applying for the job, it looked considerably less prestigious than the ordinary headship. I applied for the job precisely because it needed to be done well, and I thought I was the right person to do it. I accepted the job, with a shorter-than-usual review window, and with no delusions of grandeur.

Then, on prematurely settling on a career goal:

Don't decide too soon. Kids who know early what they want to do seem impressive, as if they got the answer to some math question before the other kids. They have an answer, certainly, but odds are it's wrong.

From the time I was seven years old until the time I went to college, I knew that I wanted to be an architect -- the regular kind that designs houses and other buildings, not the crazy enterprise integration kind. My premature optimization mostly didn't hurt me. When some people realize that they have been wrong all that time, they are ashamed or afraid to tell everyone and so stay on the wrong path. During my first year in architecture school, when I realized that as much as I liked architecture it probably wasn't the career for me, I was fortunate enough to feel comfortable changing courses of study right away. It was a sea change for me mentally, but once it happened in my mind I knew that I could tell folks.

I somehow knew that computer science was where I should go. Again, I was fortunate not to have skewed my high school preparation in a way that made the right path hard to join; I had taken a broad set of courses that prepared me well for almost any college major, including as much math and science as I could get.

One way that my fixation on architecture may have hurt me was in my choice of university. I optimized school selection locally by picking a university with a strong architecture program. When I decided to switch to a CS major, I ended up in a program that was not as strong. I certainly could have gone to a university that prepared me better for CS grad school. One piece of advice that I'll give my daughters is to choose a school that gives you many options. Even if you never change majors, having plenty of strong programs will mean a richer ecosystem of ideas in which to swim. (I already give this advice to students interested in CS grad school, though there are different trade-offs to be made for graduate study.)

That said, I do not regret sticking with my alma mater, which gave me a very good education and exposed me to a lot of new ideas and people. Most of undergraduate education is what the student makes of it; it's only at the boundaries of high ambition where attending a particular school matters all that much.

Nor would I have traded my time in architecture school for a quicker start in CS. I learned a lot there that still affects how I think about systems, design, and education. More than that, it was important for me to try the one thing I thought I would love before moving on to something else. Making such decisions on purely intellectual grounds is a recipe for regret.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 08, 2006 2:23 PM

Functional Programming Moments

I've been having a few Functional Programming Moments lately. In my Translation of Programming Languages course, over half of the students have chosen to write their compiler programs in Scheme. This brought back fond memories of a previous course in which one group chose to build a content management system in Scheme, rather than one of the languages they study and use more in their other courses. I've also been buoyed by reports from professors in courses such as Operating Systems that some students are opting to do their assignments in Scheme. These students seem to have really latched onto the simplicity of a powerful language.

I've also run across a couple of web articles worth noting. Shannon Behrens wrote the provocatively titled Everything Your Professor Failed to Tell You About Functional Programming. I plead guilty on only one of the two charges. This paper starts off talking about the seemingly inscrutable concept of monads, but ultimately turns to the question of why anyone should bother learning such unusual ideas and, by extension, functional programming itself. I'm guilty on the count of not teaching monads well, because I've never taught them at all. But I do attempt to make a reasonable case for the value of learning functional programming.

His discussion of monads is quite nice, using an analogy that folks in his reading audience can appreciate:

Somewhere, somebody is going to hate me for saying this, but if I were to try to explain monads to a Java programmer unfamiliar with functional programming, I would say: "Monad is a design pattern that is useful in purely functional languages such as Haskell."
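
To ground the analogy, here is a minimal sketch, in present-day Java, of what reading "monad" as a design pattern might look like. This is my own toy Maybe-style container, not Behrens's code and certainly not Haskell's Maybe; it shows only the bare pattern of a unit operation that wraps a value and a bind operation that chains computations while the container handles the bookkeeping of possible failure:

    import java.util.function.Function;

    // A Maybe-style container read as a design pattern: just() wraps a value (unit),
    // bind() chains computations, and the container handles the "no value" plumbing.
    final class Maybe<T> {
        private final T value;                     // null means "nothing"

        private Maybe(T value) { this.value = value; }

        static <T> Maybe<T> just(T value) { return new Maybe<>(value); }   // unit
        static <T> Maybe<T> nothing()     { return new Maybe<>(null); }

        // Apply f only if there is a value; otherwise stay "nothing".
        <R> Maybe<R> bind(Function<T, Maybe<R>> f) {
            return value == null ? Maybe.<R>nothing() : f.apply(value);
        }

        @Override public String toString() {
            return value == null ? "Nothing" : "Just " + value;
        }
    }

    public class MaybeDemo {
        // A computation that can fail, returning Maybe instead of null.
        static Maybe<Integer> half(int n) {
            return n % 2 == 0 ? Maybe.just(n / 2) : Maybe.nothing();
        }

        public static void main(String[] args) {
            // The chain short-circuits as soon as any step yields Nothing,
            // with no explicit null checks in the calling code.
            System.out.println(Maybe.just(20).bind(MaybeDemo::half).bind(MaybeDemo::half)); // Just 5
            System.out.println(Maybe.just(10).bind(MaybeDemo::half).bind(MaybeDemo::half)); // Nothing
        }
    }

The payoff is in the chain: each step says what it does with a value, and the plumbing for the "no value" case lives in one place rather than being scattered through the calling code as null checks.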

I'm sure that some folks in the functional programming community will object to this characterization, in ways that Behrens anticipates. To some, "design patterns" are a lame crutch for object-oriented programmers who use weak languages; functional programming doesn't need them. I like Behrens's response to such a charge (emphasis added):

I've occasionally heard Lisp programmers such as Paul Graham bash the concept of design patterns. To such readers I'd like to suggest that the concept of designing a domain-specific language to solve a problem and then solving that problem in that domain-specific language is itself a design pattern that makes a lot of sense in languages such as Lisp. Just because design patterns that make sense in Java don't often make sense in Lisp doesn't detract from the utility of giving certain patterns names and documenting them for the benefit of ... less experienced programmers.

His discussion of why anyone should bother to do the sometimes hard work needed to learn functional programming is pretty good, too. My favorite part addressed the common question of why someone should willingly take on the constraints of programming without side effects when the freedom to compute both ways seems preferable. I have written on this topic before, in an entry titled Patterns as a Source of Freedom. Behrens gives some examples of self-imposed constraints, such as encapsulation, and how breaking the rules ultimately makes your life harder. You soon realize:

What seemed like freedom is really slavery.

Throw off the shackles of deceptive freedom! Use Scheme.

The second article turns the seductiveness angle upside down. Lisp is Sin, by Sriram Krishnan, tells a tale of being drawn to Lisp the siren, only to have his boat dashed on the rocks of complexity and non-standard libraries again and again. But all in all, he speaks favorably of ideas from functional programming and how they enter his own professional work.

I certainly second his praise of Peter Norvig's classic text Paradigms of AI Programming.

I took advantage of a long weekend to curl up with a book which has been called the best book on programming ever -- Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I have read SICP but the 300 or so pages I've read of Norvig's book have left a greater impression on me than SICP. Norvig's book is definitely one of those 'stay awake all night thinking about it' books.

I have never heard anyone call Norvig's book the best of all programming books, but I have heard many folks say that about SICP -- Structure and Interpretation of Computer Programs, by Abelson and Sussman. I myself have praised Norvig's book as "one of my favorite books on programming", and it teaches a whole lot more than just AI programming or just Lisp programming. If you haven't studied it, put it at or near the top of your list, and do so soon. You'll be glad you did.

In speaking of his growth as a Lisp programmer, Krishnan repeats an old saw about the progression of a Lisp programmer that captures some of the magic of functional programming:

... the newbie realizes that the difference between code and data is trivial. The expert realizes that all code is data. And the true master realizes that all data is code.

I'm always heartened when a student takes that last step, or shows that they've already been there. One example comes to mind immediately: The last time I taught compilers, students built the parsing tables for the compiler by hand. One student looked at the table, thought about the effort involved in translating the table into C, and chose instead to write a program that could interpret the table directly. Very nice.
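
Here is the flavor of what that student did, in toy form. This is my own reconstruction with a made-up two-rule grammar, not his code, and I have used an LL(1)-style table for simplicity: the parse table lives as plain data, and a short driver interprets it directly rather than being hard-coded as control flow:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A table-driven parser for the toy grammar  E -> n E'   E' -> + n E' | (empty).
    // The parse table is ordinary data; the driver below just interprets it.
    public class TableDrivenParser {
        // table[nonterminal][lookahead] -> right-hand side to push on the stack
        static final Map<String, Map<String, String[]>> table = new HashMap<>();

        static {
            table.put("E",  Map.of("n", new String[] {"n", "E'"}));
            table.put("E'", Map.of("+", new String[] {"+", "n", "E'"},
                                   "$", new String[] {}));
        }

        static boolean parse(List<String> tokens) {
            List<String> input = new ArrayList<>(tokens);
            input.add("$");                          // end-of-input marker
            Deque<String> stack = new ArrayDeque<>();
            stack.push("$");
            stack.push("E");                         // start symbol
            int i = 0;
            while (!stack.isEmpty()) {
                String top = stack.pop();
                String look = input.get(i);
                if (top.equals(look)) {
                    i++;                                       // terminal: consume the token
                } else if (table.containsKey(top)) {
                    String[] rhs = table.get(top).get(look);   // nonterminal: consult the table
                    if (rhs == null) return false;             // no entry: syntax error
                    for (int k = rhs.length - 1; k >= 0; k--) stack.push(rhs[k]);
                } else {
                    return false;                              // terminal mismatch
                }
            }
            return i == input.size();
        }

        public static void main(String[] args) {
            System.out.println(parse(List.of("n", "+", "n")));   // true
            System.out.println(parse(List.of("n", "+")));        // false
        }
    }

Change the grammar and you change the table, not the driver -- which is exactly the "data is code" lesson that student had already internalized.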

Krishnan's article closes with some discussion of how Lisp doesn't -- can't? -- appeal to all programmers. I found his take interesting enough, especially the Microsoft-y characterization of programmers as one of "Mort, Elvis, and Einstein". I am still undecided just where I stand on claims of the sort that Lisp and its ilk are too difficult for "average programmers" and thus will never be adoptable by a large population. Clearly, not every person on this planet is bright enough to do everything that everyone else does. I've learned that about myself many, many times over the years! But I am left wondering how much of this is a matter of ability and how much is a matter of needing different and better ways to teach. The monad article I discuss above is a great example. Monads have been busting the chops of programmers for a long time now, but I'm betting that Behrens has explained them in a way that "the average Java programmer" can understand, and maybe even have a chance of mastering Haskell. I've long been told by colleagues that Scheme was too abstract, too different, to become a staple for our students, but some are now choosing to use it in their courses.

Dick Gabriel once said that talent does not determine how good you can get, only how fast you get there. Maybe when it comes to functional programming, most of us just take too long to get there. Then again, maybe we teachers of FP can find ways to help accelerate the students who want to get good.

Finally, Krishnan closes with a cute but "politically incorrect analogy" that plays off his title:

Lisp is like the villainesses present in the Bond movies. It seduces you with its sheer beauty and its allure is irresistible. A fleeting encounter plays on your mind for a long, long time. However, it may not be the best choice if you're looking for a long term commitment. But in the short term, it sure is fun! In that way, Lisp is...sin.

Forego the demon temptations of Scheme! Use Perl.

Not.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 06, 2006 6:35 PM

Deeper Things Under The Surface

As a runner, I sometimes fall into the trap of expecting to see progress in my abilities. When I expect to experience a breakthrough every day, or every week, I am sure to be disappointed. First of all, breakthroughs don't happen all that often. In fact, they don't happen often at all, and when they do they seem to come in bunches -- PRs on several different routes at several different distances in the span of a couple of weeks. These spurts often tempt me into thinking that I'll just keep getting better and better!

But when these breakthroughs occur, whether alone or in spurts, they aren't really the point. What you did that day, or in the previous week, isn't often directly responsible for the breakthrough. The big gains happen outside conscious sight. After years as a casual runner, I saw my first big improvements in speed and stamina only after many, many months of slow increases in my daily and weekly mileage. I hadn't done anything spectacular over those months, just routine miles, day after day. This is what the gurus call 'building my aerobic base'. Deeper things were happening under the surface.

I think that this sort of thing happens when we learn, too. Can I memorize facts and be a whole lot smarter tomorrow than today? Maybe, but perhaps only for a short time. While cramming doesn't work very well for running, it may help you pass a fact-based final exam. But the gain is usually short term and, more importantly if you care about your professional competence, it doesn't result in a deep understanding of the area. That comes only after time, many days and weeks and months of reading and thinking and writing. Those months of routine study are the equivalent of 'building your mental base'. Eventually, you come to understand the rich network of concepts in the area. Sometimes, this understanding seems to come in a flash, but most of the time you just wake up one day and realize that you get it. You see, deeper things are happening under the surface.

I think this is true when mastering programming or a new programming style or language, too. Most of us can really grok a language if we live with it for a while, playing with it and pushing it and having it talk back to us through the errors we make and the joy and pain we feel writing new code and changing old code. Students don't always realize this. They try to program a couple of days a week, only to be disappointed when these episodes don't take them closer to being a Master. They could, if those sessions became part of a daily routine, if time and contact were given a chance to do their magic. Deeper things can happen under the surface, but only if we allow them to.

"Deeper things under the surface" is a catchphrase I borrow from an article of that name by Ron Rolheiser which talks about this phenomenon in human relationships. After a few months in my department's headship, I can see quite clearly how Rolheiser's argument applies to the relationship between a manager and the people for whom he works. We can all probably see how it applies to our relationships with friends, family, children, spouses, and significant others. I have to work over the long term to build relationships through contact. "Quality time" is a fine idea, and important, but when it becomes a substitute for sufficient quantity over sufficient time, it becomes a meaningless slogan.

But I think this applies to how we develop our skills. Just as in relationships, routine contact over time matters. In this case, absence may make the heart grow fonder, but it doesn't make the heart stronger. The cliche that better captures reality is "out of sight, out of mind".

A lot of techie students aren't comfortable with the sentiment I'm expressing here, but consider this quote from Rolheiser's article:

What's really important will be what's growing under the surface, namely, a bond and an intimacy that's based upon a familiarity that can only develop and sustain itself by regular contact, by actually sharing life on a day-to-day basis.

It may be sappy, but that's pretty much how I have always felt about the programming languages and topics and research problems that I mastered -- and most enjoyed, too.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Running, Teaching and Learning

February 03, 2006 4:52 PM

Is Web 2.0 a Mirage?

Everyone is talking about Web 2.0 these days. This isn't the sort of buzzword that tends to absorb me, as patterns and refactoring and agile methods did, but as tutorials chair for OOPSLA 2006, I am keen to get a sense of what developers are talking about and interested in learning about these days. Web 2.0 is everywhere, and so I've been reading a bit deeper to see what we should offer, technology-wise, at the conference. But the sociology of the term and its penumbra has been as intriguing as its technology.

Christian Sepulveda gave thought to why Web 2.0 had captured mindshare now and how it was different from what we've been doing. His answer to "why now?" centered on the convergence of intellectual supply and demand: "The demand for a user centric web, where sharing, communication and a rich experience is the norm, is intersecting the availability of technology, such as RSS and AJAX, to make it happen."

How is Web 2.0 different from Web 1.x? It is driven by the demands and needs of real users with real problems. So much of the previous web boom, he feels, was driven by a "build it and they will come" mentality.

Of course, that mentality worked for a lot of ideas that took root back in the old days -- even the 20th century! -- and now have matured. Wikis and blogs are only two such ideas.

Paul Graham took a somewhat more dispassionate position on Web 2.0, which isn't surprising given his general outlook on the world and, more relevant here, his own experiences doing cool web stuff back before the "availability of technology, such as RSS and AJAX, to make it happen." Here's my precis of his article, consisting of its first and last paragraphs:

Does "Web 2.0" mean anything? Till recently I thought it didn't, but the truth turns out to be more complicated. Originally, yes, it was meaningless. Now it seems to have acquired a meaning. And yet those who dislike the term are probably right, because if it means what I think it does, we don't need it.

The fact that Google is a "Web 2.0" company shows that, while meaningful, the term is also rather bogus. It's like the word "allopathic." It just means doing things right, and it's a bad sign when you have a special word for that.

Graham applauds the fact that Ajax now brings to everyday developers the ability to develop sites that take advantage of the web's possibilities. I found his discussion of democracy on the web, exemplified by sites such as del.icio.us, Wikipedia, Reddit, and Digg, to be right on the mark. The original promise of the web was how it could help us share information, but that promise was only the beginning of something bigger. It took guys like Ward Cunningham to show us the way. This sense of democracy extends beyond participants in social conversation to those folks we in software have always called users. It turns out that users get to participate in the conversation, too!

In the March 2006 issue of Dr Dobb's Journal, editor at large Michael Swaine expressed a more cynical version of Graham's take:

Web 2.0 is a commemorative coin minted in celebration of the end of the dot-com crash. Like all commemorative coins, it has no actual value.

So, we should focus our attention on the technologies that buttress the term, but as Graham points out, the ideas behind the technologies aren't new; they are just in a new syntax, a new framework, a new language. The software world creates its own troubles when it recycles old ideas in new form, and then complains that the world is changing all the time.

The cynical view on AJAX itself is expressed with great amusement by Brian Foote in What's New Here is that Nothing is New Here:

The fascinating thing about Ajax is that it is an amalgam of existing technologies that all date back to the twentieth century. It's what the Web 2.0 crowd might call a mash-up. Only the name is new.

Of course, all this cynicism doesn't change the fact that today's developers need to learn the current technologies and how to make them play with the rest of their software. So we'll certainly offer the best tutorials we can for the folks who come to OOPSLA'06.

And, as always, we should be careful not to let our nostalgia for our old tools blind us. For a sanity check on a slightly different topic (though perhaps more similar than the subject indicates), check out this article by Adam Connor.

Busy, busy, busy.


Posted by Eugene Wallingford | Permalink | Categories: Software Development

February 02, 2006 4:21 PM

Mac OS X Spell Checker Trivia

Before posting my last piece on Java trivia, I ran Mac OS X's spell checker on my article, and it flagged my very first line:

Via another episode of College Kids Say the Darnedest Things,

It suggested that I use Damnedest instead. This demonstrates several things:

  • My spell checker must not know me very well.
  • My spell checker doesn't know much about Art Linkletter, or Bill Cosby's TV biography.
  • My spell checker doesn't know about Mark Twain's advice:
    Substitute "damn" every time you're inclined to write "very"; your editor will delete it and the writing will be just as it should be.

So that's your Mac OS X spell checker trivia for the day.

Speaking of the day... Happy Groundhog Day! I like many of Bill Murray's movies, and even wrote an agile software development fantasy about one of them, but Groundhog Day is my favorite. Cue it up.


Posted by Eugene Wallingford | Permalink | Categories: General

February 02, 2006 3:47 PM

Java Trivia: Unary Operators in String Concatenation

Via another episode of College Kids Say the Darnedest Things, here is a fun little Java puzzler, suitable for your compiler students or, if you're brave, even for your CS1 students.

What is the output of this snippet of code?

        int price = 75;
        int discount = -25;
        System.out.println( "Price is " + price + discount + " dollars" );
        System.out.println( "Price is " + price - discount + " dollars" );
        System.out.println( "Price is " + price + - + discount + " dollars" );
        System.out.println( "Price is " + price + - - discount + " dollars" );
        System.out.println( "Price is " + price+-+discount + " dollars" );
        System.out.println( "Price is " + price+--discount + " dollars" );

Okay, we all know that the second println causes a compile-time error, so comment that line out before going on. After you've made your guesses, check out the answers.

Many students, even upper-division ones, are sometimes surprised that all of the rest are legal Java. The unary operators + and - are applied to the value of discount before it is appended to the string.

Even some who knew that the code would compile got caught by the fact that the output of the last two printlns is not identical to the output of the middle two. These operators are self-delimiting, so the scanner does not require that they be surrounded by white space. But in the last line, the scanner is able to match a single token, -- (the pre-decrement operator), rather than two unary - operators, and so it does. This is an example of how most compilers match the longest possible token whenever they have the choice.
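
For the record, here is the output I work out by hand for the five lines that compile. These are my own answers, so check them against the linked ones:

        // With the second println commented out, the remaining lines print:
        //
        //   Price is 75-25 dollars    <- price and then discount are appended to the string
        //   Price is 7525 dollars     <- unary - applied to unary + applied to -25 gives 25
        //   Price is 75-25 dollars    <- two unary minuses: -(-(-25)) is -25
        //   Price is 7525 dollars     <- the same tokens as the "7525" line above, minus the spaces
        //   Price is 75-26 dollars    <- the scanner sees --, so discount is pre-decremented to -26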

So whitespace does matter -- sometimes!

This turned into a good exercise for my compiler students today, as we just last time finished talking about lexical analysis and were set to talk about syntax analysis today. Coupled with the fact that they are in the midst of writing a scanner for their compiler, we were able to discuss some issues they need to keep in mind.

For me, this wasn't another example of Why Didn't I Know This Already?, but in the process of looking up "official" answers about Java I did learn something new -- and, like that lesson, it involved implicit type conversions of integral types. On page 27, the Java Language Reference says:

The unary plus operator (+) ... does no explicit computation .... However, the unary + operator may perform a type conversion on its operand. ... If the type of the operand is byte, short, or char, the unary + operator produces an int value; otherwise the operator produces a value of the same type as its operand.

The unary - operator works similarly, with the type conversion made before the value is negated. So, again, an operand is promoted to int in order to do arithmetic. I assume that this is done for the same reason that the binary operators promote bytes, shorts, and chars.
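
A quick way to see that promotion at work -- my own toy example, not one from the reference:

        short s = 1;
        // s = +s;             // does not compile: +s has type int, which cannot be
        //                     // assigned back to the short s without a cast
        s = (short) +s;        // an explicit cast back to short makes it legal
        System.out.println( s );  // prints 1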

Like many CS professors and students, I enjoy this sort of language trivia. I don't imagine that all our students do. If you'd like to see more Java trivia, check out Random Java Trivia at the Fishbowl. (You gotta love the fact that you can change the value of a string constant!) I've also enjoyed thumbing through Joshua Bloch's and Neal Gafter's Java Puzzlers. I am glad that someone knows all these details, but I'm also glad not to have encountered most of them in my own programming experience.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning