Someone recently tweeted about an interview with Fred Brooks in Wired magazine. Brooks is one of the giants of our field, so I went straight to the page. I knew that I wanted to write something about the interview as soon as I saw this exchange, which followed up questions about how a 1940s North Carolina schoolboy ended up working with computers:
Wired: When you finally got your hands on a computer in the 1950s, what did you do with it?
Brooks: In our first year of graduate school, a friend and I wrote a program to compose tunes. The only large sample of tunes we had access to was hymns, so we started generating common-meter hymns. They were good enough that we could have palmed them off to any choir.
It never surprises me when I learn that programmers and computer scientists are first drawn to software by a desire to implement creative and intelligent tasks. Brooks was first drawn to computers by a desire to automate data retrieval, which at the time must have seemed almost as fantastic as composing music. In a Communications of the ACM interview printed sometime last year, Ed Feigenbaum called AI the "manifest destiny" of computer science. I often think he is right. (I hope to write about that interview soon, too.)
But that's not the only great passage in Brooks's short interview. Consider:
Wired: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?
Brooks: You can learn more from failure than success. In failure you're forced to find out what part did not work. But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.
As an undergrad, I took a two-course sequence in assembly language programming and JCL on an old IBM 370 system. I don't know how much the JCL on that machine had advanced beyond Brooks's worst computer programming language ever devised, if it had at all. But I do know that the JCL course gave me a negative-split learning experience unlike any I had ever had before or have had since. As difficult as that was, I will be forever grateful to Dr. William Brown, a veteran of the IBM 360/370 world, and for what he taught me that year.
There are at least two more quotables from Brooks that are worth hanging on my door some day:
Great design does not come from great processes; it comes from great designers.
Hey to Steve Jobs.
The insight most likely to improve my own work came next:
The critical thing about the design process is to identify your scarcest resource.
This one line will keep me on my toes for many projects to come.
If great design comes from great designers, then how can the rest of us work toward the goal of becoming a great designer, or at least a better one?
Design, design, and design; and seek knowledgeable criticism.
Practice, practice, practice. But that probably won't be enough. Seek out criticism from thoughtful programmers, designers, and users. Listen to what they have to say, and use it to improve your practice.
A good start might be to read this interview and Brooks's books.
Our semester is underway. I've had the pleasure of meeting my compilers course twice and am looking forward to diving into some code next week. As I read these days, I am keenly watching for things I can bring into our project, both the content of defining and interpreting languages and the process of student teams writing a compiler. Of course, I end up imposing this viewpoint on whatever I read! Lately, I've been seeing a lot that makes me think about the development process for the semester.
Greg Wilson recently posted three rules for supervising student programming projects. I think these rules are just as useful for the students as they work on their projects. In a big project course, students need to think realistically about time, technology, and team interaction. I especially like the rule, "Steady beats smart every time". It gives students hope when things get tough, even if they are smart. More importantly, it encourages them to start and to keep moving. That's the best way to make progress, no matter how smart you are. (I gave similar advice during my last offering of compilers.) My most successful project teams in both the compilers course and in our Intelligent Systems course were the ones who humbly kept working, one shovel of dirt at a time.
I'd love to help my compiler students develop in an agile way, to the extent they are comfortable. Of course, we don't have time for a full agile development course while learning the intricacies of language translation. In most of our project courses, we teach some project management alongside the course content. This means devoting a relatively small amount of time to team and management functions. So I will have to stick to the essential core of agile: short iterations plus continuous feedback. As Hugh Beyer writes:
Everything else is there to make that core work better, faster, or in a more organized way. Throw away everything else if you must but don't trade off this core.
For the last couple of weeks, I have been thinking about ways to decompose the traditional stages of the compiler project (scanning, parsing, semantic analysis, and code generation) into shorter iterations. We can certainly implement the parser in two steps, first writing code to recognize legal programs and then adding code to produce abstract syntax. The students in my most recent offering of the compilers course also suggested splitting the code generation phase of the project into two parts, one for implementing the run-time system and one for producing target code. I like this idea, but we will have to come up with ways to test the separate pieces and get feedback from the earlier piece of our code.
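The two-step parser split can be sketched concretely. Everything below is illustrative (a toy sum-of-numbers grammar, not our project's actual language): the first iteration only recognizes legal input, and the second reuses the same structure to produce abstract syntax.

```python
# Toy illustration of splitting the parser into two iterations:
# first a recognizer that only accepts or rejects input, then a
# parser with the same shape that builds abstract syntax.
# Grammar (illustrative): expr -> term ('+' term)* ; term -> NUM

def tokenize(src):
    return src.replace('+', ' + ').split()

# Iteration 1: a recognizer. It answers yes/no and nothing more.
def recognize(tokens):
    pos = 0
    def term():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            pos += 1
            return True
        return False
    if not term():
        return False
    while pos < len(tokens) and tokens[pos] == '+':
        pos += 1
        if not term():
            return False
    return pos == len(tokens)

# Iteration 2: the same control structure, now producing an AST.
# (It assumes input already accepted by the recognizer.)
def parse(tokens):
    pos = 0
    def term():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        return ('num', int(tok))
    node = term()
    while pos < len(tokens) and tokens[pos] == '+':
        pos += 1
        node = ('add', node, term())
    return node

print(recognize(tokenize("1 + 2 + 3")))  # True
print(parse(tokenize("1 + 2 + 3")))
# ('add', ('add', ('num', 1), ('num', 2)), ('num', 3))
```

The point of the exercise is that the second iteration is a small delta over the first, which makes it a natural short iteration with testable output at each step.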
Another way we can increase feedback is to do more in-class code reviews of the students' compilers as they write them. A student from the same previous course offering wrote to me only yesterday, in response to my article on learning from projects in industry, suggesting that reviews of student code would have enhanced his project courses. Too often professors show students only their own code, which has been designed and implemented to be clean and easy to understand. A lot of the most important learning happens in working at the rough edges, encountering problems that make things messy and solving them. Other students have to confront and solve the same problems in their code, and reading that code and sharing experiences is a great way to learn.
I'm a big fan of this idea, of course, and have taught several of my courses using a studio style in the past. Now I just need to find a way to bring more of that style into my compilers course.
Today, I quoted Larry Wall's 2000 Atlanta Linux Showcase Talk in the first day of my compilers course. In that talk, he gives a great example of using a decompiler to port code -- in this case, from Perl 5 to Perl 6. While re-reading the talk, I remembered something that struck me as wrong when I read it the first time:
["If you can dream it, you can do it"--Walt Disney]
"If you can dream it, you can do it"--Walt Disney. Now this is actually false (massive laughter). I think Walt was confused between necessary and sufficient conditions. If you *don't* dream it, you can't do it; that is certainly accurate.
I don't think so. I think this is false, too. (Laugh now.)
It is possible to do things you don't dream of doing first. You certainly have to be open to doing things. Sometimes we dream something, set out to do it, and end up doing something else. The history of science and engineering is full of accidents and incidental results.
I once was tempted to say, "If you don't start it, you can't do it; that is certainly accurate." But I'm not sure that's true either, because of the first "it". These days, I'm more inclined to say that if you don't start doing something, you probably won't do anything.
Back to Day 1 of the compilers: I do love this course. The Perl quote in my lecture notes is but one element in a campaign to convince my students that this isn't just a compilers course. The value in the course material and in the project itself goes far beyond the creation of an old-style source language-to-machine language translator. Decompilers, refactoring browsers, cross-compilers, preprocessors, interpreters, and translators for all sorts of domain-specific languages -- a compilers course will help you learn about all of these tools, both how they work and how to build them. Besides, there aren't many better ways to consolidate your understanding of the breadth of computer science than to build a compiler.
The official title of my department's course is "Translation of Programming Languages". Back in 1994, before the rebirth of mainstream language experimentation and the growth of interest in scripting languages and domain-specific languages, this seemed like a daring step. These days, the title seems much more fitting than "Compiler Construction". Perhaps my friend and former colleague Mahmoud Pegah and I had a rare moment of foresight. More likely, Mahmoud had the insight, and I was simply wise enough to follow.
Andrew Gelman writes about a competition offered by Kaggle to find a better rating system for chess. The Elo system has been used for the last 40+ years with reasonable success. In the era of big data, powerful ubiquitous computers, and advanced statistical methods, it turns out that we can create a rating system that more accurately predicts the performance of players in near-future games. Very cool. I'm still enough of a chess geek that I want to know just when Capablanca surpassed Lasker and how much better Fischer was than his competition in the 1972 challenger's matches. I've always had an irrational preference for ridiculously precise values.
Even as we find systems that perform better, I find myself still attached to Elo. I'm sure part of it is that I grew up with Elo ratings as a player, and read Elo's The Rating of Chess Players, Past and Present as a teen.
But there's more. I've also written programs to implement the rating system, including the first program I ever wrote out of passion. Writing the code to assign initial ratings to a pool of players based on the outcomes of games played among them required me to do something I didn't even know was possible at the time: start a process that wasn't guaranteed to stop. I learned about the idea of successive approximations and how my program would have to settle for values that fit the data well enough. This was my first encounter with epsilon, and my first non-trivial use of recursion. Yes, I could have written a loop, but the algorithm seemed so clear written recursively. Such experiences stick with a person.
There is still more, though, beyond my personal preferences and experiences. Compared to most of the alternatives that do a better job objectively, the Elo system is simple. The probability curve is simple enough for anyone to understand, and the update process is basic arithmetic. Even better, there is a simple linear approximation of the curve that made it possible for a bunch of high school kids with no interest in math to update ratings based on games played at the club. We posted a small table of expected values based on rating differences at the front of the room and maintained the ratings on index cards. (This is a different sort of index-card computing than I wrote about long ago.) There may have been more accurate systems we could have run, but the math behind this one was so simple, and the ratings were more than good enough for our purposes. I am guessing that the Elo system is more than good enough for most people's purposes.
Simple and good enough is a strong combination. Perhaps the Elo system will turn out to be the Newtonian physics of ratings. We know there are better, more accurate models, and we use them whenever we need something very accurate. Otherwise, we stick to the old model and get along just fine almost all the time.
In the category programming for all, Paul Graham's latest essay explains his ideas about What Happened to Yahoo. (Like the classic Marvin Gaye album and song, there is no question mark.) Most people may not care about programming, but they ought to care about programs. More and more, the success of an organization depends on software.
Which companies are "in the software business" in this respect? ... The answer is: any company that needs to have good software.
If this was such a blind spot for an Internet juggernaut like Yahoo, imagine how big a surprise it must be for everyone else.
If you employ programmers, you may be tempted to stay within your comfort zone and treat your tech group just like the rest of the organization. That may not work very well. Programmers are a different breed, especially great programmers. And if you are in the software business, you want good programmers.
Hacker culture often seems kind of irresponsible. ... But there are worse things than seeming irresponsible. Losing, for example.
Again: If this was such a blind spot for an Internet juggernaut like Yahoo, imagine how big an adjustment it would be for everyone else.
I'm in a day-long retreat with my fellow department heads in the arts and sciences, and it's surprising how often software has come up in our discussions. This is especially true in recruitment and external engagement, where consistent communication is so important. It turns out the university is in the software business. Unfortunately, the university doesn't quite get that.
Zed Shaw is known for his rants. I've enjoyed many of them over the years. However, his Go To University, Not For CS hits awfully close to home. I love his defense of a university education, but he doesn't have much good to say about computer science programs. This is the punchline:
This is why you go to a University and also why you should not study Computer Science. Except for a few places like MIT, Computer Science is a pointless discipline with no culture. Everything is borrowed from somewhere else. It's the ultimate post modern discipline, and nobody teaching it seems to know what the hell "post-modernism" even means.
He is perhaps a bit harsh, yet what counterargument might we offer? If you studied computer science, did your undergrad alma mater or your graduate school have a CS culture? Did any of your professors offer a coherent picture of CS as a serious intellectual discipline, worthy of study independent of specific technologies and languages?
In graduate school, my advisor and I talked philosophically about CS, artificial intelligence, and knowledge in a way that stoked my interest in computing as a coherent discipline. A few of my colleagues shared our interests, but many of my fellow graduate students were more interested in specific problems and solutions. They viewed our philosophical explorations as diversions from the main attraction.
Unfortunately, when I look around at undergrad CS programs, I rarely see a CS culture. This is true of what I see at my own university, at my friends' schools, and at schools I encounter professionally. Some programs do better than others, but most of us could do better. Some of our students would appreciate the intellectual challenge that is computer science beyond installing the latest version of Linux or making Eclipse work with SVN.
Shaw offers one sentence of great advice for those of us thinking about undergrad curriculum:
... the things that are core to Computer Science like language design, parsing, or state machines, aren't even taught unless you take an "advanced" course.
I feel his pain. Few schools seem willing to design a curriculum built around core ideas packaged any differently from the way they were packaged in 1980. Students can graduate from most CS programs in this country without studying language design or parsing in any depth.
I can offer one counterpoint: some of us do know what post-modernism is and means. Larry Wall calls Perl the first postmodern computer language. More insightful to me, though, is work by James Noble and Robert Biddle, in particular Notes on Notes on Postmodern Programming, which I mentioned briefly a few years ago.
Shaw is right: there can be great value in studying at a university. We need to make sure that computer science students receive all of the value they should.
I was about 1.2 miles from the end of my 18-mile long run this morning. My legs were feeling tight, tight enough that I thought about stopping for a few seconds of recovery. That would be okay, right? This is just a training run.
This thought was pushed out of my mind by the realization that 1.2 miles to go is the 25-mile mark in a marathon. I'm sure that when I reach 25 miles in Des Moines this fall, my legs will be sore, sore enough that I will want to stop for a few seconds of precious relief. In the race, I want to have the strength of mind to finish. So I kept going.
Soon I turned the last corner of the run, only two-tenths of a mile from home. There I saw one last challenge: two-tenths of a mile uphill. Ack. Could I make it? A little rest would feel so good...
My mind flashed back to the end of my marathon, on a sunny morning in Chicago. Somewhere near the finish line we also made a right turn, and I saw an incline just like the one I faced this morning. My legs were sore, and I think I gave in to temptation and slowed, maybe even walked for a few feet. At the time, I told myself that this would enable me to "finish strong". In my next race, I want to have the strength of mind to finish strong all the way. So I kept going.
At both moments of challenge this morning, I asked myself, "Do I want to survive or finish?" For many people, surviving to the end of a marathon is the goal. It is an honorable goal, but it's not mine. I still have a desire to run the race, to push my body to its limit, to finish strong.
In general, you don't want to treat training runs like races. Training is about getting ready to run the race. A lot of what you do in training will be specific exercises that prepare your body. That's especially true of long runs, which are for teaching your body to run well for a long time and strengthening your muscles for the stress of a race. As you approach race day, more of your runs will begin to simulate race conditions, but even then you must take care not to overdo it. Otherwise, you risk injury that will limit your training and deprive your body of the practice it needs, or you might peak too early and be unable to muster top performance by race day.
Today's situation, though, highlighted one of my weaknesses, one I share with many runners. The decision to keep running was a training exercise for a specific skill that I will need in my marathon: the strength of mind to finish.
I finished strong. What a change from last week, a 16-miler in heat and humidity that knocked me for a loop. Sixteen miles is an annual hard run for me, as I cross over from runs of a comfortable length to runs that even I consider "long". I started today with a humble heart and ended better. Still, I know I have a long way to go before I am ready to run a marathon in two months.
Yesterday, one of my former students tweeted,
"Write a test first." But unit tests are low priority. The business wants high priority items first.
When I asked, "Is correct code a high priority?", he rephrased my question as, "Is proof of correct code a high priority?"
I knew what he meant and so dropped the exchange, sad that companies still settle for code without tests in the name of expediency. I try not to be dogmatic about writing tests with code, let alone driving the design with tests, and know there are times when not writing tests, at least for now, feels okay.
But then it occurred to me. When I write tests, the tests aren't really for "them", whoever "them" might be: my bosses, my colleagues, or my users. My best test suites are the ones I write for myself. They are a part of how I write code.
When I'm not doing TDD and not writing tests in parallel with my code -- when not writing tests feels okay -- I am almost always not writing interesting code. Perhaps I know the domain so well that I can write the code with my eyes closed. Perhaps the domain does not engage me enough that I care to get into the deep flow that comes with testing, growing, and refactoring my program. If the task is dull or routine, then tests seem unnecessary, a drag.
(Perhaps, though, I especially need to write tests in these situations, to guard against producing bad code as a result of complacency and boredom!)
When I am writing code I enjoy, the tests are for me. Saying to me, "Don't take time to write tests" is like telling me not to use version control. It's like saying to me, "Don't take time to format your code properly" or "Don't bother naming your variables properly". Not writing tests seems foreign. Not writing tests is an impediment to writing code at all.
It's really not a matter of "taking time to write a test first". I write tests because that's how I write code.
Former student and current ThoughtWorker Chris Turner sent me an article on ThoughtWorks University's new project-based training course. I mentioned Chris once before, soon after he joined ThoughtWorks. (I also mentioned his very cool research on "zoomable" user interfaces, still one of my all-time favorite undergrad projects.)
Chris took one of my early offerings of agile software development, one that tried to mix traditional in-class activities with a "studio approach" to a large team project. My most recent offering of the course turned the knobs a bit higher, with two weeks of lecture and pair learning exercises followed by two weeks of intensive project. I really like the results of the new course but wonder how I might be able to do the same kind of thing during the regular semester, when students take five courses and typically spend only three hours a week in class over fifteen weeks.
The ThoughtWorks U. folks do not work under such constraints and have even more focused time available than my one-month course. They bring students in for six weeks of full-time work. Not surprisingly they came to question the effectiveness of their old approach: five weeks of lecture and learning activities followed by a one-week simulation of a project. Most of the learning, it seemed, happened in context during the week-long project. Maybe they should expand the project? But... there is so much content to teach!
Eventually they asked themselves the $64,000 Question:
"What if we don't teach this at all? What's the worst that can happen?"
I love this question. When trying to balance practice in context with yet another lecture, university professors should ask this question about each element of the courses they teach. Often the answer is that students will have to learn the concept from their experience on real projects. Maybe students need more experience on real projects, not more lecture and more homework problems from the back of the textbook chapter!
The folks at TWU redesigned their training program for developers to consist of two weeks of training and four weeks of project work. And they -- and their students -- seem pleased with the results.
... information in context trumped instruction out of context in a huge way. The project was an environment for students to fail in safety. Failure created the need for people to learn and a catalyst for us to coach and teach. A real project environment also allowed students to learn to learn.
This echoes my own experience and is one of the big reasons I think so much about project-based courses. Students still need to learn ideas and concepts, and some will need more direct individual assistance to pick them up. The ThoughtWorks folks addressed that need upfront:
We also created several pieces of elearning to help students gain some basic skills when they needed them. Coupled with a social learning platform and a 6:1 student-coach ratio, we were looking at a program that focussed heavily on individualisation as against an experience that was one-size-fits-all-but-fits-nobody. Even with the elearning, we ensured that we were pragmatic in partnering with external content providers whose content met our quality standards.
This is a crucial step, and one that I would like to improve before I teach my agile course again. I found lots of links to on-line resources students could use to learn about agile and XP, but I need to create better materials in some areas and create materials to fill gaps in the accessible web literature. If I want to emphasize the project in my compiler course even more, I will need to create a lot of new materials. What I'd really like to do is create active e-learning resources, rather than text to read. The volume, variety, and quality of supporting materials is even more important if we want to make projects the central activity in courses for beginners.
By the way, I also love the phrase "one-size-fits-all-but-fits-nobody".
When faculty who teach more traditional courses in more traditional curricula hear stories such as this one from TWU, they always ask me the same question: How much does the success of such an industry training program depend on "basic knowledge" students learned in traditional courses? I wonder the same thing. Could we start CS1 or CS2 with two weeks of regular classes followed by four weeks of project? What would work and what wouldn't? Could we address the weaknesses to make the idea work? If we could, student motivation might reach a level higher than we see now. Even better, student learning might be improved as they encounter ideas as they need them to solve problems that matter. (For an opinion to the contrary, see Moti Ben-Ari's comments as reported by Mark Guzdial.)
School starts in a week, so my thoughts have turned to my compiler course. This course is already based on one of the classic project experiences that CS students can have. There is a tendency to think all is well with the basic structure of the course and that we should leave it alone. That's not really my style. Having taught compilers many times, I know my course's strengths and weaknesses and know that it can be improved. The extent to which I change it is always an open question.
With the analysis of Deolalikar's P != NP paper now under way in earnest, I am reminded of a great post last fall by Lance Fortnow, The Humbling Power of P v NP. Why should every theorist try to prove P = NP and P != NP?
Not because you will succeed but because you will fail. No matter what teeth you sink into P vs NP, the problem will bite back. Think you solved it? Even better. Not because you did but because when you truly understand why your proof has failed you will have achieved enlightenment.
You might even succeed, though I'm not sure if the person making the attempt achieves the same kind of enlightenment in that case.
Even if Deolalikar's proof holds up, Fortnow's short essay will still be valuable and true.
We'll just use a different problem as our standard.
I seem to be running across the fail early, fail often meme a lot lately. First, in an interview on being wrong, Peter Norvig was asked how Google builds tolerance for the inevitable occasional public failures of its innovations "into a public corporation that's accountable to its bottom line". He responded:
We do it by trying to fail faster and smaller.
One of the ways they do this is by keeping iterations short and teams small.
Then this passage from Seth Godin's recent e-book, Insubordinate, jumped out as another great example:
As a result of David [Seuss]'s bias for shipping, we failed a lot. Products got returned. Commodore 64 computers would groan in pain as they tried to run software that was a little too advanced for their puny brains. It didn't matter, because we were running so fast that the successes supported us far more than the failures slowed us down.
In a rapidly changing environment, not to change is often a bigger risk than to change. In an environment most people don't understand well, in which information is unavailable and unevenly distributed, not to change is often a bigger risk than to change.
However, it's important not to fetishize failure, as some people seem to do. As Dave Winer reminds us, embracing failure is a good way to fail. Sometimes, you have to look at what failure will mean and muster a level of determination that denies failure in order to succeed.
This all seems so contradictory... but it's not. As we humans often do, we create rules for behavior that are underspecified in terms of context and the problem being solved. There are a lot of trade-offs in the mix when we talk about success and failure. For example, we need to distinguish between failure in the large and failure in the small. When an agile developer is taking small steps, she can afford to fail on a few -- especially if the failure teaches her something about how to succeed more reliably in the future. The new information gained is worth the cost of the loss.
In the passage from Godin, successes happened, too, not only losses, and the wins more than offset the losses. In that context, it seems that the advice is not about failure so much as getting over fear of failure. When we fear failure so much that we do not act, we deprive ourselves not only of losses but also of wins. Not failing gets in the way of succeeding, of learning and growing.
Winer was talking about something different, what I'm calling in my mind "ultimate failure": sending the employees home, shutting the doors, and turning the lights off for good. That is different than the "transitory failures" Godin was talking about, the sort of failures we experience when we learn in a dynamic, poorly understood environment. Still, Winer might not take any comfort in that idea. His company was at the brink, and only making the product work and sell was good enough to pull it back. At that moment, he probably wasn't interested in thinking about what he could learn from his next failure.
Sometimes, even the small failures can close doors, at least for a while. That's why so many entrepreneurs and commentators on start-up companies encourage people to fail early, before too many resources have been sunk into the venture, before too many people have been drawn into the realm affected by success or failure -- when a failure means that the entrepreneur simply must start over with her primary assets: her energy, determination, and effort.
When I was decorating my first college dorm room, I hung three small quotes on the wall over the desk. One of them comes to mind now. It was from Don Shula, the head coach of my favorite pro football team, the Miami Dolphins:
Failure isn't fatal, and success isn't final.
This seemed like a good mantra to keep in mind as I embarked on a journey into the unknown. It has served me well for many years now, including my time as a programmer and a teacher.
In the last few weeks, I've seen a few interesting metaphors related to agile development. Surprisingly, one of them was actually a metaphor, XP-style.
The Mute Button
Like many newcomers to XP, my students tend not to get the reason that "metaphor" was one of the original XP practices. I try to give examples that I've seen, but my set of examples is too small. That's one reason I was excited when some agile practitioners in industry created a new discussion list on metaphor in software. Joshua Kerievsky posted a link to a blog entry he had written about two metaphors that have helped him recently.
In one case, his company has started using the idea of a playlist for the products it sells, instead of a book. In the other, which is the main theme of his entry, he extrapolates from the presence of a "mute" feature in the Twitter client Twittelator to a riff on thinking about Twitter as if it were television. There are some interesting similarities, as well as interesting differences. But it's a good example of how a salient metaphor can be a source of common experience for guiding the design of a software system.
Refactoring "Technical Debt"
A few months back, I wrote an entry on technical debt, suggesting that some technical debt might be okay to carry, so long as we incur it for investment, not consumption. Not everyone believes this, of course. I've been heartened by Kent Beck's writing over the last couple of years about his own questioning of when we might break our own rules, at least the rules as they have been calcified over time or hardened against the challenges of skeptical audiences.
Last month, Steve Freeman proposed a new picture of bad code: it isn't technical debt; it's an unhedged call option. This metaphor highlights the risk we run when we allow bad code to remain in the build. My metaphor's willingness to carry debt for investment implies a risk, too, because some investments fail to deliver as hoped. Freeman's metaphor raises this risk to a more dangerous level. I like his story and think it applies quite nicely in many contexts.
Still, I'm willing to accept lower quality code in trade for something of greater value now -- as long as I keep my eye on the balance sheet and remain vigilant about the debt load I carry. If the load begins to creep higher, or if I begin to spend too many resources servicing the debt, then I want to clean the code up right now. The cost of the debt has risen above the value of the investment it bought me. One of the nice things about software is that we can make changes to improve its structure, if we are willing to spend the time doing so.
What is Refactoring?
Finally, here is a metaphor you can use to explain refactoring to people who don't get it yet: refactoring is a time machine. I'll be smarter tomorrow, more experienced, better informed about what the system I am building today should look like. That is when I should hop in the time machine and make my code look the way it ought, given what I know then. Boy, that takes a lot of pressure off trying to fake the perfect design today, when I don't know yet what it is.
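To make the time-machine idea concrete, here is a minimal, hypothetical sketch: code written "yesterday", before we understood the domain, and the same code refactored "today" with better names and extracted functions. All of the names and numbers are invented for illustration; the point is only that behavior stays the same while the structure catches up with what we now know.

```python
# Yesterday's code, written before we knew what it really meant.
def process(d):
    t = 0
    for x in d:
        t += x["price"] * x["qty"]
    return t * 1.07


# Today, smarter, we step into the time machine and make the code
# say what we now know it means. The behavior does not change.
TAX_RATE = 0.07  # hypothetical sales tax rate

def order_subtotal(line_items):
    """Sum price times quantity over all line items."""
    return sum(item["price"] * item["qty"] for item in line_items)

def order_total(line_items, tax_rate=TAX_RATE):
    """Subtotal plus sales tax."""
    return order_subtotal(line_items) * (1 + tax_rate)


# The two versions agree, which is the whole point of a refactoring.
items = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
assert abs(process(items) - order_total(items)) < 1e-9
```

The final assertion is the safety rope for the time traveler: as long as the old and new versions agree on the same inputs, we are free to keep improving the names and structure as our understanding grows.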
(If I could travel in a blogging time machine, I might go back to last Friday, unmention the Kent Beck failing today, and find a way to use it for the first time here!)
My newsreader and inbox are full of recent articles about computing and education in the news. First, there is a New York Times Technology section piece on Scott McNealy's open-source textbook project, Curriki. When I first read this, I thought for sure I would blog on the idea of free and open-source textbooks today. The more I thought about it, and especially the more I tried to write about it, the less I found I have to say right now. Mark Guzdial has already responded with a few concerns he has about open-source textbooks. Guzdial conflates "open source" with "free", as does the Times piece, though McNealy's project seems to be mostly about offering low-cost or free alternatives to increasingly expensive school books. Most of Guzdial's concerns echo the red flags people raised about free and open-source software in the past, and we have since seen the effect FOSS has had on the world.
Maybe I'll have something cogent to say some other day, but for now, all I can think is, "Is that a MacBook Pro in the photo of McNealy and his son?" If so, even a well-placed pencil holder can't hide the truth!
Then there is a blog entry at Education Week on the Computer Science Education Act, a bill introduced in the U.S. House of Representatives last week aimed at improving the state of K-12 CS education. Again, any initial excitement to write at length on this topic faded as I thought more about it. This sort of bill is introduced all the time in Congress with little or no future, so until I see this one receive serious attention from House leaders, I'll think of it as mostly good PR for computer science. I do not generally think that legislation of this kind has a huge effect on practice in the schools, which are much too complicated to be pushed off course by a few exploratory grants or a new commission. That said, it's nice that a few higher-ups in education might think deeply about the role CS might and could play in 21st-century K-12 education. This ain't 1910, folks.
Finally, here's one that I can blog about with excitement and a little pride: One of my students, Nick Cash, has been named one of five finalists in Entrepreneur Magazine's Entrepreneur of 2010 contest. Nick is one of those bright guys for whom our education system is a poor fit, because he is thinking bigger thoughts than "when is the next problem set due?" He has been keeping me apprised of his start-up every so often, but things change so fast that it is hard for me to keep up.
One of the things that makes me proud is the company he is keeping in that final five. Maryland and Michigan are big-time universities with big-time business schools. Though you may not have heard of Babson College, it has long had one of the top-ranking undergraduate entrepreneurship programs in the country. (I know that in part because I double-majored in accounting at Ball State University, which also has a top-ranked entrepreneurship center for undergrads.) UNI has been doing more to support student entrepreneurship over the last few years, including an incubator for launching start-ups. Still, Nick has made it to the finals against students who come from better-funded and better-known programs. That says even more about his accomplishment.
Nick's company, Book Hatchery, is certainly highly relevant in today's digital publishing market. I'll be wishing him well in the coming years and helping in any way he asks. Check out the link above and, if you are so inclined, cast a vote for his start-up in the contest!