Thanks to several of you who have pointed out that Scheme behaves exactly like Python on the Python example in my previous post:
> (define (f x) (if (> x 0) (f (- x 1)) 0))
> (define g f)
> (define (f x) x)
> (g 5)
4
Sigh. Indeed, it does. That is what I get for writing code for a blog and not test-driving it first. As an instructor, I learned long ago about the dangers of making claims about code that I have not executed and explored a bit -- even seemingly simple code. Now I can re-learn that lesson here.
The reason this code behaves as it does in both Scheme and Python is that the body of f doesn't involve a closure at all. It refers to a free variable that must be looked up in the top-level environment when it is executed.
While writing that entry, I was thinking of a Scheme example more like this:
(define f
  (lambda (x)
    (letrec ((f (lambda (x)
                  (if (> x 0) (f (- x 1)) 0))))
      (f x))))
... in which the recursive call (f (- x 1)) takes place in the context of a local binding. It is also a much more complex piece of code. I do not know whether there is an idiomatic Python program similar to this or to my earlier Scheme example:
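For what it's worth, one plausible Python analog of the letrec version (my own sketch, and probably not how a real Python programmer would write it) binds the recursive helper in a local scope, so that rebinding the top-level name has no effect on the recursion:

```python
def f(x):
    def rec(n):                 # locally bound, like the letrec f
        if n > 0:
            return rec(n - 1)   # this call resolves to the local rec
        return 0
    return rec(x)

g = f
def f(x):                       # rebind the top-level f
    return x

print(g(5))                     # 0 -- the recursion is unaffected
```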
(define f
  (let ((i 100))
    (lambda (x) (+ x i))))

(define i 1)

(f 1)
If there is, I suspect that real Python programmers would say that they simply don't program in this way very often. As van Rossum's The History of Python blog points out, Python was never intended as a functional language, even when it included features that made functional-style programming possible. So one ought not be too surprised when a purely functional idiom doesn't work out as expected. Whether that is better or worse for students who are learning to program in Python, I'll leave up to the people responsible for Python.
One can find all kinds of discussion on the web about whether Python closures are indeed "broken" or not, such as here (the comments are as interesting as the article). Until I have a little more time to dig deeper into the design and implementation of Python, though, I will have to accept that this is just one of those areas where I am not keeping up as well as I might like. But I will get to the bottom of this.
Back to the original thought behind my previous post: It seems that Python is not the language to use as an example of dynamic scope. Until I find something else, I'll stick with Common Lisp and maybe sneak in a little Perl for variety.
When I teach programming languages, we discuss the concepts of static and dynamic scoping. Scheme, like most languages these days, is statically scoped. This means that a variable refers to the binding that existed when the variable was created. For example,
> (define f (let ((i 100)) (lambda (x) (+ x i))))
> (define i 1)
> (f 1)
101
This displays 101, not 2, because the reference to i in the body of function f is to the local variable i that exists when the function was created, not to the i that exists when the function is called. If the interpreter looked to the calling context to find its binding for i, that would be an example of dynamic scope, and the interpreter would display 2 instead.
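A Python version of this example behaves the same way, because Python functions do close over the bindings of their enclosing scope. (This is my own sketch; the name make_f is just for illustration.)

```python
def make_f():
    i = 100
    def f(x):
        return x + i    # i refers to the binding in the enclosing scope
    return f

f = make_f()
i = 1                   # a different, top-level i
print(f(1))             # 101, not 2
```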
Most languages use static scoping these days for a variety of reasons, not the least of which is that it is easier for programmers to reason about code that is statically scoped. It is also easier to decompose programs and create modules that programmers can understand easily and use reliably.
In my course, when looking for an example of a dynamically-scoped language, I usually refer to Common Lisp. Many old Lisps were scoped dynamically, and Common Lisp gives the programmer the ability to define individual variables as dynamically-scoped. Lisp does not mean much to students these days, though. If I were more of a Perl programmer, I would have known that Perl offers the same ability to choose dynamic scope for a particular variable. But I'm not, so I didn't know about this feature of the language until writing this entry. Besides, Perl itself is beginning to fade from the forefront of students' attention these days, too. I could use an example closer to my students' experience.
A recent post on why Python does not optimize tail calls brought this topic to mind. I've often heard it said that in Python closures are "broken", which is to say that they are not closures at all. Consider this example drawn from the linked article:
IDLE 1.2.1
>>> def f(x):
        if x > 0:
            return f(x-1)
        return 0
>>> g = f
>>> def f(x):
        return x
>>> g(5)
4
g is a function defined in terms of f. By the time we call g, f refers to a different function at the top level. The result is something that looks a lot like dynamic scope.
I don't know enough about the history of Python to know whether such dynamic scoping is the result of a conscious decision of the language designer or not. Reading over the Python history blog, I get the impression that it was less a conscious choice and more a side effect of having adopted specific semantics for other parts of the language. Opting for simplicity and transparency as overarching goals sometimes means accepting their effects downstream. As my programming languages students learn, it's actually easier to implement dynamic scope in an interpreter, because you get it "for free". To implement static scope, the interpreter must go to the effort of storing the data environment that exists at the time a block, function, or other closure is created. This leads to a trade-off: a simpler interpreter supports programs that can be harder to understand, and a more complex interpreter supports programs that are easier to understand.
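To make that trade-off concrete, here is a toy evaluator for a tiny expression language (a hypothetical sketch of my own, not anyone's actual interpreter) in which the entire difference between the two disciplines is which environment a function call extends:

```python
def evaluate(expr, env, static=True):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                 # variable reference
        return env[expr]
    op = expr[0]
    if op == 'lambda':                        # ('lambda', param, body)
        _, param, body = expr
        return ('closure', param, body, env)  # static scope must save env here
    if op == 'let':                           # ('let', name, value, body)
        _, name, value, body = expr
        new_env = dict(env)
        new_env[name] = evaluate(value, env, static)
        return evaluate(body, new_env, static)
    if op == '+':
        return evaluate(expr[1], env, static) + evaluate(expr[2], env, static)
    # otherwise: an application, (function, argument)
    clo = evaluate(expr[0], env, static)
    arg = evaluate(expr[1], env, static)
    _, param, body, saved_env = clo
    base = saved_env if static else env       # the entire difference
    call_env = dict(base)
    call_env[param] = arg
    return evaluate(body, call_env, static)

# (define f (let ((i 100)) (lambda (x) (+ x i)))), then (define i 1) and (f 1)
prog = ('let', 'f',
        ('let', 'i', 100, ('lambda', 'x', ('+', 'x', 'i'))),
        ('let', 'i', 1, ('f', 1)))

print(evaluate(prog, {}, static=True))    # 101 -- static scope
print(evaluate(prog, {}, static=False))   # 2   -- dynamic scope
```

Notice that under dynamic scope the saved environment in the closure is never consulted; an interpreter that only ever extends the caller's environment need not build closures at all, which is why dynamic scope comes "for free".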
So for now I will say that dynamic scope is a feature of Python, not a bug, though it may not have been one of the intended features at the time of the language's design.
If any of your current favorite languages use or allow dynamic scope, I'd love to hear about it -- and especially whether and how you ever put that feature to use.
I occasionally sneak a peek at the Learning Curves blog when I should be working. Yesterday I saw this entry, with a sad bullet point for us CS profs:
Keep getting caught up in stupid details on the computer science homework. String handling. Formatting times. That sort of thing. The problem no longer interests me now that I grasp the big idea.
This is an issue I see a lot in students, usually the better ones. In some cases, the problem is that the students feel they have a right not to be bothered with any details, stupid or otherwise. But a lot of programming involves stupid details. So do most other activities, like playing the piano, playing a sport, reading a book, closing a company's financial books, or running a chemistry experiment.
Life isn't a matter only of big ideas that never come into contact with the world. Our fingers have to strike the keys and play the right notes, in the correct order at the proper tempo. I can understand the big ideas of shooting a ball through a hoop, but players succeed because they shoot thousands of shots, over and over, paying careful attention to details such as their point of release, the rotation of the ball, and the bending of their knees.
There may be an element of this in Hirta's lament, but I do not imagine that this is the whole of her problem. Some details really are stupid. For the most part, basketball players need not worry about the lettering on the ball, and piano players need not think about whether their sheet music was printed on 80% or 90% post-consumer recycled paper. Yet too often people who write programs have to attend to details just as silly, irrelevant, and disruptive.
This problem is even worse for people learning to write programs. "Don't worry what public static void main( String[] args ) means; just type it in before you start." Huh? Java is not alone here. C++ throws all sorts of silly details into the faces of novice programmers, and even languages touted for their novice-friendliness, such as Ada, push all manner of syntax and convention into the minds of beginners. Let's face it: learning to program is hard enough. We don't need to distract learners with details that don't contribute to learning the big idea, and maybe even get in the way.
If we hope to excite people with the power of programming, we will do it with big ideas, not the placement of periods, spaces, keywords, and braces. We need to find ways so that students can solve problems and write programs by understanding the ideas behind them, using tools that get in the way as little as possible. No junk allowed. That may be through simpler languages, better libraries, or something else that I haven't learned about yet.
(And please don't post a link to this entry on Reddit with a comment saying that that silly Eugene fella thinks we should dumb down programming and programming languages by trying to eliminate all the details, and that this is impossible, and that Eugene's thinking it is possible is a sign that he is almost certainly ruining a bunch of poor students in the heartland. Re-read the first part of the entry first...)
Oh, and for my agile developer friends: Read a little farther down the Learning Curves post to find this:
Email from TA alleges that debugging will be faster if one writes all the test cases ahead of time because one won't have to keep typing things while testing by hand.
Hirta dismisses the idea, saying that debugging will still require diagnosis and judgment, and thus be particular to the program and to the bug in question. But I think her TA has re-discovered test-first programming. Standing ovation!
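The TA's insight, in miniature (my own toy example, not from the post): capture the checks you would otherwise keep retyping at the prompt as executable tests, written once, up front, and rerun after every change.

```python
def to_celsius(f):
    return (f - 32) * 5 / 9

# Written before (or instead of) testing by hand at the prompt:
def test_to_celsius():
    assert to_celsius(32) == 0
    assert to_celsius(212) == 100
    assert abs(to_celsius(98.6) - 37.0) < 1e-9

test_to_celsius()   # rerun after every change instead of retyping inputs
```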
We have fallen behind in my compilers course. You may recall that before the semester I contemplated some changes in the course, including letting the students design their own language. My group of six chose that route, and as a part of that choice decided to work as a team of six, rather than in pairs or threes. This was the first time for me to have either of these situations in class, and I was curious to see how it would turn out.
Designing a language is tough, and even having lots of examples to work from, both languages and documents describing languages, is not enough to make it easy. We took a little longer than I expected. Actually, the team met its design deadline (with no time to spare, but...), but then followed a period of thinking more about the language. We all needed to come to a better understanding of the implications of some of the design decisions. Over time they changed their definition, sometimes refining and sometimes simply making the language different. This slowed the process of starting to implement the language and caused a few false starts in the scanner and parser.
Such bumps are a natural part of taking on the tougher problem of creating the language, so I don't mind that we are behind. I have learned a few things to do differently the next time a compiler class chooses this route. Working as a team of six increases the communication overhead they face, so I need to do a better job preparing them for the management component of such a large development project. It's hard for a team to manage itself, either through specific roles that include a nominal team leader or through town-hall style democracy. As the instructor, I need to watch for moments when the team needs me to take the rudder and guide things a bit more closely. Still, I think that this has been a valuable experience for the students. When they get out into industry, they will see successes and failures of the sort they've created for themselves this semester.
Still, things have gone reasonably well. It's almost inevitable that occasional disagreements about technical detail or team management will arise. People are people, and we are all hard to work with sometimes. But I've been happy with the attitude that all have brought to the project. I think all have shown a reasonable degree of commitment to the project, too, though they may not realize yet just what sort of commitment getting a big project like this done can require.
I have resisted the urge to tell (or re-tell?) the story of my senior team project: a self-selected team of good programmers and students who nonetheless found ways to fall way behind their development schedule. We had no way to change the scope of the system, struggled mightily in the last weeks of the two-term project, and watched our system crash on the day of the acceptance test. The average number of hours I spent on this project each week during its second term? 62. And that was while taking another CS course, two accounting courses, and a numerical analysis course -- of whose final exam I have literally no recollection, because by that time I was functioning on nearly zero sleep for days on end. This story probably makes me sound crazy -- not committed, but in need of being committed. Sometimes, that's what a project takes.
On the technical side, I will do more next time to accelerate our understanding of the new language and our fixing of the definition. One approach I'm considering is early homework assignments writing programs in the new language, even before we have a scanner or parser. This causes us all to get concrete sooner. Maybe I will offer extra-credit points to students who catch errors in the spec or in other students' programs. I'll definitely give extra credit for catching errors in my programs. That's always fun, and I make a perfect foil for the class. I am sure both to make mistakes and to find holes in their design or their understanding of it.
But what about this semester? We are behind, with three weeks to D-Day. What is the best solution?
The first thing to recognize is that sometimes this sort of thing happens. I do not have the authority to implement a death march, short of becoming an ineffective martinet. While I could try telling students that they will receive incompletes until the project is finished, I don't really have the authority to push the deadline of the project beyond the end of our semester.
The better option is one not made available to my project team in school, but which we in the software world now recognize as an essential option: reduce the scope of the project. The team and I discussed this early in the week. We can't do much to make the language smaller, because it is already rather sparse in data types, primitive operators, and control structure. The one thing we could drop is higher-order procedures, but I was so pleased when they included this feature that I would feel bad watching it drop out now. And that would not really solve our problem: I am not sure they could complete a compiler for the rest of the language in time anyway.
We decided instead to change the target language from JVM bytecodes to Java itself. This simplifies what remains for them quite a bit, but not so much that it makes the job easy. The big things we lose are designing and implementing a low-level run-time system and emitting machine-level code. The flip side is that we decided to retain the language's support for higher-order procedures, which is not trivial to implement in generated Java code. They'll still get to think about and implement closures, perhaps using anonymous inner classes to implement function arguments and results.
This results in a different challenge, and a change in the experience the students will have. The object lesson is a good one. We have made a trade-off, and that is the nature of life for programmers. Change happens, and things don't always proceed according to plan. So we adapt and do the best we can. We might even spend 60 hours one week working on our project!
For me, the biggest effect of the change is on our last two and a half weeks of lecture. Given where we are and what they will be doing, what do they most need to learn? What ideas and techniques should they see even if they won't use them in their compilers? I get to have some fun right up to the end, too.
Last evening, Mike Feathers tweeted a provocative idea: The world might be better if all code disappeared at a fixed age and we had to constantly rewrite it. Heck, he could even write a tool to seek and destroy all code as it reaches the age of three months. Crazy, huh?
Maybe this idea is not so crazy. At my university and most other places, hardware is on a 3- or 4-year "replacement cycle". Whether or not we need new computers in our student labs, we replace them on a schedule. Why? Because we recognize that hardware reaches a natural "end of life". Using it beyond that time means that we carry an ever-increasing risk that it will fail. Rather than let it fail and be caught without it for a short while, we accept the upfront cost of replacing it with newer, more reliable, better equipment. The benefit is peace of mind, and more reliable performance.
Maybe we should recognize that software can be like hardware. It reaches a natural "end of life" -- not because physical components wear out, but because the weight of changing requirements and growing desires push it farther out of compliance with reality. (This is like when we replace a computer because its processor speed and RAM size fall out of compliance with reality: the demands of new operating systems and application software.) Using software beyond its natural end of life means that we carry an ever-increasing risk of failure -- when it actually breaks, or when we "suddenly" need to spend major time and money to "maintain" it. Rather than risk letting our software fail out from under us, we could accept the cost of replacing it with newer, more reliable, better software.
One of the goals of the agile software development community is to reduce the cost of changing our code. If agile approaches are successful, then we might be more willing to bear the risk of our code falling away from reality, because we are not afraid of changing it. (Agile approaches also value continuous feedback, which allows us to recognize the need for change early, perhaps before it becomes too costly.) But there may be times or environments in which these techniques don't work as well as we like.
Suppose that we committed to rewriting 1/4 of every system every year. This would allow graceful, gradual migration to new technologies. A possible cost of this strategy is increasing complexity, because our systems would come to be amalgams of two, three, or even four technologies and programming languages interoperating in one system. But is this all that much different from life now? How many of our systems already consist of modules in several languages, several technologies, several styles?
Another side of me is skeptical. Shouldn't our programs just keep working? Why not take care to design them really well? Why not change small parts of the system as needed, rather than take on wholesale changes we don't need yet? Doesn't this approach violate the principle of YAGNI?
Another advantage of the approach: It gives us a built-in path to continuous learning. Rewriting part of a system means digging into the code, learning or re-learning what it's made of and how it works. With pair programming, we could bring new people into a deeper understanding of the code. This would help us to increase the feeling of collective code ownership, as well as preserving and replenishing corporate memory.
Another disadvantage of the approach: It is hard to maintain this sort of discipline in hard financial times. We see this with hardware, too. When money is short, we often decide to lengthen or eliminate the replacement cycle. In such times, my colleagues who traffic in computer labs and faculty desktop computers are a little more worried than usual; what happens if... Software development seems always to be under financial pressure, because user demands grow to outpace our capacity to meet them. Even if we decided to try this out for software, administration might immediately declare exigency and fall back into the old ways: build new systems, now, now, now.
Even after thinking about the idea for a while now, it still sounds a little crazy. Then again, sometimes crazy ideas have something to teach us. It is not so crazy that I will dismiss it out of hand. Maybe I will try it some time just to see how it works. If it does, I'll give Mike the credit!
... for a definition of "the day" that includes when I read them, not when the authors posted them.
Tweet of the Day
Marick's Law: In software, anything of the form "X's Law" is better understood by replacing the word "Law" with "Fervent Desire".
-- Brian Marick
I love definitions that apply to themselves. They are the next best thing to recursion. I will have plenty of opportunities to put Brian's fervent desire into practice while preparing to teach software engineering this fall.
Non-Tech Blog of the Day
I don't usually quote former graffiti vandals or tattoo artists here. But I am an open-minded guy, and this says something that many people prefer not to hear. Courtesy of Michael Berman:
"Am I gifted or especially talented?" Cartoon said. "No. I got all this through hard work. Through respecting my old man. From taking direction from people. From painting when everyone else was asleep. I just found something I really love and practiced at it my whole life."
-- Mister Cartoon
Okay, so I am tickled to have quoted a guy named Mister Cartoon. His work isn't my style, but his attitude is. Work. Respect. Deference. Practice. Most days of the week, I would be well-served by setting aside my hubris and following Mister Cartoon's example.
The last few days I have run across several pointers to Scala and Clojure, two languages that support a functional programming style on the JVM. Whenever I run into a new programming language, I start thinking about how much time I should put into learning and using it. If it is a functional language, I think a bit harder, and naturally wonder whether I should consider migrating my Programming Languages course from the venerable but now 30-plus-years-old Scheme to the new language.
My time is finite and in scarce supply, so I have to choose wisely. If I try to chase every language thread that pops up everywhere, I'll end up completely lost and making no progress on anything important. Choosing which threads to follow requires good professional judgment and, frankly, a lot of luck. In the worst case, I'd like to learn something new for the time I invest.
Scala and Clojure have been on my radar for a while and, like books that receive multiple recommendations, are nearing critical mass for a deeper look. With summer around the corner and my usual itch to learn something new, chances go up even more.
Could one of these languages, or another, ever displace Scheme from my course? That's yet another major issue. A few years ago I entertained the notion of using Haskell in lieu of Scheme for a short while, but Scheme's simplicity and dynamic typing won out. Our students need to see something as different as possible from what they are used to, whatever burden that places on me and the course. My own experience with Lisp and Scheme surely had some effect on my decision. For every beautiful idea I could demonstrate in Haskell, I knew of a similar idea or three in Scheme.
I've noticed that computer science faculty technology usage often seems to be frozen to when they start their first tenure-track job. Unclear yet if I'll get stuck to 2008 technology.
Lam is wise to think consciously of this now. I know I did not. Then again, I think my track record learning new technologies, languages, and tools through the 1990s, in my first decade as a tenure-track professor, holds up pretty well. I picked up several new programming languages, played with wikis, adopted and used various tools from the agile community, taught several new courses that required more than passing familiarity with the tools of those subdisciplines, and did a lot of work in the software patterns world.
My pace learning new technologies may have slowed a bit in the 2000s, but I've continued to learn new things. Haskell, Ruby, Subversion, blogs, RSS, Twitter, ... All of these have become part of my research, teaching, or daily practice in the last decade. And not just as curiosities next to my real languages and tools; Ruby has become one of my favorite programming languages, alongside old-timers Smalltalk and Scheme.
A language that doesn't affect
the way you think about programming
is not worth knowing.
-- Alan Perlis, Epigrams on Programming
At some point, though, there is something of a "not again..." feeling that accompanies the appearance of new tools on the scene. CVS led to Subversion, which led to ... Darcs, Mercurial, Git, and more. Which new tool is most worth the effort and time? I've always had a fondness for classics, for ideas that will last, so learning yet another tool of the same kind looks increasingly less exciting as time passes. Alan Perlis was right. We need to spend our time and energy learning things that matter.
This approach carries one small risk for university professors, though. Sticking with the classics can leave one's course materials, examples, and assignments looking stale and out of touch. Any CS 1 students care to write a Fahrenheit-to-Celsius converter?
In the 1990s, when I was learning a lot of new stuff in my first few years on the faculty, I managed to publish a few papers and stay active. However, I am not a "research professor" at a "research school", which is Lam's situation. Hence the rest of his comment:
Also unclear if getting stuck is actually necessary for being successful faculty.
As silly as this may sound, it is a legitimate question. If you spend all of your time chasing the next technology, especially for teaching your courses, then you won't have time to do your research, publish papers, and get grants. You have to strike a careful balance. There is more to this question than simply the availability of time; there is also a matter of mindset:
Getting to the bottom of things -- questioning assumptions, investigating causes, making connections -- requires a different state of mind than staying on top of things.
This comes from John Cook's Getting to the Bottom of Things. In that piece, Cook concerns himself mostly with multitasking, focus, and context switching, but there is more. The mindset of the scientist -- who is trying to understand the world at a deep level -- is different than the mindset of the practitioner or tool builder. Time and energy devoted to the latter almost certainly cannibalizes the time and energy available for the former.
As I think in these terms, it seems clearer to me one advantage that some so-called teaching faculty have over research faculty in the classroom. I've always had great respect for the depth of curiosity and understanding that active researchers bring to the classroom. If they are also interested in teaching well, they have something special to share with their students. But teaching faculty have a complementary advantage. Their ability to stay on top of things means that their courses can be on the cutting edge in a way that many research faculty's courses cannot. Trade-offs and balance yet again.
For what it's worth, I really am intrigued by the possibilities offered by Scala and Clojure for my Programming Languages course. If we can have all of the beauty of other functional languages at the same time as a connection to what is happening out in the world, all the better. Practical connections can be wonderfully motivating to students -- or seem cloyingly trendy. Running on top of the JVM creates a lot of neat possibilities not only for the languages course but also for the compilers course and for courses in systems and enterprise software development. The JVM has become something of a standard architecture that students should know something about -- but we don't want to give our students too narrow an experience. Busy, busy, busy.
William Stafford's Writing the Australian Crawl includes several essays on language, words, and diction in poetry. Words and language -- he and others say -- are co-authors of poems. Their shapes and sounds drive the writer in unexpected ways and give rise to unexpected results, which are the poems that they needed to write, whatever they had in mind when they started. This idea seems fundamental to the process of creation for most poets.
We in CS think a lot about language. It is part of the fabric of our discipline, even when we don't deal in software. Some of us in CS education think and talk way too much about programming languages: Pascal! Java! Ada! Scheme!
But even if we grant that there is art in programming and programs, can we say that language drives us as we build our software? That language is the co-author of our programs? That its words and shapes (and sounds?) drive the programmer in unexpected ways and give rise to unexpected results, which are the programs we need to write, whatever we have in mind when we start? Can the programmer's experience resemble in any way the poet's experience that Stafford describes?
[Language] begins to distort, by congealing parts of the total experience into successive, partially relevant signals.... [It] begins to enhance the experience because of a weird quality of language: the successive distortions of language have their own cumulative potential, and under certain conditions the distortions of language can reverberate into new experiences more various, more powerful, and more revealing than the experiences that set off language in the first place.
Successive distortions with cumulative potential... Programmers tend not to like it when the language they use, or must use, distorts what they want to say, and the cumulative effect of such distortions in a program can give us something that feels cumbersome, feels wrong, is wrong.
Still... I think of my experiences coding in Smalltalk and Scheme, and recall hearing others tell similar tales. I have felt Smalltalk push me towards objects I wasn't planning to write, even towards objects of a kind I had previously been unaware of. Null objects, and numbers as control structures; objects as streams of behavior. Patterns of object-oriented programs often give rise to mythical objects that don't exist in the world, which belies OOP's oft-stated intention to build accurate models of the world. I have felt Scheme push me toward abstractions I did not know existed until just that moment, abstractions so abstract that they make me -- and many a programmer already fearful of functional style -- uncomfortable. Yet it is simply the correct code to write.
For me: Smalltalk and Lisp and Scheme, yes. Maybe Ruby. Not Java. C?
Is my question even meaningful? Or am I drowning in my own inability to maintain suitable boundaries between things that don't belong together?
My in-flight and bedtime reading for my ChiliPLoP trip was William Stafford's Writing the Australian Crawl, a book on reading and especially writing poetry, and how these relate to Life. Stafford's musings are crashing into my professional work on the trip, about solving problems and writing programs. The collisions give birth to disjointed thoughts about software, programming, and art. Let's see what putting them into words does to them, and to me.
Intention endangers creation.
An intentional person is too effective to be a good guide in the tentative act of creating.
I often think of programming as art. I've certainly read code that felt poetic to me, such as McCarthy's formulation of Lisp in Lisp (which I discussed way back in an entry on the unity of data and program). But most of the programs we write are intentional: we desire to implement a specific functionality. That isn't the sort of creation that most artists do, or strive to do. If we have a particular artifact in mind, are we really "creating"?
Stafford might think not, and many software people would say "No! We are an engineering discipline, not an artistic one." Thinking as "artists", we are undisciplined; we create bad software: software that breaks, software that doesn't serve its intended purpose, software that is bad internally, software that is hard to maintain and modify.
Yet many people I know who program know better: they feel something akin to artistry and creation.
How can we impress both sides of this vision on people, especially students who are just starting out? When we tell only one side of the story, we mislead.
Art is an interaction between object and beholder.
Can programs be art? Can a computer system be art? Yes. Even many people inclined to say "no" will admit, perhaps grudgingly, that the iPod and the iPhone are objects of art, or at least have elements of artistry in them. I began writing some of these notes on the plane, and all around me I see iPods and iPhones serving people's needs, improving their lives. They have changed us. Who would ever have thought that people would be willing to watch full-length cinematic films on a 2" screen? Our youth, whose experiences are most shaped by the new world of media and technology, take this limitation for granted, as a natural side effect of experiencing music and film and cartoons everywhere.
Yet iPods aren't only about delivering music, and iPhones aren't just ways to talk to our friends. People who own them love the feel of these devices in their hands, and in our lives. They are not just engineered artifacts, created only to meet a purely functional need. They do more, and they are more.
Intention endangers creation.
Art reflects and amplifies experience. We programmers often find inspiration for programs by being alert to our personal experience and by recognizing disconnects, things that interrupt our wholeness.
Robert Schumann said, "To send light into the darkness of men's hearts -- such is the duty of the artist." Artists deal in truth, though not in the direct, assertional sense we often associate with mathematical or scientific truth. But they must deal in truth if they are to shine light into the darkness of our hearts.
Engineering is sometimes defined as using scientific knowledge and physical resources to create artifacts that achieve a goal or meet a need. Poets use words, not "physical resources", but also shapes and sounds. Their poems meet a need, though perhaps not a narrowly defined one, or even one we realize we had until it was met in the poem. Generously, we might think of poets as playing a role somewhat akin to the engineer.
How about engineers playing a role somewhat akin to the artist? Do engineers and programmers "send light into the darkness of men's hearts"? I've read a lot of Smalltalk code in my life that seemed to fill a dark place in my mind, and my soul, and perhaps even my heart. And some engineered artifacts do, indeed, satisfy a need that we didn't even know we had until we experienced them. And in such cases it is usually experience in the broadest sense, not the mechanics of saving a file or deleting our e-mail. Design, well done, satisfies needs users didn't know they had. This applies as well to the programs we write as to any other artifact that we design with intention.
I have more to write about this, but at this time I feel a strong urge to say "Yes".
Well, Carefree. But it plays the Western theme to the hilt.
This was a shorter conference visit than usual. Due to bad weather on the way here, I arrived on the last flight in on Sunday. Due to work constraints of my workshop colleagues, I am heading out before the Wednesday morning session. Yet it was a productive trip -- like last year, but this time on our own work, as originally planned. We produced
Yesterday over our late afternoon break, we joined with the other workshop group and had an animated discussion started by a guy who has been involved with the agile community. He claimed that XP and other agile approaches tell us that "thinking is not allowed", that no design is allowed. A straw man can be fun and useful for exploring the boundaries of a metaphor. But believing it for real? Sigh.
A passing thought: Will professionals in other disciplines really benefit from knowing how to program? Why can't they "just" use a spreadsheet or a modeling tool like Shazam? This question didn't come to mind as a doubt, but as a realization that I need a variety of compelling stories to tell when I talk about this with people who don't already believe my claim.
While speaking of spreadsheets... My co-conspirator Robert Duvall was poking around Swivel, a web site that collects and shares open data sets, and read about the founders' inspiration. They cited something Dan Bricklin said about his own inspiration for inventing the spreadsheet:
I wanted to create a word processor for data.
Very nice. Notice that Bricklin's word processor for data exposes a powerful form of end-user programming.
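To make "end-user programming" concrete, here is a minimal sketch of the spreadsheet idea: cells hold either values or formulas over other cells, and editing a value implicitly reprograms every formula that reads it. The class and cell names are my own for illustration; this is not VisiCalc's actual design.

```python
# Toy spreadsheet: a cell holds a number, or a formula (a callable over
# the sheet). Formulas are recomputed on every read, so editing a value
# "reprograms" the sheet without the user writing a conventional program.

class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        """value: a number, or a function of the sheet (a formula)."""
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

s = Sheet()
s.set('A1', 10)
s.set('A2', 32)
s.set('A3', lambda sh: sh.get('A1') + sh.get('A2'))   # like =A1+A2
print(s.get('A3'))   # prints: 42
s.set('A1', 100)     # the user "edits the document"...
print(s.get('A3'))   # prints: 132 -- ...and the program updates itself
```

The user never sees a compiler or an edit-run cycle; they edit data, and the computation follows. That is the word-processor-for-data move.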
When I go to conferences, I usually feel as if the friends and colleagues I meet are doing more, and more interesting, things than I -- in research, in class, in life. It turns out that a lot of my friends and colleagues seem to think the same thing about their friends and colleagues, including me. Huh.
I write this in the air. I was booked on a 100% full 6:50 AM PHX-MSP flight. We arrive at the airport a few minutes later than planned. Rats, I have been assigned a window seat by the airline. Okay, so I get on the plane and take my seat. A family of three gets on and asks me hopefully whether there is any chance I'd like an aisle seat. Sure, I can help. (!) I trade out to the aisle seat across the aisle so that they can sit together. Then the guy booked into the middle seat next to me doesn't show. Surprise: room for my MacBook Pro and my elbows. Some days, fortune smiles on me in small and unexpected ways.
This is the idea behind biction:
The hard part of writing isn't the writing; it's the thinking.
-- William Zinsser
This line comes from Zinsser's recent article, Visions and Revisions, in which he describes the writing and rewriting of On Writing Well over the course of thirty years. I read On Writing Well a decade or so ago, in one of its earlier editions. It is my favorite book on the craft of writing.
This morning, I ran a 5K. It was my first "race" since my last marathon, with last year lost to not feeling well enough to run more than a bit. Today's race was organized by a student group on campus that includes a couple of my current and former students, so I risked signing up, even though at the time I wasn't sure I'd be able to do more than plow through the miles, if that.
I went in with no expectations, literally -- I had no idea how fast or far I could run all-out, or even what all-out means for me right now. I figured this would be an opportunity to gauge myself a month before my current big goal, the 500 Festival Half. Even so, I couldn't help daydreaming about times as I stretched before the race... My best guess was that, if I could break 25:00, I should be a very happy man.
This was a race by college kids, for college kids. Most of the runners were undergrad students -- some of whom run, and many of whom probably don't run much. You know college-aged guys... They broke from the gate fast, and within a mile many had fallen back. The early pace felt good to me, so I hung on and passed a few guys, figuring they'd take me in the last mile when I was gasping for air.
First mile split: 7:18. My first thought: They have the marker in the wrong place. We couldn't have run a mile yet.
Second mile split: 7:25. My second first thought: Really? Probably not.
Either the mile markers were wrong, or I faded a bit in the last mile. As I got within a quarter-mile of the finish line, one of the guys I had passed caught me -- but only one. I picked up the pace, to see if he was for real. He picked his up, too, and stayed a stride ahead of me as we entered the chute.
Time: 23:33. I guess I have to be happy now!
The next test is how I feel tonight and in the morning, when I hope to run an easy 8 miles en route to Indianapolis.
Another sign that this was a race by college kids, for college kids was the age grouping for prizes: 17-under, 18-29, and 30-up. After the race, one of my former students suggested that I might have won my age group, but I was pretty sure at least one of the guys ahead of me was also in the old-man group. I was right. Unfortunately, the race offered only one prize per age group, so I don't yet know whether I finished second.
My best guess is that I finished in the top 20-25 overall. It wasn't a fast group. But it was nice to run against the clock for real again. Here's hoping I feel good in the morning.
I learned a long time ago that the two best debugging tools I own are a nice piece of paper, and a good pencil.
Writing something down is a great way to "think out loud". My Ghostbusters-loving colleague, Mark Jacobson, calls this biction. He doesn't define the term on his web page, though he does have this poetic sequence:
Bic pen, ink flowing, snow falling, writing, thinking, playing, dancing
That sounds fanciful, but biction is a nuts-and-bolts idea. The friction of that Bic pen on the paper is when ideas that are floating fuzzily through the mind confront reality.
Mark and I taught a data structures course together back in the 1990s, and we had a rule: if students wanted to ask one of us a question, they had to show us a picture they had drawn that illustrated their problem: the data structure, pointers, a bit of code, ... If nothing else, this picture helped us to understand their problem better. But it usually offered more. In the process of showing us the problem using their picture, students often figured out the problem in front of our eyes. Other students commented that, while drawing a picture in preparing to ask a question, they saw the answer for themselves. Biction.
Of course, one can also "think out loud" out loud. In response to my post on teaching software engineering, a former student suggested that I should expose my students to pair programming, which he found "hugely beneficial" in another course's major project, or at least to rubber duck debugging. That's biction with your tongue.
It may be that the most important activity happens inside our heads. We just need to create some friction for our thoughts.
You are scanning a list of upcoming lectures on campus.
You see the title "Media Manipulation".
You get excited! Your thoughts turn to image rotations and filters, audio normalization and compression formats.
You read on to see the subtitle: "You, Me, and Them (the 'media' isn't what it used to be)" and realize that the talk isn't about CS; it's about communications and journalism.
You are disappointed.
(I'd probably enjoy this talk anyway... The topic is important, and I know and like the speaker. But still. To be honest, in recent weeks I have been less concerned with the media manipulating me than with the people in the media not doing the research they need to ensure their stories are accurate.)