Maurice Wilkes, one of computer science's pioneers, passed away yesterday at the grand age of 97. Wilkes spent most of his career working on computer systems, so I never studied his work in the detail that I studied Backus's or Floyd's. He is perhaps best known as the creator of the EDSAC, the first practical (if slow!) stored program computer, in May 1949. Wilkes also had an undeniable impact on programming as the primary developer of the idea of microprogramming, whereby the CPU is controlled by a program stored in ROM.
Finally, every programmer must hold a special place in his or her heart for this, Wilkes's most quoted aphorism:
... the realization came over me that a good part of my life was going to be spent finding errors in my own programs.
All programmers have their Wilkes Moment sooner or later.
In his 2008 letter to Amazon shareholders, Jeff Bezos wrote this oft-quoted footnote:
At a fulfillment center recently, one of our Kaizen experts asked me, "I'm in favor of a clean fulfillment center, but why are you cleaning? Why don't you eliminate the source of dirt?" I felt like the Karate Kid.
When I first read this in a blog entry, I immediately thought of refactoring. I favor a style of programming in which "cleaning up" is a fundamental step: pick a small bit of new functionality, do the simplest thing I can to make it work, and then clean up the program's design and implementation. Should I instead eliminate the source of dirt, and think far enough ahead that the program is always clean?
It didn't take me long to realize that I'm neither smart enough nor well enough informed about most problems to do that. I will have to clean up every so often, no matter how far I think ahead. Besides, I find so much value in taking small steps and doing simple things that I am willing to clean up.
Why is that? Why am I willing to clean up, rather than keep things clean from the start? Why does refactoring work for software developers?
If things are too clean, you probably are not creating new things.
Kaizen notions are attractive to many in the "lean" world of software development, and it is important -- in context. Production and creation are different kinds of task. Keeping things clean and efficient has great value in production environments, including factories and perhaps in certain kinds of software development. But when you are making new things, there is great value in exploration, and exploration is messy.
Bezos wrote that footnote on this passage:
Everywhere we look (and we all look), we find what experienced Japanese manufacturers would call "muda" or waste. I find this incredibly energizing. I see it as potential -- years and years of variable and fixed productivity gains and more efficient, higher velocity, more flexible capital expenditures.
Amazon is a company that makes most of its profit by delivering product to customers more efficiently and less expensively than its competitors. If it can eliminate a source of muda, it becomes a better company. That's why the Kaizen expert's advice gave Bezos a Karate Kid moment.
For me, the Karate Kid moment was just the opposite: when I learned that programmers had vocabulary for talking about refactoring and that some experts had made it a deliberate part of their development process. Wax on, wax off.
I watched A League of Their Own with my wife and daughters tonight.
It's supposed to be hard. If it wasn't hard, everybody would do it. The hard... is what makes it great.
-- Coach Jimmy Dugan
I'm thankful for the hard that makes so many of my experiences great. I'm also thankful to live in a world where my daughters have as many opportunities as they do to find and do whatever makes them whole.
I've spent considerable time this morning cleaning out the folder on my desktop where I keep stuff. In one of the dozens of notes files I've created over the last year or so, I found this unattributed quote:
In 1961, the scholar and cultural critic George Steiner argued in a controversial book, "The Death of Tragedy", that theatrical tragedies had begun their steady decline with the rise of rationalism and the Enlightenment in the 17th century. The Greek notion of tragedy was premised on man's inability to control his fate in the face of capricious, often brutal gods. But the point of a tragedy was to dramatize man's ability to make choices whatever his uncontrollable end.
The emphasis was not on the death -- what the gods usually had in store -- but on what the hero died for: the state, love, his own dignity. Did he die nobly? Or with shame? For a worthy cause? Or pitiless self-interest? Tragedies, then, were ultimately "an investigation into the possibilities of human freedom", as Walter Kerr put it in "Tragedy and Comedy" (1967).
I like this passage now as much as I must have when I typed it up from some book I was reading. (I'm surprised I did not write down the source!) It reminds me that I face and make choices every day that reveal who I am. Indeed, the choices I make create who I am. That message feels especially important to me today.
And yes, I know there are better tools for keeping notes than dozens of text files thrown into nearly as many folders. I take notes using a couple of them as well. Sometimes I lack the self-discipline I need to lead an ordered life!
Debugging the Law
Recently, Kevin Carey's Decoding the Value of Computer Science got a lot of play among CS faculty I know. Carey talks about how taking a couple of computer programming courses way back at the beginning of his academic career has served him well all these years, though he ended up majoring in the humanities and working in the public affairs sector. Some of my colleagues suggested that this article gives great testimony about the value of computational thinking. But note that Carey didn't study abstractions about computation or theory or design. He studied BASIC and Pascal. He learned computer programming.
Indeed, programming plays a central role in the key story within the story. In his first job out of grad school, Carey encountered a convoluted school financing law in my home state of Indiana. He wrote code to simulate the law in SAS and, between improving his program and studying the law, he came to understand the convolution so well that he felt confident writing a simpler formula "from first principles". His formula became the basis of an improved state law.
That's right. His code was so complicated and hard to maintain that he threw the system away and wrote a new one. Every programmer has lived this experience with computer code. Carey tried to debug a legal code and found its architecture to be so bad that he was better off creating a new one.
CS professors should use this story every time they try to sell the idea of universal computer programming experience to the rest of the university!
The idea of refactoring legal code via a program that implements it is not really new. When I studied logic programming and Prolog in grad school, I read about the idea of expressing law as a Prolog program and using the program to explore its implications. Later, I read examples where Prolog was used to do just that. The AI and law community still works on problems of this sort. I should dig into some of the recent work to see what progress, if any, has been made since I moved away from that kind of work.
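To give a flavor of the idea, here is a toy sketch in Python rather than Prolog. The rule names and dollar amounts are entirely invented, not from Carey's work or any real statute; the point is only that once a legal formula is executable, you can "query" it and see its implications, including surprising discontinuities:

```python
# Toy sketch: a fictional school-funding formula expressed as code,
# so its implications can be explored by running it.

def base_grant(enrollment):
    """Per-pupil base funding (invented number)."""
    return enrollment * 6000

def poverty_supplement(enrollment, poverty_rate):
    """Extra funding that kicks in at a 10% poverty rate (invented rule)."""
    if poverty_rate < 0.10:
        return 0
    return enrollment * poverty_rate * 1500

def total_funding(enrollment, poverty_rate):
    return base_grant(enrollment) + poverty_supplement(enrollment, poverty_rate)

# "Querying" the law: what happens at the 10% threshold?
print(total_funding(1000, 0.09))  # just below the threshold
print(total_funding(1000, 0.10))  # just above -- a jump the code makes visible
```

Running the formula at inputs just on either side of the threshold exposes a funding cliff that might be hard to see in the statutory prose, which is exactly the kind of insight Carey's simulation gave him.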
My doctoral work involved modeling and reasoning about legal arguments, which are very much like computer programs. I managed to think in terms of argumentation patterns, based on the work of Stephen Toulmin (whose work I have mentioned here before). I wish I had been smart or insightful enough to make the deeper connection from argumentation to software development ideas such as architecture and refactoring. It seems like there is room for some interesting cross-fertilization.
(As always, if you know about work in this domain, please let me know!)
In "How Do You Do It?", an article in the latest issue of Running Times about how to develop the intrinsic motivation to do crazy things like run every morning at 5:00 AM, ultrarunner Eric Grossman writes:
The will to run emerges gradually where we cultivate it. It requires humility -- we can't just decide spontaneously and make it happen. Yet we must hold ourselves accountable for anything about which we can say, "I could have done differently."
Cultivation, humility, patience, commitment, accountability -- all features of developing the habits I need to run on days I'd rather stay in bed. After a while, you do it, because that's what you do.
I think this paragraph is true of whatever habit of thinking and doing you are trying to develop, whether it's object-oriented programming, playing piano, or test-driven design.
Or functional programming. Last night I gave a talk at Tech Talk Cedar Valley, a monthly meet-up of tech and software folks in the region. Many of these developers are coming to grips with a move from Java to Scala and are pedaling fast to add functional programming style to their repertoires. I was asked to talk about some of the basic ideas of functional programming. My talk was called "Don't Drive on the Railroad Tracks", referring to Bill Murray's iconic character in the movie Groundhog Day. After hundreds or thousands of days reliving February 2 from the same starting point, Phil Connors finally comes to understand the great power of living in a world without side effects. I hope that my talk can help software developers in the Cedar Valley reach that state of mind sooner than a few years from now.
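The contrast at the heart of that talk can be sketched in a few lines. This is an illustrative example of my own (the function names are invented), showing the same operation written with a side effect and without one:

```python
# With a side effect: calling this twice from the same starting point
# gives different results, because it mutates shared state.
history = []

def log_mutating(event):
    history.append(event)
    return history

# Without side effects: the function returns a new list and leaves its
# arguments untouched, so the same inputs always give the same output.
def log_pure(history, event):
    return history + [event]

h1 = log_pure([], "Feb 2")
h2 = log_pure(h1, "Feb 2 again")
print(h1)  # ['Feb 2'] -- unchanged by the second call
print(h2)  # ['Feb 2', 'Feb 2 again']
```

Like Phil Connors's February 2, the pure version always starts from the same state: no call can sneak in and change the world behind your back.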
Yesterday I read two articles that highlight, in different ways, the value of smaller things.
Why Google Can't Build Instagram relates a story about how Larry Ellison coaxed efficiencies from teams:
If a team wasn't productive, he'd come every couple of weeks and say "let me help you out." What did he do? He took away another person until the team started shipping and stopped having unproductive meetings.
This turns the mythical man-month on its head. I wonder if I should try this in my project courses?
In the same blog entry, Scoble says:
[Instagram] actually started out as a service that did a lot more than just photographs. But, they learned they couldn't complete such a grand vision and do it well. So they kept throwing out features.
If you can't do all that you dreamed of doing, do less -- and do it well. Articles like this one imply that, as organizations get larger and more visible, they lose the ability to reduce scope and focus quality on a smaller project. I'm not sure they lose their ability so much as their will.
Paul Dyson tells a familiar story about the conflict between what a name comes to connote and the actions that are what it should denote:
I recently sat in front of a customer's project manager -- a very smart and reasonable person -- and accidentally used the A-word ["agile"] when describing how we were going to deliver our product and required customisations to them, and they sneered.
They actually snorted in disgust.
When I then explained we would get them live and using the base product quickly, followed by weekly incremental improvements with regular reviews and plenty of opportunity for rework they were very happy.
But they didn't see any connection between the two things.
The hype that seems inevitably to smother so many great ideas in the software world has, for many parts of our world, made "agile" meaningless at best and risible at worst. That's too bad, because when we ruin good words we lose a useful avenue for communication.
Later in the same piece, Dyson offers his solution:
... it's not about being agile/Agile or achieving agility, or being lean/Lean and efficient. It's about delivering software. And I figure the best way to champion that is actually just to get better at doing it.
I love those last two sentences. The best way to show people the value of patterns or TDD or refactoring or almost any practice is to do it. It's about delivering software.
Playwright Arthur Miller is often quoted as saying:
Man must shape his tools lest they shape him.
I read this again yesterday, in the online book Focus, while listening to faculty at a highly-ranked local school talk about the value of a liberal arts education. The quote reminds me of one of the reasons I so like being a computer scientist. I can shape my tools. If I need a new tool, or even a new kind of tool, I can make it.
Our languages are tools, too. We can shape them, grow them, change them. We can create new ones. (Thanks, Matz.)
Via the power of the Internet I am continuously surrounded by colleagues smarter and more motivated than I am, doing the same. I've been enjoying watching Brian Marick tweet about his thoughts and decision making as he implements Midje in Clojure. His ongoing dialog reminds me that I do not have to settle.
Recently, Kent Beck tweeted:
cleaning up junit in preparation for 4.9 release with @dsaff. why do bad design decisions spread faster than good ones?
I immediately thought of Gresham's Law from economics, which is commonly expressed as "bad money drives good money out of circulation". That sounds like what Kent is saying: bad design decisions drive good ones out of our code.
In reality, the two ideas are not alike. Gresham's Law refers to a human behavior we can all understand. Suppose we have two coins, each denominated as worth one dollar. The first coin is made of gold, a rare metal of enduring social and economic value. The second is made of nickel, a common metal not valued by the people using the coins. Under these conditions, people will tend to hoard the gold coins and use the nickel coins in trade. The result is that, eventually, there will be few or no gold coins in circulation. Hence the aphorism: "bad money drives good money out of circulation".
(Sir Thomas Gresham, a financial agent at the time of Queen Elizabeth I, was not the first person to note this behavior. According to Wikipedia, Aristophanes remarked on the phenomenon in his play "The Frogs", at the end of the 5th century BC.)
That's not what is happening in JUnit or other software systems. Kent and his partners aren't hoarding good design decisions and using bad ones in their place, in order to benefit later from having the good decisions at hand. Good design decisions have value only when they are deployed. They are good only in a context where they balance forces in a pleasing or supportive way. "Spending" bad design decisions in code doesn't get rid of them; it requires that we live with them every time we touch the code!
So, equating the phenomenon that Kent described with Gresham's Law would be to misuse the law. If I did so, I wouldn't be alone. Robert Mundell discusses faulty renderings of the principle in an academic paper. It wouldn't even be the first time I made the mistake. I remember vividly the night I took a midterm exam in my undergrad macroeconomics class. I misstated Gresham's Law in one of my responses. My instructor was wandering around the room. He saw my answer, leaned over, and whispered into my ear that I should think about that question again. I did and was able to correct my answer. (That guy was a way cool teacher, and not just for helping me out on the exam.)
When I saw Kent's tweet, I did not make that mistake again. I answered his question with "a perverse malformation of Gresham's Law?" John Mitchell offered his own colorful phrase: viral toxicity. That is almost certainly a better reflection of the phenomenon than mine, yet the ring of "the bad drives out the good" still appeals to me.
I think Kent's question is worth thinking about some more. Bad design does seem to infect other parts of the system and spread, by requiring us to deform other code to make it work well with the bad code, to fit with the bad structures we've already created. Perhaps Viral Toxicity is a sort of Gresham's Law for software, a phenomenon that developers need to be aware of and guard against. Maybe if we talk about the viral toxicity of bad design, other people will understand better the value and even necessity of regular refactoring!
Poet Marvin Bell tells this story in his collection, A Marvin Bell Reader:
[In Star Trek, Captain] Kirk is eating pizza in a joint in San Francisco with a woman whose help he will need, when he decides to fess up about who he is and where he has come from. The camera circles the room, then homes in on Kirk and his companion as she bursts out with, "You mean you're from outer space?"
"No," says Kirk, "I'm from Iowa. I just work in outer space."
My life is in some ways a complement to Kirk's. I often feel like I'm from outer space. I just work in Iowa.
I briefly met Bell, the former poet laureate of Iowa, when he gave the keynote address at a camouflage conference a few years ago. I gave a talk there on steganography, which is a form of digital camouflage. While Bell's quote comes from his own book, I found it in the front matter of Cook Book: Gertrude Stein, William Cook, and Le Corbusier, a delightful little book by Roy Behrens. Longtime readers of this blog will recognize Behrens's name; his writing has led me to many interesting books and ideas. I have also written of Behrens's own scholarly work several times, most notably Teaching as Subversive Inactivity, Feats of Association, and Reviewing a Career Studying Camouflage. I am fortunate to have Roy as friend and colleague, right here in Cedar Falls, Iowa.
Cook Book tells the story of William Cook, a little known Iowa artist who left America for Europe as a young man and became a longtime friend of the celebrated American writer and expatriate Gertrude Stein. He later used his inheritance to hire a young, unknown Le Corbusier to design his new home on the outskirts of Paris. Behrens grew up in Cook's hometown of Independence, Iowa. If you would like a taste of the story before reading the book, read this short essay.
I am no longer surprised to learn of surprising connections among people, well-known and unknown alike. Yet I am always surprised at the particular connections that exist. A forgotten Iowa artist was a dear friend of one of America's most famous writers of the early 1900s? He commissioned one of the pioneers of modern architecture before anyone had heard of him? Pope Pius makes a small contribution to the expatriate Iowan's legacy?
A few recent entries have given rise to interesting responses from readers. Here are two.
Relationships, Not Characters talked about how the most important part of design often lies in the space between the modules we create, whether objects or functions, not the modules themselves. After reading this, John Cook reminded me about an article by Thomas Guest, Distorted Software. Near the end of that piece, which talks about design diagrams, Guest suggests that the arrows in application diagrams should be larger, so that they would be proportional to the time their components take to develop. Cook says:
We typically draw big boxes and little arrows in software diagrams. But most of the work is in the arrows! We should draw fat arrows and little boxes.
I'm not sure that would make our OO class diagrams better, but it might help us to think more accurately!
My Kid Could Do That
Ideas, Execution, and Technical Achievement wistfully admitted that knowing how to build Facebook or Twitter isn't enough to become a billionaire. You have to think to do it. David Schmüdde mentioned this entry in his recent My Kid Could Do That, which starts:
One of my favorite artists is Mark Rothko. Many reject his work thinking that they're missing some genius, or offended that others see something in his work that they don't. I don't look for genius because genuine genius is a rare commodity that is only understood in hindsight and reflection. The beauty of Rothko's work is, of course, its simplicity.
That paragraph connects with one of the key points of my entry: Genius is rare, and in most ways irrelevant to what really matters. Many people have ideas; many people have skills. Great things happen when someone brings these ingredients together and does something.
Later, he writes:
The real story with Rothko is not the painting. It's what happens with the painting when it is placed in a museum, in front of people at a specific place in the world, at a specific time.
In a comment on this post, I thanked Dave, and not just because he discusses my personal reminiscence. I love art but am a novice when it comes to understanding much of it. My family and I saw an elaborate Rothko exhibit at the Smithsonian this summer. It was my first trip to the Smithsonian complex -- a wonderful two days -- and my first extended exposure to Rothko's work. I didn't reject his art, but I did leave the exhibit puzzled. What's the big deal?, I wondered. Now I have a new context in which to think about that question and Rothko's art. I didn't expect the new context to come from a connection a reader made to my post on tech start-up ideas that change the world!
I am glad to know that thinkers like Schmüdde are able to make connections like these. I should note that he is a professional artist (both visual and aural), a teacher, and a recovering computer scientist -- and a former student of mine. Opportunities to make connections arise when worlds collide.
Drama is about relationships, not about characters.
This immediately brought to mind Alan Kay's insistence that object-oriented programmers too often focus so much on the objects in their programs that they lose sight of something more important: the space between the objects. A few years ago, I wrote about this idea in an entry called Software in Negative Space. It remains one of my most-read articles.
The secret to good OO design is in the ma, the web of relationships that make up a complex system, not in the objects themselves.
I think this is probably true of good design in any style, because it is really about how complex systems can manage and use encapsulation. The very best OO design patterns show us how multiple objects interact via message passing to resolve a thorny set of forces. The individual objects don't solve the problem; the solution lies in their interfaces and delegation of responsibility. We need to think about our designs at that level, too.
Now that functional programming is receiving so much mainstream attention, this is a good time to think about when and how good functional design is about relationships, not (just) functions. A combinator is an example: the particular functions are not as important as the way they hook together to solve a bigger problem. Designing functions in this style requires thinking more about the interfaces that expose abstractions to the world, and how other modules use them as services, and less about their implementation. Relationships.
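A small sketch makes the point concrete. Here is a minimal `compose` combinator in Python (the helper functions are invented for illustration); notice that the interesting design decision is how the pieces plug together, not what any one piece does:

```python
# A tiny combinator: compose glues functions together. The individual
# functions matter less than the relationships between them.
def compose(*fns):
    def composed(x):
        for f in reversed(fns):  # apply right-to-left, like math notation
            x = f(x)
        return x
    return composed

# Three small, interchangeable pieces (names invented for this example).
strip_spaces = lambda s: s.replace(" ", "")
lowercase = str.lower
reverse = lambda s: s[::-1]

# The "design" lives in this one line: the wiring, not the parts.
normalize = compose(reverse, lowercase, strip_spaces)
print(normalize("Tech Talk"))  # 'klathcet'
```

Swapping, reordering, or adding stages changes the system without touching any stage's implementation, which is exactly the negative-space view of design applied to functions.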