We professors usually write glowingly of our students. Writing bad things about students in public seems like a bad idea. Besides, we mean the good things we say. By and large students are great people to be around. We get to learn with them and watch them grow. Yet...
Yesterday, I tweeted out of an emotion I occasionally feel when I read my student evaluations after a semester ends: even if n-1 students say positive things and offer constructive suggestions for improvement, my mind focuses on the one student who was unhappy and complained unhelpfully. It's just the ego at work; objectively, every instructor realizes that whenever a bunch of students gather in one place, it is likely that a few will be unhappy. It's unrealistic -- foolish, really -- to think that everyone should like you.
Fortunately, after a few minutes (or hours, or days, if you haven't yet trained your mind in this discipline), the feeling passes and you move forward, learning from the assessment and improving the course.
Occasionally, the negative comments are not a random event. In this case, I'm pretty sure I know who was unhappy. This student had felt similarly in previous semesters. He or she is just not a fan of mine.
If we are all honest with ourselves and each other, we have to admit that the same is true for us professors. Occasionally, we encounter a student who rubs us the wrong way. It is rare, perhaps surprisingly so, but every few years I encounter a student of whom I am not a big fan. Sometimes the feeling is mutual, but not always. Occasionally, I have students who don't like me much but whom I like well enough, or students who rub me the wrong way but seem to like me fine. The good news is that, even in these circumstances, students and professors alike do a pretty good job of working together professionally. For me, it's a point of professional pride not to let how I feel about any student, positive or negative, affect my courses.
I almost titled this post "Difficult Students", but that struck me as wrong. From the student's perspective, this is about difficult instructors. And it's not really about students and instructors at all, at least most of the time. Other students enjoy my courses even when one does not; other faculty like and enjoy the students who aren't my favorites. It's about relationships, one-on-one.
And, as I wrote in the George Costanza post linked above, this is to be expected. We are all human.
(If you prefer an analgesic with a harder edge, I offer you Gaping Void's take on the matter.)
Late last week, Michael Nielsen tweeted:
"The most successful people are those who are good at Plan B." -- James Yorke
This is one of my personal challenges. I am a pretty good Plan A person. Historically, though, I am a mediocre Plan B person. This is true of creating Plan B, but more importantly of recognizing and accepting the need for Plan B.
Great athletes are good at Plan B. My favorite Plan B from the sporting world was executed by Muhammad Ali in the Rumble in the Jungle, his heavyweight title fight against George Foreman in October 1974. Ali was regarded by most at that time as the best boxer in the world, but in Foreman he encountered a puncher of immense power. At the end of Round 1, Ali realized that his initial plan of attacking Foreman was unlikely to succeed, because Foreman was also a quick fighter who had begun to figure out Ali's moves. So Ali changed plans, taking on greater short-term risk by allowing Foreman to hit him as much as he wanted, so long as the blows were not the kind likely to end the fight immediately. Over the next few rounds, Foreman began to wear down, unaccustomed to throwing so many punches for so many rounds against an opponent who did not weaken. Eventually, Ali found his opening, attacked, and ended the fight in Round 8.
This fight is burned in my mind for the all-time great Plan B moment: Ali sitting on his stool between the first and second rounds, eyes as wide and white as platters. I do not ever recall seeing fear in Muhammad Ali's eyes at any other time in his career, before or after this fight. He believed that Foreman could knock him out. But rather than succumb to the fear, he gathered himself, recalculated, and fought a different fight. Plan B. The Greatest indeed.
Crazy software developer that I am, I see seeds of Plan B thinking in agile approaches. Keep Plan A simple, so that you don't overcommit. Accept Plan B as a matter of course, refactoring in each cycle to build what you learn from writing the code back into the program. React to your pair's ideas and to changes in the requirements with aplomb.
There is good news: We can learn how to be better at Plan B. It takes effort and discipline, just as changing any of our habits does. For me, it is worth the effort.
If you would like to learn more about the Rumble in the Jungle, I strongly recommend the documentary film When We Were Kings, which tells the story of this fight and how it came to be. Excellent sport, excellent art, and you can see Ali's Plan B moment with your own eyes.
I've been reading through some of the back entries in Vivek Haldar's blog and came across the entry Coding Blind. Haldar notes that most professionals and craftsmen learn their trade at least in part by watching others work, but that's not how programmers learn. He says that if carpenters learned the way programmers do, they'd learn the theory of how to hammer nails in a classroom and then do it for the rest of their careers, with every other carpenter working in a different room.
Programmers these days have a web full of open-source code to study, but that's not the same. Reading a novel doesn't give you any feel at all for what writing a novel is like, and the same is true for programming. Most CS instructors realize this early in their careers: showing students a good program shows them what a finished program looks like, but it doesn't give them any feel at all for what writing a program is like. In particular, most students are not ready for the false starts and the rewriting that even simple problems will cause them.
Many programming instructors try to bridge this gap by writing code live in class, perhaps with student participation, so that students can experience some of the trials of programming in a less intimidating setting. This is, of course, not a perfect model; instructors tend not to make the same kind of errors as beginners, or as many, but it does have some value.
Haldar points out one way that other kinds of writers learn from their compatriots:
Great artists and writers often leave behind a large amount of work exhaust other than their finished masterpieces: notebooks, sketches, letters and journals. These auxiliary work products are as important as the finished item in understanding them and their work.
He then says, "But in programming, all that is shunned." This made me chuckle, because I recently wrote a bit about my experience having students maintain engineering notebooks for our Intelligent Systems course. I do this so that they have a record of their thoughts, a place to dump ideas and think out loud. It's an exercise in "writing to learn", but Haldar's essay makes me think of another potential use of the notebooks: for other students to read and learn from. Given how reluctant my students were to write at all, I suspect that they would be even more reluctant to share their imperfect thoughts with others in the course. Still, perhaps I can find a way to marry these ideas.
This makes me think of another way that writers learn from each other, writers' workshops. Code reviews are a standard practice in software, and PLoP, the Pattern Languages of Programs conference, has adapted the writers' workshop form for technical writers. One of the reasons I like to teach certain project courses in a studio format is that it gives all the teams an opportunity to see each other's work and to talk about design, coding, and anything else that challenges or excites them. Some semesters, it works better than others.
Of course, a software team itself has the ability to help its members learn from one another. One thing I noticed more this semester than in the past was students commenting that they had learned from their teammates by watching them work. Some of the students who said this viewed themselves as the weakest links on their teams and so saw this as a chance to approach their more accomplished teammates' level. Others thought of themselves as equals to their teammates yet still found themselves learning from how others tackled problems or approached learning a new API. This is a team project succeeding as we faculty hope it might.
Distilling experience with techniques in more than just a finished example or two is one of the motivations for the software patterns community. It's one of the reasons I felt so comfortable with both the literary form and the community: its investment in and commitment to learning from others' practice. That doesn't operate at quite the fundamental level of watching another carpenter drive a nail, but it does strike close to the heart of the matter.
J.B. Rainsberger's short entry grabbed my attention immediately. I think that Rainsberger is talking about a pair of complementary patterns that all developers learn at some point or other as they write more and bigger programs. He elegantly captures the key ideas in only a few words.
These patterns balance common forces between giving things long names and giving things short names. A long name can convey more information, but a short name is easier to type, format, and read. A long name can be a code smell that indicates a missing abstraction, but a short name can be a code smell that indicates premature generalization, a strange kind of YAGNI violation.
The patterns differ in the contexts in which they appear successfully. Long names are most useful the first time or two you implement an idea. At that point, there are few or no other examples of the idea in our code, so there is not yet a need for an abstraction. A long name can convey valuable information about the idea. As an idea appears more often, two or more long names will begin to overlap, which is a form of duplication. We are now ready to factor out the abstraction common to them. Now the abstraction conveys some or all of the information and short names become more valuable.
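A minimal sketch of this progression, with hypothetical names of my own choosing, might look like this in Python: a long name carries the information while the idea is new, duplication signals the missing abstraction, and after factoring it out, short names suffice.

```python
# First appearance of an idea: the long name conveys what is going on,
# because no abstraction exists yet.
def total_price_including_sales_tax(price, sales_tax_rate):
    return price * (1 + sales_tax_rate)

# A second appearance: the two long names now overlap, which is a form
# of duplication -- the signal that we are ready for an abstraction.
def total_price_including_import_duty(price, import_duty_rate):
    return price * (1 + import_duty_rate)

# After factoring out the common idea, the abstraction carries the
# information, and short names become more valuable.
def with_surcharge(price, rate):
    return price * (1 + rate)
```

The point is not the arithmetic, of course, but the trajectory: the names shrink as the abstraction emerges to do their work for them.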
I need to incorporate these into any elementary pattern language I document, as well as in the foundation patterns layer of any professional pattern language. One thing I would like to think more about is how these patterns relate to Kent Beck's patterns Intention-Revealing Name and Type-Revealing Name.
On the last day of my Intelligent Systems course, I asked my students three retrospective questions. Each question asked them to identify one thing...
Question 3 is a topic for another day, when I will talk a bit about AI. Today I am thinking more about what students learned about writing software. As one of our curriculum's designated "project courses", Intelligent Systems has the goal of giving students an experience building a significant piece of software, as part of a team. What do the students themselves think they learned?
A couple of answers to the first question were of more general software development interest:
I learned that some concepts are easy to understand conceptually but difficult to implement or actually use.
I learned to be open-minded about several approaches to solving a problem. ... be prepared to accept that an approach might take a lot of time to understand and end up being [unsuitable].
There is nothing like trying to solve a real problem to teach you how hard some solutions are to implement. Neural networks were the most frequently mentioned concept that is easy to understand but hard to make work in practice. Many students come out of their AI course thinking neural nets are magic; it turns out magic can be hard to serve up. I suspect this is true of many algorithms and techniques students learn over the course of their studies.
I don't recall talking about agile software development much during this course, though no doubt it leaks out in how I typically talk about writing software. Still, I was surprised at the theme running through student responses to the second question.
Design takes time. Multiple iterations, revise and test.
A couple of teams discovered spike solutions, sorta:
You may write a lot of worthless or bad code to help with the final solution. We produced a lot of bad code that was never used in the end product, but it helped us get to that point.
These weren't true spikes, because the teams didn't set out with the intention of using the code to learn. But most didn't realize that they could or should do this. Now that they know, they might behave differently in the future. Most important, they learned that it's okay to "code to learn".
Many students came to appreciate collective code ownership and tools that support it:
When writing software in a group, it is important to make your code readable: descriptive [names] and comments that describe what is going on.
I learned how to maintain a project with a repository so that each team member can keep his copy up-to-date. ... I also learned how to use testing suites.
Tests also showed up in one of my favorite student comments, about refactoring:
I learned that when refactoring even small code you need unit tests to make sure you are doing things correctly. Brute forcing only gets you into trouble and hours of debugging bad code.
Large, semester-long projects usually give students their first opportunity to experience refactoring. Living inside a code base for a while teaches them a lot about what software development is really like, especially code they themselves have written. Many are willing to accept that living with someone else's code can be difficult but believe that their own code will be fine. Turns out it's not. Most students then come to appreciate the value of refactoring techniques. I need to help them learn refactoring tools better.
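The student's lesson above can be sketched in a few lines of Python. The example is mine, not from the course: a small test written with the standard unittest module pins down the behavior, so that when we restructure the code, the tests tell us immediately whether we have broken anything.

```python
import unittest

# Before refactoring: correct, but a clumsy if-chain.
def letter_grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    else:
        return "F"

# After refactoring: a table replaces the if-chain.
# The same tests must pass against both versions.
def letter_grade_refactored(score):
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C")]:
        if score >= cutoff:
            return grade
    return "F"

class TestLetterGrade(unittest.TestCase):
    def test_boundaries(self):
        for fn in (letter_grade, letter_grade_refactored):
            self.assertEqual(fn(90), "A")
            self.assertEqual(fn(89), "B")
            self.assertEqual(fn(70), "C")
            self.assertEqual(fn(69), "F")
```

Without the tests, "brute forcing" the rewrite means hours of debugging; with them, each small change is verified as soon as it is made.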
Finally, this comment from the first student retrospective I read captures a theme I saw throughout:
It is best to start off simple and make something work, rather than trying to solve the entire problem at once and get lost in its complexity.
This is in many ways the heart of agile software development and the source for all the other practices we find so useful. Whatever practices my own students adopt in the coming years, I hope they are guided by this idea.
Some of you will recognize the character in the image above as Curly, the philosopher-cowboy from City Slickers. One of the great passages of that 1991 film has Curly teaching protagonist Mitch about the secret of life, "One thing. Just one thing."
I am not the first software person to use Curly as inspiration. Check out, for example, Curly's Law: Do One Thing. Atwood shows how "do one thing" is central to "several core principles of modern software development".
I'm pretty much done with my grading for the semester. All that's left is freezing the grades and submitting them.
Intelligent Systems is a project course, and I have students evaluate their and their teammates' contributions to the project. One part of the evaluation is to allocate the points their team earns on the project to the team members according to the quality and quantity of their respective contributions. As I mentioned to the class earlier in the semester, point allocations from semester to semester tend to exhibit certain features. With few exceptions:
All that adds up to me being rather satisfied with the grades that fall out of the grinder at the end of the semester.
One thing that has not changed since I last taught this course ten years ago or so is that most students don't like the idea of an engineer's notebook. I ask each student to maintain a record of their notes while working on the project, along with a weekly log intended to be a periodic retrospective of their work and progress, their team's work and progress, and the problems they encounter and solve along the way. Students have never liked keeping notebooks. Writing doesn't seem to be a habit we develop in our majors, and by the time they reach their ultimate or penultimate semester, the habit of not writing is deeply ingrained.
One thing that may have changed in the last decade: students seem more surly at being asked to keep a notebook. In the past, students either did write or didn't write. This year, for the most part, students either didn't write or didn't write much except to say how much they didn't like being asked to write. I have to admire their honesty at the risk of being graded more harshly for having spoken up. (Actually, I am proud they trust me enough to still grade them fairly!) I can't draw a sound conclusion from one semester's worth of data, but I will watch for a trend in future semesters.
One thing that did change this semester: I allowed students to blog instead of maintaining a paper notebook. I was surprised that only two students took me up on the offer. Both ended up with records well above the average for the class. One of the students treated his blog a bit more formally than I think of an engineer's notebook, but the other seemed to treat it much as he would have a paper journal. This was a win, one I hope to replicate in the future.
The Greeks long ago recorded that old habits die hard, if at all. In the future, I will have to approach the notebook differently, including more and perhaps more persuasive arguments for it up front and more frequent evaluation and feedback during the term. I might even encourage or require students to blog. This is 2011, after all.
This is finals week, so my Intelligent Systems students have presented their projects and submitted their code and documentation. All that's left for some of them is to submit their project notebooks. By and large, all four teams did a good job this semester, and I'm happy both with what they produced and with the way they produced it.
(The notebooks may be an exception, and that means I need to do a better job convincing them to write as they think and learn throughout the semester.)
A couple of teams were disappointed that they did not accomplish as much as they had hoped. I reassured them that when we explore, we often take wrong turns. Otherwise, it wouldn't be exploration! In science, too, we sometimes run experiments that fail. Still, we can learn from the experience.
This experience, coupled with a tweet I saw a week or so ago, has given me my first new idea for next semester:
a prize for the best failed idea of the semester
I teach Programming Languages in the fall, in which students learn Scheme, functional programming, and a fair bit about language interpretation. All of these are fertile areas for failure, by students and professor alike! At this early stage of planning, I think I'll announce the prize early in the semester and allow students to submit entries throughout. A strong candidate for the prize will be an idea that seemed good at the time, so the student tried it out, whether in code or some other form. After investing time and energy, the student has to undo the work, maybe even start from scratch, in order to solve the original problem.
This sounds like failure to most students, but the point of the prize is this: you can learn a lot from an idea that doesn't pan out. If students can look back on their failures and understand why it was valuable trying the ideas anyway, they will have learned something. Whether they win a prize or not, they may well end up with a funny story to tell!
Now, I need a good name for the prize. Suggestions are welcome!
I also need a prize. I've considered the possibility of giving extra credit but just about convinced myself to do something more fun and perhaps more lasting. Besides, course credit is so not the point. Giving extra credit might encourage broader participation among the students, but I believe that the number of students who care more about their grades than about learning is smaller than most people think. And the idea of offering a prize is to encourage a willingness to explore good ideas, even to cultivate a sense of adventure. Awarding class points would be like giving your best friend in the world money as a birthday gift: it misses the point.
My hope in offering such a prize is to help students move a little down the path from thinking like this:
to thinking like this:
[Engineers Without Borders] believes that success in development is not possible without taking risks and innovating -- which inevitably means failing sometimes. We also believe that it's important to publicly celebrate these failures, which allows us to share the lessons more broadly and create a culture that encourages creativity and calculated risk taking.
An annual report of failures! These are engineers who get it.
I just read this passage from The Rhythm of Life, by Matthew Kelly:
You never can get enough of what you don't really need.
Fulfillment comes not from having more and more of everything forever into oblivion. Fulfillment comes from having what you need.
Kelly is talking about how we live our lives. However, I could not help but think of You Aren't Gonna Need It and agile software development.
From there, Kelly takes a moral turn, but even then I hear the agile voice within:
The whole world is chasing illegitimate wants with reckless abandon. We use all of our time, effort, and energy in the pursuit of our illegitimate wants, hypnotized by the lie that our illegitimate wants are the key to our happiness.
At the same time, the gentle voice within us is constantly calling out to us, trying to encourage us not to ignore the wisdom we already possess.
There is a lot to be said for learning to be content with implementing the features we are working on right now, not features we think are coming in the future. Perhaps if we can learn to be content in life we can also learn to be content in code.
Last week, I thought out loud about the university's relationship with its students, which may be different from what some are thinking it is. I just ran across an interview with Tim O'Reilly from last week about the future of his industry. His industry is different from what some are thinking it is, too:
At O'Reilly the way we think about our business is that we're not a publisher; we're not a conference producer; we're a company that helps change the world by spreading the knowledge of innovators.
My university could do worse than simply stealing O'Reilly's mission statement. There is more to our mission, of course. Research universities, at least, have historically also been about creating knowledge; universities such as mine have been as much about integrating knowledge as creating it. Public universities also serve their states in various ways, not the least of which is preparing educated citizens for participation in public life. It's hard to serve all these roles, and ultimately it all comes down to students and learning.
Mission statements and strategic plans have a bad reputation among faculty, and for good reason. They tend to be diluted corporate statements, trying to say everything and, as a result, saying nothing much. But thinking hard about the real core of the university's mission might help us evolve and survive, rather than becoming the next dinosaur.
Our mission certainly isn't about classrooms, packaged courses, and labs filled with general purpose computers. Those are implementation details, built from the technology of a particular era. Just as the book and newspaper are undergoing changes in their basic form as technology evolves, so too should the university experience. Some of the technology we use now belongs to a dying era.
That's what makes O'Reilly's statement quoted above so important. He has always recognized that his business is not defined by the technology of the time. Some people are afraid of changing technology because they do see themselves and their businesses as defined by their implementations. As technology evolves, O'Reilly is comfortable evolving his business model along with it, without abandoning what the company is really about.
Part of what made diagnosing my knee injury challenging is that the injury has not presented normally. Normally, this condition follows an obvious trauma. I did not suffer one. Normally, the symptoms include occasional locking of the joint and occasionally feeling as if the joint is going to give out. I have not experienced either. Normally, there is more pain than I seem to be having.
The doctors were surprised by this unusual presentation, but it didn't worry them much. They are used to the fact that there is no normal.
The human body is a complex machine, and people subject their bodies to a complex set of stimuli and conditions. As a result, the body responds in an unbelievable number of ways. What we think of as the "normal" path of most diseases, injuries, and processes is a composite of many examples. Each symptom or observation has some likelihood of occurring, but it is common for a particular case to look quite unusual.
This is something we learn when we study statistical methods. It's possible that no number in a set is equal to the average of all the numbers in a set. It's possible that no member in a set is normal in the sense of sharing all the features that are common to most members.
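A tiny example, of my own devising, makes both halves of this point concrete: a set whose average matches no member, and a population in which each "symptom" is common yet no single member exhibits the full "normal" profile.

```python
# No number in the set equals the average of the set.
scores = [1, 2, 4, 5]
average = sum(scores) / len(scores)   # 3.0
assert average not in scores

# Three symptoms, each present in a majority of patients,
# yet no single patient presents all three at once.
patients = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
for symptom in "ABC":
    assert sum(symptom in p for p in patients) >= 2      # each is common
assert not any({"A", "B", "C"} <= p for p in patients)   # but no "normal" patient
```

The composite "normal" case is a statistical fiction: every feature of it is typical, but the combination may never occur in any one individual.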
A large software system is a complex machine, and people subject software to a complex set of stimuli and conditions. As a result, the software responds in a surprising number of ways. When we think of this from the perspective of people as users, we realize just how important designing for usability, reliability, and robustness are.
Programmers are people who interact with software, too, and we subject our programs to a wide-ranging set of demands. When we think about "there is no normal" from this perspective, we better understand why it is so challenging to debug, extend, and maintain programs.
Our programs may not be as complex as the human body, and we try to design them rather than let them evolve unguided. But I think it's still useful to program with a mindset that there is no normal. That way, like my doctor, we can handle cases that seem unusual with aplomb.
Last week, someone I follow tweeted this link in order to share this passage:
You will be newbie forever. Get good at the beginner mode, learning new programs, asking dumb questions, making stupid mistakes, soliciting help, and helping others with what you learn (the best way to learn yourself).
That blog entry is about the inexorable change of technology in the modern world and how, if we want to succeed in this world, we need a mindset that accommodates change. We might even argue that we need a mindset that welcomes or seeks out change. To me, this is one of the more compelling reasons for us to broaden the common definition of the liberal arts to include computing and other digital forms of communication.
As much as I like the quoted passage, I liked a couple of others as much or more. Consider:
Understanding how a technology works is not necessary to use it well. We don't understand how biology works, but we still use wood well.
As we introduce computing and other digital media to more people, we need to balance teaching how to use new ideas and techniques and teaching underlying implementations. Some tools change how we work without us knowing how they work, or needing to know. It's easy for people like me to get so excited about, say, programming that we exaggerate its importance. Not everyone needs to program all the time.
Then again, consider this:
The proper response to a stupid technology is to make a better one yourself, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
In the digital world as in the physical world, we are not limited by our tools. We can change how our tools work, through configuration files and scripts. We can make our own tools.
Finally, an aphorism that captures differences between how today's youth think about technology and how people my age often think (emphasis added):
Nobody has any idea of what a new invention will really be good for. To evaluate, don't think; try.
This has always been true of inventions. I doubt many people appreciated just how different the world would be after the creation of the automobile or the transistor. But with digital tools, the cost of trying things out has been driven so low, relative to the cost of trying things in the physical world, that the cost is effectively zero. In so many situations now, the net value of trying things exceeds the net value of thinking.
I know that sounds strange, and I certainly don't mean to say that we should all just stop thinking. That's the sort of misinterpretation too many people made of the tenets of extreme programming. But the simple fact is, thinking too much means waiting too long. While you are thinking -- waiting to start -- someone else is trying, learning faster, and doing things that matter.
I love this quote from Elisabeth Hendrickson, who reminded herself of the wisdom of "try; don't think" when creating her latest product:
... empirical evidence trumps speculation. Every. Single. Time.
The scientific method has been teaching us the value of empiricism over pure thought for a long time. In the digital world, the value is even more pronounced.