MacKenzie Bezos, in her Amazon review of The Everything Store: Jeff Bezos and the Age of Amazon, writes:
One of the biggest challenges in non-fiction writing is the risk that a truthfully balanced narration of the facts will be boring, and this presents an author with some difficult choices.
Teachers face these choices all the time, too. Whenever I teach a course, I want to help my students be excited about the ideas and tools that we are studying. I like to tell stories that entertain as well as illuminate. But not every moment of learning a new programming language, or a new programming style, or a set of mathematical ideas, is going to have my students on the edges of their seats.
The best I can hope for is that the exciting parts of the course will give us the momentum we need to make it through the more boring parts. A big part of education is learning that the best parts of a course are the motivation for doing the hard work that gets us to the next exciting idea.
In The Great Works of Software, Paul Ford tells us that the Photoshop file format is
a fascinating hellish palimpsest.
"Palimpsest" is one of those words I seem always have to look up whenever I run across it. What a lyrical word.
After working with a student a few summers ago on a translator from Photoshop's PSD format to HTML/CSS (mentioned in the first paragraph of this essay), I can second the assertion that PSD is fascinating and hellish. However often it has changed over time, it looks in several places as if it is held together with baling wire.
Ford said it better than I could have, though.
Most people seem to believe that personalizing instruction to each individual is an unalloyed good. However, Benjamin Riley argues that two common axioms of individualized instruction "run afoul of our current understanding of cognition":
He says that both run the risk of giving the learner too much freedom.
Path. Knowledge is cumulative, and students need a suitable context in which to interpret and assimilate new information. If they try to learn things in the wrong order, they may not be able to make sense of the new information. They are also more likely to become frustrated, which impedes learning further.
Pace. Thinking is hard, and learning isn't always fun. Most of us have a natural tendency to shy away from difficult or unpleasant tasks, which can slow our overall rate of learning when we have to choose what to work on next.
(Dan Meyer offers a second reason to doubt the pace axiom: a lot of the fun and insight that comes from learning happens when we learn synchronously with a group.)
Of course, we could take Riley's arguments to their extremes and eliminate any consideration of the individual from our instructional plans. That would be a mistake. For example, each student comes into the classroom with a particular level of understanding and a particular body of background knowledge. When we take this background into account in a reasonable way, then we should be able to maximize each student's learning potential. When we don't, we unnecessarily limit their learning.
However, on balance, I agree with Riley's concerns. Some of my university students benefit greatly when given control over their own learning. Most, though, struggle to make choices about what to think about next and why. They also tend not to give themselves enough credit for how much they can learn if only they put in the time and energy to study and practice. They need help with both path and pace.
I've been teaching long enough now to respect the value that comes with experience as a teacher. By no means am I a perfect teacher, but after teaching a course a few times I begin to see ways to order topics and pace the coverage so that more students succeed in the course. I don't think I appreciated this when I was a student. The best teachers I ever had were the ones who had this experience and used it well.
I'll stick with my usual approach of trying to design a curriculum intentionally with regard to both order and timing, while at the same time trying to take my students' current knowledge into account as we move through the course.
A lot of people I know have been discussing the recent New Yorker article "debunking" Clayton Christensen's theory of disruptive innovation. I'm withholding judgment, because that usually is the right thing for me to do when discussing theories about systems we don't understand well and critiques of such theories. The best way to find out the answer is to wait for more data.
That said, we have seen this before in the space of economics and business management. A few years back, the book Good to Great by James Collins became quite popular on my campus, because our new president, an economist by training, was a proponent of its view of how companies had gone from being merely steady producers to being stars in their markets. He hoped that we could use some of its prescriptions to help transform our university from a decent public comprehensive into a better, stronger institution.
But in recent years we have seen critiques of Collins's theory. The problem: some of the companies that Collins touts in the book have fallen on hard times and been unable to sustain their greatness. (As I said, more data usually settles all scores.) Good to Great's prescriptions weren't enough for companies to sustain greatness; maybe they were not sufficient, or even necessary, for achieving (short-term) market dominance.
This has long been a weakness of the business management literature. When I was an undergrad double majoring in CS and accounting, I read a lot of case studies about successful companies, and my professors tried to help us draw out truths that would help any company succeed. Neither the authors of the case studies nor the professors seemed aware that we were suffering from a bad case of survivor bias. Sure, that set of strategies worked for Coca-Cola. Did other companies use the same strategies and fail? If so, why? Maybe Coca-Cola just got lucky. We didn't really know.
My takeaway from reading most business books of this sort is that they tell great stories. They give us post-hoc explanations of complex systems that fit the data at hand, but they don't have much in the way of predictive power. Buying into such theories wholesale as a plan for the future is rarely a good idea.
These books can still be useful to people who read them as inspirational stories and a source of ideas to try. For example, I found Collins's idea of "getting the right people on the bus" to be helpful when I was first starting as department head. I took a broad view of the book and learned some things.
I think the positive reaction to the New Yorker article is really a reaction to the many people who have been using the idea of disruptive innovation as a bludgeon in the university space, especially with regard to MOOCs. Christensen himself has sometimes been guilty of speaking rather confidently about particular ways to disrupt universities. After a period of groupthink in which people knew, without evidence, that MOOCs would topple the existing university model, many of my colleagues are simply happy to have someone speak up on their side of the debate.
The current way that universities do business faces a number of big challenges as the balance of revenue streams and costs shifts. Perhaps universities as we know them now will ultimately be disrupted. This does not mean that any technology we throw at the problem will be the disruptive force that topples them. As Mark Guzdial wrote recently,
Moving education onto MOOCs just to be disruptive isn't valuable.
That's the most important point to take away from the piece in the New Yorker: disruptors ultimately have to provide value in the market. We don't know yet if MOOCs or any other current technology experiment in education can do that. We likely won't know until after it starts to happen. That's one of the important points to take away from so much of the business management literature. Good descriptive theories often don't make good prescriptive theories.
The risk people inside universities run is falling into a groupthink of their own, in which something very like the status quo is the future of higher education. My colleagues tend to speak in more measured tones than some of the revolutionaries espousing on-line courses and MOOCs, but their words carry an unmistakable message: "What we do is essential. The way we do it has stood the test of time. No one can replace us." Some of my colleagues admit ruefully that perhaps something can replace the university as it is, but that we will all be worse off as a result.
That's dangerous thinking, too. Over the years, plenty of people who have said, "No one can do what we do as well as we do" have been proven wrong.
In Generation Liminal, Dorian Taylor recalls how the World Wide Web arrived at the perfect time in his life:
It's difficult to appreciate this tiny window of opportunity unless you were present for it. It was the World-Wild West, and it taught me one essential idea: that I can do things. I don't need a license, and I don't need credentials. I certainly don't need anybody telling me what to do. I just need the operating manual and some time to read it. And with that, I can bring some amazing -- and valuable -- creations to life.
I predate the birth of the web. But when we turned on the computers at my high school, BASIC was there. We could program, and it seemed the natural thing to do. These days, the dominant devices are smart phones and iPads and tablets. Users begin their experience far away from the magic of creating. It is a user experience for consumers.
One day many years ago, my older daughter needed to know how many words she had written for a school assignment. I showed her Terminal.app and wc. She was amazed by its simplicity; it looked like nothing else she'd ever seen. She still uses it occasionally.
I spent several days last week watching middle schoolers -- play. They consumed other people's creations, including some tools my colleagues set up for them. They have creative minds, but for the most part it doesn't occur to them that they can create things, too.
We need to let them know they don't need our permission to start, or credentials defined by anyone else. We need to give them the tools they need, and the time to play with them. And, sometimes, we need to give them a little push to get started.
At least David Auerbach thinks so. One of the reasons is that programming has a self-perpetuating cycle of creation, implementation, repair, and new birth:
"Coding" isn't just sitting down and churning out code. There's a fair amount of that, but it's complemented by large chunks of testing and debugging, where you put your code through its paces and see where it breaks, then chase down the clues to figure out what went wrong. Sometimes you spend a long time in one phase or another of this cycle, but especially as you near completion, the cycle tightens -- and becomes more addictive. You're boosted by the tight feedback cycle of coding, compiling, testing, and debugging, and each stage pretty much demands the next without delay. You write a feature, you want to see if it works. You test it, it breaks. It breaks, you want to fix it. You fix it, you want to build the next piece. And so on, with the tantalizing possibility of -- just maybe! -- a perfect piece of code gesturing at you in the distance.
My experience is similar. I can get lost for hours in code, and come out tired but mentally energized. Writing has never given me that kind of high, but then I've not written a really long piece of prose in a long time. Perhaps writing fiction could give me the sort of high I experience when deep in a program.
What about playing games? Back in my younger days, I experienced incredible flow while playing chess for long stretches. I never approached master level play, but a good game could still take my mind to a different level of consciousness. That high differed from a coder's high, though, in that it left me tired. After a three-round day at a chess tournament, all I wanted to do was sleep.
Getting lost in a computer game gives me a misleading feeling of flow, but it differs from the chess high. When I come out of a session lost in most computer games, I feel destroyed. The experience doesn't give life the way coding does, or the way I imagine meditation does. I just end up feeling tired and used. Maybe that's what drug addiction feels like.
I was thinking about computer games even before reading Auerbach's article. Last week, I was sitting next to one of the more mature kids in our summer camp after he had just spent some time gaming, er, collecting data for our study of internet traffic. We had an exchange that went something like this:
Student: I love this feeling. I'd like to create a game like this some day.
Eugene: You can!
Student: Really? Where?
Eugene: Here. A group of students in my class last month wrote a computer game next door. And it's way cooler than playing a game.
I was a little surprised to find that this young high schooler had no idea that he could learn computer programming at our university. Or maybe he didn't make the connection between computer games and computer programs.
In any case, this is one of the best reasons for us CS profs to get out of our university labs and classrooms and interact with younger students. Many of them have no way of knowing what computer science is, what they can do with computer science, or what computer science can do for them -- unless we show them!
Q: What do you call a company that has staff members with "programmer" or "software developer" in their titles?
A: A company.
Back in 2012, Alex Payne wrote What Is and Is Not A Technology Company to address a variety of issues related to the confounding of companies that sell technology with companies that merely use technology to sell something else. Even then, developing technology in house was a potential source of competitive advantage for many businesses, whether that involved modifying existing software or writing new software.
The competitive value of being able to adapt and create software has only grown larger and more significant in the last two years. Not having someone on staff with "programmer" in the title is almost a red flag even for non-tech companies these days.
Those programmers aren't likely to have been CS majors in college, though. We don't produce enough. So we need to find a way to convince more non-majors to learn a little programming.
In 2003, Stefan Ram asked Alan Kay to explain some of the ideas and history behind the term "object-oriented". Ram posted Kay's responses for all to see. Here is how Kay responded to the specific question, "What does 'object-oriented [programming]' mean to you?":
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.
Messaging and extreme late-binding have been consistent parts of Kay's answer to this question over the years. He has also always emphasized the encapsulated autonomy of objects, with analogies to cells in biology and nodes on the internet. As Kay has said many times, in his conception the basic unit of computation is a whole computer.
For some reason, I really like the way Kay phrased the encapsulated autonomy clause in this definition: local retention and protection and hiding of state-process. It's not poetry or anything, but it has a rhythm.
Kay's e-mail mentions another of his common themes: most computer scientists didn't take full advantage of the idea of objects. Instead, we stayed too close to the dominant data-centric perspective. I often encounter this with colleagues who confound object-oriented programming with abstract data types. A system designed around ADTs will not offer the same benefits that Kay envisions for objects defined by their interactions.
In some cases, the words we adopted for OO concepts may have contributed to the remaining bias toward data, even if unintentionally. For example, Kay thinks that the term "polymorphism" hews too closely to the standard concept of a function to convey the somewhat different notion of an object as embodying multiple algebras.
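The distinction is easier to see in code than in prose. Here is a minimal sketch, in Python rather than any language Kay discusses, of objects defined by the messages they answer instead of the data they hold; the class and method names are my own invention for the example:

```python
# A minimal sketch of "objects as message receivers". Each class
# answers the same message, `describe`, in its own way; the caller
# binds to behavior only at the moment of the call (late binding),
# and never inspects the receiver's internal state.

class Cell:
    def describe(self):
        return "a cell keeping its state to itself"

class NetworkNode:
    def describe(self):
        return "a node reachable only through messages"

def report(receivers):
    # No type checks: we rely only on each object answering `describe`.
    return [r.describe() for r in receivers]

print(report([Cell(), NetworkNode()]))
```

The point of the sketch is what `report` does not do: it never asks what kind of thing it holds, so any object that answers the message can join the system later without changing existing code.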
Kay's message also mentions two projects I need to learn more about. I've heard of Robert Balzer's Dataless Programming paper but never read it. I've heard of GEDANKEN, a programming language project by John Reynolds, but never seen any write-up. This time I downloaded GEDANKEN: A Simple Typeless Language Which Permits Functional Data Structures and Coroutines, Reynolds's tech report from Argonne National Lab. Now I am ready to become a little better informed than I was this morning.
The messages posted by Ram are worth a look. They serve as a short precursor to (re-)reading Kay's history of Smalltalk paper. Enjoy!
Today is the first day of Cookies, Games, and Websites, a four-day summer camp for middle-school students being offered by our department. A colleague of mine developed the idea for a workshop that would help kids of that age group understand better what goes on when they play games on their phones and tablets. I have been helping, as a sounding board for ideas during the prep phase and now as a chaperone and helper during the camp. A local high school student has been providing much more substantial help, setting up hardware and software and serving as a jack-of-all-trades.
The camp's hook is playing games. To judge from this diverse group of fifteen students from the area, kids this age already know very well how to download, install, and play games. Lots of games. Lots and lots of games. If they had spent as much time learning to program as they seem to have spent playing games, they would be true masters of the internet.
The first-order lesson of the camp is privacy. Kids this age play a lot of games, but they don't have a very good idea how much network traffic a game like Cut the Rope 2 generates, or how much traffic accessing Instagram generates. Many of their apps and social websites allow them to exercise some control over who sees what in their space, but they don't always know what that means. More importantly, they don't realize how important all this is, because they don't know how much traffic goes on under the hood when they use their mobile devices -- and even when they don't!
The second-order lesson of the camp, introduced as a means to an end, is computing: the technology that makes communication on the web possible, and some of the tools they can use to look at and make sense of the network traffic. We can use some tools they already know and love, such as Google maps, to visualize the relevant data.
This is a great idea: helping young people understand better the technology they use and why concepts like privacy matter to them when they are using that technology. If the camp is successful, they will be better-informed users of on-line technology, and better prepared to protect their identities and privacy. The camp should be a lot of fun, too, so perhaps one or two of them will be interested in diving deeper into computer science after the camp is over.
This morning, the campers learned a little about IP addresses and domain names, mostly through interactive exercises. This afternoon, they are learning a little about watching traffic on the net and then generating traffic by playing some of their favorite games. Tomorrow, we'll look at all the traffic they generated playing, as well as all the traffic generated while their tablets were idle overnight.
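The core idea behind the morning's exercises fits in a few lines of Python: an IPv4 address is just four bytes, conventionally written in dotted-quad form. The function names here are invented for illustration:

```python
# A quick sketch of what an IPv4 address "is" under the hood: one
# 32-bit number, usually displayed as four dot-separated bytes.

def ip_to_int(dotted):
    """Pack a dotted-quad string like '192.168.0.1' into one integer."""
    parts = [int(p) for p in dotted.split(".")]
    assert len(parts) == 4 and all(0 <= p <= 255 for p in parts)
    n = 0
    for p in parts:
        n = (n << 8) | p   # shift in one byte at a time
    return n

def int_to_ip(n):
    """Unpack a 32-bit integer back into dotted-quad form."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_int("192.168.0.1"))   # -> 3232235521
print(int_to_ip(3232235521))      # -> 192.168.0.1
```

Domain names are then just a human-friendly directory that maps names onto these numbers, which is what the campers explored interactively.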
We are only three-fourths of the way through Day 1, and I have already learned my first lesson: I really don't want to teach middle school. The Grinch explains why quite succinctly: noise, noise, NOISE! One thing seems to be true of any room full of fifteen middle-school students: several of them are talking at any given time. They are fun people to be around, but they are wearing me out...
As I mentioned recently, design skills were a limiting factor for some of the students in my May term course on agile software development. I saw similar issues for many in my spring Algorithms course as well. Implementing an algorithm from lecture or reading was straightforward enough, but organizing the code of the larger system in which the algorithm resided often created challenges for students.
I've been thinking about ways to improve how I teach design in the future, both in courses where design is a focus and in courses where it lives in the background of other material. Anything I come up with can also be part of the conversation with colleagues as we talk about design in their courses.
I read Kent Beck's initial Responsive Design article when it first came out a few years ago and blogged about it then, because it had so many useful ideas for me and my students. I decided to re-read the article last week, looking for a booster shot of inspiration.
First off, it was nice to remember how many of the techniques and ideas that Kent mentions already play a big role in my courses. Ones that stood out on this reading included:
My recent experiences in the classroom made two other items in Kent's list stand out as things I'll probably emphasize more, or at least differently, in upcoming courses.
Exploit Symmetries. Divide similar elements into identical parts and different parts.
As I noted in my first blog about this article, many programmers find it counterintuitive to use duplication as a tool in design. My students struggle with this, too. Soon after that blog entry, I described an example of increasing duplication in order to eliminate duplication in a course. A few years later, in a fit of deja vu, I wrote about another example, in which code duplication is a hint to think differently about a problem.
I am going to look for more opportunities to help students see ways in which they can make design better by isolating code into the identical and the different.
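A tiny, hypothetical Python example shows the move: first make two similar functions look alike, even at the cost of a little duplication, and then factor out the identical skeleton so that only the difference remains. All of the names here are invented for the example:

```python
# Exploit Symmetries: divide similar elements into identical parts
# and different parts.

# Before: two report functions that are similar but not identical.
def plain_report(items):
    return "\n".join(str(i) for i in items)

def numbered_report(items):
    return "\n".join(f"{n}. {i}" for n, i in enumerate(items, 1))

# After: the shared skeleton is now literally identical, isolated in
# `report`; only the per-line format differs, passed in as `line`.
def report(items, line):
    return "\n".join(line(n, i) for n, i in enumerate(items, 1))

plain    = lambda n, i: str(i)
numbered = lambda n, i: f"{n}. {i}"

print(report(["apples", "bread"], numbered))
```

Note that `plain_report` had to be rewritten to iterate with `enumerate` like its sibling before the common part could be extracted; that temporary increase in sameness is the counterintuitive step students resist.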
Inside or Outside. Change the interface or the implementation but not both at the same time.
This is one of the fundamental tenets of design, something students should learn as early as possible. I was surprised to see how normal it was for students in my agile development course not to follow this pattern, even when it quickly got them into trouble. When you try to refactor interface and implementation at the same time, things usually don't go well. That's not a safe step to take...
My students and I discussed writing unit tests before writing code a lot during the course. Only afterward did it occur to me that Inside or Outside is the basic element of test-first programming and TDD. First, we write the test; this is where we design the interface of our system. Then, we write code to pass the test; this is where we implement the system.
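That rhythm can be sketched in a few lines of Python, with the names invented for the example. The test pins down the outside before any inside exists:

```python
# Inside or Outside, test-first style.

# Step 1 (outside): the test fixes the interface -- a word_count
# function that takes a string and returns an integer. At the moment
# this is written, no implementation exists yet.
def test_word_count():
    assert word_count("the quick brown fox") == 4
    assert word_count("") == 0

# Step 2 (inside): only now do we write an implementation behind
# that interface. We can later change this body freely -- swap in a
# regex, say -- without touching the tests above.
def word_count(text):
    return len(text.split())

test_word_count()
print("tests pass")
```

The separation is the point: the test constrains only the interface, so refactoring the implementation never requires changing the tests at the same time.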
Again, in upcoming courses, I am going to look for opportunities to help students think more effectively about the distinction between the inside and the outside of their code.
Thus, I have a couple of ideas for the future. Hurray! Even so, I'm not sure how I feel about my blog entry of four years ago. I had the good sense to read Kent's article back then, draw some good ideas from it, and write a blog entry about them. That's good. But here I am four years later, and I still feel like I need to make the same sort of improvements to how I teach.
In the end, I am glad I wrote that blog entry four years ago. Reading it now reminds me of thoughts I forgot long ago, and reminds me to aim higher. My opening reference to getting a booster shot seems like a useful analogy for talking about this situation in my teaching.
Last time, I thought about the role of forgiveness in selecting programming languages for instruction. I mentioned that BASIC had worked well for me as a first programming language, as it had worked for so many others. Yet I would probably never choose it as a language for CS1, at least for more than a few weeks of instruction. It is missing a lot of the features that we want CS majors to learn about early. It's also a bit too free.
In that post, I did say that I still consider Pascal a good standard for first languages. It dominated CS1 for a couple of decades. What made it work so well as a first instructional language?
Pascal struck a nice balance for its time. It was small enough that students could master it all, and it also provided constructs for structured programming. It had the sort of syntax that enabled a compiler to give students guidance about errors, but its compilers did not seem overbearing. It had a few "gotchas", such as the ; as a statement separator, but not so many that students were constantly perplexed. (Hey to C++.) Students were able to try things out and get programs to work without becoming demoralized by a seemingly endless stream of complaints.
(Aside: I have to admit that I liked Pascal's ; statement separator. I understood it conceptually and, in a strange way, appreciated it aesthetically. Most others seem to have disagreed with me...)
Python has attracted a lot of interest as a CS1 language in recent years. It's the first popular language in a long while that brings to mind Pascal's feel for me. However, Pascal had two things that supported the teaching of CS majors that Python does not: manifest types and pointers. I love dynamically-typed languages with managed memory and prefer them for my own work, but using that sort of language in CS1 creates some new challenges when preparing students for upper-division majors courses.
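A small, contrived Python example illustrates the trade-off. A language with manifest types, as Pascal has, would reject the mistake below at compile time; Python discovers it only when the faulty expression actually runs:

```python
# In Pascal, `average` would declare that it takes an array of
# integers, and passing strings would fail to compile. In Python,
# the same call shape slips through until execution reaches `sum`.

def average(numbers):
    return sum(numbers) / len(numbers)

print(average([80, 90, 100]))         # fine: 90.0

try:
    average(["80", "90", "100"])      # same shape, wrong element type
except TypeError as e:
    print("caught at run time, not compile time:", e)
```

For experienced programmers this flexibility is a feature; for CS1 students it means that some errors the compiler used to explain now surface later, as runtime surprises, which changes what we must teach explicitly.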
So, Pascal holds a special place for me as a CS1 language, though it was not the language I learned there. We used it to teach CS1 for many years and it served me and our students well. I think it balances a good level of forgiveness with a reasonable level of structure, all in a relatively small package.
In the last session of my May term course on agile software development, discussion eventually turned to tools and programming languages. We talked about whether some languages are more suited to agile development than others, and whether some languages are better tools for a given developer team at a given time. Students being students, we also discussed the languages used in CS courses, including the intro course.
Having recently thought some about choosing the right languages for early CS instruction, I was interested to hear what students thought. Haskell and Scala came up; they are the current pet languages of students in the course. So did Python, Java, and Ada, which are languages our students have seen in their first-year courses. I was the old guy in the room, so I mentioned Pascal, which I still consider a good standard for comparing CS1 languages, and classic Basic, which so many programmers of my generation and earlier learned as their first exposure to the magic of making computers do our bidding.
Somewhere in the conversation, an interesting idea came up regarding the first language that people learn: good first languages provide the right amount of forgiveness when programmers make mistakes.
A language that is too forgiving will allow the learner to be sloppy and fall into bad habits.
A language that is not forgiving enough can leave students dispirited under a barrage of "not good enough": type errors and syntax gotchas.
What we mean by 'forgiving' is hard to define. For this and other reasons, not everyone agrees with this claim.
Even when people agree in principle with this idea, they often have a hard time agreeing on where to draw the line between too forgiving and not forgiving enough. As with so many design decisions, the correct answer is likely a local maximum that balances the forces at play among the teachers, students, and desired applications involved.
I found Basic to be just right. It gave me freedom to play, to quickly make interesting programs run, and to learn from programs that didn't do what I expected. For many people's taste, though, Basic is too forgiving and leads to diseased minds. (Hey to Edsger Dijkstra.) Maybe I was fortunate to learn how to use GOSUBs early and well.
Haskell seems like a language that would be too unforgiving for most learners. Then again, neither my students nor I have experience with it as a first-year language, so maybe we are wrong. We could imagine ways in which learning it first would lead to useful habits of thought about types and problem decomposition. We are aware of schools that use Haskell in CS1; perhaps they have made it work for them. Still, it feels a little too judgmental...
In the end, you can't overlook context and the value of good tools. Maybe these things shift the line of "just right" forgiveness for different audiences. In any case, finding the right level seems to be a useful consideration in choosing a language.
I suspect this is true when choosing languages to work in professionally, too.
In today's ACM interview, Donald Knuth identifies one of the problems he has with computer science instruction:
Similarly, the most common fault in computer classes is to emphasize the rules of specific programming languages, instead of to emphasize the algorithms that are being expressed in those languages. It's bad to dwell on form over substance.
I agree. The challenges are at least two in number:
Choosing the right languages can greatly help in conquering Challenges 1 and 2. Choosing the wrong languages can make overcoming them almost impossible, if only because we lose students before they cross the divide.
I guess that makes choosing the right languages Challenge 3.