I have already mentioned a couple of my first impressions of being a guest on the Google campus. Here are a few other things I noticed.
Calling it the "Google campus" is just right. It looks and feels like a college campus. Dining service, gym facilities, a small goodies store, laundry, sand volleyball courts... and lots of employees who look college-aged because they recently were.
Everywhere we walked outdoors, we saw numerous blue bicycles. They are free for the use of employees, presumably to move between buildings. But there appeared to be bike trails across the road where the bikes could be used for recreation, too.
The quad area between Buildings 40 and 43 had a dinosaur skeleton with pink flamingos in its mouth. Either someone forgot to tell the dinosaur "don't be evil", or the dinosaur has volunteered to serve as aviary for kitsch.
The same area included a neat little vegetable garden. How's that for eating local? (Maybe the dinosaur just wanted to fit in.)
As we entered Building 43 for breakfast, we were greeted with a rolling display of search terms that Google was processing, presumably in real time. I wondered if we were seeing a filtered list, but we did see a "paris hilton" in there somewhere.
The dining rooms served Google-branded ice cream sandwiches, IT's IT, "a San Francisco tradition since 1928". In typical Google fashion, the tasty treat (I verified its tastiness empirically with a trial of size N=2) has been improved, into "a natural, locally sourced, trans-fat-free rendition of their excellent treat". So there.
I don't usually comment on my experience in the restroom, but... The men's rooms at Google do more than simply provide relief; they also provide opportunities for professional development. Testing on the Toilet consists of flyers over the urinal with stories and questions about software testing. (But what's a "C.L.", as in "one conceptual change per C.L."?) I cannot confirm that female engineers at Google have the same opportunities to learn while taking the requisite breaks from their work.
I earlier commented that we visitors had to stay within sight of a Google employee. After a few more hours on campus, it became clear that security is a major industry at Google. Security guards were everywhere. My fellow guests and I couldn't decide whether they were guarding against intellectual property theft by brazen Microsoft or Yahoo! employees or souvenir theft by Google groupies. But I did decide that the Google security force far outnumbers the police force in my metro area.
All in all, an interesting and enjoyable experience.
The second half of the workshop opened with one of the best sessions of the event, the presentation "What Research Tells Us About Best Practices for Recruiting Girls into Computing" by Lecia Barker, a senior research scientist at the National Center for Women and IT. This was great stuff: empirical data on what girls and boys think and prefer. I'll be spending some time looking into Barker's summary and citations later. Some of her suggestions confirm common sense, such as not implying that you need to be a genius to succeed in computing; you only need to be capable, as with anything else. I wonder if we realize how often our actions and examples implicitly say "CS is difficult" to interested young people. We can also use implicit cues to connect with the interests of our audience, such as applications that involve animals or the health sciences, or images of women performing in positions of leadership.
Other suggestions were newer to me. For example, evidence shows that Latina girls differ more from white and African-American girls than white and African-American girls differ from each other. This is good to know for my school, which is in the Iowa metro area with the highest percentage of African-Americans and a burgeoning Latina population. She also suggested that middle-school girls and high-school girls have different interests and preferences, so outreach activities should be tailored to the audience. We need to appeal to girls now, not to who they will be in three years. We want them to be making choices now that lead to a career path.
A second Five-Minute Madness session had less new information for me. I thought most about funding for outreach activities, such as ongoing support for an undergraduate outreach assistant whom we have hired for next year using a one-time grant from the university's co-op office. I had never considered applying for a special projects grant from the ACM for outreach, and the idea of applying to Avon was even more shocking!
The last two sessions were aimed at helping people get a start on designing an outreach project. First, the whole group brainstormed ideas for target audiences and goals, and then the newbies in the room designed a few slides for an outreach presentation with guidance from the more experienced people. Second, the two groups split, with the newbies working more on design and the experienced folks discussing the biggest challenges they face and ways to overcome them.
These sessions again made clear that I need to "think bigger". One, outreach need not aim only at schools; we can engage kids through libraries, 4-H (which has broadened its mission to include technology teams), the FFA, Boys and Girls Clubs, and the YMCA and YWCA. Some schools report interesting results from working with minority girls through mother/daughter groups at community centers. Sometimes, the daughters end up encouraging the moms to think bigger themselves and seek education for more challenging and interesting careers. Two, we have a lot more support from upper administration and from CS faculty at my school than most outreach groups have at their schools. This means that we could be more aggressive in our efforts. I think we will next year.
The workshop ended with a presentation by Gabe Cohen, the project manager for Google Apps. This was the only sales pitch we received from Google in the time we were here (other than being treated and fed well), and it lasted only fifteen minutes. Cohen showed a couple of new-ish features of the free Apps suite, including spreadsheets with built-in support for web-based form input. He closed hurriedly with a spin through the new AppEngine, which debuted to the public on Wednesday. It looks cool, but do I have time?
The workshop was well-done and worth the trip. The main point I take away is to be more aggressive on several fronts, especially in seeking funding opportunities. Several companies we work with have funded outreach activities at other schools, and our state legislative and executive branches have begun to take this issue seriously from the standpoint of economic development. I also need to find ways to leverage faculty interest in doing outreach and interest from our administration in both STEM education initiatives and community service and outreach.
The workshop has ended. Google was a great host, from beginning to end. They began offering food and drinks almost immediately, and we never hungered or thirsted for long. That part of the trip made Google feel like the young person's haven it is. Wherever we went, the meeting tables included recessed power and ethernet cables for every kind of laptop imaginable, including my new Mac. (Macbook Pros were everywhere we went at Google.) But we also learned right away that visitors must stay within bounds. No wandering around was allowed; we had to remain within sight of a Googler. And we were told not to take any photos on the grounds or in the buildings.
The workshop was presented live from within Google Docs, which allowed the leaders and presenters to display from a common tool and to add content as we went along. The participants didn't have access to the doc, but we were given it as a PDF file -- on the smallest flash drive I've ever owned. It's a 1GB stick with the dimensions of the delete key on my laptop (including height).
The introduction to the workshop consisted of a linked-list game in which each person introduced the person to his left, followed by remarks from Maggie Johnson, the Learning and Development Director at Google Engineering, and Chris Stephenson, the executive director of ACM's Computer Science Teachers Association. The game ran a bit long, but it let everyone see how many different kinds of people were in the room, including a lot of non-CS faculty who lead outreach activities for some of the bigger CS departments. Chris expressed happiness that K-12, community colleges, and universities were beginning to work together on the CS pipeline. Outreach is necessary, but it can also be joyful. (This brought to mind her panel statement at SIGCSE, in a session I still haven't written up...)
Next up was Liz Adams reporting on her survey of people and places that are doing road shows or thinking about it. She has amassed a lot of raw data, which is probably most useful as a source of ideas. During her talk, someone asked, does anyone know if what they are doing is working? This led to a good discussion of assessment and just what you can learn. The goals of these road shows are many. When we meet with students, are we recruiting for our own school? Or are we trying to recruit for the discipline, getting more kids to consider CS as a possible major? Are we working to reach more girls and underrepresented groups, or do we seek a rising tide? Perhaps we are doing service for the economy of our community, region, or state? The general answer is 'yes' to all of these things, which makes measuring success all the more difficult. While it's comforting to shoot wide, this may not be the most effective strategy for achieving any goal at all!
One idea I took away from this session was to ask students to complete a short post-event evaluation. I view most of our outreach activities these days as efforts to broaden interest in computer science generally, and to broaden students' views of the usefulness and attractiveness of computing even more generally. So I'd like to ask students about their perceptions of computing after we work with them. Comparing these answers to ones gathered before the activity would be even better. My department already asks students declaring CS majors to complete a short survey, and I plan to ensure it includes a question that will allow us to see whether our outreach activities have had any effect on the new students we see.
Then came a session called Five-Minute Madness, in which three people from existing outreach programs answered several questions in round-robin fashion, spending five minutes altogether on each. I heard a few useful nuggets here:
Dinner in one of the Google cafeterias was just like dinner in one of my university's residence halls, only with more diverse fare. A remarkable number of employees were there. Ah, to be young again.
Our first day closed with people from five existing programs telling us about their road shows. My main thought throughout this session was that these people spend a lot of time talking to -- at -- the kids. I wonder how effective this is with high school students and imagine that as the audience gets younger, this approach becomes even less effective. That said, I saw a lot of good slides with information that we can use to do some things. The presenters have developed a lot of good material.
Off to bed. Traveling west makes for long, productive days, but it also makes me ready to sleep!
A while back I read this xkcd comic, which introduced the idea of geohashing, selecting a meet-up location based on a date, the Dow-Jones Industrial Average, and MD5 hashing. Last month I ran across this wiki page on geohashing, which offers a reference implementation in Python. That's fine, even with all those underscores, but I decided to write a Ruby implementation for kicks. In particular, I had never worked with Ruby's MD5 digests and was glad to have a reason.
So, during a few stolen moments at the roadshow workshop (summary soon...), I knocked off an implementation. Here's my code. It's very simple and can certainly be improved. I tried to use idiomatic Ruby where I knew it, but some bits feel awkward. In other places, I mimicked the reference implementation perhaps too closely, so they still feel Python-y. Please send me your suggestions!
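For the curious, the core of the algorithm fits in a few lines. Here is a bare-bones sketch, not necessarily identical to the code linked above; the `geohash` method name and argument order are my own, and graticules at -0 are not handled:

```ruby
require 'digest/md5'

# The xkcd geohashing algorithm: hash "YYYY-MM-DD-DJIA" with MD5,
# split the 32-hex-digit digest into two 64-bit halves, and use
# each half as the fractional part of the graticule's coordinates.
def geohash(lat, lon, date, djia)
  digest = Digest::MD5.hexdigest("#{date}-#{djia}")
  frac_lat, frac_lon = digest.scan(/.{16}/).map { |h| h.to_i(16) / 16.0**16 }
  # Push the fraction away from zero so negative graticules work.
  [lat + (lat < 0 ? -frac_lat : frac_lat),
   lon + (lon < 0 ? -frac_lon : frac_lon)]
end

# The worked example from the comic: 2005-05-26, Dow open 10458.68.
p geohash(37, -122, "2005-05-26", "10458.68")
```

Run on the comic's example, it lands at roughly (37.8577, -122.5445), the destination shown in the strip.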
I'm preparing for a quick visit to the Google campus tomorrow and Friday. This is my first trip to the Google campus, and I have to admit that I'm looking forward to it. To this wide-eyed Midwestern computer scientist, it feels as if I am visiting Camelot.
The occasion of my trip is a "roadshow summit" co-sponsored by the Computer Science Teachers Association and SIGCSE, and hosted by Google. The CSTA is a unit of the ACM "that supports and promotes the teaching of computer science and other computing disciplines" in K-12 schools. The goal of the workshop is:
... to bring together faculty and students who are currently offering, or planning to develop, outreach "road shows" to local K-12 schools. Our goal is to improve the quality and number of college and university-supported careers and equity outreach programs by helping to develop a community that will share research, expertise, and best practices, and create shared resources.
My selfish goal in wanting to attend the workshop initially was to steal lots of good ideas from people with more experience and creativity than I have. My contribution will be to share what we have done in our department, especially over the last semester. I asked two faculty members to develop curricula for K-12 outreach activities, in lieu of one of their usual course assignments. The curriculum materials should be useful whether we take them on the road to the schools or use them when we have students on campus for visits. One professor started with robotics in mind but quickly switched to some simple programming activities with the Scratch programming environment. The other worked on high-performance and parallel computing for pre-college students, an education thread he has been working in for much of this decade. I do not have a link to materials he developed specifically for our outreach efforts yet, but I can point you to LittleFe, one of his ongoing projects.
I'm curious to see what other schools have done and still plan to steal as many ideas as I can! And, while I'm looking forward to the workshop and seeing Google's campus, I am not looking forward to the fast turnaround... My flight leaves tomorrow morning; we work Thursday afternoon, Thursday evening, Friday morning, and Friday early afternoon; and then I start the sojourn back home. I'll cover a lot of miles in forty-eight hours, but I hope they prove fruitful.
Last week, I read Patrick Lencioni's The Five Dysfunctions of a Team. It's another one of those "parable" books, wherein the author teaches his method, stance, or idea by telling a story of the theme in action. The book has been in my to-read pile for several months, after seeing a couple of positive references to it on the XP mailing list a while back. But I put off reading it in favor of other things for a long time. Indeed, my experience mirrors the one reported here: I couldn't seem to get started for the longest time, despite my wife's good recommendation, and then I read it quickly and found it to be a "very good little book". I recommend it.
Lencioni describes a pyramid of dysfunctions that sabotage a team's effectiveness. This model can also be viewed as a sequence of patterns of effective teams: trust, conflict, commitment, accountability, and collective results. I have experienced elements of all five layers in my current "team". I would not say that we are as self-destructive as the team in his story, but even small chinks in the foundation can weaken the structure above. I hope that in my time as head we have taken steps in the right direction from dysfunction to function. Certainly, trust has been one of my focal points. I have been less successful in encouraging healthy conflict than I had hoped, which indicates weakness in trust. This is an area in which I can grow as a leader. Lencioni's model gives me a standard against which to evaluate myself.
Of course, as Seth Godin says, Obviously, knowing what to do is very, very different than actually doing it. That is one of the underlying themes of this blog and its name.
Speaking of Godin, I am reminded of a misgiving I've had about the parable books that dominate the popular business press. I've enjoyed many of them, but these days I am less eager to read another. I even told my wife that I was planning to skip past the story in Lencioni's book straight to the appendix that explains his model in about 30 pages. Typical academic that I am, I hungered for the meat of the book, not the appetizer.
Almost every one of the popular business books could be boiled down to a single chapter that states the main idea and tells me how to implement it. That doesn't make for much of a book, though, and wouldn't attract many readers. But why should a busy guy like me waste time reading the fully dressed version? Am I missing something? I didn't think so, but... There is a big gap between knowing and doing.
I was lucky to read Godin's blog entry How to read a business book within a day of finishing Five Dysfunctions, and he set my mind at ease. My understanding of these books is spot-on, yet the parable itself really is important. It is where the author hopes to cultivate in us the motivation to act. Godin says as much about his own books:
... if three weeks go by and you haven't taken action on what you've written down, you wasted your time.
Three weeks is probably too tight for me, with the onset of summer vacation and the end to regular interactions among the whole faculty until fall. I either need to re-read the book in August or find a way to act on what I've learned during the summer.
In closing, I see Godin's prescription as something like a unit test for my reading about teams and leadership:
Effective managers hand books to their team. Not so they can be reminded of high school, but so that next week she can say to them, "are we there yet?"
Kent Beck would be proud.
Someone pointed me toward a video of a talk given at Google by John Medina on his new book Brain Rules. I enjoyed the talk and will have to track down a copy of the book. Early on, he explains that the way we have designed our schools and workplaces produces the worst possible environments in which to learn and work. But my favorite passage came near the end, in response to the question, "Do you believe in magic?"
Hopefully I'm a nice guy, but I'm a really grumpy scientist, and in the end, I'm a reductionist. So if you can show me, [I'll believe it]. As a scientist, I have to be grumpy about everything and be able to be willing to believe anything. ... If you care what you believe, you should never be in the investigative fields -- ever. You can't care what you believe; you just have to care what's out there. And when you do that, your bandwidth is as wide as that sounds, and the rigor ... has to be as narrow as the biggest bigot you've ever seen. Both are resident in a scientist's mind at the same time.
Yes. Unfortunately, public discourse seems to include an unusually high number of scientists who are very good at the "being grumpy about everything" part and not so good at the "being able to be willing to believe anything" part. Notice that Medina said "be able to be willing to believe", not "be willing to believe". I think that some people are less able to be willing to believe something they don't already believe, which makes them not especially good candidates to be scientists.
Why is it that I feel compelled to write about getting a new Macbook Pro? Lots of people have one by now. But for a computer guy like me, a new laptop is one part professional tool and one part toy, a new user experience that shapes how I live.
Unlike my last laptop purchase, I splurged and bought an entry-level Macbook Pro. The 15" screen seems so much bigger than my iBook's 12" screen, because it is. The actual screen size is 13-1/8"x8-1/4" versus 9-3/4"x7-1/4", which is more than 50% larger. One motivation for buying the iBook last time was having a smaller machine for use while flying. That worked out as planned, but even so I don't travel all that much. I'll have a chance to see how well the new machine travels next week when I visit Google.
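The arithmetic is easy to check with the panel dimensions quoted above:

```ruby
# Back-of-the-envelope check on the "50% larger" claim,
# comparing display areas in square inches.
pro   = 13.125 * 8.25   # 15" Macbook Pro panel: 13-1/8" x 8-1/4"
ibook = 9.75 * 7.25     # iBook panel: 9-3/4" x 7-1/4"
puts format("%.0f%% larger", (pro / ibook - 1) * 100)   # => 53% larger
```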
Migrating files and configuration was much simpler this time. The Pro comes with a 200GB drive, rather than the 30GB(!) drive the 2005 iBook shipped with. Of course, this experience only accentuates that I am old. I think of that 30GB drive as horribly restrictive, yet not that many years ago I would have felt like a king with one. The new machine's drive is close enough to my office machine's 240-gig drive that I was able to mirror all of my files. That said, I was a bit surprised to find that the advertised "200GB Serial ATA" drive has an actual capacity of 186.31GB...
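The missing gigabytes are no mystery, just units. Drive makers count 10^9 bytes per "GB", while the operating system reports capacity in binary units of 2^30 bytes:

```ruby
# A "200GB" drive holds 200 * 10^9 bytes; dividing by 2^30 gives
# the binary-gigabyte figure the OS reports.
advertised = 200 * 10**9
reported   = advertised / 1024.0**3
puts format("%.2f", reported)   # => 186.26
```

The small remaining gap from the 186.31GB I saw presumably means the drive holds slightly more than exactly 200 billion bytes.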
It's a good idea for me to get a new machine every once in a while, and not just for the new technology, which is itself a wonderful advantage. I'm a creature of habit, more so than most people I know, and my brain benefits from being pulled out of its rut. My fingers must learn a new keyboard. I have to dig out a new bag to carry it in, because my OOPSLA 2005 bag isn't wide enough. The Leopard interface is just different enough to open my eyes to tasks that are now carried out subconsciously on the older machines.
Whenever I get a new machine and face the task of despoiling the pristine, out-of-the-box set-up of my system with my own files, I feel the urge to eliminate clutter. A big part of this is always clearing out my stuff/ folder -- currently at 13,604 files and 1.46 GB on disk. (My stuff/ folder is full of folders, so I just took a break to write a quick Ruby script to count the files for me.) But this time I also paid close attention to /Applications/personal, where I store nearly all of the Mac applications I install on my machine. The only exceptions are major-league apps such as iWork and Adobe Creative Suite.
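The counting script is nothing fancy; something like this does the job (the `count_files` name is just a throwaway helper):

```ruby
require 'find'

# Walk a directory tree and count the regular files in it,
# skipping the directories themselves.
def count_files(dir)
  count = 0
  Find.find(dir) { |path| count += 1 if File.file?(path) }
  count
end

puts count_files(".")
```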
/Applications/personal on my desktop machine contains 59 apps total, including four "classic" (pre-OS X) programs. I also have two folders of apps on trial in the stuff/ folder, totaling another 27.
Hello. My name is Eugene. I am an application junkie.
Whenever I read about a cool app in a blog or an e-mail or a magazine, I go "Ooh!" and download it. I delete many of these; for the 86 on my machine, I've probably tried and deleted several times as many. But often they find their way into a folder somewhere because I just know that I'll use them soon. But usually I don't. I don't use Paparazzi or Keyboard Cleaner, or PsyncX or WordService. They are all fine programs, I am sure, but they just never broke into my workflow. Same for Growl and Aquamacs Emacs.
So this time, I decided to transfer only programs that I recall using as a part of my work or play. Right now, the new Macbook has 20 apps, ranging from workhorses such as NetNewsWire and VoodooPad to programming tools such as PLT Scheme and Scratch down to fun little utilities such as LittleSecrets and PagePacker -- and one game so far, SudokuCompanion. Let's see what I miss from the big stash, if anything...
And don't get me started on the widgets I installed back when Dashboard seemed so very cool. I almost never use one of them. None have made it across the divide yet.
My Macbook Pro now knows me as wallingf. Perhaps I should give her a name, too. It's personal.
I grew up on the sitcoms of the 1970s and 1980s. As kids, we watched almost everything we saw in reruns, whether from the '60s or the '70s, and I enjoyed so many of them. By the time I got to college, I had well-thought-out ideas on why The Dick Van Dyke Show remains one of the best sitcoms ever, why WKRP in Cincinnati was underrated for its quality, and why All in the Family was _the_ best sitcom ever. I still hold all these biases in my heart. Of course, I didn't limit myself to sitcoms; I also loved light-action dramas, especially The Rockford Files.
Little did I know then that my TV viewing was soaking up a cognitive surplus in a time of social transition, or that it had anything in common with gin pushcarts in the streets of London at the onset of the Industrial Revolution.
Clay Shirky has published a wonderful little essay, Gin, Television, and Social Surplus, that taught me these things and put much of what we see happening on the web into the context of a changing social, cultural, and economic order. Shirky contends that, as our economy and technology evolve, a "cognitive surplus" is created. Energy that used to be spent on activities required in the old way is now freed for other purposes. But society doesn't know what to do with this surplus immediately, and so there is a transition period in which the surplus is dissipated in (we hope) harmless ways.
My generation, and perhaps my parents', was part of this transition. We consumed media content produced by others. Some denigrate that era as one of mindless consumption, but I think we should not be so harsh. Shows like All in the Family and, yes, WKRP in Cincinnati often tackled issues on the fault lines of our culture and gave people a different way to be exposed to new ideas. Even more frivolous shows such as The Dick Van Dyke Show and The Rockford Files helped people relax and enjoy, and this was especially useful for those who were unprepared for the expectations of a new world.
We are now seeing the advent of the new order in which people are not relegated to consuming from the media channels of others but are empowered to create and share their own content. Much attention is given by Shirky and many, many others to the traditional media such as audio and video, and these are surely where the new generation has had its first great opportunities to shape its world. As Shirky says:
Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for.
But as I've been writing about here, let's not forget the next step: the power to create and shape the media themselves via programming. When people can write programs, they are not relegated even to using the media they have been given but are empowered to create new media, and thus to express and share ideas that may otherwise have been limited to the abstraction of words. Flickr and YouTube didn't drop from the sky; people with ideas created new channels of dissemination. The same is true of tools like Photoshop and technologies such as wikis: they are ideas turned into reality through code.
Do read Shirky's article, if you haven't already. It has me thinking about the challenge we academics face in reaching this new generation and engaging them in the power that is now available to them. Until we understand this world better, I think that we will do well to offer young people lots of options -- different ways to connect, and different paths to follow into futures that they are creating.
One thing we can learn from the democratized landscape of the web, I think, is that we are not offering one audience many choices; we are offering many audiences the one or two choices each that they need to get on board. We can do this through programming courses aimed at different audiences and through interdisciplinary major and minor programs that embed the power of computing in the context of problems and issues that matter to our students.
Let's keep around the good old CS majors as well, for those students who want to go deep creating the technology that others are using to create media and content -- just as we can use the new technologies and media channels to keep great old sitcoms available for geezers like me.
I've received an invitation to Peter Denning's "Rebooting Computing" summit, which I first mentioned when covering Denning's talk at SIGCSE. The summit is scheduled for January 2009 and is part of Denning's NSF-funded Resparking Innovation in Computing Education project. This will be a chance to spend a few days with others thinking about this issue to outline concrete steps that we all might take to make change. I've written about this issue frequently here, most recently in the form of studio-based computing, project-based and problem-based learning, and programming for non-CS folks who want to bring computing into their own work, like scientists and artists. I'm excited about this chance.
A former student recently mentioned a tough choice he faces. He has a great job at a Big Company here in the Midwest. The company loves him and wants him to stay for the long term. He likes the job, the company, and the community in which he lives. But this isn't the sort of job he originally had hoped for upon graduation.
Now a position of just the sort he was originally looking for is available to him in a sunny paradise. He says, "I have quite a decision to make.... it's hard to convince myself to leave the secure confines of [Big Company]. Now I see why their turnover rate is so low."
I had a hard time offering any advice. When I was growing up, my dad worked for Ford Motor Company in an assembly plant, and he faced insecurity about the continuance of his job several times. I don't know how much this experience affected my outlook on jobs, but in any case my personality is one that tends to value security over big risk/big gain opportunities.
Now I hold a job with greater job security than anyone who works for a big corporation. An older colleague is fond of saying Real men don't accept tenure. I first heard him say that when I was in grad school, and I remember not getting it at all. What's not to like about tenure?
After a decade with tenure, I understand better now what he means. I always thought that the security provided by having tenure would promote taking risks, even if only of the intellectual sort. But too much security is just as likely to stunt growth and inhibit taking risks. I sometimes have to make a conscious effort to push myself out of my comfort zone. Intellectually, I feel free to try new things, but pushing myself out of a comfortable nest here into a new environment -- well, that's another matter. What are the opportunity costs in that?
I love what Paul Graham says about young CS students and grads having the ability to take entrepreneurial risk, and how taking those risks may well be the safer choice in the long run. It's kind of like investing in stocks instead of bonds, I think. I encourage all of my students to give entrepreneurship a thought, and I encourage even more the ones whom I think have a significant chance to do something big. There is probably a bit of wistfulness in my encouragement, not having done that myself, but I don't think I'm simply projecting my own feelings. I really do believe that taking some employment risk, especially while young, is good for many CS grads.
But when faced with a concrete case -- a particular student having to make a particular decision -- I don't feel quite so cocksure in saying "go for it with abandon". This is not abstract theory; his job and home and fiancee are all in play. He will have to make this decision on his own, and I'd hate to push him toward something that isn't right for him from my cushy, secure seat in the tower. I feel a need to stay abstract in my advice and leave him to sort things out. Fortunately, he is a bright, level-headed guy, and I'm sure he'll do fine whichever way he chooses. I wish him luck.
Surround yourself with smart, competent people, and you will find ideas in the air. One of the compelling thoughts in that article is this:
A scientific genius is not a person who does what no one else can do; he or she is someone who does what it takes many others to do.
For those of us who are not geniuses, the lesson is that we can still accomplish great things -- if we take part in the right sort of collaboration and are curious, inquisitive, and open to big ideas. I think this applies not only to inventions but also to ideas for start-ups and insights for class projects.
(So go to class. You'll find people there.)
But being in a group is not a path to easy accomplishment, as people who have tried to write a book in a group know:
Talking about a "group-book" is a lot of fun. Actually putting one together, maybe less fun.
The ongoing ChiliPLoP working group of which I am a member is another datapoint for Mitzenmacher's claim. Doing more than brainstorming ideas in a group takes all the same effort, coordination, and individual and collective responsibility as any other sort of work.
(As an aside, I love Stigler's Law as quoted in the Gladwell article linked above! Self-reference can be a joy, especially with the twist engendered by this one.)
I sometimes talk about lecture (say, here) as not being the optimal way for students to learn. That doesn't mean I think lecture has no value at all. I still lecture, though I prefer to punctuate my disquisition with occasional problem breaks, during which students try out some idea. Without those breaks, the active learning they afford, and the feedback they can give students about where they stand, I sometimes wonder how much value being in class with me for seventy-five minutes has.
It turns out that there is probably value even in "just listening". Mark Guzdial recently described work by a psychology grad student that explains the relationship between learning and reading text, hearing narration, and viewing images. Most people learn more efficiently when they hear an explanation while looking at text, code, or diagrams. If they read the same explanation while looking at the text, code, or diagrams, it will take them longer to learn the material to the same depth.
So, coming to class and hearing a good lecture can be a good investment of time. It jump-starts the brain. Of course, the student still needs to read and work through problems at home later, too. Reading and solving problems give the mind an opportunity to rehearse and to process material more deeply. The result of listening to lecture followed by intense study can be a powerful form of learning.
I encourage students to take advantage of all their modalities. Augmenting lecture with opportunities for practice and feedback gives them a strong combination of learning styles in class. Then, as often as I can, I provide students with written notes that contain both my explanations and the in-class exercises we did. This allows them to recall their in-class experience as much as possible. On the occasions when students really must miss class, the notes give them a flavor of what happened, but reading the notes is a poor substitute for experiencing the class live. Then, I assign readings from a text or other sources that supplement the material we cover in class with a different presentation. Finally, I ask students to do a significant amount of project work, which gives them the chance to learn by doing while exercising their knowledge of the material in ways that make connections in their minds. I hope that this multi-faceted approach maximizes student opportunities to learn deeply and come to appreciate what they learn.
I notice a common rhetorical device in many academic arguments. It goes like this. One person makes a claim and offers some evidence. Often, the claim involves doing something new or seeing something in a new way. The next person rebuts the argument with a claim that the old way of doing or seeing things is more "fundamental" -- it is the foundation on which other ways of doing and seeing are built. Oftentimes, the rebuttal comes with no particular supporting evidence, with the claimant relying on many in the discussion to accept the claim prima facie. We might call this The Fundamental Imperative.
This device is standard issue in the CS curriculum discussions about object-oriented programming and structured programming in first-year courses. I recently noticed its use on the SIGCSE mailing list, in a discussion of what mathematics courses should be required as part of a CS major. After several folks observed that calculus was being de-emphasized in some CS majors, in favor of more discrete mathematics, one frequent poster declared:
(In a word, computer science is no longer to be considered a hard science.)
If we know [the applicants'] school well we may decide to treat them as having solid and relevant math backgrounds, but we will no longer automatically make that assumption.
Often, the conversation ends there; folks don't want to argue against what is accepted as basic, fundamental, good, and true. But someone in this thread had the courage to call out the emperor:
If you want good physicists, then hire people who have calculus. If you want good computer scientists, then hire people who have discrete structures, theory of computation, and program verification.
I don't believe that people who are doing computer science are not doing "hard science" just because it is not physics. The world is bigger than that.
You say "solid and relevant" when you really should be saying "relevant". The math that CS majors take is solid. It may not be immediately relevant to problems [at your company]. That doesn't mean it is not "solid" or "hard science".
I sent this poster a private "thank you". For some reason, people who drop The Fundamental Imperative into an argument seem to think that it is true absolutely, regardless of context. Sure, there may be students who would benefit from learning to program using a "back to the basics" approach, and there may be CS students for whom calculus will be an essential skill in their professional toolkits. But that's probably not true of all students, and it may well be that the world has changed enough that most students would benefit from different preparation.
"The Fundamental Imperative" is a nice formal name for this technique, but I tend to think of it as "if it was good enough for me...", because so often it comes down to old fogies like me projecting our experience onto the future. Both parties in such discussions would do well not to fall victim to their own storytelling.
In his recent bestseller The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb uses the term narrative fallacy to describe man's penchant for creating a story after the fact, perhaps subconsciously, in order to explain why something happened -- to impute a cause for an event we did not expect. This fallacy derives from our habit of imposing patterns on data. Many view this as a weakness, but I think it is a strength as well. It is good when we use it to communicate ideas and to push us into backing up our stories with empirical investigation. It is bad when we let our stories become unexamined truth and when we use the stories to take actions that are not warranted or well-founded.
Of late, I've been thinking of the narrative fallacy in its broadest sense, telling ourselves stories that justify what we see or want to see. My entry on a response to the Onward! submission by my ChiliPLoP group was one trigger. Those of us who believe strongly that we could and perhaps should be doing something different in computer science education construct stories about what is wrong and what could be better; we're like anyone else. That one OOPSLA reviewer shed a critical light on our story, questioning its foundation. That is good! It forces us to re-examine our story, to consider to what extent it is narrative fallacy and to what extent it matches reality. In the best case, we now know more about how to tell the story better and what evidence might be useful in persuading others. In the worst, we may learn that our story is a crock. But that's a pretty good worst case, because it gets us back on the path to truth, if indeed we have fallen off.
A second trigger was finding a reference in Mark Guzdial's blog to a short piece on universal programming literacy at Ken Perlin's blog. "Universal programming literacy" is Perlin's term for something I've discussed here occasionally over the last year, the idea that all people might want or need to write computer programs. Perlin agrees but uses this article to consider whether it's a good idea to pursue the possibility that all children learn to program. It's wise to consider the soundness of your own ideas every once in a while. While Perlin may not be able to construct as challenging a counterargument as our OOPSLA reviewer did, he at least is able to begin exploring the truth of his axioms and the soundness of his own arguments. And the beauty of blogging is that readers can comment, which opens the door to other thinkers who might not be entirely sympathetic to the arguments. (I know...)
It is essential to expose our ideas to the light of scrutiny. It is perhaps even more important to expose the stories we construct subconsciously to explain the world around us, because they are most prone to being self-serving or simply convenient screens to protect our psyches. Once we have exposed the story, we must adopt a stance of skepticism and really listen to what we hear. This is the mindset of the scientist, but it can be hard to take on when our cherished beliefs are on the line.
... with folks in university administration, colleagues whom I respect and with whom I like to work.
Them: These reports can't be combined. Please make separate ones.
Me: They will be identical.
Them: That's okay.
Them: These reports are identical. That's a problem.
Me: That's what you asked me to do.
Them: They can't be identical. Please make them different.
Me: But the committee in charge told us long ago to combine the tasks, so we did. Most of the resulting document really is one item, so we have a common report.
Them: But the reports can't be the same. Please make them different.
Me: But they aren't different.
Them: If they aren't different, the board will be unhappy.
Me: But if I make reports that differ artificially, they'll see that, too. And if they differ only a little, then it will be hard to tell where they differ.
Them: If they aren't different, the board will be unhappy. Please make them different.
This is the sort of situation that makes me wonder (1) if the board really understands how universities work and (2) what I am doing in The Office.
The verdict is in on the paper we wrote at ChiliPLoP and submitted to Onward!: rejected. (We are still waiting to hear back on our Educators' Symposium submission.) The reviews of our Onward! paper were mostly on the mark, both on surface features (e.g., our list of references was weak) and on the deeper ideas we offer (e.g., questions about the history of studio approaches, and questions about how the costs will scale). We knew that this submission was risky; our time was simply too short to afford enough iterations and legwork to produce a good enough paper for Onward!.
I found it interesting that the most negative reviewer recommended the paper for acceptance. This reviewer was clearly engaged by the idea of our paper and ended up writing the most thorough, thoughtful review, challenging many of our assumptions along the way. I'd love to have the chance to engage this person in conversation at the conference. For now, I'll have to settle for pointing out some of the more colorful and interesting bits of the review.
In at least one regard, this reviewer holds the traditional view about university education. When it comes to the "significant body of knowledge that is more or less standard and that everyone in the field should acquire at some point in time", "the current lecture plus problem sets approach is a substantially more efficient and thorough way to do this."
Agreed. But isn't it more efficient to give the students a book to read? A full prof or even a TA standing in a big room is an expensive way to demonstrate standard bodies of knowledge. Lecture made more sense when books and other written material were scarce and expensive. Most evidence on learning is that lecture is actually much less effective than we professors (and the students who do well in lecture courses) tend to think.
The reviewer does offer one alternative to lecture: "setting up a competition based on mastery of these skills". Actually, this approach is consistent with the spirit of our paper's studio-based, apprenticeship-based, and project-based approach. Small teams working to improve their skills in order to win a competition could well inhabit the studio. Our paper tended to overemphasize the softer collaboration of an idyllic large-scale team.
This comment fascinated me:
Another issue is that this approach, in comparison with standard approaches, emphasizes work over thinking. In comparison with doing, for example, graph theory or computational complexity proofs, software development has a much lower ratio of thought to work. An undergraduate education should maximize this ratio.
Because I write a blog called Knowing and Doing, you might imagine that I think highly of the interplay between working and thinking. The reviewer has a point: an education centered on projects in a studio must be certain to engage students with the deep theoretical material of the discipline, because it is that material which provides the foundation for everything we do and which enables us to do and create new things. I am skeptical of the notion that an undergrad education should maximize the ratio of thinking to doing, because thinking unfettered by doing tends to drift off into an ether of unreality. However, I do agree that we must try to achieve an appropriate balance between thinking and doing, and that a project-based approach will tend to list toward doing.
One comment by the reviewer reveals that he or she is a researcher, not a practitioner:
In my undergraduate education I tried to avoid any course that involved significant software development (once I had obtained a basic mastery of programming). I believe this is generally appropriate for undergraduates.
Imagine the product of an English department saying, "In my undergraduate education I tried to avoid any course that involved significant composition (once I had obtained a basic mastery of grammar and syntax). I believe this is generally appropriate for undergraduates." I doubt this person would make much of a writer. He or she might be well prepared, though, to teach lit-crit theory at a university.
Most of my students go into industry, and I encourage them to take as many courses as they can in which they will build serious pieces of software with intellectual content. The mixture of thinking and doing stretches them and keeps them honest.
An education system that produces both practitioners and theoreticians must walk a strange line. One of the goals of our paper was to argue that a studio approach could do a better job of producing both researchers and practitioners than our current system, which often seems to do only a middling job by trying to cater to both audiences.
I agree wholeheartedly, though, with this observation:
A great strength of the American system is that it keeps people's options open until very late, maximizing the ability of society to recognize and obtain the benefits of placing able people in positions where they can be maximally productive. In my view this is worth the lack of focus.
My colleagues and I need to sharpen our focus so that we can communicate more effectively the notion that a system based on apprenticeship and projects in a studio can, in fact, help learners develop as researchers and as practitioners better than a traditional classroom approach.
The reviewer's closing comment expresses rather starkly the challenge we face in advocating a new approach to undergraduate education:
In summary, the paper advocates a return to an archaic system that was abandoned in the sciences for good reason, namely the inefficiency and ineffectiveness of the advocated system in transmitting the required basic foundational information to people entering the field. The write-up itself reflects naive assumptions about the group and individual dynamics that are required to make the approach succeed. I would support some of the proposed activities as part of an undergraduate education, but not as the primary approach.
The fact that so many university educators and graduates believe our current system exists in its current form because it is more efficient and effective than the alternatives -- and that it was designed intentionally for these reasons -- is a substantial cultural obstacle to any reform. Such is the challenge. We owe this reviewer our gratitude for laying out the issues so well.
In closing, I can't resist quoting one last passage from this review, for my friends in the other sciences:
The problem with putting students with no mastery of the basics into an apprenticeship position is that, at least in computer science, they are largely useless. (This is less true in sciences such as biology and chemistry, which involve shallower ideas and more menial activities. But even in these sciences, it is more efficient to teach students the basics outside of an apprenticeship situation.)
The serious truth behind this comment is the one that explains why building an effective computer science research program around undergraduates can be so difficult. The jocular truth behind it is that, well, CS is just plain deeper and harder! (I'll duck now.)
Not from me, but from The Common School Journal, an early-1800s primer on teaching:
Make no effort to simplify language.
Also: Treat all students as equals, with expectation that all participate and learn. Teach where each student lacks.
I think I have a more detailed essay on this topic in me, in terms of how we teach programming and software development. But it will have to wait for another day. In any case, my agedness seems to be growing asymptotically faster than my wisdom.
I've tried to explain the idea of software patterns in a lot of different ways, to a lot of different kinds of people. Reading James Tauber's Grammar Rules reminds me of one of my favorites: a pattern language is a descriptive grammar. Patterns describe how (good) programmers "really speak" when they are working in the trenches.
Talking about patterns as grammar creates the potential for the sort of misunderstanding that Tauber discusses in his entry. Many people, including many linguists, think of grammar rules as, well, rules. I was taught to "follow the rules" in school and came to think of the rules as beyond human control. Linguists know that the rules of grammar are man-made, yet some still seem to view them as prescriptive:
It is as if these people are viewing rules of grammar like they would road rules--human inventions that one may disagree with, but which are still, in some sense, what is "correct"...
Software patterns are rarely prescriptive in this sense. They describe a construct that programmers use in a particular context to balance the forces at play in the problem. Over time, they have been found useful and so recur in similar contexts. But if a programmer decides not to use a pattern in a situation where it seems to apply, the programmer isn't "wrong" in any absolute sense; he'll just have to resolve the competing forces in some other way.
While the programmer isn't wrong, other programmers might look at him (or, more accurately, his program) funny. They will probably ask "why did you do it that way?", hoping to learn something new, or at least to confirm that the programmer has done something odd.
This is similar to how human grammar works. If I say, "Me wrote this blog", you would be justified in looking at me funny. You'd probably think that I was speaking incorrectly.
Tauber points out that, while I might be violating the accepted rules of grammar, I'm not wrong in any absolute sense:
... most linguists focus on modeling the tacit intuitions native speakers have about their language, which are very often at odds with the "rules of grammar" learnt at school.
He gives a couple of examples of rules that we hear broken all of the time. For example, native speakers of English almost always say "It's me", not "It's I", though that violates the rules of nominative and accusative case. Are we all wrong? In Sr. Jeanne's 7th-grade English class, perhaps. But English grammar didn't fall from the heavens as incontrovertible rules; it was created by humans as a description of accepted forms of speech.
When a programmer chooses not to use a pattern, other programmers are justified in taking a second look at the program and asking "why?", but they can't really say he's guilty of anything more than doing things differently.
Like grammar rules, some patterns are more "right" than others, in the sense that it's less acceptable to break some than others. I can get away with "It's me", even in more formal settings, but I cannot get away with "Me wrote this blog", even in the most informal settings. An OO programmer might be able to get away with not using the Chain of Responsibility pattern in a context where it applies, but not using Strategy or Composite in appropriate contexts just makes him look uninformed, or uneducated.
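To make the analogy concrete, here is a minimal sketch of the Strategy pattern in Ruby. (The class and method names are mine, invented for illustration.) The sorting policy lives in an interchangeable object rather than in a conditional buried inside the report, which is exactly the "accepted usage" the pattern describes:

```ruby
# Strategy: the report delegates its sorting policy to a pluggable object.
class Report
  def initialize(sorter)
    @sorter = sorter
  end

  def render(items)
    @sorter.sort(items).join(", ")
  end
end

class AlphabeticalSorter
  def sort(items)
    items.sort
  end
end

class ByLengthSorter
  def sort(items)
    items.sort_by { |item| item.length }
  end
end

words = %w[chain composite strategy]
puts Report.new(AlphabeticalSorter.new).render(words)  # => chain, composite, strategy
puts Report.new(ByLengthSorter.new).render(words)      # => chain, strategy, composite
```

A programmer could instead write an if-statement inside `render` to pick the ordering; the program would still run. He would just be resolving the forces differently, and his colleagues would be entitled to ask "why?".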
A few more thoughts:
So, patterns are not like the grammar of a programming language, which is prescriptive. To speak Java at all, you have to follow the rules. They are like the grammar of a human language, which models observations about how people speak in the wild.
As a tool for teaching and learning, patterns are so useful precisely because they give us a way to learn accepted usages that go beyond the surface syntactic rules of a language. Even better, the pattern form emphasizes documenting when a construct works and why. Patterns are better than English grammar in this regard, at least better than the way English grammar is typically taught to us as schoolchildren.
There are certainly programmers, software engineers, and programming language theorists who want to tell us how to program, to define prescriptive rules. There can be value in this approach. We can often learn something from a model that has been designed based on theory and experience. But to me prescriptive models for programming are most useful when we don't feel like we have to follow them to the letter! I want to be able to learn something new and then figure out how I can use it to become a better programmer, not a programmer of the model's kind.
But there is also a huge, untapped resource in writing the descriptive grammar of how software is built in practice. It is awfully useful to know what real people do -- smart, creative people; programmers solving real problems under real constraints. We don't understand programming or software development well enough yet not to seek out the lessons learned by folks working in the trenches.
This brings to mind a colorful image, of software linguists venturing into the thick rain forest of a programming ecosystem, uncovering heretofore unexplored grammars and cultures. This may not seem as exotic as studying the Pirahã, but we never know when some remote programming tribe might upend our understanding of programming...
Brian Marick lamented recently that his daughter's homework probably wasn't affecting her future in the same way that some of his school experiences affected his. I've had that feeling, too, but sometimes wonder whether (1) my memory is good enough to draw such conclusions and (2) my daughters will remember key experiences from their school days anyway. After teaching for all these years I am sometimes surprised by what former students remember from their time in my courses, and how those memories affect them.
Brian's mention of New Math elicited some interesting comments. Kevin Lawrence hit on a point that has been on my mind in two contexts lately:
A big decision point in education is whether you are optimizing for people who will go on to be very good at a subject or for people who find it difficult.
In the context of university CS curricula, I often field complaints from colleagues here and everywhere about how the use of (graphics | games | anything post-1980 | non-scientific applications) in CS courses is a dumbing down of our curriculum. These folks claim that we are spending too much time catering to folks who won't succeed in the discipline, or at least excel, and that at the same time we drive away the folks who would be good at CS but dislike the "softness" of the new approach.
In the context of reaching out to pre-university students, to show folks cool and glitzy things that they might do in computer science, I sometimes hear the same sort of thing. Be careful, folks say, not to popularize the science too much. We might mislead students into thinking that CS is not serious, or that it is easy.
I fully agree that we don't want to mislead middle schoolers or CS majors about the content or rigor of our discipline, or to give the impression that we cannot do serious and important work. But physics students and math geeks are not the only folks who can or should use computing. They are most definitely not the only folks who can make vital contributions to the discipline. (We can even learn from people who quote "King Lear".)
By not reaching out to students with different views and interests, we do computer science a disservice. Once they are attracted to the discipline and excited to learn, we can teach them all about rigor and science and math. Some of those folks won't succeed in CS, but then again neither do some of the folks who come in with the more traditional "geeky" interests.
If this topic interests you, follow the trail from Brian's blog to two blog entries by Kevin Lawrence, one old and one new. Both are worth a read. (I always knew there was a really good reason to enable comments on my blog -- Alan Kay might drop by!)
I may not be a web guy, but some of my students are -- and very good ones. Back in December, I wrote about one of my students, Sergei Golitsinski, defending an MA thesis in Communications, which used computing to elucidate a problem in that discipline. For that study, he wrote tools that allowed him to trace the threads of influence in a prominent blog-driven controversy.
Sergei finally defended his MS thesis in computer science yesterday. Its title -- "Specification and Automatic Code Generation of the Data Layer for Data-Intensive Web-Based Applications" -- sounds like the usual thesis title, but as is often the case the idea behind it is really quite accessible. This thesis shows how you can use knowledge about your web application to generate much of the code you need for your site.
I like this work for several reasons. First, it was all about finding patterns in real applications and using them to inform software development. Second, it focused on how to use domain knowledge to get leverage from the patterns. Third, it used standard language-processing ideas to create a modeling language and then use models written in it to generate code. This thesis demonstrates how several areas of computer science -- database, information storage and retrieval, and programming languages among them -- can work together to help us write programs to do work for us. I also like it because Sergei applied his ideas to his own professional work and took a critical look at what the outcome means for his own practice.
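Sergei's actual modeling language isn't reproduced here, but a hypothetical miniature in Ruby conveys the flavor of the idea: a declarative description of an entity drives the generation of the boilerplate data-layer code you would otherwise write by hand. (The entity, its fields, and the generator are all my own inventions for illustration, not his notation.)

```ruby
# A toy model: a declarative description of one entity in the application.
ENTITY = { name: "Article", fields: [:title, :body, :published_on] }

# Generate Ruby source for a data class from the entity description.
def generate_class(entity)
  fields = entity[:fields]
  <<~RUBY
    class #{entity[:name]}
      attr_accessor #{fields.map { |f| ":#{f}" }.join(', ')}

      def initialize(#{fields.map { |f| "#{f}:" }.join(', ')})
        #{fields.map { |f| "@#{f} = #{f}" }.join('; ')}
      end
    end
  RUBY
end

eval(generate_class(ENTITY))  # define the generated class at runtime

article = Article.new(title: "Patterns", body: "...", published_on: "2008-06-01")
puts article.title  # => Patterns
```

A real system would generate persistence code, validation, and queries from a much richer model; the point is only that the model, not the programmer, carries the repetitive knowledge.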
Listening to the defense, I had two favorite phrases. The first was recursive weakness. He used this term in reference to weak entities in a database that are themselves parents to weak entities. But it brought to mind so many images for the functional programmer in me. (I'm almost certainly recursively weak myself, but where is the base case?) The second arose while discussing alternative approaches to a particular problem. Referring to one, he said trivial approach; non-trivial implementation. It occurred to me that so many ideas fall into this category, and part of understanding your domain well is recognizing them. Sometimes we need to avoid their black holes; other times, we need their challenges. Another big part of becoming a master is knowing which path to choose once you have recognized them.
Sergei is a master, and soon he will have a CS degree that says so. But like all masters, he has much to learn. When I wrote about his previous defense, his plan was up in the air but pointing toward applying CS in the world of communications. Since then, he has accepted admission to a Ph.D. program in communications at the University of Maryland, where he hopes to be in the vanguard of a new discipline he calls computational communications. I look forward to watching his progress.
You can read his CS thesis on-line, and soon all of the code used in his study will be available, too.
I am not a web guy. If I intend to teach languages (PHP) or frameworks (Rails) with the web as their natural home, I need to do a lot more practice myself. It's too easy to know how to do something and still not know it well enough to teach it well. Unexpected results under time pressure create too much trouble.
Ruby's dynamic features are so, so nice. Still, I occasionally find myself wishing for Smalltalk. First love never dies.
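A small taste of the dynamism I mean, in a toy class of my own: method_missing plays roughly the role of Smalltalk's doesNotUnderstand:, letting an object field messages it never explicitly defined.

```ruby
# Recorder accepts any message sent to it and remembers the message names.
class Recorder
  attr_reader :messages

  def initialize
    @messages = []
  end

  # Like Smalltalk's doesNotUnderstand:, this hook fires for any
  # message the object has no method for.
  def method_missing(name, *args)
    @messages << name
    self  # return self so sends can be chained
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

r = Recorder.new
r.draw.rotate(90).save
puts r.messages.inspect  # => [:draw, :rotate, :save]
```

It is the kind of trick that makes Rails's pretty APIs possible, and it is also the kind of trick Smalltalkers have been playing since before most of my students were born.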
Fifteen hours of instruction -- 5 weeks at 3 hours per week -- is plenty of time to teach most or all of the ideas in a language like bash, PHP, or Ruby. But the instructor still needs to select specific examples and parts of the class library carefully. It's too easy to start down a path of "Now look at this [class, method, primitive]..."
When I succeeded in selecting carefully, I suffered from persistent omitter's remorse: "But I wish I could have covered that..." Sometimes that is what students wanted to see. But most of the time they can figure that out on their own. What they want is some insight. What insight could I have shared had I covered that instead of this?
If you want to know what students want, ask them. Easy to say, hard to do unless I slow down occasionally to reflect.
Practice, practice, practice. That's where students learn. It's also where students who don't learn don't learn.
Oh, and professor: That "Practice, practice, practice" thing -- it applies to you, too. You'll remember just how much fun programming can be.