January 31, 2014 3:13 PM

"What Should It Be?"

When asked to design and implement a program, beginning programmers often aren't sure what data type or data structure to use for a particular value. Should they use an array or a list? Or they've decided to use a record but can't decide exactly what fields to include, or what names to give them.

"What should it be?", they ask.

I often have a particular implementation in mind, based on what we've been studying or on my own experience as a programmer, but I prefer not to tell them what to do. This is a great opportunity for them to learn to think about design.

Instead, I ask questions. "What have you considered?" "Do you think one or the other is better?" "Why?"

We discuss how so often there is no "right" answer. There are merely trade-offs. They have to choose. This is a design decision.

But, in making this decision, there's another opportunity to learn something about design. They don't have to commit now and forever to an implementation before proceeding with the rest of their program. Because the rest of the program shouldn't know about their decision anyway!

They should make an object that encapsulates the choice. They are then able to start building the rest of the program without fear that it depends on the details of their design choice. The rest of the program will interact with the object in terms of what the object means in the program, not in terms of how it is implemented. Later, if they change their minds, they will be able to change the implementation details without disturbing the rest of the code.
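
Here is a minimal sketch of the idea in Python. The gradebook example and all of its names are invented for illustration; the point is only that the storage decision lives in one place:

```python
class Gradebook:
    def __init__(self):
        # The design decision lives here, and only here. Today it is a
        # dict mapping student name -> list of scores. Tomorrow it might
        # be a list of records, or a database table.
        self._scores = {}

    def record(self, student, score):
        """Record one score for a student."""
        self._scores.setdefault(student, []).append(score)

    def average(self, student):
        """Return the student's average score, or None if no scores yet."""
        scores = self._scores.get(student)
        if not scores:
            return None
        return sum(scores) / len(scores)


# The rest of the program speaks in terms of students and scores,
# never in terms of dicts or lists:
book = Gradebook()
book.record("Ada", 92)
book.record("Ada", 88)
print(book.average("Ada"))   # 90.0
```

If the dict of lists later proves awkward, only the body of Gradebook changes; the calling code never knows.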

Yes, this is basic stuff, but beginners often struggle with basic stuff. They've learned about ADTs or OOP, and they can talk abstractly about abstraction. But when it comes time to write code, indecision descends upon them. They are afraid of messing up.

If I can help allay their fears of proceeding, then I've contributed something to their work that day. I even suggest that writing the rest of the program might help them figure out which alternative is better. I like to listen to my code, even if that idea seems strange or absurd to them. Some day soon, it may not.

In any case, they have the beginnings of a program, and perhaps a better idea of what design is all about.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 27, 2014 3:29 PM

An Example of the Difference Between Scientists and Humanists

Earlier today, I tweeted a link to The origin of consciousness in the breakdown of the bicameral mind, in which Erik Weijers discusses an unusual theory about the origin of consciousness developed by Julian Jaynes:

[U]ntil a few thousand years ago human beings did not 'view themselves'. They did not have the ability: they had no introspection and no concept of 'self' that they could reflect upon. In other words: they had no subjective consciousness. Jaynes calls their mental world the bicameral mind.

It sounds odd, I know, but I found Jaynes's hypothesis to be a fascinating extrapolation of human history. Read more of Weijers's review if you're interested.

A number of people who saw my tweet expressed interest in the article or a similar fascination with Jaynes's idea. Two people mentioned the book in which Jaynes presented his hypothesis. I responded that I would now have to dive into the book and learn more. How could I resist the opportunity?

Two of the comments that followed illustrate nicely the differing perspectives of the scientist and the humanist. First, Chris said:

My uncle always loved that book; I should read it, since I suspect serious fundamental evidentiary problems with his thesis.

And then Liz said:

It's good! I come from a humanities angle, so I read it as a thought experiment & human narrative.

The scientist thinks almost immediately of evidence and how well supported the hypothesis might be. The humanist thinks of the hypothesis first as a human narrative, and perhaps only then as a narrow scientific claim. Both perspectives are valuable; they simply highlight different forms of the claim.

From what I've seen on Twitter, I think that Chris and Liz are like me and most of the people I know: a little bit scientist, a little bit humanist -- interested in both the story and the argument. All that differs sometimes is the point from which we launch our investigations.


Posted by Eugene Wallingford | Permalink | Categories: General

January 27, 2014 11:39 AM

The Polymath as Intellectual Polygamist

Carl Djerassi, quoted in The Last Days of the Polymath:

Nowadays people [who] are called polymaths are dabblers -- are dabblers in many different areas. I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.

On this view, a dilettante is merely promiscuous, making no real commitment to any love interest. A polymath has many great loves, and loves them all deeply, if not equally.

We tend to look down on dilettantes, but they can perform a useful service. Sometimes, making a connection between two ideas at the right time and in the right place can help spur someone else to "go deep" with the idea. Even when that doesn't happen, dabbling can bring great personal joy and provide more substantial entertainment than a lot of pop culture.

Academics are among the people these days with a well-defined social opportunity to explore at least two areas deeply and seriously: their chosen discipline and teaching. This is perhaps the most compelling reason to desire a life in academia. It even offers a freedom to branch out into new areas later in one's career that is not so easily available to people who work in industry.

These days, it's hard to be a polymath even inside one's own discipline. To know all sub-areas of computer science, say, as well as the experts in those sub-areas is a daunting challenge. I think back to the effort my fellow students and I put in over the years that enabled us to take the Ph.D. qualifying exams in CS. I did quite well across the board, but even then I didn't understand operating systems or programming languages as well as experts in those areas. Many years later, despite continued reading and programming, the gap has only grown.

I share the vague sense of loss, expressed by the author of the article linked to above, of a time when one human could master multiple areas of discourse and make fundamental advances to several. We are certainly better off for collectively understanding the world so much better, but the result is a blow to a certain sort of individual mind and spirit.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 26, 2014 3:05 PM

One Reason We Need Computer Programs

Code bridges the gap between theory and data. From A few thoughts on code review of scientific code:

... there is a gulf of unknown size between the theory and the data. Code is what bridges that gap, and specifies how edge cases, weird features of the data, and unknown unknowns are handled or ignored.

I learned this lesson the hard way as a novice programmer. Other activities, such as writing and doing math, exhibit the same characteristic, but it wasn't until I started learning to program that the gap between theory and data really challenged me.
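
A tiny, hypothetical example of the gap: in theory, an average is just the sum divided by the count. The code has to decide everything the theory leaves unsaid:

```python
# Theory: mean(xs) = sum(xs) / len(xs).
# Code must also handle what the theory never mentions: missing
# values, empty inputs, and other weird features of real data.

def mean(values):
    """Average of values, ignoring missing entries (None).

    Returns None for an empty input rather than dividing by zero --
    one of several defensible choices the theory doesn't make for us.
    """
    cleaned = [v for v in values if v is not None]
    if not cleaned:
        return None
    return sum(cleaned) / len(cleaned)

print(mean([3.0, None, 5.0]))  # 4.0
print(mean([]))                # None
```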

Since learning to program myself, I have observed hundreds of CS students encounter this gap. To their credit, they usually buckle down, work hard, and close the gap. Of course, we have to close the gap for every new problem we try to solve. The challenge doesn't go away; it simply becomes more manageable as we become better programmers.

In the passage above, Titus Brown is talking to his fellow scientists in biology and chemistry. I imagine that they encounter the gap between theory and data in a new and visceral way when they move into computational science. Programming has that power to change how we think.

There is an element of this, too, in how techies and non-techies alike sometimes lose track of how hard it is to create a successful start-up. You need an idea, you need a programmer, and you need a lot of hard work to bridge the gap between idea and executed idea.

Whether doing science or starting a company, the code teaches us a lot about our theory. The code makes our theory better.

As Ward Cunningham is fond of saying, it's all talk until the tests run.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 24, 2014 2:18 PM

Could I be a programmer?

... with apologies to Annie Dillard.

A well-known programmer got collared by a university student who asked, "Do you think I could be a programmer?"

"Well," the programmer said, "I don't know... Do you like function calls?"

The programmer could see the student's amazement. Function calls? Do I like function calls? I am twenty years old and do I like function calls?

If the student had liked function calls, of course, he could begin, like a joyful painter I knew. I asked him how he came to be a painter. He said, "I liked the smell of paint."


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 23, 2014 4:14 PM

The CS Mindset

Chad Orzel often blogs about the physics mindset, the default orientation that physicists tend to have toward the world, and the way they think about and solve problems. It is fun to read a scientist talking about doing science.

Earlier this week I finally read this article about the popularity of CS50, an intro CS course at Harvard. It's all about how Harvard is "righting an imbalance in academia" by finally teaching practical skills to its students. When I read:

"CS50 made me look at Harvard with new eyes," Guimaraes said.

That is a sea change from what Harvard represented to the outside world for decades: the guardian of a classic education, where the value of learning is for its own sake.

I sighed audibly, loud enough for the students on the neighboring exercise equipment to hear. A Harvard education used to be about learning only for its own sake, but now students can learn practical stuff, too. Even computer programming!

As I re-read the article now, I see that it's not as blunt as that throughout. Many Harvard students are learning computing because of the important role it plays in their futures, whatever their major, and they understand the value of understanding it better. But there are plenty of references to "practical ends" and Harvard's newfound willingness to teach practical skills it once considered beneath it.

Computer programming is surely one of those topics old Charles William Eliot would deem unworthy of inclusion in Harvard's curriculum.

I'm sensitive to such attitudes because I think computer science is and should be more. If you look deeper, you will see that the creators of CS50 think so, too. On its Should I take CS50? FAQ page, we find:

More than just teach you how to program, this course teaches you how to think more methodically and how to solve problems more effectively. As such, its lessons are applicable well beyond the boundaries of computer science itself.

The next two sentences muddy the water a bit, though:

That the course does teach you how to program, though, is perhaps its most empowering return. With this skill comes the ability to solve real-world problems in ways and at speeds beyond the abilities of most humans.

With this skill comes something else, something even more important: a discipline of thinking and a clarity of thought that are hard to attain when you learn "how to think more methodically and how to solve problems more effectively" in the abstract or while doing almost any other activity.

Later the same day, I was catching up on a discussion taking place on the PLT-EDU mailing list, which is populated by the creators, users, and fans of the Racket programming language and the CS curriculum designed in tandem with it. One poster offered an analogy for talking to HS students about how and why they are learning to program. A common theme in the discussion that ensued was to take the conversation off of the "vocational track". Why encourage students to settle for such a limiting view of what they are doing?

One snippet from Matthias Felleisen (this link works only if you are a member of the list) captured my dissatisfaction with the tone of the Globe article about CS50:

If we require K-12 students to take programming, it cannot be justified (on a moral basis) with 'all of you will become professional programmers.' I want MDs who know the design recipe, I want journalists who write their articles using the design recipe, and so on.

The "design recipe" is a thinking tool students learn in Felleisen "How to Design Programs" curriculum. It is a structured way to think about problems and to create solutions. Two essential ideas stand out for me:

  • Students learn the design recipe in the process of writing programs. This isn't an abstract exercise. Creating a working computer program is tangible evidence that the student has understood the problem and created a clear solution.
  • This way of thinking is valuable for everyone. We will all be better off if our doctors, lawyers, and journalists are able to think this way.
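
For readers who haven't seen it, here is a rough sketch of the recipe's shape. HtDP teaches it in Racket's student languages; this adaptation to Python is mine, purely for illustration, and the steps matter more than the syntax:

```python
# 1. Data definition: a temperature is a float, in the degrees named.
# 2. Signature: fahrenheit_to_celsius : float -> float
def fahrenheit_to_celsius(f):
    # 3. Purpose statement: convert a Fahrenheit temperature to Celsius.
    # 4. Examples, worked out before writing the body:
    #      212 -> 100.0, 32 -> 0.0
    # 5. Body, following the structure of the data:
    return (f - 32) * 5 / 9

# 6. Tests: the examples, now run as checks.
assert fahrenheit_to_celsius(212) == 100.0
assert fahrenheit_to_celsius(32) == 0.0
```

The discipline is in the order: the data definition, signature, purpose, and examples come before the body, and the examples become the tests.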

This is one of my favorite instantiations of the vague term computational thinking, which so many people use without much thought. It is a way of thinking about problems both abstractly and concretely, one that leads to solutions we have verified with tests.

You might call this the CS mindset. It is present in CS50 independent of any practical ends associated with tech start-ups and massaging spreadsheet data. It is practical on a higher plane. It is also present in the HtDP curriculum and especially in the Racket Way.

It is present in all good CS education, even the CS intro courses that more students should be taking -- even if they are only going to be doctors, lawyers, or journalists.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 16, 2014 4:22 PM

Another Take on "Know the Past"

Ted Nelson offers a particularly stark assessment of how well we fulfill our obligation to know the past in his eulogy for Douglas Engelbart:

To quote Joan of Arc, from Shaw's play about her: "When will the world be ready to receive its saints?"

I think we know the answer -- when they are dead, pasteurized and homogenized and simplified into stereotypes, and the true depth and integrity of their ideas and initiatives are forgotten.

Nelson's position is stronger yet, because he laments the way in which Engelbart and his visions of the power of computing were treated throughout his career. How, he wails, could we have let this vision slip through our hands while Engelbart lived among us?

Instead, we worked on another Java IDE or a glue language for object-relational mapping. All the while, as Nelson says, "the urgent and complex problems of mankind have only grown more urgent and more complex."

This teaching is difficult; who can accept it?


Posted by Eugene Wallingford | Permalink | Categories: Computing

January 08, 2014 3:06 PM

"I'm Not a Programmer"

In The Exceptional Beauty of Doom 3's Source Code, Shawn McGrath first says this:

I've never really cared about source code before. I don't really consider myself a 'programmer'.

Then he says this:

Dyad has 193k lines of code, all C++.

193,000 lines of C++? Um, dude, you're a programmer.

Even so, the point is worth thinking about. For most people, programming is a means to an end: a way to create something. Many CS students start with a dream program in mind and only later, like McGrath, come to appreciate code for its own sake. Some of our graduates never really get there, and appreciate programming mostly for what they can do with it.

If the message we send from academic CS is "come to us only if you already care about code for its own sake", then we may want to fix our message.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

January 07, 2014 4:09 PM

Know The Past

In this profile of computational geneticist Jason Moore, the scientist explains how his work draws on work from the 1910s, which may offer computational genetics a better path forward than the work that catapulted the field ahead in the 1990s and 2000s.

Yet despite his use of new media and advanced technology, Moore spends a lot of time thinking about the past. "We have a lot to learn from early geneticists," he says. "They were smart people who were really thinking deeply about the problem."

Today, he argues, genetics students spend too much time learning to use the newest equipment and too little time reading the old genetics literature. Not surprisingly, given his ambivalent attitude toward technology, Moore believes in the importance of history. "Historical context is so important for what we do," he says. "It provides a grounding, a foundation. You have to understand the history in order ... to understand your place in the science."

Anyone familiar with the history of computing knows there is another good reason to know your history: Sometimes, we dream too small these days, and settle for too little. We have a lot to learn from early computer scientists.

I intend to make this a point of emphasis in my algorithms course this spring. I'd like to expose students to important new ideas outside the traditional canon (more on that soon), while at the same time exposing them to some of the classic work that hasn't been topped.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning