August 25, 2015 1:57 PM

The Art of Not Reading

The beginning of a new semester brings with it a crush of new things to read, write, and do, which means it's a good time to remember this advice from Arthur Schopenhauer:

Hence, in regard to our subject, the art of not reading is highly important. This consists in not taking a book into one's hand merely because it is interesting the great public at the time -- such as political or religious pamphlets, novels, poetry, and the like, which make a noise and reach perhaps several editions in their first and last years of existence. Remember rather that the man who writes for fools always finds a large public: and only read for a limited and definite time exclusively the works of great minds, those who surpass other men of all times and countries, and whom the voice of fame points to as such. These alone really educate and instruct.

"The man who writes for fools always finds a large public." You do not have to be part of it. Time is limited. Read something that matters.

The good news for me is that there is a lot of writing about compilers by great minds. This is, of course, also the bad news. Part of my job is to help my students navigate the abundance of worthwhile readings.

Reading in my role as department head is an altogether different matter...

~~~~

The passage above is from On Books and Reading, which is available via Project Gutenberg, a wonderful source of many great works.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 23, 2015 10:12 AM

Science Students Should Learn How to Program, and Do Research

Physicist, science blogger, and pop science author Chad Orzel offered some advice for prospective science students in a post on his Forbes blog last week. Among other things, he suggests that science students learn to program. Orzel is among many physics profs who integrate computer simulations into their introductory courses, using the Matter and Interactions curriculum (which you may recall reading about here in a post from 2007).

I like the way Orzel explains the approach to his students:

When we start doing programming, I tell students that this matters because there are only about a dozen problems in physics that you can readily solve exactly with pencil and paper, and many of them are not that interesting. And that goes double, maybe triple for engineering, where you can't get away with the simplifying spherical-cow approximations we're so fond of in physics. Any really interesting problem in any technical field is going to require some numerical simulation, and the sooner you learn to do that, the better.

This advice complements Astrachan's Law and its variants, which assert that we should not ask students to write a program if they can do the task by hand. Conversely, if they can't solve their problems by hand, then they should get comfortable writing programs that can. (Strictly speaking, that's the inverse of Astrachan, not the converse, but "inversely" doesn't sound as good.) Programming is a medium for scientists, just as math is, and it becomes more important as they try to solve more challenging problems.

Orzel and Astrachan both know that the best way to learn to program is to have a problem you need a computer to solve. Curricula such as Matter and Interactions draw on this motivation and integrate computing directly into science courses. This is good news for us in computer science. Some of the students who learn how to program in their science courses find that they like it and want to learn more. We have just the courses they need to go deeper.

I concur with all five of Orzel's suggestions for prospective science students. They apply as well to computer science students as to those interested in the physical sciences. When I meet with prospective CS students and their families, I emphasize especially that students should get involved in research. Here is Orzel's take:

While you might think you love science based on your experience in classes, classwork is a pale imitation of actual science. One of my colleagues at Williams used a phrase that I love, and quote all the time, saying that "the hardest thing to teach new research students is that this is not a three-hour lab."

CS students can get involved in empirical research, but they also have the ability to write their own programs to explore their own ideas and interests. The world of open source software enables them to engage the discipline in ways that preceding generations could only have dreamed of. By doing empirical CS research with a professor or working on substantial programs that have users other than the creators, students can find out what computer science is really about -- and find out what they want to devote their lives to.

As Orzel points out, this is one of the ways in which small colleges are great for science students: undergrads can more readily become involved in research with their professors. This advantage extends to smaller public universities, too. In the past year, we have had undergrads do some challenging work on bioinformatics algorithms, motion virtual manipulatives, and system security. These students are having a qualitatively different learning experience than students who are only taking courses, and it is an experience that is open to all undergrad students in CS and the other sciences here.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 19, 2015 4:07 PM

Working Too Much Means Never Having to Say "No"

Among the reasons David Heinemeier Hansson gives in his advice to Fire the Workaholics is that working too much is a sign of bad judgment:

If all you do is work, your value judgements are unlikely to be sound. Making good calls on "is it worth it?" is absolutely critical to great work. Missing out on life in general to put more hours in at the office screams "misguided values".

I agree, in two ways. First, as DHH says, working too much is itself a general indicator that your judgment is out of whack. Second, there is a more specific case:

For workaholics, doing more work always looks like a reasonable option. As a result, when you are trying to decide, "Should I make this or not?", you never have to choose not to make the thing in question -- even when not making it is the right thing to do. That sort of indifferent decision making can be death in any creative endeavor.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 12, 2015 10:09 AM

Graphic Art: Links in Jewish Literature

"Genesis 1:1 is the Kevin Bacon of Sefaria."

This morning I finally read Sefaria in Gephi: Seeing Links in Jewish Literature, which had been on my reading list for a few months. In it, Liz Shayne introduces a collaborative project to visualize the relationships among 100,000+ sections of Jewish literature encoded in Sefaria, an online library of Jewish texts. It's a cool project, and the blog entries about it remind us how beautiful visualizations of graphs can be. I love this basic image, in which nodes represent sections of text, color indicates the type of text, and size corresponds to the degree of the node:

a graph of relationships in the Sefaria library

This is suitable for framing and would make a fine piece of art on my office wall.

Images like this can help us to understand a large dataset at a high level more easily than simply looking at the data themselves. Of course, creating the image requires some initial understanding, too. There is a give-and-take between analyzing the data and visualizing them that mutually reinforces our understanding.

As I mentioned in a December 2004 post, sometimes a computer scientist can produce a beautiful picture without intending to. One of my grad students, Nate Labelle, studied package dependencies in Linux as part of a project on power laws and open-source software. He created this image that shows the dependencies among one hundred randomly selected packages:

Linux package dependencies as art

Unlike the neat concentric Sefaria image above, Nate's image has a messy asymmetry that reflects the more decentralized nature of the Linux ecosystem. It evokes for me a line drawing of a book whose pages are being riffled. After all these years, I still think it's an attractive image.

I have not read the rest of the Sefaria blog series, but peeking ahead I saw a neat example in Sefaria III: Comparative Graphing that shows the evolution of the crowd-sourced Sefaria dataset over the course of four months:

evolution of the Sefaria dataset over time

These images look almost like a time-lapse photograph of a supernova exploding (video). They are pretty as art, and perhaps instructive about how the Sefaria community operates.

The Ludic Analytics site has links to two additional entries for the project [II | IV], but the latest is dated the end of 2014. I hope that Shayne or others involved with the project write more about their use of visualizations to understand the growing dataset. If nothing else, they may create more art for my walls.


Posted by Eugene Wallingford | Permalink | Categories: Computing

August 06, 2015 10:22 AM

Not So Different

Trevor Blackwell on The Lessons of Viaweb:

[Scott Kirsner]: What was the biggest challenge you faced with Viaweb?

[Trevor Blackwell]: Focusing every day on the few things that mattered and not getting distracted by the hundreds of things that didn't.

Maybe the life of a department head isn't all that different from the life of an entrepreneur after all. Well, except for the $49 million.


Posted by Eugene Wallingford | Permalink | Categories: General

August 04, 2015 1:00 PM

Concrete, Then Abstract

One of the things that ten years of teaching the same topic has taught Daniel Lemire is that students generally learn more effectively when they learn practical skills first and only then confront the underlying theory:

Though I am probably biased, I find that it is a lot harder to take students from a theoretical understanding to a practical one... than to take someone with practical skills and teach him the theory. My instinct is that most people can more easily acquire an in-depth practical knowledge through practice (since the content is relevant) and they then can build on this knowledge to acquire the theory.

He summarizes the lesson he learned as:

A good example, well understood, is worth a hundred theorems.

My years of teaching have taught me similar lessons. I described a related idea in Examples First, Names Last: showing students examples of an idea before giving it a name.

Lemire's experience teaching XML and my experience teaching a number of topics, including the object-oriented programming example in that blog post, are specific examples of a pattern I usually call Concrete, Then Abstract. I have found this to be an effective strategy in my teaching and writing. I may have picked up the name from Ralph Johnson at ChiliPLoP 2003, where we were part of a hot topic group sketching programming patterns for beginning programmers. Ralph is a big proponent of showing concrete examples before introducing abstract ideas. You can see that in just about every pattern, paper, and book he has written.

My favorite example of "Concrete, Then Abstract" this week is in an old blog entry by Scott Vokes, Making Diagrams with Graphviz. I recently came back to an idea I've had on hold for a while: using Graphviz to generate a diagram showing all of my department's courses and prerequisites. Whenever I return to Graphviz after time away, I bypass its documentation for a while and pull up instead a cached link to Scott's short introduction. I immediately scroll down to this sample program written in Graphviz's language, DOT:

an example program in Graphviz's DOT language

... and the corresponding diagram produced by Graphviz:

an example diagram produced by Graphviz

This example makes me happy, and it makes me productive quickly. It demonstrates an assortment of the possibilities available in DOT, including several specific attributes, and shows how they are rendered by Graphviz. With this example as a starting point, I can experiment with variations of my own. If I ever want or need more, I dig deeper and review the grammar of DOT in more detail. By that time, I have a pretty good practical understanding of how the language works, which makes the grammar easier to remember.
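For a flavor of what such a program looks like, here is a minimal sketch of my own -- a toy version of the course-and-prerequisite diagram mentioned above, not Scott's actual program:

    // A toy course-prerequisite graph. Each course is a node;
    // an edge points from a prerequisite to the course that requires it.
    digraph courses {
        rankdir=LR;                      // lay out the graph left to right
        node [shape=box, style=rounded]; // draw each course as a rounded box

        "CS 1" -> "CS 2";
        "CS 2" -> "Data Structures";
        "CS 2" -> "Programming Languages";
        "Data Structures" -> "Algorithms";
        "Discrete Structures" -> "Algorithms";
    }

Save that as courses.dot, run dot -Tpng courses.dot -o courses.png, and Graphviz draws the boxes and arrows for you. Growing the sketch into the full departmental diagram is mostly a matter of adding more edges.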

Sometimes, the abstract idea to learn, or re-learn, is a context-free grammar. Sometimes, it's a rule for solving a class of problems or a design pattern. And sometimes, it's a theorem or a theory. In all these cases, examples provide hooks that help us learn an abstract idea that is initially hard for us to hold in our heads.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 30, 2015 2:45 PM

The Word Came First

James Somers's article You're Probably Using the Wrong Dictionary describes well how a good dictionary can change your life. In comparing a definition from Webster's 1913 Revised Unabridged Dictionary with a definition from the New Oxford Dictionary, which he offers as an exemplar of the pedestrian dictionaries we use today, he reminds us that words are elusive and their definitions only approximations:

Notice, too, how much less certain the Webster definition seems about itself, even though it's more complete -- as if to remind you that the word came first, that the word isn't defined by its definition here, in this humble dictionary, that definitions grasp, tentatively, at words, but that what words really are is this haze and halo of associations and evocations, a little networked cloud of uses and contexts.

Such poetry is not wasted on words; it is not, to use his own example from the essay, fustian. Words deserve this beauty, and a good dictionary.

There is also a more general reminder just beneath the surface here. In so many ways, more knowledge makes us less certain, not more, and more circumspect, not less. It is hard to make sharp distinctions within a complex web of ideas once you know even a little about the web.

I strongly second Somers's recommendation of John McPhee's work, which I blogged about indirectly a few years ago. I also strongly second his recommendation of Webster's 1913 Revised Unabridged Dictionary. I learned about it from another blog article years ago and have been using it ever since. It's one of the first things I install whenever I set up a new computer.


Posted by Eugene Wallingford | Permalink | Categories: General

July 29, 2015 2:10 PM

How Do You Know If It Is Good? You Don't.

In the Paris Review interview Garrison Keillor, The Art of Humor No. 2, Keillor thinks back to his decision to become a writer, which left him feeling uncertain about himself:

Someone once asked John Berryman, How do you know if something you've written is good? And John Berryman said, You don't. You never know, and if you need to know then you don't want to be a writer.

This doesn't mean that you don't care about getting better. It means that you aren't doing it to please someone else, or at least that your doing it is not predicated on what someone else thinks. You are doing it because that's what you think about. It means that you keep writing, whether it's good or not. That's how you get better.

It's always fun to watch our students wrestle with this sort of uncertainty and come out on the other side of the darkness. Last fall, I taught first-semester freshmen who were just beginning to find out if they wanted to be programmers or computer scientists, asking questions and learning a lot about themselves. This fall, I'm teaching our senior project course, with students who are nearing the end of their time at the university. Many of them think a lot about programming and programming languages, and they will drive the course with their questions and intensity. As a teacher, I enjoy both ends of the spectrum.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 27, 2015 2:23 PM

The Flip Side to "Programming for All"

a thin volume of William Blake

We all hear the common refrain these days that more people should learn to program, not just CS majors. I agree. If you know how to program, you can make things. Even if you don't write many programs yourself, you are better prepared to talk to the programmers who make things for you. And even if you don't need to talk to programmers, you have opened your mind a bit to a way of thinking that is changing the world we live in.

But there are two sides to this equation, as Chris Crawford laments in his essay, Fundamentals of Interactivity:

Why is it that our entertainment software has such primitive algorithms in it? The answer lies in the people creating them. The majority are programmers. Programmers aren't really idea people; they're technical people. Yes, they use their brains a great deal in their jobs. But they don't live in the world of ideas. Scan a programmer's bookshelf and you'll find mostly technical manuals plus a handful of science fiction novels. That's about the extent of their reading habits. Ask a programmer about Rabelais, Vivaldi, Boethius, Mendel, Voltaire, Churchill, or Van Gogh, and you'll draw a blank. Gene pools? Grimm's Law? Gresham's Law? Negentropy? Fluxions? The mind-body problem? Most programmers cannot be troubled with such trivia. So how can we expect them to have interesting ideas to put into their algorithms? The result is unsurprising: the algorithms in most entertainment products are boring, predictable, uninformed, and pedestrian. They're about as interesting in conversation as the programmers themselves.

We do have some idea people working on interactive entertainment; more of them show up in multimedia than in games. Unfortunately, most of the idea people can't program. They refuse to learn the technology well enough to express themselves in the language of the medium. I don't understand this cruel joke that Fate has played upon the industry: programmers have no ideas and idea people can't program. Arg!

My office bookshelf occasionally elicits a comment or two from first-time visitors, because even here at work I have a complete works of Shakespeare, a thin volume of William Blake (I love me some Blake!), several philosophy books, and "The Britannica Book of Usage". I really should have some Voltaire here, too. I do cover one of Crawford's bases: a recent blog entry made a software analogy to Gresham's Law.

In general, I think you're more likely to find a computer scientist who knows some literature than a literary professional who knows much CS. That's partly an artifact of our school system and partly a result of the historically wider reach of literature and the humanities. It's fun to run into a colleague from across campus who has read deeply in some area of science or math, but it is rare.

However, we are all prone to fall into the chasm of our own specialties and miss out on the well-roundedness that makes us better at whatever specialty we practice. That's one reason that, when high school students and their parents ask me what students should take to prepare for a CS major, I tell them: four years of all the major subjects, including English, math, science, social science, and the arts; plus whatever else interests them, because that's often where they will learn the most. All of these topics help students to become better computer scientists, and better people.

And, not surprisingly, better game developers. I agree with Crawford that more programmers should learn enough other stuff to be idea people, too. Even if they don't make games.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 26, 2015 10:03 AM

A Couple of Passages on Disintermediation

"Disintermediation" is just a fancy word for getting other people out of the space between the people who create things and the people who read or listen to those things.

1. In What If Authors Were Paid Every Time Someone Turned a Page?, Peter Wayner writes:

One latter-day Medici posted a review of my (short) book on Amazon complaining that even 99 cents was too expensive for what was just a "blog post". I've often wondered if he was writing that comment in a Starbucks, sipping a $6 cup of coffee that took two minutes to prepare.

Even in the flatter world of ebooks, Amazon has the power to shape the interactions of creators and consumers and to influence strongly who makes money and what kind of books we read.

2. Late last year, Steve Albini spoke on the surprisingly sturdy state of the music industry:

So there's no reason to insist that other obsolete bureaux and offices of the lapsed era be brought along into the new one. The music industry has shrunk. In shrinking it has rung out the middle, leaving the bands and the audiences to work out their relationship from the ends. I see this as both healthy and exciting. If we've learned anything over the past 30 years it's that left to its own devices bands and their audiences can get along fine: the bands can figure out how to get their music out in front of an audience and the audience will figure out how to reward them.

Most of the authors and bands who aren't making a lot of money these days weren't making a lot of money -- or any money at all -- in the old days, either. They had few effective ways to distribute their writings or their music.

Yes, there are still people in between bands and their fans, and writers and their readers, but Albini reminds us how much things have improved for creators and audiences alike. I especially like his takedown of the common lament, "We need to figure out how to make this work for everyone." That sentence has always struck me as the reactionary sentiment of middlemen who no longer control the space between creators and audiences and thus no longer get their cut of the transaction.

I still think often about what this means for universities. We need to figure out how to make this internet thing work for everyone...


Posted by Eugene Wallingford | Permalink | Categories: Computing, General