August 08, 2019 2:42 PM

Encountering an Old Idea Three Times in Ten Days

I hope to eventually write up a reflection on my first Dagstuhl seminar, but for now I have a short story about how I encountered a new idea three times in ten days, purely by coincidence. Actually, the idea is over one hundred fifty years old but, as my brother often says, "Hey, it's new to me."

On the second day of Dagstuhl, Mark Guzdial presented a poster showing several inspirations for his current thinking about task-specific programming languages. In addition to displaying screenshots of two cool software tools, the poster included a picture of an old mechanical device that looked both familiar and strange. Telegraphy had been invented in the early 1840s, and telegraph operators needed some way to type messages. But how? The QWERTY keyboard was not created for the typewriter until the early 1870s, and no other such devices were in common use yet. To meet the need, Royal Earl House adapted a portion of a piano keyboard to create the input device for the "printing telegraph", or teleprinter. The photo on Mark's poster looked similar to the one on the Wikipedia page for the teleprinter.

There was a need for a keyboard thirty years before anyone designed a standard typing interface, so telegraphers adapted an existing tool to fit their needs. What if we are in that same thirty-year gap in the design of programming languages? This has been one of Mark's inspirations as he works with non-computer scientists on task-specific programming languages. I had never seen an 1870s teleprinter before and thought its keyboard to be a rather ingenious way to solve a very specific problem with a tool borrowed from another domain.

When Dagstuhl ended, my wife and I spent another ten days in Europe on a much-needed vacation. Our first stop was Paris, and on our first full day there we visited the museum of the Conservatoire National des Arts et Métiers. As we moved into the more recent exhibits of the museum, what should I see but...

a Hughes teleprinter with piano-style keyboard, circa 1875, in the CNAM museum, Paris

... a Hughes teleprinter with piano-style keyboard, circa 1875. Déjà vu! I snapped a photo, even though the device was behind glass, and planned to share it with Mark when I got home.

We concluded our vacation with a few days in Martinici, Montenegro, the hometown of a department colleague and his wife. They still have a lot of family in the old country and spend their summers there working and relaxing. On our last day in this beautiful country, we visited its national historical museum, which is part of the National Museum of Montenegro in the royal capital of Cetinje. One of the country's most influential princes was a collector of modern technology, and many of his artifacts are in the museum -- including:

a teleprinter with piano-style keyboard in the Historical Museum of Montenegro, Cetinje

This full-desk teleprinter was close enough to touch and examine up close. (I didn't touch!) The piano keyboard on the device shows the wear of heavy use, which brings to mind each of my laptops' keyboards after a couple of years. Again, I snapped a photo, this time in fading light, and made a note to pass it on.

In ten days, I went from never having heard much about a "printing telegraph" to seeing a photo of one, hearing how it is an inspiration for research in programming language design, and then seeing two such devices that had been used in the 19th-century heyday of telegraphy. It was an unexpected intersection of my professional and personal lives. I must say, though, that having heard Mark's story made the museum pieces leap into my attention in a way that they might not have otherwise. The coincidence added a spark to each encounter.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal

August 02, 2019 2:48 PM

Programming is an Infinite Construction Kit

As he so often did, Marvin Minsky loved to tell us about the beauty of programming. Kids love to play with construction sets like Legos, TinkerToys, and Erector sets. Programming provides an infinite construction kit: you never run out of parts!

In the linked essay, which was published as a preface to a 1986 book about Logo, Minsky tells several stories. One of the stories relates that once, as a small child, he built a large tower out of TinkerToys. The grownups who saw it were "terribly impressed". He inferred from their reaction that:

some adults just can't understand how you can build whatever you want, so long as you don't run out of sticks and spools.

Kids get it, though. Why do so many of us grow out of this simple understanding as we get older? Whatever its cause, this gap between children's imaginations and the imaginations of adults around them creates a new sort of problem when we give the children a programming language such as Logo or Scratch. Many kids take to these languages just as they do to Legos and TinkerToys: they're off to the races making things, limited only by their expansive imaginations. The memory on today's computers is so large that children never run out of raw material for writing programs. But adults often don't possess the vocabulary for talking with the children about their creations!

... many adults just don't have words to talk about such things -- and maybe, no procedures in their heads to help them think of them. They just do not know what to think when little kids converse about "representations" and "simulations" and "recursive procedures". Be tolerant. Adults have enough problems of their own.

Minsky thinks there are a few key ideas that everyone should know about computation. He highlights two:

Computer programs are societies. Making a big computer program is putting together little programs.
Any computer can be programmed to do anything that any other computer can do--or that any other kind of "society of processes" can do.

He explains the second using ideas pioneered by Alan Turing and long championed in the popular sphere by Douglas Hofstadter. Check out this blog post, which reflects on a talk Hofstadter gave at my university celebrating the Turing centennial.

The inability of even educated adults to appreciate computing is a symptom of a more general problem. As Minsky says toward the end of his essay, "People who don't appreciate how simple things can grow into entire worlds are missing something important." If you don't understand how simple things can grow into complex systems, it's hard to understand much at all about modern science, including how quantum mechanics accounts for what we see in the world and even how evolution works.

You can usually do well by reading Minsky; this essay is a fine example of that. It comes linked to an afterword written by Alan Kay, another computer scientist with a lot to say about both the beauty of computing and its essential role in a modern understanding of the world. Check both out.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 30, 2019 3:27 PM

"Eugene-Past Knew Things That Eugene-Present Does Not"

A few months back, Mark Guzdial began to ponder a new research question:

I did some literature searches, and found a highly relevant paper: "Task specific programming languages as a first programming language." And the lead author is... me. I wrote this paper with Allison Elliott Tew and Mike McCracken, and published it in 1997. I honestly completely forgot that I had written this paper 22 years ago. Guzdial-past knew things that Guzdial-present does not.

I know this feeling too well. It seems that whenever I look back at an old blog post, especially from the early years, I am surprised to have already thought something, and usually to have thought it better and more deeply than I'm thinking it now! Perhaps this says something about the quality of my thinking now, or the quality of my blogging then. Or maybe it's simply an artifact of time and memory. In any case, stumbling across a link to an ancient blog entry often leads to a few moments of pleasure after an initial bit of disorientation.

On a related note, the fifteenth anniversary of my first blog post passed while I was at Dagstuhl earlier this month. For the first few years, I regularly wrote twelve to twenty posts a month. Then for a few years I settled into a pattern of ten to twelve monthly. Since early 2017, though, I've been in the single digits, with fewer substantial entries. I'm not giving Eugene-2025 much material to look back on.

With a new academic year soon upon us, I hope to write a bit more frequently and a bit more in depth about my programming, my teaching, and my encounters with computer science and the world. I think that will be good for me in many ways. Sometimes, knowing that I will write something encourages me to engage more deeply than I might otherwise. Nearly every time, the writing helps me to make better sense of the encounter. That's one way to make Eugene-Present a little smarter.

As always, I hope that whoever is still reading here finds it worth their time, too.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

July 05, 2019 12:40 PM

A Very Good Reason to Leave Your Home and Move to a New Country

He applied to switch his major from mathematics to computer science, but the authorities forbade it. "That is what tipped me to accept the idea that perhaps Russia is not the best place for me," he says. "When they wouldn't allow me to study computer science."

-- Sergey Aleynikov, as told to Michael Lewis and reported in Chapter 5 of Flash Boys.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 01, 2019 11:59 AM

Wandering the Stacks

In You Are Here, Ben Hunt writes:

You know what I miss most about the world before Amazon? I miss going to the library and looking up a book in the card catalog, searching the stacks for the book in question, and then losing myself in the experience of discovery AROUND the book I was originally searching for. It's one of the best feelings in the world, and I'm not sure that my children have ever felt it. I haven't felt it in at least 20 years.

My daughters, now in their mid-20s, have felt it. We were a library family, not a bookstore family or an Amazon family. Beginning as soon as they could follow picture books, we spent countless hours at the public library in our town and the one in the neighboring city. We took the girls to Story Time and to other activities, but mostly we went to read and wander and select a big stack of books to take home. The books we took home never lasted as long as we thought they would, so back we'd go.

I still wander the stacks myself, both at the university library and, less often these days, the local public libraries. I always start with a few books in mind, recommendations gathered from friends and articles I've read, but I usually bring home an unexpected bounty. Every year I find a real surprise or two, books I love but would never have known about if I hadn't let myself browse. Even when I don't find anything surprising to take home, it's worth the time I spend just wandering.

Writing a little code often makes my day better. So does going to the library. Walking among books, starting with a goal and then aimlessly browsing, calms me on days I need calming and invigorates me on days when my energy is down. Some days, it does both at the same time. Hunt is right: It's one of the best feelings in the world. I hope that whatever else modern technology does for our children, it gives them something to rival this feeling.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

June 28, 2019 3:39 PM

Another Peter Principle-Like Observation

Raganwald tweeted:

If you design a language for people who have a talent for managing accidental complexity, you'll beget more and more accidental complexity over time.
Someone who can manage accidental complexity will always take on more if it makes them more productive.

This reminded me of a blog post from last November in which I half-jokingly coined The Peter Principle of Software Growth:

Software grows until it exceeds our capacity to understand it.

In the case of Raganwald's tweet, languages that enable us to handle accidental complexity well lead to more accidental complexity, because the people who use them will be more ambitious -- until they reach their saturation point. Both of these observations about software resemble the original Peter Principle, in which people who succeed are promoted until they reach a point at which they can't, or don't, succeed.

I am happy to dub Raganwald's observation "The Peter Principle of Accidental Complexity", but after three examples, I begin to recognize a pattern... Is there a general name for this phenomenon, in which successful actors advance or evolve naturally until they reach a point at which they can't, or don't, succeed?

If you have any ideas, please email me or respond on Twitter.

In a playful mood at the end of a strange and hectic week, I am now wondering whether there is a Peter Principle of Peter Principles.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

June 21, 2019 2:35 PM

Computing Everywhere, Sea Mammal Edition

In The Narluga Is a Strange Beluga-Narwhal Hybrid, Ed Yong tells the story of a narluga, the offspring of a beluga father and a narwhal mother:

Most of its DNA was a half-and-half mix between the two species, but its mitochondrial DNA -- a secondary set that animals inherit only from their mothers -- was entirely narwhal.

This strange hybrid had a mouth and teeth unlike either of its parents, the product of an unexpected DNA computation:

It's as if someone took the program for creating a narwhal tusk and ran it in a beluga's mouth.

The analogy to software doesn't end there, though...

There's something faintly magical about that. This fluky merger between two species ended up with a mouth that doesn't normally exist in nature but still found a way of using it. It lived neither like a beluga nor a narwhal, but it lived nonetheless.

Fluky and abnormal; a one-off, yet it adapts and survives. That sounds like a lot of the software I've used over the years and, if I'm honest, like some of the software I've written, too.

That said, nature is amazing.


Posted by Eugene Wallingford | Permalink | Categories: Computing

June 20, 2019 3:51 PM

Implementing a "Read Lines" Operator in Joy

I wasn't getting any work done today on my to-do list, so I decided to write some code.

One of my learning exercises to open the Summer of Joy is to solve the term frequency problem from Crista Lopes's Exercises in Programming Style. Joy is a little like Scheme: it has a lot of cool operations, especially higher-order operators, but it doesn't have much in the way of practical tools for basic tasks like I/O. To compute term frequencies on an arbitrary file, I need to read the file onto Joy's stack.

I played around with Joy's low-level I/O operators for a while and built a new operator called readfile, which expects the pathname for an input file on top of the stack:

    DEFINE readfile ==
        (* 1 *)  [] swap "r" fopen
        (* 2 *)  [feof not] [fgets swap swonsd] while
        (* 3 *)  fclose.

The first line leaves an empty list and an input stream object on the stack. Line 2 reads lines from the file and conses them onto the list until it reaches EOF, leaving a list of lines under the input stream object on the stack. The last line closes the stream and pops it from the stack.

This may not seem like a big deal, but I was beaming when I got it working. First of all, this is my first while in Joy, which requires two quoted programs. Second, and more meaningful to me, the loop body not only works in terms of the dip idiom I mentioned in my previous post, it even uses the higher-order swonsd operator to implement the idiom. This must be how I felt the first time I mapped an anonymous lambda over a list in Scheme.
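For readers who don't think in concatenative style, here is a rough Python model of what readfile does. This is a hypothetical sketch, not Joy: the Joy stack is modeled as a Python list whose last element is the top, and Python's `with` statement stands in for the explicit fopen/fclose pair.

```python
# A rough model of the Joy readfile operator, NOT Joy itself.
# The stack is a Python list; its last element is the top of the stack.
def readfile(stack):
    path = stack.pop()              # pathname expected on top of the stack
    lines = []                      # line 1: push an empty list
    with open(path, "r") as f:      # line 1: fopen ("with" plays fclose's role)
        for line in f:              # line 2: loop while not feof
            lines.insert(0, line)   # swonsd: cons each line onto the front
    stack.append(lines)             # line 3: stream closed; the list remains
    return stack
```

Because each line is consed onto the front, the model reproduces the reversed order described below: the last line of the file ends up at the front of the list.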

readfile leaves a list of lines on the stack. Unfortunately, the list is in reverse order: the last line of the file is the front of the list. Besides, given that Joy is a stack-based language, I think I'd like to have the lines on the stack itself. So I noodled around some more and implemented the operator pushlist:

    DEFINE pushlist ==
        (* 1 *)  [ null not ] [ uncons ] while
        (* 2 *)  pop.

Look at me... I get one loop working, so I write another. The loop on Line 1 iterates over a list, repeatedly taking (head . tail) and pushing head and tail onto the stack in that order. Line 2 pops the empty list after the loop terminates. The result is a stack with the lines from the file in order, first line on top:

    line-n ... line-3 line-2 line-1
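In the same hypothetical Python model of the stack (a list with its last element on top), pushlist looks like this:

```python
# A rough model of the Joy pushlist operator; hypothetical, not Joy.
def pushlist(stack):
    while stack[-1]:                 # [ null not ]: loop while list non-empty
        head, *tail = stack.pop()    # uncons the list on top of the stack
        stack.append(head)           # push head...
        stack.append(tail)           # ...then the remaining tail on top
    stack.pop()                      # line 2: pop the empty list
    return stack
```

Feeding it the reversed list that readfile produces leaves the lines on the stack with the file's first line on top, just as the diagram above shows.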

Put readfile and pushlist together:

    DEFINE fileToStack == readfile pushlist.

and you get fileToStack, something like Python's readlines() function, but in the spirit of Joy: the file's lines are on the stack, ready to be processed.

I'll admit that I'm pleased with myself, but I suspect that this code can be improved. Joy has a lot of dandy higher-order operators. There is probably a better way to implement pushlist and maybe even readfile. I won't be surprised if there is a more idiomatic way to implement the two that makes the two operations plug together with less rework. And I may find that I don't want to leave bare lines of text on the stack after all and would prefer having a list of lines. Learning whether I can improve the code, and how, are tasks for another day.

My next job for solving the term frequency problem is to split the lines into individual words, canonicalize them, and filter out stop words. Right now, all I know is that I have two more functions in my toolbox, I learned a little Joy, and writing some code made my day better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 18, 2019 3:09 PM

Notations, Representations, and Names

In The Power of Simple Representations, Keith Devlin takes on a quote attributed to the mathematician Gauss: "What we need are notions, not notations."

While most mathematicians would agree that Gauss was correct in pointing out that concepts, not symbol manipulation, are at the heart of mathematics, his words do have to be properly interpreted. While a notation does not matter, a representation can make a huge difference.

Spot on. Devlin's opening made me think of that short video of Richard Feynman that everyone always shares, on the difference between knowing the name of something and knowing something. I've seen people misinterpret Feynman's words in both directions. The people who share this video sometimes seem to imply that names don't matter. Others dismiss the idea as nonsense: how can you not know the names of things and claim to know anything?

Devlin's distinction makes clear the sense in which Feynman is right. Names are like notations. The specific names we use don't really matter and could be changed, if we all agreed. But the "if we all agreed" part is crucial. Names do matter as a part of a larger model, a representation of the world that relates different ideas. Names are an index into the model. We need to know them so that we can speak with others, read their literature, and learn from them.

This brings to mind an article with a specific example of the importance of using the correct name: Through the Looking Glass, or ... This is the Red Pill, by Ben Hunt at Epsilon Theory:

I'm a big believer in calling things by their proper names. Why? Because if you make the mistake of conflating instability with volatility, and then you try to hedge your portfolio today with volatility "protection"..., you are throwing your money away.

Calling a problem by the wrong name might lead you to the wrong remedy.

Feynman isn't telling us that names don't matter. He's telling us that knowing only names isn't valuable. Names are not useful outside the web of knowledge in which they mean something. As long as we interpret his words properly, they teach us something useful.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

June 11, 2019 3:04 PM

Summer of Joy

"Elementary" ideas are really hard & need to be revisited
& explored & re-revisited at all levels of mathematical
sophistication. Doing so actually moves math forward.

-- James Tanton

Three summers ago, I spent a couple of weeks re-familiarizing myself with the concatenative programming language Joy and trying to go a little deeper with the style. I even wrote a few blog entries, including a few quick lessons I learned in my first week with the language. Several of those lessons hold up, but please don't look at the code linked there; it is the raw code of a beginner who doesn't yet get the idioms of the style or the language. Then other duties at work and home pulled me away, and I never made the time to get back to my studies.

my Summer of Joy folder

I have dubbed this the Summer of Joy. I can't devote the entire summer to concatenative programming, but I'm making a conscious effort to spend a couple of days each week in real study and practice. After only one week, I have created enough forward momentum that I think about problems and solutions at random times of the day, such as while walking home or making dinner. I think that's a good sign.

An even better sign is that I'm starting to grok some of the idioms of the style. Joy is different from other concatenative languages like Forth and Factor, but it shares the mindset of using stack operators effectively to shape the data a program uses. I'm finally starting to think in terms of dip, an operator that enables a program to manipulate data just below the top of the stack. As a result, a lot of my code is getting smaller and beginning to look like idiomatic Joy. When I really master dip and begin to think in terms of other "dipping" operators, I'll know I'm really on my way.

One of my goals for the summer is to write a Joy compiler from scratch that I can use as a demonstration in my fall compiler course. Right now, though, I'm still in Joy user mode and am getting the itch for a different sort of language tool... As my Joy skills get better, I find myself refactoring short programs I've written in the past. How can I be sure that I'm not breaking the code? I need unit tests!

So my first bit of tool building is to implement a simple JoyUnit. As a tentative step in this direction, I created the simplest version of RackUnit's check-equal? function possible:

    DEFINE check-equal == [i] dip i =.

This operator takes two quoted programs (a test expression and an expected result), executes them, and compares the results. For example, this test exercises a square function:

    [ 2 square ] [ 4 ] check-equal.
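To see the shape of the combinator outside Joy, here is a minimal Python analogue. It is a hypothetical sketch, not the Joy definition: Joy's quoted programs are modeled as zero-argument callables, and `square` is the same example function the test above exercises.

```python
# A minimal model of check-equal: run both quotations, compare results.
# (Hypothetical Python analogue; Joy quotations become thunks.)
def check_equal(test_expr, expected):
    return test_expr() == expected()   # [i] dip runs the test, i runs expected, = compares

def square(n):
    return n * n

# the Joy test [ 2 square ] [ 4 ] check-equal, in this model:
passed = check_equal(lambda: square(2), lambda: 4)
```

As in the Joy version, the result is just a boolean; reporting which test failed is the obvious next step, as the post goes on to say.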

This is, of course, only the beginning. Next I'll add a message to display when a test fails, so that I can tell at a glance which tests have failed. Eventually I'll want my JoyUnit to support tests as objects that can be organized into suites, so that their results can be tallied, inspected, and reported on. But for now, YAGNI. With even a few simple functions like this one, I am able to run tests and keep my code clean. That's a good feeling.

To top it all off, implementing JoyUnit will force me to practice writing Joy and push me to extend my understanding while growing the set of programming tools I have at my disposal. That's another good feeling, and one that might help me keep my momentum as a busy summer moves on.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning