TITLE: A Spectator to Phase Change AUTHOR: Eugene Wallingford DATE: March 12, 2023 9:00 AM DESC: ----- BODY: Robin Sloan speculates that large language models like ChatGPT have gone through a phase change in what they can accomplish.
AI at this moment feels like a mash-up of programming and biology. The programming part is obvious; the biology part becomes apparent when you see AI engineers probing their own creations the way scientists might probe a mouse in a lab.
Like so many people, I find my social media and blog feeds filled with ChatGPT and LLMs and DALL-E and ... speculation about what these tools mean for (1) the production of text and code, and (2) learning to write and program. A lot of that speculation is tinged with fear. I admire Sloan's effort to be constructive in his approach to the uncertainty:
I've found it helpful, these past few years, to frame my anxieties and dissatisfactions as questions. For example, fed up with the state of social media, I asked: what do I want from the internet, anyway?

It turns out I had an answer to that question.

Where the GPT-alikes are concerned, a question that's emerging for me is:

What could I do with a universal function — a tool for turning just about any X into just about any Y with plain language instructions?
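Sloan's question suggests a simple shape in code. Here is a minimal sketch of what such a universal function might look like; the complete() helper is a hypothetical stand-in for a call to whatever LLM one has access to, and the names and examples are mine, not Sloan's.

    # A minimal sketch of Sloan's "universal function": turn just about
    # any X into just about any Y with plain-language instructions.
    # `complete` is a hypothetical stand-in for a call to some LLM API.

    def complete(prompt: str) -> str:
        """Send a prompt to an LLM and return its text response.
        (Hypothetical; wire this up to whatever model you use.)"""
        raise NotImplementedError

    def universal_function(instruction: str, x: str) -> str:
        """Apply a plain-language transformation to the input."""
        prompt = f"{instruction}\n\nInput:\n{x}\n\nOutput:"
        return complete(prompt)

    # The same function covers wildly different X -> Y mappings:
    # summary = universal_function("Summarize in one sentence.", essay)
    # haskell = universal_function("Translate this Python to Haskell.", code)

The striking thing is how little is there: all of the work lives in the instruction, not the function.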
I admit that I am reacting to these developments slowly compared to many people. That's my style in most things: I am more likely to under-react to a change than to over-react, especially at the onset of the change. In this case, there is no chance of immediate peril, so waiting to see what happens as people use these tools seems like a reasonable response. I haven't made any effort to use these tools actively (or even felt moved to), so any speculating I do would be uninformed by personal experience. Instead, I read as people whose work I respect experiment with these tools and try to make sense of them. Occasionally, I draw a small, tentative conclusion, such as that prompting these generators is a lot like prompting students.

After a few months of reading and a little reflection, I still think the biggest risk we face is probably that we tend to change the world around us to accommodate our technologies. If we put these tools to work for us in ways that enhance what we do, then the accommodation will pay off. If not, then we may, as Daniel Steinberg wrote in one of his newsletters, stop asking the questions we want to ask and start asking only the questions these tools can answer.

Professionally, I think most about the effect that ChatGPT and its ilk will have on programming and CS education. In these regards, I've been paying special attention to reports from David Humphrey, such as this blog post on his university's attempt to grapple with the widespread availability of these tools. David has approached OpenAI's tools with an open mind and written thoughtfully about both the promise and the risk. For example, he has written a lot of code with an LLM assistant and found it improving his ability both to write code and to think about problems. Advanced CS students can benefit from this kind of assistance, too, but David wonders how such tools might interfere with students first learning to program. What do we educators want from generative programming tools anyway? What do I as a programmer and educator want from them?

So, at this point, my personal interaction with the phase change that Sloan describes has been mostly passive: I read about what others are doing and think about the results of their exploration. Perhaps this post is a guilty conscience asserting that I should be doing more. Really, though, I think of it more as an active form of inaction: an effort to collect some of my thinking as I slowly respond to the changes that are coming. Perhaps some day soon I will feel moved to use these tools as I write code of my own. For now, though, I am content to watch from the sidelines. You can learn a lot just by watching. -----