I am a dinosaur who still keeps track of our family finances by hand. (But not for long; one of my Christmas break projects is to convert to Quicken.) While going through some brokerage statements over the long Thanksgiving weekend, an analogy between investing in the stock market and learning occurred to me. Let me know what you think.
"Market timing" is the attempt to invest money in a stock or a market when it is near its lowest price and then to cash out when the price is near its peak. With a perfect model, I could invest when the price is at its minimum and divest when the price is at its maximum, with a maximal profit as my perfect prize. Market timing is a risky endeavor that almost always fails. Why? Because no one has a very good idea about how to recognize maximum and minimum prices. That may not seem so bad; even with prices only near their minimum and maximum, an investor could do quite well. But the risk is much worse than that.
Many investing gurus are keen to point out that most of the gain in the stock market over the last 80 years or more has occurred on relatively few days. In the past I have seen a list of the ten days with the largest gains in the history of the New York Stock Exchange and the Dow-Jones Industrial Average. Any investor who was out of the market on these days missed out on a significant percentage of the market's gain over the last century. I can't seem to find such a list on the web just now, but I did find this page that tells a similar tale:
Consider, for instance, the returns on small company stocks between 1925 and 1992. If you had been invested in small company stocks over this period, your average annual return would have been 12.1%. If you sat out the single best month during that 67 year period, you would have only made 11.2% a year. If you missed out on the best five months, well, forget it... you would have only notched gains of 8.5%. Finally, if you had missed the best ten months -- something all sorts of market timers managed to do in 1995 -- you would have only retained 6.3% annual gains, almost half of what you could have made had you been fully invested.
I think that learning works the same way. Once before, I related this phenomenon to negative splits in running: the gains a learner makes may be relatively small early in a course and relatively larger later in the semester, as the material steeps in her mind and is processed both consciously and subconsciously. I first experienced this in a course as an undergraduate when I plodded along for eight weeks before I somehow "got it".
As near as I can tell, the day on which one gets it is often unpredictable. Sure, some courses and some kinds of material can be learned steadily through a methodical process. But most of the best kind of learning involves a shift in how we see problems or how we understand solutions, and this kind of learning usually seems to arrive with an a-ha! moment.
If I think of my effort studying a new topic or learning a new technique as an investment of mental energy, then I don't want to find myself in the position of having to "time the market" -- trying to guess the right day or few right days to be thinking hard about the material or practicing the skill. The a-ha! moment will come when I least expect it, and if I am "out of the market" that day -- waiting until next weekend to start my programming assignment, or skipping a day of study to work on something else, or doing the work but merely going through the motions -- then I will miss the chance to make the big stride to the next level of understanding. And like the investing sort of market timing, the risk is pernicious. Not only won't I end up with the big gain; I may end up with nothing at all to show for my study, just time spent and no understanding of anything of consequence. Learning is cumulative, and unless we give it a chance to accrue and to reinforce itself, we don't accumulate anything.
Sadly, today's students often operate under conditions that require a random sort of market timing. Many of them work far too many hours to allow them to work each day on each of their courses, at least not in a fully engaged way. Some of them take too many classes, in an effort to graduate "on time" or as soon as possible. (The rush is sometimes driven by financial considerations, but sometimes by misplaced ambition.) Many students come to college these days with many other interests, curricular or extracurricular, that interfere with their classwork. The result is hit-or-miss study, too many days without thinking about a particular course, and little understanding.
I think that this analogy applies to teaching, too. Unless I stay engaged with my course and my students throughout the semester, then I will likely be unprepared when opportunity knocks. Indeed, opportunity probably knocks at all only because we stay engaged and make the conditions possible for a visit.
After over a year in the Big Office Downstairs and now the Office with a View, I am finally unpacking all the boxes from my moves and hanging pictures on the walls. Last year it didn't seem to make much sense to unpack, with the impending move to our new home, and since then I've always seemed to have more pressing things to do. But my wife and daughters finally tired of the bare walls and boxes on the floor, so I decided to use this day-off-while-still-on-duty to make a big pass at the task. It has gone well but would have gone faster if only I could go through boxes of old books without looking at them.
Old books ought to be easy to toss without a thought, right? I mean, who needs a manual for a Forth-79 interpreter that ran on a 1980 Apple ][ ? A 1971 compiler textbook by David Gries (not even mine -- a colleague's hand-me-down)? Who wants to thumb through Sowa's Conceptual Structures, Raphael's The Thinking Computer, Winograd and Flores's Understanding Computers and Cognition? Guilty as charged.
And while I may not be an active AI researcher anymore, I still love the field and its ideas and classic texts. I spent most of my time today browsing Raphael, Minsky, Schank's Dynamic Memory, Weld and de Kleer's Readings in Qualitative Reasoning about Physical Systems, the notes from a workshop on functional reasoning at AAAI 1994 (one of the last AI conferences I attended). These books brought back memories, of research group meetings on Wednesday afternoons where I cut my teeth as a participant in the academic discussion, of dissertation research, and of becoming a scholar in my own right. There were also programming books to be unpacked -- too many Lisp books to mention, including first and second editions of The Little LISPer, and a cool old book on computer chess from 1982 that is so out of date now as to be hardly worth thumbing through. But I was rapt. These books brought back memories of implementing the software foundation for my advisor's newly established research lab -- and reimplementing it again and again as we moved on to ever better platforms for our needs. (Does anyone remember PCL?) Eventually, we moved away from Lisp altogether and into a strange language that no one seemed to know much about... Smalltalk. And so I came to learn OOP many years before it came into vogue via C++ and Java.
Some of these books are classics, books I could never toss out. Haugeland's Mind Design, Schank, Minsky, Raphael, The Little LISPer. Others hold value only in memory of time and place, and how they were with me when I learned AI and computer science.
I tossed a few books (among them the Gries compiler book) and kept a few more. I told my daughter I was being ruthless, but in reality I was far softer than I could have been. That's okay... I have shelf space to spare yet, at least in this office, and the prospect of my next move is far enough off that I am willing to hold onto that old computer chess book just in case I ever want to play through games by Belle and an early version of Cray Blitz, or steal a little code for a program of my own.
I wonder what our grandchildren and great-grandchildren will think of our quaint fetish for books. For me, as I close up shop for a long weekend devoted to the American holiday of Thanksgiving, I know that I am thankful for all the many books I've had the pleasure to hold and read and fall asleep with, and thankful for all the wonderful people who took the time to write them.
One measure of how busy I am is how much outside reading I am able to do. By that measure, this term has been busier than most. In the last few days, I've tried to catch up on some blog reading.
Last month Kathy Sierra posted two short pieces that ring true as I reach the end of my first semester back in CS1 in oh so long. Her primary context is the business world, with managers and employees and products and users. But her advice is a useful starting point in the context of an academic department, with its chair and faculty and courses and students.
In Knocking the Exuberance out of Employees, Sierra reminds us how easy it is to say that we value creativity and curiosity yet create an environment that not only devalues these traits but even penalizes them. As a teacher, I say that I value creativity and curiosity in my students, but I have to attend to creating a course in which students feel free to have and express these traits.
I think that I have done reasonably well this semester in not knocking the exuberance out of my students, at least in gratuitous ways. With the media computation theme, I have tried to:
Earlier in my academic career, I was more prone to violating the last two of these. I know that students in our department struggle with the last of these in one course in particular, especially in the form of ticky-tack style requirements. I understand why some faculty impose rules -- they are convinced that there is a right way to do things and want their students to learn the right way sooner. Even if there is a right way, though, instructors have to walk the line between helping students learn to "do things right" while keeping them interested and motivated enough to want to get better. I also know that giving students latitude requires exercising latitude in judging how far is far enough. Without confidence in one's own ability, an instructor often feels safer within constraining rules. But will students live comfortably there, too? Often not.
Sierra writes about similar issues from the user's perspective in Reducing Fear is the Killer App. Users won't feel comfortable enough to cozy up to your product if they are afraid -- of breaking something, of feeling stupid, of most anything. Students are in a similar frame of mind when they approach a course, and students who are just beginning college, or their major, are most at risk. They want to do well in the course. They want to enjoy their new major. They want to master tools and ideas.
How can an instructor reduce fear? I can think of a few ways.
I don't think that my classroom or office say "comfortable" quite in the way the dentist's office or hospital do in Sierra's pictures. My office certainly looks like a place where someone works. (In the common phrase of the day, my office "looks lived in".) I've tried to rely on my textbook authors' experience by sticking to the textbook as much as my constitution allows, in an effort to take the right sort of steps and set the right sort of problems before the class. However, I'm not a "natural" teacher, at least not the kind of natural teacher who makes instant personal bonds with his students, who sets them at ease with the twinkle of an eye. My hope is that, by consciously thinking about the things Sierra writes about in these two essays, I can at least do no harm.
I overheard a conversation in the locker room yesterday that saddened me. Two university students were chatting about their lives and work-outs. In the course of discussing their rather spotty exercise routines, one of them said that he was planning to start using creatine as a way to develop a leaner "look". Creatine is a naturally-occurring compound that some folks use as a nutritional supplement.
Maybe I'm a fuddy-duddy, but I'm a little leery about using supplements to enhance my physical appearance and performance. It also may just be a matter of where to draw the line; I am willing to take multivitamins and zinc supplements for my immune system. The casual use of creatine by regular guys, though, seems like something different: an attempted shortcut.
There aren't all that many shortcuts to getting better in this world. Regular exercise and a good diet will help you develop a leaner body and the ability to perform better athletically. The guys I overheard knew that they could achieve the results they needed by exercising and cutting back on their beer consumption, but they wanted to reach their goal without having to make the changes needed to get there in the usual way.
The exercise-and-diet route also has other positive effects on one's body and mind, such as increased stamina and better sleep. Taking a supplement may let you target a specific goal, but the healthier approach improves your whole person.
Then there's the question of whether taking a supplement actually achieves the promised effect...
These thoughts about no shortcuts reminded me of something I read on Bob Martin's blog a few weeks ago, called WadingThroughCode. There Bob cautioned against the natural disinclination to put in the effort to slog through other people's programs. We all figure sometimes that we can learn more just by writing our own code, but Bob tells us that reading other people's code is an essential part of a complete learning regimen. "Get your wading boots on."
I've become sensitized to this notion over the last few years as I've noticed an increasing tendency among some of even my best students to not want to put in the effort to read their textbooks. "I've tried, and I just don't get it. So I just study your lecture notes." As good as my lecture notes might be, they are no substitute for the text. And the student would grow by making the extra effort it takes to read a technical book.
There are no shortcuts.
When my CS1 class and I explored compression recently, it occurred to one student that we might be able to compress our images, too. Pictures are composed of Pixels, which encode RGB values for colors. We had spent many weeks accessing the color components using accessors getRed(), getGreen(), and getBlue(), retrieving integer values. But the color component values lie in the range [0..255], which would fit in byte variables.
So I spent a few minutes in class letting students implement compression and decompression of Pixels using one int to hold the three bytes of data. It gave us a good set of in-class exercises to practice on, and let students think about compression some more.
Then we took a peek at how Pixels are encoded -- and found, much to the surprise of some, that the code for our text already uses our compressed encoding! We had reinvented the existing implementation.
I didn't mind this at all; it was a nice experience. First, it helped students to see very clearly that there does not have to be a one-to-one relationship between accessors and instance variables. getRed(), getGreen(), and getBlue() do not retrieve the values of separate variables, but rather the corresponding bytes in a single integer. This point, that IVs != accessors, is one I like to stress when we begin to talk about class design. Indeed, unlike many CS1 instructors, I make a habit of creating accessors only when they are necessary to meet the requirements of a task. Objects are about behavior, not state, and I fear that accessors-by-default gives a wrong impression to students. I wonder if this is an unnecessary abstraction that I introduce too early in my courses, but if so then it is just one of my faults. If not, then this was a great way to experience the idea that objects provide services and encapsulate data representation.
Second, this gave my students a chance to do a little bit of arithmetic, figuring out how to use multiplication to move values into higher-order bytes of an integer. When we then looked inside the Pixel class, we got to see the use of Java's shift operators to accomplish the same goal. This was a convenient way to see a little extra Java without having to make a big fuss about motivating it. Our context provided all the motivation we needed.
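The in-class exercise can be sketched outside the textbook's Pixel class. This is a hypothetical stand-alone version of the packing idea -- the class and method names are mine, not the textbook's -- showing both the multiplication view the students worked out and the shift operators we found inside Pixel:

```java
public class PackedPixel {
    // Pack three color components, each in [0..255], into one int.
    // Shifting left by 8 bits is the same as multiplying by 256, so
    // red*65536 + green*256 + blue gives the identical result.
    public static int pack(int red, int green, int blue) {
        return (red << 16) | (green << 8) | blue;
    }

    // Three "accessors" sharing one instance variable: each component
    // is recovered by shifting it down and masking off the other bytes.
    public static int getRed(int packed)   { return (packed >> 16) & 0xFF; }
    public static int getGreen(int packed) { return (packed >> 8) & 0xFF; }
    public static int getBlue(int packed)  { return packed & 0xFF; }
}
```

The mask with 0xFF matters: without it, getGreen would pick up the red byte along with the green one.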
I hope the students enjoyed this as much as I did. I'll have to ask as we wrap up the course. (I should practice what I preach about feedback!)
It seemed an innocent enough question to ask my CS 1 class.
"Do you all know how to make a .zip file?"
My students laughed at me. As near as I could tell, it was unanimous.
For a brief second I felt old. But it wasn't that long ago that students in my courses had to be shown how to zip up a directory, so perhaps their reaction is testimony more to the inexorable march of technology than to my impending decrepitude.
At least most of them seemed interested when I offered to show them how to make a .jar file en route to creating a double-clickable app from their slideshow program.
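As it happens, a .jar file is itself just a .zip archive with a manifest inside, so the demonstration connected back to our compression unit. Here is a hypothetical sketch -- using the standard java.util.zip classes, not anything from the course -- of writing a one-entry zip archive in memory and reading it back out:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipDemo {
    // Write a single named entry into an in-memory .zip archive.
    public static byte[] zip(String name, byte[] data) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ZipOutputStream out = new ZipOutputStream(bytes);
            out.putNextEntry(new ZipEntry(name));
            out.write(data);
            out.closeEntry();
            out.close();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Read the first entry's contents back out of the archive.
    public static byte[] unzip(byte[] archive) {
        try {
            ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(archive));
            in.getNextEntry();
            return in.readAllBytes();  // reads to the end of the current entry
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

From the command line, of course, the jar tool does all of this for you.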
I may be a dinosaur, but I'm not completely useless to them.
When teaching a class, sometimes a moment comes along that is pregnant with possibilities. I usually like to seize these moments, if only to add a little spice to my own enjoyment of the course. When the new avenue leads to a new understanding for the students, all the better!
A couple of weeks ago, I was getting ready to finish off the unit on manipulating sound in CS 1. The last chapter of that unit in the textbook didn't excite me as much as I had hoped, and I had also just been sensitized by a student's comment. The day before, at a group advising session, one of my students had commented that the media computation was cool and all, but he wanted "data to crunch", and he was hoping to do more of that in class. My first reaction was, isn't processing a 1-megapixel image crunching enough data? Sure, he might say, but the processing we were doing at each pixel, or at each sample in a sound, was relatively simple.
With that in the back of my mind somewhere, I was reading the part of the chapter that discussed different encodings for sound, such as MP3 and MIDI. My stream of consciousness was working independent of my reading, or so it seemed. "MP3 is a way to compress sound ... compression algorithms range from simple to complex operations ... we can benefit greatly by compressing sound ... our Sound uses a representation that is rather inefficient ...". Suddenly I knew what I wanted to do in class that day: teach my students how to compress sound!
Time was short, but I dove into my new idea. I hadn't looked very deeply at how the textbook was encoding sounds, and I'd never written a sound compression algorithm before. The result was a lot of fun for me. I had to come up with a reasonable encoding to compress our sounds, one that allowed me to talk about lossy and lossless compression; I had to make the code work. Then I had to figure out how to tell the story so that students could reach my intended destination. The story had to include code that the students write, so that they can grow into the idea and feel some need for what I ask them to do.
I ended up creating a DiffSound that encoded sounds as differences between sound samples, rather than as samples. The differences between samples tend to be smaller than the sample values themselves, which gives us some hope of creating a smaller file that loses little or no sound fidelity.
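The heart of the idea can be sketched independently of the textbook's Sound class. The code below is a hypothetical reconstruction, not the DiffSound code I used in class, and the names are my own. It also shows why the scheme is lossless: decoding the differences reproduces the original samples exactly.

```java
public class DeltaCodec {
    // Encode samples as differences: the first value is kept as-is,
    // and each later value is replaced by its distance from the
    // previous one. The numbers get smaller; no information is lost.
    public static int[] encode(int[] samples) {
        int[] diffs = new int[samples.length];
        int previous = 0;
        for (int i = 0; i < samples.length; i++) {
            diffs[i] = samples[i] - previous;
            previous = samples[i];
        }
        return diffs;
    }

    // Decode by summing the differences back up into samples.
    public static int[] decode(int[] diffs) {
        int[] samples = new int[diffs.length];
        int previous = 0;
        for (int i = 0; i < diffs.length; i++) {
            previous += diffs[i];
            samples[i] = previous;
        }
        return samples;
    }
}
```

The smaller difference values are what open the door to a smaller file; storing them in fewer bits per value is where a real encoder would go next.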
This opportunity had another unexpected benefit. The next chapter of the text introduced students to classes and objects. We had been using objects of the Picture, Pixel, Sound, and SoundSample classes in a "bottom-up" fashion, but we had never read a full class definition. And we certainly hadn't written one. The textbook used what was for me an uninspiring first example, a Student class that knows its grades. What was worse than not exciting me was that the class was not motivated by any need the students could feel from their own programming. But after writing simple code to convert a sound from an array of sound samples into an array of sample differences, we had a great reason to create a new class -- to encapsulate our new representation and to create a natural home for the methods that manipulate it. When I first embarked on the compression odyssey, I had no idea that I would be able to segue so nicely into the next chapter. Serendipity.
After many years of teaching, bumping into such opportunities, and occasionally converting them into improvements to my course, I've learned a few lessons. The first is that not all opportunities are worth seizing. Sometimes, the opportunity is solely to my benefit, letting me play with some new idea. If it produces a zero sum for my students, then it may be worth trying. But too often an opportunity creates a distraction for students, or adds overhead to what they do, and as a result interferes with their learning. Some design patterns work this way for OOP instructors. When you first learn Chain of Responsibility, it may seem really cool, but that doesn't mean that it fits in your course or adds to what your students will learn. Such opportunities are mirages, and I have to be careful not to let them pull me off course.
But many opportunities make my course better, by helping my students learn something new, or something old in a new way. These are the ideas worth pursuing. The second lesson I've learned is that such an idea usually creates more work for me. It's almost always easier to stay on script, to do what I've done before, what I know well. The extra work is fun, though, because I'm learning something new, too, and getting a chance to write the code and figure out how to teach the idea well. A few years ago, I had great fun creating a small unit on Bloom filters for my algorithms course, after reading a paper on the plane back from a conference. The result was a lot of work -- but also a lot of fun, and also an enrichment to what my students learned about the trade-offs between data and algorithm and between efficiency and correctness. That was an opportunity well-seized. But I needed time to turn the possibility into a reality.
The third lesson I've learned is that using real data and real problems greatly increases the chances that I will see an unexpected opportunity. Images and sounds are rich objects, and manipulating them raises a lot of interesting questions. Were I teaching with toy problems -- converting Fahrenheit to Celsius, or averaging grades in an ad hoc student array -- then the number of questions that might raise my interest or my students' interest would be much smaller. Compression only matters if you are working with big data files.
Finally, I've learned to be open to the possibility of something good. I have to take care not to fall into the rut of simply doing what's in my notes for the day. Eternal vigilance is the price of freedom, but it is also the price we must pay if we want to be ready to be excited and to excite our students with neat ideas lying underneath the surface of what we are learning.
One of my colleagues occasionally comments that many of the folks in our department don't often practice in the rest of our professional lives what we preach in our areas of technical expertise. For example, in software engineering we often speak of the importance of gathering requirements, writing a complete specification, and then later testing our product to ensure that it meets the spec. But CS faculty are often reluctant to practice these ideas in the context of curriculum and departmental mission, which leads to a lack of motivation for tasks such as academic program review both on the side of specifying concrete department goals and concrete course competencies and on the side of measuring outcomes.
My colleague's observation is usually true. Expertise doesn't transfer very well across domains of practice; and, even when the mindset transfers, the practices and habits don't. It takes a lot of work to translate the mindset into the new habits we need in the new domain, and we have to watch out for pitfalls that let us convince ourselves that the new domain is so different that we can't practice what we preach.
Though my colleague has never made his observation to me when commenting on my performance, I know well that I am guilty. I strongly encourage the use of agile methods in software development, and I've even written in this space on how I have intended to "be agile" in how I approach my administrative duties. But as I look back over my first fifteen months as a department head, I see a path littered with good intentions leading to a very different place than I wanted to be.
I had hoped to write a retrospective of my first year in the Big Office by now, but I haven't yet -- in part because I don't feel I have synthesized much of an understanding of what I do yet, but also in part, I think, because I feel a bit ashamed of my weaknesses. I have fallen woefully behind on several major projects, including ones that were centerpieces of my desire to become head. As I look back, I see many of the signature problems of big software projects falling behind. As Fred Brooks tells us in The Mythical Man-Month, how do disastrously late projects get that way? "One day at a time." When I fall farther behind, it is rarely because a major task preempts my time; most of the slippage in my schedule results from "termites" -- little interruptions, small distractions, and bad decisions made in the small.
I am agile in mindset, but not in practice. How can I change that? Go back to the basics: Define small tasks. Define "tests" that will help me know that I have made concrete progress. Release small deliverables frequently to the folks who depend on my work, especially the faculty.
I know what to do. Now it is time to get serious about new practices for the new tasks I tackle.
... that was without a doubt
the hardest physical thing
I have ever done.
-- Lance Armstrong
"That" was the New York City Marathon, which Armstrong ran last Sunday. He is, of course, world-renowned as a seven-time winner of the Tour de France, which is among the most grueling and physically-challenging feats of athletic endurance. I have long admired Armstrong's accomplishments on the Tour, overcoming the vicissitudes of competition, the annual challenges from new and younger riders, and the wear and tear of such a demanding event -- and winning, not once, not just seven times, but seven consecutive times. And all this after overcoming a metastasized case of testicular cancer.
But the marathon offered him a new sort of challenge. If you have read much of my writing on running, then you have seen me say more than once that you have to "respect the distance". Running a marathon goes beyond what the human body is typically configured to do. It stresses the body in ways that other physical feats often don't. I've never cycled for the distances or remarkable inclines that the Tour de France requires, but I've cycled enough to know that it does not stress the joints like running does. Armstrong found this out:
"I think I bit off more than I could chew. I thought the marathon would be easier," he said. "(My shins) started to hurt in the second half, especially the right one. I could barely walk up here, because the calves are completely knotted up."
So, all of you fellow runners out there, take heart that even the greatest athletes find running at the edges of their endurance and speed daunting. I take heart, too, that they fight through the same pain as I, because I know then that I can do the same.
To be fair to Armstrong, the quote I opened with above starts with an ellipsis. The portion of the quote omitted by me -- and most newspapers that highlighted this statement -- is "For the level of condition that I have now". So he may be able to run a faster and more comfortable marathon in the future, if he reaches a different level of fitness. I've seen reports that Armstrong had never previously run more than 16 miles at once, and that he did no particular speed training before attempting NYC.
This explains some of Armstrong's struggle, but it raises another question. Why didn't he train (better) for the marathon? I guess he didn't realize just how much respect we all have to show the distance. Perhaps he could learn something from our old friend Santiago Botero, who once learned something from Lance:
His smile said to me, 'I was training while you were sleeping, Santiago'. It also said, 'I won this tour four months ago, while you were deciding what bike frame to use in the Tour. I trained harder than you did, Santiago. I don't know if I am better than you, but I have outworked you and right now, you cannot do anything about it. Enjoy your ride, Santiago. See you in Paris.'
When Armstrong first publicly discussed the possibility of running a marathon a few years ago, there were two schools of thought. Some folks thought that he would be a good but not great marathoner -- running something like the 3:00 race he ran in New York. Others thought that, given his great aerobic base and mental toughness, with only a little training he could run a 2:20 marathon or better. I was in the second camp -- and still am. This race only highlighted the importance of that little bit of training.
And of course, Armstrong's time was still faster than my best time by 45 minutes. I have a lot of work yet to do!
Academic departments at universities can sometimes be quite agile, introducing new courses and new approaches into curricula in response to feedback from students and the world. But, like large corporations and almost any organization that reaches a certain size, the modern university also tends to calcify certain processes to the point that they become almost useless. Academic program review is an example.
Every seven years, my university conducts an academic program review of each academic department, on a rolling schedule. My department was last reviewed in 1999, so we were on the schedule for Fall 2006. As a part of the review, the department conducts a self-study of each of its programs, reviewing curriculum, student outcomes, faculty, facilities and resources, budget and finance, and program strengths and weaknesses. Then a set of external auditors come to campus to conduct an independent review, informed by the self-study reports. Finally, the dean uses the results of the internal and external reviews to help the department plan for improvement and maintenance.
Periodically examining one's practice, gathering feedback from independent reviewers, and then feeding what you learn back into process improvement seems like a good idea, a natural way for an organization to monitor itself. So why don't faculty take to it enthusiastically? Instead, they dread it.
Any agile software developer knows one part of the answer. Waiting seven years to gather feedback and adjust course is simply a bad idea. Imagine how far off track a department can go in seven years!
Of course, it's not really that bad. To some extent, faculty, department head, and dean are all in a continual process of monitoring the state of the department and making changes. In the seven years since our last review, we have hired three faculty, changed department heads twice, launched two new majors, and moved into a new building. All of these changes resulted from collective discussion or managerial choice.
Part of the problem is the documentation process that accompanies the review. Since returning from OOPSLA, I have spent much of my time encouraging faculty to find time to finish their work on the review and finding time myself to assemble and complete the reports for our undergraduate and graduate programs. None of us has a lot of unencumbered time to devote to such a big task, and the result is process thrashing and delays.
This is my first time leading a review (last time around I wrote a big part of the M.S. program self-study but had another department head to do the encouraging, assembling, and completing), and I think I've learned something about how to make this work better in the future. Most of my ideas are inspired by agile methods.
To be fair to the university and its policy, we are already charged with doing some of this by the university itself, in the form of a Student Outcomes Assessment plan. This plan should be monitoring student outcomes throughout their time on campus and then into their alumni years. Unfortunately our department -- and many others, I suspect -- has never taken these plans seriously. Some faculty view this process as an unnecessary bureaucratic burden, and others think that we are all too busy to do it right. I think that this means we need to develop a better plan, one in which data collection is manageable under supervision of instructors and staff and immediately useful in evaluating our progress. (Because many of us didn't take student outcomes assessment seriously when we wrote the plan, we wrote a plan that was unrealistic and aimed more at satisfying the committee charged with approving the plan than at satisfying our needs!)
There are practical reasons for doing some elements of program review every seven years. For example, getting on-campus feedback from good external reviewers is difficult. They are busy people, in demand, and bringing them to campus is costly. But an advisory board can provide lighter-weight feedback more frequently.
Of course, who knows who will be our department head in 2013, so my learning may or may not have an effect on how we do our next academic program review. But I will proceed with some of these ideas now in an effort to help us improve as a group.
Our self-study reports were due today at 5 PM, and we haven't submitted them yet. You know what I'll be doing this weekend.
On Sunday I had a beautiful morning for my first run since returning from OOPSLA. That run marked four weeks since my marathon in the Twin Cities, and it was my longest since then, an enjoyable twelve miles over my favorite route. I threw in an 8-minute mile near the end, but mostly I took it easy. My body reminded me that I hadn't run that far in a while.
Running in Portland was as good as the conference itself. The cool morning temperatures were perfect, and we had clear skies and no wind all week. I even started feeling a little bit of speed coming back, but my legs felt the increase in mileage.
Dare I consider a spring marathon in, say, Green Bay or Madison or Cincinnati? Winter training in Iowa would help me to keep my mileage -- and my expectations -- down. I could run for fun. My virtual training partner from Arkansas would appreciate an opportunity to train in the South's most comfortable season!
I know, I know. "Normal" and "thinking of doing another marathon" probably don't go hand in hand for most people. But I have a strange desire at this point to run one for the sake of running it, not for a time. I haven't given up on getting better; I just realize that that isn't the only thing that matters.