TITLE: Dynamic Scope as Bug or Feature AUTHOR: Eugene Wallingford DATE: April 27, 2009 7:24 PM DESC: ----- BODY: When I teach programming languages, we discuss the concepts of static and dynamic scoping. Scheme, like most languages these days, is statically scoped. This means that a variable reference refers to the binding that existed where, and when, the enclosing function was created. For example,
> (define f
    (let ((i 100))
      (lambda (x)
        (+ x i))))
> (define i 1)
> (f 1)
101
This displays 101, not 2, because the reference to i in the body of function f is to the local variable i that existed when the function was created, not to the i that exists when the function is called. If the interpreter looked to the calling context to find its binding for i, that would be an example of dynamic scope, and the interpreter would display 2 instead.

Most languages use static scoping these days for a variety of reasons, not the least of which is that it is easier for programmers to reason about code that is statically scoped. It is also easier to decompose programs and create modules that programmers can understand easily and use reliably.

In my course, when looking for an example of a dynamically-scoped language, I usually refer to Common Lisp. Many old Lisps were scoped dynamically, and Common Lisp gives the programmer the ability to declare individual variables as dynamically-scoped. Lisp does not mean much to students these days, though. If I were more of a Perl programmer, I would have known that Perl offers the same ability to choose dynamic scope for a particular variable. But I'm not, so I didn't know about this feature of the language until writing this entry. Besides, Perl itself is beginning to fade from the forefront of students' attention these days, too. I could use an example closer to my students' experience.

A recent post on why Python does not optimize tail calls brought this topic to mind. I've often heard it said that closures in Python are "broken", which is to say that they are not closures at all. Consider this example, drawn from the linked article:
IDLE 1.2.1      
>>> def f(x):
    if x > 0:
        return f(x-1)
    return 0

>>> g = f
>>> def f(x):
    return x

>>> g(5)
4
g is a function defined in terms of f. By the time we call g, f refers to a different function at the top level, so the recursive call inside g finds the new binding and g(5) returns 4 rather than the 0 the original definition would produce. The result is something that looks a lot like dynamic scope.

I don't know enough about the history of Python to know whether such dynamic scoping is the result of a conscious decision of the language designer or not. Reading over the Python history blog, I get the impression that it was less a conscious choice and more a side effect of having adopted specific semantics for other parts of the language. Opting for simplicity and transparency as an overarching goal sometimes means accepting their effects downstream.

As my programming languages students learn, it's actually easier to implement dynamic scope in an interpreter, because you get it "for free". To implement static scope, the interpreter must go to the effort of storing the data environment that exists at the time a block, function, or other closure is created. This leads to a trade-off: a simpler interpreter supports programs that can be harder to understand, and a more complex interpreter supports programs that are easier to understand.

So for now I will say that dynamic scope is a feature of Python, not a bug, though it may not have been one of the intended features at the time of the language's design. If any of your current favorite languages use or allow dynamic scope, I'd love to hear about it -- and especially whether and how you ever put that feature to use. -----
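POSTSCRIPT: The interpreter trade-off above can be sketched in a few lines of Python. This is a toy model, not how any real interpreter is organized, and all of the names in it (make_function, lookup, call_dynamic, call_static) are invented for the illustration. It mirrors the Scheme example from the top of the entry: f is created in an environment where i is 100 and called from one where i is 1.

```python
def make_function(param, body, def_env):
    # A "function" is its parameter name, its body, and -- needed only
    # for static scope -- the environment captured at definition time.
    return {"param": param, "body": body, "env": def_env}

def lookup(name, env):
    # An environment is a (frame, parent) pair; walk the chain of
    # frames until we find a binding for the name.
    while env is not None:
        frame, parent = env
        if name in frame:
            return frame[name]
        env = parent
    raise NameError(name)

def call_dynamic(fn, arg, caller_env):
    # Dynamic scope: extend the CALLER's environment. The interpreter
    # keeps no record of where the function was defined -- "for free".
    env = ({fn["param"]: arg}, caller_env)
    return fn["body"](lambda name: lookup(name, env))

def call_static(fn, arg):
    # Static scope: extend the environment saved when the function was
    # created, ignoring the caller's bindings entirely.
    env = ({fn["param"]: arg}, fn["env"])
    return fn["body"](lambda name: lookup(name, env))

# Mirror the Scheme example: f is defined where i = 100 ...
def_env = ({"i": 100}, None)
f = make_function("x", lambda get: get("x") + get("i"), def_env)

# ... and called from a context where i = 1.
call_env = ({"i": 1}, None)
print(call_static(f, 1))             # 101 -- static scope, as in Scheme
print(call_dynamic(f, 1, call_env))  # 2   -- dynamic scope
```

The extra bookkeeping for static scope is exactly the def_env slot in make_function: drop it, and call_dynamic is all the interpreter can offer.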