Logical Connectives and Conditionals:
Syntactic Abstractions of Core Features

Where We Are

We are discussing the idea of syntactic abstractions, those features of a language that are convenient to have but that are not absolutely essential to the language. At this point, we have considered how the following common features of a programming language can, in fact, be considered abstractions of other, more primitive features: functions that take more than one argument, local variables, and local functions. This section considers logical connectives, conditional expressions, and case analysis.

Logical Connectives as Syntactic Abstractions

Like most languages, Racket provides ways to write compound Boolean expressions. For example:

> (and (> 1 0) (< 1 0))
#f
> (and (procedure? car) (procedure? cons))
#t
> (or (> 1 0) (< 1 0))
#t
> (or (procedure? 'car) (procedure? 'cons))
#f
> (not (< 1 0))
#t
> (not (procedure? cons))
#f

Logical operators such as and and or have a special semantics. We have already seen that taking a variable number of arguments is nothing special, but there is something else going on. To see what is special about them, let's consider what a function definition of and would mean under our evaluation model. When we evaluate a function application, we (1) evaluate each of the subexpressions and (2) apply the value of the leftmost subexpression, the function, to the values of the rest, its arguments.

If and were a function, then the evaluator would have to evaluate all of its arguments before applying it. Consider the expression (and #f #t #t #t #t #t #t #t). Before applying and, the evaluator would evaluate seven subexpressions to #t, even though the first argument alone determines that the value of the whole expression is #f. Even if each of those subexpressions required an expensive function call, the evaluator would have to evaluate all of them first.

For this reason, most languages offer logical connectives that do "short-circuit evaluation" of their arguments. As soon as the interpreter encounters a value that determines the value of the whole expression, it returns that value without evaluating the remaining arguments. This means that, in fact, the connectives must be special forms, not functions.
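
We can see the short-circuiting at work in the REPL. In this sketch of an interaction, the calls to error are never reached, because the first argument already determines the answer:

> (and #f (error "this argument is never evaluated"))
#f
> (or #t (error "this argument is never evaluated"))
#t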

We can describe how short-circuit evaluation works with inductive definitions for and and or. These definitions show how to translate one expression into another that contains a simpler use of the operator in question.

and Expressions

An and expression:

(and test_1 test_2 ... test_n)

is equivalent to an if expression:

(if test_1
  (and test_2 ... test_n)             ;; looking for a reason
  false)

An and expression with a single argument is equivalent to that argument.

So, for example, this and expression:

(and (list? arg) (null? (rest arg)))

is equivalent to:

(if (list? arg)
  (null? (rest arg))
  false)
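
Applying this rule repeatedly, and then the single-argument base case, unfolds a longer and into nested ifs. For example:

(and test_1 test_2 test_3)

;; after one application of the rule:
(if test_1
    (and test_2 test_3)
    false)

;; after applying the rule again, then the base case:
(if test_1
    (if test_2
        test_3
        false)
    false)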

or Expressions

An or expression:

(or test_1 test_2 ... test_n)

is equivalent to an if expression:

(if test_1
  true
  (or test_2 ... test_n))

An or expression with a single argument is equivalent to that argument, too.

So, for example, this or expression:

(or (list? arg) (null? (rest arg)))

is equivalent to:

(if (list? arg)
  true
  (null? (rest arg)))
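
The same unfolding works here. A three-argument or, for example, becomes nested ifs that keep looking for a reason to return true:

(or test_1 test_2 test_3)

;; after applying the rule twice, then the base case:
(if test_1
    true
    (if test_2
        true
        test_3))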

Translational Semantics

These translations show us that and and or are really syntactic abstractions of the more general conditional expression if. Indeed, many of you still avoid and and or by building if expressions, with which you feel more comfortable. But that can be dangerous, because you then have to handle all of the alternative cases properly yourself, and the structure of an if or a cond is considerably more complex. The connectives may be sugar, but their sweetness is worth a little effort.

What if n is 0 in the above expressions? What should and and or return then? We can take a hint from the patterns established here. and looks for "false" conjuncts. If it finds one, then it returns false; otherwise, it continues to look. When there are no conjuncts, it will not find a false one, so and returns true. (Think of this from the perspective of a recursive function...)

Likewise, or looks for "true" disjuncts. If it finds one, then it returns true; otherwise, it continues to look. When there are no disjuncts, it will not find a true one, so or returns false.
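
Racket agrees with this reasoning. A quick check in the REPL (again, a sketch of the interaction) shows what the connectives return when given no arguments at all:

> (and)
#t
> (or)
#f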

Conditional Expressions as Syntactic Abstractions

We have had a glimpse of Racket's multi-part branching construct, cond, in a previous lecture. Here is an example of the use of a cond:

> (define sign
    (lambda (x)
      (cond ((< x 0) -1)
            ((> x 0)  1)
            (else      0))))
> (sign -2)
-1
> (sign 4)
1
> (sign 0)
0

We can provide an inductive semantic translation of cond to if as follows:

(cond (<p1> <e1>)
      (<p2> <e2>)
      .
      .
      .
      (<pn> <en>))

is equivalent to an if expression:

(if <p1>
    <e1>
    (cond (<p2> <e2>)
          .
          .
          .
          (<pn> <en>)))
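
For instance, applying this rule once to the body of the sign function above gives the following sketch, with the remaining clauses left inside a smaller cond:

(if (< x 0)
    -1
    (cond ((> x 0) 1)
          (else    0)))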

This definition is not complete, because it doesn't take into account the optional else clause. Still, it gives us a good idea of what is going on.

Quick Exercise: How would you modify the definition to take else into account?

The biggest difference between if and cond is that cond allows more than two choices. Any expression that can be written using one can be rewritten using the other.

Quick Exercise, in Reverse: Give an inductive translation of an if expression into a cond expression.

The fact that all ifs can be written as equivalent conds means that neither expression is more fundamental than the other. An interpreter that can handle one can also handle the other, as well as the logical connectives, by treating them as syntactic abstractions.
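
One way to picture what such an interpreter does is to write the and-to-if rule as an ordinary function over quoted expressions. This is only a sketch, not part of the reading; it assumes the list operations first and rest and represents code as quoted lists:

;; translate a quoted (and ...) expression into an equivalent
;; expression that uses only if, following the rules given above
(define and->if
  (lambda (exp)
    (let ((tests (rest exp)))                       ; drop the and symbol
      (cond ((null? tests) 'true)                   ; (and) is true
            ((null? (rest tests)) (first tests))    ; one argument: itself
            (else (list 'if
                        (first tests)
                        (and->if (cons 'and (rest tests)))
                        'false))))))

> (and->if '(and (list? arg) (null? (rest arg))))
'(if (list? arg) (null? (rest arg)) false)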

Case Analysis as Syntactic Abstraction

Racket provides a case special form that is similar to the switch statements of C/C++ and Java. (Check it out in the Racket Guide.) If you would like more practice with the idea of conditional operators as syntactic abstractions, use the translations given in this reading to help you determine the main differences between a case expression and a more general conditional construct. You might even try writing an inductive definition for it!
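
As a concrete starting point, here is a small example of case in action. (The vowel? function is our own illustration, not something from the Racket Guide.)

> (define vowel?
    (lambda (c)
      (case c
        ((#\a #\e #\i #\o #\u) #t)
        (else #f))))
> (vowel? #\e)
#t
> (vowel? #\z)
#f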