
To run these examples, click on the hyperlinks below. To see the input that generated an example, click on the input tab. To see the reductions, click on the reductions tab.

(+ 1 (+ 2 (+ 3 4)))

(letrec ((a (+ 1 1)) (b (+ a a)) (c (+ b b)) (d (+ c c))) d)

(define double (lambda (x) (+ x x)))

(double (double (double (double 1))))

Note how the body of the function is copied, overwriting the application node.

The above example used the call-by-need argument strategy; the same example can be seen using call-by-value and call-by-name.

There are three versions: call-by-name, call-by-value, and call-by-need.

(define addup (lambda (x) (if (= x 0) 0 (+ x (addup (- x 1))))))
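An application of `addup` (the argument `4` is chosen here purely for illustration) can then be reduced under each of the three strategies:

(addup 4)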

It's also interesting to see how recursion can be achieved without sharing. Here the recursive call is abstracted out to remove the self-reference.
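As a sketch of what such an abstraction might look like (the name `addup-open` is hypothetical, not the site's own), the function receives itself as an ordinary argument rather than referring to itself directly, so no cyclic self-reference is needed:

(define addup-open (lambda (self x) (if (= x 0) 0 (+ x (self self (- x 1))))))

(addup-open addup-open 4)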

(define abs-diff (lambda (x) (lambda (y) (if (< x y) (- y x) (- x y)))))

(abs-diff 3 7)

Note how again the body of the applied function is copied, and in particular how the variables of the copied nested function have been renamed, or rather renumbered. Top-level functions always bind variable '1'. Functions nested directly within top-level functions bind variable '2', and so on.

Here is a curried version of the addup function. This function uses an accumulation argument to build the result:

(define addup (lambda (x total) (if (= x 0) total (addup (- x 1) (+ total x)))))

(addup 4 0)

Here is a recursive curried function for computing powers:

(define power (lambda (n x) (if (= n 0) 1 (* x (power (- n 1) x)))))

(power 5 3)

This example demonstrates a program which terminates with the call-by-need strategy, but not when using hyper-strict call-by-value.

(define gen (lambda (x) (cons x (gen (+ x 1)))))

(define take (lambda (n l) (if (= n 0) (quote ()) (cons (head l) (take (- n 1) (tail l))))))

(define sum (lambda (a) (if (pair? a) (+ (head a) (sum (tail a))) 0)))
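A run of this program might look like the following (the arguments `5` and `1` are chosen for illustration): under call-by-need only five elements of the infinite list are ever demanded, while hyper-strict call-by-value attempts to evaluate `(gen 1)` fully and diverges.

(sum (take 5 (gen 1)))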

The `foldl`, `foldr`, and `unfold` functions are defined as follows:

(define foldl (lambda (f z xs) (if (pair? xs) (foldl f (f z (head xs)) (tail xs)) z)))

(define foldr (lambda (f z xs) (if (pair? xs) (f (head xs) (foldr f z (tail xs))) z)))

(define unfold (lambda (p f g x) (if (p x) '() (cons (f x) (unfold p f g (g x))))))
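As a quick illustration of `foldl` and `foldr` (the list literal is chosen for illustration), `foldl` combines elements from the left while `foldr` rebuilds the list from the right:

(foldl + 0 (quote (1 2 3)))

(foldr cons (quote ()) (quote (1 2 3)))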

And the `sum` function can then be written as a left fold:

(define sum (foldl + 0))

Where `gen` is written with `unfold`, using the `K` and `I` combinators:

(define gen (unfold (K false) I (+ 1)))

(define K (lambda (x y) x))

(define I (lambda (x) x))

This pair of examples demonstrates the difference between left folds and right folds. Note how the left fold runs in constant space: every ten reduction steps, the graph looks identical in shape. In the right fold example, by contrast, a growing list of partially applied functions builds up.

Here's the infinite data example written using folds and unfolds.

Here's an example of careless use of eager specialization.
The body of the `power` function grows endlessly (*explodes*) as the recursive call to itself is applied.
This function can never be reduced to normal form, and the attempt to do so will not terminate.
Three techniques can be used to remedy the non-termination:

- Remove direct recursion
- Postpone reductions by using `seq`
- Selective use of eager specialization

Note how before the value `5` is applied the function has been reduced to
an efficient cubing function.
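A sketch of how the `seq` technique might be applied to `power` (this is an illustration, not necessarily the site's own example): guarding the recursive branch on the as-yet-unknown `n` postpones its unfolding until `n` has been reduced to WHNF.

(define power (lambda (n x) (if (= n 0) 1 (seq n (* x (power (- n 1) x))))))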

Almost equivalently, the self-reference can be left in place, but not in a function position. In this example the identity function is abstracted out and used to prevent recursion until termination is possible. This technique places a burden on the user of the function, who has to pass in an extra argument to prevent premature recursion. And worse, this burden cannot simply be hidden in a wrapper function, as eager specialization of the wrapper returns us right back to the original non-terminating function.

This example uses conventional substitute-by-name, with no function body reduction, as the default strategy. No attempt is now made to reduce function bodies except where `apply-strategy` selectively requests it:

(apply-strategy 'nf 'whnf (power 3) 5)

The power example, which failed to terminate with eager specialization, demonstrates the benefits of lazy specialization, shown here.

Note the use of the substitution nodes `1->` and `2->`. These postpone the substitution of nodes in function bodies until the nodes have been reduced to WHNF. To ensure any sharing present in the function body is maintained, related substitution nodes, that is those originating from the same beta-reduction, share a memo-table.

If we re-run that last example, but this time hold onto the `power` function and the partially applied `power 3` function, we'll see that an efficient cubing function has been generated. However, we'll also see that the `power` function has *lazily exploded*. This has been done lazily, so termination wasn't at risk, but the unfolding is undesirable. All the same techniques used to ensure eager specialization terminates can be used to improve the efficiency of lazy specialization.

Here is the same program again, but using `apply-strategy` to explicitly control where lazy specialization is used.

In the Larger Examples section, examples are given of lazy specialization of an interpreter, with CPS and eta-expansion transformations applied to a stream fusion program.

Here's an example of an interpreter being specialized away.
Note the use of the `seq` function in the program being interpreted.
Without this, reduction wouldn't terminate: the interpreter would be eliminated, but the power function would explode as before.

An alternative way to stop eager specialization after the interpreter has been eliminated is to control the specialization programmatically. Consider this definition and use of the `power` function:

The result of evaluating the quoted program is bound to `power`:

(define power (eval init-env quote-power))

(power 3 5)

However, in performing all possible computations, the function explodes:

(define power (apply-strategy 'nf 'whnf (eval init-env quote-power)))

The problem here is that if we reduce to normal form before applying the first argument to `power`
then the function explodes.
If we reduce to normal form after applying the first argument to `power` then we compute the *cube*
function in a very expensive way, without an efficient intermediate `power` function.

The solution is to provide an extra argument between the program to be specialized and the first argument to that program. Extra `seq` calls are placed within the `eval` function which prevent the reduction of the computations that become possible only after the computations dependent on the program have been specialized.

This new definition of eval is used like this:

(define power (apply-strategy 'nf 'whnf (eval init-env quote-power) '()))

This example demonstrates the technique.
In this example, the residualized `seq` calls are placed on `letrec` bound values.

Note that to get this to work, the program had to be manually written in CPS style, with `if` expressions eta-expanded.
Writing in CPS style helps bring together sum-type constructors and case pattern matchers
so that they cancel out.

Eta-expanding the `if` expressions is required to enable the CPS abstractions used in the sum-type constructors in the *then* and *else* branches of the `if` expression to be shared *above* the `if`. This enables the alternate branches of the `if` expression to meet the context of the `if` expression once, before the `if` predicate can be determined, instead of many times after.

We can use this stream fusion within functions. Using this definition of `sum-range`:

(define sum-range (lambda (x y) (sum (take-until (< y) (gen x)))))
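An application such as the following (the arguments are chosen for illustration) then fuses the generate, take-until, and sum stages into a single loop, summing the integers from 1 to 10:

(sum-range 1 10)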

To save the programmer the extra effort of manually performing CPS and eta transformations, an alternative technique is
to write an interpreter for a language extended with a `case` construct.
This example shows the use
of an interpreter for a language with `case` written in such
a way as to effectively perform CPS transformations for constructors
and case patterns, and eta-expansion of if expressions.

In order to perform eta-expansion, the arity of the values present in the interpreter is computed. This requires the initial environment to be extended with the arity of its values. In order to compute the arity lazily, the arity is represented as a list of nulls rather than as an integer. This makes it possible, for example, to compute the minimum guaranteed arity of recursive variable-arity functions such as `eval-abs`.

The benefit of using this intermediate interpreter technique is that the programmer gets the benefits of transforming their code into CPS form without having to do the transformation manually. As the example shows, the program is still fused as before.

Write a type checker to catch more errors early rather than trying to recognise fragments of code in a post-mortem of a failed program.