Launch in sandbox mode with Java Web Start or in an Applet
Launch in trusted mode with Java Web Start or in an Applet
Lambda Animator is a tool for demonstrating and experimenting with alternative reduction strategies in the lambda calculus. Eager languages reduce arguments before function application. Lazy languages reduce arguments after function application, and only if they are needed.
Reductions can also be performed within function bodies.
Performing reductions within functions can have a specializing effect.
If a function is applied more than once, any reductions performed within its body are performed once, before the body is copied, rather than multiple times afterwards.
These specializing reductions can also be performed eagerly or lazily.
The specializing effect is sufficient to remove interpretive overhead.
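The idea of performing reductions within function bodies can be sketched with a minimal normal-order reducer. This is an illustrative sketch, not Lambda Animator's actual algorithm: it reduces the leftmost redex first, including redexes under lambdas, so work done inside a body happens once, before the body is ever copied. For simplicity it assumes bound variable names are distinct, so no capture-avoiding renaming is needed.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(t, x, v):
    """Substitute v for free occurrences of x in t.
    Assumes bound names are distinct (no capture-avoiding renaming)."""
    kind = t[0]
    if kind == 'var':
        return v if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One leftmost-outermost reduction step, or None if t is in normal form."""
    kind = t[0]
    if kind == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                      # beta redex
            return subst(f[2], f[1], a)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        return None if r is None else ('app', f, r)
    if kind == 'lam':                          # reduce inside function bodies too
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def normalize(t):
    """Reduce to normal form, counting beta steps."""
    steps = 0
    while True:
        r = step(t)
        if r is None:
            return t, steps
        t, steps = r, steps + 1

# The redex inside the body of λx.(λy.y) x is reduced once, under the lambda,
# yielding λx.x -- even though the function was never applied.
ident_inside = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
print(normalize(ident_inside))   # (('lam', 'x', ('var', 'x')), 1)
```

If this function were instead copied at each of several application sites before being reduced, the inner redex would be reduced once per copy; reducing under the lambda first does it once.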
Lambda Animator can be launched in two ways:
The JAR file contains the source code so you can compile it on your own computer if you want.
Lambda Animator uses resources on this website.
These include further instructions and examples.
These screen shots demonstrate the power function being specialized on the value 3.
The resulting cube function takes 4 beta and 17 delta reduction steps the first time it is applied, and then only 1 beta and 3 delta reduction steps on subsequent uses.
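The same specializing effect can be mimicked in plain Python, as a rough analogy rather than the tool's lambda-calculus encoding: the recursion over the known exponent is performed once, when `specialize_power(3)` is called, so the returned cube function never inspects the exponent again.

```python
def specialize_power(n):
    """Build a function computing x**n, unrolling the recursion on n up front."""
    if n == 0:
        return lambda x: 1
    inner = specialize_power(n - 1)   # all work depending on n happens here, once
    return lambda x: x * inner(x)

cube = specialize_power(3)
print(cube(2), cube(5))   # 8 125
```

Every later call to `cube` performs only the multiplications; the case analysis on the exponent was paid for once, at specialization time.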
To see the intermediate steps, and to see more complex examples, launch the sandbox version.
To try your own programs, run the trusted version.
These screen shots show stream fusion of a three stage pipeline in 22 beta-reductions.
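As a loose analogy for what such fusion achieves (in Python generators, not lambda-calculus reductions), a three-stage pipeline can be chained so that each element flows through all three stages in turn and no intermediate list is ever built:

```python
def source(n):          # stage 1: produce the numbers 0..n-1
    for i in range(n):
        yield i

def square(xs):         # stage 2: square each element
    for x in xs:
        yield x * x

def keep_even(xs):      # stage 3: drop the odd squares
    for x in xs:
        if x % 2 == 0:
            yield x

print(list(keep_even(square(source(5)))))   # [0, 4, 16]
```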
Please send any comments, questions, suggestions or bugs to:
Thanks to David Thyer, Chris Skepper and Karel Hruda for their help testing pre-release versions.
Thanks to AT&T Research for Graphviz and Grappa.
This is where graph reduction all started. Wadsworth introduces sharing to keep the normalizing benefit of call-by-name together with the efficiency of call-by-value. Wadsworth performs reductions to normal form, but only within functions that will not be applied again, so no specializing effect is achieved.
Levy formalises the properties of an optimal reduction strategy for the lambda-calculus.
Lamping is the first to discover an implementation of optimal evaluation.
Field performs reductions within functions. He devised a system with delayed substitutions so that reductions within functions are performed only when they are known to be needed, avoiding potentially unneeded reduction of non-terminating terms. However, as Field notes, the delayed substitutions do not respect any sharing already present in the terms they substitute. The lambda animator also uses delayed substitutions, but adds memo-tables to ensure that any sharing present in substituted terms is maintained.
Holst and Gomard coin the term complete laziness, and describe its specializing properties.
Karlsson describes the use of integers to delimit the scope of functions and to improve the efficiency of substitutions. The lambda animator uses the same integers.
Ariola and Blom use scoped graphs to better formalise the connection between the lambda-calculus and functional languages. The lambda animator shows these scopes.
Asperti, Giovannetti and Naletto implement an improved optimal evaluator, and perform experiments with it. No specializing effect is demonstrated.
Thyer (me) implements the first completely lazy evaluator, and demonstrates the lazy specialization of a number of interpreters. Using a completely lazy strategy, interpretive overhead is completely eliminated. The delayed substitution scheme is modified to swap substitutions so as to implement optimal evaluation. This optimal implementation and BOHM are used in a number of experiments; neither eliminates interpretive overhead. Lambda Animator implements complete laziness but not optimal evaluation. The experiments with eager specialization and the use of seq are new to Lambda Animator.
A formal semantics for complete laziness.