Lambda Animator

Launch in sandbox mode with Java Web Start or in an Applet
Launch in trusted mode with Java Web Start or in an Applet

Lambda Animator is a tool for demonstrating and experimenting with alternative reduction strategies in the lambda calculus. Eager languages reduce arguments before function application. Lazy languages reduce arguments, if needed, after function application.
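
As a rough illustration in Haskell (not Lambda Animator's own input language, and with illustrative names): Haskell's default order is lazy, and a strict application operator can simulate the eager order.

    -- Lazy (call-by-need): the argument is never reduced, because it is never used.
    lazyResult :: Int
    lazyResult = const 1 undefined      -- evaluates to 1

    -- Eager (call-by-value), simulated here with strict application ($!): the
    -- argument is reduced before the call, so evaluation fails.
    eagerResult :: Int
    eagerResult = const 1 $! undefined  -- raises an exception when forced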

Reductions can also be performed within function bodies, which can have a specializing effect: if a function is applied more than once, any reductions performed within its body are done once, before the body is copied, rather than repeatedly afterwards. These specializing reductions can themselves be performed eagerly or lazily. The specializing effect is sufficient to remove interpretive overhead.
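
For example (a sketch in Haskell rather than the animator's own notation; the definitions are only illustrative), specializing a power function to the exponent 3 amounts to performing the recursive reductions inside its body once:

    -- power n x computes x^n by repeated multiplication.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Applying power to 3 gives a closure that still carries the recursion:
    cube :: Int -> Int
    cube = power 3

    -- Reducing inside the function body unfolds that recursion once and for
    -- all, leaving a residual function equivalent to:
    cube' :: Int -> Int
    cube' x = x * (x * (x * 1))

Every later application of the specialized function pays only for the multiplications, not for re-unfolding the recursion; the step counts in the screen shots below show the same effect.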

Usage

Lambda Animator can be launched in two ways:

If you wish to run from the command line, type one of:

If your browser doesn't know what Java Web Start is, or if you use a network proxy and Java Web Start doesn't know what the settings are, then you may prefer to run the applet versions:

If you're running Debian or Ubuntu, you'll need to have installed Sun's Java:

Installing Graphviz on Debian and Ubuntu is also very easy:

Java 1.5 or greater is required (any version of Java installed within the last two years should be fine).

The JAR file contains the source code so you can compile it on your own computer if you want.

Lambda Animator uses resources on this website. These include further instructions and examples.

Screen shots

These screen shots demonstrate the power function being specialized with the value 3. The resulting cube function takes 4 beta and 17 delta reduction steps the first time it is applied, and then only 1 beta and 3 delta reduction steps on subsequent uses. To see the intermediate steps, and to see more complex examples, launch the sandbox version. To try your own programs, run the trusted version.

These screen shots show stream fusion of a three-stage pipeline in 22 beta-reductions.
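
The exact terms used in the screen shots are not reproduced here, but the effect is the same as fusing a composition such as the following (an illustrative Haskell sketch):

    -- Three stages, each traversing its input separately and building an
    -- intermediate list.
    pipeline :: [Int] -> [Int]
    pipeline = map (+ 1) . filter even . map (* 3)

    -- A fused form that specializing reductions can produce: a single
    -- traversal with no intermediate lists.
    pipelineFused :: [Int] -> [Int]
    pipelineFused []       = []
    pipelineFused (x : xs)
      | even y    = y + 1 : pipelineFused xs
      | otherwise = pipelineFused xs
      where
        y = 3 * x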

Feedback

Please send any comments, questions, suggestions or bugs to:

Acknowledgements

Thanks to David Thyer, Chris Skepper and Karel Hruda for their help testing pre-release versions.
Thanks to AT&T Research for Graphviz and Grappa.

Related work

  • Christopher P. Wadsworth, Semantics and Pragmatics of the Lambda Calculus, DPhil thesis, Oxford, 1971.

    This is where graph reduction all started. Wadsworth introduces sharing so as to keep the normalizing benefits of call-by-name and the efficiency of call-by-value. He performs reductions to normal form, but only within functions which will not be called again, so no specializing effect is achieved.

  • Jean-Jacques Lévy, Optimal reductions in the lambda-calculus. In J.P. Seldin and J.R. Hindley, editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, Academic Press, 1980.

    Lévy formalises the properties of an optimal reduction strategy for the lambda-calculus.

  • John Lamping, An algorithm for optimal lambda calculus reduction, in Symposium on Principles of Programming Languages, ACM Press, 1990.

    Lamping is the first to discover an implementation of optimal evaluation.

  • John Henry Field, Incremental Reduction in the Lambda Calculus and Related Systems, PhD thesis, Cornell, 1991.

    Field performs reductions within functions. He devised a system with delayed substitutions so as to only perform reductions within functions when they are known to be needed, and so avoid potentially unneeded reduction of non-terminating terms. However, as Field notes, the delayed substitutions do not respect any sharing already present in the terms they substitute. Lambda Animator also uses delayed substitutions, but adds memo-tables to ensure that any sharing present in terms being substituted is maintained (see the sketch after this list).

  • Carsten Kehler Holst and Carsten Krogh Gomard, Partial evaluation is fuller laziness, SIGPLAN Notices, September 1991.

    Holst and Gomard coin the term complete laziness, and describe its specializing properties.

  • Kent Karlsson, Open Closures, Winter Meeting, Computer Science Department, Chalmers, 1995.

    Karlsson describes the use of integers to delimit the scope of functions and to improve the efficiency of substitutions. Lambda Animator uses the same integers.

  • Zena Ariola and Stefan Blom, Lambda calculi plus letrec. Technical Report IR-434, Department of Mathematics and Computer Science, Vrije Universiteit, 1997.

    Ariola and Blom use scoped graphs to better formalise the connection between the lambda-calculus and functional languages. Lambda Animator shows these scopes.

  • Andrea Asperti, Cecilia Giovannetti and Andrea Naletto, The Bologna Optimal Higher-order Machine. Journal of Functional Programming, November 1996.

    Asperti, Giovannetti and Naletto implement an improved optimal evaluator and perform experiments with it. No specializing effect is demonstrated.

  • Michael Jonathan Thyer, Lazy Specialization, PhD thesis, York, 1999.

    Thyer (me) implements the first completely lazy evaluator and demonstrates the lazy specialization of a number of interpreters; with a completely lazy strategy, interpretive overhead is completely eliminated. The delayed substitution scheme is modified to swap substitutions so as to implement optimal evaluation. This optimal implementation and the BOHM are used in a number of experiments; neither eliminates interpretive overhead. Lambda Animator implements complete laziness but not optimal evaluation. The experiments with eager specialization and the use of seq are new to Lambda Animator.

  • François-Régis Sinot, A Natural Semantics for Completely Lazy Evaluation, Draft, 2006.

    A formal semantics for complete laziness.
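
As mentioned under Field's entry above, Lambda Animator combines delayed substitutions with memo-tables so that sharing is preserved when a substitution is eventually performed. The following Haskell fragment is only a loose sketch of the memo-table idea: it keys the table on structural equality and ignores variable capture, whereas the animator itself works on scoped graphs with genuine node identity.

    import Control.Monad.State
    import Data.Map (Map)
    import qualified Data.Map as Map

    -- A minimal term type; sharing is approximated here by structural
    -- equality rather than by graph node identity.
    data Term = Var String | Lam String Term | App Term Term
      deriving (Eq, Ord, Show)

    -- Substitute s for v in t, memoising results so that a subterm occurring
    -- (shared) several times is substituted once and the result reused.
    subst :: String -> Term -> Term -> State (Map Term Term) Term
    subst v s t = do
      memo <- get
      case Map.lookup t memo of
        Just r  -> return r                    -- reuse the earlier result
        Nothing -> do
          r <- case t of
                 Var x   -> return (if x == v then s else Var x)
                 Lam x b
                   | x == v    -> return (Lam x b)       -- v is shadowed
                   | otherwise -> Lam x <$> subst v s b  -- capture ignored
                 App f a -> App <$> subst v s f <*> subst v s a
          modify (Map.insert t r)
          return r

    -- Example: substitute y for x in (\z. x z) x.
    demo :: Term
    demo = evalState (subst "x" (Var "y")
                            (App (Lam "z" (App (Var "x") (Var "z"))) (Var "x")))
                     Map.empty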