The technique called store-passing arises from the following challenge: implement an interpreter for LC including Cons, Car, Cdr, set-Car! and set-Cdr!, using allocation but no side-effecting operations.
The technique we will use is to make the store a parameter to all the procedures, to record changes in the store through functional update, and to return the updated store from any procedure that might change it. These changes are reflected in the following abstract specification of our store operations:
setup:    ()                -> store
allocate: store x val x val -> loc x store
lookup:   store x loc       -> val
update:   store x loc x val -> store

We now present an implementation of this specification, representing a store as a list of two elements: the next-use pointer and an association list of ``mutations'' made to the store.
(define make-store               ; int x (int,value) list -> store
  (lambda (next-use allocations)
    (list next-use allocations)))

(define next-use-of              ; store -> int
  (lambda (store)
    (car store)))

(define table-of                 ; store -> (int,value) list
  (lambda (store)
    (cadr store)))

(define setup
  (lambda ()
    (make-store 0 '())))

(define allocate
  (lambda (s a b)
    (let ((first-free (next-use-of s)))
      (list first-free                           ; location
            (make-store (+ first-free 2)         ; store
                        (cons (list (+ first-free 1) b)
                              (cons (list first-free a)
                                    (table-of s))))))))
lookup extracts the association list and looks up the value corresponding to the given location in that list. Finally, the implementation of update is
(define update
  (lambda (s l v)
    (make-store (next-use-of s)
                (cons (list l v) (table-of s)))))
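For completeness, here is one way lookup could be written for this representation. This is only a sketch: it assumes locations are exact integers and uses assv to search the association list.

(define lookup                   ; store x loc -> val
  (lambda (s l)
    ;; update conses new entries onto the front of the table, so assv
    ;; finds the most recent value recorded for location l
    (cadr (assv l (table-of s)))))

A small example of how the store is threaded through these operations (the values a and d are arbitrary symbols):

(let* ((s0  (setup))
       (r   (allocate s0 'a 'd))          ; r is (loc store)
       (loc (car r))
       (s1  (cadr r)))
  (list (lookup s1 loc)                   ; => a   (the first slot)
        (lookup s1 (+ loc 1))             ; => d   (the second slot)
        (lookup (update s1 loc 'b) loc))) ; => b   (s1 itself is unchanged)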
In harmony with the above changes, we also need to change MEval to pass the store around. We will call this new evaluator SMEval. It needs to take an expression, an environment, and a store, and return a value and a store.
First, let us see how the evaluation of simple things like numerals
changes (where st
is the name for the store argument):
((numeral? M) (list (make-Num (numeral-n M)) st))

We have chosen to represent the two returned values -- the result of evaluation, and the store -- as a list. Other representations, such as Scheme's multiple-value mechanism, are also possible.
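The clauses that follow take this two-element result apart with the helpers value-of and store-of. Given the list representation just chosen, a minimal sketch of these helpers (they are not shown in the code above) is:

(define value-of                 ; (value store) -> value
  (lambda (result)
    (car result)))

(define store-of                 ; (value store) -> store
  (lambda (result)
    (cadr result)))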
Now we consider a slightly more complex example, the primitive plus:

((plus? M)
 (let* ((L (SMEval (plus-lhs M) env st))
        (R (SMEval (plus-rhs M) env (store-of L))))
   (list (add (value-of L) (value-of R))
         (store-of R))))

Note how converting to store-passing style makes explicit the dependence on the order of evaluation. Finally, we need to adapt the implementation of the mutating primitives, such as set-Car!:
((set-Car!? M)
 (let* ((p (SMEval (set-Car!-pair M) env st))
        (v (SMEval (set-Car!-value M) env (store-of p))))
   (list your-favorite-value
         (update (store-of v) (value-of p) (value-of v)))))

Hence, we have explained in a fairly different way what it means to add side-effects to the language; in particular, we have done so without relying on side-effects in the meta-language.
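For comparison, the non-mutating pair primitives need only allocate and lookup. The clauses below are a sketch rather than code from these notes: the names Cons?, Cons-lhs, Cons-rhs, Car?, Car-arg, Cdr? and Cdr-arg are hypothetical accessors in the style of plus? and plus-lhs, and a pair value is taken to be the location returned by allocate (with its second slot at the next location), as the set-Car! clause above already assumes.

((Cons? M)
 (let* ((A (SMEval (Cons-lhs M) env st))
        (D (SMEval (Cons-rhs M) env (store-of A)))
        (r (allocate (store-of D) (value-of A) (value-of D))))
   ;; allocate returns (loc store); the location itself serves as the pair value
   (list (car r) (cadr r))))

((Car? M)
 (let ((p (SMEval (Car-arg M) env st)))
   ;; the pair's own location holds the Car field; the store is unchanged
   (list (lookup (store-of p) (value-of p))
         (store-of p))))

((Cdr? M)
 (let ((p (SMEval (Cdr-arg M) env st)))
   ;; the Cdr field lives at the location one past the Car field
   (list (lookup (store-of p) (+ (value-of p) 1))
         (store-of p))))

A set-Cdr! clause would mirror set-Car!, updating location (+ (value-of p) 1) instead of (value-of p).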
When the store is propagated in this manner, it is said to be threaded. The environment, in contrast, is not threaded; it is, in theory, duplicated across its various uses.
This difference reflects their roles: the environment corresponds to the lexical scoping structure of the program, while the store captures the program's data creation behavior.
While converting an implementation to store-passing style requires an extensive rewrite of the interpreter, there are also some benefits to be derived from using this style. For example, suppose that in the implementation of set-Car! we had instead written

(update (store-of p) (value-of p) (value-of v))

in the last line. What would this do? It would create a new primitive that still changes the Car field of the indicated pair, but that does so without recognizing any of the side-effects that may have taken place in the process of determining the value being put into the pair (for instance, a set-Cdr! performed while evaluating the new value would be lost). Put differently, being able to explain a language construct without relying on an equivalent meta-language construct enables researchers to explore alternatives in programming language design.
What is the extent of the location allocated in

(let ((x (malloc ...))) body)

and what does this tell us about the relationship between lexical scope and dynamic extent?
The previous lecture and this one are duals. In
the previous lecture, we assumed that our language had mutation
operators, and showed how we may use these to model allocation (and
mutation) in LC. This corresponds quite closely to the architecture
of most modern computers, where a certain fixed amount of virtual
memory is present at start-up, and allocation is modeled by
apportioning fragments of this memory to individual processes upon
request. Indeed, the implementation of allocate
presented in the previous lecture is conceptually quite similar to
that of Unix's sbrk
primitive, which is used by C's
malloc
library routine.
In contrast, in this lecture, we have shown how to model mutation by assuming only the presence of dynamic allocation routines in our language. This is unlike the architectural model of most stock hardware. We found that under this assumption, we were required to make fairly extensive changes to our implementation of LC, but in return, we gained fine-grained control over the effect of mutations, and were able to understand better the distinction between static scope and dynamic extent.