# A grammar-based approach to invertible programs

Kazutaka Matsuda, Shin-Cheng Mu, Zhenjiang Hu, and Masato Takeichi. In 19th European Symposium on Programming (ESOP 2010), LNCS 6012, pp 448-467, March 2010. [PDF]

Program inversion has many applications, such as implementing serialization/deserialization and providing support for redo/undo, and has been studied by many researchers. However, little attention has been paid to two problems: how to characterize programs that are easy or hard to invert, and whether, for each class of programs, efficient inverses can be obtained. In this paper, we propose an inversion framework that we call grammar-based inversion, where a program is associated with an unambiguous grammar describing the range of the program. The complexity of the grammar indicates how hard it is to invert the program, and also determines how efficiently an inverse can be obtained.

# No Inverses for Injective but Non-Surjective Functions?

“I cannot prove that if a function is injective, it has an inverse.” Hideki Hashimoto posed this problem to me a while ago. It turned out that this was not the property he really wanted, but it got me thinking: is it possible at all?

### Preliminaries

Let us start with some basic definitions. A relation from A to B is denoted by the wavy arrow:

``````_↝_ : Set → Set → Set1
A ↝ B = A → B → Set``````

Given a relation `R : A ↝ B` we can always take its converse `R ˘ : B ↝ A`, and a function can be lifted to a relation by `fun`:

``````_˘ : ∀ {A B} → (A ↝ B) → (B ↝ A)
(R ˘) b a = R a b

fun : ∀ {A B} → (A → B) → (A ↝ B)
fun f a b = f a ≡ b``````

A relation `R : A ↝ B` is simple if it does not map one input to multiple outputs. It is entire if everything in `A` is mapped to something in `B` — a more familiar word may be “total”.

``````simple : ∀ {A B} → (A ↝ B) → Set
simple R = ∀ {a b₁ b₂} →  R a b₁ → R a b₂ → b₁ ≡ b₂

entire : ∀ {A B} → (A ↝ B) → Set
entire R = ∀ a → ∃ (λ b → R a b)``````

A function is a relation that is simple and entire. Indeed, one can show that `fun f` is simple and entire for every `f`. Injectivity and surjectivity are similar notions, defined on the converse of `R`:

``````injective : ∀ {A B} → (A ↝ B) → Set
injective R = ∀ {a₁ a₂ b} → R a₁ b → R a₂ b → a₁ ≡ a₂

surjective : ∀ {A B} → (A ↝ B) → Set
surjective R = ∀ b → ∃ (λ a → R a b)``````

The (constructive variant of the) axiom of choice states that an entire relation `A ↝ B` can be refined to a function `A → B`:

``````ac : ∀ {A B} → (R : A ↝ B) →
(∀ a → ∃ (λ b → R a b)) → ∃ {A → B} (λ f → ∀ a → R a (f a))
ac R R-entire = ((λ a → proj₁ (R-entire a)) , λ a → proj₂ (R-entire a))``````

See the axiom of choice homepage, or the Stanford Encyclopedia of Philosophy for more information on this axiom.

### Inverting Injective and Surjective Functions

Now, let us restate Hashimoto san’s challenge:

Let `f : A → B` be such that `fun f` is injective. Prove that `f` has a left inverse. That is, there exists some `f⁻¹` such that `f⁻¹ (f a) = a` for all `a`.

It turned out that he forgot a condition: `f` is also surjective. If `f` is also (provably) surjective, one can pick some `g ⊆ f˘` using the axiom of choice (since `f` is surjective if and only if `f˘` is total) and further prove that `g ∘ f = id` using injectivity:

``````inv-sur : ∀ {A B} → (f : A → B) →
injective (fun f) → surjective (fun f) →
∃ {B → A} (λ f⁻¹ → (∀ a → f⁻¹ (f a) ≡ a))
inv-sur f f-inj f-sur with ac ((fun f) ˘) f-sur
... | (g , fgb≡b) =  (g , λ a → f-inj {g (f a)} {a} {f a} (fgb≡b (f a)) refl)``````

Like the proof of the constructive axiom of choice, the proof above does not really do much. The proof of surjectivity of `f` has already provided, for every `b : B`, an `a : A` such that `f a ≡ b`. So we simply let `f⁻¹` return that `a`.

Can we lift the restriction that `f` must be surjective? That is, can this be proved?

``````inv : ∀ {A B} → (f : A → B) → injective (fun f) →
∃ {B → A} (λ f⁻¹ → (∀ a → f⁻¹ (f a) ≡ a))``````

To make the scenario clear: we have a (total) function `f : A → B` that is injective but not necessarily surjective. The set `B` could be “larger” than `A` in the sense that there could be some elements `b : B` for which no `f a` equals `b` — that is, `B` may not be “fully covered.” Can we construct `f⁻¹ : B → A` such that `f⁻¹ (f a) ≡ a` for all `a : A`?

At first glance it did not look terribly difficult. Given `b`, if it equals some `f a`, let `f⁻¹` simply return that `a`. Otherwise `f⁻¹` could map `b` to any element in `A`, since such a `b` never shows up in an invocation `f⁻¹ (f a)` anyway. It should be possible as long as `A` is not empty, right?
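For a finite `A` with decidable equality this intuition does go through, because we can decide whether `b` is in the range of `f` by exhaustive search. A small Python sketch (the names `left_inverse` and `default` are mine, purely for illustration):

```python
def left_inverse(f, domain, default):
    """Build a left inverse of an injective f by exhaustive search
    over a finite domain; b outside the range of f maps to `default`."""
    def f_inv(b):
        for a in domain:
            if f(a) == b:      # decidable: we can test range membership
                return a
        return default         # b is not in the range; any choice works
    return f_inv

f = lambda a: 2 * a            # injective, but not surjective on integers
f_inv = left_inverse(f, range(10), 0)
assert all(f_inv(f(a)) == a for a in range(10))
```

The obstacle in Agda is precisely that, for an arbitrary `B`, the membership test in the loop is not available.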

I tried to construct this `f⁻¹` but could not help noticing something funny going on. It turns out that had this function existed, we could, again, prove the law of excluded middle. That is, for any predicate `P : B → Set` and any `b : B`, there would be a decision procedure telling us whether `P b` is true or not.

Provided that we assume proof irrelevance, that is.

### Excluded Middle, Again

Here we go. Let `B` be some type having decidable equality. That is, there exists some `eqB`:

``eqB : (b₁ b₂ : B) → (b₁ ≡ b₂) ⊎ (¬ (b₁ ≡ b₂))``

where `_⊎_` is disjoint sum.

Now take some predicate `P : B → Set`. Let `A` be defined by:

``````A : (B → Set) → Set
A P = Σ B (λ b → P b)``````

That is, `A P` is the subset of `B` for which `P` holds. Each element of `A P` is a pair `(b, Pb)` where `Pb` is a proof of `P b`.

Finally, take

``````f : ∀ {P} → A P → B
f = proj₁``````

Thus `f (b, Pb) = b`.

The function `f` is injective if we assume proof irrelevance. That is, if `f (b, Pb) = b` and `f (b', Pb') = b`, we must have `b = b'` and (due to proof irrelevance) `Pb = Pb'`, and therefore `(b, Pb) = (b', Pb')`. Indeed, if we postulate proof irrelevance:

``postulate irr : (P : B → Set) → ∀ {b} → (p₁ : P b) → (p₂ : P b) → p₁ ≡ p₂``

We can construct a proof that `f` is injective:

``````f-inj : ∀ {P} → injective (fun (f {P}))
f-inj {P} {(.b , Pb₁)} {(.b , Pb₂)} {b} refl refl = cong (λ p → (b , p)) (irr P Pb₁ Pb₂)``````

Assume that we have proved `inv`. We can now apply `inv` and obtain some `f⁻¹`, the left inverse of `f`.

However, with this particular choice of `A`, `f`, and `f⁻¹`, we can construct a deciding procedure for `P`. That is, for any `P` and `b`, we can determine whether `P b` holds:

``em : {P : B → Set} → ∀ b → P b ⊎ ¬ (P b)``

This is how `em` works. Given some `b`, let’s apply `f⁻¹` to `b`. The result is a pair `(b', Pb')`. Let’s compare `b` and `b'` using `eqB`:

``````em {P} b with inv f (f-inj {P})
...      | (f⁻¹ , f⁻¹fa≡a) with inspect (f⁻¹ b)
...                        | (b' , Pb') with-≡ _          with eqB b b'``````

If `b ≡ b'`, `Pb'` is a proof of `P b'` and also a proof of `P b`. Let us just return it (after some type casting):

``````em {P} b | (f⁻¹ , f⁻¹fa≡a) | (b' , Pb') with-≡ _          | inj₁ b≡b'  =
inj₁ (subst P (sym b≡b') Pb')``````

Consider the case that `b` does not equal `b'`. We want to show that `P b` is not true, that is, that a proof of `P b` leads to contradiction. Assume we have a proof `Pb` of `P b`. Since `f (b , Pb) ≡ b`, we have `f⁻¹ b ≡ (b , Pb)`:

``````em {P} b | (f⁻¹ , f⁻¹fa≡a) | (b' , Pb') with-≡ b'Pb'≡f⁻¹b | inj₂ ¬b≡b' =
  inj₂ (λ Pb →
    let f⁻¹b≡bPb : f⁻¹ b ≡ (b , Pb)
        f⁻¹b≡bPb = f⁻¹fa≡a (b , Pb)``````

The assumption says that `f⁻¹ b = (b' , Pb')`. By transitivity we have `(b , Pb) ≡ (b' , Pb')`.

``````        bPb≡b'Pb' : (b , Pb) ≡ (b' , Pb')
        bPb≡b'Pb' = sym (trans b'Pb'≡f⁻¹b f⁻¹b≡bPb)``````

But if we take the first component of both sides, we get `b ≡ b'`. That contradicts our assumption that `b` does not equal `b'`:

``````        b≡b' : b ≡ b'
        b≡b' = cong proj₁ bPb≡b'Pb'
    in ¬b≡b' b≡b')``````

which completes the proof.

In retrospect, `f⁻¹` cannot exist because for it to work, it has to magically know whether `b` is in the range of `f`, that is, whether `P b` is true or not.

### Nakano’s Challenge

When I talked about this to Keisuke Nakano, he posed another related challenge. Set-theoretically, we understand that if there exists an injective function `f : A → B` and another injective function `g : B → A`, the sets `A` and `B` have the same cardinality and there ought to be a bijection `A → B`. Can we construct this bijection? That is, can we prove this theorem?

``````nakano : {A B : Set} →
(f : A → B) → injective (fun f) →
(g : B → A) → injective (fun g) →
∃ {A → B} (λ h → injective (fun h) × surjective (fun h))``````

I do not yet have a solution. Anyone wanna give it a try?
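Classically this is the Cantor–Schröder–Bernstein theorem. For finite sets with decidable equality, the usual back-and-forth construction can even be run as a program. The Python sketch below (entirely my own, and no answer to the constructive challenge) chases each chain backwards to decide whether to use `f` or the converse of `g`:

```python
def csb(A, B, f, g):
    """Bijection A -> B from injections f : A -> B and g : B -> A,
    for finite A, B, by the Cantor-Schroeder-Bernstein chain chase."""
    f_conv = {f(a): a for a in A}      # converse of f (f is injective)
    g_conv = {g(b): b for b in B}      # converse of g (g is injective)
    def h(a):
        x, seen = a, set()
        while x not in seen:           # leaving the loop means a cycle
            seen.add(x)
            if x not in g_conv:        # chain stops in A \ g(B): use f
                return f(a)
            b = g_conv[x]
            if b not in f_conv:        # chain stops in B \ f(A): use g's
                return g_conv[a]       #   converse (defined: a is in g(B))
            x = f_conv[b]
        return f(a)                    # cyclic chains are closed under f
    return h

A, B = [0, 1, 2], ['x', 'y', 'z']
f = lambda a: B[a]                     # an injection A -> B
g = lambda b: (B.index(b) + 1) % 3     # an injection B -> A
h = csb(A, B, f, g)
assert sorted(h(a) for a in A) == sorted(B)   # h is a bijection
```

For finite sets both injections are already bijections, so the stopper branches never fire here; the chain chase is only genuinely needed in the infinite case, which is exactly where the construction stops being computable.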

# A programmable editor for developing structured documents based on bidirectional transformations

Z. Hu, S-C. Mu and M. Takeichi. Higher-Order and Symbolic Computation, Vol 21(1-2), pp 89-118, May 2008.
[PDF]

This paper presents an application of bidirectional transformations to the design and implementation of a novel editor supporting interactive refinement in the development of structured documents. The user performs a sequence of editing operations on the document view, and the editor automatically derives an efficient and reliable document source and a transformation that produces the document view. The editor is unique in its programmability, in the sense that the transformation can be obtained through editing operations. The main tricks behind it are the utilization of the view-updating technique developed in the database community, and a new bidirectional transformation language that can describe not only the relationship between the document source and its view, but also data dependency in the view.

# Constructing List Homomorphism from Left and Right Folds

Since Josh has already mentioned it, I had better give it a full account.

Back in 2003, when I had just started my postdoc in the PSD Lab in University of Tokyo, my colleagues there were discussing the third homomorphism theorem. The theorem says that if a function f can be expressed both as a foldr and a foldl:

f = foldr (≪) e
f = foldl (≫) e

there exists some associative binary operator ⊚ such that

f [] = e
f [a] = a ≪ e = e ≫ a
f (xs ⧺ ys) = f xs ⊚ f ys

Being a list homomorphism implies the potential of parallel computation.

The paper we studied was, of course, The Third Homomorphism Theorem by Jeremy. The aim then was to automatically, or semi-automatically, derive ⊚, given ≪, ≫, and e. The motivating examples include:

• f = sum, where ⊚ can simply be +.
• f = sort, where ⊚ can be the function merge, merging two sorted lists.
• f = scanr (⊛) e. While scanr appears to be an inherently sequential function, it is actually possible to compute f “from the middle”, provided that ⊛ is associative, if we take xs ⊚ (y : ys) = map (⊛ y) xs ⧺ ys.

Can we come up with a method to derive ⊚ for all these examples?

I myself was not particularly interested in automation. I was interested in the challenge for two reasons. Firstly, it appeared that some kind of inverse function was needed. Secondly, when I looked at Jeremy’s proof, I felt there was a relational proof inside waiting to be discovered. So I tried.

### Setting the Stage

For the ease of point-free calculation, we define variants of the folds where the input is paired with the base case:

foldrp (≪) ([], a) = a
foldrp (≪) (x : xs, a) = x ≪ foldrp (≪) (xs, a)
foldlp (≫) (a, []) = a
foldlp (≫) (a, xs ⧺ [x]) = foldlp (≫) (a, xs) ≫ x

The advantage is that we can shift a suffix or a prefix of the input list to the base case. That is:

foldr (≪) e (xs ⧺ ys) = foldrp (≪) (xs, foldr (≪) e ys)
foldl (≫) e (xs ⧺ ys) = foldlp (≫) (foldl (≫) e xs, ys)

or, in point-free style:

foldr (≪) e . cat = foldrp (≪) . (id × foldr (≪) e)    (1)
foldl (≫) e . cat = foldlp (≫) . (foldl (≫) e × id)    (2)

where cat (xs,ys) = xs ⧺ ys and (f × g) (a,b) = (f a, g b).
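Properties (1) and (2) are easy to sanity-check. Below is a direct Python transcription of the four folds (my own rendering, with a non-commutative operator so that the test actually exercises the bracketing):

```python
from functools import reduce

def foldr(op, e, xs):
    return reduce(lambda acc, x: op(x, acc), reversed(xs), e)

def foldl(op, e, xs):
    return reduce(op, xs, e)

def foldrp(op, pair):          # foldr with the base case paired in
    xs, a = pair
    return foldr(op, a, xs)

def foldlp(op, pair):          # foldl with the base case paired in
    a, xs = pair
    return foldl(op, a, xs)

cat = lambda p: p[0] + p[1]
minus = lambda a, b: a - b     # non-commutative, non-associative
xs, ys = [1, 2, 3], [4, 5]

# (1): foldr (<<) e . cat  =  foldrp (<<) . (id x foldr (<<) e)
assert foldr(minus, 0, cat((xs, ys))) == foldrp(minus, (xs, foldr(minus, 0, ys)))
# (2): foldl (>>) e . cat  =  foldlp (>>) . (foldl (>>) e x id)
assert foldl(minus, 0, cat((xs, ys))) == foldlp(minus, (foldl(minus, 0, xs), ys))
```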

The key property, however, is one that was shown (possibly among other places) in Algebra of Programming: for a simple relation (i.e. a partial function) S, we have:

S . S° . S = S

where ° stands for relational converse (the relational concept of an “inverse function”).

Recall our aim: given f, look for ⊚ such that f xs ⊚ f ys = f (xs ⧺ ys). It translates to point-free style as ⊚ . (f × f) = f . cat.

### Proving the Theorem

The third homomorphism theorem is a direct corollary of the following lemma:

Lemma 1: f = foldr (≪) e = foldl (≫) e implies that f is prefix and postfix insensitive. That is:

f xs = f xs' ∧ f ys = f ys' ⇒ f (xs ⧺ ys) = f (xs' ⧺ ys').

In point-free style: f . cat . (f° . f × f° . f) = f . cat.

Once we have the lemma proved, the theorem follows by taking ⊚ = f . cat . (f° × f°). It has to be a (partial) function because ⊚ . (f × f) = f . cat is a function. Furthermore, ⊚ is associative because cat is.

The proof of Lemma 1 is simply some applications of (1), (2), and S . S° . S = S:

f . cat
=      { (1) }
foldrp (≪) . (id × f)
=      { since f . f° . f = f }
foldrp (≪) . (id × f) . (id × f° . f)
=      { (1) }
f . cat . (id × f° . f)
=      { (2) }
foldlp (≫) . (f × id) . (id × f° . f)
=      { since f . f° . f = f }
foldlp (≫) . (f × id) . (f° . f × f° . f)
=      { (2) }
f . cat . (f° . f × f° . f)

The proof is essentially the same as Jeremy’s, but in a relational disguise.

### Refining to Functions

To derive actual algorithms, we have yet to refine ⊚ = f . cat . (f° × f°) so that it uses functional components only. We shall pick some g ⊆ f° whose domain equals the range of f, such that ⊚ = f . cat . (g × g). (An equality, rather than an inclusion, holds because both definitions are partial functions having the same domain.)

For example, consider the special case sum = foldr (+) 0 = foldl (+) 0. Here ⊚ = sum . cat . (sum° × sum°). One simple choice is to pick wrap ⊆ sum°, where wrap a = [a]. In this case a ⊚ b = sum [a, b] = a + b.

For sort, define sorted = ran sort. It is a partial function such that sorted xs = xs iff xs is sorted. Notice that sorted ⊆ sort°. Therefore, a possible choice for ⊚ would be sort . cat . (sorted × sorted): concatenating two sorted lists, and sorting the result again. Jeremy then went on with this definition and proved that ⊚ = merge, taking advantage of the fact that both input lists are sorted.
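This choice of ⊚ can be checked against an actual merge. A Python sketch (merge written out; “sort . cat . (sorted × sorted)” rendered with the built-in sort):

```python
def merge(xs, ys):
    """Merge two sorted lists, exploiting sortedness: O(len xs + len ys)."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

xs, ys = sorted([3, 1, 4]), sorted([1, 5, 9, 2])
# sort . cat . (sorted x sorted): sort the concatenation of sorted inputs...
assert sorted(xs + ys) == merge(xs, ys)   # ...which coincides with merge
```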

### Some more Properties

Some more properties are needed to deal with scanr. The following properties allow foldrp and foldlp to proceed by shifting elements of the list to the base case:

foldrp (≪) (xs ⧺ [x], a) = foldrp (≪) (xs, x ≪ a)
foldlp (≫) (a, x : xs) = foldlp (≫) (a ≫ x, xs)    (3)

When simple approaches to refining ⊚ = f . cat . (f° × f°) do not work, the following approach sometimes does. Since f is a fold, one may attempt to turn one of the two converses f° into an unfold, thus forming an “unfold, then fold” pattern:

⊚
=      { (1) }
foldrp (≪) . (id × f) . (f° × f°)
⊇      { since f . f° ⊇ ran f }
foldrp (≪) . (f° × ran f)
=      { since f = foldl (≫) e }
foldrp (≪) . ((foldl (≫) e)° × ran f)

Symmetrically,

⊚ = foldlp (≫) . (ran f × (foldr (≪) e)°)    (4)

### Scanr “from the Middle”

Finally, consider f = scanr (⊛) e for some associative ⊛ with identity e (that is, e ⊛ x = x ⊛ e = x). A scan can be defined both as a foldr and a foldl as below:

scanr (⊛) e = foldr (≪) [e]
x ≪ (y : xs) = (x ⊛ y) : y : xs
scanr (⊛) e = foldl (≫) [e]
xs ≫ x = map (⊛ x) xs ⧺ [e]

From ⊚ = f . cat . (f° × f°), we can prove that xs ⊚ (y : ys) = map (⊛ y) xs ⧺ ys. Here is the inductive case:

xs ⊚ (x : y : ys)
=      { (4) }
foldlp (≫) (xs, (foldr (≪) [e])° (x : y : ys))
=      { definition of ≪, let z ⊛ y = x }
foldlp (≫) (xs, z : (foldr (≪) [e])° (y : ys))
=      { (3) }
foldlp (≫) (xs ≫ z, (foldr (≪) [e])° (y : ys))
=      { (4) }
(xs ≫ z) ⊚ (y : ys)
=      { induction }
map (⊛ y) (xs ≫ z) ⧺ ys
=      { definition of ≫ }
map (⊛ y) (map (⊛ z) xs ⧺ [e]) ⧺ ys
=      { map, e identity }
map (λv → (v ⊛ z) ⊛ y) xs ⧺ [y] ⧺ ys
=      { associativity of ⊛ }
map (λv → v ⊛ (z ⊛ y)) xs ⧺ [y] ⧺ ys
=      { z ⊛ y = x }
map (⊛ x) xs ⧺ (y : ys)
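The derived ⊚ can be checked numerically. Below is a Python rendering (my own) of f = scanr (⊛) e and of xs ⊚ (y : ys) = map (⊛ y) xs ⧺ ys, instantiated with ⊛ = (+) and e = 0:

```python
def scanr(op, e, xs):
    """scanr op e xs = [foldr op e t | t <- tails xs]."""
    out = [e]
    for x in reversed(xs):
        out.insert(0, op(x, out[0]))
    return out

def combine(xs, ys):
    """xs (.) (y : ys) = map (+ y) xs ++ ys, specialised to addition."""
    y, rest = ys[0], ys[1:]
    return [v + y for v in xs] + rest

plus = lambda a, b: a + b
left, right = [1, 2], [3, 4]
# computing the scan "from the middle" agrees with scanning the whole list
assert combine(scanr(plus, 0, left), scanr(plus, 0, right)) \
       == scanr(plus, 0, left + right)
```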

### Prologue

Given the complexity of the proof above, I did not think there was much hope of automatically constructing ⊚ for a reasonably useful set of list homomorphisms. My colleagues were talking about “weak inverses”, their attempts to look for a refinement of f°, which I considered too ad hoc.

Having just graduated from AoP, I was perhaps too arrogant and too proud of all the clever derivation we did to care about automatic program construction. (Like Jeremy, at the end of my thesis I quoted “善數不用籌策 (Those who are good at calculation need no mechanical aids)” from 老子 Lao Tzu. And didn’t Dijkstra say “I hardly used the computer they gave me. I was perfectly happy without it.”?) The relational method, which seemed to cover everything, gave me a false feeling that I knew the problem inside out.

Last year, however, my colleagues and I met again and I was told that they eventually published a paper on this subject:

Kazutaka Morita, Akimasa Morihata, Kiminori Matsuzaki, Zhenjiang Hu, Masato Takeichi. Automatic Inversion Generates Divide-and-Conquer Parallel Programs, PLDI 2007.

They focused on weak inverses that return only lists of a fixed length. The advantage is that calculations like the one above are no longer necessary: since the lists are of fixed length, ⊚ always takes constant time. On the other hand, scanr and scanl become special cases dealt with on a different level, a distinction enforced by the language in which f may be constructed. No, they do not seem to have handled sort. But their approach still covers a reasonably useful collection of functions.

Well, I think the moral is that we cannot just stop when it appears that there is no elegant solution that suits our taste. It sometimes pays to get our hands dirty, through which we may eventually discover the beauty within.

# S Combinator is Injective, with Proofs

By chance, I came upon a blog entry by Masahiro Sakai (酒井政裕) in which he, after reading my short comment on my previous homepage “Do you know that the `S` combinator is injective? I have a simple algebraic proof!”, tried to construct the inverse of `S` and showed that `S⁻¹ ○ S = id` in, guess what, Agda!

Wow, people were actually reading what I wrote! This made me feel that I have to be a bit more responsible and, at least, provide the proof when I claim I have one. So, here it is.

### S is injective

Recall the definition of `S`:

``````
S : (A -> B -> C) -> (A -> B) -> A -> C
S = λ x y a -> x a (y a)
``````

I am assuming a simple semantics of sets and functions, and by `S` being injective I mean that `S x = S x' ⇒ x = x'`, which can be trivially shown below:

``````
S x = S x'
≡       { η expansion, for all y : A -> B, a : A }
(∀ a, y : S x y a = S x' y a)
≡       { definition of S }
(∀ a, y : x a (y a) = x' a (y a))
⇒       { choose y = K b for some b }
(∀ a, b : x a (K b a) = x' a (K b a))
≡       { definition of K: K b a = b }
(∀ a, b : x a b = x' a b)
≡       { pointwise equality }
x = x'
``````

I would usually leave out the `∀`'s in derivations, but in this case they are explicitly written to emphasise that `b` is indeed universally quantified.

### So, what is the inverse of S?

Now that we know `S` is injective, there ought to exist some function `S⁻¹` such that `S⁻¹ ○ S = id`. Nakano san claimed that a definition would be:

``````
S⁻¹ : ((A -> B) -> A -> C) -> A -> B -> C
S⁻¹ = λ x a b -> x (K b) a
``````

That `S⁻¹ ○ S = id` is easy to see:

``````
S⁻¹ (S x) a b
=  (S x) (K b) a
=  x a (K b a)
=  x a b
``````
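The same calculation can be replayed with curried functions in Python (a sketch, not Agda):

```python
S = lambda x: lambda y: lambda a: x(a)(y(a))
K = lambda b: lambda a: b
S_inv = lambda x: lambda a: lambda b: x(K(b))(a)

# S_inv (S x) = x, pointwise: S_inv (S x) a b = S x (K b) a
#                                            = x a (K b a) = x a b
x = lambda a: lambda b: 10 * a + b
assert all(S_inv(S(x))(a)(b) == x(a)(b)
           for a in range(5) for b in range(5))
```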

From the other direction, we have only `S ○ S⁻¹ ⊆ id`, since `S` is not a surjective function. What does the range of `S` look like? Inspect the definition of `S`: since `y` is applied only once to `a`, the value of `y` on other inputs does not matter. That is, the range of `S` consists of only those functions `e` such that:

``````
e y a = e y' a     for all y, y' such that y a = y' a      ......(1)
``````

We will show that `S (S⁻¹ e) = e` for all `e` satisfying (1):

``````
S (S⁻¹ e) y a
=      { definition of S }
S⁻¹ e a (y a)
=      { definition of S⁻¹ }
e (K (y a)) a
=      { by (1), since K (y a) a = y a }
e y a
``````

### Inverting higher-order functions?

Some endnotes. Once Nakano and I thought we had discovered how to invert higher-order functions (and even derive the necessary side conditions, like the one on `e` above) by drawing λ expressions as trees and manipulating them. It turned out that I had overlooked something obvious at the very beginning, and the result was a disaster.

Does anyone know of some related work on inverting higher order functions? The only one I know is Samson Abramsky’s paper A Structural Approach to Reversible Computation, but I am not sure whether it is about reversibility/invertibility in the same sense.

# Countdown: a case study in origami programming

R. S. Bird and S-C. Mu. In Journal of Functional Programming Vol. 15(5), pp. 679-702, 2005.
[GZipped Postscript]

Countdown is the name of a game in which one is given a list of source numbers and a target number, with the aim of building an arithmetic expression out of the source numbers to get as close to the target
as possible. Starting with a relational specification we derive a number of functional programs for solving Countdown. These programs are obtained by exploiting the properties of the folds and unfolds of various data types, a style of programming Gibbons has aptly called origami programming. Countdown is attractive as a case study in origami programming both as an illustration of how different algorithms can emerge from a single specification, as well as the space and time trade-offs that have to be taken into account in comparing functional programs.

# Inverting the Burrows-Wheeler transform

R. S. Bird and S-C. Mu. In Journal of Functional Programming Vol. 14(6) Special Issue on Functional Pearls, pp. 603-612, November 2004.
[PDF][GZipped Postscript]

The Burrows-Wheeler Transform is a string-to-string transform which, when used as a preprocessing phase in compression, significantly enhances the compression rate. However, it often puzzles people how the inverse transform can be carried out. In this pearl we exploit simple equational reasoning to derive the inverse of the Burrows-Wheeler transform from its specification. We also outline how to derive the inverses of two more general versions of the transform, one proposed by Schindler and the other by Chapin and Tate.
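For readers puzzled by the inversion, the naive quadratic construction already conveys the idea: rebuild the sorted rotation table one column at a time. A Python sketch (my own illustration with an explicit sentinel, not the derivation in the paper):

```python
def bwt(s):
    """Last column of the sorted rotations of s plus a sentinel."""
    s += '\0'                                   # unique smallest end marker
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(r[-1] for r in rots)

def unbwt(t):
    """Invert by rebuilding the sorted rotation table column by column:
    prepending the transform to the sorted partial rows restores, after
    len(t) rounds, the full set of rotations."""
    table = [''] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    return next(row for row in table if row.endswith('\0'))[:-1]

assert unbwt(bwt('banana')) == 'banana'
```

The paper derives a far more efficient inverse; the point here is only that inversion is possible at all.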

# An algebraic approach to bidirectional updating

S-C. Mu, Z. Hu and M. Takeichi. In The Second Asian Symposium on Programming Language and Systems, pp. 2-18. November 2004.
[PDF]

On many occasions one encounters the task of maintaining the consistency of two pieces of structured data related by some transform: synchronising bookmarks in different web browsers, the source and the view in an editor, or views in databases, to name a few. This paper proposes a formal model of such tasks, based on a programming language allowing injective functions only, inspired by previous work on program inversion. The programmer designs the transformation as if she were writing a functional program, while the synchronisation behaviour is automatically derived by algebraic reasoning. The main advantage is the ability to deal with duplication and structural changes. The result will be integrated into our structured XML editor in the Programmable Structured Document project.

# A programmable editor for developing structured documents based on bidirectional transformations

Z. Hu, S-C. Mu and M. Takeichi. In Partial Evaluation and Semantics-Based Program Manipulation, pp. 178-189. August 2004.
[PDF]

This paper presents a novel editor supporting interactive refinement in the development of structured documents. The user performs a sequence of editing operations on the document view, and our system automatically derives an efficient and reliable document source and a transformation that produces the document view. The editor is unique in its programmability, in the sense that transformation can be obtained through editing operations. The
important techniques behind this editor are the utilization of the view-updating idea developed in the database community, and a bidirectional transformation language that concisely describes the relationship between the document source and its view, as well as data dependency in the view.

# An injective language for reversible computation

S-C. Mu, Z. Hu and M. Takeichi. In Mathematics of Program Construction 2004, LNCS 3125, pp. 289-313, July 2004.
[PDF]

Erasure of information incurs an increase in entropy and dissipates heat. Therefore, information-preserving computation is essential for constructing computers that use energy more effectively. A more recent motivation to understand reversible transformations also comes from the design of editors where editing actions on a view need to be reflected back to the source data.

In this paper we present a point-free functional language, with a relational semantics, in which the programmer is allowed to define injective functions only. Non-injective functions can be transformed into a program returning a history. The language is presented with many examples, and its relationship with Bennett’s reversible Turing machine is explained.

The language serves as a good model for program construction and reasoning for reversible computers, and hopefully for modelling bi-directional updating in an editor.