Category Archives: Research Blog

Agda Exercise: Proving that Mergesort Returns Ordered Lists

Continuing with my Agda programming practice. This time I am implementing, in Agda, the example in another section of Why Dependent Types Matter by Altenkirch, McBride, and McKinna: proving that mergesort returns an ordered list.


> module Main where

> open import Prelude
> open import Data.Nat hiding (_≤_) 
> open import Data.List

Order

To begin with, a term of type m ≤ n is a proof that a natural number m is at most n:


> data _≤_ : Nat -> Nat -> Set where
>  le0 : {n : Nat} -> zero ≤ n
>  le1 : {m n : Nat} -> m ≤ n -> (1 + m) ≤ (1 + n)

An auxiliary function weaken, which I will need later, weakens a proof of (1+m) ≤ n to a proof of m ≤ n:


> weaken : {y x : Nat} -> (1 + y) ≤ x -> y ≤ x
> weaken (le1 le0) = le0
> weaken (le1 (le1 p)) = le1 (weaken (le1 p))

Given two natural numbers m and n, the function order, to be defined later, determines their order by returning a proof. The proof has to be wrapped in a datatype Order:


> data Order : Nat -> Nat -> Set where
>   leq : {m n : Nat} -> m ≤ n -> Order m n
>   gt : {m n : Nat} -> (1 + n) ≤ m -> Order m n

Agda does not provide a generic case expression, so I have to define my own for Order:


> caseord : {m n : Nat} {a : Set} -> Order m n  
>             -> ((m ≤ n) -> a) -> (((1 + n) ≤ m) -> a) -> a
> caseord (leq p) f g = f p
> caseord (gt p) f g = g p

The function order can thus be defined by:


> order : (m : Nat) -> (n : Nat) -> Order m n
> order zero n = leq le0
> order (suc m) zero = gt (le1 le0)
> order (suc m) (suc n) = 
>   caseord (order m n) (leq ∘ le1) (gt ∘ le1)  
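Outside Agda, the proofs-as-data idea can still be made concrete. The following Python sketch (my own illustration, not part of the Agda development) models a proof of m ≤ n as a nested tuple built from le0 and le1, and mirrors weaken and order:

```python
# A proof of m <= n is either ("le0",) (zero <= n for any n)
# or ("le1", p), where p proves m-1 <= n-1, mirroring the Agda constructors.

def check(proof, m, n):
    """Return True iff `proof` is a valid derivation of m <= n."""
    if proof[0] == "le0":
        return m == 0
    _, p = proof
    return m > 0 and n > 0 and check(p, m - 1, n - 1)

def weaken(proof):
    """Turn a proof of (1+m) <= n into a proof of m <= n."""
    _, p = proof                     # a proof of 1+m <= n must be ("le1", p)
    if p[0] == "le0":
        return ("le0",)
    return ("le1", weaken(p))

def order(m, n):
    """Return ("leq", proof of m <= n) or ("gt", proof of (1+n) <= m)."""
    if m == 0:
        return ("leq", ("le0",))
    if n == 0:
        return ("gt", ("le1", ("le0",)))
    tag, p = order(m - 1, n - 1)     # caseord (order m n) (leq . le1) (gt . le1)
    return (tag, ("le1", p))
```

Here check does by hand what Agda's type checker does for free: it rejects tuples that are not genuine derivations.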

Ordered List

There are many ways one can express the property that a list is sorted. The choice depends on what one needs for the proof. To prove that merge produces a sorted list, Altenkirch, McBride, and McKinna suggest the following “external bound” representation of sortedness:


> data OList : Nat -> Set where
>   Nil : {b : Nat} -> OList b
>   Cons : {b : Nat} -> (x : Nat) -> (b ≤ x) -> OList x -> OList b

The datatype OList b represents a sorted list of natural numbers whose elements are all greater than or equal to the bound b. The bound, however, is supplied externally and need not be precise. Every sorted list of natural numbers has type OList 0, for instance. The Cons constructor demands that the newly added element x is bounded by the externally supplied bound b, and that the tail can be bounded by x.
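As a sanity check of the idea, here is a small Python rendering (my own, not from the paper) of the same externally bounded notion of sortedness:

```python
def ordered(xs, bound=0):
    """True iff xs is sorted and every element is >= bound.

    Mirrors OList: Nil is ordered under any bound; Cons x demands
    bound <= x and that the tail is ordered under the new bound x."""
    for x in xs:
        if x < bound:
            return False
        bound = x        # the tail is bounded by the element just seen
    return True
```

With bound=0, this is the statement “every sorted list of naturals has type OList 0”.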

Merging Sorted Lists

With OList, merge can be verified very easily:


> merge : {b : Nat} -> OList b -> OList b -> OList b
> merge Nil ys = ys
> merge xs Nil = xs
> merge (Cons x blex xs) (Cons y bley ys) =
>   caseord (order x y)
>     (\ xley -> Cons x blex (merge {x} xs (Cons y xley ys)))
>     (\ pylex -> Cons y bley (merge {y} (Cons x (weaken pylex) xs) ys))

Note, for example, that Cons y bley ys : OList b is updated to Cons y xley ys : OList x simply by using a different proof.

Mergesort

The rest is routine code: deal the elements into a tree, then perform successive merges by a fold over the tree. Mergesort is defined by mergeT ∘ dealT. The code below is similar to that in my previous blog entry, but without the size constraint.


> data Parity : Set where
>   Even : Parity
>   Odd : Parity

> data Tree (a : Set) : Set where
>   Nul : Tree a 
>   Tip : a -> Tree a
>   Bin : Parity -> Tree a -> Tree a -> Tree a

> foldT : {a b : Set}
>           -> (Parity -> b -> b -> b) 
>           -> (a -> b)
>           -> b  
>           -> Tree a -> b
> foldT f g e Nul = e
> foldT f g e (Tip x) = g x
> foldT f g e (Bin p t u) = f p (foldT f g e t) (foldT f g e u)

> insert : {a : Set} -> a -> Tree a -> Tree a
> insert x Nul = Tip x
> insert x (Tip y) = Bin Even (Tip x) (Tip y)
> insert x (Bin Even t u) =
>    Bin Odd (insert x t) u
> insert x (Bin Odd t u) = 
>    Bin Even t (insert x u)

> dealT : List Nat -> Tree Nat
> dealT = foldr insert Nul

> mergeT : Tree Nat -> OList 0
> mergeT = foldT (\p xs ys -> merge xs ys)
>                (\x -> Cons x le0 Nil)
>                Nil
               
> msort : List Nat -> OList 0
> msort = mergeT ∘ dealT
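For comparison, the whole untyped pipeline can be mimicked in Python (a sketch of mine with the same structure: deal into a balanced tree, then fold the tree with merge):

```python
def insert(x, t):
    """Insert into a tree ('nul' | ('tip', x) | ('bin', parity, l, r));
    the parity bit says which subtree receives the next element."""
    if t == "nul":
        return ("tip", x)
    if t[0] == "tip":
        return ("bin", "even", ("tip", x), t)
    _, p, l, r = t
    if p == "even":
        return ("bin", "odd", insert(x, l), r)
    return ("bin", "even", l, insert(x, r))

def merge(xs, ys):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

def merge_tree(t):
    """Fold the tree with merge -- the counterpart of mergeT = foldT merge."""
    if t == "nul":
        return []
    if t[0] == "tip":
        return [t[1]]
    return merge(merge_tree(t[2]), merge_tree(t[3]))

def msort(xs):
    t = "nul"
    for x in xs:                     # dealT = foldr insert Nul
        t = insert(x, t)
    return merge_tree(t)             # msort = mergeT . dealT
```

Of course, here nothing guarantees that merge_tree returns a sorted list; that guarantee is exactly what the OList index buys in the Agda version.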

Agda Exercise: Sized Mergesort

Continuing with my Agda programming practice. Part of the mergesort example became easy once Max showed me how I should perform the pattern matching in insert.


> open import Prelude
> open import Logic.Identity
> open import Data.Bool
> open import Data.Nat
> open import Data.Nat.Properties

> module Main where

Length-Indexed Lists

Firstly, define length-indexed lists and their fold function.


> data List (a : Set) : Nat -> Set where
>   [] : List a zero
>   _::_ : {n : Nat} -> a -> List a n -> List a (suc n)

> foldr : {a : Set} -> {n : Nat} -> (b : {m : Nat} -> List a m -> Set)
>           -> ({m : Nat} -> {xs : List a m} 
>                   -> (x : a) -> b xs -> b (x :: xs)) 
>           -> b [] 
>           -> (xs : List a n) -> b xs
> foldr b f e [] = e
> foldr b f e (x :: xs) = f x (foldr b f e xs)

Apart from the extra parameter b, the body of foldr is not too different from the ordinary fold on lists. For genericity, however, its type is more complicated. I wonder whether it is still appropriate to call it a “fold”, now that xs appears as a parameter. Perhaps that is why people talk about “eliminators” rather than folds or catamorphisms.

Size-Indexed Balanced Trees

This is one of the possible ways to define size-indexed balanced trees. The parity bit keeps track of whether the number of elements is even or odd.


> data Parity : Set where
>   Even : Parity
>   Odd : Parity

> parity : Parity -> Nat
> parity Even = 0
> parity Odd = 1

The Bin constructor enforces the constraint that if the size is even, the two subtrees must have the same size. Otherwise the left subtree contains exactly one more element than the right.


> data Tree (a : Set) : Nat -> Set where
>   Nul : Tree a 0
>   Tip : a -> Tree a 1
>   Bin : {n : Nat} -> (p : Parity) ->
>           Tree a (parity p + n) -> Tree a n -> Tree a (parity p + (n + n))

Fold on trees is defined in a way similar to fold on lists.


> foldT : {a : Set} -> {n : Nat} -> (b : {m : Nat} -> Tree a m -> Set) 
>           -> ({m : Nat} -> (p : Parity)
>                 -> {t : Tree a (parity p + m)} -> {u : Tree a m}
>                 -> b t -> b u -> b (Bin p t u)) 
>           -> ((x : a) -> b (Tip x))
>           -> b Nul 
>           -> (xs : Tree a n) -> b xs
> foldT b f g e Nul = e
> foldT b f g e (Tip x) = g x
> foldT b f g e (Bin p t u) = f p (foldT b f g e t) (foldT b f g e u)

Mergesort

The function insert adds one element to a balanced tree. I did not know how to make the case for Bin Odd t u type-check until Max pointed out that I could simply pattern match on the implicit argument {n} of Bin.


> insert : {a : Set} -> {n : Nat} ->
>            a -> Tree a n -> Tree a (suc n)
> insert x Nul = Tip x
> insert x (Tip y) = Bin Even (Tip x) (Tip y)
> insert x (Bin Even t u) =
>    Bin Odd (insert x t) u
> insert {a} x (Bin {n} Odd t u) = 
>    subst (\ i -> Tree a (suc i)) (sym (+suc n n)) 
>      (Bin Even t (insert x u))

The function dealT, dealing elements of the input list to a balanced tree, can thus be defined as a fold:


> dealT : {a : Set} -> {n : Nat} -> List a n -> Tree a n 
> dealT {a} = foldr (\{m} xs -> Tree a m) insert Nul

The function merge has been discussed before:


> merge : {m n : Nat} ->
>           List Nat m -> List Nat n -> List Nat (m + n)
> merge [] ys = ys
> merge {suc m} {zero} (x :: xs) [] = 
>   subst (List Nat) (+zero (suc m)) (x :: xs)
> merge {suc m} {suc n} (x :: xs) (y :: ys) =
>    if x < y then
>        x :: merge xs (y :: ys)
>      else (y :: 
>            subst (List Nat) (+suc m n) (merge (x :: xs) ys))

The merging phase of mergesort can be seen as a fold on a binary tree:

    
> mergeT : {n : Nat} -> Tree Nat n -> List Nat n 
> mergeT  = foldT (\ {m} t -> List Nat m)
>                 (\ {m} p xs ys -> 
>                   subst (List Nat) (+assoc (parity p) m m) (merge xs ys))
>                 (\ x -> x :: [])
>                 []

I needed +assoc to type-cast (parity p + m) + m into parity p + (m + m).

Finally, here comes mergesort.


> msort : {n : Nat} -> List Nat n -> List Nat n 
> msort = mergeT  ∘ dealT

Of course, when it comes to sorting, one would expect a proof that it actually sorts the list, rather than merely that it preserves the length. That will perhaps be left for the next exercise.

My First Agda Program: Append, Reverse, and Merge

I have been looking for a programming language in which to practise dependent types. During the WG 2.1 meeting, my attention was brought to Agda (in fact, Agda 2). The word Agda may refer to both the proof assistant and its input language. It has been under development for a long time, is relatively mature, and has good library support. It supports full dependent types, and, pardon my personal preference, it is Haskell-like. So far it seems like a perfect choice.

So, here is my first Agda program. I am merely repeating some simple exercises on lists, which I tried using Haskell in a previous entry Developing Programs and Proofs Spontaneously using GADT.


> open import Logic.Identity
> open import Data.Bool
> open import Data.Nat
> open import Data.Nat.Properties
> module Main where

List Concatenation

Agda has natural numbers defined in Data.Nat. With Nat, one can define lists indexed by their lengths as:


> data List (a : Set) : Nat -> Set where
>  [] : List a zero
>  _::_ : {n : Nat} -> a -> List a n -> List a (suc n)

The definition of append naturally follows:


> append : {a : Set} -> {m n : Nat} ->
>             List a m -> List a n -> List a (m + n)
> append [] ys = ys
> append (x :: xs) ys = x :: append xs ys

List Reversal

Unlike in Haskell, identifiers in Agda have to be defined earlier in the file before they can be referred to. In the usual definition of linear-time list reversal using an accumulating parameter, the recursive call yields a list of length m + suc n. I somehow have to provide a proof that it equals suc m + n.

How does one provide such a proof? In Omega, one would list all the needed lemmas in a where clause and the compiler would mysteriously figure out how to use them. (Well, I am sure it is well documented. I just have not understood its algorithm.) I would like more control over what is happening, and therefore prefer to present the proof as a type cast from f (m + suc n) to f (suc m + n) for an arbitrary type functor f. This is what one would do in Epigram and Agda.


  psuc : {m : Nat} -> {n : Nat} -> {f : Nat -> Set} ->
              f (m + suc n) -> f (suc m + n)
  psuc {zero} {n} {f} xs = xs
  psuc {suc m} {n} {f} xs = psuc {m} {n} {f'} xs
     where f' : Nat -> Set
           f' n = f (suc n)

At this point I did not yet know that I could use the lemmas defined in Data.Nat.Properties instead of rolling my own. The function revcat can then be defined as below:


  revcat : {a : Set} -> {m n : Nat} ->
              List a m -> List a n -> List a (m + n)
  revcat {a} {zero} {n} [] ys = ys
  revcat {a} {suc m} {n} (x :: xs) ys =
     psuc {m} {n} {List a} (revcat xs (x :: ys))

One thing I have yet to figure out is how much of the type information can be inferred. The first clause can be simplified to revcat [] ys = ys. If I omit the type information in the right-hand side of the second clause:


  revcat {a} {suc m} {n} (x :: xs) ys =
     psuc (revcat xs (x :: ys))

I would expect Agda to tell me that it cannot infer the types. However, the program still appears to typecheck. When I try to execute revcat (1 :: (3 :: [])) (2 :: (4 :: [])) in the interpreter, I get _86 1 1 (3 :: []) (2 :: (4 :: [])), which seems to be an unevaluated expression. Does it not violate the slogan that “well-typed programs don’t go wrong”?

Using the predefined lemma in Data.Nat.Properties:


  +suc : (n m : Nat) -> n + suc m ≡ suc (n + m)
  +suc  zero   m = refl
  +suc (suc n) m = cong suc (+suc n m)

The type cast in revcat can be performed using subst:


>  revcat : {a : Set} -> {m n : Nat} ->
>              List a m -> List a n -> List a (m + n)
>  revcat {a} {zero} {n} [] ys = ys
>  revcat {a} {suc m} {n} (x :: xs) ys =
>     subst (List a) (sym (+suc m n)) (revcat xs (x :: ys))
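Stripped of the proofs, the same accumulating-parameter reversal looks like this in Python (my own illustration); the length invariant that subst and +suc establish in Agda is here only a comment:

```python
def revcat(xs, ys):
    """Reverse xs onto the accumulator ys.

    Invariant: len(revcat(xs, ys)) == len(xs) + len(ys).
    The recursive call returns a list of length (m-1) + (n+1),
    which is what the +suc lemma rewrites to m + n in Agda."""
    if not xs:
        return ys
    return revcat(xs[1:], [xs[0]] + ys)
```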

Merging

Once I got revcat working, merge was not too difficult.


> merge : {m n : Nat} ->
>           List Nat m -> List Nat n -> List Nat (m + n)
> merge [] ys = ys
> merge {suc m} {zero} (x :: xs) [] = 
>   subst (List Nat) (+zero (suc m)) (x :: xs)
> merge {suc m} {suc n} (x :: xs) (y :: ys) =
>    if x < y then
>        x :: merge xs (y :: ys)
>      else (y :: 
>            subst (List Nat) (+suc m n) (merge (x :: xs) ys))

Curiously, the parentheses around the else branch cannot be omitted. Without them the expression would be parsed as (if ... then ... else y) :: subst ...; that is, _::_, whose precedence is not declared, is treated as binding more loosely than if_then_else_.

Balanced Trees

I tried to go on to define balanced trees, like in Altenkirch, McBride, and McKinna’s introduction to Epigram:


data Parity : Set where
  Even : Parity
  Odd : Parity

parity : Parity -> Nat
parity Even = 0
parity Odd = 1

data Tree (a : Set) : Nat -> Set where
  Nul : Tree a zero
  Tip : a -> Tree a (suc zero)
  Bin : {m n : Nat} -> (p : Parity) ->
          Tree a (parity p + n) -> Tree a n -> Tree a (parity p + (n + n))

The Bin constructor needs a parity bit. If the parity bit is Even, the two subtrees have the same size. Otherwise they differ by one.

However, I have not figured out how to pattern match on the size parameter: when defining a function f : {a : Set} -> {n : Nat} -> Tree a n -> ..., Agda complained when I tried the pattern f {a} {n} (Bin Odd t u) = ..., because parity p + (n' + n') != n. Perhaps I should start reading some more documents.

An Unsuccessful Attempt to Compute the Intersection of Regular Expressions

Several months ago, Tyng-Ruey, Chin-Lung, and I were working on some ideas for which we needed to compute the intersection of two regular expressions (REs). The traditional approach is to convert them to finite state automata, take the intersection, and convert the result back to an RE. The disadvantage is that the structure of the original REs would be lost. We therefore tried to perform the intersection directly on the REs. The result was not satisfactory, however.

Regular expressions can be represented by the following algebraic datatype:

> data RE = Z            -- Empty
>         | E            -- Epsilon
>         | L Token      -- Literal
>         | RE :+ RE     -- Plus
>         | RE :. RE     -- Times 
>         | S RE         -- Star
>            deriving Show

> type Token = Char
> la = L 'a'
> lb = L 'b'
> lc = L 'c'

At one point we will need to determine whether an expression is nullable. The following function nullable actually computes the intersection of the input with epsilon:

> nullable :: RE -> RE
> nullable Z = Z
> nullable E = E
> nullable (L _) = Z
> nullable (r :+ s) = nullable r .+ nullable s
> nullable (r :. s) = nullable r .: nullable s
> nullable (S r) = E

I defined some “smart” constructors doing some minimal simplification for plus and times:

> Z .+ r = r
> r .+ Z = r
> r .+ s = r :+ s

> Z .: r = Z
> r .: Z = Z
> E .: r = r
> r .: E = r
> r .: s = r :. s

Now, here comes Brzozowski’s derivative of regular expressions. Intuitively speaking, d a e yields the regular expression that remains after extracting a leading token a from e. Brzozowski actually defined derivatives with respect to strings rather than single tokens.

> d :: Token -> RE -> RE
> d _ Z = Z
> d _ E = Z
> d a (L b) | a == b = E
>           | otherwise = Z
> d a (r :+ s) = d a r .+ d a s
> d a (r :. s) = (d a r .: s) .+ (nullable r .: d a s)
> d a (S r) = d a r .: S r

The interesting case is probably r :. s. To extract a from r :. s, we either extract a from r or, if r is nullable, extract a from s.
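Derivatives also give a direct matching algorithm: a string belongs to the language of e exactly when the derivative of e by the whole string is nullable. Here is a Python transliteration of mine of the definitions above (with one simplification: nullable returns a Boolean here, rather than an RE):

```python
# REs as tuples: ("z",) empty, ("e",) epsilon, ("l", c) literal,
# ("+", r, s) union, (".", r, s) concatenation, ("*", r) star.

def nullable(e):
    """Does the language of e contain the empty string?"""
    tag = e[0]
    if tag in ("z", "l"):
        return False
    if tag in ("e", "*"):
        return True
    if tag == "+":
        return nullable(e[1]) or nullable(e[2])
    return nullable(e[1]) and nullable(e[2])        # "."

def d(a, e):
    """Brzozowski derivative of e by the token a."""
    tag = e[0]
    if tag in ("z", "e"):
        return ("z",)
    if tag == "l":
        return ("e",) if e[1] == a else ("z",)
    if tag == "+":
        return ("+", d(a, e[1]), d(a, e[2]))
    if tag == ".":
        left = (".", d(a, e[1]), e[2])
        return ("+", left, d(a, e[2])) if nullable(e[1]) else left
    return (".", d(a, e[1]), e)                     # "*"

def matches(e, s):
    """s is in the language of e iff d s e is nullable."""
    for c in s:
        e = d(c, e)
    return nullable(e)
```

Unlike cap below, this use of derivatives always terminates, since each step consumes one input token.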

Now let us consider how to define intersection. The intersection of any expression with the empty set Z yields Z. Intersection with epsilon has been defined as nullable above.

> Z `cap` _ = Z
> _ `cap` Z = Z
> E `cap` E = E
> E `cap` r = nullable r
> r `cap` E = nullable r

The case for literals is defined below. Again the interesting case is the one regarding r :. s:

> L a `cap` L b | a == b = L a
>               | otherwise = Z
> L a `cap` (r :+ s) = (L a `cap` r) .+ (L a `cap` s)
> L a `cap` (r :. s) = (L a `cap` r .: nullable s) .+ 
>                      (nullable r .: L a `cap` s) 
> L a `cap` S r = L a `cap` r

The case for plus is simple:

> (r :+ s) `cap` t = (r `cap` t) .+ (s `cap` t)

The case for :. is the most difficult. We have to do case analysis on the left side of :.:

> (Z :. s) `cap` t = Z
> (E :. s) `cap` t = s `cap` t
> (L a :. s) `cap` t = L a .: (s `cap` d a t)
> ((r :+ s) :. t) `cap` u = ((r .: t) `cap` u) .+ ((s .: t) `cap` u)
> ((r :. s) :. t) `cap` u = (r .: (s .: t)) `cap` u
> (S r :. s) `cap` t = (s `cap` t) .+ ((r .: (S r .: s)) `cap` t)

The case (Z :. s) `cap` t should be Z because Z :. s is Z, while (E :. s) `cap` t is s `cap` t because E :. s simplifies to s. The intersection of (L a :. s) and t is L a followed by the intersection of s and d a t. If it is not possible to extract a from t, the expression collapses to Z.

The case for :+ is given by distributivity, the case for (r :. s) :. t by associativity. The case for (S r :. s) `cap` t actually comes from expanding S r :. s to s :+ (r :. S r :. s). This may have contributed to the non-termination we will see later.

Finally, the case for (S r) `cap` t results from expanding S r to E :+ (r :. S r):

> (S r) `cap` t = E `cap` t .+ ((r .: S r) `cap` t)

Unfortunately, cap does not always terminate when star is involved. In some cases it works well:

*Main> S la `cap` lb
Z
*Main> S la `cap` (la :+ lb)
L 'a'
*Main> S la `cap` S lb
E

In some cases it loops. For example, consider S lb `cap` S lb. We get

  b* cap b* 
 = e cap b* + bb* cap b*
 = e + b (b* cap d b b*)
 = e + b (b* cap b*)
 = e + b ({-- loop --})

We could probably try to capture the recursive pattern in, perhaps, a fold. But we were then distracted by other topics and did not explore further.

Is it possible at all?

Developing Programs and Proofs Spontaneously using GADT

This entry was also posted to the Haskell-Cafe mailing list.

I am curious about the possibility of developing Haskell programs spontaneously with proofs about their properties and have the type checker verify the proofs for us, in a way one would do in a dependently typed language.

In the exercise below, I tried to redo part of the merge-sort example in Altenkirch, McBride, and McKinna’s introduction to Epigram: deal the input list into a binary tree, and fold the tree by the function merging two sorted lists into one. The property I am going to show is merely that the length of the input list is preserved.

Given that dependent types and GADTs are such popular topics, I believe the same must have been done before, and there may be better ways to do it. If so, please give me some comments or references. Any comments are welcome.

> {-# OPTIONS_GHC -fglasgow-exts #-}

To begin with, we define the usual type-level representation of natural numbers and lists indexed by their lengths.

> data Z = Z       deriving Show
> data S a = S a   deriving Show

> data List a n where
>   Nil :: List a Z
>   Cons :: a -> List a n -> List a (S n)

Append

To warm up, let us see the familiar “append” example. Unfortunately, unlike Omega, Haskell does not provide type functions. I am not sure which is the best way to emulate type functions. One possibility is to introduce the following GADT:

> data Plus m n k where    --- m + n = k
>   PlusZ :: Plus Z n n
>   PlusS :: Plus m n k -> Plus (S m) n (S k)

such that Plus m n k represents a proof that m + n = k.

Not having type functions, one of the possible ways to do append is to have the function, given two lists of lengths m and n, return a list of length k and a proof that m + n = k. Thus, the type of append would be:

   append :: List a m -> List a n
               -> exists k. (List a k, Plus m n k)

In Haskell, the existential quantifier is mimicked by forall. We define:

> data DepSum a p = forall i . DepSum (a i) (p i)

The term “dependent sum” is borrowed from the Omega tutorial of Sheard, Hook, and Linger. Derek Elkins and Dan Licata explained to me why they are called dependent sums, rather than products.

The function append can thus be defined as:

> append :: List a m -> List a n -> DepSum (List a) (Plus m n)
> append Nil ys = DepSum ys PlusZ
> append (Cons x xs) ys =
>    case (append xs ys) of
>      DepSum zs p -> DepSum (Cons x zs) (PlusS p)

Another possibility is to provide append with a proof that m + n = k. The type and definition of append would be:

< append :: Plus m n k -> List a m -> List a n -> List a k
< append PlusZ Nil ys = ys
< append (PlusS pf) (Cons x xs) ys = Cons x (append pf xs ys)

I thought the second append would be more difficult to use: to append two lists, I have to provide a proof about their lengths! It turns out that this append actually composes more easily with other parts of the program. We will come to this later.

Some Lemmas

Here are some lemmas represented as functions on terms. The function incAssocL, for example, converts a proof of m + (1+n) = k to a proof of (1+m) + n = k.

> incAssocL :: Plus m (S n) k -> Plus (S m) n k
> incAssocL PlusZ = PlusS PlusZ
> incAssocL (PlusS p) = PlusS (incAssocL p)

> incAssocR :: Plus (S m) n k -> Plus m (S n) k
> incAssocR (PlusS p) = plusMono p

> plusMono :: Plus m n k -> Plus m (S n) (S k)
> plusMono PlusZ = PlusZ
> plusMono (PlusS p) = PlusS (plusMono p)

For example, the following function revcat performs list reversal by an accumulating parameter. The invariant we maintain is m + n = k. To prove that the invariant holds, we have to use incAssocL.

> revcat :: List a m -> List a n -> DepSum (List a) (Plus m n)
> revcat Nil ys = DepSum ys PlusZ
> revcat (Cons x xs) ys =
>     case revcat xs (Cons x ys) of
>        DepSum zs p -> DepSum zs (incAssocL p)

Merge

Apart from the proof manipulations, the function merge is not very different from what one would expect:

> merge :: Ord a => List a m -> List a n -> DepSum (List a) (Plus m n)
> merge Nil ys = DepSum ys PlusZ
> merge (Cons x xs) Nil = append (Cons x xs) Nil
> merge (Cons x xs) (Cons y ys)
>   | x <= y = case merge xs (Cons y ys) of
>                  DepSum zs p -> DepSum (Cons x zs) (PlusS p)
>   | otherwise = case merge (Cons x xs) ys of
>                   DepSum zs p -> DepSum (Cons y zs) (plusMono p)

The lemma plusMono is used to convert a proof of m + n = k to a proof of m + (1+n) = 1+k.

Sized Trees

We also index binary trees by their sizes:

> data Tree a n where
>   Nul :: Tree a Z
>   Tip :: a -> Tree a (S Z)
>   Bin :: Tree a n1 -> Tree a n ->
>             (Plus p n n1, Plus n1 n k) -> Tree a k

The two trees given to the constructor Bin have sizes n1 and n respectively. The resulting tree, of size k, comes with a proof that n1 + n = k. Furthermore, we want to maintain an invariant that n1 either equals n, or is bigger than n by one. This is represented by the proof Plus p n n1. In the definition of insertT later, p is either PlusZ or PlusS PlusZ.

Lists to Trees

The function insertT inserts an element into a tree:

> insertT :: a -> Tree a n -> Tree a (S n)
> insertT x Nul = Tip x
> insertT x (Tip y) = Bin (Tip x) (Tip y) (PlusZ, PlusS PlusZ)
> insertT x (Bin t u (PlusZ, p)) =
>      Bin (insertT x t) u (PlusS PlusZ, PlusS p)
> insertT x (Bin t u (PlusS PlusZ, p)) =
>      Bin t (insertT x u) (PlusZ, PlusS (incAssocR p))

Note that whenever we construct a tree using Bin, the first proof, corresponding to the difference in size of the two subtrees, is either PlusZ or PlusS PlusZ.

The counterpart of foldr on indexed lists is defined by:

> foldrd :: (forall k . (a -> b k -> b (S k))) -> b Z
>               -> List a n -> b n
> foldrd f e Nil = e
> foldrd f e (Cons x xs) = f x (foldrd f e xs)

The result is also an indexed type (b n).

The function deal :: List a n -> Tree a n, building a tree out of a list, can be defined as a fold:

> deal :: List a n -> Tree a n
> deal = foldrd insertT Nul

Trees to Lists, and Merge Sort

The next step is to fold through the tree by the function merge. The first two clauses are simple:

> mergeT :: Ord a => Tree a n -> List a n
> mergeT Nul = Nil
> mergeT (Tip x) = Cons x Nil

For the third clause, one would wish that we could write something as simple as:

   mergeT (Bin t u (_,p1)) =
      case merge (mergeT t) (mergeT u) of
        DepSum xs p -> xs

However, this does not type check. Assume that t has size n1, and u has size n. The DepSum returned by merge consists of a list of size i and a proof p of type Plus n1 n i, for some i. The proof p1, on the other hand, is of type Plus n1 n k for some k. Haskell does not know that Plus n1 n is actually a function and cannot conclude that i = k.

To explicitly state the equality, we assume that there is a function plusFn which, given a proof of m + n = i and a proof of m + n = k, yields a function converting an i in any context to a k. That is:

   plusFn :: Plus m n i -> Plus m n k
               -> (forall f . f i -> f k)

The last clause of mergeT can be written as:

   mergeT (Bin t u (_,p1)) =
      case merge (mergeT t) (mergeT u) of
        DepSum xs p -> plusFn p p1 xs

How do I define plusFn? On Haskell-Cafe, Jim Apple suggested this implementation (by the way, he maintains a very interesting blog on typed programming):

  plusFn :: Plus m n h -> Plus m n k -> f h -> f k
  plusFn PlusZ PlusZ x = x
  plusFn (PlusS p1) (PlusS p2) x =
      case plusFn p1 p2 Equal of
        Equal -> x

  data Equal a b where
      Equal :: Equal a a

Another implementation, which looks closer to the previous work on type equality [3, 4, 5], was suggested by apfelmus:

> newtype Equal a b = Proof { coerce :: forall f . f a -> f b }

> newtype Succ f a  = InSucc { outSucc :: f (S a) }

> equalZ :: Equal Z Z
> equalZ = Proof id

> equalS :: Equal m n -> Equal (S m) (S n)
> equalS (Proof eq) = Proof (outSucc . eq . InSucc)

> plusFnEq :: Plus m n i -> Plus m n k -> Equal i k
> plusFnEq PlusZ     PlusZ     = Proof id
> plusFnEq (PlusS x) (PlusS y) = equalS (plusFnEq x y)

> plusFn :: Plus m n i -> Plus m n k -> f i -> f k
> plusFn p1 p2 = coerce (plusFnEq p1 p2)

Now that we have both deal and mergeT, merge sort is simply their composition:

> msort :: Ord a => List a n -> List a n
> msort = mergeT . deal

The function mergeT can be defined using a fold on trees as well. Such a fold might look like this:

> foldTd :: (forall m n k . Plus m n k -> b m -> b n -> b k)
>           -> (a -> b (S Z)) -> b Z
>           -> Tree a n -> b n
> foldTd f g e Nul = e
> foldTd f g e (Tip x) = g x
> foldTd f g e (Bin t u (_,p)) =
>    f p (foldTd f g e t) (foldTd f g e u)

   mergeT :: Ord a => Tree a n -> List a n
   mergeT = foldTd merge' (\x -> Cons x Nil) Nil
    where merge' p1 xs ys =
            case merge xs ys of
              DepSum xs p -> plusFn p p1 xs

I am not sure whether this is a "reasonable" type for foldTd.

Passing in the Proof as an Argument

Previously I thought the second definition of append would be more difficult to use, because we will have to construct a proof about the lengths before calling append. In the context above, however, it may actually be more appropriate to use this style of definitions.

An alternative definition of merge taking a proof as an argument can be defined by:

< merge :: Ord a => Plus m n k -> List a m ->
<                       List a n -> List a k
< merge PlusZ Nil ys = ys
< merge pf (Cons x xs) Nil = append pf (Cons x xs) Nil
< merge (PlusS p) (Cons x xs) (Cons y ys)
<   | x <= y    = Cons x (merge p xs (Cons y ys))
<   | otherwise = Cons y (merge (incAssocL p) (Cons x xs) ys)

A definition of mergeT using this definition of merge follows immediately because we can simply use the proof coming with the tree:

< mergeT :: Ord a => Tree a n -> List a n
< mergeT Nul = Nil
< mergeT (Tip x) = Cons x Nil
< mergeT (Bin t u (_,p1)) =
<   merge p1 (mergeT t) (mergeT u)

I don't know which approach can be called more "natural". However, both Jim Apple and apfelmus pointed out that type families, a planned new feature of Haskell, may serve as a kind of type function. With type families, the append example would look like (code from apfelmus):

  type family   Plus :: * -> * -> *
  type instance Plus Z n     = n
  type instance Plus (S m) n = S (Plus m n)

  append :: (Plus m n ~ k) => List a m -> List a n -> List a k
  append Nil         ys = ys
  append (Cons x xs) ys = Cons x (append xs ys)

apfelmus commented that "viewed with the dictionary translation for type classes in mind, this is probably exactly the alternative type of append you propose: append :: Plus m n k -> List a m -> List a n -> List a k".

References

  • [1] Thorsten Altenkirch, Conor McBride, and James McKinna. Why Dependent Types Matter.
  • [2] Tim Sheard, James Hook, and Nathan Linger. GADTs + Extensible Kinds = Dependent Programming.
  • [3] James Cheney and Ralf Hinze. A Lightweight Implementation of Generics and Dynamics.
  • [4] Stephanie Weirich. Type-Safe Cast (Functional Pearl). ICFP 2000.
  • [5] Arthur I. Baars and S. Doaitse Swierstra. Typing Dynamic Typing. ACM SIGPLAN Notices, 37(9):157–166, September 2002.

Appendix. Auxiliary Functions

> instance Show a => Show (List a n) where
>  showsPrec _ Nil = ("[]"++)
>  showsPrec _ (Cons x xs) = shows x . (':':) . shows xs

> instance Show (Plus m n k) where
>   showsPrec _ PlusZ = ("pZ"++)
>   showsPrec p (PlusS pf) = showParen (p>=10) (("pS " ++) .
>                              showsPrec 10 pf)

> instance Show a => Show (Tree a n) where
>   showsPrec _ Nul = ("Nul"++)
>   showsPrec p (Tip x) = showParen (p >= 10) (("Tip " ++) . shows x)
>   showsPrec p (Bin t u pf) =
>     showParen (p>=10)
>      (("Bin "++) . showsPrec 10 t . (' ':) . showsPrec 10 u .
>       (' ':) . showsPrec 10 pf)

Encoding Inductive and Coinductive Types in Polymorphic Lambda Calculus

This was written about two years ago. Mostly inspired by Philip Wadler’s note Recursive Types for Free.

Inductive types

Inductive types are least fixed-points. An inductive type μF is simulated by

 forall x . (F x -> x) -> x

In Haskell, we need a constructor to wrap a “forall” quantifier.

> newtype LFix f = LF (forall x. (f x -> x) -> x)
> instance Show (LFix f) where
>   show _ = ""

> foldL :: Functor f => (f x -> x) -> LFix f -> x
> foldL f (LF x) = x f

> inL :: Functor f => f (LFix f) -> LFix f
> inL fx = LF (\f -> f . fmap (foldL f) $ fx)

> outL :: Functor f => LFix f -> f (LFix f)
> outL = foldL (fmap inL)

Example: lists.

> data F a x = L | R a x deriving Show
> instance Functor (F a) where
>   fmap f L = L
>   fmap f (R a x) = R a (f x)

> type ListL a = LFix (F a)

A ListL can only be constructed out of nil and cons.

    Main> consL 1 (consL 2 (consL 3 nilL))
      
> nilL = inL L
> consL a x = inL (R a x)

We can convert a ListL to a ‘real’ Haskell list.


   Main> lList $ consL 1 (consL 2 (consL 3 nilL))
   [1,2,3]

> lList :: LFix (F a) -> [a]
> lList = foldL ll 
>   where ll L = []
>         ll (R a x) = a : x

However, a ListL has to be explicitly built, so there is no infinite ListL.
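The Church-style encoding can also be sketched in untyped Python (my own illustration; the forall lives only in our heads), where a list literally is its own fold:

```python
# A Church-encoded list is a function that takes a cons-algebra (c, n)
# and folds itself with it -- the Python analogue of LFix (F a).

def nil():
    return lambda c, n: n

def cons(a, xs):
    return lambda c, n: c(a, xs(c, n))

def to_list(xs):
    """Convert a Church list to a real Python list (the analogue of lList)."""
    return xs(lambda a, rest: [a] + rest, [])
```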

Coinductive types

A coinductive type νF is simulated by

 exists x . (x -> F x, x)

We represent it in Haskell using the property exists x . F x == forall y . (forall x . F x -> y) -> y:

> data GFix f = forall x . GF (x -> f x, x)
> instance Show (GFix f) where
>   show _ = ""

> unfoldG :: Functor f => (x -> f x) -> x -> GFix f
> unfoldG g x = GF (g,x)

> outG :: Functor f => GFix f -> f (GFix f)
> outG (GF (g,x)) = fmap (unfoldG g) . g $ x

> inG :: Functor f => f (GFix f) -> GFix f
> inG = unfoldG (fmap outG) 

Example:

> type ListG a = GFix (F a)

ListG can be constructed out of nil and cons, as well as an unfold.

> nilG = inG L
> consG a x = inG (R a x)

> fromG :: Int -> ListG Int
> fromG m = unfoldG ft m
>    where ft m = R m (m+1)

However, one can perform only a finite number of outG.

  Main> fmap outG . outG . fromG $ 1 
  R 1 (R 2 )
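Indeed, every observation must decide in advance how many times to apply outG. A self-contained sketch (repeating the definitions above, with a hypothetical takeG) makes the pattern concrete:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- greatest fixed-point as an existential pair, as defined in the post
data GFix f = forall x . GF (x -> f x, x)

unfoldG :: Functor f => (x -> f x) -> x -> GFix f
unfoldG g x = GF (g, x)

outG :: Functor f => GFix f -> f (GFix f)
outG (GF (g, x)) = fmap (unfoldG g) (g x)

-- the base functor for lists
data F a x = L | R a x

instance Functor (F a) where
  fmap _ L       = L
  fmap f (R a x) = R a (f x)

fromG :: Int -> GFix (F Int)
fromG = unfoldG (\m -> R m (m + 1))

-- takeG (a hypothetical helper): observe at most n elements by repeated outG
takeG :: Int -> GFix (F a) -> [a]
takeG n g
  | n <= 0    = []
  | otherwise = case outG g of
      L        -> []
      R a rest -> a : takeG (n - 1) rest
```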

Isomorphism

The function force witnesses the isomorphism between LFix and GFix.

> force :: Functor f => GFix f -> LFix f
> force = inL . fmap force . outG     -- recursion!

If LFix and GFix are isomorphic, we are allowed to build hylomorphisms:

> fromTo :: Int -> Int -> LFix (F Int)
> fromTo m n = takeLessThan n . force . fromG $ m

> takeLessThan :: Int -> LFix (F Int) -> LFix (F Int)
> takeLessThan n = foldL (tlt n)
>   where tlt n L = nilL
>         tlt n (R m x) | n <= m = nilL
>                       | otherwise = consL m x

  Main> lList (fromTo 1 10)
  [1,2,3,4,5,6,7,8,9]

However, force has to be defined using general recursion. The price is that it is now possible to write programs that do not terminate.

Proving Some Equalities in Propositional Logic

Notes taken while listening to Max's course on logic, recorded here as a memo.

The basic laws we assume of a boolean algebra:

  • commutativity,
  • associativity,
  • absorption: a /\ (a \/ b) = a; a \/ (a /\ b) = a;
  • complement: a \/ ~a = T; a /\ ~a = F;
  • distribution: a /\ (b \/ c) = (a /\ b) \/ (a /\ c); a \/ (b /\ c) = (a \/ b) /\ (a \/ c).

Basics

Prove a /\ T = a (and complementarily, a \/ F = a):

a /\ T
= { complement }
a /\ (a \/ ~a)
= { absorption }
a

Prove a /\ F = F (and complementarily, a \/ T = T):

a /\ F
= { since a \/ F = a }
(a \/ F) /\ F
= { absorption }
F

Prove ~T = F (and complementarily, ~F = T):

~T
= { since a /\ T = a }
~T /\ T
= { complement }
F

Prove a /\ a = a = a \/ a:

a /\ a
= { since a \/ F = a}
a /\ (a \/ F)
= { absorption }
a
The other side follows similarly.

De Morgan’s Laws

De Morgan’s laws:

~(a /\ b) = ~a \/ ~b
~(a \/ b) = ~a /\ ~b

They are trickier to prove. The following proof is from Peter Williams's Notes on Basic Proofs.

We will need the following lemma:

If a /\ b = F and a \/ b = T, then a = ~b.

We try to prove the first of the two laws.

Firstly, we prove that (a /\ b) /\ (~a \/ ~b) = F:

(a /\ b) /\ (~a \/ ~b)
= { distribution }
((a /\ b) /\ ~a) \/ ((a /\ b) /\ ~b)
= { commutativity, associativity }
((a /\ ~a) /\ b) \/ (a /\ (b /\ ~b))
= { complement }
(F /\ b) \/ (a /\ F)
= { since a /\ F = F and, symmetrically, F /\ b = F }
F \/ F
= F

Then we prove that (a /\ b) \/ (~a \/ ~b) = T:

(a /\ b) \/ (~a \/ ~b)
= { associativity }
((a /\ b) \/ ~a) \/ ~b
= { distributivity }
((a \/ ~a) /\ (b \/ ~a)) \/ ~b
= { complement }
(T /\ (b \/ ~a)) \/ ~b
= { since T /\ a = a }
(b \/ ~a) \/ ~b
= { associativity, commutativity }
(b \/ ~b) \/ ~a
= { complement }
T \/ ~a
= T

The lemma, on the other hand, is proved below. The following proof is adapted from that on AntiQuark. We show that ~b = a with the assumptions a \/ b = T and a /\ b = F:

~b
= ~b /\ T
= { assumption: a \/ b = T }
~b /\ (a \/ b)
= { distribution }
(~b /\ a) \/ (~b /\ b)
= { complement }
(~b /\ a) \/ F
= { assumption: a /\ b = F }
(~b /\ a) \/ (a /\ b)
= { distribution (backwards) }
a /\ (~b \/ b)
= a /\ T
= a
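Since Bool with (&&), (||), and not is a two-element boolean algebra, both De Morgan laws and the lemma can also be checked exhaustively by truth table; a minimal sketch (the names deMorgan1, deMorgan2, lemma, and checkAll are mine):

```haskell
-- exhaustive truth-table checks over the two-element boolean algebra
deMorgan1, deMorgan2 :: Bool -> Bool -> Bool
deMorgan1 a b = not (a && b) == (not a || not b)
deMorgan2 a b = not (a || b) == (not a && not b)

-- the lemma: if a /\ b = F and a \/ b = T, then a = ~b
lemma :: Bool -> Bool -> Bool
lemma a b = not ((a && b) == False && (a || b) == True) || (a == not b)

-- check a binary property on all four valuations
checkAll :: (Bool -> Bool -> Bool) -> Bool
checkAll p = and [ p a b | a <- [False, True], b <- [False, True] ]
```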

Double Negation

The seemingly simple rule ~~a = a turns out to be the most tricky:

~~a
= ~~a /\ T
= ~~a /\ (~a \/ a)
= (~~a /\ ~a) \/ (~~a /\ a)
= { complement: (~~a /\ ~a) = F }
F \/ (~~a /\ a)
= (~a /\ a) \/ (~~a /\ a)
= (a /\ ~a) \/ (a /\ ~~a)
= { distribution (backwards) }
a /\ (~a \/ ~~a)
= a /\ T
= a

Polymorphic Types in Haskell without Constructors?

I wrote this during Feb. 2005. Since it occurs to me every once in a while and I kept forgetting the details, I’d better keep a note.

I was trying to simulate Church numerals and primitive recursion using rank-2 polymorphism in Haskell, and I thought I could do it without introducing a constructor by newtype. Church’s encoding of natural numbers can be expressed by the type:

> type N = forall a . (a -> a) -> a -> a

Basics

We first define zero, succ, and some constants:

> zero :: N 
> zero = \f -> \a ->  a

> succ :: N -> N 
> succ n = \f -> \a -> f (n f a)

> toInt :: N -> Int
> toInt n = n (1+) 0

> one, two, three :: N
> one = succ zero
> two = succ one
> three = succ two

Addition and multiplication can be defined as usual:

> add, mul :: N -> N -> N
> add m n = \f -> \a -> m f (n f a)
> mul m n = \f -> \a -> m (n f) a
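A quick sanity check via toInt; this is a self-contained sketch repeating the definitions above, with succ renamed suc to avoid the Prelude clash:

```haskell
{-# LANGUAGE RankNTypes #-}

type N = forall a . (a -> a) -> a -> a

zero :: N
zero = \_ a -> a

-- succ, renamed suc to avoid clashing with the Prelude
suc :: N -> N
suc n = \f a -> f (n f a)

toInt :: N -> Int
toInt n = n (1+) 0

add, mul :: N -> N -> N
add m n = \f a -> m f (n f a)
mul m n = \f a -> m (n f) a

two, three :: N
two = suc (suc zero)
three = suc two
```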

Predecessor

Even pred typechecks — at least this particular version of pred:

> pred :: N -> N
> pred n f x = n (\g h -> h (g f)) (const x) id

I am more familiar with another way to implement pred, which uses pairs. This pred, on the other hand, can be understood by observing that the “fold” part returns a function that abstracts away the outermost f:

  pred n f x = dec n id
    where dec 0 = \h -> x
          dec (1+n) = \h -> h (f^n x)
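To see it compute, here is a self-contained check (definitions repeated; predN is my name for the pred above, written with an explicit lambda for the two inner arguments):

```haskell
{-# LANGUAGE RankNTypes #-}

type N = forall a . (a -> a) -> a -> a

zero :: N
zero = \_ a -> a

suc :: N -> N
suc n = \f a -> f (n f a)

toInt :: N -> Int
toInt n = n (1+) 0

-- the constructor-free predecessor from the post
predN :: N -> N
predN n = \f x -> n (\g h -> h (g f)) (const x) id
```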

Exponential

I know of two ways to define exp. However, only this one typechecks:

> exp :: N -> N -> N
> exp m n = n m
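And this exp really computes exponentiation, m^n; a self-contained check (expN is my renaming, to avoid the Prelude's exp):

```haskell
{-# LANGUAGE RankNTypes #-}

type N = forall a . (a -> a) -> a -> a

zero :: N
zero = \_ a -> a

suc :: N -> N
suc n = \f a -> f (n f a)

toInt :: N -> Int
toInt n = n (1+) 0

-- exp m n = m^n: n is instantiated at the monomorphic type a -> a
expN :: N -> N -> N
expN m n = n m
```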

Another definition, making use of mul, does not typecheck!

  exp :: N -> N -> N
  exp m n = n (mul m) one

GHC returns a rather cryptic error message “Inferred type is less polymorphic than expected. Quantified type variable `a’ escapes.” The real reason is that polymorphic types in Haskell can only be instantiated with monomorphic types. If we spell out the explicit type applications, we get:

   exp  (m::N) (n::N) = n [N] ((mul m)::N->N) (one::N)

which is not allowed. On the other hand, the previous definition of exp does not instantiate type variables to polymorphic types:

   exp (m::N) (n::N) = /\b -> \(f::(b->b)) -> \(x::b) -> n[b->b] (m[b]) f x

For the same reason, primitive recursion does not type check:

 primrec :: (N -> a -> a) -> a -> N -> a
 primrec f c n =
   snd (n (\z -> pair (succ (fst z)) 
                      (f (fst z) (snd z)))
          (pair zero c))

nor does the pair-based definition of pred.

To overcome the problem we have to wrap N in newtype. The solution was presented by Oleg Kiselyov.
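A sketch of the shape of that fix, under my own naming (not necessarily Oleg's exact code): once N is wrapped in a newtype, it is an ordinary monomorphic type, so n may be instantiated at N -> N and N, and the mul-based exp typechecks:

```haskell
{-# LANGUAGE RankNTypes #-}

-- the newtype wrapper makes N a monomorphic type
newtype N = N (forall a . (a -> a) -> a -> a)

zero :: N
zero = N (\_ a -> a)

suc :: N -> N
suc (N n) = N (\f a -> f (n f a))

toInt :: N -> Int
toInt (N n) = n (1+) 0

mul :: N -> N -> N
mul (N m) (N n) = N (\f a -> m (n f) a)

one :: N
one = suc zero

-- the previously ill-typed definition now goes through:
-- n is used at the monomorphic types N -> N and N
expN :: N -> N -> N
expN m (N n) = n (mul m) one
```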

Subtraction not Definable in Simply Typed Lambda Calculus

I probably wrote this a couple of years ago when I was studying polymorphic types.

What happens if we have only monomorphic types when we define Church numerals? Let us represent Church numerals by the type (a -> a) -> a -> a for some monomorphic a:

> import Prelude hiding (succ, pred)

> type N a = (a -> a) -> a -> a

Zero and successor can still be defined. Their values do not matter.

> zero :: N a
> zero = error ""

> succ :: N a -> N a
> succ = error ""

Also assume we have pairs predefined:

> pair a b = (a,b)

We can even define primitive recursion, except that it does not have the desired type:

  primrec :: (N a -> b -> b) -> b -> N a -> b

Instead we get the type below:

> primrec :: (N a -> b -> b) -> b -> N (N a, b) -> b
> primrec f c n =
>   snd (n (\z -> pair (succ (fst z)) 
>                      (f (fst z) (snd z)))
>          (pair zero c))

Therefore, predecessor does not get the type pred :: N a -> N a
we want. Instead we have:

> pred :: N (N a, N a) -> N a
> pred = primrec (\n m -> n) zero

Ref.

Oleg Kiselyov, Predecessor and lists are not representable in simply typed lambda-calculus.

Zipping Half of a List with Its Reversal

I found this note, with one page of Haskell code and no comments, when I was cleaning up my files. It took me quite a while to figure out what it was about: given a list, zip its first half with the reverse of its second half, in only “one and a half” traversals of the list. This gives us a fast algorithm to, for example, check whether a list is a palindrome. I thought it was probably something we discussed in AoP, but it turns out that I must have taken the note when people in IPL were studying There and Back Again by Danvy and Goldberg. Recorded in the note was my attempt to understand it.

Let convolute (xs,ys) = zip xs (reverse ys), and let half be a function cutting a list in half. The problem is then simply specified by revzip = convolute . half. The first trick is that convolute can be expressed in terms of its generalisation:

> convolute :: ([a],[b]) -> [(a,b)]
> convolute = fstn . uncurry walk

> fstn :: ([a],[b]) -> [a]
> fstn (xs,[]) = xs

where walk is defined by:

  walk xs ys = (zip xs (reverse (take n ys)), drop n ys)
      where n = length xs

The reason for introducing walk is that it can be defined as a fold:

> walk = foldr ((.).shift) (split (nil, id))
> shift x (zs,y:ys) = ((x,y):zs, ys)
> nil _ = []

But the definition below is probably a lot more comprehensible:

  walk [] ys = ([],ys)
  walk (x:xs) ys = shift x (walk xs ys)
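To convince oneself that the fold form agrees with the specification of walk, here is a self-contained check (walkSpec is my name for the specification given in the prose):

```haskell
-- auxiliary functions from the post
split (f, g) a = (f a, g a)
nil _ = []
shift x (zs, y:ys) = ((x,y):zs, ys)

-- walk as a fold
walk :: [a] -> [b] -> ([(a,b)], [b])
walk = foldr ((.) . shift) (split (nil, id))

-- the specification: zip xs against the reversed prefix of ys
walkSpec :: [a] -> [b] -> ([(a,b)], [b])
walkSpec xs ys = (zip xs (reverse (take n ys)), drop n ys)
  where n = length xs
```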

Some of these auxiliary functions should be part of the Haskell Prelude:

> prod (f,g) (a,b) = (f a, g b)
> split (f,g) a = (f a, g a)

The following implementation of half splits a list into two halves using two pointers:

> half :: [a] -> ([a],[a])
> half = race . dup 
> race (xs,[]) = ([],xs)
> race (_:xs,[_]) = ([],xs)
> race (x:xs,_:_:ys) = prod ((x:),id) (race (xs,ys))
> dup a = (a,a)
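For example, half splits an even-length list evenly and drops the middle element of an odd-length one; a self-contained sketch:

```haskell
-- two pointers race down copies of the list, one at single speed,
-- one at double speed; when the fast one runs out, we are halfway
prod (f, g) (a, b) = (f a, g b)

dup :: a -> (a, a)
dup a = (a, a)

race :: ([a], [a]) -> ([a], [a])
race (xs, [])       = ([], xs)
race (_:xs, [_])    = ([], xs)
race (x:xs, _:_:ys) = prod ((x:), id) (race (xs, ys))

half :: [a] -> ([a], [a])
half = race . dup
```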

Furthermore, uncurry f = apply . prod (f,id) where

> apply (f,a) = f a

Therefore the specification has become:

  revzip = fstn . apply . prod (walk,id) . race . dup

Now we try to fuse prod (walk,id) . race and get

> wrace (xs,[]) = (split (nil, id), xs)
> wrace (_:xs, [_]) = (split (nil, id), xs)
> wrace (x:xs, _:_:ys) = prod ((shift x .), id) (wrace (xs,ys))

Hence the result:

> revzip = fstn . apply . wrace . dup
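Putting everything together, the derived program can actually be run; a self-contained copy of the final definitions for checking:

```haskell
-- auxiliary functions
prod (f, g) (a, b) = (f a, g b)
split (f, g) a = (f a, g a)
apply (f, a) = f a
dup a = (a, a)
nil _ = []
fstn (xs, []) = xs
shift x (zs, y:ys) = ((x,y):zs, ys)

-- walk fused with race
wrace (xs, [])       = (split (nil, id), xs)
wrace (_:xs, [_])    = (split (nil, id), xs)
wrace (x:xs, _:_:ys) = prod ((shift x .), id) (wrace (xs, ys))

-- zip the first half of a list with the reverse of its second half
revzip :: [a] -> [(a, a)]
revzip = fstn . apply . wrace . dup
```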