Tag Archives: Thinning Theorem

Constructing datatype-generic fully polynomial-time approximation schemes using generalised thinning

Shin-Cheng Mu, Yu-Han Lyu, and Akimasa Morihata. In the 6th ACM SIGPLAN Workshop on Generic Programming (WGP 2010), pages 97-108, Sep. 2010. [PDF]

The fully polynomial-time approximation scheme (FPTAS) is a class of approximation algorithms that is able to deliver an approximate solution within any chosen ratio in polynomial time. By generalising Bird and de Moor’s Thinning Theorem to a property between three orderings, we come up with a datatype-generic strategy for constructing fold-based FPTASs. Greedy, thinning, and approximation algorithms can thus be seen as a series of generalisations. Components needed in constructing an FPTAS are often natural extensions of those in the thinning algorithm. Design of complex FPTASs is thus made easier, and some of the resulting algorithms turn out to be simpler than those in previous works.

The Pruning Theorem: Thinning Based on a Loose Notion of Monotonicity

The reason I studied the thinning theorem again is that I needed a slightly generalised variation. The following seems to be what I need. The general idea and the term “pruning” emerged from discussions with Sharon Curtis. The term “lax preorder” is my own invention. I am not good at naming, and welcome suggestions for better names.

The notation below is mostly taken from the book Algebra of Programming. Not many people, even among functional programmers, are familiar with these notations involving relational intersection, division, etc. One starts to appreciate their benefits once one gets used to their calculation rules. Most of the time when I was doing the proof, I was merely manipulating the symbols. I could not have managed the complexity if I had had to fall back on the semantics and think about what they “mean” all the time.

A relation Q :: PA ← A between a set of A's and an element is called a lax preorder if it is

  1. reflexive in the sense that ∋ ⊆ Q, and
  2. transitive in the sense that (Q/∋) . Q ⊆ Q.

A relation S :: A ← FA is monotonic on lax preorder Q if S . FQ˘ ⊆ Q˘. Λ(S . F∈).

Given a lax preorder, we define:

prune Q = ∈\∈ ∩ Q/∋

The definition induces the universal property:

X ⊆ prune Q . ΛS   ≡    ∈ . X ⊆ S   ⋀   X . S˘ ⊆ Q

Any preorder R induces a lax preorder ∋ . R. If a relation S is monotonic on R, it is monotonic on the lax preorder ∋ . R. Furthermore, prune (∋ . R) = thin R. Therefore, pruning is a generalisation of thinning. We need the notion of lax preorders because, for some problems, the generating relation S is monotonic on a lax preorder, but not on a preorder.
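
To spell out the definition pointwise: ys (prune Q) xs holds exactly when ys ⊆ xs and ys Q a for every a in xs, and instantiating Q := ∋ . R gives the familiar reading of thin R, namely that every discarded candidate is dominated by a kept one. Below is one concrete refinement of thin R to finite sets represented as lists. This is a Haskell sketch of my own (not code from the post), assuming the preorder R is given as a Boolean predicate r, with x `r` y meaning x is at least as good as y:

    thinBy :: (a -> a -> Bool) -> [a] -> [a]
    thinBy r = foldr keep []
      where keep x kept
              | any (`r` x) kept = kept                          -- x is already covered by a kept candidate
              | otherwise        = x : filter (not . r x) kept   -- keep x, drop the candidates it covers

Reflexivity of r guarantees that a kept candidate covers itself, and transitivity guarantees that dropping a candidate dominated by x loses no coverage.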

Theorem: if S is monotonic on lax preorder Q, we have:

fold (prune Q . Λ(S . F∈)) ⊆ prune Q . Λ(fold S)

Proof. Since Λ(fold S) = fold (Λ(S . F∈)), by fold fusion, the theorem holds if

prune Q . Λ(S . F∈) . F(prune Q) ⊆ prune Q . Λ(S . F∈)

By the universal property of prune, the above is equivalent to:

∈ . prune Q . Λ(S . F∈) . F(prune Q) ⊆ S . F∈   ⋀
prune Q . Λ(S . F∈) . F(prune Q) . (S . F∈)˘ ⊆ Q

The first inclusion is proved by:

     ∈ . prune Q . Λ(S . F∈) . F(prune Q)
⊆     { since prune Q ⊆ ∈\∈ }
     ∈ . ∈\∈ . Λ(S . F∈) . F(prune Q)
⊆     { division }
     ∈ . Λ(S . F∈) . F(prune Q)
=     { power transpose }
     S . F∈ . F(prune Q)
⊆     { since prune Q ⊆ ∈\∈ }
     S . F∈ . F(∈\∈)
⊆     { division }
     S . F∈

And the second by:

     prune Q . Λ(S . F∈) . F(prune Q) . (S . F∈)˘
⊆     { since prune Q ⊆ Q/∋, converse }
     prune Q . Λ(S . F∈) . F(Q/∋) . F∋ . S˘
⊆     { division }
     prune Q . Λ(S . F∈) . FQ . S˘
⊆     { monotonicity: FQ . S˘ ⊆ Λ(S . F∈)˘. Q }
     prune Q . Λ(S . F∈) . Λ(S . F∈)˘. Q
⊆     { since Λ(S . F∈) is a function, that is, f . f˘ ⊆ id }
     prune Q . Q
⊆     { since prune Q ⊆ Q/∋, division }
     Q/∋ . Q
⊆     { since Q transitive }
     Q

Endproof.

The proof above uses transitivity of Q but not reflexivity. I need reflexivity to construct base cases, for example, to come up with this specialised Pruning Theorem for lists:

foldr (prune Q . Λ(S . (id × ∈))) {e} ⊆ prune Q . Λ(foldr S e)

if S . (id × Q˘) ⊆ Q˘. Λ(S . (id × ∈)).
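
To make the shape of the resulting algorithm visible, here is a rough Haskell reading of the left-hand side of the list instance, with sets represented as lists. The names are my own: q :: [a] -> a -> Bool stands in for the lax preorder Q (“this kept set covers that candidate”) and step :: b -> a -> [a] enumerates S. It is a sketch under the assumptions that q is reflexive (a set covers its own members) and that coverage is preserved when the kept set grows, as is the case for Q = ∋ . R.

    pruneFoldr :: ([a] -> a -> Bool) -> (b -> a -> [a]) -> [a] -> [b] -> [a]
    pruneFoldr q step e = foldr extend e
      where extend x cands = pruneBy q [ y | a <- cands, y <- step x a ]

    -- keep a subset of the candidates that still covers all of them
    pruneBy :: ([a] -> a -> Bool) -> [a] -> [a]
    pruneBy q = foldr keep []
      where keep y kept
              | q kept y  = kept        -- y is covered by what we already keep
              | otherwise = y : kept    -- otherwise keep y as well

The point of the theorem is that pruneFoldr q step e refines “generate all results of foldr S e, then prune”, while only ever holding a pruned set of candidates at each step.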

Proving the Thinning Theorem by Fold Fusion

Algebra of Programming records proofs of the greedy and the thinning theorems that are slightly shorter than proofs using fold fusion. Of course, one can still use fold fusion. In fact, proving them by fold fusion appears among the exercises in Chapter 8 (PDF) of Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, of which I am listed among the authors.

A while ago, when I needed to consider some variations of the thinning theorem, I tried to do the proof again. And, horrifyingly, I could not do it anymore! Had my skills become rusty due to lack of practice in the past few years? In panic, I spent an entire afternoon fighting with it, until I realised that it was just a typical copying error from the very beginning: when I copied a property from the book I put in an extra Λ. I then trapped myself in the maze of expanding ΛR into ∈\R ∩ (R\∈), using the modular law, and so on.

Having fixed the error, I got my trivial and easy proof back. Anyway, I am going to record it below, in case I run into the same panic again.

Given a preorder Q, the relation thin Q is defined by:

thin Q = ∈\∈ ∩ (∋ . Q)/∋

The definition induces the universal property:

X ⊆ thin Q . ΛS   ≡    ∈ . X ⊆ S   ⋀   X . S˘ ⊆ ∋ . Q

And here are some basic properties we will make use of later:

(R . S)˘ = S˘ . R˘,    (R\S)˘ = S˘/R˘      (converse)
∈ . ΛS = S       (power transpose)
ΛR . R˘ ⊆ ∋
R . R\S ⊆ S,       R/S . S ⊆ R       (division)

The Thinning Theorem

The thinning theorem says:
Theorem: if S is monotonic on preorder Q, that is, S . FQ˘ ⊆ Q˘. S, we have:

fold (thin Q . Λ(S . F∈)) ⊆ thin Q . Λ(fold S)

Proof. By fold fusion, the theorem holds if

thin Q . Λ(S . F∈) . F(thin Q) ⊆ thin Q . Λ(S . F∈)

By the universal property of thin, the above inclusion is equivalent to

∈ . thin Q . Λ(S . F∈) . F(thin Q) ⊆ S . F∈  ⋀
thin Q . Λ(S . F∈) . F(thin Q) . (S . F∈)˘ ⊆ ∋ . Q

The first inclusion is proved by:

     ∈ . thin Q . Λ(S . F∈) . F(thin Q)
⊆     { since thin Q ⊆ ∈\∈ }
     ∈ . ∈\∈ . Λ(S . F∈) . F(thin Q)
⊆     { division }
     ∈ . Λ(S . F∈) . F(thin Q)
=     { power transpose }
     S . F∈ . F(thin Q)
⊆     { since thin Q ⊆ ∈\∈ }
     S . F∈ . F(∈\∈)
⊆     { division }
     S . F∈

And the second by:

     thin Q . Λ(S . F∈) . F(thin Q) . (S . F∈)˘
⊆     { since thin Q ⊆ (∋ . Q)/∋, converse }
     thin Q . Λ(S . F∈) . F((∋ . Q)/∋) . F∋ . S˘
⊆     { functor, division }
     thin Q . Λ(S . F∈) . F(∋ . Q) . S˘
⊆     { monotonicity: FQ . S˘ ⊆ S˘. Q }
     thin Q . Λ(S . F∈) . F∋ . S˘. Q
⊆     { since ΛR . R˘ ⊆ ∋ }
     thin Q . ∋ . Q
⊆     { since thin Q ⊆ (∋ . Q)/∋, division }
     ∋ . Q . Q
⊆     { since Q transitive }
     ∋ . Q

Endproof.

By the way, the variation of the thinning theorem I need is “fold (thin Q . Λ(S . F∈)) ⊆ thin Q . Λ(fold S) if S . F(Q˘. ∈) ⊆ Q˘. S . F∈”, whose proof, luckily, is trivial once you have written down the original proof.
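
To see the theorem at work on a small example (my own illustration, not one from the post): for the 0/1 knapsack problem we may fold over the items, keep (weight, value) pairs of feasible partial selections, and thin with the “lighter-or-equal and at-least-as-valuable” preorder after every step. Monotonicity holds because adding an item to two selections preserves that ordering, and a dominating selection stays feasible whenever the dominated one does.

    knapsack :: Int -> [(Int, Int)] -> Int
    knapsack cap = maximum . map snd . foldr step [(0, 0)]
      where
        step (w, v) cands = thin (cands ++ extended)
          where extended = [ (w' + w, v' + v) | (w', v') <- cands, w' + w <= cap ]
        -- keep only candidates not dominated by a lighter-or-equal, at-least-as-valuable one
        thin = foldr keep []
          where keep c kept
                  | any (`dominates` c) kept = kept
                  | otherwise = c : filter (not . dominates c) kept
        (w1, v1) `dominates` (w2, v2) = w1 <= w2 && v1 >= v2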

Maximum Segment Sum and Density with Bounded Lengths

It may be surprising that variations of the maximum segment sum (MSS) problem, a textbook example for squiggolists, are still an active topic for algorithm designers. These literate Haskell scripts present programs solving two recently studied variations (brute-force specifications of the two problems are sketched after the list):

  1. mssu.lhs: an amortised linear-time algorithm computing the maximum sum of segments not longer than an upper-bound;
  2. msdlb.lhs: an O(n log L) algorithm computing the maximum density (average) of segments not shorter than a lower-bound;
  3. msdll.lhs: computing the maximum density (average) of segments not shorter than a lower-bound. With the discovery of Goldwasser et al. we are able to refine the algorithm to amortised linear time again.
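
For reference, the brute-force specifications the derivations start from can be phrased as below. This is my own reconstruction, with function names of my choosing; msdl assumes the input is at least as long as the lower bound l.

    import Data.List (inits, tails)

    segments :: [a] -> [[a]]
    segments = concatMap inits . tails

    -- maximum sum over segments of length at most u
    mssu :: Int -> [Int] -> Int
    mssu u = maximum . map sum . filter ((<= u) . length) . segments

    -- maximum density (average) over segments of length at least l
    msdl :: Int -> [Int] -> Double
    msdl l = maximum . map density . filter ((>= l) . length) . segments
      where density s = fromIntegral (sum s) / fromIntegral (length s)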

2007/06/26 Update: fixed binary search.
2007/11/04 Update: linear time algorithm for MSDL.

Countdown: a case study in origami programming

R. S. Bird and S-C. Mu. In Journal of Functional Programming Vol. 15(5), pp. 679-702, 2005.
[GZipped Postscript]

Countdown is the name of a game in which one is given a list of source numbers and a target number, with the aim of building an arithmetic expression out of the source numbers to get as close to the target as possible. Starting with a relational specification, we derive a number of functional programs for solving Countdown. These programs are obtained by exploiting the properties of the folds and unfolds of various data types, a style of programming Gibbons has aptly called origami programming. Countdown is attractive as a case study in origami programming both as an illustration of how different algorithms can emerge from a single specification, and as a demonstration of the space and time trade-offs that have to be taken into account in comparing functional programs.
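
For readers unfamiliar with the game, a naive generate-and-test reading of the problem may help. This is my own sketch (with the usual rule that intermediate results must be positive integers and divisions exact), not one of the programs derived in the paper.

    import Data.List (permutations, subsequences)

    -- the obtainable value closest to the target
    countdown :: Int -> [Int] -> Int
    countdown target nums = snd (minimum [ (abs (target - v), v) | v <- candidates ])
      where
        candidates = [ v | xs <- subsequences nums, not (null xs)
                         , p  <- permutations xs
                         , v  <- values p ]

    -- all values obtainable from the numbers in this exact order
    values :: [Int] -> [Int]
    values [x] = [x]
    values xs  = [ v | (ls, rs) <- splits xs
                     , l <- values ls, r <- values rs
                     , v <- combine l r ]
      where
        splits ys   = [ splitAt i ys | i <- [1 .. length ys - 1] ]
        combine l r = [l + r, l * r]
                   ++ [ l - r | l - r > 0 ]
                   ++ [ l `div` r | r /= 0, l `mod` r == 0 ]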

A Calculational Approach to Program Inversion

S-C. Mu, A Calculational Approach to Program Inversion. D.Phil Thesis. Oxford University Computing Laboratory. March 2003
[GZipped Postscript][PDF]

Many problems in computation can be specified in terms of computing the inverse of an easily constructed function. However, studies on how to derive an algorithm from a problem specification involving inverse functions are relatively rare. The aim of this thesis is to demonstrate, in an example-driven style, a number of techniques to do the job. The techniques are based on the framework of relational, algebraic program derivation.

Simple program inversion can be performed by just taking the converse of the program, sometimes described as “running a program backwards”. The approach, however, does not match the pattern of some more advanced algorithms. Previous results, due to Bird and de Moor, gave conditions under which the inverse of a total function can be written as a fold. In this thesis, a generalised theorem stating the conditions for the inverse of a partial function to be a hylomorphism is presented and proved. The theorem is applied to many examples, including the classical problem of rebuilding a binary tree from its preorder and inorder traversals.
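
The tree-rebuilding instance mentioned above admits a compact functional sketch. The code below is mine, assumes all labels are distinct, and is not the derivation given in the thesis; it merely illustrates the problem being inverted.

    data Tree a = Null | Node (Tree a) a (Tree a)

    -- rebuild a binary tree from its preorder and inorder traversals
    rebuild :: Eq a => [a] -> [a] -> Tree a
    rebuild []        _   = Null
    rebuild (x : pre) ino = Node (rebuild preL inoL) x (rebuild preR inoR)
      where (inoL, _ : inoR) = break (== x) ino
            (preL, preR)     = splitAt (length inoL) pre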

This thesis also investigates the interplay between the above theorem and previous results on optimisation problems. A greedy linear-time algorithm is derived for one of its instances, building a tree of minimum height. The necessary monotonicity condition, though it looks intuitive, is difficult to establish. For general optimal bracketing problems, however, the thinning strategy gives an exponential-time algorithm. The reason and possible improvements are discussed in a comparison with the traditional dynamic programming approach. The greedy theorem is also generalised to a generic form allowing mutually defined algebras. The generalised theorem is applied to the optimal marking problem defined on non-polynomial based datatypes. This approach delivers polynomial-time algorithms without the need to convert the inputs to polynomial based datatypes, which is sometimes inconvenient to do.

The many techniques are applied to solve the Countdown problem, a problem derived from the popular television programme of the same name. Different strategies for deriving an efficient algorithm are experimented with and compared.

Finally, it is shown how to derive from its specification the inverse of the Burrows-Wheeler transform, a string-to-string transform useful in compression. Not only do we identify the key property that makes the inverse algorithm work, but, as a bonus, we also outline how two generalisations of the transform may be derived.

Algebraic methods for optimisation problems

R. S. Bird, J. Gibbons and S-C. Mu. In Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, LNCS 2297, pp. 281-307, January 2002.
[PDF]

We argue for the benefits of relations over functions for modelling programs, and even more so for modelling specifications. To support this argument, we present an extended case study for a class of optimization problems, deriving efficient functional programs from concise relational specifications.

Optimisation problems in logic programming: an algebraic approach

S. Seres and S-C. Mu. In Proceedings of LPSE’00, July 2000.
[GZipped Postscript]

Declarative programming, with its mathematical underpinning, was intended to simplify rigorous reasoning about programs. For functional programs, an algebraic calculus of relations has previously been applied to optimisation problems to derive efficient greedy or dynamic programs from the corresponding inefficient but obviously correct specifications. Here we argue that this approach is natural also in the logic programming setting.