Read the latest posts from Blogs@IADDG.

from danko

My PhD thesis has been approved for defense. You can find it here:

Chapter 1 gives a constructive version of a Henkin-style proof of Gödel's completeness theorem. Chapter 2 introduces a notion of model, similar to Kripke models, with which one can give a normalization-by-evaluation-style proof of completeness for classical logic. Chapter 3 does the same for intuitionistic logic (including disjunction and the existential quantifier). Finally, Chapter 4 presents an extension of intuitionistic logic with delimited control operators which still remains intuitionistic (it preserves the disjunction and existence properties) and can prove the Double Negation Shift and Markov's Principle.


from danko

Published on Feb. 5th, 2010 on

The purpose of this post is a detailed proof, following Coquand [1], of the Open Induction Principle using Decidable-Bar Induction [2].

Open induction (OI) is useful when one cannot do a proof by the usual well-founded induction, that is, when there is no well-founded relation available. In such cases, if the space one works in is compact and the property one wants to prove is open, one can do an induction proof along a non-well-founded order. From the computer science perspective, this means one can build a terminating recursive program for computing a property whose termination does not rest on a structurally decreasing criterion.

In this post we concentrate on Cantor space 2^N, but this is, in a sense, enough, because every compact space is an image of Cantor space. A member α of 2^N is a map N → 2, i.e. an infinite stream of bits, i.e. an infinite branch of the infinite binary tree. We have α<β if α is “left of” β as a branch of the infinite tree, that is, α<β if the two bit streams agree on a finite initial segment and then, at some length n of that segment, α(n) is “left” while β(n) is “right”. We write αn for the finite initial segment of α of length n. Each node in the infinite binary tree corresponds to some initial segment, and vice versa. Each such node, i.e. each finite bit string, is called a basic open. An open, or open subspace, of Cantor space is an arbitrary union of basic opens. Equivalently, an open is determined by a Boolean function G on finite bit strings of arbitrary length, G : 2* → 2, which is monotone: if s, s′ : 2*, s is an initial segment of s′, and G(s)=⊤, then also G(s′)=⊤.
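To make the definitions concrete, here is a small Python sketch (mine, not from the post): branches are modeled as functions N → 2, and since we can only ever inspect finitely many bits, both the order check and membership in an open take an explicit bound.

```python
from itertools import product

def lex_lt(alpha, beta, bound):
    """alpha < beta as branches: they agree up to some n < bound and there
    alpha(n) = 0 ("left") while beta(n) = 1 ("right").  Returns False when
    no disagreement shows up within `bound` bits (the order on infinite
    streams is not decidable, so only a finite prefix is checked)."""
    for n in range(bound):
        a, b = alpha(n), beta(n)
        if a != b:
            return a < b
    return False

def monotone(G, bound):
    """Check, up to `bound` bits, that G : 2* -> 2 is monotone w.r.t.
    initial segments: G(s) = True forces G on every one-bit extension."""
    strings = [list(s) for k in range(bound + 1)
               for s in product([0, 1], repeat=k)]
    return all(not G(s) or G(s + [b]) for s in strings for b in (0, 1))

# Example open: "the string contains a 1", i.e. the branches that are not
# identically zero; alpha is in U iff G holds on some initial segment of it.
G = lambda s: 1 in s
def in_U(alpha, bound):
    return any(G([alpha(i) for i in range(n)]) for n in range(bound + 1))
```

For instance, the all-zero branch is lexicographically below the branch whose only 1-bit is at position 2, and only the latter lies in this U.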

Axiom 1 (Decidable-Bar Induction) For all unary predicates P and Q on 2* (finite binary strings) such that P is decidable and ∀ s. P s → Q s: if ∀ α∈2^N. ∃ n∈N. P(αn) and ∀ s∈2*. (∀ b∈2. Q ⟨ b::s ⟩) → Q s, then Q ⟨ ⟩.

Theorem 2 (Open Induction Principle) Let U be an open set of Cantor space 2^N and < the lexicographic ordering on 2^N. If ∀ α. (∀ β<α. β∈ U) → α∈ U, then ∀ α. α∈ U.

Proof. Let α : 2^N be given. U is open in 2^N, hence given by a mapping G : 2* → 2 which is monotone w.r.t. initial segments: if s is a prefix of t, G s = ⊤ implies G t = ⊤. We say that G bars s with witness k (notation G|k s) if all k-bit extensions of s satisfy G.

We define mutually, by (primitive) recursion on finite strings of numbers, two functions f:N* → 2* and acc:N*→ 2:

acc ⟨ ⟩ = ⊤
acc ⟨ 0::s ⟩ = acc(s)
acc ⟨ n+1::s ⟩ = acc(s) ∧ G|n ⟨ ⊥::f(s) ⟩

f ⟨ ⟩ = ⟨ ⟩
f ⟨ 0::s ⟩ = ⟨ ⊥::f(s) ⟩
f ⟨ n+1::s ⟩ = ⟨ acc⟨ n+1::s ⟩::f(s) ⟩

We set

P(s) := G(f(s))
Q(s) := acc(s) → ∃ m. G|m f(s)

and apply bar induction, which gives us a proof of Q(⟨ ⟩) which, since acc(⟨ ⟩)=⊤, gives an m∈N such that any bit string of length m satisfies G. Hence G(αm) = ⊤, and therefore α∈ U.
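To make the construction concrete, here is a Python transcription of f, acc, and the barring relation (a sketch of mine, not code from the post): ⊥ and ⊤ become the bits 0 and 1, Booleans become True/False, and, following Axiom 1, a string is extended by consing a bit onto its head.

```python
from itertools import product

def bars(G, t, k):
    """G|k t: every k-bit extension of t satisfies G, where a one-bit
    extension of t is <b::t>, i.e. a bit consed onto the head."""
    return all(G(list(e) + t) for e in product([0, 1], repeat=k))

def f(s, G):
    """f : N* -> 2*, defined mutually with acc."""
    if not s:
        return []                                   # f<> = <>
    n, rest = s[0], s[1:]
    if n == 0:
        return [0] + f(rest, G)                     # f<0::s> = <bot::f(s)>
    return [1 if acc(s, G) else 0] + f(rest, G)     # f<n+1::s> = <acc<n+1::s>::f(s)>

def acc(s, G):
    if not s:
        return True                                 # acc<> = top
    n, rest = s[0], s[1:]
    if n == 0:
        return acc(rest, G)                         # acc<0::s> = acc(s)
    # acc<n+1::s> = acc(s) /\ G|n <bot::f(s)>; here s[0] = n+1, so the
    # barring witness is s[0] - 1.
    return acc(rest, G) and bars(G, [0] + f(rest, G), n - 1)

# Tiny monotone G for illustration: all strings of length >= 2 are in the open.
G2 = lambda t: len(t) >= 2
```

With this G2, for example, acc([1]) is False (G2 does not yet bar ⟨⊥⟩ with witness 0), while acc([2]) is True, so f([2]) = [1].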

The hypotheses of (D-BI) are satisfied by the following lemmas.

Lemma 3 ∀ α∈N^N. ∃ n∈ N. G(f(αn)). Proof. We write f(α) for the natural extension of f to infinite sequences. The lemma states that ∀ α∈N^N. f(α)∈ U. Let α be given and apply the progressiveness hypothesis of Theorem 2. Let β<f(α), that is, there is some n such that βn = ⟨ ⊥::s ⟩ and ⟨ ⊤::s ⟩ = f(α)n = f(αn) for some s∈2* (of length n−1). By the defining equations of f and acc, n>0, s=f(α(n−1)), acc(s), and G|k⟨ ⊥::s ⟩, that is, G|k βn, for some k. Hence, G(β(n+k)) = ⊤, i.e. β∈ U.

Lemma 4 For any s∈N*, if (∀ n∈ N. acc⟨ n::s ⟩ → ∃ m. G|m f⟨ n::s ⟩) and acc(s), then ∃ n. G|n f(s). Proof. Since acc(s), also acc⟨ 0::s ⟩, and the hypothesis gives us an m such that G|m f⟨ 0::s ⟩, that is, G|m⟨ ⊥::f(s) ⟩. This means that acc⟨ m+1::s ⟩, which we use with the hypothesis again to get a k such that G|k f⟨ m+1::s ⟩, i.e. G|k⟨ acc⟨ m+1::s ⟩::f(s) ⟩, i.e., since acc⟨ m+1::s ⟩ = ⊤, G|k⟨ ⊤::f(s) ⟩. We have shown that G bars both ⟨ ⊤::f(s) ⟩ and ⟨ ⊥::f(s) ⟩, which means that G already bars f(s) itself: G|(max(m,k)+1) f(s).
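The last step (combining the two bars into a bar on f(s)) can be spot-checked exhaustively in Python for small strings; bars below transcribes G|k under the cons convention of Axiom 1, and G is just one arbitrarily chosen monotone predicate, so this is a sanity check rather than a proof.

```python
from itertools import product

def bars(G, t, k):
    """G|k t: every k-bit extension of t (bits consed onto the head)
    satisfies G."""
    return all(G(list(e) + t) for e in product([0, 1], repeat=k))

# A monotone G (monotone w.r.t. extension at the head): at least two 1-bits.
G = lambda t: t.count(1) >= 2

# Claim checked: if G|m <bot::t> and G|k <top::t>, then G|(max(m,k)+1) t.
# Indeed, a (max(m,k)+1)-bit extension of t factors through <b::t> for some
# bit b, and barring witnesses can be weakened upward by monotonicity of G.
ok = True
for size in range(4):
    for t in map(list, product([0, 1], repeat=size)):
        for m in range(3):
            for k in range(3):
                if bars(G, [0] + t, m) and bars(G, [1] + t, k):
                    ok = ok and bars(G, t, max(m, k) + 1)
print(ok)  # -> True
```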

Lemma 5 ∀ s∈N*. G(f(s)) → acc(s) → ∃ m. G|m f(s). Proof. G(f(s)) iff G|0 f(s).

Remark 6 The given lemmas, and the functions f and acc, are more general than needed for (D-BI), since they take input of type N* instead of 2*.

References

[1] Thierry Coquand. A note on the open induction principle, 1997.

[2] Anne S. Troelstra and Dirk van Dalen. Constructivism in Mathematics: An Introduction, volumes I and II. Volumes 121 and 123 of Studies in Logic and the Foundations of Mathematics. North-Holland, 1988.


from danko

Published on Apr. 19th, 2009 on

Some time ago I came upon a mention of Ishihara's proof technique in constructive analysis and a claim:

In [6], Ishihara introduced the following two lemmas in which, surprisingly, we can use completeness to make a decision which at first sight would seem to be impossible with purely constructive techniques (that is, with intuitionistic, rather than classical, logic). (Bridges, van Dalen, Ishihara: Ishihara’s proof technique in constructive analysis)

One does not have to be interested in mathematical analysis to appreciate this; it is enough to think about a sequence of computation steps and a measure of “distance” between every two consecutive steps. If one can prove that, for a given sequence, this distance gets smaller and smaller as time goes on, we say that the computation converges, or is Cauchy. When all such converging processes of computation are available as inhabitants of some universe X, we can say that X is a complete metric space.

Here I want to give a simpler (more predicative) proof of Bridges, van Dalen, and Ishihara's Proposition 1, one that more obviously gives rise to a program than theirs.

Proposition 1. If X is a complete metric space, X = P ∪ Q, (x_n) is a sequence in X that converges to x, and we have ∀y:X. x#y ∨ y∉Q, then (∀n. x_n ∈ P) or (∃n. x_n ∈ Q).

The role of x and y as members of X is to stand for sequences (x_n) and (y_n) together with proofs that the sequences make progress as time progresses, i.e. that they are Cauchy/converging. The relation '#' is called apartness, and it means that the two computation processes can be observed not to be 'the same'.

Proof. Start with the sequence (x_n). Define the binary sequence λ to be 0 as long as the elements of the sequence (x_n) are in P, and to become 1 forever as soon as one of the elements of (x_n) is in Q.

Now define the sequence (y_n) to be x as long as λ_n is 0, and to become x_k as soon as λ_n becomes 1, where k is the smallest natural number at which λ becomes 1, i.e. the k-th element of the sequence (x_n) is in Q. The sequence (y_n) we defined is very similar to a constant sequence: it is either x forever, or it is x for a finite number of initial elements and then becomes x_k forever.

In any case (y_n) is Cauchy, so we can think of it as an entity y from the universe X. We now use the hypothesis given in Proposition 1 and look at the two cases:

  1. if x#y, then there is a k, the smallest number such that λ_k = 1, which implies x_k ∈ Q;

  2. if y∉Q, then we can show that ¬∃n. λ_n = 1 and, since λ is decidable, ∀n. λ_n = 0, which implies ∀n. x_n ∈ P.

QED
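The proof reads naturally as a program; here is a Python sketch of the two sequences it constructs (names are mine, and only finite approximations of the infinite objects are computed): lam computes λ_n from a decidable test for membership in Q, and y follows the limit x until λ flips to 1, then freezes at the first x_k that landed in Q.

```python
def lam(x, in_Q, n):
    """lambda_n = 1 iff some x_k with k <= n lies in Q (Q must be decidable)."""
    return 1 if any(in_Q(x(k)) for k in range(n + 1)) else 0

def y(x, x_limit, in_Q, n):
    """y_n: the limit x while lambda is still 0, then x_k forever, where
    k is the first index with x_k in Q."""
    for k in range(n + 1):
        if in_Q(x(k)):
            return x(k)
    return x_limit

# Toy instance: x_n = 1/(n+1), converging to 0; Q = the points below 1/4.
x = lambda n: 1.0 / (n + 1)
in_Q = lambda v: v < 0.25
```

With these choices the first element in Q is x_4 = 0.2, so λ is 0 up to index 3 and 1 from index 4 on, and (y_n) is the limit 0 up to index 3 and 0.2 afterwards.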

Apparently, in constructive analysis, Proposition 1 (and Proposition 2, which was not shown) are used as constructive instances of the general law of excluded middle. Are they useful in logic and computer science?


from danko

Published on Nov. 22nd, 2008 on

The #fan theorem is a theorem of Brouwer's original intuitionistic analysis, which follows from the principle of bar induction. In a classical setting it is also a theorem, thanks to the principle of excluded middle.

In constructive type theory it is not a theorem (yet), although by adding a principle of bar induction we can formally prove it.

My interest in the fan theorem comes from some specific places where it appears, like the completeness theorem of Wim Veldman, or giving a type-theoretic explanation of backtracking. Hence I am interested in specific instances of it and their computational content.

I was wondering, however, what the problem is with the general formulation, and that is why I tried to express it in #Coq, for the particular fan of Cantor space. It boils down to giving a proof of the following goal:

B : (nat -> option bool) -> Prop
bB : forall a : nat -> bool, exists l : nat, B (IS a l)
mB : forall (a : nat -> bool) (l : nat), B (IS a l) -> B (IS a (S l))
============================
exists n : nat, forall a : nat -> bool, B (IS a n)
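My reading of this goal (assuming IS a l denotes the length-l initial segment of a, presented as a nat -> option bool): bB says every branch is eventually in B, mB says B is preserved when a segment grows, and the conclusion asks for one level n that works uniformly for all branches. For a decidable B one can at least search for such a level by brute force over the finitely many nodes of each level of the fan; a small Python sketch of that search (all names here are mine, not from the post):

```python
from itertools import product

def uniform_bound(barred, max_level):
    """Search for a level n at which *every* length-n bit string satisfies
    `barred` (a decidable stand-in for B on finite segments); return None
    if no such level exists up to max_level."""
    for n in range(max_level + 1):
        if all(barred(list(s)) for s in product([0, 1], repeat=n)):
            return n
    return None

# Toy bar: a segment is barred once it contains a 1 or reaches length 3.
barred = lambda s: 1 in s or len(s) >= 3
```

Here every branch is barred by level 3 at the latest, and the search finds exactly that uniform bound.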


from danko

Published on Nov. 13th, 2008 on

Forcing is a model theoretic technique used to prove that adding certain axioms to classical set theory does not produce (new) inconsistencies in the resulting theory; or that certain statements are neither provable nor disprovable from the axioms of set theory.

It was famously invented and used by Paul Cohen in 1962 for his proof of the independence of the axiom of choice and the continuum hypothesis from the axioms of Zermelo-Fraenkel.

By the completeness theorem for first-order logic (of which set theory is an instance), one can think of forcing as a procedure for transforming a deduction in a formal system to a deduction in another formal system. My doctoral thesis supervisor and I are interested in finding out what is the computational content of such a transformation.

In the book “Foundations of Constructive Mathematics”, Michael Beeson devotes a chapter to forcing in which he translates the technique to a constructive setting in which no notion of a model is involved, thereby allowing one to read his exposition as a procedure for transforming proofs.

His interest is in obtaining conservative extension results. More precisely, he gives a general methodology for showing a theory T to be a conservative extension of a theory S. He uses this methodology to give a new (we are talking about 1979 here) and conceptually clearer proof of Goodman's theorem which says that HA^ω+AC is a conservative extension of HA^ω (which is conservative over HA). HA is Heyting arithmetic, HA^ω is higher-order Heyting arithmetic and AC is the axiom of choice.

The method is to show that the following chain of interpretations is sound: HA^ω → HA^ω+AC → HA^ω. The first arrow is a “realisability interpretation” of HA^ω inside HA^ω+AC. The second one is a forcing translation.

I have not worked out the exact translation, because I would prefer to spend the time working out a newer computational-content result, namely the one of Berardi-Bezem-Coquand from “On the computational content of the axiom of choice”, but I would like to approach answering a question posed by my supervisor:

Is #forcing a #monad?

The question is about monads as in programming language theory, not monads from category theory, because we are interested in computational content. Hence we are expected to provide:

  1. a type constructor that encloses the computation going on inside the monad

  2. a unit operation which creates the initial monad environment

  3. a binding operation which composes consecutive computational steps inside the monad

I think that from Beeson's exposition in Chapter 15 of the mentioned book, these components are already identified and separated.

Namely, he begins with a constructive theory T from which he creates an extension, a theory called Ta, which contains an additional constant 'a' and some axioms concerning this constant. Now the point of the forcing transformation is to associate to each formula of Ta a formula of T.

He fixes a “definable” partial order (C,<) with a minimal element 0. Definable means that both C and < are “given by a formula of T, and T proves that (C,<) is a partial order with a minimal element 0”.

The type C is also called “a set of conditions”.

Now he specifies what forcing is supposed to look like, separately for atomic formulas and for composite ones. To make it short (but see below for a Coq specification), atomic formulas are expected to satisfy monotonicity and substitution, while composite formulas are expected to satisfy the usual forcing conditions. Beeson then gives a general result for any formula transformation which satisfies these conditions, and calls it a soundness theorem: if A is provable in Ta, then “for all p there exists q extending p such that q 'forces' A” is provable in T.

Going back to monads, I think that the given specification determines, type-theoretically, a class of monad types: the composition operation of a particular monad type handles composite formulas, while the unit operation of a particular monad type “initializes” the monad with a translation for the atomic formulas.
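As a purely speculative illustration of that reading (in Python rather than Coq, with conditions modeled as mere sets of committed “facts” — none of this is from Beeson): a forcing computation at a condition p returns a stronger condition q together with a value, mirroring the {r : C & ext r q * ...} pattern, and unit/bind thread conditions through consecutive steps.

```python
def ext(q, p):
    """q extends p: here simply q commits at least the facts of p."""
    return p <= q

def unit(a):
    """Wrap a value without strengthening the current condition."""
    return lambda p: (p, a)

def bind(m, f):
    """Run m at condition p, then run f on its value at the strengthened
    condition that m produced."""
    def g(p):
        q, a = m(p)
        return f(a)(q)
    return g

def commit(fact):
    """A computation that strengthens the condition by one fact."""
    return lambda p: (p | {fact}, fact)

prog = bind(commit("a=0"),
            lambda u: bind(commit("b=1"),
                           lambda v: unit((u, v))))
q, val = prog(frozenset())
```

Running prog from the empty condition yields the condition {a=0, b=1} and the value ('a=0', 'b=1'); on this toy model the forcing monad behaves like a state monad over the poset of conditions.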

The two operations have to be given for each formal system separately and they are given for the concrete formal system Beeson is using inside his soundness theorem. (p. 348)

Here is a type-checked Coq definition of what seems to be a possible type of a forcing monad:

Program Fixpoint forces (f:formula) (p:C) (af:formula->C->Set)
  {measure depth f} : Type :=
  match f with
  | (atom P t) => monotonicity af (atom P t) (* to add substitution *)
  | (all A) => forall x:nat, forall q, ext q p ->
                 {r:C & ext r q * forces (A^x) r af}
  | (imp A B) => forall q, ext q p -> forces A q af ->
                 {r:C & ext r q * forces B r af}
  | bot => Empty_set
  end.

Some questions come to my mind at the moment:

  1. Is this methodology useful for proving any conservativity result that concerns constructive type theory (rather than set theory or Heyting arithmetic)?

  2. Is it useful at least as a tool for “applied formal meta-theory”?

  3. Does this make any sense?