% http://www.math.princeton.edu/~nelson/gffm/tex/2.tex
% plain TeX
\input mac
\chap{Understanding Intuitionism}
\centerline{by Edward Nelson}
\centerline{Department of Mathematics}
\centerline{Princeton University}
\centerline{\eightrm http:$\scriptstyle\slash\slash$www.math.princeton.edu$\scriptstyle\slash\scriptscriptstyle\sim$nelson$\scriptstyle\slash$papers.html}
\bigskip
Intuitionism was the creation of L.~E.~J.~Brouwer [Br], and I like
to think that classical mathematics was the creation of Pythagoras.
Imagine a conversation between a classical mathematician and an
intuitionist, say a woman and a man respectively. She speaks first:
\medskip
{\ninepoint I have just proved $\exists x A$.

Congratulations! What is it?

I don't know. I assumed $\forall x\,\neg A$ and derived a contradiction.

Oh. You proved $\neg\forall x\,\neg A$.

That's what I said.
}\medskip
\noindent Or another:
\medskip
{\ninepoint I have proved $A\vee B$.

Good. Which did you prove?

What?

You said you proved $A$ or $B$; which did you prove?

Neither; I assumed $\neg A\amp\neg B$ and derived a contradiction.

Oh, you proved $\neg[\neg A\amp\neg B]$.

That's right. It's another way of putting the same thing.
}\medskip
But he does not agree with her last statement;
they have a different semantics and
a different notion of proof.
This paper is an attempt to understand the differences between them.
I am grateful to Mitsuru Yasuhara for stimulating
discussions of this material and for pinpointing errors and
obscurities in earlier drafts. I also wish to thank Simon Kochen and
Per Martin-L\"of for helpful comments.
\section1.{Incomplete communications}
For a classical mathematician, a closed formula, true in a given
structure, is a complete communication.
It expresses an objective state of affairs in the universe of
discourse; it is
an ontological assertion. But to an intuitionist, a closed true formula
may be an incomplete communication. For an existential formula
he wants an object~$n$ for which $A_x(n)$ holds and further, since
$A_x(n)$ may itself be an incomplete communication,
he wants the information
needed to complete it. Similarly, for $A\vee B$ he
wants to know which, together with the information needed to
complete that communication.
To be specific, let us begin by discussing Arithmetic.
The language~$\cali L$ of Arithmetic consists of the constant~0,
the unary function symbol~S (\ii successor\/.), +, and~$\.$, together
with the logical symbols: the variables, =, and the logical constants
$\neg\ \&\ \forall\ {\to}\ \exists\ \vee$. This language is the same
for classical and intuitionistic arithmetic.
Also the structure for the language, the universe of
discourse to which the language refers, is the same: the universe
is~\bbb N, and the function symbols denote the usual arithmetical
operations. A \ii numeral\/. is a term containing only~S and~0,
and each element of~\bbb N\ is denoted by a unique numeral and
vice versa.
Observe that a closed atomic formula~$C$ of $\cali L$ is
an equation $a=b$ where $a$ and~$b$ are variable-free terms.
Any variable-free term reduces to a unique numeral when the usual laws
$$\eqalign{
x+0&=x\cr
x+\S y&=\S(x+y)\cr
x\.0&=0\cr
x\.\S y&=(x\.y)+x\cr
}$$
are applied repeatedly from left to right. We say that $C$~is
\ii true\/. in case $a$ and~$b$ reduce to the same numeral.
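This reduction of variable-free terms to numerals can be sketched as a short program; the nested-tuple representation of terms below is an assumption of the sketch, not notation from the text.

```python
# A sketch of term reduction: 0 is the numeral zero, ('S', t) is the
# successor of t, and ('+', a, b), ('*', a, b) are sum and product.
# (This encoding is an assumption, chosen for the sketch.)

def reduce_term(t):
    """Reduce a variable-free term to a numeral by the four laws."""
    if t == 0:
        return 0
    if t[0] == 'S':
        return ('S', reduce_term(t[1]))
    a, b = reduce_term(t[1]), reduce_term(t[2])
    if t[0] == '+':
        # x + 0 = x  and  x + Sy = S(x + y)
        return a if b == 0 else ('S', reduce_term(('+', a, b[1])))
    if t[0] == '*':
        # x . 0 = 0  and  x . Sy = (x . y) + x
        return 0 if b == 0 else reduce_term(('+', ('*', a, b[1]), a))

def numeral(n):
    """The numeral denoting the natural number n."""
    return 0 if n == 0 else ('S', numeral(n - 1))

def is_true(a, b):
    """A closed atomic formula a = b is true iff both sides reduce to the same numeral."""
    return reduce_term(a) == reduce_term(b)
```

For example, `is_true(('*', numeral(2), numeral(3)), numeral(6))` holds, since both sides reduce to the numeral SSSSSS0.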
Negation can be eliminated in both classical and intuitionistic
arithmetic: replace $\neg A$ by $A\to \S0=0$.
Here is an attempt, following Kleene [Kl,~\Sec82],
to put into words the information an intuitionist
needs to complete the communication in
a true closed formula~$C$ of Arithmetic. The description is by recursion
on the complexity of~$C$:\medskip
(I1)~Let $C$ be atomic: no additional information.
\smallskip
(I2)~Let $C$ be $A\amp B$: the information needed to complete~$A$ and
the information needed to complete~$B$.
\smallskip
(I3)~Let $C$ be $\forall x A$: a general method yielding for any
numeral~$n$ the information needed to complete $A_x(n)$.
\smallskip
(I4)~Let $C$ be $A\to B$: a general method yielding for any information
completing~$A$ the information completing~$B$.
\smallskip
(I5)~Let $C$ be $\exists x A$: a numeral $n$ together with the
information needed to complete $A_x(n)$.
\smallskip
(I6)~Let $C$ be $A\vee B$: 1 or 2, indicating first or second disjunct,
together with the information needed to complete it.
\medskip
Thus we see that the intuitionistic notion of truth is far
richer than the classical: rather than one bit of information
(true) it may specify
numbers, choices of disjuncts, and general methods.
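The clauses (I1)--(I6) describe a recursive data shape, which can be sketched as a checker. The tuple encoding of formulas is an assumption of the sketch; the clause for~$\to$ is omitted, since it calls for a method acting on arbitrary completing information; and the universal clause is only spot-checked on finitely many numerals, whereas a genuine general method covers them all.

```python
# Hypothetical sketch of the completing information of (I1)-(I6) as
# nested Python data.  Formulas are tuples: ('atom', a, b) for a = b
# with both sides already evaluated, ('and', A, B), ('or', A, B),
# and ('all', body), ('ex', body) where body maps a numeral to a
# formula.  The encoding is an assumption of this sketch.

def complete(formula, info):
    """Check that `info` completes `formula`; universal clauses are
    spot-checked on the numerals 0..4 only (a real general method
    covers all numerals)."""
    tag = formula[0]
    if tag == 'atom':                 # (I1): no additional information
        return formula[1] == formula[2]
    if tag == 'and':                  # (I2): a completion for each conjunct
        return complete(formula[1], info[0]) and complete(formula[2], info[1])
    if tag == 'all':                  # (I3): a method numeral -> completion
        return all(complete(formula[1](n), info(n)) for n in range(5))
    if tag == 'ex':                   # (I5): a witness plus a completion
        n, rest = info
        return complete(formula[1](n), rest)
    if tag == 'or':                   # (I6): a position 1 or 2 plus a completion
        pos, rest = info
        return complete(formula[pos], rest)

# Completing the true formula  forall x exists y (y = x + x):
doubling = ('all', lambda x: ('ex', lambda y: ('atom', y, x + x)))
method = lambda x: (2 * x, None)      # witness 2x, nothing more for the atom
```

Here `complete(doubling, method)` holds: the information completing a universal-existential statement is precisely a witness function.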
\section2.{Codes}
Kleene [Kl45] [Kl,~\Sec82] made an incisive analysis of intuitionistic
semantics based on the notion of realization. I shall
express this notion in a different formalism that avoids the
intricacies of recursive partial functions and G\"odel
numbering.
We introduce ten \ii additional function symbols., written here
with variables to indicate the placement of arguments:
$$0\quad1\quad 2\quad \Pair(x,y)\quad \pi_1 x\quad \pi_2 x\quad
x\{y\}\quad \gamma(x,y,z)\quad \lambda xy\quad \rho(x,y)$$
called \ii zero\/., \ii first position., \ii second position.,
\ii pair., \ii first projection.,
\ii second projection., \ii evaluation., \ii choice.,
\ii lambda\/., and
\ii recursion.. Recursion is special to Arithmetic.
Let $\cali L'$ be~$\cali L$ together with
the additional function symbols.
I have called $\lambda$ a function symbol but in many respects it is
like a quantifier symbol.
A \ii code\/. is a term~$c$ of~$\cali L'$ such that
for every occurrence of~$\lambda$ in~$c$, its first argument is a
variable. The notions of \ii $\lambda$-bound\/. and
\hbox{\ii $\lambda$-free\/.} occurrences of variables in codes, of
\ii $\lambda$-closed\/. codes, of a code being
\ii $\lambda$-substitutable\/. for a variable in another code, and of
a \ii $\lambda$-variant\/. of a code are all defined by strict
analogy with the corresponding notions for quantifier symbols.
If $a$ and~$b$ are codes and $x$~is a variable, $a_x(b)$ is
the code obtained by substituting~$b$ for each $\lambda$-free
occurrence of~$x$ in~$a$; when we use this notation, we assume
that $b$~is $\lambda$-substitutable for~$x$ in~$a$ (otherwise,
first replace~$a$ by a $\lambda$-variant~$a'$ such that $b$ is
$\lambda$-substitutable for~$x$ in~$a'$).
It will not be necessary to distinguish notationally
between the additional constant~0 and the constant~0
of \cali L. We call 1 and~2 \ii positions..
The projections are tightly binding, so $\pi_1c\{n\}$ is parsed
as $(\pi_1c)\{n\}$. Parentheses are used when necessary for grouping.
Unless indicated otherwise by parentheses, the scope of an
occurrence of~$\lambda x$ is the entire following expression.
The meaning of the additional function symbols is expressed
by the following \ii reduction rules for codes\/.:
\medskip\goodbreak
{\openup1\jot
\halign{\qquad\qquad\hfil#&\quad$#$\hfil&\quad$#$\hfil\cr
&\hbox{replace:}&\hbox{by:}\cr
(R1)&\pi_1\Pair(a,b)&a\cr
(R2)&\pi_2\Pair(a,b)&b\cr
(R3)&\gamma(a,b,\Pair(1,c))&a\{c\}\cr
(R4)&\gamma(a,b,\Pair(2,c))&b\{c\}\cr
(R5)&(\lambda xa)\{b\}&a_x(b)\cr
(R6)&a+0&a\cr
(R7)&a+\S b&\S(a+b)\cr
(R8)&a\.0&0\cr
(R9)&a\.\S b&(a\.b)+a\cr
(R10)&\rho(\S n,c)&\pi_2 c\{n\}\{\rho(n,c)\}\cr
(R11)&\rho(0,c)&\pi_1 c\cr}
\medskip
}
The rules (R1) and (R2) say that the first or second projection
of a pair is the corresponding member of the pair; (R3) and (R4)
give a choice of whether to evaluate~$a$ or~$b$ on~$c$;
the rule~(R5) is the usual rule of the
$\lambda$-calculus. The remaining rules are special to Arithmetic.
If $a$ is a variable-free term of~$\cali L$,
the rules (R6)--(R9) reduce it to a
numeral~$n$, and then applications of (R10) and~(R11) eliminate
this occurrence of~$\rho$ by recursion.
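The rules (R10) and (R11) are just primitive recursion, and can be sketched directly with Python values standing in for codes; the representation of the pair $c$ as a Python tuple, with a curried function for the step, is an assumption of this sketch.

```python
# Sketch of (R10)-(R11): c is a pair (base, step), where base plays the
# role of pi_1 c and step(n)(prev) plays the role of pi_2 c{n}{rho(n, c)}.

def rho(n, c):
    base, step = c
    if n == 0:
        return base                      # (R11): rho(0, c) -> pi_1 c
    return step(n - 1)(rho(n - 1, c))    # (R10): rho(Sn, c) -> pi_2 c{n}{rho(n, c)}

# factorial as an example: base 1, step taking n and the previous value
factorial_code = (1, lambda n: lambda prev: (n + 1) * prev)
```

Then `rho(4, factorial_code)` computes $4! = 24$ by unwinding (R10) four times and finishing with (R11).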
A code is \ii irreducible\/. in case none of these reduction rules
applies to any code occurring in it.
Notice that for any term of~$\cali L'$, if the arguments of the term
are irreducible then at most one reduction rule applies to it.
The \ii reduction algorithm for codes\/. is this: repeatedly
apply the reduction
rules from right to left (that is, starting at the leaves of the
tree structure of the term). Because of~(R5), this algorithm may not
terminate. A code is
\ii terminating\/. in case the algorithm terminates, in which
case we say that the code \ii reduces to\/. the irreducible code
with which the algorithm terminates. Let \san r\ be
the function, with domain the set of all terminating codes, whose
value $\san r c$ at a terminating code~$c$ is the code to which $c$~reduces. In our notation
\san r~binds less tightly than all the function symbols of~$\cali L'$,
so that $\san r c\{a\}$ is parsed as $\san r(c\{a\})$.
An example of a non-terminating code is
$(\lambda xx\{x\})\{\lambda xx\{x\}\}$,
which keeps on repeating itself, and there are others with
explosive growth. But if $c$~is terminating,
this can be effectively determined and the value $\san r c$ can be
effectively computed.
This is a concrete, syntactical formalism, implementable by computer.
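As a small illustration of implementability, here is a leaves-first reducer for (R1), (R2), and (R5). The tuple encoding is an assumption of this sketch; substitution is naive, which is safe as long as the substituted arguments are $\lambda$-closed, as in the paper's usage; and the sketch does not reduce under~$\lambda$.

```python
# Codes as tuples: ('var', x), ('lam', x, body), ('ev', f, a),
# ('pair', a, b), ('p1', c), ('p2', c); other values are left alone.

def subst(code, var, arg):
    """Substitute arg for the lambda-free occurrences of var."""
    if code == ('var', var):
        return arg
    if isinstance(code, tuple):
        if code[0] == 'lam' and code[1] == var:
            return code                  # var is lambda-bound here
        return (code[0],) + tuple(
            subst(c, var, arg) if isinstance(c, tuple) else c for c in code[1:])
    return code

def reduce_code(code):
    """Apply (R1), (R2), (R5) repeatedly, starting at the leaves."""
    if not isinstance(code, tuple) or code[0] in ('var', 'lam'):
        return code
    head = (code[0],) + tuple(
        reduce_code(c) if isinstance(c, tuple) else c for c in code[1:])
    arg = head[1] if isinstance(head[1], tuple) else None
    if head[0] == 'p1' and arg and arg[0] == 'pair':   # (R1)
        return reduce_code(arg[1])
    if head[0] == 'p2' and arg and arg[0] == 'pair':   # (R2)
        return reduce_code(arg[2])
    if head[0] == 'ev' and arg and arg[0] == 'lam':    # (R5)
        return reduce_code(subst(arg[2], arg[1], head[2]))
    return head
```

Running this on the encoding of $(\lambda xx\{x\})\{\lambda xx\{x\}\}$ would, of course, recurse forever, mirroring the non-terminating code in the text.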
Now we describe the relation between a code $c$ and a formula~$C$.
This can be expressed by a program \san B. I believe it to be a
faithful expression of Brouwer's intent when he formulated
intuitionism. It is
not an algorithm but an interactive program, since in general it
will prompt from time to time for input during its execution.
Let $c$~be a $\lambda$-closed code and
$C$~a closed formula of~$\cali L$.
Then the action of $\san B(c,C)$ is described as follows:
\medskip
(B1)~Let $C$ be atomic. Then $\san B(c,C)$ reduces the two variable-free
terms in the equation~$C$ to numerals and prints ``true'' if they are
the same and ``false'' otherwise. This does not depend on~$c$.
\smallskip
(B2)~Let $C$ be $A\amp B$. Then $\san B(c,C)$ splits into two programs,
running independently of each other, namely
$\san B(\pi_1 c,A)$ and $\san B(\pi_2 c,B)$.
\smallskip
(B3)~Let $C$ be $\forall x A$. Then $\san B(c,C)$ prompts for a
numeral~$n$ and runs $\san B\big(c\{n\},A_x(n)\big)$.
\smallskip
(B4)~Let $C$ be $A\to B$. Then $\san B(c,C)$ prompts for a
$\lambda$-closed
code~$a$ and forms a pipe: it runs $\san B(a,A)$ and if this terminates
(and since it is an interactive program, that may depend
on the responses to prompts), then it runs $\san B(c\{a\},B)$.
\smallskip
(B5)~Let $C$ be $\exists x A$.
Then $\san B(c,C)$ prints $\san r \pi_1 c$
(if $\pi_1 c$ is terminating; otherwise, the program does not terminate)
and then runs the program
$\san B\big(\pi_2 c,A_x(\san r \pi_1 c)\big)$
(if $\san r \pi_1 c$ is a numeral; otherwise it prints an error
message).
\smallskip
(B6)~Let $C$ be $A\vee B$. Then $\san B(c,C)$ prints $\san r \pi_1 c$
(if $\pi_1 c$ is terminating; otherwise, the program does not terminate)
and then runs the program
$\san B(\pi_2 c,A)$ if $\san r \pi_1 c$ is~1,
$\san B(\pi_2 c,B)$ if $\san r \pi_1 c $ is~2, and otherwise prints
an error message.
\medskip
We call $\forall$ and~$\to$ \ii input operators\/. and
$\exists$ and $\vee$ \ii output operators..
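The action (B1)--(B6) can be sketched as a runnable program under some assumptions: formulas are encoded as tuples (with quantified bodies as maps from numerals to formulas, and atomic equations already evaluated), Python values stand in for codes (pairs are tuples, evaluation is function call), prompts are read from an iterator, and output is collected in a list rather than printed.

```python
# Sketch of the interactive program B.  Formulas: ('atom', a, b),
# ('and', A, B), ('or', A, B), ('imp', A, B), ('all', body),
# ('ex', body).  All names and encodings here are assumptions.

def B(c, formula, prompts, out):
    tag = formula[0]
    if tag == 'atom':                    # (B1): decide the equation
        out.append('true' if formula[1] == formula[2] else 'false')
    elif tag == 'and':                   # (B2): two independent runs
        B(c[0], formula[1], prompts, out)
        B(c[1], formula[2], prompts, out)
    elif tag == 'all':                   # (B3): prompt for a numeral
        n = next(prompts)
        B(c(n), formula[1](n), prompts, out)
    elif tag == 'imp':                   # (B4): prompt for a code, then pipe
        a = next(prompts)
        B(a, formula[1], prompts, out)
        B(c(a), formula[2], prompts, out)
    elif tag == 'ex':                    # (B5): print the witness, then go on
        out.append(c[0])
        B(c[1], formula[1](c[0]), prompts, out)
    elif tag == 'or':                    # (B6): print the position, then go on
        out.append(c[0])
        B(c[1], formula[c[0]], prompts, out)

# forall x exists y (y = x + x), with the code  lambda n. Pair(2n, 0):
doubling = ('all', lambda x: ('ex', lambda y: ('atom', y, x + x)))
code = lambda n: (2 * n, 0)
```

Running `B(code, doubling, iter([3]), out)` consumes the prompt response 3 at the input operator $\forall$ and then emits, at the output operator $\exists$, the witness 6 followed by "true".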
\section3.{Realization}
The rules (B1)--(B6) are concrete,
syntactical rules implementable by computer.
But now we ask, what is a good code for a formula?
When does a code~$c$ express the intuitionistic truth of the
formula~$C$?
Let $c$~be a terminating
$\lambda$-closed code and $C$~a closed formula of~$\cali L$.
Here is Kleene's definition of $c$ \ii realizes\/.~$C$,
abbreviated $c \rz C$, by recursion on the complexity of~$C$.
It is expressed in our formalism and slightly
modified in the first clause (the realization code for a true closed
atomic formula can be arbitrary, not necessarily~0):
\medskip\goodbreak
(K1)~Let $C$ be atomic. Then $c \rz C$ in case $C$ is true.
\smallskip
(K2)~Let $C$ be $A\amp B$. Then $c \rz C$ in case $\pi_1 c \rz A$
{\it and\/} $\pi_2 c \rz B$.
\smallskip
(K3)~Let $C$ be $\forall x A$. Then $c \rz C$ in case {\it for all\/}
numerals~$n$, $c\{n\} \rz A_x(n)$.
\smallskip
(K4)~Let $C$ be $A\to B$. Then $c \rz C$ in case {\it for all\/}
$\lambda$-closed codes~$a$, $a \rz A$ {\it implies\/} $c\{a\} \rz B$.
\smallskip
(K5)~Let $C$ be $\exists x A$. Then $c \rz C$ in case $\san r \pi_1 c$
is a numeral {\it and\/} $\pi_2 c \rz A_x(\san r \pi_1 c)$.
\smallskip
(K6)~Let $C$ be $A\vee B$. Then $c \rz C$ in case $\san r \pi_1 c$
is a position, {\it and\/} $\san r \pi_1 c$ is~1 {\it implies\/}
$\pi_2 c \rz A$, {\it and\/} $\san r \pi_1 c$ is~2 {\it implies\/}
$\pi_2 c \rz B$.
\smallskip
(K*) For a formula $C$ that is not closed, let $x_1$, \dots, $x_i$
be the distinct variables occurring free in~$C$, say in the order of
first free occurrence. Then $c$ \ii realizes\/.~$C$ in
case $\lambda x_1\ldots\lambda x_i c$
realizes $\forall x_1\ldots\forall x_i C$ (so $c$~is
terminating and $\lambda$-closed). Equivalently, for all numerals
\hbox{$n_1$, \dots,~$n_i$}, we have
$c_{x_1,\ldots, x_i}(n_1,\ldots, n_i) \rz
C_{x_1,\ldots, x_i}(n_1,\ldots, n_i)$.
\medskip
Because of the quantifiers {\it for all\/} over infinite domains
(and notice that in~(K4) they can occur in the hypotheses
of implications), realization is an abstract notion.
The following two lemmas are proved by induction on the complexity
of~$C$:
\smallskip
{\it Lemma\/}~1:~Let $C$~be a closed formula of~\cali L.
Then $c \rz C$ if and only if $\san r c \rz C$.
\smallskip
{\it Lemma\/}~2:~Let $A$ be a formula of~\cali L\ whose only free variable
is~$x$, and let $a$~be a variable-free term of \cali L. Let
$\san r a$ be the numeral~$n$. Then $c \rz A_x(a)$ if and only if
$c \rz A_x(n)$.
\smallskip
A formula is \ii classical\/. in case there is no occurrence of an
output operator~$\exists$ or~$\vee$ in it.
Every formula is classically equivalent to a classical formula.
\medskip
{\it Theorem}~1: Let $C$ be a closed classical
formula of~$\cali L$ and let
$c$ be a terminating $\lambda$-closed code. Then $c \rz C$ if and only
if $0 \rz C$.
\smallskip
Proof: By induction on the complexity of~$C$, using (K1)--(K4).~\bul\
(The reasoning does not apply
to non-classical formulas, since \hbox{(K5)--(K6)} restrict the form
of realization codes for them.)
\smallskip
Consequently, we can simply say $C$ is \ii true.,
rather than $c \rz C$, for $C$~a closed classical formula.
Then for closed classical formulas, $A \amp B$ is true if and
only if $A$~is true and $B$ is true; $\forall xA$ is true if and only
if for all numerals~$n$, $A_x(n)$ is true; $A \to B$ is true if and
only if [not $A$ is true] or [$B$ is true].
In short, the classical and intuitionistic semantics are identical
on classical formulas.
Let us remark
that for any closed formula~$C$, classical or not, $c \rz C$
is expressed, in the metalanguage, with only the classical logical
constants {\it and\/}, {\it for all}, {\it implies}. Consequently,
the classical and intuitionistic semantics of $c \rz C$ are identical.
A classical mathematician might say that for a classical formula~$A$,
$\neg\forall x\neg A$ means
that there exists a numeral~$n$ such that $A_x(n)$ is true, but to her
this means exactly the same as saying that not
$\forall x\neg A$ is true,
so there is no dispute over any matter of substance between her
and an intuitionist. She need only defer to his sensibilities by
rephrasing statements in ways that to her are entirely equivalent
and then they will be in complete accord on the semantics of classical
formulas.
We are led to the following conclusion:
the intuitionistic semantics is ontological and Platonic.
This is because the classical semantics
of classical formulas is ontological and Platonic,
but intuitionistic semantics
is identical with classical semantics on classical formulas.
It appears that this conclusion can be challenged only by maintaining
that realization is an unsatisfactory expression of
intuitionistic semantics. We shall return to intuitionistic semantics,
but first we need to discuss some syntactical questions.
\section4.{Intuitionistic proofs}
Let us list the axioms and rules of inference of Intuitionistic
Arithmetic. What follows is a version of Heyting's formalization~[He].
With each axiom or axiom scheme we associate a code;
these codes will be discussed shortly.
The language is $\cali L$. We associate $\to$ from right to left.
In the list, $A$, $B$, and $C$ are formulas, $x$ and~$y$ (possibly
with subscripts) are variables, and $a$~is a term (substitutable
for~$x$ in~$A$). The nonlogical axioms are the usual:
\medskip
{\openup1\jot
\halign{\hfil#&\quad$#$\hfil&\quad$#$\hfil\cr
1.&\neg\,\S x=0&0\cr
2.&\S x=\S y \to x=y&0\cr
3.&x+0=x&0\cr
4.&x+\S y=\S(x+y)&0\cr
5.&x\.0=0&0\cr
6.&x\.\S y=(x\.y)+x&0\cr
7.&A_x(0) \amp \forall x[A \to A_x(\S x)] \to A&\lambda c\rho(x,c)\cr
\noalign{\vskip4pt\hbox{The logical axioms are the identity and equality axioms:}\vskip4pt}
8.&x=x&0\cr
9.&x=y \to \S x=\S y&0\cr
10.&x_1=y_1 \to x_2=y_2 \to x_1+x_2=y_1+y_2&0\cr
11.&x_1=y_1 \to x_2=y_2 \to x_1\.x_2=y_1\.y_2&0\cr
12.&x_1=y_1 \to x_2=y_2 \to x_1=x_2 \to y_1=y_2&0\cr
\noalign{\vskip4pt\hbox{together with axiom schemes for the propositional calculus:}\vskip4pt}
13.&A \to B \to A&\lambda a\lambda ba\cr
14.&[A \to B]\to[A \to B \to C]\to A \to C&\lambda p\lambda q\lambda aq\{a\}\{p\{a\}\}\cr
15.&A \to B \to A \amp B&\lambda a\lambda b\Pair(a,b)\cr
16.&A \amp B \to A&\lambda c\pi_1 c\cr
17.&A \amp B \to B&\lambda c\pi_2 c\cr
18.&A \to A \vee B&\lambda a\Pair(1,a)\cr
19.&B \to A \vee B&\lambda b\Pair(2,b)\cr
20.&[A \to C]\to[B \to C]\to A \vee B \to C&\lambda p\lambda q\lambda r\gamma(p,q,r)\cr
21.&[A \to B]\to[A\to\neg B]\to \neg A&\lambda p\lambda q\lambda aq\{a\}\{p\{a\}\}\cr
22.&\neg A \to A \to B&0\cr
\noalign{\vskip4pt\hbox{and the substitution axiom schemes of the predicate calculus:}\vskip4pt}
23.&A_x(a) \to \exists x A&\lambda y\Pair(a,y)\cr
24.&\forall x A \to A_x(a)&\lambda yy\{a\}.\cr}
\medskip
}
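As a sanity check on a few of these codes, with Python closures standing in for $\lambda$-codes and tuples for pairs (an assumption of this sketch), the propositional codes behave exactly as the reduction rules dictate:

```python
# Codes for axioms 13, 15, 16, 18, with closures for lambda and tuples
# for Pair; function application plays the role of evaluation via (R5).
ax13 = lambda a: lambda b: a          # lambda a lambda b a      : A -> B -> A
ax15 = lambda a: lambda b: (a, b)     # lambda a lambda b Pair(a,b) : A -> B -> A & B
ax16 = lambda c: c[0]                 # lambda c pi_1 c          : A & B -> A
ax18 = lambda a: (1, a)               # lambda a Pair(1, a)      : A -> A v B
```

For instance, `ax13` applied to realizations of $A$ and of $B$ returns the realization of $A$, as the scheme $A \to B \to A$ requires.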
A few comments on the axioms: (13) and (14) justify the use of
deductions (introduction and discharge of hypotheses); with
the elimination of~$\neg$, (21)~is the special case of~(14) in
which $C$~is $\S0=0$; if the axiom~(22) is changed to
($22'$)\hskip1pt: $\neg\neg A \to A$, then the resulting theory is
classical Peano Arithmetic [Kl,~\Sec19].
There are three rules of inference:
\medskip
{\openup1\jot
\halign{\hfil#&\quad$#$\hfil&\quad#\hfil\cr
25.&A,\ A\to B \ent B\cr
26.&A \to B \ent \exists x A \to B&($x$ not free in $B$)\cr
27.&A \to B \ent A \to \forall x B&($x$ not free in $A$).\cr}
\medskip
}
To each of these rules of inference we associate a \ii rule for
constructing codes.. For (25), let $y_1$, \dots,~$y_j$ be the
variables occurring free in~$A$ but not in~$B$; for (26) and~(27),
let $y$~be a variable that does not occur in $A$ or~$B$. Then:
\medskip
if $a$ is a code associated with~$A$ and $c$~is a code associated
with \hbox{$A\to B$}, associate
$(c\{a\})_{y_1\ldots y_j}(0\ldots0)$ with~$B$;
\smallskip
if $c$~is associated with $A\to B$ and $x$ is not free in~$B$,
associate
$\lambda y(\lambda xc)\{\pi_1 y\}\{\pi_2 y\}$ with
$\exists x A\to B$;
\smallskip
if $c$~is associated with $A\to B$ and $x$ is not free in~$A$,
associate
$\lambda y\lambda x c\{y\}$ with $A\to\forall x B$.
\medskip
We use Kleene's notation $S$ for this intuitionistic theory.
A simple algorithm checks whether a purported intuitionistic proof
in~$S$ is indeed a proof, associates to each axiom its code,
and associates to each formula derived by a rule of inference the
code constructed according to the rules for constructing codes.
Thus the algorithm associates to each proof in~$S$ a code
for the theorem of which it is a proof. This is concrete
and syntactical, implementable by computer.
The codes for the axioms and code construction rules for the rules
of inference are due to David Nelson. (He and I are unrelated.)
At Kleene's suggestion, Nelson undertook to establish that each
intuitionistic theorem of Arithmetic is realizable, and achieved
this in~[Ne].
Let us give a proof in our formalism of Nelson's
theorem. First we need to show that the associated codes
realize the axioms.
By~(K*) and~(K3), we
assume that in each axiom the free variables are replaced by numerals,
with the same replacements for the $\lambda$-free variables in the
associated code.
Since each variable-free instance of (1)--(6) is true, 0~realizes
each of them, and similarly for the identity and equality axioms
(8)--(12).
\smallskip
7:~By (K*) we assume that $x$ is the only free variable in~$A$.
Suppose $c \rz A_x(0) \amp \forall x[A \to A_x(\S x)]$. To show:
$\rho(n,c) \rz A_x(n)$ (where $n$~is a numeral). The proof is by
induction on~$n$, and the beauty of the argument is that a single
induction yields the realization of all induction axioms. By~(R11),
$\san r \rho(0,c)$ is $\san r \pi_1c$. By~(K2)
and Lemma~1, this realizes~$A_x(0)$.
Suppose $\rho(n,c) \rz A_x(n)$. To show: $\rho(\S n,c) \rz A_x(\S n)$.
By~(R10), $\san r \rho(\S n,c)$ is $\san r \pi_2c\{n\}\{\rho(n,c)\}$.
By~(K2) and~(K3), $\pi_2c\{n\} \rz A_x(n) \to A_x(\S n)$, so by~(K4)
and the induction hypothesis $\pi_2c\{n\}\{\rho(n,c)\} \rz A_x(\S n)$.
The proof is complete by Lemma~1.
\smallskip
In the following proofs, such frequent use is made of the reduction
rule~(R5), for $(\lambda xa)\{b\}$, and~(K4), for~$\to$, that
no explicit reference is made to these rules.
\smallskip
13:~Suppose $a' \rz A$. To show: $\lambda ba' \rz B \to A$. Suppose
$b' \rz B$. To show: $a' \rz A$. But it does, by hypothesis.
\smallskip
14:~Suppose $p' \rz A \to B$. To show: $\lambda q\lambda a
q\{a\}\{p'\{a\}\} \rz [A \to B \to C] \to A \to C$. Suppose
$q' \rz A \to B \to C$. To show: $\lambda a q'\{a\}\{p'\{a\}\} \rz
A \to C$. Suppose $a' \rz A$. To show: $q'\{a'\}\{p'\{a'\}\} \rz C$.
But since $a' \rz A$ and $q' \rz A \to B \to C$, we have
$q'\{a'\} \rz B \to C$. Also, since $p' \rz A \to B$, we have
$p'\{a'\} \rz B$. Consequently, $q'\{a'\}\{p'\{a'\}\} \rz C$.
\smallskip
15:~Suppose $a' \rz A$. To show: $\lambda b\Pair(a',b) \rz B \to
A \amp B$. Suppose $b' \rz B$. To show: $\Pair(a',b') \rz A \amp B$.
But it does, by~(K2).
\smallskip
16:~Suppose $c' \rz A \amp B$. To show: $\pi_1c' \rz A$. This holds
by~(K2).
\smallskip
17:~Suppose $c' \rz A \amp B$. To show: $\pi_2c' \rz B$. This holds
by~(K2).
\smallskip
18:~Suppose $a' \rz A$. To show: $\Pair(1,a') \rz A \vee B$.
By~(R1), $\san r \pi_1\Pair(1,a')$ is~1, so this holds by~(K6).
\smallskip
19:~Suppose $b' \rz B$. To show: $\Pair(2,b') \rz A \vee B$.
By~(R1), $\san r \pi_1\Pair(2,b')$ is~2, so this holds by~(K6).
\smallskip
20:~Suppose $p' \rz A \to C$. To show: $\lambda q\lambda r
\gamma(p',q,r) \rz [B \to C] \to A \vee B \to C$. Suppose
$q' \rz B \to C$. To show: $\lambda r \gamma(p',q',r) \rz A \vee B
\to C$. Suppose $r' \rz A \vee B$. To show: $\gamma(p',q',r') \rz C$.
By the reduction algorithm for codes (which works from right to left)
and Lemma~1, we assume that $p'$, $q'$, and~$r'$ are irreducible.
By~(K6), $\san r \pi_1 r'$ is a position, and there are two cases.
First, suppose $\san r \pi_1 r'$ is~1. Since $r'$~is irreducible,
by~(R1) it is of the form $\Pair(1,a)$ where $a$~is irreducible.
By~(K6), $a \rz A$. Since $p' \rz A \to C$, $p'\{a\} \rz C$.
But $\san r \gamma(p',q',r')$---that is,
$\san r \gamma(p',q',\Pair(1,a))$---is $\san r p'\{a\}$ by~(R3).
By Lemma~1, $\gamma(p',q',r') \rz C$. The second case, that
$\san r \pi_1 r'$ is~2, is similar.
\smallskip
21:~This is the special case of (14) in which $C$ is $\S0=0$.
\smallskip
22:~Suppose $c \rz \neg A$; that is, $c \rz A \to \S0=0$. To show:
$0\{c\} \rz A \to B$. That is, we need to show that if $a \rz A$
then $0\{c\}\{a\} \rz B$. But this holds vacuously since the
hypothesis is untenable: if $a \rz A$ then $c\{a\} \rz \S0=0$,
which is impossible by~(K1).
\smallskip
23:~Suppose $b \rz A_x(a)$. To show: $\Pair(a,b) \rz \exists xA$.
Let $n$~be $\san r a$, so that $n$~is a numeral. By Lemma~2,
$b \rz A_x(n)$. By~(K5), $\Pair(a,b) \rz \exists xA$.
\smallskip
24:~Suppose $b \rz \forall xA$. To show: $b\{a\} \rz A_x(a)$.
Let $n$~be $\san r a$, so that $n$~is a numeral. By~(K3),
$b\{n\} \rz A_x(n)$. By Lemma~1, $b\{a\} \rz A_x(n)$, and
by Lemma~2, $b\{a\} \rz A_x(a)$.
\medskip
Finally, we must prove that the rules of inference with the rules
for constructing codes preserve realization.
\smallskip
25:~Suppose $a \rz A$ and $c \rz A \to B$. Let $y_1$, \dots,~$y_j$
be the variables occurring free in~$A$ but not~$B$ and let $b$~be
$(c\{a\})_{y_1\ldots y_j}(0\ldots0)$. To show: $b \rz B$. Let
$a'$, $b'$, $c'$, $A'$, and~$B'$ be the codes and formulas obtained by
replacing the $\lambda$-free and free variables by numerals, but
with the \hbox{$y_1$, \dots,~$y_j$} replaced by~0. By~(K*),
we need to show that $b' \rz B'$.
But $c' \rz A' \to B'$ and $a' \rz A'$, so $c'\{a'\} \rz B'$.
But $c'\{a'\}$ is~$b'$.
\smallskip
26:~Suppose $c \rz A \to B$, where $x$ is not free in $B$. To show:
$\lambda y(\lambda xc)\{\pi_1 y\}\{\pi_2 y\}
\rz \exists xA \to B$.
By~(K*) we assume that $x$~is the only free variable in $A \to B$.
Suppose $d \rz \exists xA$. Then we need to show:
$(\lambda xc)\{\pi_1 d\}\{\pi_2 d\} \rz B$.
By~(K5), $\san r \pi_1 d$ is a numeral~$n$ and
$\pi_2 d \rz A_x(n)$. By~(K*), $c_x(n) \rz A_x(n) \to B$ (this is where
the assumption that $x$~is not free in~$B$ is used), so
$c_x(n)\{\pi_2 d\} \rz B$. But
$\san r (\lambda xc)\{\pi_1 d\}\{\pi_2 d\}$ is
$\san r c_x(n)\{\pi_2 d\}$ by~(R5), so
$(\lambda xc)\{\pi_1 d\}\{\pi_2 d\} \rz B$ by Lemma~1.
\smallskip
27:~Suppose $c \rz A \to B$, where $x$ is not free in $A$. To show:
$\lambda y\lambda x c\{y\} \rz A \to \forall x B$. By~(K*) we assume that
$x$~is the only free variable in $A \to B$. Suppose $a \rz A$.
To show: $\lambda x c\{a\} \rz \forall xB$. Let $n$~be a numeral.
By~(K3), we must show that $(\lambda x c\{a\})\{n\} \rz B_x(n)$.
But $\san r(\lambda x c\{a\})\{n\}$ is $\san r c_x(n)\{a\}$ by~(R5),
since $a$~is $\lambda$-closed (this is where the assumption that $x$~is
not free in~$A$ is used). By~(K*), $c_x(n) \rz A \to B_x(n)$, so
$(\lambda x c\{a\})\{n\} \rz B_x(n)$ by Lemma~1.
\medskip
This concludes the proof of Nelson's theorem.
We can argue that since, by (K1), $\S0=0$ is not realizable, it is not
a theorem of~$S$, so $S$~is consistent. This is entirely analogous
to the classical argument that Peano Arithmetic is consistent
since the axioms are true and the rules of inference preserve truth.
The argument cannot be formalized in~$S$, if $S$~is consistent,
because of the arbitrarily large number of alternating quantifiers
needed to express the realization predicate for formulas of high
complexity. The realization predicate is not arithmetical.
We have seen that a classical mathematician can explain a theorem
to an intuitionist in a manner that satisfies them both, but will
he accept her proof? This question was answered affirmatively,
for Arithmetic, by G\"odel [G\"o33]
[Kl,~\Sec81]. The demonstration is by a very simple algorithm that
transforms a classical proof of a classical formula into an
intuitionistic proof of the same formula. Thus intuitionism is
an extension of classical mathematics in its proof syntax as well as
semantically.
In practice if not in profession, intuitionistic arithmetic
takes classical
arithmetic, with classical semantics and proof syntax, as its base.
Brouwer created two new logical constants: the constructive~$\exists$
and the constructive~$\vee$. The intuitionistic semantics and proof
syntax of $\exists xA$ and $A \vee B$ are quite different from those
of $\neg \forall x \neg A$ and $\neg[\neg A \amp \neg B]$. Stricter
standards of proof are required and richer information is obtained.
Let us call a formula $C$ \ii unconditional\/. in case there is no
occurrence of~$\to$ in it. For an unconditional~$C$, the program
$\san B(c,C)$ is non-interactive; it is an algorithm. An intuitionistic
proof of~$C$ yields a realization code~$c$, and this constructs
recursive functions for the existentially quantified variables with
the values of the universally quantified variables as arguments; the
placement of the quantifiers in~$C$ indicates the dependencies.
This suggests the possibility of using Nelson's algorithm as a means
to construct and verify algorithms in a mode congenial to
mathematical ways of thinking.
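As a concrete (and hypothetical) illustration of such extraction: for the unconditional theorem $\forall x\exists y\,(y = x + x)$, a realization code is, in effect, $\lambda x\Pair(x+x,0)$, and evaluating it at a numeral and taking the first projection yields the witness function. The Python names below are assumptions of the sketch.

```python
# Hypothetical extraction of a witness function from a realization code
# for  forall x exists y (y = x + x);  a closure stands in for the code
# lambda x. Pair(x + x, 0).

witness_code = lambda x: (x + x, 0)   # Pair(witness, code for the atomic part)

def extract_witness(code, n):
    """Run the code at the numeral n and project out the existential witness."""
    return code(n)[0]
```

The placement of the quantifiers in the formula determines that the witness is a function of the single universally quantified argument.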
This is potentially a powerful tool, but the practical applications
of intuitionism as a method for obtaining constructive information
have been meager for most of this century. Most often,
important constructive
results are obtained with no use of intuitionism, for example
Tarski's decision procedure~[Ta] for real algebra with
inequalities, which played an essential role in H\"ormander's
classification~[H\"o] of hypoelliptic partial differential
equations.
But the situation is changing, spurred in part by the availability
of fast computers. One indication of this is the work of
Martin-L\"of and his school [M-L\"o] [NoPeSm].
For a conditional formula $C$, the program $\san B(c,C)$ is
interactive. Let $C$~be $\exists xA \to \exists yB$, where $A$ and~$B$
are unconditional, and let $c \rz C$. It may be difficult to find
an~$a$ realizing $\exists xA$, and only those in the know would
have access, via $c\{a\}$, to an~$n$ such that $B_y(n)$ holds.
If $A$ and~$B$ are themselves conditional, the situation is more
complex. Kleene's realization predicate may be the natural tool
for analyzing questions concerning complex interactive programs,
such as problems of system security.
\section5.{Kripke semantics}
This is a very brief discussion of Kripke's analysis [Kr] of
intuitionism.
Kleene was concerned with the intuitionistic semantics of Arithmetic
under the intended interpretation; Kripke is concerned with the
intuitionistic semantics of the predicate calculus under all
possible interpretations.
In discussing the intuitionistic
predicate calculus, rather than arithmetic, we must make two
changes. First, we do not have a specific false formula $\S0=0$
to use in eliminating~$\neg$,
so restore~$\neg$. Second, in the comparison of classical and
intuitionistic arithmetic implicit use was made of the
fact that intuitionistically an atomic formula of arithmetic
is equivalent to its
double negation, but this is not so in the predicate calculus.
Call a formula \ii strongly classical\/. in case it is classical
and each occurrence in it of an atomic formula is doubly negated.
Every formula is classically equivalent to a strongly classical
formula.
In part~(d) of Theorem~60 in [Kl,~\Sec81], Kleene modifies the
G\"odel reduction so that the algorithm transforms a
classical proof of a strongly classical formula into an
intuitionistic proof of the same formula.
Roughly speaking, Kripke's
analysis involves stages of investigation forming a tree
where at each stage a classical semantics is imposed, with the
proviso that if a formula is true at one stage it remains true at
all later stages but if a formula is false at one stage it may become
true at a later stage; a formula is valid in case it is true at the root
of the tree in every structure. His completeness theorem establishes
that a formula is valid if and only if
it is provable in Heyting's predicate calculus
(the ``if'' part being trivial).
\medskip
{\it Theorem\/}~2: The classical and Kripke semantics are
identical on strongly classical formulas.
\smallskip
Proof: Suppose that a closed strongly
classical formula $C$ is classically
valid. By G\"odel's completeness theorem
for classical logic [G\"o30], it is provable in the classical
predicate calculus. By the reduction of classical proofs to
intuitionistic proofs (Kleene's modification of G\"odel's
algorithm), it is provable in Heyting's intuitionistic predicate
calculus, and so is valid in Kripke's intuitionistic semantics.
Conversely, by Kripke's completeness theorem a formula valid in
his semantics is provable in Heyting's intuitionistic predicate
calculus,
and therefore in the classical predicate calculus, and hence is
classically valid.~\bul
\smallskip
Kripke semantics leads to the same conclusions regarding the
ontological Platonic nature of intuitionistic semantics, and
intuitionism as an extension of classical mathematics,
as those reached on the basis of Kleene's realization predicate.
When an intuitionist states that a classically valid strongly classical
formula is not constructively valid (as is frequently stated for
the strongly classical translation of the axiom of choice),
he implicitly states that
the Kripke semantics is an inadequate description of intuitionistic
semantics.
\section6.{Epistemic semantics}
Many writers on intuitionism
base intuitionistic semantics on epistemology: on knowledge and proof.
When Kleene [Kl,~\Sec82] introduces realization, he avoids
all reference to the notion of proof.
But in the informal discussion in~\Sec81, he writes (my italics):
{\narrower\medskip\ninepoint
The intuitionists' use of negation and implication must
then be understood as only requiring us to recognize, e.g., that a
particular given {\nineit proof\/} is intuitionistically acceptable,
or (when they {\nineit prove\/} a statement of the form
$(A \to B) \to C$) that if one should produce an intuitionistically
acceptable {\nineit deduction\/} of one statement~$B$ from another~$A$,
then on the basis of it one could by a given method surely construct an
intuitionistically acceptable {\nineit proof\/} of a third~$C$.\medskip}
\noindent Heyting [He31] [BePu] says (again my italics):
{\narrower\medskip\ninepoint
\noindent It is important to note that the negation of a
proposition always
refers to a {\nineit proof procedure\/} which leads to the
contradiction, even if the original proposition mentions no
proof procedure.\medskip}
\noindent Dummett [Du] [BePu] confronts the problem directly:
{\narrower\medskip\ninepoint However, the distinction
[between the notions of truth and
assertability] is unavoidable if the explanations of universal
quantification, implication and negation are to escape circularity.
The standard explanation of implication is that a proof of $A \to B$ is
a construction of which we can recognise that, applied to any proof
of~$A$, it would yield a proof of~$B$.
\dots if the intuitionistic
explanation of implication is to escape, not merely circularity,
but total vacuousness, there must be a restricted notion of
proof $-$ canonical proof $-$ in terms of which the explanation is
given\dots\
The notion of canonical proof thus lies in some
obscurity; and this state of affairs is not indefinitely tolerable,
because, unless it is possible to find a coherent and relatively sharp
explanation of the notion, the viability of the intuitionist
explanations of the logical constants must remain in doubt.\medskip}
Thus the standard explanation of the intuitionistic meaning of the
logical constants is based on the notion of proof. Let us call this
\ii epistemic semantics.. How can it be formalized?
Knowledge is most conveniently expressed in the first person
singular, but we are interested only in communicable knowledge, so the
task is to describe how I can communicate my knowledge to you, and
what it is about my knowledge that will convince you once it has been
communicated. Knowledge is closely related to syntactical questions,
to the notion of proof. How else can I communicate my knowledge to
you, other than by an appeal to a shared vision of a common
ontology or by a demonstration? We have already discussed the former;
the question now is whether it can be dispensed with in explaining
the intuitionistic meaning of the logical constants.
Let us grant, for purposes of argument, that mathematical
reasoning cannot be fully formalized. But consider all closed formulas
of arithmetic
with $l$~or fewer logical constants. There are only finitely many
of them, modulo the infinity of atomic formulas.
So if I am to communicate to you in a convincing way my knowledge
of the truth of such a formula without appealing to ontology,
it must be possible to formalize
the argument in a formal system~$T_l$. This system must be axiomatic,
so that there is an effective method for deciding
whether a putative proof is indeed a proof.
Consider a closed formula $C$ of \cali L. If $C$ is atomic, say $a=b$,
I communicate my knowledge by performing the computation
and showing that $a$ and~$b$ reduce to the same numeral. For
a conjunction $A \amp B$, I communicate
my knowledge of~$A$ and my knowledge of~$B$. For $\exists xA$, I exhibit
a numeral~$n$ and communicate my knowledge of~$A_x(n)$, and
for a disjunction $A \vee B$ I choose one of the disjuncts and
communicate my knowledge of it.
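The clauses just described for the output operators can be sketched as a recursive checker. The tuple encoding of formulas and the shape of a ``communication'' are illustrative assumptions; the input operators $\forall$ and~$\to$ are deliberately omitted, since they are exactly where the difficulty arises.

```python
# A communication of knowledge for the "output" connectives only:
# an atomic equation is checked by carrying out the computation; a
# conjunction carries a communication for each conjunct; an existential
# carries a numeral witness; a disjunction carries the index of the
# chosen disjunct.  Formula encoding (illustrative): ('eq', f, g) with
# f, g zero-argument callables computing each side; ('and', A, B);
# ('ex', body) with body a function of the witness; ('or', A, B).

def check(comm, formula):
    op = formula[0]
    if op == 'eq':      # perform the computation; compare the numerals
        return formula[1]() == formula[2]()
    if op == 'and':     # comm = (communication for A, communication for B)
        return check(comm[0], formula[1]) and check(comm[1], formula[2])
    if op == 'ex':      # comm = (witness n, communication for A_x(n))
        n, sub = comm
        return check(sub, formula[1](n))
    if op == 'or':      # comm = (chosen index 0 or 1, communication for it)
        i, sub = comm
        return check(sub, formula[1 + i])

# "There is an n with n*n = 49": communicated by exhibiting the numeral 7.
exists_sq = ('ex', lambda n: ('eq', lambda: n * n, lambda: 49))
print(check((7, ()), exists_sq))   # True
```

Note that the checker needs no quantification over infinite domains: each clause inspects only the finite communication handed to it.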
But for the input operators $\forall$ and $\to$ there is a problem.
The realization predicate for them was expressed by means of
quantifications over infinite domains.
This makes no sense epistemically;
I cannot produce an infinitely long communication, not even
introspectively. If I know
$\forall xA$ without appeal to ontology,
I must know it on the basis of a general argument, and
that argument must be formalizable in~$T_l$.
This formalization
requires an arithmetization, treating $\forall xA$ as an entity
of~$T_l$ rather than as a formula. But it is neither necessary
nor sufficient to prove in~$T_l$ the arithmetization
$\ul\,\forall xA\,\ur\,$; what I must do is prove in~$T_l$
an arithmetization of my knowledge of~$A_x(n)$ for a
general numeral~$n$, something like $\ul n$~is a numeral
{\it implies\/} $K(n)$ is my knowledge that $A_x(n)\,\ur$,
and similarly for implications $A \to B$.
Thus we must also require that $T_l$~be strong enough to
arithmetize itself; we shall take each~$T_l$ to be an axiomatic
extension of~$S$.
We need some function symbols and predicate symbols in $T_l$.
The function symbols have default value~0 if the arguments do not
match the descriptions given. Let $\pi_1x$ and~$\pi_2x$ be the first
and second members of the pair~$x$ and let $\'Arg'(x,y)$ be the
$x$th argument of the arithmetized formula~$y$.
Let ``$\'numeral'(x)$'' express that $x$~is
a numeral in the arithmetized theory. Let ``$\'proof'(x,y)$''
express that $y$~is an arithmetized formula of~$T_l$ and $x$~is
an arithmetized proof in~$T_l$ of it; the dependence on~$T_l$
is not indicated in the notation.
Let $C$~be a closed
formula of \cali L. We introduce, by recursion on the complexity
of~$C$, a formula $c \vf C$ of~$T_l$. This is read as $c$
\ii verifies\/.~$C$ and its intended meaning is that $c$~is a
communication of my knowledge that $C$~is intuitionistically true.
\medskip
(V1)~Let $C$ be atomic. Then $c \vf C$ is $C$.
\smallskip
(V2)~Let $C$ be $A \amp B$. Then $c \vf C$ is $\pi_1c \vf A \amp
\pi_2c \vf B$.
\smallskip
(V3)~Let $C$ be $\forall xA$. Then $c \vf C$ is
$\'proof'(\pi_1c,\ul \'numeral'(w) \to \pi_2c \vf A_x(w)\, \ur)$.
\smallskip
(V4)~Let $C$ be $A \to B$. Then $c \vf C$ is
$\'proof'(\pi_1c,\ul z \vf A \to \pi_2c \vf B\, \ur)$.
\smallskip
(V5)~Let $C$ be $\exists xA$. Then $c \vf C$ is $\pi_2c \vf
A_x(\pi_1c)$.
\smallskip
(V6)~Let $C$ be $A \vee B$. Then $c \vf C$ is $\pi_2c \vf
\'Arg'(\pi_1c,C)$.
\medskip
In (V3) and (V4), $w$ and $z$ are variables not occurring previously and
$\pi_2c$ may, of course, depend on them.
For a general formula $C$ of \cali L, let $c \vf C$ be
$c \vf \overline C$, where $\overline C$ is the closure of~$C$.
We say that
$C$~is \ii verifiable\/. in case there is a consistent axiomatic
extension~$T_l$ of~$S$ and a variable-free term~$c$ of~$T_l$ such that
$\vdash c \vf C$ (where $\vdash$ means $\vdash_{T_l}$).
For fixed~$C$ the formula $x \vf C$ represents a recursive predicate, so
for each variable-free term~$c$ we have
$\vdash c \vf C \vee \neg\, c \vf C$, and the occurrences
of~$\to$ can be eliminated in favor of $\neg\ldots\vee\ldots$.
Thus epistemic semantics
explains the intuitionistic meanings of the quantifier symbols,
the constructive~$\vee$, and the much-debated~$\to$ in simpler terms.
But now we must ask whether this semantics is adequate for
intuitionistic mathematics. The answer is no.
\medskip
{\it Theorem\/}~3: $\neg\neg\,0=0$ is unverifiable.
\smallskip
Proof: Suppose that $\vdash c \vf \neg\neg\,0=0$, where $c$~is a
variable-free term. That is,
$$\vdash c \vf [\neg\,0=0 \to \S0=0].$$
By~(V4),
$$\hbox{$\vdash \'proof'(\pi_1c,\ul \,[z \vf \neg\,0=0] \to
[\pi_2c \vf \S0=0]\,\ur)$.}$$
Now $\pi_2c \vf \S0=0$ is $\S0=0$ by (V1), so
$$\hbox{$\vdash\'proof'(\pi_1c,\ul \,[z \vf \neg\,0=0] \to [\S0=0]\,\ur)$.}$$
That is,
$$\hbox{$\vdash\'proof'(\pi_1c,\ul \neg\, [z \vf \neg\,0=0]\,\ur)$,}$$
which is
$$\hbox{$\vdash\'proof'\big(\pi_1c,\ul \neg\, \big[z \vf [0=0 \to \S0=0]\big]\,\ur\big)$.}$$
By (V4),
$$\hbox{$\vdash\'proof'\big(\pi_1c,\ul \neg\, \'proof'\big(\pi_1z,
\ul \,[w \vf 0=0] \to [\pi_2z \vf \S0=0]\,\ur\big)\,\ur\big),$}$$
which by (V1) is
$$\hbox{$\vdash\'proof'\big(\pi_1c,\ul \neg\, \'proof'\big(\pi_1z,
\ul \,[w \vf 0=0] \to [\S0=0]\,\ur\big)\,\ur\big).$}$$
That is,
$$\hbox{$\vdash\'proof'\big(\pi_1c,\ul \neg\, \'proof'(\pi_1z,
\ul\neg\, w \vf 0=0\,\ur)\,\ur\big).$}$$
Let $u$~be a new variable and let $d$ be~$\pi_1c$ followed by the
(arithmetized) proof of the instance obtained by
substituting $\Pair(u,0)$
for~$z$ and proving $\pi_1\Pair(u,0)=u$. Then
$$\hbox{$\vdash\'proof'\big(d,\ul \neg\, \'proof'(u,
\ul \neg\, w \vf 0=0\,\ur)\,\ur\big).$}$$
Let $A$~be $\neg\, \'proof'(u,\ul \neg\, w \vf 0=0\,\ur)$.
Then $d$ is a variable-free term of $T_l$ and we have
$\vdash \'proof'(d,\ul A\,\ur)$. Since $\'proof'(x,y)$ represents a
recursive predicate,
we have a proof in~$T_l$ of~$A$; that is, we have a proof in~$T_l$
of
$$\hbox{$\neg\, \'proof'(u,\ul \neg\, w \vf 0=0\,\ur).$}$$
Thus we have a proof in $T_l$ that a certain arithmetized formula
is unprovable; that is, we have a self-consistency proof for~$T_l$.
But $T_l$~is a consistent axiomatic extension of~$S$, so this is
impossible by the modern form of G\umlaut odel's second theorem
[G\umlaut o31].~\bul
\smallskip
The trouble lies in the propositional calculus. Modus ponens,
the rule of inference~(25) of the propositional calculus,
preserves verifiability. For suppose that $T_l$ is a
consistent axiomatic extension of~$S$, and $\vdash a \vf A$ and
$\vdash c \vf A \to B$. Then
$\vdash \'proof'(\pi_1c,\ul z \vf A \to \pi_2c \vf B \,\ur)$.
Since $\vdash a \vf A$ we can arithmetize the proof and substitute~$a$
for~$z$. This gives us a variable-free term~$d$ such that
$\vdash \'proof'(d,\ul \pi_2c \vf B\,\ur)$.
Since $\'proof'(x,y)$ represents a recursive predicate, we have
$\vdash \pi_2c \vf B$.
Using Nelson's realization proofs as templates,
we can establish that the propositional axioms (13)--(22) are
verifiable, with one exception. The exception is~(14),
$$[A \to B] \to [A \to B \to C] \to A \to C$$
and its special case (21).
This is used, together with (13), to justify deduction. Consider
a deduction, introducing a hypothesis~$A$. So we suppose that for
some~$a$ we have $a \vf A$. Then we establish~$B$, with $b \vf B$,
and $B \to C$, with $e \vf B \to C$. We want to find a~$c$
such that $c \vf C$. Why can't we do this by modus ponens? The
proof that modus ponens preserves verification used the fact that
if $\'proof'(d,\ul D\,\ur)$ for a variable-free term~$d$,
then $\vdash D$. But to prove this for a variable~$d$, as would be
necessary in a deduction, we would need to
prove in~$T_l$ the consistency of~$T_l$, and this we cannot do.
When an intuitionist makes a deduction, introducing and discharging
a hypothesis, he implicitly reifies a hypothetical situation,
projecting it onto an abstract ontology.
Epistemic semantics is adequate for mathematics shorn of the
axiom scheme~(14), with deductions not permitted. But this
drastic prohibition would indeed deprive the boxer of the use
of his fists.
\goodbreak
\section7.{Conclusions}
This investigation has led to the following conclusions. Intuitionism
is an extension of classical mathematics, both
in its proof syntax and semantically. Intuitionistic semantics, being an
extension of classical semantics, is ontological and Platonic.
Attempts to give an epistemic basis for intuitionistic semantics,
to explain the intuitionistic meaning of the logical constants in terms
of proof, are not viable. What is genuinely new in intuitionism is
Brouwer's creation of two new logical constants, the
constructive~$\exists$ and the constructive~$\vee$, together
with a rich notion of truth containing much constructive information.
But so far intuitionism has not proved to be a powerful tool in
constructive mathematics.
This is a somewhat negative assessment of intuitionism, but I want
to end on a constructive note. Since the advent of digital computers,
attention has turned from effective methods---functions computable in
principle---to feasible algorithms and programs. There is strong
evidence that polynomial time functions provide the
correct formalization of the intuitive notion of a feasible
computation, and unlike the situation for recursive functions
there is a purely syntactical characterization [BeCo] [Be] [Le]
of polynomial time functions. I am convinced that intuitionism
reformulated in this context will become a powerful practical
method for constructing and verifying feasible algorithms,
and that Kleene's realization predicate will provide an incisive tool
for analyzing problems concerning interactive programs.
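As a loose illustration of such a syntactical characterization, here is a sketch in the spirit of Bellantoni and Cook's safe recursion on notation. This Python rendering is an informal gloss, not their formal definition (which also constrains composition of normal and safe positions); it is meant only to show why recursion on notation, one step per bit, keeps the defined functions feasible.

```python
# Bellantoni-Cook style "safe recursion on notation" (informal sketch).
# Arguments left of the semicolon in their notation are "normal" (they
# may drive recursion); those to the right are "safe" (they may be used
# but never recursed on).  Numbers are binary strings, and recursion
# strips one bit at a time, so the number of steps is the *length* of
# the input rather than its value.

def safe_rec(g, h0, h1):
    """Return f with f('', safe) = g(safe) and
    f(x + bit, safe) = h_bit(x, safe, f(x, safe)); the recursive value
    appears only in the safe position of h_bit."""
    def f(x, safe):
        if x == '':
            return g(safe)
        body, bit = x[:-1], x[-1]
        h = h0 if bit == '0' else h1
        return h(body, safe, f(body, safe))
    return f

# Example: binary increment defined by safe recursion on notation.
inc = safe_rec(
    g=lambda safe: '1',                # inc('')  = '1'
    h0=lambda x, safe, r: x + '1',     # inc(x0) = x1
    h1=lambda x, safe, r: r + '0',     # inc(x1) = inc(x)0
)

print(inc('1011', None))   # binary 11 + 1 = binary 12, i.e. '1100'
```

The point of the discipline is visible in the code: \'h0' and \'h1' may append to the recursive value \'r' but may not recurse on it, which is what blocks the exponential blow-up of unrestricted recursion.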
\bigskip
\frenchspacing
\parskip=\smallskipamount
\parindent=0pt
\centerline{\bf References}
\medskip
[Be] Stephen J. Bellantoni, {\it Predicative Recursion and
Computational Complexity}, Technical Report 264$\slash$92,
Department of Computer Science, University of Toronto, 1992. Available
on-line at \hfil\break
{\eightrm ftp:$\scriptstyle\slash\slash$ftp.cs.toronto.edu$\scriptstyle\slash$pub$\scriptstyle\slash$reports$\scriptstyle\slash$theory$\scriptstyle\slash$cs-92-264.ps.Z}
[BeCo] --- and Stephen Cook, {\it A new recursion-theoretic
characterization of the polytime functions}, Computational
Complexity, vol. 2, pp. 97--110, 1992.
[BePu] Paul Benacerraf and Hilary Putnam, eds., {\it Philosophy of
Mathematics: Selected readings}, 2nd ed., Cambridge University Press,
Cambridge, 1983.
[Br] L. E. J. Brouwer, {\it De onbetrouwbaarheid der logische
principes}, \hbox{Tijdschrift} voor wijsbegeerte, vol. 2, 1908.
[Du] Michael Dummett, {\it The philosophical basis of
intuitionistic logic}, Proceedings of the Logic Colloquium, Bristol,
July 1973, ed. H. E. Rose and J. C. Shepherdson, pp. 5--40,
North-Holland, Amsterdam, 1975. Reprinted in [BePu].
[G\umlaut o30] Kurt G\umlaut odel, {\it Die Vollst\umlaut andigkeit
der Axiome des logischen Funktionenkalk\umlaut uls}, Monatshefte
f\umlaut ur Mathematik und Physik, vol. 37, pp. 349--360, 1930.
[G\umlaut o31] ---, {\it \umlaut Uber formal unentscheidbare
S\umlaut atze der Principia Mathematica und verwandter Systeme I},
Monatshefte f\umlaut ur Mathematik und Physik, vol. 38, pp. 173--198,
1931.
[G\umlaut o33] ---, {\it Zur intuitionistischen
Arithmetik und Zahlentheorie}, Ergebnisse eines mathematischen
Kolloquiums, Heft 4, pp. 34--38, 1933.
[He] Arend Heyting, {\it Die formalen Regeln der intuitionistischen
Logik}, Sitzungsberichte der Preussischen Akademie der Wissenschaften,
Physi\-kalisch-mathematische Klasse, pp. 42--56, 1930.
[He31] ---, {\it Die intuitionistische Grundlegung der Mathematik},
Erkenntnis, vol. 2, pp. 106--115, 1931. English translation as
{\it The intuitionist foundations of mathematics\/} in [BePu].
[H\umlaut o] L. H\umlaut ormander, {\it On the theory of general
partial differential operators}, Acta Math., vol. 94, pp. 161--248,
1955.
[Kl] S. C. Kleene, {\it Introduction to Metamathematics},
North-Holland, Amsterdam, 1971. First published 1952.
[Kl45] ---, {\it On the interpretation of intuitionistic number
theory}, Jour. Symbolic Logic, vol. 10, pp. 109--124, 1945.
[Kr] Saul A. Kripke, {\it Semantical analysis of intuitionistic
logic I}, Formal Systems and Recursive Functions, Proceedings of the
Eighth Logic Colloquium, Oxford, July 1963, ed. J. N. Crossley and
M. A. E. Dummett, pp. 93--130, North-Holland, Amsterdam, 1965.
[Le] Daniel Leivant, {\it Ramified recurrence and computational
complexity I: Word recurrence and poly-time}, in
Feasible Mathematics II, ed. Peter Clote and Jeffrey Remmel,
pp. 320--343,
Perspectives in Computer Science, Birkh\umlaut auser Boston, New York, 1994.
[M-L\umlaut o] Per Martin-L\umlaut of,
{\it Intuitionistic Type Theory}, Bibliopolis, Naples, 1984.
[Ne] David Nelson, {\it Recursive functions and intuitionistic
number theory}, Trans. Amer. Math. Soc., vol. 61, pp. 307--368, 1947.
[NoPeSm] Bengt Nordstr\umlaut om, Kent Petersson, Jan M. Smith,
{\it Programming in Martin-L\umlaut of's Type Theory: An
Introduction}, The International Series of Monographs on
Computer Science 7, Clarendon Press, Oxford, 1990.
[Ta] Alfred Tarski, {\it A Decision Method for Elementary Algebra
and Geometry}, Berkeley, 1951.
\bye