## What is Aleph-Null?

August 8, 2011

Someone asked for an explanation of $\aleph_{0}$ (Aleph-null) for non-mathematicians.

Here goes.  Begin with the concept of counting a set of objects.  We can count a set of objects if we can arrange the objects in a list: first object, second object, third object, and so on, so that each object in the set appears on the list.  Listing the objects in a set is the same as assigning a positive integer to each member of the set.  Some countable sets are infinite: for example, the set of positive integers itself, the set of positive even integers, the set of positive odd integers, and the set of primes.

To see that the positive even integers are countable, just line up the members of that set and match each member of the positive even integers to a member of the positive integers.  The function $f(n) = 2n$ expresses this matching of the two sets because each positive even number $2n$ is matched up with exactly one positive integer $n$.
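The matching can be spelled out in a few lines of Python (a sketch for illustration, not part of the argument itself):

```python
# Match each positive integer n with the positive even number f(n) = 2n.
def f(n):
    return 2 * n

# The matching is one-to-one and onto the positive evens: distinct n
# give distinct even numbers, and every even 2n comes from exactly one n.
pairs = [(n, f(n)) for n in range(1, 6)]
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```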

Two sets that can be put into such a one-to-one correspondence are said to have the same cardinality.  The cardinality of a finite set is just the number of elements it has; two finite sets have the same cardinality if, and only if, they have the same number of elements.  For example, the cardinality of a set of a dozen eggs and of a set of a dozen pencils is 12.

Any infinite set that can be put into a one-to-one correspondence with the positive integers is countably infinite. We can also speak of the cardinality of countably infinite sets (like the set of positive integers and the set of positive even integers).  We can speak of the number of elements in these sets, which is an infinite, or transfinite number, denoted by $\aleph_{0}$.  So we can say that the number of positive even integers is $\aleph_{0}$.

So the set of positive integers $(1, 2, 3, \cdots)$ has cardinality $\aleph_{0}$. Suppose we list out the positive integers and want to add the number 0 at the beginning of this list.  We can just add it and shift every other number one place to the right (from position $n$ to position $n+1$).  The cardinality of the resulting set is still $\aleph_{0}$, since it too can be matched up with the positive integers. This means that $\aleph_{0} + 1 = \aleph_{0}$.  Suppose we go off the deep end and want to add a countable infinity of other numbers to the set of positive integers, say the negative integers $(-1, -2, -3, \cdots)$.  We can do this by letting each $n$ in the original list move over to position $2n$ and placing the negative integers in the odd-numbered positions that open up. The resulting list can again be matched with the positive integers, so the cardinality of the resulting set is also $\aleph_{0}$.  This means that $\aleph_{0} + \aleph_{0} = \aleph_{0}$.
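The shift-and-interleave bookkeeping can be sketched in Python (the `position` function is my own naming, just for illustration):

```python
# Interleave the positive and negative integers into one list:
# positive n goes to position 2n, negative -n to position 2n - 1.
def position(k):
    return 2 * k if k > 0 else -2 * k - 1

# Sorting a few integers by their assigned positions shows the single
# list that matches the combined set with the positive integers.
merged = sorted([1, -1, 2, -2, 3, -3], key=position)
print(merged)  # [-1, 1, -2, 2, -3, 3]
```

Every integer gets a distinct positive position and every position is used, which is exactly the one-to-one correspondence the argument requires.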

Finally, suppose we’re really crazy and want to add $\aleph_{0}$ sets, each with $\aleph_{0}$ members, to our original list.  The list can always be “expanded” to accommodate every member of every set. Since each of the sets we want to accommodate is countably infinite, its members can themselves be listed $1, 2, 3, \cdots$ Take the first set and lay out its elements with growing gaps: its first element goes in position 1, followed by one open space; its second element in position 3, followed by two open spaces; its third element in position 6, followed by three open spaces; and so on and so forth. Now take the second countably infinite set and place each of its members, in order, in the leftmost spaces left open after setting out the first set. The situation is then analogous to the one after laying out the first set: place each member of the third countably infinite set, in order, in the leftmost spaces still open, and so on for each of the countably infinite sets of countably infinite members. Every member of every set eventually lands in some position, so the resulting list can be matched up to the positive integers, meaning that $\aleph_{0} \times \aleph_{0} = \aleph_{0}$.
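A standard way to make this kind of bookkeeping explicit is the Cantor pairing function (a different but equivalent arrangement from the one described above; my sketch): it lists the pairs diagonal by diagonal, assigning element $j$ of set $i$ a unique position.

```python
# Cantor-style pairing: element j of set i goes to position p(i, j).
# Pairs are listed diagonal by diagonal (i + j constant), so every
# pair of positive integers gets exactly one position on the list.
def p(i, j):
    d = i + j - 2               # index of the diagonal holding (i, j)
    return d * (d + 1) // 2 + i

# The first three diagonals: (1,1) -> 1; (1,2) -> 2, (2,1) -> 3; ...
print([p(1, 1), p(1, 2), p(2, 1), p(1, 3), p(2, 2), p(3, 1)])
# [1, 2, 3, 4, 5, 6]
```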

So we have shown that $\aleph_{0} + \aleph_{0} = \aleph_{0} = \aleph_{0} \times \aleph_{0}$. These are elementary equations of the arithmetic of infinite cardinals, and they differ from the familiar arithmetic of finite cardinals, where $2 + 1 = 3 \neq 2$, $2 + 2 = 4 \neq 2$, and $2 \times 2 = 4 \neq 2$.

While $\aleph_{0}$ can accommodate quite a bit, there are sets that have a cardinality higher than $\aleph_{0}$.  I’ll produce two such cardinal numbers in the next update.

## FOM Discussion on Consistency of Peano Arithmetic and Foundations of Mathematics

May 20, 2011

I’ve been following a discussion on the Foundations of Mathematics list (FOM) regarding these two videos (1, 2) on what noted Fields Medalist Vladimir Voevodsky thinks regarding the consistency of Peano Arithmetic and the foundations of mathematics.  His comments have caused quite a stir on the list.  Below is a message to Voevodsky by Harvey Friedman and Voevodsky’s reply.

“Dear Professor Voevodsky,

I have become aware of your online videos at http://video.ias.edu/voevodsky-80th and http://video.ias.edu/univalent/voevodsky. I was particularly struck by your discussion of the “possible inconsistency of Peano Arithmetic”. This has created a lot of attention on the FOM email list. As a subscriber to that list, I would very much like you to send us an account of how you view the consistency of Peano Arithmetic. In particular, how you view the usual mathematical proof that Peano Arithmetic is consistent, and to what extent and in what sense is “the consistency of Peano Arithmetic” a genuine open problem in mathematics. It would also be of interest to hear your conception of what foundations of mathematics is, or should be, or could be, as it appears to be very different from traditional conceptions of the foundations of mathematics.

Respectfully yours,

Harvey M. Friedman
Ohio State University
Distinguished University Professor
Mathematics, Philosophy, Computer Science”

“Dear Harvey,

such a comment will take some time to write …

“To put it very shortly I think that in-consistency of Peano arithmetic as well as in-consistency of ZFC are open and very interesting problems in mathematics. Consistency on the other hand is not an interesting problem since it has been shown by Goedel to be impossible to proof [sic].”

I think that this reply still leaves a lot of room for interpretation. It could simply mean that Hilbert’s program is no longer interesting, or it could be a call for finitism in foundations, or it could indicate the possibility of fruitful mathematics by means of searching for an inconsistency proof of these well-established systems.  “Curiouser and curiouser!” cried Alice …

## Clarity In Talking About The Status Of Attributes

October 13, 2010

When the ontological status of a property or attribute is in question, the central issue is the ontological status of an object’s possessing or exemplifying the property.  When the ideological status of a property or attribute is in question, the central issue is the kinds of predicates that express the property.  The following example brings this out starkly: a property can be ideologically mental but not physical, while being ontologically physical but not mental.

If a mental predicate $\textup{A}(x) \in \Psi$ is not lawlike coextensive with a physical predicate $\textup{B}(x) \in \Phi$, where $\Psi$ and $\Phi$ are appropriate vocabularies, the attribute expressed by $\textup{A}(x)$ is mental, but not physical.  Nevertheless, by the principle of physical exhaustion (PE), the attribute is physical, but not mental (ontologically), as it is possessed solely by physical objects.

Entity-wise, every attribute is mathematical-physical, but only those attributes expressible by mathematical-physical predicates are mathematical-physical attributes.  So, if physical reductionism is false (which is likely the case), there are attributes which are mathematical-physical entities but are not mathematical-physical attributes.

The confusion arises when terms like “mental” or “material” are used to call out properties or attributes without clearly setting out whether one is talking about the ontological or ideological status of the attribute.  This is an important point by Hellman, and we will see how this distinction and physicalist materialism in general are useful in clarifying questions of reductionism and materialism in the philosophy of mind when we turn to an analysis of Chalmers’ anti-materialist claims.

## The Ontological and Ideological Status of Attributes

October 12, 2010

With the principle of Physical Exhaustion (PE: $(\forall x)(\exists \alpha)(x \in \textup{R}(\alpha))$, where $\textup{R}(\alpha)$ is a rank in the hierarchy of the physical), which allows us to say that everything is exhausted by the physical, every attribute is mathematical-physical (i.e., existing somewhere, possibly high up, on the set-theoretic hierarchy; see this and this). Thus the ontological status of attributes is mathematical-physical.  Now, for a given vocabulary $\psi$, and for any attribute that is expressed by a predicate making essential use of members of $\psi$, call that attribute a $\psi$-attribute. So, for example, attributes expressed by physical predicates are physical attributes, and those expressed by psychological predicates are psychological. This is their ideological status.

The ontological status of a thing has to do with the extensions of predicates (of ontological kind) under which the thing falls. The most encompassing of such predicates is “is mathematical-physical”.  But there are other, narrower ways to distinguish the ontological status of things.  For example, some metaphysical predicates, “is abstract”, “is concrete”, or some scientific ones, “is an elementary particle”, “is an event”, “is a person”, “is a social process”, “is a physical magnitude”, etc. Some other ontological-kind predicates have empty extensions: “is a soul”, “is a phenomenally raw feel not identifiable with any entity in the hierarchy of the physical”.  But if something were to satisfy these predicates, the predicates would indicate the ontological kind of those things.  So the important semantic relationship here is satisfaction of ontological-kind predicates.

This is different from the case of the ideological status of attributes, where the important semantic relationship is the expression of an attribute by a predicate – i.e., the relationship between the argument and value of a universalizing function. Every entity has an ontological status, but only universals have an ideological status, determined by the types of predicates for which the universal under consideration is the value of the universalizing function.

So the ideological status of an attribute is given by the predicates that express it.  These predicates can themselves be classified, for instance, according to scientific discipline: psychological predicates, physical predicates, economic/sociological predicates, etc.  These classifications are historical and are subject to change due to a variety of factors – not all of them scientific.  Hellman explains that attributes themselves may have more than one ideological status: if the predicate ‘is in pain at t‘ is coextensive in a law-like fashion with a complex physical predicate, then ‘being in pain’ is both a psychological and a physical attribute, since it is expressed by both psychological and physical predicates.

In the next update I’ll go into Hellman’s discussion of the confusion that results when the ontological and ideological status of attributes is not clearly stated in debates concerning materialism and the mental.

## Introduction To Hellman’s Treatment of Universals

October 12, 2010

Universals, for our purposes, are sets, predicates, properties, relations and attributes – the same universal can instantiate or subsume numerically distinct things.  Among universals we distinguish between those that are extensional, like sets and predicates, and those that are intensional, like properties, relations and attributes.

Roughly, the distinction between these two types of universals is the following: those that are extensional obey the principle that equivalence implies identity, while those that are intensional violate this principle.  In cases of non-deformity, the property of being a creature with a kidney is extensionally the same as the property of being a creature with a heart, insofar as the same things have each, but the properties are different.  Sets and predicates, on the other hand, do satisfy the principle of extensional equivalence: the set of creatures with a kidney and the set of creatures with a heart have the same members, and are thus identical.
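The contrast can be mimicked in a few lines of Python (my sketch; the example creatures are placeholders):

```python
# Extensional universals behave like Python sets: same members, same set.
has_kidney = {"dog", "cat", "human"}
has_heart = {"dog", "cat", "human"}
assert has_kidney == has_heart        # one set under two descriptions

# Intensional universals keep the description in view: model a property
# as a (description, extension) pair, and sameness of extension no
# longer forces identity of the property.
kidney_prop = ("creature with a kidney", frozenset(has_kidney))
heart_prop = ("creature with a heart", frozenset(has_heart))
assert kidney_prop != heart_prop      # coextensive, yet distinct
```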

Now, universals in general are taken to fall within the range of functions from the set $\textup{P}$ of predicates of a language $\textup{L}$ to the universals they express, $f: \textup{P} \to \textup{U}$, where $\textup{U}$ is the set of universals of $\textup{L}$.  Hellman explains that among such functions there are some that assign universals with much discrimination (two predicates are mapped to the same universal only if the predicates are identical) and others that do so with less discrimination (coextensionality is enough). Hellman imagines a partial ordering of universals based on discrimination criteria, with predicates among the most discriminating and extensions among the least.

Hellman is concerned with properties and relations used in science, so the first thing we need to do is set up identity criteria for these attributes. For example, we want to say that temperature is the very same magnitude as mean molecular kinetic energy.  So two predicates $\textup{F}$ and $\textup{G}$ are mapped to the same universal $u$ just in case it is a scientific necessity that these two predicates apply to the same things.  In other words, $\textup{F}$ and $\textup{G}$ express the same attribute just in case in every model in $\alpha$, $\textup{F}$ and $\textup{G}$ have the same interpretation.  Remember that $\alpha$ is the set of structures modeling scientific possibility.

So, how do we express attributes given this way of thinking about them?  Hellman suggests a technique from modal logic: the function from members of $\alpha$ to the extensions assigned to a predicate by each model in $\alpha$ is identified with the attribute expressed by that predicate.  In this way, two predicates express the same attribute just in case they have the same extension in every structure representing a scientific possibility, thereby getting the desired necessity.
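Here is a toy rendering of that technique (the structures and predicates are invented for illustration): identify the attribute expressed by a predicate with the map sending each structure in $\alpha$ to the predicate’s extension there.

```python
# Toy version of the modal technique: alpha is a (tiny) set of
# structures, each given by the extensions it assigns to predicates.
alpha = [
    {"F": {1, 2}, "G": {1, 2}, "H": {1}},      # structure 1
    {"F": {3},    "G": {3},    "H": {3}},      # structure 2
]

def attribute(pred):
    # The attribute expressed by pred: its extension in each structure.
    return tuple(frozenset(s[pred]) for s in alpha)

# F and G are coextensive in every structure in alpha, so they express
# the same attribute; H diverges from F in structure 1, so it does not.
print(attribute("F") == attribute("G"))   # True
print(attribute("F") == attribute("H"))   # False
```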

If we need to represent universals that discriminate differently than scientific attributes, we can use sets or collections of structures different from $\alpha$ insofar as the universalizing function is no more discriminating than those functions that assign predicates to the same universal in case the predicates are logically equivalent.

For these universals, the identity conditions are the following: functions (i.e., attributes) are identical just in case they assign the same values (i.e., extensions of predicates) to the same arguments (i.e., members of $\alpha$).   Or simply say that two predicates $\textup{F}$ and $\textup{G}$ express the same attribute if, and only if, $\forall x_{1} \cdots \forall x_{n}(\textup{F}(x_{1}, \dots, x_{n}) \leftrightarrow \textup{G}(x_{1}, \dots, x_{n}))$ is a law of science.

This last bit will be what links the discussion of attributes to that of reducibility –since we need to make use of explicit definability.

In the next set of notes I’ll go into the difference that Hellman notes between the ideological and ontological status of universals.

## Determination and Definability: Newtonian Mechanics & Statistical Dynamics

September 30, 2010

The last determination-without-reduction example I’ll talk about from Hellman is the one he gives regarding classical particle mechanics and statistical mechanics.   A process is reversible if it could proceed forward in time as well as backwards (think of playing, in reverse, a video of a model of the orbits of the planets around the sun).  A process is irreversible when it can only proceed in one direction of time and would violate physical law proceeding in the opposite direction (think of playing, in reverse, a video of a gas escaping a bottle).

Newtonian laws governing motion are time-symmetric and reversible: forward motions of Newtonian systems are on a par with motions backwards in time.  Statistical mechanics, on the other hand, attempts to explain the irreversible behavior of the higher-level observable phenomena of thermodynamics, like temperature, diffusion, pressure and entropy.  The macroscopic properties of thermodynamics are defined in terms of the phase quantities of Newtonian mechanics with the addition of a measure-theoretic probability density function, as well as some a priori assumptions about distribution (e.g., equiprobability of equal volumes of phase space).  For macroscopic properties like entropy, more complex probabilistic concepts come into play: dividing phase space into cells and supplementing the mechanical motions with periodic averaging over the cells, so that entropy increases and the distribution density tends uniformly to equilibrium.  This is what the Ehrenfests (Paul and Tatiana) called coarse-graining, and it is a method for converting a probability density in phase space into a piecewise-constant function by averaging the density over cells in phase space. Coarse-grained densities are needed to avoid paradoxical results concerning how the irreversible processes of thermodynamics arise from completely reversible mechanical interactions.
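A minimal sketch of coarse-graining (my illustration, in one dimension; real phase space is higher-dimensional): replace the density’s values in each cell by the cell average, producing a piecewise-constant function that conserves total probability.

```python
# Coarse-grain a sampled density: average over each cell and replace
# the cell's values by that average (a piecewise-constant density).
def coarse_grain(density, cell_size):
    grained = []
    for start in range(0, len(density), cell_size):
        cell = density[start:start + cell_size]
        grained.extend([sum(cell) / len(cell)] * len(cell))
    return grained

fine = [0.75, 0.25, 0.25, 0.75]   # fine-grained samples in 4 bins
print(coarse_grain(fine, 2))      # [0.5, 0.5, 0.5, 0.5]
# Total probability is unchanged: sum is the same before and after.
```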

The question Hellman poses is: can the higher-level concepts of thermodynamics be explicitly defined in the language of Newtonian mechanics?  Determination, at least, holds: having fixed any two closed particle systems that are identical at the level of Newtonian mechanics, their higher-level behavior (studied by thermodynamics) will be identical.  Each of these systems will be represented by the same trajectory in phase space.  Hellman gives the example that if one system enters a higher-entropy region at a given time, then the other system enters the same entropy region at that time.

Definability, on the other hand, is more difficult to establish, since the language of classical Newtonian mechanics is not as mathematically robust as that of statistical dynamics.  And ultimately, definability in this case requires a significant change to the language of mechanics: additional vocabulary for measure theory, or even set theory, to speak of mathematical objects more generally.

We’ve covered some examples of determination without reduction: (1) no explicit definition of the truth predicate $\mathsf{Tr}(x)$ in $\textup{L}$, in spite of $\textup{L}$-truth and $\textup{L}$-reference determining $\mathsf{Tr}$-truth and $\mathsf{Tr}$-reference, respectively, in $\alpha$-structures; (2) in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$; (3) the mechanical properties of two fixed, closed particle systems determine their macro-level thermodynamic properties without thereby establishing definability, as the language of Newtonian mechanics would have to undergo significant changes, incorporating at least the language of measure theory.

Next, I’m going to cover an important distinction Hellman makes between the ontological and ideological status of properties, relations, attributes, etc. beyond predicates and sets.  After that I’ll go into a detailed, critical evaluation of some of the anti-materialist claims David Chalmers makes in The Conscious Mind: In Search of a Fundamental Theory.

## Hellman’s Second Definability Example

September 27, 2010

Tarski’s theorem shows that the set $\mathsf{Th} (\Omega)^\#$ of code numbers of sentences true in $\Omega$ is not definable.  This has negative consequences for the prospects of arithmetically defining a truth predicate for arithmetic. Nevertheless, Tarski showed how to define truth in terms of satisfaction, giving an inductive definition of satisfaction that begins with atomic sentences and works up through sentences of higher and higher complexity in terms of the satisfaction of their parts. While a stronger set theory or higher-order logic is still required to convert the inductive definition into an explicit one, Hellman investigates how Det-T and Det-R measure up against this weaker type of definability.
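The inductive shape of Tarski’s definition can be seen in a toy evaluator (my sketch, for a tiny fragment of sentences with numerals already evaluated): the truth of a compound sentence is computed from the truth of its parts.

```python
# Toy Tarski-style truth definition: sentences are nested tuples, and
# truth of a compound is defined inductively from its parts, bottoming
# out at atomic claims about numbers.
def true_in_N(sentence):
    op = sentence[0]
    if op == "eq":                       # atomic: m = n
        return sentence[1] == sentence[2]
    if op == "lt":                       # atomic: m < n
        return sentence[1] < sentence[2]
    if op == "not":                      # truth via the part's truth
        return not true_in_N(sentence[1])
    if op == "and":
        return true_in_N(sentence[1]) and true_in_N(sentence[2])
    raise ValueError("unknown connective: %r" % op)

# "1 < 2 and not 0 = 1" is true in the standard model.
print(true_in_N(("and", ("lt", 1, 2), ("not", ("eq", 0, 1)))))  # True
```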

Addison’s theorem establishes that the class of arithmetically definable sets of numbers is itself not an arithmetically definable class of sets.  This means that in the language, $\textup{L}$, of arithmetic extended by a one-place predicate $\textup{G}(x)$, there is no formula $\textup{S}$ such that $\textup{S}$ is true in $\Omega$ under an assignment of a set $\textup{A}$ of numbers to $\textup{G}(x)$ just in case $\textup{A}$ is definable over $\Omega$.  The proof is involved and is clinched by a contradiction on the existence of a generic arithmetical set (I may, or may not, get around to explaining what this is in the next post or so, since it involves an explanation of the technique of forcing).

The example turns on this: Addison’s theorem shows that the predicate $\textup{DF}arith$ = ‘set of numbers definable in arithmetic’ is not inductively definable in  arithmetic. Nevertheless, $\textup{DF}arith$ is determined by the primitive predicates of $\textup{L}$.  Set out the following: $\alpha$ is the set of standard $\omega$-models of arithmetic.  Now extend each model $m \in \alpha$ by adding the class $\textup{C}$ of all sets $\textup{X}$ of natural numbers from the domain of $m$ such that $\textup{X}$ is in the extension in $m$ of a formula $\textup{B}(x)$ of $\textup{L}$ with one free variable.  Let $\alpha^{\textup{C}}$ be the class of $\alpha$-structures extended in this way.

So, $\alpha^{\textup{C}}$ contains all the standard models of arithmetic that have standard interpretations of $\textup{DF}arith$. This means that in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$.  In spite of this lack of definability, Det-R still holds, since (and this is all up to isomorphism) any two structures that assign the same interpretations to the primitives of $\textup{L}$ must also assign the same extensions to the well-formed formulas of $\textup{L}$ with only one free variable.  So, up to isomorphism, the same sets of natural numbers are assigned to the distinguished elements of $\textup{C}$.

The next example from Hellman is not mathematical, but from classical particle mechanics.  After that I will go into Hellman’s clarification on the difference between the ontological and ideological status of attributes, properties and relations before moving into constructive work on the mental.

## Fun With Robinson Arithmetic

September 17, 2010

In yesterday’s notes I mentioned Robinson Arithmetic ($\mathsf{Q}$) as a subsystem of the axiom system $\textup{T}$ in Hellman’s example. Just because it’s so much fun, let’s talk about $\mathsf{Q}$.  The axioms are listed below.  The dot notation distinguishes between the symbol used and the relation it represents (this notation was used earlier without explanation here and here). $\dot{\mathsf{S}}x$ indicates the successor function, which returns the successor of $x$.

## Robinson Arithmetic

$(\textup{S1}) \ \forall x (\neg \dot{0} \ \dot{=} \ \dot{\mathsf{S}}x)$
$(\textup{S2}) \ \forall x \forall y (\dot{\mathsf{S}}x \ \dot{=} \ \dot{\mathsf{S}} y \rightarrow x \dot{=} y)$
$(\textup{L1}) \ \forall x (\neg x \dot{<} \dot{0})$
$(\textup{L2}) \ \forall x \forall y [x \dot{<} \dot{\mathsf{S}}y \leftrightarrow (x \dot{<} y \vee x \dot{=} y)]$
$(\textup{L3}) \ \forall x \forall y [(x \dot{<} y) \vee (x \dot{=} y) \vee (y \dot{<} x)]$
$(\textup{A1}) \ \forall x (x \dot{+} \dot{0} \dot{=} x)$
$(\textup{A2}) \ \forall x \forall y [x \dot{+} \dot{\mathsf{S}}y \dot{=} \dot{\mathsf{S}}(x \dot{+} y)]$
$(\textup{M1}) \ \forall x (x \dot{\times} \dot{0} \dot{=} \dot {0})$
$(\textup{M2}) \ \forall x \forall y [(x \dot{\times} \dot{\mathsf{S}}y) \dot{=} (x \dot{\times} y) \dot{+} x]$
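In the standard model these axioms are easy to spot-check mechanically (a sketch, checking only finitely many instances, with A1/A2 and M1/M2 read as recursion equations for addition and multiplication):

```python
# Spot-check instances of the axioms over the natural numbers, reading
# S as successor and A1/A2, M1/M2 as recursive definitions of + and *.
def S(x):
    return x + 1

def add(x, y):                  # A1: x + 0 = x;  A2: x + Sy = S(x + y)
    return x if y == 0 else S(add(x, y - 1))

def mul(x, y):                  # M1: x * 0 = 0;  M2: x * Sy = x * y + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

ns = range(6)
assert all(add(x, 0) == x for x in ns)                                 # A1
assert all(add(x, S(y)) == S(add(x, y)) for x in ns for y in ns)       # A2
assert all(mul(x, 0) == 0 for x in ns)                                 # M1
assert all(mul(x, S(y)) == add(mul(x, y), x) for x in ns for y in ns)  # M2
assert all((x < S(y)) == (x < y or x == y) for x in ns for y in ns)    # L2
```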

These axioms are from Hinman and correspond to what Boolos, Burgess and Jeffrey (in chapter 16 of Computability and Logic) call “minimal arithmetic”. Since the Boolos et al book is so widely studied, here is a map linking both axiomatizations.

$(\textup{S1}) \mapsto 1$
$(\textup{S2}) \mapsto 2$
$(\textup{L1}) \mapsto 7$
$(\textup{L2}) \mapsto 8$
$(\textup{L3}) \mapsto 9$
$(\textup{A1}) \mapsto 3$
$(\textup{A2}) \mapsto 4$
$(\textup{M1}) \mapsto 5$
$(\textup{M2}) \mapsto 6$

This is going to be as confusing as keeping track of names in One Hundred Years of Solitude. Boolos et al compare $\mathsf{Q}$/minimal arithmetic to the system $\mathsf{R}$, which has historically been called “Robinson Arithmetic”.  $\mathsf{R}$ differs from $\mathsf{Q}$ in that it contains,

$(\textup{Q0}) \ \forall x [x \dot{=} \dot{0} \vee (\exists y) (x \dot{=} \dot{\mathsf{S}}y)]$

And it replaces $\textup{L1}$-$\textup{L3}$ with,

$(\textup{Q10}) \ \forall x \forall y [x \dot{<} y \leftrightarrow \exists z (\dot{\mathsf{S}}z \dot{+} x \dot{=} y)]$

In their comparison, Boolos et al conclude that $\mathsf{Q}$ and $\mathsf{R}$ have a lot in common (e.g., some of the same theorems are provable), but in some cases $\mathsf{R}$ is stronger than $\mathsf{Q}$ – e.g., in the non-standard ordinal model of $\mathsf{Q}$, some simple laws (like $\dot{1} \dot{+} x \dot{=} x \dot{+} \dot{1}$) fail to hold, in addition to $(\textup{Q0})$ and $(\textup{Q10})$.  At the same time, however, $\mathsf{Q}$ is in some cases stronger than $\mathsf{R}$.  For example, there is a model of $\mathsf{R}$ (with domain $\omega$ plus non-standard elements $a$ and $b$, and natural interpretations of $\dot{0}$, $\dot{+}$, $\dot{\times}$, $\dot{\mathsf{S}}x$) where basic laws like $x \dot{<} \dot{\mathsf{S}}x$ fail to hold.

While it’s easier to represent all recursive functions in $\mathsf{Q}$ than it is in $\mathsf{R}$, any one of these systems will do for Hellman’s example.  What’s interesting is that even in theories like $\mathsf{Q}$ and $\mathsf{R}$, which lack an induction schema (and thus fall just short of Peano Arithmetic), the truth predicate is undefinable.

Hopefully in the next update I’ll be able to get to Hellman’s second definability example.

## Hellman’s First Definability Example

September 15, 2010

What Tarski’s Theorem shows is that interesting interpreted formal languages (i.e., those with enough expressive machinery to represent arithmetic or fragments thereof) cannot contain a predicate whose extension is the set of code numbers (e.g., $\mathsf{Th}(\Omega)^\#$) of sentences true in the interpretation.  The extension of any proposed truth predicate in such a system escapes the definitional machinery of the system.  Of course, the truth predicate for first-order arithmetic can be defined by appeal to a stronger system, like second-order arithmetic in the case of the Peano Axioms, etc.

Hellman’s first example is the following.  It is a corollary of Tarski’s theorem that if a theory in the language, $\textup{L}$, of arithmetic (e.g., an axiom system $\textup{T}$ containing Robinson Arithmetic ($\mathsf{Q}$)), with symbols for zero, successor, addition, and multiplication, is extended with a one-place predicate $\mathsf{Tr}(x)$ (read “true in arithmetic”) such that for each closed sentence $\textup{S}$ in $\textup{L}$ there is a new axiom of the form $\ulcorner \mathsf{Tr}(n) \leftrightarrow \textup{S}\urcorner$ (where $n$ is the numeral for a code number of the sentence $\textup{S}$), then the resulting theory $\textup{T}^{*}$ contains no explicit definition of $\mathsf{Tr}(x)$ in $\textup{L}$.

Connecting this to our ongoing discussion of determination of truth and reference in special collections of models, suppose that $\alpha$ is the class of standard $\omega$-models of $\textup{T}^{*}$.  Then we have:

• In $\alpha$-structures $\textup{L}$-truth determines $\mathsf{Tr}$-truth.
• In $\alpha$-structures $\textup{L}$-reference determines $\mathsf{Tr}$-reference.

This means that once the arithmetical truths are fixed in the class $\alpha$, so are the ‘true-in-arithmetic’ truths, and the same goes for the reference of the vocabularies.  To avoid collapsing into reductionism via Beth’s theorem, note that no first-order theory (like those under discussion) in a language with finitely many non-logical symbols has as its models just the models in $\alpha$.

If you extend $\alpha$ to $\alpha^{*}$, containing all models of $\textup{T}^{*}$, then you do get reductionism, since determination of reference in $\alpha^{*}$ amounts to implicit definability in $\textup{T}^{*}$ – thus showing that there exist non-standard models of arithmetic.

This is a good example because it is clear, based on popular, well-established results, and firmly shows how determination of truth and reference in one core theory carries over to its extension, without thereby reducing the extension to the core.

In the next update I’ll discuss Hellman’s second definability example.

## Tarski’s Theorem

September 14, 2010

Today we’re going to follow Hinman’s proof of Tarski’s theorem. The proof is by contradiction and employs diagonalization. First, assuming clauses (i)–(iii) on definability, a universal $\textup{U}$-section for the class of $\Omega$-definable sets is established, and its diagonal is shown not to be definable over $\Omega$.

A $\textup{U}$-section is a set defined for any set $\textup{A}$ and any relation $\textup{U} \subseteq \textup{A} \times \textup{A}$ such that for each $a \in \textup{A}$, $\textup{A}_{a} := \{b : \textup{U}(a, b)\}$.  $\textup{U}$ is universal for a class $\mathcal{C} \subseteq \wp (\textup{A})$ if, and only if, every member of the class $\mathcal{C}$ is a $\textup{U}$-section.  The diagonal set of $\textup{U}$ is $\textup{D}_{\textup{U}} := \{a : \textup{U}(a, a)\}$.
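The diagonal trick that drives such proofs can be seen in miniature (my sketch; it uses the complement of the diagonal, which can never be a $\textup{U}$-section):

```python
# Diagonalization in miniature: for any relation U on a finite set A,
# the complement of the diagonal differs from every U-section at the
# section's own index, so it is never itself a U-section.
A = {0, 1, 2}
U = {(0, 0), (0, 1), (1, 2), (2, 1)}

def section(a):
    # The U-section at a: {b : U(a, b)}.
    return {b for b in A if (a, b) in U}

anti_diagonal = {a for a in A if (a, a) not in U}
print(sorted(anti_diagonal))                        # [1, 2]
print(all(section(a) != anti_diagonal for a in A))  # True
```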

Second, assuming the definability of $\mathsf{Th}(\Omega)^{\#}$, there is a formula $\phi (y)$ such that for all $p$, $p \in \mathsf{Th}(\Omega)^{\#} \Longleftrightarrow \phi (\dot{p}) \in \mathsf{Th}(\Omega)$.  But since the function $\mathsf{Sb}$ is effectively computable, it is definable  and so, $\mathsf{Sb}(m) = p \Longleftrightarrow \psi (\dot{m}, \dot{p}) \in \mathsf{Th}(\Omega)$, for some $\psi (x, y)$.  But this means that,

$m \in \textup{D}_{\textup{U}} \Longleftrightarrow \exists p [\phi(\dot{p}) \in \mathsf{Th}(\Omega)$ and $\psi(\dot{m}, \dot{p}) \in \mathsf{Th}(\Omega)]$

$m \in \textup{D}_{\textup{U}} \Longleftrightarrow \exists y [\phi(y) \wedge \psi(\dot{m}, y)] \in \mathsf{Th}(\Omega)$,

which means that $\textup{D}_{\textup{U}}$ is definable.  This is a contradiction.  So, $\mathsf{Th}(\Omega)^{\#}$ is not definable and not effectively enumerable.  Nor is $\mathsf{Th}(\Omega)$ effectively enumerable.

In the next update (probably on a plane!) I’ll discuss this theorem and get into Hellman’s example.