## Archive for the ‘Supervenience and Determination’ Category

### Determination and Definability: Newtonian Mechanics & Statistical Dynamics

September 30, 2010

The last determination-without-reduction example I’ll talk about from Hellman is the one he gives regarding classical particle mechanics and statistical mechanics. A process is reversible if it could proceed forward in time as well as backwards (think of playing, in reverse, a video of a model of the orbits of the planets around the sun). A process is irreversible when it can proceed in only one direction of time and would violate physical law proceeding in the opposite time direction (think of playing, in reverse, a video of a gas escaping a bottle).

Newtonian laws governing motion are time-symmetric and reversible: forward motions of Newtonian systems are on a par with motions backwards in time. Statistical mechanics, on the other hand, attempts to explain the irreversible behavior of the higher-level observable phenomena of thermodynamics, like temperature, diffusion, pressure and entropy. The macroscopic properties of thermodynamics are defined in terms of the phase quantities of Newtonian mechanics with the addition of a measure-theoretic probability density function as well as some a priori assumptions about distribution (e.g., equiprobability of equal volumes of phase space). For macroscopic properties like entropy, more complex probabilistic concepts come into play: divide phase space into cells and supplement the mechanical motions with periodic averaging over the cells; then entropy increases and the distribution density tends uniformly to equilibrium. This is what the Ehrenfests (Paul and Tatiana) called coarse-graining, a method for converting a probability density in phase space into a piecewise-constant function by averaging the density over cells of phase space. Coarse-grained densities are needed to avoid paradoxical results concerning how the irreversible processes of thermodynamics arise from completely reversible mechanical interactions.
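The cell-averaging step of coarse-graining can be sketched in a one-dimensional toy (entirely my own illustration, not the Ehrenfests' or Hellman's formalism): a fine-grained density sampled on a grid is replaced by the piecewise-constant function obtained by averaging within each cell.

```python
# Toy sketch of coarse-graining: replace a fine-grained density on a 1-D
# "phase space" grid by a piecewise-constant function, averaging within
# each cell of the partition. (My own illustration.)

def coarse_grain(density, cell_size):
    """Replace each cell of `cell_size` consecutive grid points by the
    cell's average value, yielding a piecewise-constant density."""
    out = []
    for i in range(0, len(density), cell_size):
        cell = density[i:i + cell_size]
        avg = sum(cell) / len(cell)
        out.extend([avg] * len(cell))
    return out

# A sharply concentrated fine-grained density on an 8-point grid:
fine = [0.0, 0.0, 0.75, 0.25, 0.0, 0.0, 0.0, 0.0]
coarse = coarse_grain(fine, 2)  # -> [0.0, 0.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]

# Cell averaging smooths structure while preserving total probability:
assert abs(sum(fine) - sum(coarse)) < 1e-12
```

The point of the toy is only that averaging loses fine-grained information (the peak) while conserving the measure, which is what lets the coarse-grained density evolve toward uniformity even though the underlying dynamics is reversible.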

The question Hellman poses is: can the higher-level concepts of thermodynamics be explicitly defined in the language of Newtonian mechanics? Determination, at least, holds: fix any two closed particle systems that are identical at the level of Newtonian mechanics, and their higher-level behavior (studied by thermodynamics) will be identical. Each of these systems will be represented by the same trajectory in phase space. Hellman gives the example: if one system enters higher-entropy regions at a given time, then the other system enters the same entropy regions.

Definability, on the other hand, is more difficult to establish, since the language of classical Newtonian mechanics is not as mathematically robust as that of statistical dynamics. And ultimately, definability in this case requires a significant change to the language of mechanics: additional vocabulary for measure theory, or even set theory, to speak of mathematical objects more generally.

We’ve covered some examples of determination without reduction: (1) No explicit definition of the truth predicate $\mathsf{Tr}(x)$ in $\textup{L}$ in spite of  $\textup{L}$-truth and $\textup{L}$-reference determining $\mathsf{Tr}$-truth and $\mathsf{Tr}$-reference, respectively, in $\alpha$-structures; (2) in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$; (3) The mechanical properties of two fixed, closed particle systems determine their macro-level thermodynamic properties without thereby establishing definability, as the language of Newtonian mechanics would have to undergo significant changes incorporating the language, at least, of measure theory.

Next, I’m going to cover an important distinction Hellman makes between the ontological and ideological status of properties, relations, attributes, etc. beyond predicates and sets. After that I’ll go into a detailed, critical evaluation of some of the anti-materialist claims David Chalmers makes in The Conscious Mind: In Search of a Fundamental Theory.

### Hellman’s Second Definability Example

September 27, 2010

Tarski’s theorem shows that the set $\mathsf{Th} (\Omega)^\#$ of code numbers of sentences true in $\Omega$ is not arithmetically definable. This has negative consequences for the prospects of arithmetically defining a truth predicate for arithmetic. Nevertheless, Tarski showed how to define truth in terms of satisfaction, giving an inductive definition of satisfaction that begins with atomic sentences and works up through sentences of higher and higher complexity in terms of the satisfaction of their parts. While a stronger set theory or higher-order logic is still required to convert the inductive definition into an explicit one, Hellman investigates how Det-T and Det-R measure up against this weaker type of definability.
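The inductive strategy can be sketched in a toy propositional setting (entirely my own illustration, with an invented tuple encoding of sentences; Tarski's actual construction defines satisfaction by assignments in a first-order language):

```python
# Toy sketch of the inductive strategy: the truth value of a complex
# sentence is defined in terms of the truth values of its parts, bottoming
# out in atomic sentences. Atoms are strings; complex sentences are tuples.

def true_in(sentence, interpretation):
    if isinstance(sentence, str):          # base case: atomic sentences
        return interpretation[sentence]
    op, *parts = sentence                  # inductive cases, by main connective
    if op == "not":
        return not true_in(parts[0], interpretation)
    if op == "and":
        return all(true_in(p, interpretation) for p in parts)
    raise ValueError(f"unknown connective: {op}")

# An interpretation assigning truth values to the atomic sentences:
omega = {"0=0": True, "0=1": False}
assert true_in(("and", "0=0", ("not", "0=1")), omega)
```

The recursion mirrors the shape of the definition: each clause reduces the truth of a sentence to the truth of strictly simpler sentences, which is exactly why converting the inductive definition into an explicit one requires quantifying over the whole satisfaction relation in a stronger theory.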

Addison’s theorem establishes that the class of arithmetically definable sets of numbers is itself not an arithmetically definable class of sets. This means that in the language, $\textup{L}$, of arithmetic extended by a one-place predicate $\textup{G}(x)$, there is no formula $\textup{S}$ that is true in $\Omega$ when $\textup{G}(x)$ is assigned a set $\textup{A}$ of numbers just in case $\textup{A}$ is definable over $\Omega$. The proof is involved and is clinched by deriving a contradiction from the existence of a generic arithmetical set (I may, or may not, get around to explaining what this is in the next post or so, since it involves explaining the technique of forcing).

The example turns on this: Addison’s theorem shows that the predicate $\textup{DF}arith$ = ‘set of numbers definable in arithmetic’ is not inductively definable in  arithmetic. Nevertheless, $\textup{DF}arith$ is determined by the primitive predicates of $\textup{L}$.  Set out the following: $\alpha$ is the set of standard $\omega$-models of arithmetic.  Now extend each model $m \in \alpha$ by adding the class $\textup{C}$ of all sets $\textup{X}$ of natural numbers from the domain of $m$ such that $\textup{X}$ is in the extension in $m$ of a formula $\textup{B}(x)$ of $\textup{L}$ with one free variable.  Let $\alpha^{\textup{C}}$ be the class of $\alpha$-structures extended in this way.

So, $\alpha^{\textup{C}}$ contains all the standard models of arithmetic that have standard interpretations of $\textup{DF}arith$. This means that in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$. In spite of this lack of definability, Det-R still holds since (and this is all up to isomorphism) any two structures that assign the same interpretations to the primitives of $\textup{L}$ must also assign the same extensions to the well-formed formulas of $\textup{L}$ with only one free variable. So, up to isomorphism, the same sets of natural numbers are assigned to the distinguished elements of $\textup{C}$.

The next example from Hellman is not mathematical but comes from classical particle mechanics. After that I will go into Hellman’s clarification of the difference between the ontological and ideological status of attributes, properties and relations before moving into constructive work on the mental.

### Hellman’s First Definability Example

September 15, 2010

What Tarski’s Theorem shows is that interpreted formal languages that are interesting (i.e., with enough expressive machinery to represent arithmetic or fragments thereof) cannot contain a predicate whose extension is the set of code numbers (e.g., $\mathsf{Th}(\Omega)^\#$) of sentences true in the interpretation.  The extension of any proposed truth predicate in such a system escapes the definitional machinery of the system.  Of course, the truth predicate for first-order arithmetic can be defined with appeal to  a stronger system, like second-order arithmetic, in the case of the Peano Axioms, etc.

Hellman’s first example is the following. It is a corollary of Tarski’s theorem that for a theory in the language, $\textup{L}$, of arithmetic (e.g., an axiom system $\textup{T}$ containing Robinson Arithmetic ($\mathsf{Q}$)) with symbols for zero, successor, addition, and multiplication, when extended with a one-place predicate $\mathsf{Tr}(x)$ (read “true in arithmetic”) such that for each closed sentence $\textup{S}$ in $\textup{L}$ a new axiom of the form $\ulcorner \mathsf{Tr}(n) \leftrightarrow \textup{S}\urcorner$ is added (where $n$ is the numeral for a code number of the sentence $\textup{S}$), the resulting theory $\textup{T}^{*}$ contains no explicit definition of $\mathsf{Tr}(x)$ in $\textup{L}$.
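The way the new axioms of $\textup{T}^{*}$ are generated can be sketched as follows (a toy of my own, with an invented coding function standing in for a real Gödel numbering):

```python
# Toy sketch of generating the Tarski-biconditional axioms that extend T to
# T*: for each closed sentence S of L with code number n, add the axiom
# Tr(n) <-> S. (The coding here is an arbitrary injective map, invented
# purely for illustration.)

def tarski_axioms(sentences, code):
    """Given closed L-sentences and a coding function, return the new
    axioms of the form 'Tr(n) <-> S' as strings."""
    return [f"Tr({code(s)}) <-> {s}" for s in sentences]

# A stand-in "Goedel numbering": any injective sentence-to-number map
# serves the illustration.
sentences = ["0 = 0", "0 = s(0)"]
codes = {s: i for i, s in enumerate(sentences, start=1)}

axioms = tarski_axioms(sentences, codes.get)
print(axioms[0])  # Tr(1) <-> 0 = 0
```

Tarski's theorem is then the claim that no formula of $\textup{L}$ alone can play the role of the new predicate across all these biconditionals at once.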

Connecting this to our ongoing discussion of determination of truth and reference in special collections of models, suppose that $\alpha$ is the class of standard $\omega$-models of $\textup{T}^{*}$.  Then we have:

• In $\alpha$-structures $\textup{L}$-truth determines $\mathsf{Tr}$-truth.
• In $\alpha$-structures $\textup{L}$-reference determines $\mathsf{Tr}$-reference.

This means that once the arithmetical truths are fixed in the class $\alpha$, so are the ‘true-in-arithmetic’ truths, and the same goes for the reference of the two vocabularies. To avoid collapsing into reductionism via Beth’s theorem, note that no first-order theory (like those under discussion) in a language with finitely many non-logical symbols has as its models just the structures in $\alpha$.

If you extend $\alpha$ to $\alpha^{*}$ containing all models of $\textup{T}^{*}$, then you do get reductionism, since determination of reference in $\alpha^{*}$ amounts to implicit definability in $\textup{T}^{*}$, and hence, by Beth’s theorem, to explicit definability. Since Tarski’s theorem rules the explicit definition out, determination of reference must fail in $\alpha^{*}$; this shows that $\textup{T}^{*}$ has non-standard models beyond those in $\alpha$.

This is a good example because it is clear, based on popular, well-established results, and it firmly shows how determination of truth and reference in one core theory carry over to its extension, without thereby reducing the extension to the core.

In the next update I’ll discuss Hellman’s second definability example.

### “Physicalism” Concluding Summary

September 7, 2010

It took me a good while to get through this paper, more than I expected, but here we are.

What has Hellman accomplished?

First he showed us how to build the ontological principle of physical exhaustion, PE, $(\forall x)(\exists \alpha) (x \in \textup{R}(\alpha))$. PE allows us to say that everything is exhausted by the physical without (embarrassingly) implying that everything is in the extension of a basic physical predicate.

Then he introduced the identity of physical indiscernibles, IPI, and IPI’: $(\forall u)(\forall v)((\forall \phi) (\phi u \leftrightarrow \phi v) \rightarrow (u = v))$ and $(\forall \psi) (\forall u) (\forall v) (\exists \phi) (\psi u \wedge \lnot \psi v \rightarrow \phi u \wedge \lnot \phi v)$, respectively. IPI says that if two objects have the same physical properties, then they are the same thing, while IPI’ says that no two objects are distinct with respect to a $\psi$ property without being distinct with respect to a $\phi$ property.

These three principles neither independently nor in conjunction imply or require reduction to the physical, but they are also too weak to express the physicalist thesis that physical phenomena determine all phenomena. He then turns his attention to principles of determination. In addition to PE and IPI/IPI’ Hellman introduces the determination of truth, Det-T: in $\alpha$-structures $\phi$ truth determines $\psi$ truth if, and only if, $(\forall m)(\forall m')((m, m' \in \alpha \wedge m \vert \phi \equiv m' \vert \phi) \rightarrow m \vert \psi \equiv m' \vert \psi)$, and Det-R: in $\alpha$-structures $\phi$ reference determines $\psi$ reference if, and only if, $(\forall m)(\forall m') ((m, m' \in \alpha \wedge m \vert \phi = m' \vert \phi) \rightarrow m \vert \psi = m' \vert \psi)$.

Det-T says that once you have given a complete description of things in $\phi$ terms, there is only one correct way to describe them in $\psi$ terms.  Det-R says that if two $\alpha$ structures agree in what they assign to the $\phi$ terms, then they agree on what they assign to the $\psi$ terms.

Given a notion of definability, Hellman is able to state the thesis of physical reduction, PR: in $\alpha$-structures, $\phi$ reduces $\psi$ if, and only if, $(\forall \textup{P})(\textup{P} \in \psi \rightarrow \textup{P}$ is definable in terms of $\phi$ in $\alpha$-structures$)$.

Given that assumptions about the mathematical-physical determination of all truths (and perhaps reference) are regulative principles of scientific theory construction, the assumption that all terms and all theories are reducible to mathematical-physical terms is probably false. Hellman’s physicalist materialism is instead composed of PE, Det-T and Det-R, with PE independent of PR and the determination principles.

Because there are non-standard models of the laws of science, our formal systems do not model scientific possibility in a way that permits the move from physical determination to physical reduction; thus Beth’s definability theorem poses no threat to physicalist materialism. And any way you cut it, the link between theories as syntactic entities and reductionism doesn’t carry over to determination of reference, since determination can hold without even accidental co-extensiveness between terms.

This sets up the theoretical background for an evaluation of the applications of physicalist materialism across disciplines and problems in philosophy of science, mind and social theory. In the next updates I will be covering Hellman’s 1977 “Physicalist Materialism” (Noûs) as well as giving a detailed, critical evaluation of some of the anti-materialist claims David Chalmers makes in the early chapters of his book The Conscious Mind: In Search of a Fundamental Theory.

### Theories and Determination

September 3, 2010

What we saw about theories and reduction is that in the case of reducibility there is a theory that enables the proof of the definitions underwriting the reduction. By contrast, $\psi$ reference can be determined by $\phi$ reference in $\alpha$-structures while no term in $\psi$ is (even accidentally) co-extensional with any term in the vocabulary $\phi$. This means that determination of reference (Det-R) does not imply physical reductionism (PR), even in cases where the set $\alpha$ in Det-R is replaced by the singleton of some member of $\alpha$.

The upshot is that the link between theories as syntactic entities and reductionism doesn’t carry over to determination of reference and even accidental co-extensiveness between terms is ruled out.

This concludes my notes on Hellman’s “Physicalism: Ontology, Determination, and Reduction”.  In the next update I’ll summarize what has been covered and give some examples and clarifications from another paper by Hellman, “Physicalist Materialism”, that appeared in Noûs in the late ’70s.

### Theories and Reduction

September 2, 2010

A theory $\Gamma$ is a set whose members are just the sentences of a language $\textup{L}$ that follow from the set. In other words, $\Gamma$ is a theory in the language $\textup{L}$ if, and only if, $\Gamma$ is closed under logical consequence, i.e.,

$\Gamma \models \psi$ and $\psi$ is a sentence $\Longrightarrow \psi \in \Gamma$.

The elements of $\Gamma$ are the theorems of $\Gamma$.
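Closure under consequence can be made concrete in a finite propositional stand-in (my own illustration; Hellman's theories are first-order and infinite): relative to a fixed stock of candidate sentences, the theory of a set of axioms is every candidate true in all models of the axioms.

```python
from itertools import product

# Finite propositional analogue of closure under logical consequence:
# Gamma |= s and s a candidate sentence  =>  s in Gamma.
# Sentences are Python boolean expressions over the atoms. (My own toy.)

ATOMS = ["p", "q"]

def models_of(sentences):
    """All truth assignments to ATOMS satisfying every sentence."""
    assignments = [dict(zip(ATOMS, vals))
                   for vals in product([True, False], repeat=len(ATOMS))]
    return [a for a in assignments
            if all(eval(s, {}, a) for s in sentences)]

def closure(axioms, candidates):
    """Candidates true in every model of the axioms."""
    ms = models_of(axioms)
    return {s for s in candidates if all(eval(s, {}, m) for m in ms)}

candidates = ["p", "q", "p or q", "p and q", "not q"]
theory = closure(["p", "p or q"], candidates)
# q is left open by the axioms, so neither "q" nor "not q" gets in:
assert theory == {"p", "p or q"}
```

The closure operation is what makes a theory, in this syntactic sense, the right vehicle for reduction: any definition provable from the axioms is automatically an element of the theory.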

Hellman notes that a theory, construed syntactically in this way, is essentially connected to the notion of reducibility, but is not so connected to the notion of determination. I’ll talk about reduction now, and then in the next update I’ll deal with determination.

Reduction applies in the realm of definitions, which are syntactic entities that facilitate the elimination of definable terms. He elaborates on this point in a footnote. If in $\alpha$-structures $\phi$ reduces $\psi$, there is a (not necessarily recursively enumerable) theory in which every definition enabling the reduction of $\psi$ to $\phi$ is provable. And its provability doesn’t depend on whether or not $\alpha$ is the set of models of that theory. Once you have $\alpha$, you can set out the theory,

$\bigcap \{\gamma : (\exists m) (m \in \alpha \wedge m$ is a model of $\gamma )\}$

This theory is the intersection of the theories of each of the models in $\alpha$ and contains every definition needed to reduce $\psi$ to $\phi$.    Physical Reduction, PR, is equivalent to,

In $\{ m: m$ models $\bigcap \{\gamma : (\exists m') (m' \in \alpha \wedge m'$ is a model of $\gamma )\}\}$, $\phi$ reduces $\psi$.

This means that reducibility holds for a collection of structures if, and only if, it holds for the set of models of all sentences that are true in each member of that collection (even if the collection is a proper subset of the set of all models of the sentences true in each of its members).
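The “intersection of the theories of the models in $\alpha$” idea can be illustrated in a propositional miniature (entirely my own toy, with a fixed finite stock of candidate sentences): a sentence survives the intersection just in case it is true in every structure of $\alpha$.

```python
# Finite propositional illustration of the intersected theory: take the
# theory of each model in alpha (relative to a fixed candidate stock) and
# intersect. (My own toy; the models are truth assignments to p, q.)

CANDIDATES = ["p", "q", "p or q", "not p"]

def theory_of(model):
    """Sentences from the candidate stock true in `model`."""
    return {s for s in CANDIDATES if eval(s, {}, model)}

def intersection_theory(alpha):
    """Sentences true in every model of alpha."""
    return set.intersection(*[theory_of(m) for m in alpha])

alpha = [{"p": True, "q": False}, {"p": True, "q": True}]
assert intersection_theory(alpha) == {"p", "p or q"}
```

Since every model of $\alpha$ models the intersected theory, the theory's class of models can outrun $\alpha$ itself, which is the wrinkle the parenthetical remark above is pointing at.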

This won’t work for determination, as it is not so connected to the syntactic nature of theories.

### Beth’s Theorem: Is Determination Equivalent To Reducibility?

September 1, 2010

Hellman sets out the problem posed by Beth’s theorem like this.  All of the $\psi$ terms are implicitly defined by the $\phi$ terms in a theory $\textup{T}$  just in case that $(\forall m)(\forall m')((m, m' \in \textup{M}od (\textup{T}) \wedge m \vert \phi = m' \vert \phi) \rightarrow m \vert \psi = m' \vert \psi)$.  This last expression is an instantiation of Det-R, and is consequently equivalent to:

In $\textup{M}od (\textup{T})$-structures, $\phi$ reference determines $\psi$ reference.

Similarly, all of the $\psi$ terms are explicitly defined by the $\phi$ terms in a theory $\textup{T}$ just in case that:

In $\textup{M}od (\textup{T})$-structures, $\phi$ reduces $\psi$.

Here $\textup{T}$ is first-order and has finitely many non-logical terms. By establishing the equivalence of implicit and explicit definability, Beth’s theorem shows that determination of the reference of $\psi$ by $\phi$ is equivalent to the reduction of $\psi$ to $\phi$. So for such a theory, and with regard to structures that are all, and only, the models of that theory, determination of reference is equivalent to reducibility. But this conclusion (which is to this day a commonly held view) is precisely what Hellman wants to avoid.
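The implicit/explicit pattern can be seen in a propositional miniature (my own illustration; Beth's theorem itself concerns first-order theories): $q$ is implicitly defined by $p$ in $\textup{T}$ when models of $\textup{T}$ agreeing on $p$ agree on $q$, and in that case a formula in $p$ alone is $\textup{T}$-equivalent to $q$.

```python
from itertools import product

# Propositional miniature of implicit vs. explicit definability. Sentences
# are Python boolean expressions in p and q. (My own toy.)

def models(theory):
    """Truth assignments (p, q) satisfying every sentence of `theory`."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(eval(s, {}, {"p": p, "q": q}) for s in theory)]

def implicitly_defined(theory):
    """Agreement on p forces agreement on q across models of the theory."""
    ms = models(theory)
    return all(q1 == q2 for (p1, q1) in ms for (p2, q2) in ms if p1 == p2)

# T says q <-> not p, so fixing p fixes q:
T = ["q == (not p)"]
assert implicitly_defined(T)
# ...and 'not p' serves as an explicit definition of q over models of T:
assert all(q == (not p) for p, q in models(T))
# With no axioms, p leaves q undetermined: no implicit definition.
assert not implicitly_defined([])
```

In the propositional case the move from implicit to explicit definition is trivial; Beth's contribution is that the same equivalence holds for first-order theories with finitely many non-logical symbols, which is exactly what threatens to collapse Det-R into PR.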

There’s a way out. In the more general case where the set $\alpha$ contains non-standard models of $\textup{T}$, determination of reference is not equivalent to reducibility. While physical reductionism, PR, entails determination of reference, Det-R, the converse does not hold, and it fails precisely in cases where some models of the laws of science do not represent scientific possibility (i.e., they are non-standard). What this means is that one can uphold the principles of physical determination without holding that all scientific facts are reducible to the mathematical-physical. Scientific possibility itself can be specified as the subset $\alpha$ of the models of the laws of science in which certain predicates (e.g., those of pure arithmetic) receive their standard interpretation; fixing the structures that model scientific possibility in this way yields a set that does not capture all and only the models of a first-order theory with finitely many non-logical symbols. In general, giving mathematical concepts their standard interpretation yields a set of structures that are not all and only the models of the theory. A(nother!) great insight by Hellman here is that the very question of which models of the laws of science should be excluded to fix the structures modeling scientific possibility is itself a scientific question. Science changes, and so its models must change too.

We should be able to finish off this paper in the next set of notes.

### Determination of Truth

August 19, 2010

§2.1 is about determination.  Hellman begins by introducing the determination of truth about one set of facts by another.  If a collection of facts, A, determines a separate collection of facts, B, then the truths about B cannot change without a change in the truths about A. The task is to spell this out model-theoretically to be able to evaluate the connection between determination and reduction.

We have at our disposal a family of languages where the interpretation of terms appearing in more than one language remains fixed across languages.  We have two sets of non-logical symbols, $\phi$ and $\psi$ and a set $\alpha$ of structures that represent what is scientifically possible.

In elementary model theory, two models $m$ and $m'$ are elementarily equivalent ($m \equiv m'$) if they make the same sentences true. Also, the reduct of a model $m$ to a given vocabulary $\textup{L}$ ($m \vert \textup{L}$) is the structure obtained from $m$ by excluding the interpretations of all the terms not appearing in $\textup{L}$.

Set out the following: In structures $\alpha$, $\phi$ truth determines $\psi$ truth iff $(\forall m)(\forall m') ((m, m' \in \alpha \ \wedge \ m \vert \phi \equiv m' \vert \phi) \rightarrow m \vert \psi \equiv m' \vert \psi)$. This says that once you have fixed the $\phi$ facts, the $\psi$ facts are also fixed; or, once a complete description of things has been given in $\phi$ terms, there is only one correct way to describe things in $\psi$ terms.
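For finite toy cases, Det-T can be rendered directly in code (my own sketch: here a structure is represented by the set of sentences it makes true, and agreement on those truths stands in for elementary equivalence of reducts; the vocabularies and sentences are invented for illustration):

```python
# Toy rendering of Det-T over a finite class of structures: if two
# structures make the same phi-sentences true, they make the same
# psi-sentences true. Each structure is modelled as the set of sentences
# it makes true. (My own sketch.)

def det_t(alpha, phi, psi):
    def truths(m, vocab):
        """Sentences of the given vocabulary that m makes true."""
        return {s for s in m if s in vocab}
    return all(truths(m1, psi) == truths(m2, psi)
               for m1 in alpha for m2 in alpha
               if truths(m1, phi) == truths(m2, phi))

# Invented vocabularies and structures:
phi = {"Fa", "Fb"}
psi = {"Ga"}
alpha = [
    {"Fa", "Ga"},   # makes Fa and Ga true
    {"Fa", "Ga"},   # phi-equivalent to the first, and psi-equivalent too
    {"Fb"},         # different phi-truths, so unconstrained
]
assert det_t(alpha, phi, psi)

# Add a structure phi-equivalent to the first but psi-divergent: Det-T fails.
alpha.append({"Fa"})
assert not det_t(alpha, phi, psi)
```

The failure case shows what determination rules out: two ways of settling the $\psi$ facts compatible with one and the same settling of the $\phi$ facts.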

This notion of determination of truth becomes more engaging when $\alpha$ is a subset of the models of a theory $\textup{T}$ composed of lawlike truths, and $\phi$ and $\psi$ are each subsets of the vocabulary used to articulate $\textup{T}$, with $\psi \not\subset \phi$. If $\textup{T}$ contains sentences with indispensable occurrences of both $\phi$ and $\psi$ terms, then $\textup{T}$ serves to connect the $\phi$ and $\psi$ terms. Such sentences link the determining phenomena with the determined phenomena. If all the models of $\textup{T}$ are elementarily equivalent, or if all of them are isomorphic, then the determination is trivial.

Having set out the determination of truth, Hellman will then introduce the determination of reference before moving on to evaluating reduction.

### Evaluating PE and IPI

August 17, 2010

§1.2 is shorter but makes a very important point about PE and IPI, reductionism and dualism. Let physical reductionism be the claim that for the theory, formulated in a suitable language, that contains all the lawlike truths of science (including physical science), every scientific predicate is definable in physical terms. That is to say, for every n-place predicate P, one can derive, using only the laws of science, a formula of this form:

$(\forall x_{1})\dots (\forall x_{n}) (\textup{P}x_{1} \dots x_{n} \leftrightarrow \textup{A})$

Here, $\textup{A}$ is a formula that contains only physical vocabulary among its non-logical terms and the $n$ distinct variables $x_{1} \dots x_{n}$. These equivalences are provable within scientific theory and are logical consequences of its laws. What may come as a shock to those who want to avoid mysterious dualism by adhering to such a strong form of reductionism is that even this reductionism is compatible with ontological dualism.

Hellman sets out the following basic theory to make this point. Let $\Sigma$ be the theory that contains only two one-place predicates, P and Q, and these non-logical axioms:

$(\exists x)(\exists y) (x \not= y \wedge (\forall z) (z = x \vee z = y))$
$(\exists x) (\textup{P}x \wedge (\forall y) (\textup{P}y \rightarrow y = x))$
$(\exists x) (\textup{Q}x \wedge (\forall y) (\textup{Q}y \rightarrow y = x))$
$(\forall x) (\textup{P}x \vee \textup{Q}x)$

All that $\Sigma$ says is that there are exactly two objects, exactly one of them is a P, exactly one of them is a Q, and everything is either a P or a Q. It follows from $\Sigma$ that Q is definable in terms of P (Qx $\leftrightarrow \lnot$Px), but $\Sigma$ gives no assurance that everything is exhausted by things that are P; as a matter of course, every interpretation making the axioms of $\Sigma$ true partitions the domain into disjoint subsets of P-type and Q-type things. And dualism here is the minimal case, since the reasoning carries over to any finite number of predicates.
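Hellman's point about $\Sigma$ can be checked by brute force (a sketch of my own; I fix the two-element domain up front, so the first axiom is built in and only the remaining three need checking):

```python
# Brute-force check of Sigma over the two-element domain {0, 1}: enumerate
# every interpretation of P and Q, keep those satisfying the axioms, and
# confirm that Q is definable as not-P even though every model contains
# both a P-thing and a Q-thing. (My own sketch.)

DOMAIN = [0, 1]
SUBSETS = [set(), {0}, {1}, {0, 1}]

def is_model(P, Q):
    unique_p = sum(x in P for x in DOMAIN) == 1         # exactly one P
    unique_q = sum(x in Q for x in DOMAIN) == 1         # exactly one Q
    exhaustive = all(x in P or x in Q for x in DOMAIN)  # everything P or Q
    return unique_p and unique_q and exhaustive

models = [(P, Q) for P in SUBSETS for Q in SUBSETS if is_model(P, Q)]

# Q is definable from P: in every model, Qx <-> not Px ...
assert all(all((x in Q) == (x not in P) for x in DOMAIN) for P, Q in models)
# ... yet each model still splits the domain into a P-thing AND a Q-thing:
assert all(P and Q and not (P & Q) for P, Q in models)
```

The two assertions together are the moral in miniature: definability of Q in terms of P coexists with an irreducibly two-sorted ontology in every model.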

This is a quick and elegant way of showing that reductionism is no shelter from dualism.

What Hellman concludes here is that while PE is necessary for physicalism, neither it nor IPI is sufficient, for these reasons: PE is silent about the scope and power of physical laws, while in IPI quantification is restricted to the actual world; consequently neither can be used to voice the physicalist thesis that all phenomena are determined by physical phenomena.

We’ll see in §2 what the precise connection is between determination, definability and reduction.

### Physicalism: Ontology, Determination, and Reduction

June 6, 2010

I’m moving into a new article by Hellman. This one is much harder than the last one. It’s an older paper that appeared in The Journal of Philosophy in the ’70s while he was still at Indiana. The paper is: Physicalism: Ontology, Determination, and Reduction. I’ve read Hellman’s physicalism paper several times, but only now am I ready to carry out a more careful study. The paper is divided into two sections: one for setting out a physicalist ontology, and the other for giving principles to support the idea that physical facts determine all the facts. The point is to do this without implying reductionism. Should be fun.