Universals: Laws Grounded in Nature

Bas C. van Fraassen

All bodies near the earth fall if released because such bodies must fall, and this is because of the law of gravity. Naïve as this may sound, it presents the paradigm pattern of explanation by law. If we take it completely seriously, it signals that a law is not itself a necessity but accounts for necessity. The law itself, it seems, is a fact about our world, some feature of what this world is actually like, and what must happen is due to that fact or feature.



The accounts we shall now examine begin with a robust anti-nominalism: there are real properties and relations among things, to be distinguished from merely arbitrary classifications. There are also real properties of and relations among these properties. Such doctrines have been well known in Western philosophy since Plato and Aristotle, and were revived in the last hundred years by the Neo-Thomists and Bertrand Russell. Current universals accounts of laws, as I shall call them, are due to Dretske, Tooley, and Armstrong among others. 1 As we saw, the accounts we examined earlier also had to embrace some form of anti-nominalism, if not always so robust. Their adherents should therefore have little way to disagree with Dretske's conclusion that in such barren terrains [as the nominalists' ontology] 'there are no laws, nor is there anything that could be dressed up to look like a law'.

1. Laws as Relations Among Universals

Consider Boyle's law, expressed briefly in the equation PV = kT. We explain what it says as follows: for any ideal body of gas, the product of pressure and volume is proportional to its absolute temperature. But perhaps that is not what it says, but rather what it implies for the things (if any) to which it applies. Perhaps what it says is rather that a certain relation holds between the quantities P, V, T. These quantities are (determinable) properties, while the gases are their instances (a gas at temperature 200K is an instance of the property having a temperature of 200K). On the present suggestion, the law is not about the instances, at least not directly, but about the quantities themselves. It states a relation between them.

What is this relation? Dretske symbolizes it as →, and reads it as 'yields'; Armstrong usually says 'necessitates', though sometimes he uses Plato's phrase 'brings along with it'. In our present example, we would then say: the joint property of having volume V and pressure P → the property of having temperature VP/k. To use another example: to say that it is a law of nature that diamonds have a refraction index of 2.419 means here that the property of being a diamond → (yields, necessitates) the property of having refraction index 2.419. 'Laws eschew reference to the things that have length, charge, capacity, internal energy, momentum, spin, and velocity in order to talk about these quantities themselves and to describe their relationship to each other' (Dretske, 'Laws of Nature', 1977, p. 263).

There is at first something very appealing about this view. Certainly we recognize the typical symbolic expression of science here, in equations relating quantities. But that symbolic expression comes from mathematical usage, where it has explicit definition in the theory of functions:

f = g exactly if f(x) = g(x) for all arguments x; and similarly (f · g)(x) = f(x) · g(x)

which then allows us to abstract, and apply algebraic reasoning to functions directly. 2 And science appears to find use for this mathematical practice because it represents physical quantities by means of mathematical functions. Temperature in a room is not uniform; it may be represented by means of a function that maps the set of points (spatial locations) in the room into the Kelvin scale, or into another temperature scale. The universals account of laws begins by insisting that the representation of a physical quantity by means of such a function is really only the representation of an accidental feature of that quantity, namely of how it is instantiated in the world of things. The appeal is therefore at once undermined by the story itself.
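To make the mathematical practice just described concrete, here is a minimal sketch (my own illustration, not van Fraassen's; the names State, P, V, T, the constant k, and the sample numbers are all assumptions of the demo), in which a quantity is represented as a function from states to numbers and the equation PV = kT is read pointwise:

    # A quantity is represented as a function from states to numbers, and an
    # equation between quantities is read pointwise, state by state.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        pressure: float      # pascals
        volume: float        # cubic metres
        temperature: float   # kelvin

    def P(s: State) -> float: return s.pressure
    def V(s: State) -> float: return s.volume
    def T(s: State) -> float: return s.temperature

    def law_holds_at(s: State, k: float, tol: float = 1e-9) -> bool:
        # 'PV = kT' checked pointwise at one state; the law proper would
        # require this for every state in its intended scope.
        return abs(P(s) * V(s) - k * T(s)) <= tol

    print(law_holds_at(State(100.0, 2.0, 50.0), k=4.0))   # True: 100*2 == 4*50

On this way of modelling, the function captures only how the quantity happens to be instantiated, which is precisely the feature the universals account dismisses as accidental.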

Nevertheless, the story is intelligible: it says that the assertion of a general law concerning things is really a singular statement of relation between specific properties. We have learned a good deal in the two preceding chapters, so we can immediately raise two problems. 3

identification problem: which relation between universals is the relation → (necessitation)?

inference problem: what information does the statement that one property necessitates another give us about what happens and what things are like?

The two problems are obviously related: the relation identified as necessitation must be such as to warrant whatever we need to be able to infer from laws. It is equally clear what the paradigm inference must be: if A and B are properties, then 'A necessitates B' must entail that any instance of A is an instance of B, or perhaps even that any A must necessarily be a B.

We shall look carefully at various attempts to solve or circumvent these problems. But we can already see certain perils that must beset any such attempts.

The first peril concerns 'necessary'. If the account is to meet the necessity criterion, then it must entail that, if it is a law that all A are B, then it is necessary for any A to be a B. In order not to become merely a necessitarian account with universals added in, the necessity cannot have any very basic status in the account. Nor could it be merely verbal or logical necessity: no matter what else is true about, for example, diamond-hood and refraction index, it cannot be a logical truth that the Koh-i-noor diamond has refraction index 2.419. Since the law must be the ground of the necessity, the general line has to be something like: if an A is a B, there are two further possibilities, namely that A necessitates B or that it does not. If, and only if, the former is the case, then the A is necessarily a B; that is what 'necessarily' means here. But note well: once necessity is given this derivative status, it can be of no use in the development of the universals account itself. For example, if we want to ask whether A necessarily necessitates B, we had better realize that this question will now have to mean either whether 'A necessitates B' is a logical truth, or else whether some higher property that A has necessitates the property of necessitating B. Uncritical use of modal language, and its convenience, must be strictly eschewed.


The second peril concerns the obvious idea to play the two problems off, one against the other, by a sort of bootstrap operation. In the preceding paragraph we saw that we had better not ask of 'A necessitates B' that it entails more than that any A is a B. But this role 'necessitates' must be able to play, and if it can, then the inference problem is solved. So why not solve the identification problem by postulating that there exists a relation among universals with certain features which logically establish that it can play this crucial role? Call this relation necessitation, or if there may be more than one, call them necessitation relations or nomological relations. Then both problems will have been solved.

The difficulties which beset this ploy come as a dilemma. Define the relation of extensional inclusion: A is extensionally included in B exactly if all instances of A are instances of B. Then 'A is extensionally included in B' entails that any A is a B. But if this qualifies as a necessitation relationship, then all ordinary universal regularities become matters of law. To avoid this trivialization, the envisaged postulate must include among the identifying features something more than is needed to solve the inference problem. Actually, 'something more' is not quite apt, for if any relation which consists of extensional inclusion and something else also qualifies as nomological, the account is still trivialized. So the identifying features must be, not something more, but something else, other than extensional inclusion. Now the second horn of the dilemma looms: how could features so distinctly different from extensional inclusion still solve the inference problem?

This is not just an open question, for we have considerable reason to think that they cannot. The law as here conceived is a singular statement about universals A and B. The conclusion to be drawn from it is about another sort of things, the particulars which are instances of A and B. True, the instances are, by being instances, intimately related to the universals. This is not enough, however, to make the inference look valid to us. We are intimately related to our parents, but that does not make us regard the inference

(1) X knows Y

therefore

(2) All children of X are children of Y

as a valid inference. There are ways to turn this argument into a valid one, but they are either too trivial to be of comfort:


(1) X has the same children as Y

therefore

(2) All children of X are children of Y
or else require a special extra premiss that makes the connection:

(1) X has carnal knowledge only of Y

(1.5) All a person's children are born of someone of whom he or she has carnal knowledge

therefore

(2) All children of X are children of Y
What we need from a universals account at this point is such an extra premiss to make the connection. It would be very disappointing if we find merely another postulate that asserts the connection to be there, and does not explain it.

There is a certain simplicity to the minimum postulate needed: if A necessitates B then any A is (necessarily) a B. But its intuitive flavour may just result from the pleasing choice of words. As David Lewis put it: to call the relation 'necessitation' no more guarantees the inference than being called 'Armstrong' guarantees having mighty biceps.

We shall see in a later section that Armstrong's book on laws of nature contains a sustained attempt to solve the inference problem. But earlier versions of the universals account of laws were sanguine. Tooley writes: 'Given the relationship that exists between universals and particulars exemplifying them, any property of a universal, or relation among universals . . . will be reflected in corresponding facts involving the particulars exemplifying the universal.' 4 That this cannot be a matter of logic is clear from the parallel example of parents and children born to them. Perhaps relations among parents are reflected in corresponding relations between their children, but it will take more than logic to find the correct correspondence function! With not much less sang-froid, but appreciating the problem, Armstrong himself writes: 'The inexplicability of necessitation just has to be accepted. Necessitation, the way that one Form (universal) brings another along with it as Plato puts it in the Phaedo (104d-105), is a primitive, or near primitive, which we are forced to postulate.' 5 But what exactly is the postulate? And how will it fare with our dilemma? Dretske writes:


I have no proof for the validity. . . . The best I can do is offer an analogy. Consider the complex set of legal relationships defining the authority, responsibilities, and powers of the three branches of government in the United States. . . . The legal code lays down a set of relationships between the various offices of government, and this set of relationships . . . impose legal constraints on the individuals who occupy these offices, constraints that we express with such modal terms as 'cannot' and 'must'. . . . Natural laws may be thought of as a set of relationships that exist between the various offices that objects sometimes occupy. 6

Yes, that is the analogy we know. But in the analogy we also know that what gives the legal code its force is the continued agreement to enforce it. This passage highlights the inference problem by reminding us of the similar problem of values in a world of facts. 7 Given that chastity is a value, how does it follow that we should value it? The noun and verb had better be more intimately connected than by spelling and sound, if the answer is to satisfy us. And the reminder is not felicitous. We know the sad fortunes of attempts to solve the problem of facts and values by reifying values as abstract entities!

In this section I have presented the basic idea of the universals account of laws, and its two basic problems. It would be a great injustice not to examine the careful attempts by Tooley and Armstrong which implicitly react to what I call the identification and inference problems, and their attempts to extend the account into a satisfactory view of both deterministic and probabilistic laws. So we shall. Along the way, we shall encounter still further interesting problems.

2. The Lawgivers Regress

I painted a decidedly bleak picture when I discussed the identification and inference problems for the universals account of laws. Here I shall describe a proposal which solves the identification problem (sub specie certain postulates, of course), but in doing so, I shall argue, it renders the inference problem insoluble. The proposal is essentially due to Tooley (I shall ignore refinements of his theory, but, I believe, without loss in the present context). Nevertheless, I can best introduce it by continuing the quote from Dretske which elaborated the constitutional law analogy:


Natural laws may be thought of as a set of relationships that exist between the various offices that objects sometimes occupy. Once an object occupies such an office, its activities are constrained by the set of relations connecting that office to other offices and agencies; it must do some things, and it cannot do other things. In both the legal and the natural context the modality at level n is generated by the set of relationships existing between the entities at level n + 1. Without this web of higher order relationships there is nothing to support the attribution of constraints to the entities at a lower level. 8

What hierarchy of levels does he have in mind? Level 1, at the bottom, contains particulars (things and persons); level 2 contains the offices which they may occupy. That entities at level 1 must do such and such is determined by the fact that their offices are related so and so. But that their offices are so related is determined by something still higher. That is the legal code or Constitution, presumably, on the government side of the analogy; what is it on the nature side? And why have the levels stopped at 3? Is it because on the government side we find a single higher entity? Why has Dretske fallen into terminology that suggests higher and higher levels, a useless generalization if the levels stop at the third storey?

Fruitless, surely, to push this analogy too hard; but the discussion which follows makes Dretske's surprising choice of words oddly revealing. For a regress does lurk only just around the corner, if we introduce the general thesis that modal statements about one sort of entity must have their ground in relations among another sort.

Tooley begins his account with a hierarchical description of the world of universals and particulars. 9 Particulars are of order zero; properties of, and relations among, particulars are of order one. In general, a universal (property or relation) is of order (k + 1) exactly if k is the highest order of entities to which it pertains. So for example, the property of being malleable is of order 1, and the property of being a property of gold is of order 2, but any relation between gold objects and the property of being malleable is of order 2 as well. 10 Let us call the latter impurely of order 2, saying that a universal is purely of order k + 1 if it pertains solely to entities of order k. Tooley moreover calls a universal irreducibly of order k if it cannot be analysed in terms of universals of lesser order. Finally, a relation R is called contingent exactly if there are entities which (a) can bear R to each other, but (b) can also fail to bear R to each other.
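Tooley's definitions of order and purity are recursive, and it may help to see them stated mechanically. The following sketch is my own illustration, not Tooley's formalism; the class names and toy entities are assumptions, and 'pertains to' is modelled crudely as a stored tuple of possible relata:

    # Particulars are of order 0; a universal is of order k + 1, where k is
    # the highest order among the entities it pertains to.

    class Particular:
        pertains_to = ()

    class Universal:
        def __init__(self, *relata):
            assert relata, "in this toy model a universal pertains to something"
            self.pertains_to = relata

    def order(entity) -> int:
        if not entity.pertains_to:
            return 0
        return 1 + max(order(e) for e in entity.pertains_to)

    def purely_of_its_order(u: Universal) -> bool:
        # 'Purely of order k + 1': pertains solely to entities of order k.
        k = order(u) - 1
        return all(order(e) == k for e in u.pertains_to)

    gold_ring = Particular()                 # order 0
    malleable = Universal(gold_ring)         # order 1
    second = Universal(malleable)            # order 2, purely so
    mixed = Universal(gold_ring, malleable)  # order 2, but impurely

    print(order(mixed), purely_of_its_order(second), purely_of_its_order(mixed))
    # 2 True False

Irreducibility and contingency resist any such mechanical test, of course; that is exactly where the first peril, discussed next, enters.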

We perceive here at once what I called the first peril. I used 'pertain' rather vaguely. Tooley does not mean that a relation is, for example, of order one exactly if all its instances are particulars. For any relation, of any order, may fail to have instances at all. So we should understand this as: a relation of order one is a relation which can have particulars, but not universals, among its instances. This 'can' and 'cannot' may be the same as those which appear in the definition of contingency. But what are they? It is not at all clear that they could be either purely logical or grounded in higher-order universals.

Be that as it may, we can now proceed to the definition of what I called necessitation relations.

R is a necessitation relation exactly if it is a contingent relation, irreducibly and purely of some order ≥ 2, and the statement that R holds between certain universals of order 1 entails that a certain other (corresponding) relation holds between particular instances of those universals.

When that corresponding relation among particulars is exactly that all instances of the first universal are instances of the second, let us call R a proper necessitation relationship. 11

See now how elegantly the introduced qualifications pre-empt the difficulties I raised for this sort of proposal before. This identification of necessitation relations does not catch extensional inclusion in its net, even if it is a real relation, for it is not irreducible. It is (analysable as) necessarily equivalent to a statement, of form 'All A are B', in which no order-two universals appear at all; hence it is reducible in Tooley's sense. Similarly for the example: A bears R to B exactly if there have been or will be black swans and all instances of A are instances of B. The statement to the right of 'exactly if' is a necessary equivalent in which no universals of order two are even mentioned.

This fits in exactly with our story of the second peril, the dilemma relating the problems of identification and inference. Tooley has escaped the first horn by identifying necessitation relations (or as he calls them, nomological relations) by means of features that logically exclude from them any relation which is merely extensional inclusion, whether alone or in conjunction with something else. But then (here comes the second horn) how can this identification be involved in, or even co-exist with, a solution to the inference problem?

Perhaps this question puzzles you, for does not the identification of R as a proper necessitation relation involve, as part of the definition, entailment of extensional inclusion? Yes; but that just forces the dilemma's second horn into the form: how could a relation R with the initially noted features (contingent, irreducibly and purely of order two) also entail extensional inclusion?

In my praise of how Tooley's account avoids the first horn of the dilemma, I assumed that extensional inclusion does not fit the definition of necessitation relationship, on the basis of a certain gloss I gave to 'analyse'. My prescription was that extensional inclusion is not irreducibly of order two because 'A is extensionally included in B' is necessarily equivalent to 'All instances of A are instances of B', and no order-two universals are involved in the latter proposition. Nor would the situation get better if we threw in a conjunct or disjunct that does involve an order-two universal, since irreducibility requires that any correct analysis should be entirely in terms of order-two universals.

But in that case, the necessary equivalent exhibited by any correct analysis will give no logical clue to what is true at orders below two. The entailment asked for cannot be logical entailment, just as a fact that is purely about the parents cannot logically imply anything about the children. (Note that it would not be purely about the parents if it described them as parents, i.e. as people having children!) So the entailment cannot be a matter of logic. What is it then? Necessary concomitance, of a non-logical sort?

Now we have arrived again, after all, at the door behind which lurks the first peril. Unless we are to return to a necessitarian account (with universals as frills) we have, besides logical necessity, only the necessity that derives by definition from relations of a higher order. At the present point that would expand the word 'entail' in our definition of necessitation as follows:

and the statement that A bears R to B entails that all A are B, in the following sense: there exists a(n impure) relation R′ of order three which R bears to A and B, and which is such that, if any X bears R to Y and R bears R′ to X and Y, then it follows that all X are Y,


for the case of proper necessitation. But note the word 'follows' in the last clause: it faces the same challenge as the earlier 'entails'. We will have the same impossibility of taking it to be the 'follows' of pure logic, and will be driven to a universal of order four. This is a typical Third Man regress.

Don't give up! Not every regress is vicious. Couldn't we just accept that each law must be backed by an infinite hierarchy of trans-order laws, which are factual statements about impure higher order relations?

I think we could, in all consistency, if not in all good conscience. But we should not underestimate the regress. It will not be merely infinite, but transfinite; for the infinite hierarchy envisaged above still leaves one, as a whole, with the question of why that should entail that if A bears R to B then every A is a B. The hopes of finessing modal statements at the level of particulars, which Dretske held out so temptingly, are not dashed, but they do recede ever farther into transfinite distances as we pursue them.

Another hope surely is dashed. Distinguo: not all regresses are vicious, in that the existence of an infinite regress does not always reduce the theory to absurdity. Regresses in explanation, however, are not virtuous: they leave us with something which may be consistent but is not an explanation. The explanatory pattern 'This is so, because it must be so, because it is a law that it is so' is destroyed if we say that it is only a sketch, the second 'because' needing the additional phrase 'and if it is a law then it must be so, because it is a trans-order law that if something is a law then it is so, and if it is a trans-order law that . . .'. A hierarchy need have no top, but an explanation without a bottom, an ungrounded explanation, is no explanation at all. The regress had to be stopped if there was to be an explanation of nomological statements in non-nomological terms. But it cannot be stopped.

3. Does Armstrong Avoid the Regress?

David Armstrong's account of laws of nature is based on a theory of universals, previously developed in his Universals and Scientific Realism. The main question I want to address in this section is whether Armstrong's account avoids the debilitating regress we found above. 12 I will take for granted, therefore, that the relation(s) among universals which he introduces are independently identifiable, leaving the identification problem aside, to focus on the inference problem. 13

To begin, let me repeat and add some terminology concerning universals. Armstrong's world has in it particulars and universals. The particulars come in two sorts, objects and states of affairs. A state of affairs will always involve some universal, whether monadic (property) or dyadic, triadic, . . . (relation). When I say particulars, I mean entities which are not universals, but which instantiate universals. It is countenanced that universals may instantiate other universals, and that there is a hierarchy of instantiation. So it is natural, as Armstrong does, to extend the terminology: call an nth-order universal one which has instances only of (n − 1)th order, and call it also an (n + 1)th-order particular. Now call the particulars which are not universals first-order particulars. When I say 'particular' without qualification I shall mean these first-order particulars. If a, b are particulars and R a relation, and a bears R to b, then there is a state of affairs, a's bearing R to b (let us designate this state of affairs as Rab) in which these three are joined or involved. If a does not bear R to b, there is no such state of affairs; a state of affairs which does not obtain is not real.

To bring to light just a little of Armstrong's theory, let us look at how he handles the problem known as Bradley's Regress. In the state of affairs described above, the terms R, a, b are all names, one of a universal and two of particulars. These three entities are joined in the state of affairs; but how? Is it that there is a certain three-term relation R′ which R bears to a and b? More generally, how are universals related to their instances? If a regress were accepted, it might or might not be vicious. After all, suppose R is the set of real numbers, and X one of its subsets. Then in set theory we could say b ∈ X exactly if ⟨b, X⟩ belongs to the membership relation restricted to R × P(R). Calling the latter set, a subset of R × P(R), by the name X′, we continue with the equivalent ⟨⟨b, X⟩, X′⟩ ∈ X″ for the membership relation restricted to (R × P(R)) × P(R × P(R)), and so forth ad infinitum. But as I remarked before, that a regress does not lead to a contradiction does not mean it is a satisfactory thing to have around. Armstrong adopts a view of the Aristotelian (moderate) realism type, in order to avoid Bradley's regress. If universals were substances (i.e. entities capable of independent existence), he says, the regress would have to be accepted. But they are not substances, though real: they are abstractions from states of affairs. One consequence is that there are no uninstantiated universals. We should not regard states of affairs as being constructed out of universals and particulars, which need some sort of ontological glue to hold them together.

We cannot embark on the general theory of universals here; we need to consider only those details crucial to Armstrong's account of laws. The important point here, for our purposes, was that his solution to the puzzle entails that there can be no uninstantiated universals. In this account the symbol N is used, prima facie in several roles; I shall use subscripts to indicate prima-facie differences and then state Armstrong's identifications. There is first of all the relation N₁ which states of affairs can bear to each other, as in

(1) N₁(a's being F, a's being G)

This N₁ is the relation of necessitation between states of affairs: formula (1) is a sentence which is true if and only if a's being F necessitates a's being G. But for this to be true, both related entities must be real; therefore (1) entails that these two states of affairs are real, hence

(2) a is F and G

However, both states of affairs could be real while (1) is false, so N₁ is not an abstraction from the states of affairs of sort (2), for then it would be the conjunctive universal (F and G).

This relation N₁ has sub-relations, in the sense that the property being coloured has sub-properties (determinates) being red, being blue. One sort is the relation of necessitation in virtue of the relation(s) between F and G, referred to as N₁(F, G):

(3) N₁(F, G)(a's being F, a's being G)

This is a particular case of (1), so (3) entails (1) and hence also (2), but again the converse entailments do not hold. In (3) we also have a sort of universalizability. Note that (3) says (a) that the one state of affairs necessitates the other, and (b) that this is in virtue solely of the relation between F and G. Hence what a is does not matter. Of course we cannot at once generalize (3) to all objects, for (3) is not conditional; it entails that a is both F and G. Thus we should say that what a is does not matter beyond the entailed fact that it is an instance of those universals whose relation is at issue. We conclude therefore that if (3) is true, then for any object b whatever, it is also true that

(4) if b is (F and G) then N₁(F, G)(b's being F, b's being G)

Now what is the relation between F and G by virtue of which a's being F necessitates a's being G? It is the relation of necessitation between universals, Armstrong's version of Dretske's → and Tooley's nomic necessitations. Let us call this N₂:

(5) N₂(F, G)

Thus if (3) is true then (4) and (5) must be true, and indeed, (3) must be true because (5) is true. This N₂, the target of course of the remark by David Lewis which I reported earlier, Armstrong takes as not calling for further illumination:

the inexplicability of necessitation just has to be accepted. Necessitation, the way that one Form (universal) 'brings another along with it' as Plato puts it in the Phaedo (104d-105), is a primitive, or near primitive, which we are forced to postulate (What is a Law of Nature?, 92).

Forced to postulate because the way the actual particulars (including states of affairs) are in our world cannot determine whether (3) is true.

We now come to the point where Armstrong may escape the threat of a regress of trans-order laws. To be specific: Armstrong proposes a surprising identification of a relation with a state of affairs. If this leads to a logical inference from the existence of the universal N(F, G) to the conclusion that any F is a G, then the inference problem will have been solved without falling into the lawgiver's regress. But could that really work?

It appears that Armstrong thinks it does, for he writes that he hopes now to have arrived at a reasonably perspicuous view of the entailment of all F's being G's by the statement that F necessitates G. He elaborates on this as follows:

It is then clear that if such a relation holds between the universals, then it is automatic that each particular F determines that it is a G. That is just the instantiation of the universal (N(F, G)) in particular cases. The [premiss of the inference] represents the law, a state of affairs, which is simultaneously a relation. The [conclusion] represents the uniformity automatically resulting from the instantiation in its particulars. (What is a Law of Nature?, 97; italics in original)


But I shall try to show that, on careful analysis, the inferential gap appears only to have been wished away.

Armstrong proposes the postulate, surprising but in accord with his theory of universals, that the universal N₁(F, G), a relation between states of affairs, is identical with the state of affairs N₂(F, G). In that case we can drop the subscripts and say that the state of affairs N(F, G)(Fa, Ga) instantiates the state of affairs N(F, G), in the way that the state of affairs Rab instantiates R, generally. The question whether N(F, G) entails (If Fb then Gb) can then be approached by logical means. For suppose N(F, G) is real, i.e. F bears N to G. Then it must have at least one instance; let it be described by (3) above. But in that case (4) is also true. We conclude therefore that if N(F, G) is real, then for any object b, if b is both F and G, then N(F, G)(b's being F, b's being G).

But we can go no further. What we have established is this: if there is a law N(F, G), then all conjunctions of F and G, in any subject, will be because of this law. There will be no F's which are only accidentally G. That shows us an interesting and undoubtedly welcome consequence. Although Armstrong does not give this argument, I must assume that he introduced the curious identification of N₁ and N₂ to reach some such benefit. However, this benefit is not great enough to get him out of the difficulty at issue. For what cannot be deduced from the universal quantification of (4) is that all F's are G's. Any assertion to that effect must be made independently. Nothing less than a bare postulate will do, for there is no logical connection between relations among universals and relations among their instances.

Proofs and Illustrations

For me, the above argument establishes that the inference problem remains unsolved. But let us look a little further into what might or might not be possible in some such theory of universals, if not Armstrong's own.

Armstrong reports that logicians are inclined to protest at this identification. Let us see why. In the preceding paragraphs I have more or less followed Armstrong's practice of using the same notation to stand for a sentence and also for the noun that denotes the state of affairs which is real if and only if the sentence is true. If we identify N₁ and N₂ as N, we have a four-fold ambiguity; N(F, G) can stand for


(a) the sentence 'F necessitates G'

(b) the noun 'F's necessitating G'

(c) the predicate 'necessitates by virtue of (relation(s) between) F and G'

(d) the noun 'the relation of necessitating by virtue of (relation(s) between) F and G'

The identification is meant to give sense to the idea that the state of affairs a's being F and G is (near enough) an instance of the law that F necessitates G. So the only identification needed is this:

the nouns (b) and (d) have the same referent

which is (for all I know about universals and states of affairs) consistent.

Worrying, however, is the occurrence of 'necessitates' in sentences (a) and (c), and its further occurrence, in the guise of N₁, in sentence (1) above. Does this verb have the same meaning in all these cases? It would then stand for a single universal which is a relation (i) between universals like F and G, and (ii) between states of affairs like Fa and Ga. The former are first-order universals, hence second-order particulars, while the latter are first-order particulars. So what is N? Either it is not a third-order particular or the order-hierarchy is not simple. The alternatives posed here are not at all attractive. Suppose for instance Armstrong says N stands for a single universal, and that it need not be thought of as a disjunctive one, because the hierarchy is cumulative. Then he has to make sense of such assertions as that N(F, Ga) or that N(Ga, F), i.e. that his relation holds between a universal and a particular state of affairs. Perhaps he can say that such assertions are always false, but this would still presuppose that they make sense.

Armstrong has shown great determination not to multiply or complicate the diversity of his world by allowing disjunctive universals. If he resists the preceding line of thought however, he must say that necessitates is ambiguous: it stands now for one relation and then for another (one second-order, one first-order) and not for a disjunction of the two. But that would destroy the identification.

Suppose on the other hand that Armstrong does not resist a new complication in this case, but says that N is a universal, which is however a disjunction of a first-order relation and a second-order relation. (This could be expressed without using the word 'disjunction', perhaps by calling N 'order transcendent' or whatever, but that would not really alter the case.) Then he has his identification. But the glory has gone out of it, since the relevant states of affairs are now instances of the relevant universal, just because the law has been reconstructed by swelling it so as to encompass those states of affairs. The real story, obscured by the notation, would still be this: N₁(F, G)(Fa, Ga) holds exactly if N₁(Fa, Ga) and N₂(F, G) both hold. Let us now define N(F, G) to hold exactly if N₂(F, G) holds and also N₁(F, G)(Fa, Ga) for all entities a such that a is F and G. Now drop the notation N₁, N₂ from the language: don't have names for N restricted to particulars, nor for N restricted to universals. Now you no longer have the language to raise the embarrassing question whether it could be that F necessitates G while some particular F is not a G. If you'd had the language, we would have had to answer either 'Yes, and the law does not imply the corresponding regularity', or 'No, there is a trans-order law that forbids it'. And the regress would begin.

4. Armstrong on Probabilistic Laws

So far we have only discussed laws corresponding to universal regularities. How could universals and their relations account for irreducible probabilities?

As before, let us take the law of radioactive decay as example. This is the simplest case; it involves only a single parameter: the atom either still remains stable, or has decayed. We are not so ambitious here as to tackle the complex web of statistical correlations which gives quantum theory its truly non-classical air. Radioactive decay exhibits a form of indeterminism which is conceptually no different from Lucretius' unpredictable swerve. In addition, the decay law appeared first as a deterministic law of the rate at which such a substance as radium diminishes with time. After 1600·k years, the remainder is only (1/2)^k of the original amount. Unlike Achilles' race with the tortoise, however, the process comes to an end, because each sample consists of a finite number of atoms.

But this reflection makes the original law inaccurate, for strictly speaking, there cannot be half of an odd number of atoms. What can the true law be for a single atom? The new, probabilistic law of radioactive decay is that each single atom has a probability (depending on a decay constant A), namely

(1) e^(−At)

of remaining stable for an interval of length t (regardless of the time at which you first encounter it in stable condition). The original half-life law is thus regularly violated by small numbers of atoms, and it can be violated also for substantial samples. The new law has as corollary, however, that for a large number of atoms, the probability of having appreciably more or less than half left after 1600 years is very small. This small probability is the same sort of probability as (1), namely physical probability. Yet this corollary, if properly understood, should make it rationally incumbent upon us to attach only negligible personal likelihood to such violations.
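A small worked check may make the arithmetic vivid. This is my own sketch, not the text's; it assumes only the 1600-year half-life of radium, from which the decay constant A is fixed by e^(−1600A) = 1/2:

    import math

    HALF_LIFE = 1600.0                # years, radium-226
    A = math.log(2) / HALF_LIFE       # decay constant fixed by the half-life

    def p_stable(t: float) -> float:
        # Probability that a single atom remains stable for an interval t.
        return math.exp(-A * t)

    assert abs(p_stable(1600.0) - 0.5) < 1e-12    # half survive one half-life
    assert abs(p_stable(3200.0) - 0.25) < 1e-12   # (1/2)^k after 1600*k years

    def sd_of_surviving_fraction(n: int, t: float = 1600.0) -> float:
        # The surviving fraction of n atoms is binomial; its standard
        # deviation shrinks as 1/sqrt(n).
        p = p_stable(t)
        return math.sqrt(p * (1.0 - p) / n)

    print(sd_of_surviving_fraction(10))      # ~0.16: large relative fluctuation
    print(sd_of_surviving_fraction(10**20))  # ~5e-11: negligible

This is why the corollary holds: for a large number of atoms, appreciable deviation from one half is overwhelmingly improbable, while a handful of atoms violates the old deterministic law routinely.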

The task of an account of laws is now two-fold: (a) to give an official meaning to such a probabilistic law, and to the objective probability involved in it, and (b) to do so in a way that warrants the guiding role of the objective probability for subjective expectation.

The second task was already the focus of several sections of the preceding chapter. The arguments there were general enough to plague any conception of objective probability which is logically unconnected with frequency or opinion. Such a conception we will also find here in the universals account of laws. But I do not propose the boring work of transposing those arguments, mutatis mutandis. I believe them to be devastating to any metaphysical reification of statistical models, and am content to leave the issue here.

The first task, on the other hand, is the focus of Armstrong's, and more recently Tooley's, interest. They wish to bring probabilistic laws into the fold of their universals account of law. I will use the law of radioactive decay as a touchstone, at a certain point, for their success.

Armstrong begins 14 by asking us to consider an irreducibly probabilistic law to the effect that there is a probability P of an F being a G. One imagines that G may include something like: remaining stable for at least a year, or else decaying into radon within a year. Adapting his earlier notation, he writes

(2) ((Pr: P)(F, G))(a's being F, a's being G)

Read it, to begin with anyway, as 'There is a probability P, in virtue of F and G, of an individual F being a G'. As with N, (Pr: P)(F, G) is a universal, a relation, which may hold between states of affairs, but of course only real ones. Suppose now that a is F but not G. Then (2) is not true, for in that case there is no such state of affairs as a's being G. So (2), properly generalized, does not say something true about any F, but only about those which are both F and G. That is not what the original law-statement looked like. Nor does Armstrong wish to ameliorate this by having negative universals, or negative states of affairs, or propensities (i.e. properties like having a chance P of becoming G, which an F could have whether or not it became G). Thus a probabilistic law is a universal which is instantiated only in those cases in which the probability is realized.

Suppose there is such a law; what consequences does this have for the world? The real statistical distribution should show a good fit to the theoretical distribution described in the law. The mean decay time of actual radium should show a good fit to law (1), on its new, probabilistic interpretation, for example, also on Armstrong's construal. But I don't see why it should. We can divide the observed radium atoms into those which do and do not decay within one year. Those which do decay are such that their being radium atoms in a stable state bears (Pr: e^(−A))(radium, decay within one year) to their decaying within one year. The other ones have no connection with that universal at all. Now how should one deduce anything about the proportions of these two classes, or even about the probabilities of different proportions?

Open questions are not satisfactory stopping points, so let us leave the connection with actual frequency aside, and concentrate on probability alone. The reality of (Pr:P)(F, G) has one obvious consequence: a universal cannot be real without being instantiated, so there is at least one F which is G. Thus we have, for example:

(3) If it is a law that there is a probability of 3/4 of an individual F being a G, and there is only one F, then it is definitely a G.

This is worrying, but a one-F universe is perhaps so unusual that it can be ignored. However, suppose there are two Fs, call them a and b. If we ignore the Principle of Instantiation, and assume this is the only relevant law that is real, we calculate: the probability that both are G equals 9/16, the probability that a alone (or b alone) is G equals 3/16, and that neither is G equals 1/16. But the Principle rules out the last case. How does this affect the probabilities? We must give zero to the last case; the new probabilities of the other cases must be like the old but add up to one again. This adjustment is called conditionalization (see Table 5.1). The new probabilities x, y, z must stand in the same proportions 9:3:3, and must add up to 1. The probability that a is G, for example, is now calculated by adding the probabilities of the first and second cases: (9/15) + (3/15) = (12/15) = 4/5. In this case, we deduce, after a few steps:

(4) Given the law that there is a probability of 3/4 of an individual F being a G, and a, b are the only two Fs, then the probability that a is a G equals 4/5, and the probability that a is a G given that b is a G is a bit less (namely, 3/4 again).

So the trouble is not confined to a one-F universe; it is there as long as there is a finite number of F's. If the law says probability P, and there are n F's, then the probability that a given one will be G equals P divided by (1 − (1 − P)^n). For very large n, this is indeed close to P, but the difference would show up in sufficiently sensitive experiments. Should we recommend this consequence to physicists, if they ever have to explain apparent systematic deviations from a probabilistic law?

Table 5.1. Effect of the Principle of Instantiation

  G      | not G  | Probabilities
  a, b   |        | x
  a      | b      | y
  b      | a      | z
         | a, b   | 0 (ruled out)
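The conditionalization can be checked mechanically with exact fractions. This sketch is mine, not the text's; it takes P = 3/4 and the two-F universe of (4):

    from fractions import Fraction

    P = Fraction(3, 4)                       # the law's probability
    prior = {
        ("a", "b"): P * P,                   # both G: 9/16
        ("a",):     P * (1 - P),             # a only: 3/16
        ("b",):     (1 - P) * P,             # b only: 3/16
        ():         (1 - P) * (1 - P),       # neither: 1/16
    }
    prior[()] = Fraction(0)                  # Principle of Instantiation
    total = sum(prior.values())              # 15/16
    post = {k: v / total for k, v in prior.items()}

    p_a = post[("a", "b")] + post[("a",)]    # probability that a is G
    print(p_a)                               # 4/5

    p_a_given_b = post[("a", "b")] / (post[("a", "b")] + post[("b",)])
    print(p_a_given_b)                       # 3/4

    # The general formula for n F's:
    n = 2
    print(P / (1 - (1 - P) ** n))            # 4/5 again

Both figures of (4), and the general formula P / (1 − (1 − P)^n), come out as stated.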

The second part of (4) is also striking. I made the calculation on the assumption that the objects a and b did not influence each other's being G or not G. This assumption stands (they could exist in different galaxies, say), but the difference between the probability of b being G tout court, and its probability given that a is G, amounts to a statistical correlation. Now today's physics countenances such uncaused correlations, though not ones arising simply from the numbers present, so to say. A correlation without preceding interaction to account for it is always prima facie mysterious, and I note it as an interesting feature of Armstrong's account.

The first problem was generated by the fact that in Armstrong's theory, a universal cannot be real without having at least one instance. That same fact spells trouble independently for a law such as that of radioactive decay, which gives a distinct probability for each time-interval. Because Armstrong does not have negative universals, we should expect that only one of 'remaining stable for interval t' and 'decaying into radon within interval t' is a real universal. We had better consider both in turn.

If the former, we note that e^(−At) is positive for each t, no matter how small. So for each t there must be an instance: an atom that remains stable for interval t, starting from now. This means that either there is an atom which never decays, or else that there is an infinite series of atoms which remain stable respectively for at least one year, at least two years, at least three years, . . . , and so on. On the other hand, if the latter, there must be for each time t an atom which decays before t (measuring from now). This means either that there is an atom which decays right now, or else that there is an infinite series of atoms which decay respectively before a year, a month, a day, an hour, . . . has elapsed, and so on.

If, as we think, the amount of matter in the universe is finite, then two of these possibilities are ruled out at once. But the reasoning about 'now' was quite general, and applies equally to every time. For there to be, at each instant, a radium atom which decays just then, would require an infinity of these atoms as well. 15 So only one possibility remains: there is at least one radium atom which will never decay. 16

This is a striking empirical deduction. It shows that Armstrong's reconstruction of probabilistic laws is not mere word-play, but has empirical consequences which were not present in the law as heretofore understood. I do not say verifiable (it is no use to apply for a grant to find that sempiternal atom), but concrete and strikingly general. For the argument would apply to any law which delineates objective probabilities as a positive function of time.

These two problems resulted from the instantiation requirement; the third will be quite independent of that. To explain it, we must first look a little further into Armstrong's account. Armstrong proposes that we identify (Pr: P) as a subrelation of N, rewriting it as (N: P). How should this be interpreted? Armstrong has recently emphasized this reading:

I hold that a probabilistic law gives the probability of a necessitation in the particular case. Necessitation is just the same old relation found in any actual case of (token) cause bringing about a (token) effect, whether governed by a deterministic law, a probabilistic law, or no law at all. 17

But if the difficulties below prove too onerous, one should not lose sight of the possibility of retrenchment into another interpretation, such as that N, like temperature or propensity, has degrees, of which P provides a measure.

The main difference I see between the two interpretations is perhaps one of suggestion or connotation only. In an indeterministic universe, some individual events occur for no (sufficient) reason at all. If the law (N: P)(F, G) is real, and b is both F and G, there could prima facie be one of two cases. The first is that b's being F necessitated (brought along with it) its being G, in virtue of F and G and the relation (N: P) between them. The second is that b's being F is here conjoined with its being G as well, but accidentally (by pure chance). Now the first reading (probability of necessitation) suggests that this prima-facie division may have real examples on both sides of the divide. The second reading suggests that, on the contrary, if F bears (N: P) to G, and b is both F and G, then b's being F cannot just be conjoined with its being G but must have necessitated (to degree P) its being G. On the first reading there can be cases of something's being F and G which are not instances of the law, on the second not. But I think either reading could be strained so as to avoid either suggestion. We should therefore consider both possibilities in the abstract.

Let us begin 18 with the first, official reading, and suppose that the law (N: P)(F, G) is real with P = 3/4. Then there are three sorts of F: those which are not G, those whose being F necessitated their being G, and those which are G by pure chance. What is the probability that a given F is of the second sort? Well, if P is the probability of necessitation, then the correct answer should be P. What is the probability that a given F is of the third sort? I do not know, but by hypothesis it is not negligible. So the overall probability that a given F is a G is non-negligibly greater than 3/4. Thus again we have the consequence that if it is a law for F's to be G's with probability 3/4, then the probability that an individual F is a G is greater than 3/4. This time the consequence does not rest on Armstrong's special instantiation requirement.
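A quick numerical illustration of the step just taken (my own; the chance rate q of an F being G accidentally is an assumed, unspecified quantity, which is precisely the difficulty):

    P = 0.75      # probability that an F's being F necessitates its being G
    q = 0.10      # assumed non-negligible chance of being G by accident

    # An F is G if necessitated, or, failing that, by pure chance:
    overall = P + (1 - P) * q
    print(overall)                # 0.775, non-negligibly greater than 0.75

Whatever q is, so long as it is not negligible, the overall probability of an F being a G exceeds the lawful P.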

Armstrong has replied to this problem as follows: the law does not give us the probability of F's being G's, once properly understood, but the probability of instantiation of the law. Thus the law is not wrong, if it gives that probability correctly, even though the overall probability of F's being G's is greater. He adds to this: 'How then can we tell which cases of the FG are which? That is an epistemic matter, we realists reply. Perhaps one would not be able to tell' (ibid.). Testing of laws becomes a little difficult, of course. For anyone not quite so sanguine, it will be worth while to consider the alternative reading.

Suppose therefore that the third sort must be absent, due to some aspect of the meaning of (N: P). Then if any F is G, it is of the second sort. Let us again ask: what is the probability that, in the case of a given F, its being F bears (N: P)(F, G) to its being G? On the supposition that it is a G, the answer is 1; on the supposition that it is not G, it is 0; but what is it without suppositions? We know what the right answer should be, namely P; but what is it? The point is this: by making it analytic that there can be no difference between real and apparent instances of the law, we have relegated (N: P)(F, G) to a purely explanatory role. It is what makes an F a G if it is, and whose absence accounts for a given F not being a G if it is not. (It is not like a propensity, another denizen of the metaphysical deep, which each radium atom is supposed to have, giving each the same probability of becoming radon within a year.) So we still need to know what is the probability of its presence, and this cannot be deduced from the meaning of (N: P) any more than God's existence can be deduced from the meaning of 'God'. It cannot be analytic that the objective probability that an instance of (N: P)(F, G) will occur equals P.

Thus we have three serious problems with Armstrong's universals account of probabilistic laws. The first and second derive from his special instantiation requirement, which other accounts do not share. The third derives from the specific reading he gives to the statement that a certain state of affairs instantiates a probabilistic law. And a worse problem appears on the alternative reading.

Proofs and Illustrations

Armstrong does consider a different approach, or a different sort of statistical law. Suppose, he says, it is a law that a certain proportion of F's are G's at any given time, but individual F's which are G's do not differ in any nomically relevant way from the F's which are not G's. (This is what he could not say in connection with the approaches examined above.) The law would govern a class or aggregate. If the half-life law of radium were construed that way, we would say: if this bit of radium had not decayed, then another bit would have, so that it would still have been the case that exactly half the original radium would remain after 1600 years. But (as Richmond Thomason has pointed out in a paper about counterfactuals) we would most definitely not say that about actual radium. We would say that if this bit of radium had not decayed, then less than half the radium would have decayed. There would be no contradiction with the theory, because the half-life of 1600 years only has an overwhelmingly high probability, not certainty. I suppose there could be another sort of physics in which the half-life law is a deterministic law, and (as in ours) individual radium atoms do not differ nomically. If Armstrong's account fits that other law of radioactive decay better, that is scant comfort if it cannot fit ours.

5. A New Answer to the Fundamental Question About Chance?

In his recent book, Michael Tooley has improved and extended his account of laws (which we examined in section 2 above) to cover probabilistic laws as well. 19 His new account is designed to avoid the difficulties we found in Armstrong's. Moreover, Tooley has an answer to what I called the fundamental question about chance (see Chapter 4 above), as it can be posed for his counterpart to objective chance. This answer appeals to the concept of logical probability, and thus introduces a new element into that discussion.

The basic idea of Tooley's account of deterministic laws was that (a) it is a law that all A are B if and only if there is a nomological relation between A and B, and (b) nomological relations are those contingent, irreducible relations among universals whose holding necessarily implies certain corresponding universal statements about particulars.

The main difficulty we found was that there could not be any nomological relations, on an account of the sort Tooley wants to give. The necessary implication could not be a matter of pure logic, and any attempt to add missing premisses to the logic (trans-order laws) leads to a debilitating regress. To introduce a sort of implication that is not purely a matter of logic leads us into a necessitarian account instead. This is not the sort of problem that can be solved by adding a postulate: you cannot postulate that one thing logically implies another when it does not, without making a logical mistake. You can't make an argument valid by adding the postulate that it is valid.

In the elaboration to probabilistic laws, Tooley in effect replaces logical implication by logical probability. This notion of logical probability is an old one: that there is a quantitative relation between propositions, which generalizes implication, and has the same logical status. Of course, if it is a matter of logic, then it must govern rational opinion, and the analogue to Miller's Principle will have the same status as: if P logically implies Q, then rational opinion cannot hold P more likely to be true than Q. This latter status is that we can show someone that, even by his own lights, he sabotages himself if he violates it. So if we think of Tooley's explication of probabilistic laws as introducing his notion of objective chance, then we can view him as answering the fundamental question about chance in two steps: if there is a law then there is a corresponding logical probability, and if there is a logical probability, then that must logically constrain rational opinion. We should look carefully at both steps.

Tooley begins with some very welcome criteria of adequacy. Suppose that all radium atoms decay within a trillion years; it could still be a law that they had a certain positive probability of remaining stable for longer. Suppose that there are only four A in the history of the universe, and that three are B; it could still have been a law that an A has a probability of 0.8 of being a B. He proposes that for it to be a law that an A has probability p of being a B requires the real existence of a certain relation between A and B, which he designates as:

A probabilifies B to degree p

or: Law-Stat (B, A, p)

To begin we must identify this relation, and we must do so in a way that will support the correct inferences. The inference he settles on as correct is this:

1. The argument from Law-Stat (B, A, p) and the additional premiss that x is an A, to the conclusion that x is a B, is logically valid to degree p

or, in terms of the quantitative form of logical implication:

2. The logical probability of the proposition that x is a B, given that x is an A and that A probabilifies B to degree p, equals p.

He also discusses what other sorts of premisses can be added without altering the logical probability; this I shall leave aside. We can see that 2 is formally like Miller's Principle, which connects objective chance and subjective probability. Now Tooley supports this solution to the inference problem by identifying the relation among universals so as to secure 2:

3. Probabilification to degree p is that contingent, irreducible relation between universals such that 2 holds.

The question is now whether we can consistently postulate that there is such a relation as probabilification.
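Before turning to the difficulties, it may help to see what constraint 2 asks of an agent's credences. Here is a minimal sketch (my own, not Tooley's formalism): a toy credence function over eight coarse-grained possibilities, with an illustrative value p = 0.8 and arbitrary prior weights, built so that the credence in 'x is B', given 'x is A' together with the law, comes out at p, as a Miller-style principle requires:

    # Toy credence space: worlds are triples (law holds, x is A, x is B).
    from itertools import product

    p = 0.8                    # hypothetical degree of probabilification
    prior_law = prior_A = 0.5  # arbitrary prior weights for the sketch

    credence = {}
    for law, a, b in product([True, False], repeat=3):
        c = (prior_law if law else 1 - prior_law) * (prior_A if a else 1 - prior_A)
        # Constraint 2: given the law and 'x is A', credence in 'x is B' is p.
        c *= (p if b else 1 - p) if (law and a) else 0.5
        credence[(law, a, b)] = c

    def conditional(event, given):
        num = sum(c for w, c in credence.items() if event(w) and given(w))
        return num / sum(c for w, c in credence.items() if given(w))

    print(round(conditional(lambda w: w[2], lambda w: w[0] and w[1]), 6))  # 0.8

The sketch does not, of course, touch the identification problem; it merely displays the constraint which any credence function is supposed to satisfy.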

It would be boring to elaborate those difficulties with this idea, which we already encountered in section 2. No argument from spatial relations among trees to spatial relations among stones, to give yet another example, is logically valid without additional premisses relating trees and stones. To postulate that it is valid is a logical mistake. The same goes for logical validity to degree p, if there is a legitimate notion of that sort.

But perhaps that is too fast. Perhaps if we look closely at the notion of logical probability, there will be a new insight that can save us. For consider: the meaning of 'and' is surely not much more than its logical role; to understand this is to see, among other things, that as a matter of logic, (P and Q) implies P, and the probability (in any sense thereof) of (P and Q) can be no greater than that of P. As for 'and', so perhaps also for 'probabilifies'?

The notion of logical probability is unfortunately not nearly so clear as that of implication, and we need to set aside a great deal of scepticism to be able to discuss it seriously. I know how to identify a valid inference from one sentence to another: it is valid

end p.118

if merely understanding the words is sufficient to see that if the one is true, then so is the other. Now, how shall I identify validity to degree p? It is there if merely understanding the words is sufficient to see that, if the one is true then . . . . Then what? How can I complete this statement without using the word 'probability' again?

Carnap had a very clever reaction to this problem. Our understanding of probability consists of (a) the rules for probability calculation, (b) the rule that if two sentences are entirely on a par as far as meaning is concerned, they have the same logical probability. To complete this identification it is required then to spell out what 'on a par' means, and to demonstrate (given such a spelling out) that the probabilities of all sentences are (thereby) uniquely determined. This completion was Carnap's programme.

Part (b) is clearly a symmetry requirement, a reinstatement of the eighteenth-century principle of indifference, which fared so badly at the hands of late nineteenth-century writers. (See further my discussion of this history in Part III.) We can see how to begin here: if P and Q are logically equivalent sentences, then they are on a par. Also if two sentences are related by permutation of a single syntactic category they are on a par. This means for instance that if F and G are syntactically simple predicates of the same degree, then a sentence ( . . . F . . . ) must receive the same logical probability as the corresponding sentence ( . . . G . . . ). Carnap spelled out carefully all the invariants of syntax so as to explicate when two sentences are on a par.

However, even given all these requirements of invariance, the assignment of probabilities was not uniquely determined. Nor was the class of remaining probability functions sufficiently constrained to make their common features informative. (See further Proofs and Illustrations.) Therefore the programme had failed: if Carnap's concept was correct, then there is no such thing as the logical probability. 20

Carnap had a favourite probability function, called m*, and in his article Tooley referred to it as the correct logical probability function. How could this be warranted? Could we postulate that it is the correct one? Not in the sense that the above mathematical problem has a unique solution, when it does not. That is again like trying to postulate that an argument is valid, when it isn't. Could we postulate that eventually we will understand the notion better, and be able to add to Carnap's (a) and (b) certain other requirements,

end p.119

which will single out m* uniquely? What would that be: a postulate about the future of Western philosophy? Would the correctness of present philosophical views then depend in part on whether the military will reduce this world to ashes in the next century? Or would the postulate mean that we already have a richer concept of logical probability than either Carnap or anyone else has been able as yet to make explicit? That could be; but all of us being unable to tell, how shall we evaluate a philosophical position resting on this article of faith? If a philosophy requires an act of faith, of such a specific sort, what has it to say to those who do not share it? I am not accusing Tooley of having chosen any of these courses, but to be frank, I see no other course open to him.

Proofs and Illustrations

Hindsight is easy, and always a bit gênant. But the problems for Carnap's early programme turned out to be both insuperable and elementary. The non-uniqueness of the measure rests on different considerations for finite and infinite vocabularies (or sets of properties and particulars), and I shall discuss it for these two cases separately. 21 Moreover, to show that the problem does not rest on Carnap's Humean notion of what simple sentences say, I shall show how laws may be incorporated. I follow in part Tooley's proposals and in part John Collins's BA Thesis. 22

Let language L have k simple sentences Q, R, . . . ; m one-place predicates F, G, . . . ; and n names a, b, c, . . . ; together with the machinery of standard first-order logic without identity. The simple sentences can be interpreted as laws. Call L finite if its vocabulary is finite, and otherwise infinite (in which case k, m, or n is not an integer but countable infinity). Let a TV be an assignment of T (true) or F (false) to each sentence, in accordance with (first-order) logic. Usually such a TV can be summed up rather briefly. Suppose for instance that k = m = n = 1. Then

Q    Fa    (x)Fx    (x)¬Fx
T    T     T        F
F    T     T        F

depicts two TVs in summary.

Suppose that L is finite; then so is the number of its TV's. (This is in part because I have kept out identity, so we cannot count in this language.) A probability function P must assign a number between zero and one (inclusive) to each TV, and these numbers must add up to one. Then we define:

    P(A) = the sum of the numbers assigned to those TV's which give T to A
    P(A|B) = P(A & B)/P(B), provided P(B) is not zero

Now, what requirements can we put on P? The definition has already guaranteed that P will assign the same number to any two logically equivalent sentences, and that P will not violate the usual rules of probability calculation. So what remains is to determine what P must do with the individual TV's.
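To fix ideas, the construction can be carried out mechanically. The following sketch (mine, assuming the two-part definition reconstructed above) enumerates the TV's for the case k = m = n = 1, in the column order (Q, Fa, (x)Fx, (x)¬Fx), and defines P from a weight assigned to each TV; the eight survivors are exactly those tabulated below in Table 5.2:

    # Enumerate the TV's for k = m = n = 1 and define P over them.
    from itertools import product

    def consistent(tv):
        q, fa, all_f, all_not_f = tv
        # (x)Fx = T forces Fa = T; (x)¬Fx = T forces Fa = F.
        return (not all_f or fa) and (not all_not_f or not fa)

    TVS = [tv for tv in product([True, False], repeat=4) if consistent(tv)]
    assert len(TVS) == 8  # the eight TV's of Table 5.2

    def P(sentence, weights):
        # P(A) = sum of the weights of those TV's which give T to A.
        return sum(w for tv, w in zip(TVS, weights) if sentence(tv))

    uniform = [1 / 8] * 8
    print(P(lambda tv: tv[0], uniform))  # P(Q) = 0.5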

This is the point where Carnap introduces the invariance requirements. It is clear that any specific meaning given to the vocabulary, or any factual information assumed, will break the sort of syntactic symmetries which these represent. For example, if F and G stand for scarlet and red, then no TV should give T to Fa and F to Ga. Similarly, if we already know that all F's are G's for some other reason. We now have two options. We can classify some of this information as really logical or verbal, and eliminate TV's conflicting with it, before determining how P should treat (the remaining) TV's. Or else we can ignore all such information to begin with, define a perfectly informationless P, and then conditionalize it on the information (along the lines of the second part of the above definition). The result, call it P′, may be thought of as the mature logical probability, after assimilating meaning that goes beyond syntactic form. In this latter case it will not be so bad if P is not unique, as long as the mature P′ is unique.

The two courses will lead to the same result if the language is finitary and the idea is merely to delete bad TV's. When the language is infinitary, zero probabilities begin to play a troublesome role, and the first course may work when the second won't. On the other hand, if we have the idea that some of the simple sentences speak about the probabilities of other sentences (i.e. if they express probabilistic laws), only the second course could work. For that we can make come out right only by insisting that the probabilities should be so distributed among the TV's that the conditional probability (e.g. of b decaying in 5 minutes, given that b is a radium atom and the law gives probability e^(-5A) to the decay of radium atoms within 5 minutes) be correct. This cannot be done by deleting some TV's, and it cannot itself be invariant under

end p.121

substitution of predicates (such as lead atom for radium atom).

So let us proceed carefully in two steps. First we decide what an informationless probability P looks like. Then we will look for a mature descendant P′ which reflects meaning and lawhood. If P itself is not unique, this need not worry us, as long as its mature descendant is.

Recall the case of language L with k = m = n = 1. It does not have 2^4 = 16 TV's, because a TV must give T to Fa if it gives T to (x)Fx, and F if it gives T to (x)¬Fx. That leaves eight. Because the language is so small, no two of these are related to each other by a permutation of simple terms. If there is no other invariance requirement, that means these eight TV's can be assigned any probabilities summing to one. We could think of insisting on invariance if F is replaced by ¬F, or Q by ¬Q. That gives the grouping shown in Table 5.2.

Table 5.2

               Q    Fa    (x)Fx    (x)¬Fx
Group I   (1)  T    T     T        F
          (2)  T    F     F        T
          (3)  F    T     T        F
          (4)  F    F     F        T
Group II  (5)  T    T     F        F
          (6)  T    F     F        F
          (7)  F    T     F        F
          (8)  F    F     F        F

Then every TV in Group I must be assigned the same value, and likewise every TV within Group II. But how the probability is allocated to the two groups is arbitrary. It is noteworthy that on either policy, Q and Fa will each receive probability 1/2, which shows the extent to which the Principle of Indifference is operative. What is left indeterminate, clearly, is the probabilities of (x)Fx and (x)¬Fx. On the first policy, these could be anything; on the second they could also be any number between zero and one, but must be equal.

There is no way in which logical considerations could go further than this. You could say that all eight TV's must have the same probability. But this would mean, for example, that the probability of (x)Fx rises from 1/4 to 1/2 conditional on the information that Fa.

end p.122

This is a 100 per cent increase, which is just silly. If we thought that the number of names in the language reflected the number of things there are, then we should have said that (x)Fx is certain given Fa. And if we didn't think that, we shouldn't be assuming that a can function as such a sensitive gauge of what all things are like. So the idea that all TV's must be treated equally can have no general appeal. 23
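Continuing the sketch above, the arithmetic of this objection can be checked directly, with the uniform weight of 1/8 per TV:

    # With equal weights, conditionalizing on Fa doubles P((x)Fx).
    def cond(a, b, weights):
        return P(lambda tv: a(tv) and b(tv), weights) / P(b, weights)

    all_f = lambda tv: tv[2]   # the sentence (x)Fx
    fa = lambda tv: tv[1]      # the sentence Fa
    print(P(all_f, uniform))         # 0.25
    print(cond(all_f, fa, uniform))  # 0.5 -- the '100 per cent increase'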

Can we cut this down to a unique function P′ by letting Q express a law that everything must be F? Indeed; that would in effect remove TV (2) from Group I and (5), (6) from Group II. But the liberty to distribute probability any way we like between the two groups still leaves many probability functions. Uniqueness would appear only if we added R to express the law that (x)¬Fx, and then tossed out any TV in which Q and R are both false. This would be a hypothesis of total determinism. We could also allow both Q and R to be false, while adding, say, S, T, . . . to express probabilistic laws such that P′(Fa|S) must equal, say, 1/2. These restrictions leave the probability of (x)Fx unconstrained again, however. To close the gap, another law could be introduced to fix the probability of (x)Fx directly, and independently of that of Fa. That would be quite out of line with the conception of a probabilistic law. In any case, that we could constrain a unique P′ simply by dictating all the probabilities it must assign is no news! The point of introducing logical probability into the account (that it is itself an independent and determinate logical notion, which provides a bridge between probabilistic laws and rational expectation) would here be lost altogether.

Let us look now at an infinite language; suppose specifically that there is an infinite set of names a, b, c, . . . . If F is a predicate, then each TV must give T or F to each of Fa, Fb, Fc, . . . . The number of TV's is then the next infinite number: it has the power of the continuum. But from this it follows at once that most of them must receive probability zero. (For at most two can receive as much as 1/2, at most three as much as 1/3, at most n as much as 1/n. So at most countably many will receive a probability higher than zero.) 24 This is not debilitating: we can represent the TV's by points on a line, and then use a distribution function over that line to determine positive probability for consistent finite sets of sentences. However, there are more than denumerably many such distribution functions. It does not help to say 'of course, you must use a constant

end p.123

distribution function!' For the representation of TV's by points does not incorporate a non-arbitrary distance metric between TV's; hence it is largely arbitrary. (This is a point we encountered also in the discussion of chance in the preceding chapter; see Fig. 4.5.)

How far can the invariance requirements, imposed on probabilities assigned to single sentences, take us? Suppose we include the demand that simple sentences such as Q or predicates such as F be replaceable uniformly by their negations, without affecting logical probability. Then P(Q) = P(¬Q), hence each must be 1/2; similarly for P(Fa). Now P(Fa) = P(Fa & Fb) + P(Fa & ¬Fb). If we try to keep probabilities positive (or non-infinitesimal) as long as possible, then P(Fa & Fb) will be less than P(Fa). By a repetition of the argument, P(Fa & Fb & Fc) will be less still; and so forth. Since (x)Fx entails all those sentences, its probability will be less than each in consequence. This need not be zero, for the series could converge to a positive number.
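A concrete example (my own construction, not in the text) shows how the sequence can indeed converge to a positive number while each conjunct keeps the negation-invariant marginal 1/2: mix the hypothesis that everything is F (weight 0.3), the hypothesis that nothing is F (weight 0.3), and an independent fair coin per name (weight 0.4):

    # P(Fa & Fb & ... , n conjuncts) under the three-way mixture.
    def p_first_n_all_F(n):
        return 0.3 + 0.4 * 0.5**n

    for n in (1, 2, 3, 10, 100):
        print(n, p_first_n_all_F(n))
    # n = 1 gives 0.5, the required marginal; the sequence decreases
    # strictly and converges to 0.3 > 0, a candidate value for (x)Fx.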

Note well that replacing F by ¬F does not turn (Fa & Fb) into (Fa & ¬Fb) or (¬Fa & Fb), but into (¬Fa & ¬Fb). The invariance requirement, even in this strong form incorporating negation, entails only:

Fa & Fb has the same probability as ¬Fa & ¬Fb

Fa & ¬Fb has the same probability as ¬Fa & Fb

but probability could be arbitrarily distributed between the two groups. A still stronger requirement, that any simple predicative sentence like Fa be everywhere replaceable by its negation, salva probabilitate, would make the two groups equal in probability. Then all four sentences receive 1/4. By repeating the argument for (Fa & Fb & Fc) and so forth, we would arrive at the conclusion that every TV has zero (or infinitesimal) probability, and the universal sentence (x)Fx also. But just as above, there are independent reasons not to accept such a strong requirement as logically imperative. (The reasons are however that the programme would suffer, and not that the general command, to treat logically similar sentences similarly, is stopped here by an apprehended asymmetry. I am simply being as charitable to the programme as I can be.) If we did impose this very strong requirement, we would arrive at the unique function m† for the quantifier-free part of the language again, the one that Carnap rejected.

The plethora of distribution functions on a continuum makes P non-unique to a remarkable extent. What happens if we begin to

end p.124

interpret the single sentences, such as Q, as laws? If Q is the law that everything must be F, we can delete any TV which gives T to Q and F either to some sentence like Fa, or to (x)Fx. Then the probability of Fa, or of (x)Fx, given Q will be 1, while Q itself still has probability 1/2. Now the negation invariance is broken: there is no reason to expect P′(Fa & ¬Q) to be zero, so Fa will now have a higher probability than ¬Fa. This is rather curious in itself: the mere possibility that there is such a law has raised the probability that something is F! But at this point we may drop even universal negation invariance, and let P(Q) be anything. Uniqueness is not exactly nearer then.
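Continuing the earlier sketch, this paragraph too can be checked by direct computation. The weights below are chosen by hand, arbitrarily except for the constraints just stated: the TV's giving T to Q but F to (x)Fx are deleted, and P′(Q) is kept at 1/2:

    # Q as the law that everything must be F: of the Q-rows only TV (1)
    # survives; give it 1/2 and spread 1/2 evenly over the four ¬Q rows.
    q = lambda tv: tv[0]
    weights_after = [0.5 if tv == (True, True, True, False)
                     else (0.125 if not tv[0] else 0.0)
                     for tv in TVS]
    print(P(q, weights_after))         # 0.5: Q keeps probability 1/2
    print(cond(fa, q, weights_after))  # 1.0: Fa is certain given Q
    print(P(fa, weights_after))        # 0.75, against 0.25 for ¬Fa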

If (x)Fx had probability zero, the above manoeuvre will not work, because the envisaged conditionalization of P on (¬Q v (x)Fx) to produce P′ will have reduced the probability of Q to zero as well. In such a case, the deletion of TV's should occur first, and only then should the logical probability be determined, to the extent it can be. That is: the meaning of the laws has to be built into the language before the Indifference Principle is applied. That is certainly possible, and will then insure that (x)Fx, which had probability zero, now has the probability of Q (since the determination of P((x)Fx|¬Q) is presumably unaffected). Again, since P(Fa|¬Q) is presumably also positive, the mere recognition of the possibility of a law (by designing the language to allow its expression) has raised the logical probability of Fa. The effect could be counteracted by incorporating other, contrary law statements. Thus the general conclusion is this: however it be done (as in the preceding paragraph or this one), the set of law-statements expressible in the language significantly affects the logical prior probability of their instances.

For someone who views the correct language as having a law statement in it only if the law is true, this would be fine. But the logical design of the language cannot depend on a contingent truth. I do not see how this could seem fine to someone who would expect merely conceivable laws to be formulated in the language. Perhaps he could insist on a very carefully chosen set to be expressible, so as to nullify this effect by their presence overall. But puzzling over this seems rather useless, given that the non-uniqueness we have found establishes that there is no such thing as the logical probability.

6. What the Renaissance Said to the Schoolmen

So far we have concentrated on the unsolvability of the identity problem and the inference problem, taken jointly. It is time to look at the universals accounts' most frequent claim: that laws so conceived truly explain.

This claim is most often advanced in the negative: universal regularities as such do not explain, but laws do, so a law must be or entail more than a regularity. So far, so good; but that does not yet establish that laws conceived as relations among universals do explain. The first major obstacle to the claim that they do is the failure to solve the inference problem: it simply does not seem that (irreducible higher-order) relations among universals can provide information about how particulars behave. While I'm anxious not to base criticisms on any specific theory of explanation, surely a minimal criterion is unmet here. But let us set all this aside, and see whether (if the information they give be granted to be as hoped) relations among universals can indeed truly explain, in the way that regularities cannot.

For the necessitarian accounts, possible-worlds style, the answer to the corresponding question was No, according to Foley's argument. For the law was there conceived as also a universal truth, though about worlds rather than about entities in a world. Now, if a mere universal truth does not have the wherewithal to explain, then the postulate of a universal truth about worlds cannot be as such the terminus de jure for explanation.

The form of this argument is tu quoque: you claim that A cannot be explained by B alone because B is a mere X, and you then explain A by explaining B by C; but C too is a mere X! Let us call this the termination problem: anyone who claims that something or other is not enough for explanation must enlighten us as to what is enough.

At first sight, the universals account fares better here. After all, it explains the universal truth about particulars by means of a singular truth about universals. But it was not the universal form of the universal regularity that made it incapable of explaining! The objection was that mere universality is not enough. We can't explain that this crow is curious merely by saying that all crows are curious; at best this will point to an explanation in terms of inheritable characteristics among birds. The failure of the possible-worlds

end p.126

account was that we don't receive the information 'All worlds physically possible relative to us are thus-and-so' on a background of beliefs that would lead us to go on in this fashion: 'Oh well, then there is probably a set of inheritable characteristics of crows in this, or all such worlds, such that . . .'. Or at least, the universal truth about worlds does not point to these missing pieces in our puzzle any more than the original universal truth about actual crows did.

Of course, the possible-worlds theorist says: but this news about crows in other worlds means that the regularity is necessary, that it is not an accident. By making it mean that, however, he robs the assertion of necessity of any force that the generalization about worlds lacks. After all, what is gained, except brevity, if one restates the same story by means of explicitly defined terms? A defined term is only an abbreviation, and nothing can be added by abbreviating.

Now again, the universals account looks as if it will fare better. It claims not to define, but to reveal the ground of, the necessity. The regularity in the particulars is made necessary, by the relations among the universals.

But now we face the dilemma: is this 'necessary' in 'made necessary' a matter of logic or not? In the first case, we have Molière's virtus dormitiva as the pattern of explanation. In the second case, we land in the lawgivers' regress.

Molière was late, and only in fashion with his critique of the Schoolmen, a fashion harking back to the real struggle of the New Sciences against the Scholastic tradition in the Renaissance. Galileo still had to understand that tradition to fight it. Boyle and Newton already seem unaware of finer distinctions, but still appreciate the real gap between the two styles of explanation.

That which I chiefly aim at, is to make it probable to you by experiments, that almost all sorts of qualities, most of which have been by the schools either left unexplicated, or generally referred to I know not what incomprehensible substantial forms, may be produced mechanically . . . 25

Substantial forms: that means universals. But can't the Schoolman retort that the mechanics, describing what Boyle calls the mechanical affections of matter, must fall into the same pattern of explanation as his own, if it is to explain at all?

To explore this question, just imagine a discussion in the Renaissance between a Schoolman (with a complex theory of

end p.127

natures, substantial forms, complexio, and occult qualities) and a new Mechanist (with a naïve theory of atoms of different shapes, with or without hooks and eyes). The Schoolman says that the mechanist account must eventually rely on the regularities concerning atoms, such as that their shapes remain the same over time, and that there must be a reason for these regularities in nature. But the Mechanist can reply that, whatever algebra of attributes, etc., the Schoolman can offer him, the inference from the equations of that algebra to regularities in the behaviour of atoms must rest on some further laws which relate attributes to particulars. For example, if A is part of B, it may follow that instances of A are instances of B, but not without an additional premiss which justifies, in effect, the suggested part-whole terminology for the indicated relation between universals. Of course, if we define A to be part of B exactly if it is necessary for instances of A to be instances of B, the argument becomes valid. But then is the necessity appealed to in the definition itself grounded in some further reality? And if not, if there can be a necessity not further grounded in some further reality, he would like to return to his atoms, please, and say that their postulated regularities are not grounded in any further reality; it's just the way atoms are.

end p.128

Abstract: This Part concentrates on general issues of epistemology, both to provide an alternative to the metaphysics examined in Part One and to prepare the ground for the following discussion of science in a truly empiricist, non-metaphysical key.


