Monday, May 30, 2005

They're at it again

Another Philosopher's Carnival! Will the madness never end??

Freakonomy indeed

In addition to the usual load of more arcane material, lately I've been reading a Best Seller (Freakonomics, by Steven D. Levitt and Stephen J. Dubner), which spends a chapter examining the question of whether people are held back if they have weird and/or self-evidently minority (esp. black) names. The answer, when one controls for other factors, seems to be no; but on the way to this conclusion our authors throw out a bunch of titillating tidbits (that's how you get to be Best-Selling Authors).
A great many black names today are unique to blacks. More than 40 percent of the black girls born in California in a given year receive a name that not one of the roughly 100,000 baby white girls received that year. Even more remarkably, nearly 30 percent of the black girls are given a name that is unique among every baby, white and black, born that year in California. (There were also 228 babies named Unique during the 1990's alone, and 1 each of Uneek, Uneque, and Uneqqee.)

I don't have a problem with unusual names in general; there are some great names in sports, like Peerless Price (and remember I. M. Hipp?). But Uneqqee?? Yeeqqee. And here's a story that suggests that the warnings on certain medications should be extended to read: do not drive, operate machinery, or name your child while taking this product.
Roland G. Fryer Jr., while discussing his names research on a radio show, took a call from a black woman who was upset with the name just given to her baby niece. It was pronounced shuh-TEED but was in fact spelled "Shithead."

That definitely trumps the story my dad likes to tell (but here it is anyway) about the family who wanted to name their daughter yoo-REEN (sp: Urine). (Beautiful, but not a biblical name, said the preacher; how about Rachel instead?)
Or consider the twin boys OrangeJello and LemonJello, also black, whose parents further dignified their choice by instituting the pronunciations a-RON-zhello and le-MON-zhello.

Dignified? Tell that to their sisters Lih-MEE-zhello and Razba-REE-zhello [rimshot]. Actually I bet everyone's used to it by now; or they just go by Ron and Monj. Incidentally, they tell us that this story is considered an urban legend, but they stick by it nonetheless. (Note, however, the distancing move re: Shithead -- that's what the caller said, but who knows?) Lastly,
A young couple named Natalie Jeremijenko and Dalton Conley recently renamed their four-year-old son Yo Xing Heyno Augustus Eisner Alexander Weiser Knuckles Jeremijenko-Conley.

Oh, that's just great. Hyphenated names are a stretch to begin with, but then you have to exercise some restraint. I mean, Joe Jeremijenko-Conley is a halfway reasonable name; but Yo? If he lives in Philadelphia he'll think everyone's calling his name whenever he walks down the street. Xing and Heyno aren't any better, but at least they're in the middle. After that we have a stretch of relatively normal names -- but then, right before the finish, comes the final insult: Knuckles. Yo Knuckles: sounds like that character from Lilo and Stitch, Cobra Bubbles, the government agent, to whom is addressed my favorite line in that movie: "Oh good, my dog found the chain saw."

Saturday, May 28, 2005

Back to the bakery

The other day I voiced some dissatisfaction with Professor Littlejohn's discussion of biscotti, the twice-baked treat. In the light of his response, and further cogitation, I'd like to try again. I began there by defending one way of speaking as if it were the correct one and the other incorrect, and then later on I defended the idea that the right way of speaking depends on what we're talking about. This is confusing (bad duck! Bad!).

First let me change the example from cookies to a loaf of bread (this shouldn't matter). There are two ways of looking at what happens in the oven. First, we can say that we put something into the oven and when it -- that thing -- came out, it was quite different, qualitatively speaking, from how it was when it went in. In this sense nothing came into being or passed away. But we can also say that we put one thing into the oven and took another thing out. When we talk like this, we say: the loaf of bread came into being and the uncooked lump of dough no longer exists -- it has become something else which did not exist before. This latter is Clayton's view: as he put it, the [loaf of bread] is numerically distinct from the hunk of dough that went into the oven.

Is one of these ways of speaking the correct one and the other incorrect? That is, does one speak truly of what "really happened" while the other is loose or elliptical or downright wrong? If so, which is correct? If not, how can this be?

In my original post, I mangled the concept of substance, causing me to fail to see the point of Clayton's view (I called it "a mistake"). Leaving out at least three more definitions (isn't metaphysics grand?), let's distinguish between substance (1) = bearer of properties or substratum; substance (2) = entity or concrete individual; and substance (3) = type of stuff.

If concrete individuals can be the bearers of properties, which it seems they would have to be, then senses (1) and (2) of "substance" coincide. The only sense of (1) which is independent is, I would think, the extremely abstract one which we are not talking about here, so let's fold them together for now. The other one, (3), might seem simply colloquial, in that being (made up of) a type of stuff is a property, not a substance, which is the bearer of properties. On the other hand, some individual things are such that being made up of a particular type of stuff is essential to them, so that if they lost that property, they would thereby pass out of existence. In this case, we might say that senses (2) and (3) coincide, or at least overlap.

Here's the one view, then, in these terms. It is essential to [the substances (2) which are] loaves of bread that they are made up of [the substance (3)] bread; so if something is made up not of bread but of uncooked dough, then it is not (yet) a loaf of bread. At the (vague) point at which dough is cooked enough to be properly called bread (if still perhaps undercooked), then the lump of uncooked dough passes out of existence (because of course being made up of X is essential for something to be a "lump of X"), and a numerically distinct entity, the loaf of bread, comes into being. The thing which went into the oven underwent, as Clayton calls it, "substantial change" (and thereby passed out of existence).

Fair enough; I grant that this is a coherent way of speaking and not just the mistake I called it before. On the other hand, and this was my first point, I hardly need to be committed to the classical metaphysical notion of substance as pure substratum (an accusation which, I should point out, has never been made, to my knowledge, here or elsewhere) in order to say instead, or as well, that something (some concrete individual) went into the oven, underwent a qualitative change therein (which it survived), and then emerged from the oven in a newly crusty state.

In other words:
I put the bread into the oven at 3:00, and when it came out at 5:00, it was done.

If that's not enough, here's a real example, not made up (although it's hard to imagine anyone doubting that people actually do say what I just did). My mom's been making bread in a bread machine, and in the manual it says the following:
For the French bread cycle you can expect the following things to happen as the timer counts down to zero.

To begin: The dough is kneaded for the first time. (18 minutes)

At 3:32: The dough begins to rise (40 minutes)

At 2:52: The dough is kneaded for the second time. (22 minutes)

At 2:30: The dough continues to rise. (20 minutes)

At 2:10: The dough is "punched down." (30 seconds)

At 2:10: The dough rises for the final time. (65 minutes)

At 1:05: The bread begins to bake. (65 minutes)

At 0:00: The bread is finished.
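Incidentally, the manual's countdown arithmetic checks out. Here's a quick sketch verifying it (the step times and durations are taken straight from the list above; the cycle evidently starts at 3:50 on the timer, and the 30-second punch-down, listed at the same countdown as the rise that follows it, is treated as rounding to zero):

```python
# Each step's countdown value should equal the previous step's
# countdown minus the previous step's duration. Countdowns are in
# minutes remaining on the timer.
steps = [
    # (countdown at start of step, duration in minutes)
    (3 * 60 + 50, 18),  # first knead (cycle start: 3:32 + 18 min = 3:50)
    (3 * 60 + 32, 40),  # dough begins to rise
    (2 * 60 + 52, 22),  # second knead
    (2 * 60 + 30, 20),  # dough continues to rise
    (2 * 60 + 10, 0),   # punch-down (30 seconds, rounds to 0 min)
    (2 * 60 + 10, 65),  # final rise
    (1 * 60 + 5, 65),   # bake
]

# Walk adjacent pairs and confirm each transition.
for (start, duration), (next_start, _) in zip(steps, steps[1:]):
    assert start - duration == next_start, (start, duration, next_start)

# The last step (baking) should run the timer down to 0:00.
assert steps[-1][0] - steps[-1][1] == 0
print("countdown arithmetic checks out")
```

So at 1:05 on the timer, the machine really does switch from rising to baking, which is exactly the moment at issue below.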

If at time = 1:05 to go, the bread machine contains either a lump of uncooked dough x-or a loaf of bread (which is just now beginning to bake), then these two accounts are at odds. Which is right? On one view of the matter, it doesn't matter what people actually say. All that matters is what is really the case, whether or not that's what we say in ordinary cases. If we are to get things right, as philosophers, we must discern the actual ontology and modify our (philosophical) language to track it. (Of course there's nothing wrong with speaking loosely "outside the study," as we say, or in the kitchen in this case.) If this is right, I have hardly helped my case by citing an actual example of what people say. Maybe that we do (sometimes) talk that way just means that (sometimes) we get things wrong (and need philosophers to help us out).

In this context consider two ways of understanding what has come to be known as "ordinary language philosophy." Obviously the "ordinary language philosopher" (let's call him "Austin") wants us to take ordinary talk seriously. But does this mean that ordinary talk gets things right where "metaphysics" fails? That is, is the correct ontology picked out by ordinary talk while "metaphysics," in its misguided attempt at rigor, is simply false? Or is it rather that "metaphysical" talk and ordinary talk, although giving different accounts of the world for different purposes, are equally correct?

Neither of these sounds particularly attractive to the traditional ear. The first sounds like simple nihilism, as if there could be no point in speaking rigorously. This is worse than positivism, for while the positivists also rejected "metaphysics," they at least substituted a rigorous empiricism, whereas "Austin" just encourages the sort of loose talk we would engage in if there were no such thing as philosophical inquiry into the real. (This was Russell's attitude toward "ordinary language philosophy.") The second is no better. If both ways of talking are correct, then this implies that how things are depends in some way on how we talk; while if neither is correct, then this is a different sort of nihilism, of a skeptical sort, as if our language were essentially inadequate for describing reality.

Of course another possibility is that as it happens in this case, what we ordinarily say points us to the correct ontology, where another, more clearly "philosophical" account gets it wrong. If this were my view, I wouldn't have brought up ordinary language in the first place, seeing as I would need a traditional metaphysical argument anyway. (In my original post I left this possibility open. Oops.) Neither is the first conception of ordinary language philosophy my view. Ordinary language has no special ontological status, as if it were purer or uncontrived. This leaves (the two versions of) the second conception. I reject the latter version (that neither way of talking is correct) for the reasons suggested: this does indeed sound too skeptical (or non-cognitivist) for me. Our language is perfectly adequate for doing what it does. The former version -- that how things are depends in some way on how we talk -- gets the closest, I suppose. I do claim that in speaking normally we speak the literal truth; and I have also conceded to Clayton that his way of talking is not necessarily false, depending on what it is that one was thinking of doing in talking this way.

But that's not the same thing as saying, flatly, that "both are correct" (or, not so flatly but equally lamely, "... relative to our interest, or conceptual scheme, or whatever"). Clayton naturally denies that things change simply in being thought of differently for different purposes. Although he does not use these terms, in warning of the "untold havoc" that would result from this view, it sounds like he is defending "realism" against an "idealistic" threat to the idea of an independent reality. It is this that provoked me to deliver myself, neither for the first time nor the last, of my anti-dualistic rant (neither realism nor anti-realism is acceptable, etc.). I won't repeat it here, except once again to recommend Cavell, who I should point out doesn't necessarily put things in any way like I have here, or even in general for that matter; nor does Austin escape criticism. (Read Cavell.) On the other hand I can be provoked to talk this way in other contexts too (Nietzsche, Kant, Davidson, Hegel, or, well, anyone you like).

Naturally this settles nothing. All I've done, if that, is clear the space for a better account -- not of the bread, but of how we (should) speak of the bread, and of how we should speak of how we speak of the bread, and so on -- or in other words, of how we should speak. And of course for the most part we speak perfectly well. When we do philosophy we can lose sight of this, and fall into error. But that does not mean that doing philosophy -- moving out of the kitchen and into the study -- thereby amounts to falling into error, as some have claimed on behalf of (or in supposed response to) Austin and Wittgenstein alike. Nor does it mean that we are condemned to speaking not of the world but of language (another common complaint about linguistic philosophy). The two are inextricably linked -- that's the whole point, and one reason for bringing in Austin in the first place.

Let me finish with Austin's own words on the matter (from "A Plea for Excuses," p. 182):
When we examine what we should say when, what words we should use in what situations, we are looking again not merely at words (or 'meanings', whatever they may be) but also at the realities we use the words to talk about: we are using a sharpened awareness of words to sharpen our perception of, though not as the final arbiter of, the phenomena.

Again, though, this is not simply a defense against (accusations of) linguistic idealism. Taken together with what he actually says about our use of words -- what, that is, he takes this "sharpened awareness" to consist in -- the point cuts both ways (as does, again, Cavell's chapter title, in The Claim of Reason: "What a Thing Is (Called)"). That, after all, is what made this denial of idealism necessary in the first place. (Note, finally, the qualifier "though not as the final arbiter of" the phenomena; that's what rules out the dogmatically nihilist reading (Russell's, above) of his views.)

Monday, May 23, 2005

More Artmatic

I should say more about the program I used to make these pictures (this one's generated from an algorithm, not, as the earlier one was, a computer treatment of a pre-existing digital photo). Some interesting aesthetic issues are involved (not only Is It Art, that is). But for now here's the picture. If you click on it, it will take you to my flickr page, where, if you so desire, you can see a larger version, plus the other photos (only three total so far) I have there.

[image: "gap"]

Saturday, May 21, 2005

BBS X-com redux

The other day we took a look at the cherem, or excommunication, delivered by the leaders of the Jewish community in Amsterdam against Spinoza in 1656, one which was apparently much more severe than usual, and included some remarkable passages, some of which sounded familiar. Well, as I should have guessed (duh), a bit of it comes from the Torah, Deuteronomy in particular. Now Nadler (the author of the Spinoza bio) refers to "the famous phrases from Deuteronomy 4:7" -- but the Hebrew version (I did not know this) must be fairly different from my handy NRSV, as that 4:7 has nothing like that. I eventually found a similar passage in chapter 28.

Deuteronomy chapter 28 has 68 verses. The first 13 say how blessed the Israelites will be if (and only if) they diligently observe the commandments the preceding chapters have spelled out. As everyone knows, these commandments vary in tone and content from things they were probably already doing (23:13) to common courtesy (24:21), to particular one-shot tasks (27:2-8), to everyday rules of conduct, which themselves range from time-honored conventional morality (25:15) to the oddly specific, by today's standards at least (25:11-12; did this issue really come up that often?), or things which, for various reasons, virtually no-one suggests we should do today (take your pick). Anyway, if the Israelites are faithful to the covenant, Moses tells them, they will be blessed (28:6): "Blessed shall you be when you come in, and blessed shall you be when you go out."

The remaining fifty-plus verses of chapter 28 (you can see where this is going) deal with what will happen should the Israelites fail to obey. The only overlap with Spinoza's cherem that I can see is the inverse of verse 6: "Cursed shall you be when you come in, and cursed shall you be when you go out" (28:19). My point in the earlier post was that Spinoza's cherem was particularly vitriolic, for reasons not entirely clear; but when we look at the source for the "cursed be he under conditions C1, and cursed be he under complementary conditions C2" language therein, we find not only that but a whole lot more that they might have put in but didn't. So in some ways he got off easy.

As before, I intend no disrespect here. Moses had good reason to emphasize the importance of obedience, and to look at an ancient text with contemporary eyes is not to mock it. When we do so look at it, we see what we saw in the cherem, only more so. Some of the unpleasantness ensuing on disobedience is as you would expect:
27: The LORD will afflict you with the boils of Egypt, with ulcers, scurvy, and itch, of which you cannot be healed.

Some of it is unpleasantness of a different sort:
30: You shall become engaged to a woman, but another man will lie with her.

Ouch. But that's not all.
49: The LORD will bring a nation from far away, from the end of the earth, to swoop down on you like an eagle, a nation whose language you do not understand, a grim-faced nation showing no respect to the old or favor to the young.

That nation will besiege the Israelites, and ultimately they will be scattered among all peoples, which of course will be no fun either.
67: In the morning you shall say, "If only it were evening!" and in the evening you shall say, "If only it were morning!"—because of the dread that your heart shall feel and the sights that your eyes shall see.

But the most remarkable part is the description of the siege:
53: In the desperate straits to which the enemy siege reduces you, you will eat the fruit of your womb, the flesh of your own sons and daughters whom the LORD your God has given you. (54) Even the most refined and gentle of men among you will begrudge food to his own brother, to the wife whom he embraces, and to the last of his remaining children, (55) giving to none of them any of the flesh of his children whom he is eating, because nothing else remains to him, in the desperate straits to which the enemy siege will reduce you in all your towns. (56) She who is the most refined and gentle among you, so gentle and refined that she does not venture to set the sole of her foot on the ground, will begrudge food to the husband whom she embraces, to her own son, and to her own daughter, (57) begrudging even the afterbirth that comes out from between her thighs, and the children that she bears, because she is eating them in secret for lack of anything else, in the desperate straits to which the enemy siege will reduce you in your towns.

Wow. In some places in (what some of us call) the Old Testament, the characteristic rhetorical repetition is just tedious, but here it's very powerful. I can just imagine Moses delivering this address; that must have been something to see. As for Spinoza, Nadler thinks he wasn't even there to hear his.

Mistaken or mad?

Over at Right Reason, Scott Campbell opines that "centrist political views have a built-in [rhetorical] advantage" over left and right:
If you’re a left-winger, then the right-wingers claim that you’re mad, and the centrists say that while you’re not mad, you’re mistaken. Similarly, if you’re a right-winger, then the left-wingers will claim that you’re mad, and the centrists will say that while you’re not mad, you’re mistaken. But if you’re a centrist then no-one says you’re mad, just mistaken. But this is nothing to do with whether or not the centrist position is right. It’s merely due to the fact that they have no opponents who are far enough away from them on the scale to assert that they’re mad.

Interesting thesis, but I'm not sure it's true. First of all, as a centrist myself, I can assure you that I am just as capable as any moonbat of seeing wingnuts for the loonies they are -- that is, of properly employing the concept "wingnut" (and "moonbat" too, of course). And the feeling seems mutual to me as well, from both sides. No advantage there.

W/r/t particular issues, on the other hand (quantifiable ones, say), the thesis is entirely plausible (including its independence from the issue of whether a centrist position is correct or not). But that's not really my subject here (and over there, similar points were made perfectly well by others). Campbell goes on to say that "[t]his reasoning applies not just to political debate, but to any sort of debate on any topic. The centrist positions have an in-built advantage." Now in the philosophical context, here and elsewhere, I have presented my approach to debates like realism vs. antirealism or dogmatism vs. skepticism in just this way, as a mean between extremes -- even, sometimes, with an explicit nod to Aristotle -- and of course in doing so I'm simply continuing in the footsteps of others (Rorty, Putnam, McDowell, etc.; though the all-important details differ greatly). Is there really an "advantage," of the sort Campbell suggests, to doing this?

I should point out that this is not really a test of the thesis, which, again, was intended to apply to specific, e.g. quantifiable, types of case. In fact it presumes the thesis, or at least the intuition that underlies it. That is, I see my favored brand of post-Kantian pragmatism as attractive precisely because it splits the difference between unacceptable extremes. Using the quantifiable image of space, then, we put the point this way: in this picture (which I'm not giving here), the world is independent enough of our minds to escape idealism and relativism, without being so independent as to make a mystery of our cognitive access to it (i.e., in the realist manner). That does sound admirably sane and moderate, as well it should (after all, it's correct).

However, this is hardly decisive. In a way, the "moderation" of this view is an artifact of the conception it rejects, and disappears once the alternative comes into view in its own right. In other words, the situation can be seen just as well as a choice not among three views (two extremes and a center), but between two (the traditional and the pragmatist). In this sense, the two "extremes" are much closer to each other than either is to the "center." In fact, again, they're essentially the same position, only with opposed orientations. Rather than moving toward the "center," then, in this sense we simply exchange a defective picture (one which, as Wittgenstein puts it, had been "holding us captive") for another. The defect is just what made it seem that there was a forced choice between two opposed positions, instead of a single dualistic picture we should reject for that very reason.

But now we seem to lose the "advantage" of the putative "moderation." What we had been seeing as extremes now join forces to gang up on us, defending their shared conception against the "counterintuitive" newcomer. Yet at the same time, each also sees the pragmatist as in league with his opponent (that is, as trading a winning position for a losing one). Just as in (some of) the political case(s), one person's virtuous moderation is another's concession to vice. By itself, it seems, the structure of the dispute can't determine which side is right.

Let's look again at (a simplified version of) our example, realism vs. antirealism (it works the same way, mutatis mutandis, for skepticism vs. dogmatism). The realist sees the world as independent of the mind, as "objective"; the idealist disagrees. Which is right? Above, we saw the "moderate" answer ("independent enough, without being ideally (i.e., "absolutely") independent"). But realists and idealists don't see this as sane moderation, but instead as wishy-washy dissembling. To dispel this impression, the pragmatist must switch from the three-position image -- the only one which can even appear "moderate" -- to the two-position image. The realist and idealist seemed opposed, and it looked like we had to choose between them where they differ. The "moderate" view was a way of seeing them as both (almost) right (that's the "carrot"). But (here's the "stick") we can just as well see them as both wrong, where they agree. Both are committed, that is, to the same dualistic picture of objectivity. Here of course is where the idealist will object (the realist will cop to it and stick to his guns, accusing the pragmatist of idealism for rejecting it). After all (he'll say), it is just because the realist picture is indeed dualistic that we must be idealists; that's the point of idealism. Where's the dualism, if the world does not transcend our ideas?

The answer is this. The realist puts everything into two boxes: the objective (the world), and the subjective (ideas). Seeing that such a division is senseless, the idealist puts everything into the second box. But we still have a two-box picture, even if one of the boxes is empty. The conceptual dualism -- to my mind the important one -- is still in place. To overcome it we must reconfigure the notion of objectivity. When we do so we will still find it informative to distinguish, in particular cases and senses, between ideas in the mind and the world which (in an equally reconfigured sense) they represent. (Here's where Rorty, for example, is being obtuse, when, without qualification or explanation, he says simply "I have no use for the idea of objective truth." That's just what makes pragmatism look like the rankest idealism. Of course that's not really an excuse for the uncharitable interpretations of some critics; but it's careless all the same.) We simply give up the traditional (i.e., seen as obligatory, even constitutive of philosophy itself, on the traditional picture) metaphysical/ontological project of dividing things up into one or the other. In this sense the realist and idealist are both unsatisfied with pragmatism (that is: to remain unsatisfied just is to be realist or idealist): there is no "objective world" (realist rolls his eyes) -- but that doesn't mean everything is "within the mind" (idealist frowns).

Once again I have failed to give the content of the "reconfiguration," apparently leaving open (or begging) the question of who is correct. But consider this: what if describing the necessity or process of reconfiguration often and imaginatively and persuasively enough makes it unnecessary actually to fill in its content (that is, we get the point -- give up the defective picture and learn to see properly -- anyway)? Or what if the former just is the latter? Or (hold onto your hats) what if these two possibilities are the same one??

Wednesday, May 18, 2005

And your little dog too!

Right now I'm reading Steven Nadler's biography of Spinoza, and, after more than you ever thought you needed to know about life in the Jewish quarter of Amsterdam in the seventeenth century, we have arrived at the part where, for reasons still not entirely clear, though it probably had something to do with his enthusiasm for the writings of the (recently deceased) damned heretic and radical Descartes, the elders of the community have declared a ban on our hero. No disrespect intended, but to contemporary ears the rhetoric here goes a bit, how you say, over the top. Here's an excerpt (p. 120):
By decree of the angels and by the command of the holy men, we excommunicate, expel, curse and damn Baruch de Espinoza, with the consent of God, Blessed be He, and with the consent of the entire holy congregation, and in front of the holy scrolls with the 613 precepts which are written therein; cursing him with the ex-communication with which Joshua banned Jericho and with the curse which Elisha cursed the boys and with all the castigations that are written in the Book of the Law. Cursed be he by day and cursed be he by night; cursed be he when he lies down and cursed be he when he rises up. Cursed be he when he goes out and cursed be he when he comes in. The Lord will not spare him, but then the anger of the Lord and his jealousy shall smoke against that man, and all the curses that are written in this book shall lie upon him, and the Lord shall blot out his name from under heaven. And the Lord shall separate him unto evil out of all the tribes of Israel, according to the curses of the covenant that are written in this book of the law.

Yikes! Now that's some fine cursing: detail-oriented and thorough. I have to say, sometimes I feel that way about Descartes myself, or at least contemporary manifestations of same. W/r/t the proximate target of this magnificent malediction, on the other hand, my uneducated impression is this: he'll say something like "the great genius Descartes says such-and-such, which is certainly right, but actually I think it might work better if we put it this way" -- followed by a devastating refutation of Descartes's position (that is, of what we have come to see as the objectionably "Cartesian" part). Come to think of it, I'm not sure where I got that impression. If I run into any examples I'll let you know.

Monday, May 16, 2005

Here's how out of it I am (chapter 46)

I was doing the Sunday crossword puzzle, and for 76 Across, the clue reads: "Guys' chat topic." I have CA_S. Cabs? No, that can't be right. Cads? No, that would be "Gals' chat topic." Cams? No, not unless they're mechanics or something. Cans? Maybe if they're beer can collectors. Caps? Hardly. Cars? Oh, right. Myself, I've probably spent more time talking about cats than cars, but that's me. On that note, check out the latest addition to the blogroll!

Sunday, May 15, 2005

Cookies: sometimes or anytime?

Over at the only official blog of Clayton Littlejohn, the eponymous author has an interesting post about biscotti (you know, the Italian cookies). "Biscotto," as I might have noticed had I been paying attention, translates as "twice-baked"; but "on Aristotelian grounds," he says, Clayton objects to what might seem the natural way of understanding this:
Contrary to what some cooks might say, Biscotti are never baked. [...] The Biscotti are numerically distinct from the hunks of dough which are the twice-baked objects. Biscotti needn't be baked even once. Similarly, one needn't toast toast to make toast, one should instead toast bread.

Although metaphysics, Aristotelian or not, is not my long suit, to me this just looks like a mistake. Biscotti aren't "numerically distinct from the hunks of dough which are the twice-baked objects" -- they're numerically identical with them. They're qualitatively distinct, having undergone a qualitative change in the oven (which is after all what it's for: to affect things qualitatively, e.g., making them crunchy, by heating them). You put the hunk in the oven, and it -- the same entity -- changes from uncooked to cooked (or cooked to recooked). The successive cookie-stages have to be of one numerically identical object or we can't see that thing as having undergone a qualitative change.

What seems to be troubling Clayton is this: Are the hunks "biscotti" on the way in? After all, they haven't been cooked yet, and how is it a "bis-cotto" if it hasn't been cooked even once? Doesn't its identity as a biscotto depend on a future contingency which might very well never happen? My answer: Well, it depends. What am I asking when I ask "is that a biscotto?"? Maybe I mean "I heard biscotti are really good with coffee -- can I try one [reaching for it]?" (Answer: no, not yet; wait til they're done.) But maybe I mean "I get my Italian goodies mixed up -- are those sfogliatelle or biscotti?" Here I don't see any reason not to say: they're biscotti (not: they'll be biscotti after they come out of the oven). In fact, they'd be biscotti even if they never get baked. Similarly, although this case is a bit different, when on CSI they take all the goop out of the victim's stomach, and someone asks "what's that?", a perfectly good answer is: "That's a biscotto" (not: it was part of a biscotto until she ate it (the biscotto); now it's a half-digested, non-crunchy mass).

We may be tempted to say that "it's a biscotto" in these cases is elliptical (i.e. for "that which will be a biscotto once it's cooked," staying with the first case) and not literal -- that ontologically speaking, it isn't really a biscotto yet. But I agree with Austin (on my reading) that to say this misconstrues what we are doing when we refer to things. (One reason I mention this is that in a subsequent post, Clayton tells us that he just got Austin's Philosophical Papers -- a righteous purchase). Everyone should also check out Cavell's chapter on "What a thing is (called)" in The Claim of Reason.

More on that in a bit. Let me explain Clayton's worry a bit more. A commenter, ian, seems to start to say what I just did, but ends up someplace else. Referring, by "D1", to "the packets of space time that at some point undergo the baking process," ian says:
When you say "Biscotti are twice-baked", you mean "Biscotti are D1 that have been baked twice." D1 is the subject of the predicate "twice-baked", not the biscotti. Clearly, to make biscotti you don't take something that is both D1 and biscotti and bake it again. You take something that is both D1 and [hunks of dough] and, by baking it, make it D1 and biscotti.

ian finishes his comment by remarking:
Now I get why some people think philosophers are jerk offs.

Yes. Well. (What, only some people think this?) I certainly agree that it sounds weird to say that to make biscotti you take biscotti and bake them (twice, no less). But that's because to do so would be to use the term in two different ways at the same time, deliberately inviting confusion, for no reason. That would be the act of a jerkoff. But we don't have to say this in order to preserve the idea, as ian's version at least succeeds in doing, that there was some thing that went into the oven and was numerically identical with what came out. On the other hand ian says (I didn't quote this part) that this thing is D1, the "packets of space time," but not the hunks of dough therein, which confuses me. Clayton, on yet another hand, is
a bit worried about the pair of claims that (a) D1 is the biscotti but (b) D1 but not the biscotti is the subject of predication. This suggests that when it comes to predication, the truth of a predicative judgment depends on what sort of description the speaker is thinking of the subject under. This will create all sorts of untold havoc when we think about identity.

Here I agree that if D1 is the biscotto then the biscotto is the subject of predication, which means that biscotti can be uncooked, which is fine with me (in theory, that is, as above). With respect to havoc untold, however, it is important that we lay the blame at the proper door. Said havoc results, in my view, not from the perfectly straightforward idea that the truth of a judgment depends on what (i.e. not simply what thing, under no particular description) the speaker is talking about. It is only when this is combined with tendentious (if also traditional) metaphysical assumptions that havoc results. The identity in question here is the identity of an object, and what objects are -- the only things they could be -- are the referents of our concepts, as used in judgments. An object is what it is in being that thing of which the judgment that it falls under such-and-such a concept is true. And, correspondingly, concepts get their content from their use in judgments about the things they (are used to) refer to in (appropriate generalizations across sets of) particular cases.

Here I may sound more like Davidson (or Davidson as modified by McDowell) than Austin (or Austin as modified by Cavell). But Cavell is working the same side of the street. In the chapter I mentioned above, he suggests that one of his targets is "[a] radical distinction between what is a question of language and what a question of fact" (p. 67); or, in Davidsonian terms, the scheme/content dualism. And of course the context of this quotation is a discussion of Wittgenstein's use of the concept of "criteria" -- which recalls my earlier post on the centrality of the concept of (acts of) judgment for understanding the relation (i.e. overcoming the traditional metaphysical dualism, of which Cartesian substance dualism is only the surface manifestation) of mind and world.

My guess is that this suggestion -- that it makes no sense to consider the properties of an "object" under no particular description -- must sound like idealism, especially in this post-Kripkean era in which "essentialism" (let alone "realism") is no longer a bad word. And in the post-Kantian context in which McDowell and his (better) critics (Brandom, Andrew Bowie) operate, this may not even be a bad thing, even if only for historical reasons. But if the mind/world dualism is our target, surely so must be the realism/idealism one as well. If we could ever agree even on this much, then I would be happy to hand out cookies all round. But I'm not holding my breath.

Thursday, May 12, 2005

It's a madhouse around here

Like most people, I used to play solitaire on my computer. My old PowerBook came with a game called Eight Off, which was diabolically addictive -- I eventually had to throw it into the trash. First of all, it was easy; if you knew what you were about you could win in a couple of minutes. Second, because it was so easy, you were expected to win, so what it did was to time you, and keep statistics: low time, of course (mine was 35 seconds, I think), and average time. One bad hand, where for example you had to stop and think, meant that your average time went back above whatever milestone you had spent some time reaching (like 1:45, or 1:30), forcing you to spend some time getting it back down. But of course as time goes on it takes longer and longer to affect the average time; bad hands don't hurt as much, but good hands don't help as much either, especially as they're not nearly as much below the average as bad hands are above it. The worst thing, though, is this. The moment the last card is played, it automatically deals another hand and the clock starts running. As you may imagine, it takes superhuman powers to resist clicking away -- after all, it'll only take another couple of minutes, and you can see immediately what to do (as I said, it's an easy game). From that experience I learned this: there really is such a thing as carpal tunnel syndrome, and man, does it hurt.
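The arithmetic behind that shrinking influence is simple: each new hand moves the running average by (hand time - average) / N, so the more hands you've logged, the less any single hand, good or bad, can budge it. A quick sketch (the times here are invented for illustration, not my actual stats):

```python
def update_average(avg, n, new_time):
    """Incremental mean: the new hand shifts the average by (new_time - avg) / (n + 1)."""
    return avg + (new_time - avg) / (n + 1), n + 1

# A 3:00 clunker after only 10 hands averaging 1:45 (105 s) hurts a lot...
avg, n = update_average(105.0, 10, 180.0)
print(round(avg, 1))  # 111.8 -- almost seven seconds, bye-bye milestone

# ...but the same clunker after 500 hands barely registers.
avg, n = update_average(105.0, 500, 180.0)
print(round(avg, 1))  # 105.1
```

Which is also why clawing the average back down below a milestone takes so many good hands: a 35-second hand is only seventy-odd seconds below the average, while one bad hand can be twice that far above it.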

My new (year-old) PowerBook has chess, which is different enough that it's not so addictive (nor is CTS a problem). But I have been playing. I'm not very good at chess (anymore), so I find it frustrating to play normal style, which I rarely win. Instead, I've been playing what it calls "crazyhouse" chess. Now I knew two versions of what I had been calling "bughouse" chess. The first is a two-player game, where whenever you capture an opposing piece, you must immediately put it back on the board (you do get to choose where). This makes for a slow and difficult game -- you spend all of your time trying to unclog your position. This is because whenever he takes a pawn of yours, he puts it back in front of your pieces, blocking them, and whenever he takes a piece, he puts it back behind all your pawns, where they are blocked. Often the game ends with a smothered mate. The other version is more fun. It's a four-player game (two teams of two, at two boards). Let's say I'm White on my board. That means my partner plays Black on his. Whenever any of us takes a piece, he hands it to his partner, who may then, instead of moving, place the captured piece anywhere on his own board. So that leads to situations like this: I look perfectly safe, but then there's a routine exchange of knights on the other board, and before you know it, my partner's captured knight drops down out of the sky, a turncoat now in my opponent's service, and forks my king and queen. Or I have a winning position on my board, but on the other board my partner's opponent has my partner on the run, and, although behind in material, could checkmate by dropping a pawn or bishop -- so now my task is not just to crank out the win, but to do so without allowing a capture of pawn or bishop. And of course the game (match?) is won by the team who wins the first game, so if I can't do this, we lose.

As my computer plays it, "crazyhouse" is the natural cross between these two kinds of bughouse. When I take a piece of his, it changes color and becomes one of mine for potential paratrooper duty. I guess you can do this with two real players, if you have more than one set. This kind of chess is fun, but it takes some getting used to (it's weird to have four knights at the same time). What's particularly interesting is the difference, compared with normal chess, in the effect of the program's characteristic strengths and weaknesses. Naturally, computer chess "players" are tactical geniuses but (usually) strategic duffers, because nothing is easier for it than to look a few moves ahead and see whatever combinations there are to be found, but since there's no real planning going on, or any understanding of positional principles (at this level, anyway; better programs are better at this), if there aren't any "clever" tactics it can use, it basically moves randomly. In the normal game, this means it's easy to win if you can just avoid dropping material (like I said, not so easy for me anymore!). But crazyhouse is much more of a tactician's game. It is amazing to me how he, it, can whip up a bushel full of threats out of thin air. Not even a cloud the size of a man's hand (as the saying goes) ... but a pawn drop here and a bishop-for-knight exchange there, and bang! You may now choose between checkmate and pawn-takes-rook, queening. And once he gets the initiative, often by sacrificing a minor piece to get your king out, if he has any paratroopers all suited up, forget it -- you're toast. Interestingly, while in the normal game a minor piece is worth about three pawns, here it's (usually) much better to have the pawns, not least because there are three of them. If he's prevented you from castling (not hard here), and has a pawn at his K5 or K6, and a couple of pawns ready to go, it can get real ugly real fast -- those pawns are like a spike.
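The capture rule that makes crazyhouse crazy -- a captured piece switches sides and sits in your reserve until you spend a turn "dropping" it anywhere on the board -- can be sketched in a few lines. (A toy model, not a chess engine; the `Reserve` class and piece names are my own invention.)

```python
from collections import Counter

class Reserve:
    """Crazyhouse reserve: pieces you've captured, held for later drops."""
    def __init__(self):
        self.pieces = Counter()

    def capture(self, piece):
        # A captured piece changes color and joins the capturing side's
        # reserve; a promoted pawn reverts to a plain pawn on capture.
        self.pieces["pawn" if piece.startswith("promoted") else piece] += 1

    def drop(self, piece):
        # Dropping uses your turn and spends one reserve copy.
        if self.pieces[piece] == 0:
            raise ValueError(f"no {piece} in reserve")
        self.pieces[piece] -= 1

white = Reserve()
white.capture("knight")          # routine exchange of knights...
white.capture("promoted queen")  # ...and a queened pawn comes back as a pawn
white.drop("knight")             # paratrooper duty: fork the king and queen
print(white.pieces["knight"], white.pieces["pawn"])  # 0 1
```

That reversion rule (a promoted piece comes back as a pawn, not a queen) is standard crazyhouse, and it's exactly the rule my program botched in the eight-queens episode below.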

Not only that, sometimes it cheats! I kid you not. Each of the following has happened. First, sometimes it won't let you drop a piece you've fairly won (it error-beeps and moves the piece back to the side). Or it declares victory without the position actually being checkmate! That's happened twice. But the weirdest time was when for some reason it thought that it had an unlimited supply of queens to drop on me. I was winning, and I think I had queened a pawn which it then captured -- but then all it's supposed to get is a pawn, not a queen, let alone as many as it wants. It kept dropping them, and I kept taking them off, and eventually I won anyway (don't ask me how). I ended up with like eight queens on the side. So I do win sometimes. Actually, what I usually do is start from a saved position, one where I'm up a piece (which is actually a two-piece advantage, when you think about it), so I have a decent chance. The problem is that once it runs out of threats, as I said, it basically moves randomly, or checks or threatens you even though you can just capture the offending piece, which is no fun. On the other hand I do like crushing the last futile bits of alien resistance (wha ha ha ha ha!).

Okay, enough fooling around -- my next post will have some philosophical content!

Tuesday, May 10, 2005

Step right up

Here is the latest Philosopher's Carnival. Thanks to Clark for a fine job as host.

Check out this picture


vangoghtracks, originally uploaded by Duck888.

I like the way this turned out - it reminds me of the famous Van Gogh picture of the cornfield of doom. Only here it's the train tracks of doom.

Flickr

This is a test post from flickr, a fancy photo sharing thing.

Friday, May 06, 2005

Speaking of New Jersey

If you ever want to see an unrelenting barrage of stomach-churningly bad acting, here is what you must do: you must obtain (preferably from the library, so that none of your money finds its way into the wrong hands) the DVD of Star Wars: Episode 2 – Attack of the Clones; then sit down, load the disc, go to "Deleted Scenes," take a deep breath, exhale slowly, hit "Play All," and prepare for the ride of your life. Some of these scenes feature the young lovers, Anakin and Amidala (curse my neurons, that they retain this information!), so perhaps you are thinking "Ah yes, that Hayden Christensen, what a yo-yo, where did they get that guy?" -- but no! No, it is not he, but (shudder) Natalie Portman who will put the hurt on you big time. As I recall (the scenes have run together in my mind, into a single nexus of pain), they soften you up with a few cutesy-awkward flirtation scenes with the two of them, but then, when you're whimpering for mercy, they hit you with the killer: Amidala's speech to the Senate, which ... no, I cannot go on.

I'm not sure why they put those scenes on the disc, except perhaps to show that the movie, which was stunningly bad to begin with, was at one time worse still. It's the cumulative effect which is so powerful – I swear, I think she's in all of them. Anyway, that's not the point. You have to watch the deleted scenes because you don't notice it during the movie (even though they couldn't remove all of her scenes), but bad "professional" acting is actually more painful than bad non-professional acting. We've all seen movies with non-professional actors, and in some of them even the acting is pretty good. When it works the effect is one you can't get with professional actors – it's like mistreating a musical instrument to get a unique sound. (Of course, bad non-professional acting is pretty hellacious too – there's a scene in George Washington, which is generally fine, with acting so bad I just cannot believe that was the best take they got.)

So I was leery of Garden State. There she is, right on the cover. Co-billing and everything. A reviewer says "Natalie Portman has never been better!", which did not exactly encourage me. But mirabile dictu, she's adorable here. The lead guy, one Zach Braff, is very likable (think Jewish Ray Romano), the support is good (Peter Sarsgaard, Ian Holm), and the soundtrack is very nice (second straight movie I've seen with Nick Drake in it). They go for a bit too much profundity and romance at the end, but it's too little too late for it to hurt much. A hesitant thumbs up -- uh, better make that "recommendation," lest we receive a cease-and-desist from you-know-who. Dear God, it says on the back that "this quirky coming-of-age comedy [...] has been hailed as 'the seminal film for today's generation' (USA Today)". Yikes. I take it back. Forget I said anything.

Thursday, May 05, 2005

Wit happens

For my looong bus ride(s) the other day, I brought along two books. One was Christopher Hookway's book on Quine; but of course I read the other one instead, which was the second book in Robin Hobb's Farseer trilogy, Royal Assassin. Actually, this trilogy is only the first of three related trilogies, making (gulp) nine books in all. I think the second three just take place in the same world, without any of the same characters, but judging from the titles, the third trilogy might pick up where book three leaves off (but who knows). Maybe I'll skip the second three and go right to book seven.

Like most science fiction fans, I used to think fantasy was for the birds, or at least for 12-year-old girls, and the cover of Royal Assassin, like that of Assassin's Apprentice, the first one, doesn't do much to suggest otherwise. But this is pretty good stuff, as genre fiction goes. It's carefully plotted, with good castle intrigue, rousing action, etc., but the coolest part is the magic. In science fiction, the convention is that you're supposed to at least try to make the futuristic (or reality-warping, or whatever) stuff plausible, but in fantasy it's easy just to make it up as you go along (it's magic!). Hobb does a good job here. There are two kinds of magical power, the Skill and the Wit. Both are more or less versions of telepathy. King Shrewd is Skilled, as is Prince Verity, as well as a half-dozen younger nobles trained in the Skill for military purposes (beats carrier pigeons). Our hero, Fitz, the bastard son of Verity's now dead older brother Chivalry, being of the Farseer line, has great native Skill abilities, but (oversimplifying a bit), being born on the wrong side of the sheets, hasn't been trained, so his Skill is erratic (this is potentially problematic, but actually well handled).

However, Fitz also has the Wit, which manifests itself as a psychic bond with animals. An interesting aspect of the Wit is that it's seen as shameful, so Fitz has to keep it secret -- also potentially problematic, also handled well: Fitz's wolf companion Nighteyes is a key player, and he's a good character too. (Not telling who else may or may not be Witted -- gack! I started to write Wittge....) Fitz's abilities take on an impressively wide range of tactical significance at various points, like when he's doing both at once (and trying to keep them separate), or when he sees his own surroundings with someone else's sensibility, or when the Wit turns out not to work under certain circumstances (d'oh!). It takes a while to get going, in the first one, but by the second one we're all set up and the plot hums along nicely. I do have a few gripes though. First, the romance angle, while minor, is a bit conventional (Molly's not much more than a cipher, but he loooves her anyway). Second, there are a few infelicities, like going to the well too often (okay, we get it), surprises which do not surprise (including one which has not yet been sprung, but who does she think she's kidding?), and the somewhat odd ending to RA (it looks like she painted herself into a corner; but on the other hand it's hard to judge the end of book 2 before reading book 3). The main problem is the villain, Prince Regal. Most of the main characters are well fleshed out, but Regal's not compellingly evil -- he's just a dipshit (he can't even Skill). Yes, you just want to toss him off a parapet, but so what?

Now I've gone and made it sound so-so. And compared (unfair as that may be) to the uniquely great Patricia McKillip, I guess it is. But I sure do want to know how it all turns out. On the other hand, the reviews of Assassin's Quest (gee, I wonder where he might be going?) at Amazon are mixed, and a few mention a weird and disappointing ending. I'll still read it though. By the way, I saw the title of this post in the convenience store, promoting the greeting cards (but they meant it differently).

Tuesday, May 03, 2005

My loooong day

This Saturday I got up at 5 AM, after a brief nap, and took a bus, waited around for a while, and then took another bus, switching there, after only a brief wait this time, to a third bus, arriving on the campus of Rowan University, in Glassboro, NJ, which, I have discovered, is waaay the hell down there, at around 11:30. The occasion was the spring meeting of the New Jersey Regional Philosophical Association. (Thanks are due to Lucio Privitello for setting it all up!) In the second session (the first occurred while I was on bus 2 and bus 3), I gave a paper called "Pragmatism, Skepticism, and Taoism," which is an abbreviated version of a paper of the same title (let's not go there), which is to be published in The Journal of Chinese Philosophy early next year (look for it!). The longer version has a punchier punch line, but the shorter version makes a decent conference paper.

Here's what I said. Taoism is often presented as a form of skepticism, whether mystical (the ineffable Tao and all that) or linguistic/methodological (basically, words aren't stable enough to capture anything worth calling "knowledge" of the real). On the other hand, pragmatists are united in rejecting skepticism as foisting upon us entirely illegitimate requirements for knowledge, requirements which have lost all contact with the practices which they supposedly govern. So it looks like Taoists and pragmatists are on opposite sides of the issue. Yet surely they have common enemies as well (like most of Western philosophy). How should we think about this?

Now of course there are different kinds of skepticism, and that gets us part way home. Pragmatists are generally concerned with Cartesian skepticism, which presents us with an intolerable paradox: we are committed to seeing ourselves as knowing, but if knowledge is what it seems it must be, then we cannot justifiably claim to know. Taoists, however, see their skepticism (and their use of paradox) as precisely not intolerable but instead as a live philosophical option. We must see knowledge as impossible and learn to give up the aspiration to know. In the West, this kind of skepticism is known as Pyrrhonism.

Both kinds of skeptic oppose the idea, which we should call dogmatism, that we can meet the theoretical demands for justification for knowledge as typically construed, and they are right to do so. (As my commentator correctly pointed out, I do not go into the details of the typical construal, nor how dogmatist arguments fail, nor how the dogmatist conception of knowledge falls into incoherence. My concern here is for the structure of the dialectic.) Pyrrhonism is preferable to Cartesian skepticism in that it provides, if not a positive view, at least a positive spin on a negative view (we cannot know). But why settle for a negative view at all? In my view, the problem with dogmatism is not the idea that we can know, but instead the particular conception of what it is to do so, which is shared by both skeptic and dogmatist, who disagree only on whether we do or not. Another response to the problem is then to retain the idea that we know, but reject the flawed conception of knowledge which makes it look like we don't.

Of course this is not a new idea; but my point is that once we see things this way, we can think of the flawed conception of knowing as attackable from either side -- that is, as either skepticism (holding out for inappropriate levels or forms of justification) or dogmatism (claiming to have achieved same). The Taoist and Pyrrhonist barbs against dogmatists are well-aimed; but once understood we may be able to see them as usefully analogous to those directed against dogmatism (i.e. the objectionable conception of knowledge) in its skeptical form. This positive anti-skepticism is thus not (guilty of) dogmatism, yet endorses our claims to know, now in a theoretically unobjectionable sense (let's call this kind of anti-skepticism cognitivism).

In the second part of the paper, I relate a characteristically crisp Taoist anecdote (from the Outer Chapters -- that is, those attributed to the school rather than the person -- of Zhuangzi), and show how Zhuangzi's supposedly "skeptical" attitude can have just the sort of versatility we're looking for. In this sense, we may not need to think of Taoists as "skeptics" at all. Here Zhuangzi turns a flat-footed skeptical attack back on itself in a deliciously clever way. My "translation" -- not like I know Chinese or anything, it's more of a paraphrase of other translations. I leave the exegesis as an exercise for the reader (not too hard given what I've already said).

Zhuangzi and Huishi, a common interlocutor, are walking above the Hao river on a fine spring day.

Zhuangzi: Look how the fish are swimming: those are some happy fish!

Huishi: You are not a fish. How [or whence] do you know fish's happiness?

Z: You are not me. How do you know that I don't know?

H: I'm not you, so I don't know about you. You're not a fish, so you don't know about fish. Q.E.D. [or: I've run rings round you logically.]

Z: Let's go back to where we started. When you said "how/whence do you know fish's happiness?", you already knew I know it before asking the question. I know it from up above the Hao river.

Take that!

After my paper, I stayed for most of the next session and then took a bus and then another bus and then another bus. Phew!

The next NJRPA meeting is this fall at Felician College. This time, having learned my lesson, before submitting a paper I'm going to find out where that is.

Update: I see they're discussing this very issue (sort of) over at Chalmers's place...