Thursday, February 28, 2008

All hail teh internets

I was reading this thread at Leiter's about the nature of philosophy (glad to see people thinking about that, but I wasn't too jazzed about any of the answers there), and a mention of David Lewis reminded me that I had always thought there was an interesting comparison between Lewis's modal realism and Deleuze's conception of the virtual (unfortunately, I'm not well-versed enough in either to make anything of that myself). So I thought, huh, let's see what Google has to say about "Deleuze virtual David Lewis". First result = a pdf of the fifth paper on this page, entitled "Deleuze and Lewis on the real, the virtual, and the possible". Bingo! Not only that, this paper turns out to be a chapter from this book, the full text of which is available in pdf form at that second link, along with a lot of related material. Now if I just had time to read it ...

Also, random link-following has revealed that another discussion of the a priori is taking place here. Check it out!

Saturday, February 23, 2008

Aspect blindness as ominous portent of Great Cthulhu's imminent return

I see the page 123 meme has come around again. (See here for last time. Which reminds me, I should finish labeling my archived posts.) Daniel (my tagger) has a good one (as in relevant to the blog's content), but (in that respect) it will be hard to beat this one – yikes, that's creepy.

Speaking of creepy, check this out. I don't want to obey the rules this time, since I haven't moved the books from the last book sale off of my desk, and we've already heard something (something more entertaining than p. 123, I can assure you) from The Golden Bough. So this is 123/5 from another book I'm currently reading:
If someone had come then to lead me away to a place of execution I would have gone meekly, without a word, without so much as opening my eyes, just as people who suffer from violent seasickness, if they are crossing the Caspian Sea on a steamer, for instance, will not offer the slightest resistance should someone tell them that they are about to be thrown overboard.
No, it's not H. P. Lovecraft, though this whole passage does sound pretty overheated out of context (that's some simile, isn't it?). I actually prefer the rules of the previous go-round, which told you simply to give the one sentence, letting us guess the context. This time we are told to provide three more sentences, which is a bit less fun; but let's go ahead anyway, and see if we can't give Tom (above) a run for his money.

The cause of our man's malaise, it turns out, is writer's block:
Whatever was going on within me, said Austerlitz, the panic I felt on facing the start of any sentence that must be written, not knowing how I could begin it or indeed any other sentence, soon extended to what is in itself the simpler business of reading, until if I attempted to read a whole page I inevitably fell into a state of the greatest confusion. If language may be regarded as an old city full of streets and squares, nooks and crannies, with some quarters dating from far back in time while others have been torn down, cleaned up, and rebuilt, and with suburbs reaching further and further into the surrounding country, then I was like a man who has been abroad a long time and cannot find his way through this urban sprawl anymore, no longer knows what a bus stop is for, or what a back yard is, or a street junction, an avenue or a bridge. The entire structure of language, the syntactical arrangement of parts of speech, punctuation, conjunctions, and finally even the nouns denoting ordinary objects were all enveloped in impenetrable fog.
Whew! Glad I wasn't reading Proust. But let's go on – he's just getting warmed up:

I could not even understand what I myself had written in the past—perhaps I could understand that least of all. All I could think was that such a sentence only appears to mean something, but in truth is at best a makeshift expedient, a kind of unhealthy growth issuing from our ignorance, something which we use, in the same way as many sea plants and animals use their tentacles, to grope blindly through the darkness enveloping us. The very thing which may usually convey a sense of purposeful intelligence—the exposition of an idea by means of a certain stylistic facility—now seemed to me nothing but an entirely arbitrary or deluded enterprise. I could see no connections anymore, the sentences resolved themselves into a series of separate words, the words into random sets of letters, the letters into disjointed signs, and those signs into a blue-gray trail gleaming silver here and there, excreted and left behind it by some crawling creature, and the sight of it increasingly filled me with feelings of horror and shame.
Wow. Bet you didn't see that coming! (Great book, btw.)

I don't feel like tagging anyone, but if anyone wants to join in, go ahead.

Friday, February 22, 2008

Hacker on Quine

As I mentioned in my last post, this one is about Hacker's paper "Passing By the Naturalistic Turn: On Quine's Cul-de-sac" (which is, again, available on his website). In this paper, unlike (say) Grice & Strawson's defenses of analyticity, Hacker's criticism of Quine takes a particularly broad form. As the title indicates, his subject is "the naturalistic turn," as pointedly opposed to "the a priori methods of traditional philosophy". The paper discusses three aspects of Quinean naturalism: naturalized epistemology, "ontological" naturalism, and, most broadly, "philosophical" naturalism. Hacker defines this last as
the view that [in Quine's words] philosophy is 'not ... an a priori propaedeutic or groundwork for science, but [is] ... continuous with science' [...] In the USA it is widely held that with Quine's rejection of 'the' analytic/synthetic distinction, the possibility of philosophical or conceptual analysis collapses, the possibility of resolving philosophical questions by a priori argument and elucidation is foreclosed, and all good philosophers turn out to be closet scientists. (MS p. 2)
For the record, Hacker believes that regardless of what Quine's arguments show about "the" analytic/synthetic distinction, the philosophical project of "conceptual analysis" is not threatened:
The thought that if there is no distinction between analytic and synthetic propositions, then philosophy must be 'continuous' with science rests on the false supposition that what was thought to distinguish philosophical propositions from scientific ones was their analyticity. That supposition can be challenged in two ways. First, by showing that characteristic propositions that philosophers have advanced are neither analytic nor empirical [but still a priori]. Secondly, by denying that there are any philosophical propositions at all.

Strikingly, the Manifesto of the Vienna Circle, of which Carnap was both an author and signatory, pronounced that ‘the essence of the new scientific world-conception in contrast with traditional philosophy [is that] no special “philosophic assertions” are established, assertions are merely clarified’. [The Scientific Conception of the World: the Vienna Circle (Reidel, Dordrecht, 1973), p. 18] According to this view, the result of good philosophizing is not the production of analytic propositions peculiar to philosophy. Rather it is the clarification of conceptually problematic propositions and the elimination of pseudo-propositions. (p. 3)

[So instead of being "continuous" with science, Hacker claims, philosophy is] categorially distinct from science, both in its methods and its results. The a priori methods of respectable philosophy are wholly distinct from the experimental and hypothetico-deductive methods of the natural sciences, and the results of philosophy logically antecede the empirical discoveries of science. They cannot licitly conflict with the truth of scientific theories – but they may, and sometimes should, demonstrate their lack of sense. (p. 4)
Myself, I never thought that the point about "continuity," about which naturalists make so very much, was that helpful. "Continuity" is cheap. Sure philosophy is "continuous" with science; but it's also "continuous" with art, literature, religion, law, politics, and, I don't know, sports. But I am being perverse here. Let me try instead to be not-perverse.

As previous posts (not just recently but going back to distant 2005) may or may not have made clear, I want 1) to follow Wittgenstein in not only distinguishing philosophy from empirical inquiry (scientific or not), but also seeing it (in some contexts, for certain purposes) as an activity of provoking us into seeing differently what we already knew, by means of (among other things) carefully chosen reminders of same; but at the same time 2) to follow Davidson in pressing Quine to extend and (significantly!) modify the line of thought begun in "Two Dogmas," one which recasts empiricism in a linguistic light and purges it of certain dualisms left over from the positivistic era.

What we've seen so far is that Hacker and Quine are in firm agreement that I can't have it both ways. Either there's a solid "categorical" wall between philosophy and empirical inquiry, or we level that distinction to the ground. It's true that I couldn't have it both of those ways; but I don't want either of 'em. My concern here, as always, is to overcome whatever dualisms are causing confusion; and overcoming a dualism isn't the same thing as obliterating a distinction. In fact, in my terminology, we overcome the dualism only when we can see how the corresponding distinction is still available for use in particular cases (of course, I can reject distinctions as well if I want, for philosophically uncontroversial reasons). So, for example, when Grice & Strawson object to Quine by claiming that the concept of analyticity still has a coherent use, I don't think I need to object. If you want to use the concept to distinguish between "that bachelor is unmarried" and "that bachelor is six feet tall," go right ahead. I just don't think that distinction has the philosophical significance that other people do. In particular, I don't need to use it, or the a priori/a posteriori or necessary/contingent distinctions either, in explaining my own idiosyncratic take on "therapeutic" philosophy. In fact, I find that explanation works better when we follow Davidson in stripping the empiricist platitude (what McDowell calls "minimal empiricism," that it is only through the senses that we obtain knowledge of contingent matters of fact) of its dualistic residue, and meet up again with Wittgenstein on the other side of Quine. (And yes, I used the word "contingent" there – anyone have a problem with that?)

On the other hand, it also seems to me that after the smoke clears and everyone (*cough*) realizes that I am right, each side can make a case that I had been agreeing with that side all along: Hacker can point to the sense in which philosophy on my conception is still a matter of (what he will continue to call) clarifying our concepts, with an eye to dissolving the confusions underlying "metaphysical" questions; while Quine can point to (what he will continue to call) a characteristically "naturalistic" concern (if that naturalism is perhaps more Deweyan than his own) with the overcoming of the conceptual dualisms left over from our Platonic and Cartesian heritage – e.g., those between the related pairs of opposed concepts we have been discussing. Yet it seems to me that neither side can make the sale without giving something up (something important) and thereby approaching what seemed to be its polar opposite.

We've already seen the shape of this idea. On the one side, Hacker's insistence that, as he puts it, "[t]he problems [here, skeptical ones] are purely conceptual ones, and they are to be answered by purely conceptual means" [p. 9, my emphasis] sabotages the anti-dualist content of the anti-skeptical critique with a dualistic emphasis on the "purity" of its form (itself held in place by a corresponding dualism of form and content). On the other, Quine recoils from the dualism of pure abstract a priori and good old-fashioned getting-your-hands-dirty empirical inquiry by eliminating the former entirely in favor of the latter. This insufficient response to one dualism leads inevitably to another: in Quine's case, this means (as Davidson argues) a dualism between conceptual scheme and empirical content, which ultimately (or even proximately!) proves to be pretty much the same as the dualisms (analytic/synthetic, observational/theoretical) Quine was supposed to be showing us how to discard.

We'll leave Davidson for another time (the interpretation business might take a while, though it does come up below), but as my subject here is the Hacker article, let me continue by discussing an area of agreement with Hacker: his dismissal of Quine's naturalized epistemology. (Yet of course even here I do not draw Hacker's moral, exactly.) No one disputes that there is such a thing as empirical psychology, so in one sense the focus of "naturalized epistemology" on resolutely third-person description of the processes of information acquisition by biological organisms is unobjectionable. The problem comes when this project is taken to amount to or replace philosophical investigation (however conceived) of knowledge and related topics.

I'll just mention two points. First (although Hacker doesn't put it quite this way), Quine's naturalistic aversion to "mentalistic" concepts leads him to assimilate the theoretically dangerous (in this sense) first-person case to the more scientifically tractable third-person case – after all, I'm a human being too, so what works for any arbitrary biological organism should work for me as well. This makes the "external world" which is the object of our knowledge something no longer opposed (as in the (overtly) Cartesian case) to something mental, but instead simply the world outside our (equally physical) sensory receptors. But now Hacker wonders about the status of our knowledge of our bodies; or of ourselves, for that matter. Quine is left in a dilemma: "Either I posit my own existence, or I know that I exist without positing or assuming it." As a result (see the article for the details) "[i]ncoherence lurks in these Cartesian shadows, and it is not evident how to extricate Quine from them." [p. 6]

This is (given the difference I've already mentioned) remarkably similar to Davidson's criticism of Quine in "Meaning, Truth, and Evidence":
In general, [Quine] contended, ‘It is our understanding, such as it is, of what lies beyond our surfaces, that shows our evidence for that understanding to be limited to our surfaces’ [The Ways of Paradox, p. 216]. But this is mistaken. The stimulation of sensory receptors is not evidence that a person employs in his judgements concerning his extra-somatic environment, let alone in his scientific judgements. My evidence that there was bread on the table is that there are crumbs left there. That there are crumbs on the table is something I see to be so. But that I see the crumbs is not my evidence that there are crumbs there. Since I can see them, I need no evidence for their presence – it is evident to my senses. That the cones and rods of my retinae fired in a certain pattern is not my evidence for anything – neither for my seeing what I see, nor for what I see, since it is not something of which I normally have any knowledge. For that something is so can be someone’s evidence for something else only if he knows it.
No, wait, that's Hacker again, from later in the paper (p. 13). Here's Davidson, criticizing as "Cartesian" Quine's "proximal" theory of meaning and evidence:
The only perspicuous concept of evidence is the concept of a relation between sentences or beliefs—the concept of evidential support. Unless some beliefs can be chosen on purely subjective grounds as somehow basic, a concept of evidence as the foundation of meaning or knowledge is therefore not available. [...] The causal relations between the world and our beliefs are crucial to meaning not because they supply a special sort of evidence for the speaker who holds the beliefs, but because they are often apparent to others and so form the basis for communication. [p. 58-9]
The relevant stimulus is thus not "the irritation of our sensory surfaces" but instead the rabbit whose appearance prompts the utterance of "gavagai." (See the rest of this key article; it's reprinted in the fifth volume of Davidson's papers, Truth, Language, and History, which I think is now available cheap.) Again, though, this is for reasons concerning the conceptually interconstitutive nature of meaning and belief, not a simple recoil from naturalized epistemology to conceptual analysis. That is, while considering these matters conceptually, as Hacker does, Davidson's argument presents a specific conceptual analysis (if that's what we want to call it) which in its content may be just as fatal to the "purely a priori" as is Quine.

Jumping ahead a bit, we can see on the horizon, even here, a cloud the size of a man's hand. For Davidson's contextually healthy insistence that (as he puts it elsewhere) "only a belief [here, as opposed to sensory stimulations] can be a reason for another belief" can, in other circumstances, manifest itself as a content-threatening coherentism. In "Scheme-content dualism and empiricism" (which I hope we can get to later), McDowell registers puzzlement that Davidson's criticism of Quine is that the latter's conception of empirical content as sensory stimulation (i.e., in its conceptual distance from the "external" world) leads merely to skepticism (not that that's not bad enough!) rather than to an even more disastrous loss of the right to be called "content" at all. (At another level, this same consideration tells against Hacker's insistence that "conceptual analyses" are simply matters of language as opposed to matters of fact, i.e., about their referents in the world.)

Hacker too finds Quine's own response to skeptical worries to be nonchalant. In Quine's view, he says, since we are concerned with knowledge acquisition as a scientific question, "we are free to appeal to scientifically established fact (agreed empirical knowledge) without circularity." (Hacker's comment: "That is mistaken.") The philosophical problem of skepticism is not concerned simply with deciding whether or not we have any knowledge, as if it could be dismissed by deciding that, in fact, we do. As Hacker points out, one form of skepticism arises
from the thought that we have no criterion of truth to judge between sensible appearances. Citing a further appearance, even one apparently ratified by ‘science’, i.e. common experience, will not resolve the puzzlement. Similarly, we have no criterion to judge whether we are awake or asleep, since anything we may come up with as a criterion may itself be part of the content of a dream. So the true sceptic holds that we cannot know whether we are awake or asleep. We are called upon to show that he is wrong and where he has gone wrong. To this enterprise neither common sense nor the sciences can contribute anything. [Again, as cited above, Hacker's conclusion, now in context, is that] [t]he problems [skepticism] raises are purely conceptual ones, and they are to be answered by purely conceptual means – by clarification of the relevant elements of our conceptual scheme. This will show what is awry with the sceptical challenge itself. (p. 8-9)
There's more in this vein, attacking Quine's offhandedly deflationary conceptions of knowledge ("the best we can do is give up the notion of knowledge as a bad job") and belief (beliefs are "dispositions to behave, and these are physiological states"), and "the so-called identity theory of the mind: mental states are states of the body." Hacker's comment on this last is typical ("This too is mistaken"), and here too I agree. (Nor, since you ask, am I happy with Davidson's early approach to the mind-body problem, i.e., anomalous monism. But let's not talk about that today.)

Still, I can't see that Hacker's more extreme conclusions about the relation of science to philosophy are warranted. It's true that we can maintain that firm boundary by definitional fiat. But it's just not true that "the empirical sciences," if that means empirical scientists doing empirical science, cannot possibly contribute to our understanding of (the concept of) knowledge, or even provide a crucial piece of information which allows us to see things in a new way. After all, that's what the philosopher's "reminders" were trying to do too. And if a philosopher's "invention" of an "intermediate case" (for example) can provide the desired understanding (PI §122), then so too might a scientific discovery. All we need here, to avoid the "scientism" Hacker fears, is the idea that even the latter does not solve problems qua discovery, even if it is one – and that just because the philosopher's reminder might have done the same thing even if invented and not discovered.

Thursday, February 21, 2008

Mea culpa, mea methodologica culpa

In my post the other day, I made an interesting slip (if that's how you want to think of it): I suggested that Putnam's claim that analyticity and a priority come apart (so that the first four sections of "Two Dogmas" can be detached from the last two) might be of some use to defenders of analyticity. They might want to argue, I thought, that if your target (qua "metaphysics") is really the a priori/a posteriori distinction, then it might be better not to identify it with the analytic/synthetic one (and get rid of them at the same time), but to distinguish the two, so that we might not simply keep around the presumably now inoffensive (qua non-metaphysical, once so distinguished) notion of analyticity, but also employ it (for the project of linguistic analysis) to combat more metaphysical notions (like the a priori).

But (as I noted in a subsequent comment) that just assumes that the defenders of analyticity might see the a priori as unacceptably metaphysical where analyticity is not. As it turns out, Hacker at least does not. I'll get to all that in a minute. Let me first give a quick and dirty characterization of four similar concepts, not worrying for the moment about whether any one of them can be collapsed into any of the others, or whether there really are any such things.

1. Tautologies are "truths of logic": P or not-P (in classical logic – see the sketch after this list).

2. Analytic sentences are "truths (by virtue) of meaning": That bachelor over there is unmarried.

3. Truths are known a priori when we don't have to go out and look, but can confirm them from the proverbial armchair.

4. Truths are necessary when it is impossible for them to be false (they're "true in all possible worlds").
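
Since we're being quick and dirty anyway, here's an equally quick sketch (in LaTeX, and purely my own illustration – not anything from Hacker or the SEP) of the sense in which a tautology is "true by logic alone": P or not-P comes out true under every assignment of truth values, so no looking at the world is required to confirm it.

% Illustrative truth table for "P or not-P": true under every valuation,
% hence a "truth of logic" in the classical sense.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\begin{array}{c|c|c}
P & \lnot P & P \lor \lnot P \\
\hline
\text{T} & \text{F} & \text{T} \\
\text{F} & \text{T} & \text{T}
\end{array}
\]
\end{document}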

If you like these concepts, you can supply your own examples for the last two. (The SEP article on "A Priori Justification and Knowledge" has as an example of a necessary proposition this one: "all brothers are male," which is not one I would have chosen if I were trying to distinguish necessity from analyticity). Anyways, my point is that however the categories do or do not overlap, the characterization of each has its own typical angle: tautologies have to do with logic, analyticity with meaning, a priority with knowledge (and justification), necessity with ontology (or modality, or in any case metaphysics).

A lot of us want, in some sense or other, to rule out "metaphysics" as nonsense, e.g. a) Ryle, Hacker, etc.; b) Wittgenstein (early and late, on most interpretations); and c) some but not all naturalists. So necessity (or, redundantly, "metaphysical necessity") looks fishy to us. But (as I started to talk about before) in order to combat metaphysics (including but not limited to "necessity"), some of us think we need to hold on to analyticity – a concept which deals, the thought goes, not with the world (i.e., on the other side, qua the object of a "metaphysical" statement, of the "bounds of sense"), but with meaning (which is safely on "this" side). Or so I read Grice & Strawson (I'm trying not to make a straw man here!). For G & S, then, analyticity is both unobjectionable and necessary, uh, required for the project of finally exorcising our metaphysical demons. (I assume, if perhaps I shouldn't, that no one has a problem with (the very idea of) tautologies.)

Where does that leave the a priori? If we assimilate it to necessity (on the one side), then it's a metaphysical notion, worthy of dismissal; and if analyticity is the "least metaphysical" of the three (on this quick and dirty characterization), then if Quine's attack on analyticity goes through, it seems that a fortiori (so to speak) the others go as well. But for "analysis" to be possible, G & S believe, there need to be such things as "analytic" truths. So again, my off-the-cuff suggestion was that if we drew the line between analyticity (needed for the method of "analysis") and the a priori, we could use the former to dismiss the latter (along with the more overtly metaphysical notion of necessity).

But Hacker at least is clear that he does not want to do this. For Hacker, the a priori is the central concept he wants to defend: not as a possibly unacceptably metaphysical subject (i.e. object) of philosophical speculation, but as its constitutive method. It is this and this alone which distinguishes philosophy from empirical inquiry. I should have realized this, as the notion is (as in the SEP article) characteristically applied to the manner in which knowledge is acquired rather than its (semantic) form or (ontological) object, and the main contention of the "conceptual analysis" folks is that, again, philosophy is a matter of the clarification of our concepts as specifically opposed to empirical inquiry; so of course they want to defend the a priori as well as analyticity. (My excuse is that I didn't want to assume the naturalist characterization of the a priori (i.e. as hopelessly unempirical) from the beginning, even, or perhaps especially, because I too am not too keen on the notion, if for somewhat different reasons.)

For an interesting account of Hacker's attitude toward Quine, I recommend his paper "Passing By the Naturalistic Turn: On Quine's Cul-de-sac" (available on his website). The main target is Quine's "naturalized epistemology" (so some of what Hacker says is perfectly congenial), and in attacking it Hacker commits himself hook, line, and sinker, uh, wholeheartedly to a full-on Manichean dualism of pure a priori conceptual analysis, on the one hand, and not-at-all-philosophical empirical inquiry on the other. Picking up Quine's gauntlet, he begins:
There has been a naturalistic turn away from the a priori methods of traditional philosophy to a conception of philosophy as continuous with natural science.
and ends:
This imaginary science [naturalized epistemology] is no substitute for epistemology – it is a philosophical cul-de-sac. It could shed no light on the nature of knowledge, its possible extent, its categorially distinct kinds, its relation to belief and justification, and its forms of certainty. [...] For philosophy is neither continuous with existing science, nor continuous with an imaginary future science. Whatever the post-Quinean status of analyticity may be, the status of philosophy as an a priori conceptual discipline concerned with the elucidation of our conceptual scheme and the resolution of conceptual confusions is in no way affected by Quine's philosophy.
Snap! That last sentence answers our (my) question about priorities (no pun intended) pretty clearly, I'd say. Let's come back to this article; it's got a nice mix of right and wrong, uh, agreement and disagreement between Hacker and me (and Quine with both of us).

Monday, February 18, 2008

Laugh riot

The latest edition of the Philosophers' Carnival focuses on comedy. It'll be there all week; try the fish.

Sunday, February 17, 2008

Not John Gielgud

I was preparing a post on the analytic/synthetic business we have been discussing (okay, so far it's other people, here, here, and here), and (curious as ever) I followed a trail of links to Wikipedia's article on "Two Dogmas," which I basically just glanced at (looks okay), but there's an interesting bit at the end which no one has said anything about yet:
In his book Philosophical Analysis in the Twentieth Century, Volume 1: The Dawn of Analysis, Scott Soames (pp. 360-361) has pointed out that Quine's circularity argument needs two of the logical positivists' central theses to be effective:
All necessary (and all a priori) truths are analytic.

Analyticity is needed to explain and legitimate necessity.
It is only when these two theses are accepted that Quine's argument holds. It is not a problem that the notion of necessity is presupposed by the notion of analyticity if necessity can be explained without analyticity. According to Soames, both theses were accepted by most philosophers when Quine published "Two Dogmas". Today, however, Soames holds both statements to be antiquated.
Upon reading this, I had two thoughts in quick succession, and wouldn't you know, they're in tension with each other. The first one was: I hardly think the defenders of analyticity (that is, those who, like our friend N. N., see Quine's attack as threatening the philosophical project of conceptual analysis, whether or not they see the latter as constitutive of philosophy itself), or anyone else unimpressed by Kripke for that matter, should welcome criticism of Quine's argument along these lines. I can't see any such philosopher saying: "see, you can too have analyticity – all you have to do is explain it in terms of an independently established notion of metaphysical necessity!" Surely the whole point of "conceptual analysis" was to put "metaphysics" out of business. So, no help there, right?

The second thought I had was this. Of course the contemporary naturalist/empiricist line of thought, in which "Two Dogmas" was an important early move, is also determined to put metaphysics out of business. But in so doing, it seems to assimilate philosophy into the empirical sciences, not as itself an empirical discipline, but as concerned solely with making sure that science dots the i's and crosses the t's in the proper way (once out of the lab and writing up the results). So if you put all of your anti-metaphysical eggs into the naturalist basket, by rejecting the distinction underlying the competing strategy of conceptual analysis, that means that when the naturalists (if not the empiricists) then turn around and reinstate metaphysics, you have no recourse.

Naturally they'll put a doily on that monstrosity by calling it a "scientific" metaphysics (whatever that means); but when it's accompanied, even justified, by a swipe at "linguistic philosophy" for neglecting metaphysics – well, that's going to be pretty galling. The reason my two thoughts are in (mild) tension with each other is that while the first implies that Soames's criticism of the argument of "Two Dogmas" is of no help to the linguistic analyst, the second thought leads to a different conclusion. For now that philosopher can resist the naturalistic line of thought right at the beginning: if the point of "Two Dogmas" was to deprive metaphysical pseudo-inquiry of the purely non-empirical conceptual space in which it was supposed to operate, well then the naturalistic revival of metaphysics shows that it failed to follow through on its promises. This means that (given the original choice between naturalism and conceptual analysis) as far as unmasking metaphysics as nonsense is concerned, the linguistic strategy is the only game in town after all.

These (quick) thoughts, you will notice, elided two complications, which I should at least mention. First, I exempted properly empiricist naturalism from the accusation of reversion to metaphysics. But it's not clear to me that its proponents will be able to fend off such accusations when they come from fellow naturalists. (My own objections to these positions are of another order entirely, so when naturalists trade accusations of "reversion to/neglect of metaphysics," I don't need to take sides.) For the second elision, let's return to Wikipedia's article:
In "'Two Dogmas' revisited", Hilary Putnam argues that Quine is attacking two different notions. Analytic truth defined as a true statement derivable from a tautology by putting synonyms for synonyms [is] near Kant's account of analytic truth as a truth whose negation is a contradistinction. Analytic truth defined as a truth confirmed no matter what[,] however, is closer to one of the traditional accounts of a prioricity. While the first four sections of Quine's paper concern analyticity, the last two concern a priority. Putnam considers the argument in the two last sections as independent of the first four, and at the same time as Putnam criticizes Quine, he also emphasizes his historical importance as the first top rank philosopher to both reject the notion of apriority and sketch a methodology without it.
It does seem that the a priori, rather than analyticity, is the key notion here, and perhaps the defenders of Grice and Strawson would like to argue that the way to debunk the former (as I think we may construe their project) is to keep the latter rather than running the two together and discarding both.

Saturday, February 16, 2008

imdb widget updated

The widget label is self-explanatory; for the first 15 see here. A few brief comments on these films, most of which were good enough to make the other list if there had been room:

1. Ballad of a Soldier

An excellent companion to The Cranes are Flying from the other list. Let's all lobby Criterion to bring out more Russian stuff.

2. Cafe Lumiere

Hou Hsiao-Hsien in Japan. Nice.

3. Triad Election

Hard-boiled HK mob film from Johnnie To. This is actually the second one (the first one's just called Election), but it stands on its own. Might as well start with the first one, though, which is also good.

4. Offside

Sly social commentary, Iran-style. (And you thought they didn't allow such things over there.) Interesting interview with director Jafar Panahi on the DVD.

5. Black Book

Just missed the last cut. Carice van Houten is spellbinding as a Dutch resistance fighter in WWII. (I saw a lot of WWII films last year!)

6. Army of Shadows

Here's another one, from France this time. Very different tone though (as one might expect from Melville). Strange poster on the widget!

7. Stray Dog

This one's about a WWII veteran. Early Kurosawa, with Toshiro Mifune.

8. Leaves From Satan's Book

Dreyer's answer to Intolerance.

9. Aura

Falls down on a key plot point, but it's got a nice noirish mood and Ricardo Darin's incandescent star power.

10. The Science of Sleep

Gael Garcia Bernal as a shy nerd with an active fantasy life. Quirky and charming (but don't let that scare you off).

11. Volver

I never liked Almodovar's early films, but recently he's been very consistent. Someone at one of the local libraries likes Penelope Cruz a lot!

12. Tarnation

A son's video tribute to his dying mother. Some reviewers revile this film for its self-indulgent narcissism, but that's what the film is about: a self-indulgent narcissist's inability to keep from moving the subject back to himself, even when he's supposedly making a film about his mother. Very creative, if also fracked up.

13. The Fallen Idol

Possibly annoying if you let the kid get to you, but Ralph Richardson is great. Oblique spoiler: someone gets Gettiered but good.

14. Richard III

Olivier as the not-that-misshapen anti-hero. Some textual liberties are taken; I swear I heard Richard say something early on about "murd'rous Machiavel," which would be somewhat anachronistic (not to Shakespeare, but to Richard at least). Richardson's in this one too (stealing every scene he's in).

15. Rebels of the Neon God

Early Tsai Ming-Liang.

Friday, February 15, 2008

Postmodern captain

Recent visitors to this site will notice the lack of activity here, but I haven't been entirely absent from the 'sphere, as we have been having a rousing conversation about Philosophical Investigations §122, among other things, at other locales (see here, here, here, here, and here). In general, if you drop by here looking for me and I'm not here, I may be over at these or a few other places (virtually speaking). Still, I am remiss in not contributing anything more substantive than a few comments from the virtual peanut gallery. We'll get to those things soon enough, I hope, but for now here's an interesting tidbit from a book I read recently which was made up (almost?) entirely of untruths.

Post Captain is the second book in Patrick O'Brian's series of naval historical novels set in the Napoleonic wars era (thanks to the Crooked Timber crew for the recommendation). Toward the end, Dr. Stephen Maturin is at the opera, but he finds it "poor thin pompous overblown stuff" and cannot enjoy it:


A charming harp came up through the strings, two harps running up and down, an amiable warbling. Signifying nothing, sure; but how pleasant to hear them. Pleasant, oh certainly it was pleasant [...]; so why was his heart oppressed, filled with an anxious foreboding, a dread of something imminent that he could not define? That arch girl posturing upon the stage had a sweet, true little voice; she was as pretty as God and art could make her; and he took no pleasure in it. His hands were sweating.

A foolish German had said that man thought in words. It was totally false; a pernicious doctrine; the thought flashed into being in a hundred simultaneous forms, with a thousand associations, and the speaking mind selected one, forming it grossly into the inadequate symbols of words, inadequate because common to disparate situations – admitted to be inadequate for vast regions of expression, since for them there were the parallel languages of music and painting. Words were not called for in many or indeed most forms of thought: Mozart certainly thought in terms of music. He himself at this moment was thinking in terms of scent.

[Suddenly, from his box Stephen espies, in the crowd below, the woman whom he has been chasing for more than four hundred pages, with little success – only just enough, in fact, to maximize his frustration.]

Stephen watched with no particular emotion but with extreme accuracy. He had noted the great leap of his heart at the first moment and the disorder in his breathing, and he noted too that this had no effect upon his powers of observation. He must in fact have been aware of her presence from the first: it was her scent that was running in his mind before the curtain fell; it was in connection with her that he had reflected upon these harps.
I find this (like a lot of things in good literature, now that I think of it) phenomenologically astute but philosophically naive. Certainly the idea of "thinking [only] in words" suggests a crude picture indeed, of the sort (rightly or wrongly) attributed to artificial intelligence types – and which provokes phenomenologically-motivated accusations of a "myth of the mental" (e.g. in Dreyfus) and calls for recognition of "non-conceptual [mental] content" (not, as I understand it, to be confused with "qualia" – but maybe I'm the one who is confused).

Surely, we feel, our minds – and our experiences – contain more than words. That our hearts leap and our breaths catch, or that our (verbal) thoughts are affected, subtly or otherwise, by bodily phenomena and multifarious subconscious associations cannot be denied. The faculty of language – the "speaking mind" – is only one of many contributors to the experiential makeup of our conscious selves. It is natural to reach, as we all do at times, for an image of trying, and often failing, to "put into words" something which must perforce exist "outside" language but which is still part of our experience. Still, I would resist the idea that there are "thoughts" antecedent to their linguistic manifestations, or that music and other arts are "parallel languages" which can communicate thoughts which (what we would have to call, now non-redundantly) "verbal language" cannot. (Or as my undergrad professor put it, when I spoke of the sort of experiences Stephen here discusses: "why do you want to call these things 'thoughts'"?)

Let's look first at the idea that words are (language is) "inadequate because common to disparate situations." This has been a common refrain in philosophy from the Greeks through Derrida. Here's another German on the matter, writing some seventy years after Stephen's night at the opera, but one hundred years before the real-life author of Stephen's ruminations:
Every word immediately becomes a concept, inasmuch as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates in our equating what is unequal. No leaf ever wholly equals another, and the concept "leaf" is formed through an arbitrary abstraction from these individual differences, through forgetting the distinctions; and now it gives rise to the idea that in nature there might be something besides the leaves which would be "leaf"—some kind of original form after which all leaves have been woven, marked, copied, colored, curled, and painted, but by unskilled hands, so that no copy turned out to be a correct, reliable, and faithful image of the original form. ("On Truth and Lie in an Extra-Moral Sense", The Portable Nietzsche, p. 46)
Nietzsche scholars like Clark hurry to point out that Nietzsche later abandoned his youthful skepticism about truth (the oft-quoted subsequent paragraph in "Truth and Lie" tells us that "truths are illusions about which one has forgotten that this is what they are"), reminding us that this essay was a) published only posthumously, and b) written some 15 years earlier than his most mature writings (an eternity in Nietzsche-time). Still, even here the point is not to accept but to reject the idea that the origin of our concepts means that there is some more perfect reality which (due to their humble origins) they necessarily fail to capture. This is an anti-skeptical point, one which Nietzsche retained throughout his career.

But let's turn back to the properly skeptical point with which this anti-skeptical point is easily conflated (one which later Nietzsche does reject). Even if the Platonic Leaf is an illusion, what about those "unique and wholly individualized original experiences" from which our concept of "leaf" is abstracted? If our concepts necessarily fail to capture (not the pure abstraction, but instead) these individual differences, then it seems that here too our language is inadequate. Yet it is only so if one has a distorted conception of what it is that language is supposed to do, which is not to duplicate individual experiences but instead to express beliefs (and other "mental states" like emotions) and communicate truths about the world (often, which may be the source of some of the confusion here, both at once). Even when the problem is not that language fails (in its necessary finitude) to achieve pure generality, but the seemingly opposite point that it fails (in its necessary generality) to achieve pure specificity, the result is a fatal temptation toward Platonic (or Cartesian) abstraction and reification, and a corresponding anxiety (or conviction!) that language necessarily conceals rather than reveals (or communicates).

Here's a leaf. Is it not a leaf? We just agreed that it is. So "this is a leaf" is true, and not an "illusion." But this other leaf is also a leaf; so "this is a leaf" fails to capture the specific "leafiness" of either of them. True enough, much has been left out. But so what? (What should we say, "this is a leaf, but it's an illusion to believe that it is"? Hogwash.) So say more: this particular leaf is green, small, smooth, wet. These too are merely words, generalized from many greens and smalls and wets. Even the precise hue (say, 7CFC00), the size in microns, the precise amount of water on its surface, everything I can possibly "put into words," will not get us across that metaphysical gap (once so construed) between universal predicates and the irreducibly individual thing. You cannot describe to me – language cannot capture – the leaf-in-itself.

Okay, but this wasn't really our problem. Your concern in speaking to me was not after all with a posited leaf-beyond-experience (whether an ideal Platonic Leaf or a specific Cartesian leaf-in-itself) but instead your own experience, communication of which need not require such fictions as leaves-in-themselves. Here too, though, the same problem seems to arise. You had some experience which you want to communicate to me. Of course I can't be you, so I can't have your experience. Yet it still can seem as if even though I cannot be you, there is some thing, a (specific) "experience" of yours (distinct from you, that you are "having", such that that identical experiencer – you – then go on to "have" another such, etc.) which your words necessarily (alas) fail to communicate to me. That such things cannot be transferred whole from your inner theater to mine isn't the fault of language. Even if you handed me the very leaf in question, to look at and touch for myself, I still wouldn't have your experience, even if the one I did have was thereby very much more "like yours" than the one I had merely listening to you describe it. This "failure" just doesn't have the philosophical significance it can seem to have: that there is, like the leaf-in-itself, an experience-in-itself which can be conceptually detached from your having it, and which I may thereby "fail" to have due to imperfections in the medium of transmission. In my view, once we've established that I can't be you (or, again, that words aren't "the same as" the things which they denote or describe), that turns out to be the only metaphysically relevant consideration – which as a triviality cannot support the philosophical weight put on it by the sort of realism which results in the sort of skepticism in question, which sees language as "cutting us off from reality" (or each other) rather than opening it up to us.

It is of course true (another triviality) that music or painting can evoke experiences which language cannot – that there are qualitative differences which, as subjects, we automatically project back onto their "objects" qua experience. We naturally speak here too of "expression"; yet there is no reason to think of these arts as "parallel languages," or languages at all. I liked Garry Hagberg's book Art as Language, which goes into these matters very clearly indeed (as the Amazon reviewer rightly notes), so I won't go into them here. I would just suggest that "expression" (whether artistic or linguistic) has connotations not simply of communication, but also of manifestation or even creation, which can help suppress the urge to posit some distinct entity which it can fail to copy adequately – while yet leaving in place the triviality that there are plenty of ways in which an "expression" (of something) can indeed fail (and corresponding locutions, such as Mozart's musical "thought").

As for the idea of "thinking in terms of scent," I imagine there's a lot about that in this book, the movie version of which I just saw last week. Interestingly, while I imagine some people reacted to the story's move, toward the end, from highly implausible (even in cinematic terms) to completely impossible, with an annoyed "oh, come on," I found that the move actually relieved that pressure rather than increasing it to intolerable levels – as now it became easier to see the story as purely allegorical fantasy (which of course it always was) rather than an attempt to make (still fanciful) sense on the literal level. (I speak abstractly in order to avoid spoilage.) The film (by Tom Tykwer of Lola rennt fame) renders the experience of scent in visual terms very well (although there were a few too many shots of sniffing noses), and I imagine the book's appeal depends on its success in the corresponding rendering in verbal terms.

Monday, February 04, 2008