
Saturday, September 05, 2009

Why Davidson is not Humpty Dumpty

I promised in the comments to the other post to say something about Davidson's argument in "Nice Derangement," which is of course the Ursprung of all this talk about rejecting the idea of "linguistic norms." (In the context of that discussion everybody is clear on this, except possibly me, but let me just tie up that loose end.) Unlike Bilgrami, Davidson does not direct his argument against Kripke and Burge in particular (and McDowell's somewhat differently focused criticism of same). Instead Davidson simply argues that we should not base our conception of language use, and thus of meaning, on the concept of convention, i.e., as manifested in linguistic rules which pre-exist and thereby determine the meanings of particular utterances on particular occasions, as if they were, in Davidson's dismissive terms, "portable interpreting machines."

Instead, the fundamental idea is that language is used above all to communicate (i.e. rather than to denote or represent, which it does in only a derivative manner). Similar ideas are already present in earlier Davidson (q.v. "Reality Without Reference" and "Communication and Convention," both in Inquiries), but here he spells out the implications more provocatively. Indeed, in asserting a primary role for the intentions of the speaker in determining meaning, he provokes suspicions of "internalism" and downright semantic nihilism.

The specific thesis he rejects is that "[t]he systematic knowledge or competence of the speaker or interpreter is learned in advance of occasions of interpretation and is conventional in character." Okay, that's pretty much what I said above. But later on, he elaborates: "[i]n principle communication does not demand that any two people speak the same language. What must be shared is the interpreter's and the speaker's understanding of the speaker's words." [438] Now there are some constraints on this sharing, some of which involve what can count as a possible communicative intention of the speaker in the given situation (here leaning on Grice's analysis of same); and it is these constraints which separate Davidson's account from nihilism and/or internalism.

Here Davidson points to Keith Donnellan's previous (albeit somewhat differently focused – Davidson explains the difference, but I will skip that part) discussion of similar matters. Alfred MacKay had accused Donnellan of Humpty Dumptyism ("When *I* use a word, it means just what I choose it to mean"), and in reply, Donnellan "explains that intentions are connected with expectations and that you cannot intend to accomplish something by a certain means unless you believe or expect that the means will, or at least could, lead to the desired outcome. A speaker cannot, therefore, intend to mean something by what he says unless he believes his audience will interpret his words as he intends (the Gricean circle)." As quoted by Davidson, Donnellan says:
If I were to end this reply to MacKay with the sentence 'There's glory for you' I would be guilty of arrogance and, no doubt, of overestimating the strength of what I have said, but given the background I do not think I could be accused of saying something unintelligible. I would be understood, and would I not have meant by 'glory' 'a nice knockdown argument'?
Davidson approves of this reply (and then explains a disagreement I have here elided). Okay, let me just quote the money paragraphs and then I'll stop.
Humpty Dumpty is out of it. He cannot mean what he says because he knows that 'There's glory for you' cannot be interpreted by Alice as meaning 'There's a nice knockdown argument for you.' We know he knows this because Alice says 'I don't know what you mean by "glory"', and Humpty Dumpty retorts, 'Of course you don't – till I tell you.' It is Mrs Malaprop and Donnellan who interest me; Mrs Malaprop because she gets away with it without even trying or knowing, and Donnellan because he gets away with it on purpose.

Here is what I mean by 'getting away with it': the interpreter comes to the occasion of utterance armed with a theory that tells him (or so he believes) what an arbitrary utterance of the speaker means. The speaker then says something with the intention that it will be interpreted in a certain way, and the expectation that it will be so interpreted. In fact this way is not provided for by the interpreter's theory. But the speaker is nevertheless understood; the interpreter adjusts his theory so that it yields the speaker's intended interpretation. The speaker has 'gotten away with it.' The speaker may or may not (Donellan, Mrs Malaprop) know that he has got away with anything; the interpreter may or may not know that the speaker intended to get away with anything. What is common to the cases is that the speaker expects to be, and is, interpreted as the speaker intended although the interpreter did not have a correct theory in advance. [440]
One more thing. I think that what this means is that when Wittgenstein asks us to consider whether I can say "bububu" and mean "if it does not rain I will go for a walk," the answer is yes, I can; but only after what he elsewhere calls "stage-setting." Before that, not so much (and certainly not by a Humpty Dumpty-like act of, say, inner ostension).

Thursday, September 03, 2009

Bilgrami's critique of the Platonistic urge (or: why reject the very idea of semantic normativity?)

The previous post was a bit of a bear, wasn't it. (By the way, if you liked it, you may vote for it here - once they work out the bugs, that is.) Let's back up a bit, and see if we can't get clearer on the various players. The dialectic here is quite complicated, with strange bedfellows all over the place, and a number of distinct yet overlapping positions on the issue(s). (When we get back to vagueness, we'll see that there too the teams have a somewhat unusual alignment, which is what provoked Marinus's remarks about Wittgenstein in the LL post.)

Why would anyone deny that there were linguistic norms? If there were no such norms, it is easy to assume, there would be no constraint on meaning. An agent could mean anything by anything, simply by intending to do so (that is, whatever "norms" constrain his meanings – if we still want to call them "norms" at all – are merely "internal"). But maybe this is correct. This position is called "internalism" or "individualism," and its Cartesian flavor is undeniable, thus attracting criticism from all across the philosophical spectrum (including from closet or residual Cartesians themselves). Rejecting internalism seems to require that there be external linguistic norms, and thus that I can make errors in meaning as determined by others.

But what is it to make an "error in meaning"? On one view, whenever I refer to an ocelot as a lynx, I make an error in judgment (i.e. get the world wrong/say something false), and in so doing, I misuse the word "lynx," which should only be applied to lynxes – I thus "use the word wrongly." What determines that this is the "wrong" use of the word? Answer: linguistic rules ("norms"). Among those who take this view, there is some variation about what constitutes the linguistic norms in question: obviously other English speakers have something to do with it, but there is also some role to be played by ocelots and lynxes themselves (what role this is exactly will depend on how you feel about natural kinds and Kripkean metaphysical realism more generally).

Now we can respond to this conception of meaning errors in a few ways. A natural way is to object to a conflation between two cases: 1) using a word "wrongly" (coming out with the wrong fusebox), and 2) using it correctly to express what happens to be a false belief (I perfectly correctly characterize how things appear to me, but as it happens I am mistaken). In one sense, Davidsonians will be happy to make this distinction, as one of their (our) main concerns here is the holism of belief and meaning: that in attributing the two together, we have some interpretive leeway (or even indeterminacy) in saying what falls under what. This doesn't mean there are *no* constraints on interpretation – that someone's meaning may swing free entirely from what both subject and interpreter see as observable evidence for it; it just means that we have a better sense of how content is attributed in interpretation than do those with non-Davidsonian accounts of meaning.

However, even after distinguishing in this way, the question remains how to characterize the first case (and the sense of "correctly" in the second). We are nowhere near out of the woods. It can be a further Davidsonian point that we fall directly back into the Platonistic soup if in making this distinction we carve out a realm of pure or sui generis semantic normativity, or in other words, those same "linguistic norms." On this view, we need nothing so robust (or theoretically questionable) as linguistic normativity so construed to account for the actual constraints we place on meaning attribution. We can perfectly well, for example, think of such "mistakes" as prudential ones, in which the sound I make is inconveniently chosen to convey my perfectly determinate (and indeed often perfectly intelligible) meaning – a prudential "error," not a contravening of "linguistic norms" in the disputed sense.

This is the point Bilgrami is making in "Norms and Meaning," in which he criticizes Kripke and Burge, not for opposing "internalism" or "individualism" per se, but for not getting at the root of the problem, and thus perpetuating it in a new form. In hurrying to explain my attempted moderation of Bilgrami's rejection of semantic normativity, I kind of skipped over his reasons for rejecting it in the first place. So let me go back and say more about that.

In Kripke's and Burge's discussions, the "individualist" is pretty much someone with a "private language," someone whose inner intentions determine his meanings no matter what other people say, which is why the issue comes up in Kripke's book on Wittgenstein's rule-following considerations. Naturally Wittgenstein rejects this view; and so does Kripke, who takes the RFC (whether or not Wittgenstein himself does so) to require an appeal to a "social theory of meaning" to save us from the meaning-skeptical paradoxes to which "individualism" so construed famously leads. On Kripke's picture, if we are to account for meaning at all, *something* must provide the norms manifested in linguistic rules. In distancing itself from mere linguistic nihilism, individualism promises to locate the source of normativity in the speaker's linguistic dispositions. However, as the paradoxes show, such dispositions cannot do this, as they are compatible with *any* subsequent behavior. Nor, Kripke argues (following Wittgenstein at least this far), can we find the source in Platonistic "rigid rails" or whatnot; so "[w]hat then can the source of the desired normativity be but the social element?" ("Norms and Meaning," p. 126). The result is Kripke's "skeptical solution" to the meaning-skeptical paradox, an appeal to the dispositions of the surrounding linguistic community.

[image: Akeel in a typical pose]

However, Bilgrami rejects this forced choice (dare I say "dualism"?) between anything-goes-if-I-say-so "individualism" on the one hand and external linguistic normativity on the other, such that we must locate a source for it in this way. Bilgrami frequently qualifies his criticism of Kripke and Burge, rejecting normativity "in the sense demanded by" K/B, or "such" normativity. (This is what encourages me to risk re-expansion of the concept into the semantic realm, or that is, recognizing a properly semantic component to our normative commitments.) Yet he is determined to pull the objectionable picture out by the roots, and takes so doing to require a stronger line against "linguistic norms" than had seemed feasible before Davidson's criticism.

Bilgrami's diagnosis goes like this:
[I]n rejecting the abstractions and metaphor of [platonistic] Meanings and 'rails' on the one hand and the internalistic mentalism of inner facts of the matter on the other, one has not yet succeeded in rejecting what in Platonism underlies the search for these things being rejected. Without rejecting this deeper urge, one will no doubt find another such thing to gratify the Platonist urge and indeed one has found it in society. This deeper urge underlying Platonism is precisely the drive to see concepts and terms as governed by such normativity. (p. 127)
John McDowell has of course also criticized Kripke's diagnosis and attempted solution to the paradoxes. In particular, McDowell too criticizes Kripke on his own terms – that his "skeptical solution," locating semantic norms in community practice, fails to do what it promises. And he too wants to dissolve the problem and allay the skeptical anxiety, just as does Bilgrami, only without giving up semantic normativity entirely. It is in trying to make sense of McDowell's approach not only to this issue, but to normativity generally (especially in response to Davidson), that I am motivated to moderate Bilgrami's flat rejection of semantic normativity in the way I did the other day.

But let's see what Bilgrami says about McDowell. According to Bilgrami, McDowell says
that the way Kripke brings in the social is just an extension of the normativity-denying position of the dispositionalist because all Kripke does is bring in the dispositions of other members of society to account for an individual's meanings. So if he says something was missing in the individual dispositionalist account in the first place, then it will be missing in the social extension as well. This criticism seems to me to be fair enough, if one accepts the normativity demand as one finds it in Kripke and as one finds it in these others who think that Kripke has himself failed to live up to that demand. But I do not accept the demand in the first place. So mine is a much more fundamental criticism of Kripke. In my view, one should repudiate the 'Platonism' altogether (even in its ersatz forms) and in so doing give notions like meaning and concepts a much lower profile, whereby it does not matter very much that one is not able to say [referring here to the familiar examples in Kripke and Burge] that KWert is making a [properly semantic, or as Bilgrami puts it, "intrinsic lexical"] mistake on January 1st 1990 or that Burge's protagonist has all along made a mistake when he applies the term to a condition in his thigh. [...] [I]t makes no difference to anything at all, which answer we give. His behaviour is equally well explained no matter what we say. There is no problem, skeptical or otherwise. (p. 128)
Because of the holism of belief and meaning, we can attribute either concept, adjusting the belief component accordingly, and equally well explain the agent's behavior, which is after all the constitutive function of interpretation in the first place. This is the sense in which Bilgrami's is a Davidsonian view (and in response to this article, Davidson agrees heartily).

In this sense, again, I have no problem with this view. However, I think that here too (that is, w/r/t this view itself) we have other options in explaining the anti-Platonism we are after, options which leave the concept (or again, *a* concept) of "properly semantic normativity" in place. I was no doubt remiss in the previous post not to stress that it is only after this point has been understood that we can safely go on and try to accommodate McDowell's way of talking, with its characteristic stress on normative rather than "merely causal" (or again, descriptive) relations between mental contents and the world they are about (readers of Mind and World will recognize the analogous criticism of Davidson there). When we do this we can see how McDowell's criticism can be properly directed. Davidson is not making a "Platonistic" error, as Kripke et al. are, but in recoiling to a picture devoid of properly semantic normativity (properly construed), he misses a chance to tell a better story about normative commitment generally speaking, and thus recover gracefully from the error he really does make, the one which results in his "coherentism," dismissed by McDowell as "frictionless spinning in a void" (again, see Mind and World, esp. ch. 1-2). I hope that helps place the other post in dialectical space (if not actually vindicate what I say there, and I still have some more fast talking to do on that score as well).

Okay, that's enough for Bilgrami. Next time: Davidson.

Monday, August 31, 2009

Can words be used incorrectly?

The other day at Language Log there was a post directing us to a philosophically themed Dinosaur Comic, where T-Rex jubilantly schools philosophers with his deflationary solution to the sorites paradox. I have a number of comments about that, but for now I want to address one aspect of one of the comments there (as you'll see, that will be plenty for today). The commenter, Marinus, after giving an excellent explanation of why the sorites paradox is indeed a real problem in philosophy, suggests that some philosophers, Wittgenstein among them, are committed to the idea that it is impossible for anyone to use a word incorrectly. Marinus does not mention any other such philosophers, and the attribution to Wittgenstein seems like a stretch, or is at least not obvious.

Putting Wittgenstein to one side for now, I can attest that Akeel Bilgrami, following Davidson, has stated explicitly that "normativity is irrelevant to the meaning of words" ("Norms and Meaning"). Here, however, I would like to give some reasons why such talk of using words wrongly is perfectly natural, and, more importantly, can be harmless even by Davidsonian lights. That is, it will seem at first that in helping myself to properly semantic normative considerations, I invite the Platonism which both Davidson and Bilgrami correctly reject. My task will be to show, or at least suggest, that in so doing I issue no such invitation. (Bilgrami actually does qualify his claims somewhat, but not in the way I would prefer. I'll say a bit about this at the end.)

[image: an ocelot]

Lynxes and ocelots are members of the cat family. They're bigger and wilder than domestic cats, but smaller than the big cats (tigers, etc.). Other than that I get a little fuzzy on the details. I think ocelots might be a bit smaller than lynxes, and I think lynxes have little tufts on the ends of their ears. As you might expect, though, cat family classification is somewhat more complicated than I make it appear here, and as it turns out, lynxes and ocelots aren't really that similar. I don't think that affects the following argument, as our question is still what to say if someone were to confuse them: is the mistake epistemic, semantic, both, indeterminate, or something else? If the example bothers you, ignore the kitty pictures and think about elms and beeches instead.

With that in mind, let's say I work at a zoo (a real zoo, that is). I've spent the morning admitting an ocelot: having it checked for the standard ocelot parasites, feeding it ocelot food, cleaning out the ocelot cage, etc. At lunch the conversation centers around lynxes and ocelots, and I mention that the lynx I admitted today had some interesting markings. You've seen the animal in question too – maybe you received the delivery and glanced in the cage before signing – and you reply: "Lynx? You mean ocelot, don't you?" My response: "Right, the ocelot." In other words, I don't bat an eye, but simply acknowledge what we would call a slip of the lip. My belief is fine: I knew all along it was an ocelot – that's why I did all those ocelot-specific things – but just now I made a semantic error. I simply came out, as does Michael Palin uncontrollably in a certain Python skit, with the wrong fusebox.

In particular, I attempted to express my (true) belief that the cat was an ocelot, but in so doing, I misused the word "lynx," which after all means lynx, not ocelot, and therefore cannot (or so it seems; I consider a qualification below) be used correctly in expressing beliefs – true or false – about ocelots rather than lynxes.

[image: a lynx]

So far, so good. However, I can also make a mistake about the cat, rather than the word. In order to do so, however, I have to use the (mistaken) word correctly in order to express my false belief. Let's say I simply made a cursory examination (before I had my morning caffeine?) and handed the "lynx" over to my assistant for the admission procedures I myself performed in the previous example, only this time it's cleaning out the lynx cage, etc. Again at lunch I speak of the "lynx's" markings, and again your reaction is "Lynx? You mean ocelot, don't you?" Now my response may very well be to frown, and say something like: "My goodness, you're right, it was an ocelot! I better get Terry to clean out the ocelot cage. After he's finished with the lynx cage, anyway."

Again, in referring to the "lynx" as I did, I expressed my mistaken belief that the cat was a lynx; but in order to do that by so speaking, I must have been using the word "lynx" correctly – to refer to lynxes, which the cat in question was not.


Now for some clarifications. My point here is certainly not that we must speak in this way – that the first really is a case of properly semantic error as opposed to the second, a clear case of properly doxastic error. So already some peace can be made, as I take the Davidsonian point to be mainly that there can be nothing which forces us to speak this way. It's just that the natural way to make that point is to make sure to speak the other way instead, referring in all cases to doxastic error only, rather than semantic error; and I grant in advance that even this example can be spun that way if you like, as again no force was intended. I simply think there's no real reason not to speak of semantic error in particular cases if we so prefer, and that it can in fact be salutary to remind ourselves that that possibility is open to us.

[image: another ocelot]

That we can construe each example in either way is further suggested by the qualification I promised above; for there is a sense in which I can indeed refer to ocelots (i.e. successfully), and express beliefs about them, even when using the word "lynx." Suppose I say "This lynx here [patting yon ocelot on the head] has worms, can you give him a deworming pill?" I've expressed a belief, let's say a true one [i.e. that he's got worms] about what is in fact an ocelot, albeit by using the word "lynx." It would be perverse of you to pretend that I haven't said anything about the ocelot at all, simply because I used the "wrong word" to refer to it. Note that this case is intermediate between the two others, at least so far. For all you know, my response to "I think that's an ocelot, not a lynx" could be either "right, an ocelot; can you give him the pill?" or instead "no, it's a lynx; look, he's got the little tufts on his ears"; where the first suggests that I merely misspoke (failed to express my true belief that the cat is an ocelot), and the second sounds more like I have misidentified the cat rather than misused the word.

Yet these are mere suggestions, at least in advance of further investigation. After all, maybe the former of these responses acknowledges a false belief (if one I regard as unimportant and easily corrected), while the latter confusion about lynxes can also be construed as instead concerning the proper referent of "lynx," a semantic matter.


Now for the moral. The trick here, in my view, is to see two things at the same time. First, "using a word properly" ("having the concept") has (at least) two aspects: the semantic part, getting the meaning right; and the epistemic part, getting the world right. Secondly, on the other hand, these two things, while not identical, are very closely related, indeed interconstitutive, rendering interpretation (determination of meaning) more complicated than simply checking the dictionary to see if a speaker has used a word "correctly." It is in this anti-Platonistic sense only that such obligations are, in Bilgrami's not entirely univocal terms, neither "sui generis" nor "intrinsic."

Sometimes we will emphasize one of these two points rather than the other. For example, we sometimes say that knowing the meaning of a word is knowing how to use it correctly, where the paradigmatic example is that of using the word X to correctly identify X's. If someone says "that's a lynx" when and only when in the presence of lynxes, he most likely knows what "lynx" means. Similarly, when we are teaching someone a word, especially a child, we test their understanding by seeing if they do the "appropriate" thing, e.g. applying "doggie" to dogs and not to ferrets, or responding "five" when asked to "add three and two."

This can make it seem that what we have here is a single determination – one of the meaning of a subject's utterances – which is determined behaviorally, by seeing if the subject makes correct judgments. The idea is that knowing the word (having the concept) "add" just is adding correctly; and knowing (the meaning of) the word "lynx" just is identifying lynxes correctly. But this leaves no room for going on to claim a distinct notion of semantic normativity over and above that involved in judgments that things are thus and so, a doxastic matter (Bilgrami is correct that McDowell can be careless on this point).

In other words, this conception of the relation between belief and meaning puts them too close together. In response, we point out that while I can indeed express a false belief that that cat is a lynx, I must, in so doing, be using the word "lynx" in its proper meaning – to refer to lynxes. Recognizing the conceptual distinctness of the two components restores the proper flexibility to an interpretive process which requires us, in standard cases, to attribute beliefs and meanings simultaneously. This reflects the internal connection to the learning process, in which, in learning "how to use words," we learn both what they mean and a whole bunch of truths about the world: what "lynx" means and what lynxes are, and what "add" means and how to add, without those two amounting to (exactly) the same thing.

On the other hand, however, we don't want to think of belief and meaning as two different phenomena (or things) entirely, in the sense of being determinable by separate processes (instead of the single complex process of interpretation cum inquiry); instead, again, we need to see them as interconstitutive.

According to Davidson and Bilgrami, we risk doing this when we speak of "linguistic norms" at all – that is, as in any way distinct from the doxastic norm of "getting things right." To do so makes it sound like meaning is determined not in the interpretive process itself but instead by allegedly independent facts about, say, English: given the actual dispositions of English speakers, on this view, if I make the sound /links/ (or inscribe l-y-n-x), then I necessarily thereby refer to those things (i.e., lynxes) – no matter what an engaged interpreter may say – simply because "that's what 'lynx' means in English." This semantic Platonism makes utter hash of the holistic Davidsonian picture, and is what provokes Davidson to declare, famously, in "A Nice Derangement of Epitaphs," that "there is no such thing as a language, not if a language is anything like what many philosophers and linguists have supposed."

(Let me just give a bit more from that article. The quote continues: "There is therefore no such thing to be learned, mastered, or born with. We must give up the idea of a clearly defined shared structure which language-users acquire and then apply to cases." Earlier on, he says that to say this means that "we have abandoned not only the ordinary notion of a language, but we have erased the boundary between knowing a language and knowing our way around in the world generally"; or, as I would say, between meaning and belief. "Erasing the boundary" in this way, however, sends us back to the first point – that we must not think of these things as identical or simply reducible to the normativity of belief. The two are not dualistically opposed, but distinct.)

[image: another lynx (here, a bobcat)]

My claim is that even loyal Davidsonians can recognize a difference between "linguistic norms" in this deceptive sense, on the one hand, and on the other, the idea that "getting things right" is a norm for meaning just as much as it is for belief. We can have the latter without the former. Consider the Davidsonian triangle, with a subject at one point, an interpreter (or an informant) at another, and our shared but objective world at the apex. Each point can exert normative pressure on what we say (and believe and do): I get the world right when I believe the truth; I get meaning(s) right when I speak properly; and I get myself right when I act in accordance with my most fundamental commitments. Yet in each case talk of "getting right" need not commit us to the existence of some separably characterizable thing. The lack of a language, in the sense in which Davidson rejects it, is analogous in this image to the lack of the Cartesian world-in-itself on the one hand, and the non-existence of my "true self" on the other. Just as with belief and meaning, it is the dualism of norm and norm-follower that is rejected, not the distinction (and the relation). Even if that means we give up the terminology of concrete "norms" for something fuzzier like "normative commitment" (or as above, normative "pressure"), there is still a role for such a relation between meaning and "language" (if not *a* language).

Bilgrami does suggest that "norms" of meaning could be salvaged if construed as the "extrinsic" prudential norm of "speaking as others do" (rather than "speaking rightly"), or the hypothetical imperative of "... if you wish to be understood." But while prudence is indeed a part of the interpretive picture, I think, for the above reasons, that even properly semantic normativity (if not "norms") can be unobjectionable. But there's a lot more to say about that, so I'll leave Bilgrami's views for another time.

Sunday, November 02, 2008

Real goldfinches and robot cats

Daniel doesn't like what John Haugeland says in "Objective Perception." In the former's words: "The very idea of giving a "constitutive ideal" for "thinghood" strikes me as inadvisable." Yet it seems that we can always try to say what we mean by "thing," such that if (Ex)(x lacks some property p), then x isn't a "thing" after all. After giving some examples, Daniel admits that:
[Everything in this paragraph seems like an overwrought version of Austin's bit about the finches that suddenly explode etc., and what we should say about them. I can't recall where that passage is. I need to read more Austin.]
Well, everyone needs to read more Austin. Here's the quote from "Other Minds". Even in special cases (of deciding "whether it's real"), "two further conditions hold good": first, that it's not true that just "because I sometimes don't know or can't discover [e.g. because it flies away], I never can." The second is that "'Being sure it's real' is no more proof against miracles or outrages of nature than anything else is or, sub specie humanitatis, can be. If we have made sure it's a goldfinch, and then in the future it does something outrageous (explodes, quotes Mrs. Woolf, or what not), we don't say we were wrong to say it was a goldfinch, we don't know what to say." [Philosophical Papers, p. 88]

However, he also goes on to say that "It seems a serious mistake to suppose that language (or most language, language about real things) is 'predictive' in such a way that the future can always prove it wrong. What the future can always do, is to make us revise our ideas about goldfinches or real goldfinches or anything else." [88-89]

This is almost right, but it makes it sound like the case is asymmetrical: that the future can't always prove our beliefs false (rather than our "ideas"), but that our "ideas" are always vulnerable to (forced?) revision. What I would rather say is that both our beliefs and our meanings are corrigible, which nicely combines the ideas that a) beliefs are corrigible in the light of further experience, and b) the interconstitutive nature of belief and meaning implies that the same is true of meaning. While it may be natural in any one case to do one rather than the other, ultimately the choice is up to us. Neither "the world" on the one hand nor "language" on the other (as it seems some want to say) can determine our choice unilaterally.

I've always taken this to be the moral of Putnam's robot cat example. If those things turn out to be robots, then we have two choices: we can say either that a) the supposedly analytic and thus unrevisable sentence "cats are animals" has, mirabile dictu, turned out to be revisable after all, as cats have turned out not to be animals after all, but are in fact robots; or b) that "cats are animals" remains analytic, but that, mirabile dictu, it seems that there are no cats among us after all, as those things we thought were cats have turned out to be robots instead. I don't remember, but I think Putnam himself may have claimed that we must say (a) here; it sounds better to say instead that we are not forced to say (b) (i.e., forced by the supposedly incorrigible, because non-empirical, analyticity of "cats are animals"), but can say whichever we like.

In general, my motto in such cases is that when something unutterably weird happens, it may be that whatever we say will sound unutterably weird, which means that examples like Swampman (or Twin Earth, or grue, or whatever) are nearly always not worth it – if there's really a point there (beyond what I just said), you can make it better some other way.

Monday, September 08, 2008

Ironies abound

Richard Rorty famously defines an "ironist" as "the sort of person who faces up to the contingency of his or her own most central beliefs and desires" (Contingency, Irony, and Solidarity p. xv). Rorty has his own story about what this means, and what it is to "face up" to it, a story which most interpreters, myself included, aren't particularly happy with. On that account, it remains unclear how one can regard one's beliefs as "contingent" without thereby simply giving them up. The resulting skepticism, perversely unacknowledged as such, is precisely not what the doctor ordered. Or so I claim.

On pp. 96-7 of CIS, Rorty comes very close to spelling out an explicitly Pyrrhonist position: "The goal of ironist theory is to understand the metaphysical urge, the urge to theorize, so well that one becomes entirely free of it. Ironist theory is thus a ladder which is to be thrown away as soon as one has figured out what it was that drove one's predecessors to theorize." Remarkably, given the use of that familiar image, the accompanying footnote cites neither the ancient Skeptics nor even the early Wittgenstein (TLP 6.54), but instead the later Heidegger's "motto of ironist theorizing": "A regard to metaphysics still prevails even in the intention to overcome metaphysics. Therefore our task is to cease all overcoming, and leave metaphysics to itself" (Time and Being, 1962).

I don't know about Heidegger, but in Rorty the thought seems to be this. Traditional metaphysics is a mug's game; but if philosophers proceed in the usual fashion to try to show this once and for all, all we'll get is a philosophical theory, or doctrine, to that effect. But philosophical theorizing just is "metaphysics" in the controversial sense – that is, it succumbs to the same questionable urge (to escape finitude, or whatever). Instead of the traditional doctrines, then, we must target the urge which made them, or even their negations, seem necessary. If "overcoming" requires refutation, and refutation indulges the suspect urge, then we must abandon "overcoming" as well. We might not be happy about having to "leave metaphysics to itself," where anyone can still trip over it if they're not careful, but it can't be helped. We'll just have to develop other ways to help each other avoid that pitfall. Rorty's conception of pragmatism as "anti-authoritarianism," for example, exhorts us to spurn the siren song of metaphysical transcendence, with its chimerical promise of ideal grounding for our beliefs and values, in favor of more homespun methods of coping with our problems.

In chapter 5 ("Self-creation and affiliation: Proust, Nietzsche, and Heidegger"), Rorty explains why literature of a certain kind is better than philosophy for doing what needs to be done:
So the lesson I draw from Proust's example is that novels are a safer medium than theory for expressing one's recognition of the relativity and contingency of authority figures. For novels are usually about people – things which are, unlike general ideas and final vocabularies, quite evidently time-bound, embedded in a web of contingencies. [...] By contrast, books which are about ideas, even when written by historicists like Hegel and Nietzsche, look like descriptions of eternal relations between eternal objects, rather than genealogical accounts of the filiation of final vocabularies, showing how these vocabularies were engendered by haphazard matings, by who happened to bump into whom. [107-8]
I'll get back to raking Rorty over the coals some other time, but let me get to my point here. To that last quotation is appended its own footnote, which reads: "There are, of course, novels like Thomas Mann's Doktor Faustus in which the characters are simply dressed-up generalities. The novel form cannot by itself insure a perception of contingency. It only makes it a bit harder to avoid this perception."

I like Mann a lot, but that's definitely a fair criticism (q.v. The Magic Mountain, or anything else for that matter). However, the impetus for this post is that I'm just now (very slowly) reading Doctor Faustus, and on p. 45 our narrator Dr. Zeitblom is telling us about the early years of his friend Leverkühn, the subject of the book:
In those years school life is life itself, it stands for all that life is, school interests bound the horizon that every life needs in order to develop values, through which, however relative they are, the character and the capacities are sustained. They can, however, do that, humanly speaking, only if the relativeness remains unrecognized. Belief in absolute values, illusory as it always is, seems to me a condition of life. But my friend's gifts [i.e., Leverkühn's] measured themselves against values the relative character of which seemed to lie open to him, without any visible possibility of any other relation which would have detracted from them as values. Bad pupils there are in plenty. But Adrian presented the singular phenomenon of a bad pupil as the head of the class. I say that it distressed me, but how impressive, how fascinating, I found it too! How it strengthened my devotion to him, mingling with it – can one understand why? – something like pain, like hopelessness! [Lowe-Porter translation, altered slightly]
So described, the young composer sounds quite a bit like Rorty's "ironist," and indeed, the next paragraph discusses "one exception [i.e., mathematics] to [Leverkühn's] uniform ironic contempt." Here, though, the skepticism is explicit. Belief in absolute values, however necessary "as a condition of life," is always illusory. The only alternative to "absolute" is "relative," and regarding some value as (merely) "relative" is equivalent to rejecting any claims it may have to validity. Where Leverkühn differs from Zeitblom is that the former did not let the recognized illusoriness of absolute value stop him from acting as if he accepted it. He even excelled at what others took seriously, while he himself saw it as merely a game – one he was good at, but a game nonetheless.

No doubt Rorty takes his own position to differ, not simply from Zeitblom's, but also from that of Leverkühn. (He might, for example, have mentioned this passage as anticipating his own views, rather than simply knocking the book for not being sufficiently Proustian.) As I read him, I think Rorty would say that allowing a norm or value to structure one's actions – to be seen as "playing the game" at all – is, in that context, to accept it as fully as it makes sense to do so. To demand a further "metaphysical" commitment to its truth (pardon me, Truth) is to fall into unintelligibility, or at least disutility. That's why recognizing "contingency" isn't the same as skepticism or nihilism.

Now, this is indeed a more attractive thing to say. It's just that I don't think Rorty can do so consistently. For example, Rorty tells us repeatedly that his pragmatism points us past the "stale dichotomy of realism and anti-realism"; but he just as consistently endorses anti-realist doctrine when it suits him, as in the continuation of the very definition of "ironism" with which I began. Ironists, he says, are that way because they are "sufficiently historicist and nominalist to have abandoned the idea that those central beliefs and desires refer back to something beyond the reach of time and chance." Naturally going "beyond realism" will involve rejecting realist doctrines like this one. Still, when overt appeals to anti-realism are qualified, in the definition of the central concept of one's view, only by words like "sufficiently," it's hard to see how that itself is sufficient for us to avoid the one as well as the other.

Monday, July 14, 2008

Sway

This started out as another microreview, but it became macro, on the one hand, and, on the other, not so much a review of the book as another philosophical rant. Too bad; I should do more of the latter anyway.

Sway: The Irresistible Pull of Irrational Behavior, by Ori and Rom Brafman, is another pop-psych book, like Blink and The Tipping Point. It spends its time going over some things which may already be familiar, like the dollar auction (here, due to inflation, a twenty-dollar auction), group conformity experiments, the money-splitting experiment, etc. It's short, and these things are neat, so you might want to check it out. (That's the microreview part. On to the rant.)

Here are the authors on diagnosis bias. After relating how student evaluations of a visiting lecturer can depend to a surprising degree on whether the students are told in advance that he is regarded by others as "very warm" vs. "rather cold", they claim that this phenomenon extends as well to such things as dating, where we really might have thought we were reacting not to short descriptions we had heard in advance, but to what we had experienced for ourselves over an entire evening:
[A] single word has the power to alter our whole perception of another person—and possibly sour the relationship before it even begins. When we hear a description of someone, no matter how brief, it inevitably shapes our experience of that person.
Fair enough, and of course this result is congenial to anyone suspicious of, say, the neutral Given. Here's their example (pp. 73-4):
Think how often we diagnose a person based on a casual description. Imagine you're set up on a blind date with a friend of a friend. When the big night arrives, you meet your date at a restaurant and make small talk while you wait for the appetizer to arrive. "So," you say, "what do you have planned for this weekend?" "Oh, probably what I do every weekend: stay home and read Hegel," your date responds with a straight face. Because your mutual friend described your date as "smart, funny, and interesting," you laugh, thinking to yourself that your friend was right, this person's deadpan sense of humor is right up your alley. And just like that, the date is off to a promising start. But what if the friend had described your date as "smart, serious, and interesting"? In that light, you might interpret the comment as genuine and instead think "How much Hegel can one person read?" Your entire perception of your date would be clouded; you'd spend the rest of dinner wracking your brain over the difference between Heidegger and Hegel and leave without ordering dessert.
Because of course no one who's smart, funny, and interesting could ever spend his or her weekends reading Hegel. That'd be crazy!

Seriously, though, the authors oversimplify. They make it sound like once you have preconceptions (which everyone does), you're irrevocably committed to a certain interpretation of your experience. This strikes me as a facile recoil from a naïve commitment to an impossible "objectivity" (in this sense, an ideal detachment from our subjective perceptions) to an implausible determinism, analogous to the relevant sense of "historicism," i.e., the sort of thing of which Gadamer is often accused by his realist critics.

As I read him, however, it is instead this recoil itself which is Gadamer's target (as well as Davidson's, mutatis mutandis). I'll put the point in Davidsonian terms, but if this isn't what Horizontverschmelzung is all about, then I'm not getting Gadamer at all (which is of course a possibility). The process of interpretation isn't simply one of gathering all our data as "objectively" as possible and (thus) only then engaging our subjective faculties to arrive at a possible meaning. It's interactive, in that we interact not only with other speakers, but also with the world. That is, interpretation (into meaning) and inquiry (into fact) are two aspects of the same process. We attribute belief and meaning to our interlocutor at the same time as confirming or modifying our own beliefs and meanings, and in conveying our interpretation to others (or simply manifesting it in our actions), we express our own beliefs and meanings simultaneously as well, for further interpreters to unpack, and so on.

This means that while our initial reactions may indeed depend (surprisingly) sensitively on our preconceptions – or "prejudices" (Vorurteile) as Gadamer provocatively calls them – we may find that modifying them will be necessary if we are to arrive at a satisfactory interpretation. In fact, again, since interpretation just is inquiry (and, crucially, vice versa), we can purposely tailor our interaction to put our preconceptions, and (what Quine calls) our "analytical [semantic] hypotheses," to the test.

Let's say I've been told my date is "serious." She deadpans that her weekends are dedicated to Hegel studies. Maybe I am indeed less likely to regard that comment as a joke than if she had been described to me as "funny." But that doesn't mean it's not a joke. In particular, I don't have to spend the rest of the date worrying about how I got stuck with such a geek (or, more likely in my case, about whether I should wait until the next date to propose marriage, or can I pop the question over dessert). Nor should I necessarily feel safe in laughing ironically, acknowledging her humor, if I've been told she's "funny." Maybe, although indeed funny, she's also a Hegel scholar, and to laugh at how she spends her weekends would be an insulting gaffe.

If there's any doubt – and why shouldn't there be some, as we've just met – I can just ask: "Really?" As with the original remark, here the right intonation can render this rejoinder perfectly noncommittal between acknowledging and continuing the joke, or taking it seriously and allowing an elaboration. Maybe I'll get
Yes, I'm currently rereading Glauben und Wissen – usually translated Faith and Knowledge, but "Glauben" means "belief" as well as "faith" – because I really think Hegel's conception of skepticism, especially early on, before the Phenomenology, is key to any really useful contemporary appropriation of his views.
Now I've learned something: she's probably serious (be still, my heart!). It could still be a joke; but even if so, I've learned that a) she knows something about Hegel, so she can't think it would be crazy to spend one's weekends on him; and b) her sense of humor is such as to try to squeeze every last drop of irony from one's facetious suggestions.

If I maintain my noncommittal tone, the ball begins to shift (if I may so abuse this metaphor) over to her court. If she's joking, she will probably eventually need some overt acknowledgment from me that I have so understood her. She may escalate the facetious scenario to more and more outrageous heights, to provoke an actual laugh. Maybe she'll tell me that she reads Kierkegaard in the shower, and puts Adorno's Negative Dialectics under her pillow at night in lieu of actually reading it. It would be a good idea for me to laugh at this point, if only to curtail a line of conversation which is providing diminishing humorous returns (or to confirm that she is in fact joking rather than very unusual indeed, and perhaps not as marriageable as all that). Or she'll laugh herself and acknowledge the joke, perhaps continuing in an overtly humorous rather than ironic vein. ("No, I'm kidding, I was a philosophy major, but now I'm all dialectic-ed out; actually I just use the Phenomenology to prop up the air conditioner.") And of course I might have gotten that last one as an immediate response to my initial "Really?"

My point is not that our interpretive preconceptions can be overcome with careful inquiry. Maybe they can, in particular cases, or even most; but a general claim to that effect would simply be a re-recoil back to a dogmatic commitment to ideal objectivity – a one-sided assimilation of interpretation to ("objective") inquiry, rather than a recognition of their interconstitutive nature. We hardly need chaos theory to tell us that the course of a conversation may well be significantly constrained by how it begins – we all know the experience of getting off on the wrong foot (i.e., and never regaining our footing). But significant constraint falls well short of determination. More to the point, our interpretive practices, qua doxastic as well as semantic, are designed precisely so that we may use the third point of the interpretive triangle, our shared yet objective world, as leverage.

This would also be a good time to note that Gadamer's ideal of Horizontverschmelzung is just that: a fusion of horizons, not anything more drastic. When we have so fused our horizons, we're still a) two different people; b) with (some) divergent beliefs; and c) (some) divergent linguistic dispositions. We have simply come to understand each other, to the degree appropriate to that judgment in the context. We've overcome what are interpretable in retrospect as obstacles; yet while we can now see ourselves as occupying the same space, we may still be standing as far away from each other as we started out. Which is why hermeneutic philosophy may not be so opposed to Wittgensteinian "quietism" as people think: in the former case as well as the latter, the idea is not so much to go somewhere as to find out where we are, even while allowing that doing so need not require that we stand stock-still in order to find our bearings. Of course, Wittgenstein himself could be clearer on this point ...

Wednesday, May 28, 2008

Davidson and Gadamer


Recently a few friends stopped by to discuss Davidson, and Clark brought up Derrida's criticism of Gadamer, which he thought might be similar to Dummett's criticism of Davidson (i.e., as committed to something unpleasant or other; I didn't really get it). We ended up talking past each other – I don't get Derrida at all – but I did want to say a few things about one end of the comparison, that between Davidson and Gadamer.

I imagine that some of our trouble came from the fact, as I did mention in that discussion, that Derrida's criticism is directed at Gadamer, not Davidson, so it's not really appropriate to speak Davidsonian in response, as I was doing. The similarities between the two are undeniable, but of course that doesn't make the two positions identical. In his article in Gadamer's Century, McDowell defends the two against charges of relativism, of which he takes them both to be innocent for pretty much the same reasons, and so in that context it's easy to elide the differences and just regard Gadamer as one of the good guys. I shouldn't do that.

But as Clark was describing it, Derrida's charge seems not to be one of relativism, but instead of dogmatism. Where we assume that communication is successful (such that our task is to explain how such a thing is possible), it may yet be that there is instead a "radical rupture" of some (necessarily) mysterious kind. This claim sounds to me like the ontological cum semantic equivalent of Cartesian radical epistemological doubt: offended by our seeming complacency concerning the apparent smoothness of typical conversation, the skeptical soixante-huitard imp hops in with dire warnings of ruptures and fissures and cracks, oh my!

Naturally Davidson comes in for a version of these charges as well (if not from Dummett; cf. Stroud and C. McGinn, who reject, on Cartesian grounds, the anti-skeptical consequences of Davidson's account of interpretation and belief), but Gadamer's case is a bit different. From Habermas, as one might expect, the charge against Gadamer took a characteristic form: if our conception of an objective world is limited by our cultural/linguistic horizons, then we won't have the detachment necessary to perform Critique. We dogmatically assume the world is as we have traditionally construed it, and even when we open our horizons up to achieve Horizontverschmelzung (I love that word) with the Other, we still don't acknowledge the absolute otherness of the objective world: now we both "could be wrong" about it. (Or something like that; I can go look.) Incidentally, people have been known to say the same thing about Wittgenstein, or at least "Winchgenstein."

But now two things occur to me about that. First, that accusation does indeed sound like Stroud's criticism of Davidson. And second, this criticism is pretty similar to that directed at Gadamer's supposed relativism (think, for example, of the various definitions – that is, by opponents – of "historicism"): Gadamer is held to claim that our beliefs are culturally determined (dogmatism), so the denizens of the various cultures never reach out to an objective world, rendering them equal in their futility (relativism). This makes sense, in that that Janus-faced flaw is absent from Davidson and (as I've been able to read him so far) Gadamer as well, and telling the proper story about interpretation can bring both of these things out at the same time (as in McDowell's article). I mean, seriously, if Gadamer were really interested simply in retreating from realism to relativism, Truth and Method wouldn't need to be 600 pages long. The tough part is drawing the proper consequences from a) the linguistic structure of cultural tradition and b) the plurality of same in a single objective world. The optimistic thought of Davidsonian Gadamerians is that T & M contains a helpful post-Heideggerian analogue to Davidson's rejection of the scheme-content dualism. But I haven't even read it, so I wouldn't know. (Maybe Malpas's article in Gadamer's Century can tell us.)

Still, if Derrida's criticism were similar to Habermas's, then maybe Gadamer would have said so (and thus not responded, as Daniel paraphrases him in comments, with "Huh?"). But I've never read that exchange, as I've heard it was a total train wreck.

Friday, February 22, 2008

Hacker on Quine

As I mentioned in my last post, this one is about Hacker's paper "Passing By the Naturalistic Turn: On Quine's Cul-de-sac" (which is, again, available on his website). In this paper Hacker's criticism of Quine takes a particularly broad form, unlike (say) Grice & Strawson's defenses of analyticity. As the title indicates, his subject is "the naturalistic turn," as pointedly opposed to "the a priori methods of traditional philosophy". The paper discusses three aspects of Quinean naturalism: naturalized epistemology, "ontological" naturalism, and, most broadly, "philosophical" naturalism. Hacker defines this last as
the view that [in Quine's words] philosophy is 'not ... an a priori propaedeutic or groundwork for science, but [is] ... continuous with science' [...] In the USA it is widely held that with Quine's rejection of 'the' analytic/synthetic distinction, the possibility of philosophical or conceptual analysis collapses, the possibility of resolving philosophical questions by a priori argument and elucidation is foreclosed, and all good philosophers turn out to be closet scientists. (MS p. 2)
For the record, Hacker believes that regardless of what Quine's arguments show about "the" analytic/synthetic distinction, the philosophical project of "conceptual analysis" is not threatened:
The thought that if there is no distinction between analytic and synthetic propositions, then philosophy must be 'continuous' with science rests on the false supposition that what was thought to distinguish philosophical propositions from scientific ones was their analyticity. That supposition can be challenged in two ways. First, by showing that characteristic propositions that philosophers have advanced are neither analytic nor empirical [but still a priori]. Secondly, by denying that there are any philosophical propositions at all.

Strikingly, the Manifesto of the Vienna Circle, of which Carnap was both an author and signatory, pronounced that ‘the essence of the new scientific world-conception in contrast with traditional philosophy [is that] no special “philosophic assertions” are established, assertions are merely clarified’. [The Scientific Conception of the World: the Vienna Circle (Reidel, Dordrecht, 1973), p. 18] According to this view, the result of good philosophizing is not the production of analytic propositions peculiar to philosophy. Rather it is the clarification of conceptually problematic propositions and the elimination of pseudo-propositions. (p. 3)

[So instead of being "continuous" with science, Hacker claims, philosophy is] categorially distinct from science, both in its methods and its results. The a priori methods of respectable philosophy are wholly distinct from the experimental and hypothetico-deductive methods of the natural sciences, and the results of philosophy logically antecede the empirical discoveries of science. They cannot licitly conflict with the truth of scientific theories – but they may, and sometimes should, demonstrate their lack of sense. (p. 4)
Myself, I never thought that the point about "continuity," about which naturalists make so very much, was that helpful. "Continuity" is cheap. Sure philosophy is "continuous" with science; but it's also "continuous" with art, literature, religion, law, politics, and, I don't know, sports. But I am being perverse here. Let me try instead to be not-perverse.

As previous posts (not just recently but going back to distant 2005) may or may not have made clear, I want 1) to follow Wittgenstein in not only distinguishing philosophy from empirical inquiry (scientific or not), but also seeing it (in some contexts, for certain purposes) as an activity of provoking us into seeing differently what we already knew, by means of (among other things) carefully chosen reminders of same; but at the same time 2) to follow Davidson in pressing Quine to extend and (significantly!) modify the line of thought begun in "Two Dogmas," one which recasts empiricism in a linguistic light and purges it of certain dualisms left over from the positivistic era.

What we've seen so far is that Hacker and Quine are in firm agreement that I can't have it both ways. Either there's a solid "categorical" wall between philosophy and empirical inquiry, or we level that distinction to the ground. It's true that I couldn't have it both of those ways; but I don't want either of 'em. My concern here, as always, is to overcome whatever dualisms are causing confusion; and overcoming a dualism isn't the same thing as obliterating a distinction. In fact, in my terminology, we overcome the dualism only when we can see how the corresponding distinction is still available for use in particular cases (of course, I can reject distinctions as well if I want, for philosophically uncontroversial reasons). So, for example, when Grice & Strawson object to Quine by claiming that the concept of analyticity still has a coherent use, I don't think I need to object. If you want to use the concept to distinguish between "that bachelor is unmarried" and "that bachelor is six feet tall," go right ahead. I just don't think that distinction has the philosophical significance that other people do. In particular, I don't need to use it, or the a priori/a posteriori or necessary/contingent distinctions either, in explaining my own idiosyncratic take on "therapeutic" philosophy. In fact, I find that explanation works better when we follow Davidson in stripping the empiricist platitude (what McDowell calls "minimal empiricism," that it is only through the senses that we obtain knowledge of contingent matters of fact) of its dualistic residue, and meet up again with Wittgenstein on the other side of Quine. (And yes, I used the word "contingent" there – anyone have a problem with that?)

On the other hand, it also seems to me that after the smoke clears and everyone (*cough*) realizes that I am right, each side can make a case that I had been agreeing with that side all along: Hacker can point to the sense in which philosophy on my conception is still a matter of (what he will continue to call) clarifying our concepts, with an eye to dissolving the confusions underlying "metaphysical" questions; while Quine can point to (what he will continue to call) a characteristically "naturalistic" concern (if that naturalism is perhaps more Deweyan than his own) with the overcoming of the conceptual dualisms left over from our Platonic and Cartesian heritage – e.g., those between the related pairs of opposed concepts we have been discussing. Yet it seems to me that neither side can make the sale without giving something up (something important) and thereby approaching what seemed to be its polar opposite.

We've already seen the shape of this idea. On the one side, Hacker's insistence that, as he puts it, "[t]he problems [here, skeptical ones] are purely conceptual ones, and they are to be answered by purely conceptual means" [p. 9, my emphasis] sabotages the anti-dualist content of the anti-skeptical critique with a dualistic emphasis on the "purity" of its form (itself held in place by a corresponding dualism of form and content). On the other, Quine recoils from the dualism of pure abstract a priori and good old-fashioned getting-your-hands-dirty empirical inquiry by eliminating the former entirely in favor of the latter. This insufficient response to one dualism leads inevitably to another: in Quine's case, this means (as Davidson argues) a dualism between conceptual scheme and empirical content, which ultimately (or even proximately!) proves to be pretty much the same as the dualisms (analytic/synthetic, observational/theoretical) Quine was supposed to be showing us how to discard.

We'll leave Davidson for another time (the interpretation business might take a while, though it does come up below), but as my subject here is the Hacker article, let me continue by discussing an area of agreement with Hacker: his dismissal of Quine's naturalized epistemology. (Yet of course even here I do not draw Hacker's moral, exactly.) No one disputes that there is such a thing as empirical psychology, so in one sense the focus of "naturalized epistemology" on resolutely third-person description of the processes of information acquisition by biological organisms is unobjectionable. The problem comes when this project is taken to amount to or replace philosophical investigation (however conceived) of knowledge and related topics.

I'll just mention two points. First (although Hacker doesn't put it quite this way), Quine's naturalistic aversion to "mentalistic" concepts leads him to assimilate the theoretically dangerous (in this sense) first-person case to the more scientifically tractable third-person case – after all, I'm a human being too, so what works for any arbitrary biological organism should work for me too. This makes the "external world" which is the object of our knowledge no longer something opposed (as in the (overtly) Cartesian case) to something mental, but instead the world outside our (equally physical) sensory receptors. But now Hacker wonders about the status of our knowledge of our bodies; or of ourselves, for that matter. Quine is left in a dilemma: "Either I posit my own existence, or I know that I exist without positing or assuming it." As a result (see the article for the details) "[i]ncoherence lurks in these Cartesian shadows, and it is not evident how to extricate Quine from them." [p. 6]

This is (given the difference I've already mentioned) remarkably similar to Davidson's criticism of Quine in "Meaning, Truth, and Evidence":
In general, [Quine] contended, ‘It is our understanding, such as it is, of what lies beyond our surfaces, that shows our evidence for that understanding to be limited to our surfaces’ [The Ways of Paradox, p. 216]. But this is mistaken. The stimulation of sensory receptors is not evidence that a person employs in his judgements concerning his extra-somatic environment, let alone in his scientific judgements. My evidence that there was bread on the table is that there are crumbs left there. That there are crumbs on the table is something I see to be so. But that I see the crumbs is not my evidence that there are crumbs there. Since I can see them, I need no evidence for their presence – it is evident to my senses. That the cones and rods of my retinae fired in a certain pattern is not my evidence for anything – neither for my seeing what I see, nor for what I see, since it is not something of which I normally have any knowledge. For that something is so can be someone’s evidence for something else only if he knows it.
No, wait, that's Hacker again, from later in the paper (p. 13). Here's Davidson, criticizing as "Cartesian" Quine's "proximal" theory of meaning and evidence:
The only perspicuous concept of evidence is the concept of a relation between sentences or beliefs—the concept of evidential support. Unless some beliefs can be chosen on purely subjective grounds as somehow basic, a concept of evidence as the foundation of meaning or knowledge is therefore not available. [...] The causal relations between the world and our beliefs are crucial to meaning not because they supply a special sort of evidence for the speaker who holds the beliefs, but because they are often apparent to others and so form the basis for communication. [p. 58-9]
The relevant stimulus is thus not "the irritation of our sensory surfaces" but instead the rabbit whose appearance prompts the utterance of "gavagai." (See the rest of this key article; it's reprinted in the fifth volume of Davidson's papers, Truth, Language, and History, which I think is now available cheap.) Again, though, this is for reasons concerning the conceptually interconstitutive nature of meaning and belief, not a simple recoil from naturalized epistemology to conceptual analysis. That is, while Davidson, like Hacker, is considering these matters conceptually, his argument presents a specific conceptual analysis (if that's what we want to call it) which in its content may be just as fatal to the "purely a priori" as Quine is.

Jumping ahead a bit, we can see on the horizon, even here, a cloud the size of a man's hand. For Davidson's contextually healthy insistence that (as he puts it elsewhere) "only a belief [here, as opposed to sensory stimulations] can be a reason for another belief" can, in other circumstances, manifest itself as a content-threatening coherentism. In "Scheme-content dualism and empiricism" (which I hope we can get to later), McDowell registers puzzlement that Davidson's criticism of Quine is that the latter's conception of empirical content as sensory stimulation (i.e., in its conceptual distance from the "external" world) leads merely to skepticism (not that that's not bad enough!) rather than an even more disastrous loss of the right to be called "content" at all. (At another level, this same consideration tells against Hacker's insistence that "conceptual analyses" are simply matters of language as opposed to matters of fact, i.e., about their referents in the world.)

Hacker too finds Quine's own response to skeptical worries to be nonchalant. In Quine's view, he says, since we are concerned with knowledge acquisition as a scientific question, "we are free to appeal to scientifically established fact (agreed empirical knowledge) without circularity." (Hacker's comment: "That is mistaken.") The philosophical problem of skepticism is not concerned simply with deciding whether or not we have any knowledge, so that it may be dismissed in deciding that, in fact, we do. As Hacker points out, one form of skepticism arises
from the thought that we have no criterion of truth to judge between sensible appearances. Citing a further appearance, even one apparently ratified by ‘science’, i.e. common experience, will not resolve the puzzlement. Similarly, we have no criterion to judge whether we are awake or asleep, since anything we may come up with as a criterion may itself be part of the content of a dream. So the true sceptic holds that we cannot know whether we are awake or asleep. We are called upon to show that he is wrong and where he has gone wrong. To this enterprise neither common sense nor the sciences can contribute anything. [Again, as cited above, Hacker's conclusion, now in context, is that] [t]he problems [skepticism] raises are purely conceptual ones, and they are to be answered by purely conceptual means – by clarification of the relevant elements of our conceptual scheme. This will show what is awry with the sceptical challenge itself. (p. 8-9)
There's more in this vein, attacking Quine's offhandedly deflationary conceptions of knowledge ("the best we can do is give up the notion of knowledge as a bad job") and belief (beliefs are "dispositions to behave, and these are physiological states"), and "the so-called identity theory of the mind: mental states are states of the body." Hacker's comment on this last is typical ("This too is mistaken"), and here too I agree. (Nor, since you ask, am I happy with Davidson's early approach to the mind-body problem, i.e., anomalous monism. But let's not talk about that today.)

Still, I can't see that Hacker's more extreme conclusions about the relation of science to philosophy are warranted. It's true that we can maintain that firm boundary by definitional fiat. But it's just not true that "the empirical sciences," if that means empirical scientists doing empirical science, cannot possibly contribute to our understanding of (the concept of) knowledge, or even provide a crucial piece of information which allows us to see things in a new way. After all, that's what the philosopher's "reminders" were trying to do too. And if a philosopher's "invention" of an "intermediate case" (for example) can provide the desired understanding (PI §122), then so too might a scientific discovery. All we need here, to avoid the "scientism" Hacker fears, is the idea that even the latter does not solve problems qua discovery, even if it is one – and that just because the philosopher's reminder might have done the same thing even if invented and not discovered.

Thursday, February 21, 2008

Mea culpa, mea methodologica culpa

In my post the other day, I made an interesting slip (if that's how you want to think of it): I suggested that Putnam's claim that analyticity and a priority come apart (so that the first four sections of "Two Dogmas" can be detached from the last two) might be of some use to defenders of analyticity. They might want to argue, I thought, that if your target (qua "metaphysics") is really the a priori/a posteriori distinction, then it might be better not to identify it with the analytic/synthetic one (and get rid of them at the same time), but to distinguish the two, so that we might not simply keep around the presumably now unoffensive (qua non-metaphysical, once so distinguished) notion of analyticity, but also employ it (for the project of linguistic analysis) to combat more metaphysical notions (like the a priori).

But (as I noted in a subsequent comment) that just assumes that the defenders of analyticity might see the a priori as unacceptably metaphysical where analyticity is not. As it turns out, Hacker at least does not. I'll get to all that in a minute. Let me first give a quick and dirty characterization of four similar concepts, not worrying for the moment about whether any one of them can be collapsed into any of the others, or whether there really are any such things.

1. Tautologies are "truths of logic": P or not-P (in classical logic).

2. Analytic sentences are "truths (by virtue) of meaning": That bachelor over there is unmarried.

3. Truths are known a priori when we don't have to go out and look, but can confirm them from the proverbial armchair.

4. Truths are necessary when it is impossible for them to be false (they're "true in all possible worlds").

If you like these concepts, you can supply your own examples for the last two. (The SEP article on "A Priori Justification and Knowledge" has as an example of a necessary proposition this one: "all brothers are male," which is not one I would have chosen if I were trying to distinguish necessity from analyticity.) Anyways, my point is that however the categories do or do not overlap, the characterization of each has its own typical angle: tautologies have to do with logic, analyticity with meaning, a priority with knowledge (and justification), necessity with ontology (or modality, or in any case metaphysics).

A lot of us want, in some sense or other, to rule out "metaphysics" as nonsense, e.g. a) Ryle, Hacker, etc.; b) Wittgenstein (early and late, on most interpretations); and c) some but not all naturalists. So necessity (or, redundantly, "metaphysical necessity") looks fishy to us. But (as I started to talk about before) in order to combat metaphysics (including but not limited to "necessity"), some of us think we need to hold on to analyticity – a concept which deals, the thought goes, not with the world (i.e., on the other side, qua the object of a "metaphysical" statement, of the "bounds of sense"), but with meaning (which is safely on "this" side). Or so I read Grice & Strawson (I'm trying not to make a straw man here!). For G & S, then, analyticity is both unobjectionable and necessary – uh, required – for the project of finally exorcising our metaphysical demons. (I assume, if perhaps I shouldn't, that no one has a problem with (the very idea of) tautologies.)

Where does that leave the a priori? If we assimilate it to necessity (on the one side), then it's a metaphysical notion, worthy of dismissal; and if analyticity is the "least metaphysical" of the three (on this quick and dirty characterization), then if Quine's attack on analyticity goes through, it seems that a fortiori (so to speak) the others go as well. But for "analysis" to be possible, G & S believe, there need to be such things as "analytic" truths. So again, my off-the-cuff suggestion was that if we drew the line between analyticity (needed for the method of "analysis") and the a priori, we could use the former to dismiss the latter (along with the more overtly metaphysical notion of necessity).

But Hacker at least is clear that he does not want to do this. For Hacker, the a priori is the central concept he wants to defend: not as a possibly unacceptably metaphysical subject (i.e. object) of philosophical speculation, but as its constitutive method. It is this and this alone which distinguishes philosophy from empirical inquiry. I should have realized this, as the notion is (as in the SEP article) characteristically applied to the manner in which knowledge is acquired rather than its (semantic) form or (ontological) object, and the main contention of the "conceptual analysis" folks is that, again, philosophy is a matter of the clarification of our concepts as specifically opposed to empirical inquiry; so of course they want to defend the a priori as well as analyticity. (My excuse is that I didn't want to assume the naturalist characterization of the a priori (i.e. as hopelessly unempirical) from the beginning, even, or perhaps especially, because I too am not too keen on the notion, if for somewhat different reasons.)

For an interesting account of Hacker's attitude toward Quine, I recommend his paper "Passing By the Naturalistic Turn: On Quine's Cul-de-sac" (available on his website). The main target is Quine's "naturalized epistemology" (so some of what Hacker says is perfectly congenial), and in attacking it Hacker commits himself hook, line, and sinker – wholeheartedly – to a full-on Manichean dualism of pure a priori conceptual analysis, on the one hand, and not-at-all-philosophical empirical inquiry on the other. Picking up Quine's gauntlet, he begins:
There has been a naturalistic turn away from the a priori methods of traditional philosophy to a conception of philosophy as continuous with natural science.
and ends:
This imaginary science [naturalized epistemology] is no substitute for epistemology – it is a philosophical cul-de-sac. It could shed no light on the nature of knowledge, its possible extent, its categorially distinct kinds, its relation to belief and justification, and its forms of certainty. [...] For philosophy is neither continuous with existing science, nor continuous with an imaginary future science. Whatever the post-Quinean status of analyticity may be, the status of philosophy as an a priori conceptual discipline concerned with the elucidation of our conceptual scheme and the resolution of conceptual confusions is in no way affected by Quine's philosophy.
Snap! That last sentence answers our (my) question about priorities (no pun intended) pretty clearly, I'd say. Let's come back to this article; it's got a nice mix of right and wrong, uh, agreement and disagreement between Hacker and me (and Quine with both of us).

Sunday, February 17, 2008

Not John Gielgud

I was preparing a post on the analytic/synthetic business we have been discussing (okay, so far it's other people, here, here, and here), and (curious as ever) I followed a trail of links to Wikipedia's article on "Two Dogmas," which I basically just glanced at (looks okay), but there's an interesting bit at the end which no-one has said anything about yet:
In his book Philosophical Analysis in the Twentieth Century, Volume 1 : The Dawn of Analysis Scott Soames (pp 360-361) has pointed out that Quine's circularity argument needs two of the logical positivists' central theses to be effective:
All necessary (and all a priori) truths are analytic.

Analyticity is needed to explain and legitimate necessity.
It is only when these two theses are accepted that Quine's argument holds. It is not a problem that the notion of necessity is presupposed by the notion of analyticity if necessity can be explained without analyticity. According to Soames, both theses were accepted by most philosophers when Quine published Two Dogmas. Today however, Soames holds both statements to be antiquated.
Upon reading this, I had two thoughts in quick succession, and wouldn't you know, they're in tension with each other. The first one was: I hardly think the defenders of analyticity (that is, those who, like our friend N. N., see Quine's attack as threatening the philosophical project of conceptual analysis, whether or not they see the latter as constitutive of philosophy itself), or anyone else unimpressed by Kripke for that matter, should welcome criticism of Quine's argument along these lines. I can't see any such philosopher saying: "see, you can too have analyticity – all you have to do is explain it in terms of an independently established notion of metaphysical necessity!" Surely the whole point of "conceptual analysis" was to put "metaphysics" out of business. So, no help there, right?

The second thought I had was this. Of course the contemporary naturalist/empiricist line of thought, in which "Two Dogmas" was an important early move, is also determined to put metaphysics out of business. But in so doing, it seems to assimilate philosophy into the empirical sciences, not as itself an empirical discipline, but as concerned solely with making sure that science dots the i's and crosses the t's in the proper way (once out of the lab and writing up the results). So if you put all of your anti-metaphysical eggs into the naturalist basket, by rejecting the distinction underlying the competing strategy of conceptual analysis, that means that when the naturalists (if not the empiricists) then turn around and reinstate metaphysics, you have no recourse.

Naturally they'll put a doily on that monstrosity by calling it a "scientific" metaphysics (whatever that means); but when it's accompanied, even justified, by a swipe at "linguistic philosophy" for neglecting metaphysics – well, that's going to be pretty galling. The reason my two thoughts are in (mild) tension with each other is that while the first implies that Soames's criticism of the argument of "Two Dogmas" is of no help to the linguistic analyst, the second thought leads to a different conclusion. For now that philosopher can resist the naturalistic line of thought right at the beginning: if the point of "Two Dogmas" was to deprive metaphysical pseudo-inquiry of the purely non-empirical conceptual space in which it was supposed to operate, well then the naturalistic revival of metaphysics shows that it failed to follow through on its promises. This means that (given the original choice between naturalism and conceptual analysis) as far as unmasking metaphysics as nonsense is concerned, the linguistic strategy is the only game in town after all.

These (quick) thoughts, you will notice, elided two complications, which I should at least mention. First, I exempted properly empiricist naturalism from the accusation of reversion to metaphysics. But it's not clear to me that its proponents will be able to fend off such accusations when they come from fellow naturalists. (My own objections to these positions are of another order entirely, so when naturalists trade accusations of "reversion to/neglect of metaphysics," I don't need to take sides.) For the second elision, let's return to Wikipedia's article:
In "'Two Dogmas' revisited", Hilary Putnam argues that Quine is attacking two different notions. Analytic truth defined as a true statement derivable from a tautology by putting synonyms for synonyms [is] near Kant's account of analytic truth as a truth whose negation is a contradistinction. Analytic truth defined as a truth confirmed no matter what[,] however, is closer to one of the traditional accounts of a prioricity. While the first four sections of Quine's paper concern analyticity, the last two concern a priority. Putnam considers the argument in the two last sections as independent of the first four, and at the same time as Putnam criticizes Quine, he also emphasizes his historical importance as the first top rank philosopher to both reject the notion of apriority and sketch a methodology without it.
It does seem that the a priori, rather than analyticity, is the key notion here, and perhaps the defenders of Grice and Strawson would like to argue that the way to debunk the former (as I think we may construe their project) is to keep the latter rather than running the two together and discarding both.

Friday, February 15, 2008

Postmodern captain

Recent visitors to this site will notice the lack of activity here, but I haven't been entirely absent from the 'sphere, as we have been having a rousing conversation about Philosophical Investigations §122, among other things, at other locales (see here, here, here, here, and here). In general, if you drop by here looking for me and I'm not here, I may be over at these or a few other places (virtually speaking). Still, I am remiss in not contributing anything more substantive than a few comments from the virtual peanut gallery. We'll get to those things soon enough, I hope, but for now here's an interesting tidbit from a book I read recently which was made up (almost?) entirely of untruths.

Post Captain is the second book in Patrick O'Brian's series of naval historical novels set in the Napoleonic wars era (thanks to the Crooked Timber crew for the recommendation). Toward the end, Dr. Stephen Maturin is at the opera, but he finds it "poor thin pompous overblown stuff" and cannot enjoy it:


A charming harp came up through the strings, two harps running up and down, an amiable warbling. Signifying nothing, sure; but how pleasant to hear them. Pleasant, oh certainly it was pleasant [...]; so why was his heart oppressed, filled with an anxious foreboding, a dread of something imminent that he could not define? That arch girl posturing upon the stage had a sweet, true little voice; she was as pretty as God and art could make her; and he took no pleasure in it. His hands were sweating.

A foolish German had said that man thought in words. It was totally false; a pernicious doctrine; the thought flashed into being in a hundred simultaneous forms, with a thousand associations, and the speaking mind selected one, forming it grossly into the inadequate symbols of words, inadequate because common to disparate situations – admitted to be inadequate for vast regions of expression, since for them there were the parallel languages of music and painting. Words were not called for in many or indeed most forms of thought: Mozart certainly thought in terms of music. He himself at this moment was thinking in terms of scent.

[Suddenly, from his box Stephen espies, in the crowd below, the woman whom he has been chasing for more than four hundred pages, with little success – only just enough, in fact, to maximize his frustration.]

Stephen watched with no particular emotion but with extreme accuracy. He had noted the great leap of his heart at the first moment and the disorder in his breathing, and he noted too that this had no effect upon his powers of observation. He must in fact have been aware of her presence from the first: it was her scent that was running in his mind before the curtain fell; it was in connection with her that he had reflected upon these harps.
I find this (like a lot of things in good literature, now that I think of it) phenomenologically astute but philosophically naive. Certainly the idea of "thinking [only] in words" suggests a crude picture indeed, of the sort (rightly or wrongly) attributed to artificial intelligence types – and which provokes phenomenologically-motivated accusations of a "myth of the mental" (e.g. in Dreyfus) and calls for recognition of "non-conceptual [mental] content" (not, as I understand it, to be confused with "qualia" – but maybe I'm the one who is confused).

Surely, we feel, our minds – and our experiences – contain more than words. That our hearts leap and our breaths catch, or that our (verbal) thoughts are affected, subtly or otherwise, by bodily phenomena and multifarious subconscious associations cannot be denied. The faculty of language – the "speaking mind" – is only one of many contributors to the experiential makeup of our conscious selves. It is natural to reach, as we all do at times, for an image of trying, and often failing, to "put into words" something which must perforce exist "outside" language but which is still part of our experience. Still, I would resist the idea that there are "thoughts" antecedent to their linguistic manifestations, or that music and other arts are "parallel languages" which can communicate thoughts which (what we would have to call, now non-redundantly) "verbal language" cannot. (Or as my undergrad professor put it, when I spoke of the sort of experiences Stephen here discusses: "why do you want to call these things 'thoughts'?")

Let's look first at the idea that words are (language is) "inadequate because common to disparate situations." This has been a common refrain in philosophy from the Greeks through Derrida. Here's another German on the matter, writing some seventy years after Stephen's night at the opera, but one hundred years before the real-life author of Stephen's ruminations:
Every word immediately becomes a concept, inasmuch as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates in our equating what is unequal. No leaf ever wholly equals another, and the concept "leaf" is formed through an arbitrary abstraction from these individual differences, through forgetting the distinctions; and now it gives rise to the idea that in nature there might be something besides the leaves which would be "leaf"—some kind of original form after which all leaves have been woven, marked, copied, colored, curled, and painted, but by unskilled hands, so that no copy turned out to be a correct, reliable, and faithful image of the original form. ("On Truth and Lie in an Extra-Moral Sense", The Portable Nietzsche, p. 46)
Nietzsche scholars like Clark hurry to point out that Nietzsche later abandoned his youthful skepticism about truth (the oft-quoted subsequent paragraph in "Truth and Lie" tells us that "truths are illusions about which one has forgotten that this is what they are"), reminding us that this essay was a) published only posthumously, and b) written some 15 years earlier than his most mature writings (an eternity in Nietzsche-time). Still, even here the point is not to accept but to reject the idea that the origin of our concepts means that there is some more perfect reality which (due to their humble origins) they necessarily fail to capture. This is an anti-skeptical point, one which Nietzsche retained throughout his career.

But let's turn back to the properly skeptical point with which this anti-skeptical point is easily conflated (one which later Nietzsche does reject). Even if the Platonic Leaf is an illusion, what about those "unique and wholly individualized original experiences" from which our concept of "leaf" is abstracted? If our concepts necessarily fail to capture (not the pure abstraction, but instead) these individual differences, then it seems that here too our language is inadequate. Yet it is only so if one has a distorted conception of what it is that language is supposed to do, which is not to duplicate individual experiences but instead to express beliefs (and other "mental states" like emotions) and communicate truths about the world (often, which may be the source of some of the confusion here, both at once). Even when the problem is not that language fails (in its necessary finitude) to achieve pure generality, but the seemingly opposite point that it fails (in its necessary generality) to achieve pure specificity, the result is a fatal temptation toward Platonic (or Cartesian) abstraction and reification, and a corresponding anxiety (or conviction!) that language necessarily conceals rather than reveals (or communicates).

Here's a leaf. Is it not a leaf? We just agreed that it is. So "this is a leaf" is true, and not an "illusion." But this other leaf is also a leaf; so "this is a leaf" fails to capture the specific "leafiness" of either of them. True enough, much has been left out. But so what? (What should we say, "this is a leaf, but it's an illusion to believe that it is"? Hogwash.) So say more: this particular leaf is green, small, smooth, wet. These too are merely words, generalized from many greens and smalls and wets. Even the precise hue (say, 7CFC00), the size in microns, the precise amount of water on its surface, everything I can possibly "put into words," will not get us across that metaphysical gap (once so construed) between universal predicates and irreducibly individual thing. You cannot describe to me – language cannot capture – the leaf-in-itself.

Okay, but this wasn't really our problem. Your concern in speaking to me was not after all with a posited leaf-beyond-experience (whether an ideal Platonic Leaf or a specific Cartesian leaf-in-itself) but instead with your own experience, communication of which need not require such fictions as leaves-in-themselves. Here too, though, the same problem seems to arise. You had some experience which you wanted to communicate to me. Of course I can't be you, so I can't have your experience. Yet it still can seem as if even though I cannot be you, there is some thing, a (specific) "experience" of yours (distinct from you, that you are "having", such that that identical experiencer – you – then goes on to "have" another such, etc.) which your words necessarily (alas) fail to communicate to me. That such things cannot be transferred whole from your inner theater to mine isn't the fault of language. Even if you handed me the very leaf in question, to look at and touch for myself, I still wouldn't have your experience, even if the one I did have was thereby very much more "like yours" than the one I had merely listening to you describe it. This "failure" just doesn't have the philosophical significance it can seem to have: that there is, like the leaf-in-itself, an experience-in-itself which can be conceptually detached from your having it, and which I may thereby "fail" to have due to imperfections in the medium of transmission. In my view, once we've established that I can't be you (or, again, that words aren't "the same as" the things which they denote or describe), that turns out to be the only metaphysically relevant consideration – which as a triviality cannot support the philosophical weight put on it by the sort of realism which results in the sort of skepticism in question, which sees language as "cutting us off from reality" (or each other) rather than opening it up to us.

It is of course true (another triviality) that music or painting can evoke experiences which language cannot – that there are qualitative differences which, as subjects, we automatically project back onto their "objects" qua experience. We naturally speak here too of "expression"; yet there is no reason to think of these arts as "parallel languages," or languages at all. I liked Garry Hagberg's book Art as Language, which goes into these matters very clearly indeed (as the Amazon reviewer rightly notes), so I won't go into them here. I would just suggest that "expression" (whether artistic or linguistic) has connotations not simply of communication, but also of manifestation or even creation, which can help suppress the urge to posit some distinct entity which it can fail to copy adequately – while yet leaving in place the triviality that there are plenty of ways in which an "expression" (of something) can indeed fail (and corresponding locutions, such as Mozart's musical "thought").

For more on the idea of "thinking in terms of scent," I imagine there would be a lot in this book, the movie version of which I just saw last week. Interestingly, while I imagine some people reacted to the story's move, toward the end, from highly implausible (even in cinematic terms) to completely impossible, with an annoyed "oh, come on," I found that the move actually relieved that pressure rather than increasing it to intolerable levels – as now it became easier to see the story as purely allegorical fantasy (which of course it always was) rather than an attempt to make (still fanciful) sense on the literal level. (I speak abstractly in order to avoid spoilage.) The film (by Tom Tykwer of Lola rennt fame) renders the experience of scent in visual terms very well (although there were a few too many shots of sniffing noses), and I imagine the book's appeal depends on its success in the corresponding rendering in verbal terms.