Thursday, December 20, 2007

Another semi-cool widget

You will notice a new widget I have added in the sidebar, between the archives and the blogroll. It's kind of cool, but it'll get boring if it doesn't get updated for a long time (I'll try not to let that happen). I have started it off with a list of 15 fine films I saw this year. (I didn't limit it to 2007 releases because I didn't see, well, even 15 of them at all, let alone 15 worth mentioning.) 2004 seems to have been a reasonably good year, with four entries out of 15.

On a film's individual page at imdb, the title is always listed in the original language (which means that sometimes you can't recognize it: did you know that #57 on the imdb top 250 list, as voted by you the viewers, is Sen to Chihiro no kamikakushi?); but even on this list the title can be misleading. For instance ... oh heck, I'll just list them again here, as I don't have to link to them again, and I have a few comments anyway. This post will be far away down the page soon enough, I imagine. Click on the plus sign next to each title (in the widget) for more info! (Tell your browser to allow popups for this page in order to go to the corresponding imdb page.)

1. The Lives of Others

This is that East German movie about the Stasi. Sebastian Koch is excellent as the dissident playwright caught up in tragic circumstances, but the film really belongs to Ulrich Mühe as the Stasi agent with wavering loyalties. Koch is also good in Black Book, which just missed the cut (maybe later).

2. Kill!

Awesome samurai flick. Tatsuya Nakadai is the man. 'nuff said.

3. Downfall

Bruno Ganz was a fixture in German cinema during the 70's and 80's (Wings of Desire, Messer im Kopf, etc.). Here he plays Hitler. This film is monumental and I can't imagine what it must have been like to see it if you lived in Germany during those terrible days (kind of like The Lives of Others in that respect, but more so). I can't judge the accuracy of Ganz's Hitler impression, but he's totally convincing in any case. Fun fact: if you saw The Ninth Day, you'll remember Ulrich Matthes as the heroic priest; well, here he plays Goebbels. Yikes!

4. Together

This is a hilarious and moving film from Sweden (I laughed, I ... well, I was moved) about communal living (hey, the 70's happened in Sweden too, y'know) – portrayed with all its lumps, but lovingly all the same. Great comment at the imdb page ("the aesthetics of porridge").

5. Head-On

This starts out as what I guess is a culture-clash comedy, about Germans of Turkish descent, but then veers into deeper waters, with surprising impact (thus the title?). It takes a while to get where it's going, but it's definitely different. I don't know what else to say here so I'll leave it at that.

6. Sansho the Bailiff

I don't know how they get The Bailiff out of "Sansho dayu", but there it is. At least the DVD cover gets it right. This is a classic of Japanese cinema, newly out on DVD (thank you Criterion!!). More melodramatic than I expected (and the director himself was apparently a bit dismissive of it later on), but still jaw-dropping (e.g. the abduction scene – whew!).

7. The Taste of Tea

One of those quirky Japanese films about a quirky family. Yeah, sounds boring, but give it a chance and its weird rhythms will beguile you.

8. House of Fools

Another film that works better than its description: a lunatic asylum in Chechnya caught up in the madness of war. Look out for the bizarre, gutsy cameo by Bryan Adams the rock star.

9. F for Fake

That's what it says on the box; not sure where the listed title comes from (and the imdb page has "Vérités et mensonges"). Orson Welles giving us a lesson in art fakery. This one is totally out there.

10. 49th Parallel

Again, that's what it says on the box (in huge letters). Much better title than "The Invaders." One of the lesser-known Powell & Pressburger efforts, but if you like them (or if you are Canadian) you will definitely enjoy this unusual yet very entertaining film (w/Anton Walbrook!). Look out for Laurence Olivier in what is, well, not his best-known role. Another clutch Criterion release (so was Kill!, btw).

11. Mysterious Skin

Joseph Gordon-Levitt is riveting in this Gregg Araki film (warning: hard to watch at times). The big revelation is no surprise, but that doesn't matter as much as you'd think. Great cameo: Mary Lynn Rajskub (Chloe on 24) as ... what Chloe on 24 would be like if she were a UFO abductee instead of a CTU agent.

12. Pan's Labyrinth

Never got to it in the theater, but I finally caught up with it on DVD. Eye-popping fantasy (set during the Spanish Civil War this time) from Guillermo del Toro (director of Hellboy).

13. Prince of the City

Cop corruption drama in the Serpico vein, from ... the director of Serpico. The big breakthrough – and AFAICT the only big role – for Treat Williams, who is in virtually every scene. Cool time capsule.

14. Take My Eyes

Surprisingly involving Spanish drama about an abusive relationship.

15. The Cranes Are Flying

Just saw this one last night (thanks are once again due to Criterion for a marvelous transfer). Wow. Yet another war movie, this time from the Soviet angle. And once again, I cannot imagine what a Soviet audience must have thought, not only having lived through that time themselves, but also seeing it so freely portrayed on the screen (thanks to the "thaw" after Stalin's death). There's a scene in which two bright-eyed factory girls are about to regale the young hero's father with the customary patriotic bromides and he just cuts them off with a mocking laugh – the audience must just have gasped. Breathtaking cinematography throughout. See this one first.

All in all a good year at this end. Check back in 2009 for some 2007 releases!

Monday, December 17, 2007

A minor point about Dennett

I know I have other obligations (in the works), but here's one thing I had intended to get back to. And now my intention has been realized.

The other day, in comments, Daniel linked to some criticism Ben W. made of Dennett's explanation (in "Real Patterns") of the utility (indeed, inescapability) of the "intentional stance" using his beloved Life-plane example. I'll assume familiarity with the example (click the link for Ben's setup), but the general idea is that if we are presented with a Life-plane computer chess player, we will do much better in predicting its behavior, Dennett claims, by taking the intentional stance (i.e. taking into account what the best move is), rather than making a ridonkulous number of Life-rule calculations (i.e. in predicting the entire bitmap down to the last cell for X generations).
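For the curious, here's what those "Life-rule calculations" actually amount to. This is just a minimal sketch of Conway's standard rules (nothing here comes from Dennett's paper; the `life_step` function and glider coordinates are my own illustration):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life. Live cells are a set of (x, y) pairs."""
    # Count, for every cell adjacent to a live cell, its number of live neighbors.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation iff it has exactly 3 live neighbors,
    # or has 2 and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The canonical glider -- the pattern whose "streams" carry the chess moves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = life_step(gen)
# After 4 generations the glider is itself again, shifted one cell diagonally.
assert gen == {(x + 1, y + 1) for (x, y) in glider}
```

Predicting the chess player from the physical stance means running exactly this update, cell by cell, over a bitmap vast enough to implement a Turing machine, for however many generations the move takes; the intentional stance skips all of that.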

There are two issues here, which we need to keep straight. First, how are we supposed to translate an intentional-stance prediction ("K x Q") "back into Game of Life terms," as Dennett claims is possible? Second, how do we know enough, given the impossibly vast bitmap, to make the intentional-stance prediction in the first place?

With respect to the first issue, the problem here, as I see it, is that there's an ambiguity in the idea of "translating that description back into Life terms". Of course you can't translate "he'll play K x Q" "back into" a bitmap. There are innumerable bitmaps which include (let's say) the stream of gliders that translates into "K x Q." (Consider for example that the move is so obvious that even the worst-playing chess programs will make it.) But there may be only one (suitably described) glider stream that corresponds to "K x Q". It's that description which is made in "Life terms" ("glider stream" is a Life term, right?).

Here's how Ben poses the second question:
You can't adopt the intentional stance towards [the Life-chessplayer] until you have some idea what it's doing beyond just playing Life. I don't see any real reason to grant this, but even if you grant that someone comes along and tells you "oh, that implements a universal Turing machine and it's playing chess with itself", you still can't do anything with that information to predict what the move will be until you know the state of the board.
Well, of course. But remember, the intentional stance is an interpretive stance; and we don't ask (say) Davidson's radical interpreter for his/her interpretation (ascription of beliefs and meanings) until a whole lot of observations and interactions have occurred. Pointing this out is not the same as locating a theoretical difficulty with the very idea of ascribing beliefs and meanings on the basis of observations which might very well have perfectly good descriptions in radically non-intentional terms. Nor does it show a problem with making predictions of events which can be described in some way at any number of levels (to wit: a long (!) list of particles and their momenta, a hand moving in this direction at that velocity, a man moving his hand, a man pulling a lever, a man voting, a man making a mistake – a bad mistake). That is, it doesn't mean interpretative strategies aren't worth it in predictive/explanatory terms. What choice do we have?

Now go back to the first question (or both together):
Even if someone came along and told the observer not only that it's playing chess with itself, and then also told h/h what the current state of the board is (in which case it's hard to see what the program itself has to do with anything anymore), he's still not in much of a position to predict future configurations of the Life board—not even those extremely few that do nothing but represent the state of play immediately after the move.
Again, when we "re-translate" our intentional-stance prediction "back into Life terms," we don't get 1) entire bitmaps; and we didn't get the intentional-stance prediction, and the "re-translation", 2) from nothing but a gander at the massive bitmap. We get 1) particular abstractions (say, a glider stream of such and such a description from that gun there) 2) after we already know, however we know it, that a chess game is in progress, the position, which glider streams are doing what, etc. That may sound very limited compared to what Ben is demanding. But why should we expect the miracle he's discussing? or hold it against an interpretive procedure that fails to provide it? I don't predict the exact state of your neurons when I predict you'll say (i.e., make a token of one of the insanely large number of sound types corresponding to the English words) "Boise, why?" when prompted by (...) "What's the capital of Idaho?" This is true even if your "mental state" does indeed supervene on your neural state. And of course my predictions depend on a whole lot of assumptions not available to (say) a newly materialized alien being trying for prediction/explanation in its own terms – assumptions like that you speak English, and know the state capitals, and are in a question-answering (yet suspicious) mood, etc., etc., all of which are corrigible if my predictions keep failing (and all of which, for Davidsonian reasons, even such a very radical interpreter may eventually be able to make for itself).

Also, what we actually do get isn't chopped liver – unless you'd really rather crank out those trillions of calculations. And even if you did, your result wouldn't be in the right form for our purposes. It would have both (way) too much information and not enough. Too much, because I don't need to know the massive amount of information which distinguishes that particular bitmap from every other one in the set of bitmaps each of which instantiates a Turing machine chessplayer running a chess program good enough to make the move I predict (or not good enough not to make the mistake I also make). All I need – all I want – is the move (or, in Life terms, the glider stream).

But there's also not enough information there, because (as Ben points out) you could do the bitmap number-crunching and still not know there was a chess game being played – which leaves you without an explanation for why I can predict the glider stream (qua glider stream) just as well as you; and of course you too have to know something about Life (more than the rules) to go from bitmap to glider stream. I think Nozick says something like this in Philosophical Explanations (without the Life part).

And of course "it's hard to see what the program itself has to do with anything anymore." Just as it's hard to see what your neurons have to do with the capital of Idaho. Once we take up the proper stance for the type of interpretation we're interested in, we can ignore the underlying one. In each case, though, we keep in mind that if things get messed up at the lower level (a puffer train from the edge of the plane is about to crash into our Turing machine, or (in the Boise case) someone clocks you with a shovel), all intentional-level bets are off. (But that doesn't mean we aren't also interested in an explanation of how exactly the bitmap-plus-Life-rules instantiates a chessplayer, or of how exactly neurons relate to cognition; those are just different questions than "what move?" or "what answer?").

So I disagree with Ben that (in this sense, anyway) "this stance stuff [seems not to] reap big benefits." I don't know what Daniel means by Ben's criticism's making the stances "look a whole lot more naturalistic." As opposed to what?

Knee-deep thoughts

What effect do lakes have on thought? Find out at the new Philosophers' Carnival!

Friday, December 14, 2007

Like, wow

I don't post about my dreams here (usually I forget them right away), but this one was amusing. Rorty had just given his last lecture before retiring, and he asks Habermas if he'll be teaching during the summer. No, Habermas replies, he has to work on his autobiography: "I'm only up to my Cosmic Hippie period." I don't know about you, but I find it hard to imagine ol' Jürgen grooving to Brüder des Schattens, Söhne des Lichts. On the other hand, the cover of my edition (1979) of Communication and the Evolution of Society is pretty psychedelic...

Thursday, December 13, 2007

One man's twaddle

National Review commenter George Leef spends 90% of his posts on the high cost of college, but the other 10% are on ... why it's not worth it anyway. Here he is the other day on academic "scholarship":
Professors at most colleges and universities these days have to publish their research in order to win tenure and impress fellow academics who might some day offer them a better job. Often that research is of extremely dubious value and only gets published by university presses. Mal Kline of Accuracy in Academia writes here about some examples.
I didn't include the link to AiA (use your imagination, or click through if you must). I draw your attention to the phrase I have bolded. You can tell, you see, that Professor Davidson's "research" is of dubious value because he could only get it published by Oxford U. Press, where it can hardly be expected to sell very many copies, instead of a real publisher – like, say, Regnery. They probably didn't even get him on Larry King, to promote that – what was it again? – "radical interpretation" business. Nobody's buying your radicalism, Dr. Smartypants!

What a tool. Oh, but we're not done:
In his recent book Education's End, Professor Anthony Kronman of Yale laments the damage that has been done by the "research ideal" that has come to dominate higher education. He writes, "In the natural and social sciences, the goal of an ever-closer approximation to the truth seems entirely reasonable....In the humanities, this is less clear." Kronman is too polite to blurt out the truth — a lot of academic research is just twaddle.
Damn humanities – they'll ruin it for everybody. (Yet I agree that some people would be better off not publishing ... )

Wednesday, December 12, 2007

'Tis the season, again

My two fave ambient mix purveyors have selected the best of 2007 (here and here). Thanks guys! Check 'em out!

Friday, December 07, 2007

Breaking news: Stockhausen RIP

Obituary here. Not to get into the metaphysics of naming or anything, but I happen to know that Karlheinz Stockhausen had already died some years ago, after being hit by a car. I wasn't present on that occasion, so I'm not sure whether he was wearing his leash at the time. He had spent most of his life as an indoor cat, so he probably never really developed the sort of road-awareness that outdoor cats have.

Anyway, this other Karlheinz Stockhausen will live forever, thanks to Gesang der Jünglinge, Stimmung (get the Hyperion recording, not the original DG version), Hymnen, Kontakte, and a few other things. Or perhaps he lives in his massive influence on others. The obituary names Miles Davis, Frank Zappa, and Björk; but I'm surprised not to see any German names there, like Klaus Schulze, Edgar Froese, Holger Czukay, Irmin Schmidt, Conny Plank, Peter Michael Hamel, or, well, anybody. (Or maybe I'm not surprised.)

Thursday, December 06, 2007

Monk in the land of churches

At Brain Scam, H. A. Monk takes on Paul Churchland. (For some reason, as I write this, anyway, the post is dated 11/14, but it appeared on my reader on 12/6; plus the post mentions its being two months since the last one, which was on 10/6, so it looks like Blogger is on drugs or something.) He deals effectively with Churchland's latest argument for the identity of color qualia (the "subjective aspects" of color experience) with something called "cone cell coding triplets". I don't know whether the latter is a brain structure or some abstraction about the behavior of some cells in the visual cortex, but it hardly matters as far as I can tell.

This is because the problem is with the "identity theory" itself, not what exactly the "material" relatum amounts to, with which the "qualia" are supposed to be identical. As Monk rightly sums up:
One might say: qualia are a suspect kind of entity anyway, so why should I need a theory to account for them? Fine, but what you can't say is: these qualia you talk about, they just are these coding vectors, and then act like you've explained qualia [...] Similarly [i.e. to a deleted example, which doesn't seem quite right], one can say: I can explain everything there is to explain about sensation without reference to "qualia", so why should I be obliged to give you a separate explanation of them? But that is not what is being offered. Rather, we are told, color qualia exist; they are cone cell coding vectors.
Okay, but I think Monk actually lets Churchland off the hook. I would rather he had stayed with the promising line that explaining sensory experience isn't a matter of either

a) a complete neurophysical explanation, such as the materialist gives us; or

b) (a), corresponding to the "objective" part of the phenomenon in question, plus an explanation of "qualia," or the "subjective" part, where this latter part may simply be ineffable, but in any case necessary, such that without it we have an "explanatory gap".

But instead, he goes into Ned Block mode, telling us essentially that Churchland's proposal leaves something out.
When is Churchland going to wake up and smell the coffee? I'm not sure, but I don't think we should test it by asking him whether he's awake or not; better check his brain scan and let him know. Then do an EEG and see if he's smelling the coffee. With sufficient training he could be taught to look at the EEG and say, "Why, I was smelling coffee!" (This is the flip side of Churchland's utopia, in which we are all so well-informed about cognitive facts that introspection itself becomes a recognition of coding vectors and the like.) Now for the tricky part: turn off the switch in his brain that produces the coffee-smelling qual, and tell him that every morning, rather than having that phenomenological version of the sensation, he will recognize the coffee smell intellectually and be shown a copy of his EEG. And similarly, one by one, for all his other qualia.
This is the same thing everybody says about the Churchlands. (Not the same, but it reminds me of the joke about the behaviorists in the bedroom: "It was good for you; was it good for me too?") It wasn't to the point then, and it's not to the point now. It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over here in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.

But rejecting this suspect idea is not at all to return to saying, with the materialist, that (what I called) "registering" the smell and "subjectively experiencing" it are identical, such that we only need the former (so construed). Sensory experience (like cognition and every other "mental" phenomenon) has multiple aspects, of which the "purely" first-person and "purely" third-person aspects are merely the ones most ripe for illegitimate philosophical reification. Better far to tell a story such that these are more easily seen as useful abstractions from a more unitary phenomenon. (On the other hand, go too far in this direction, and you get a big monistic mess, in which subject and object disappear entirely. Opinions differ about Dewey, but Experience and Nature at least threatens to have this result. Subject talk and object talk are indispensable, and we need not render them inaccessible simply in order to block their dualistic reification. But that's another story for another day.)

Monk continues:
Don't say: well, he doesn't deny these qualia exist, after all; he just thinks they are identical to blah-blah-blah... If he thinks they are identical to blah-blah-blah then he should not object in the least if we can produce blah-blah-blah without those illusory folk-psychological phenomena we think are the essence of the matter.
Interestingly, I was at a talk one time where Churchland said just that very thing ("I love qualia!") to Block's predictable objection. (He and Block went back and forth a few times, neither understanding the other, until everybody else started fidgeting.) I'm not defending his answer, except to the extent that I agree that Block's objection is not the right response. Churchland will just say that you can't produce one without the other, because (as I also agree) there's no distinct "experiencer" in the brain; and we shouldn't let him get away with that. After all, we're after the conceptual issue, not the empirical one; and all the proposed experiment would get Churchland to do (if it "worked" at all) is to abandon his empirical proposal that that particular neural candidate was the correct one. All you will have shown is that that neural candidate wasn't sufficient for smelling, when it's the very idea of "identifying" subjective and objective aspects of experience which is incoherent.

Look also at how easy it is, even while saying something basically right, to slip into misleading ways of talking. Monk is rejecting the idea that Churchland's claimed ability to use his neural theory of sensory experience to make "phenomenological prediction from neurological facts" provides any support for it (i.e. for the idea that "the qualia-coding vector relationship is not a mere correlation but an actual identity"). We can do that already, without needing to posit a tendentious "identity" to account for it. Here's Monk's example:
We know that the lens of the eye delivers an inverted image, which is subsequently righted by the brain. This suggests that our brains, without our conscious effort, favor a perspective that places our heads above our feet. (It is also possible that it is simply hard-wired to invert the image 180 degrees, but for various reasons that theory does not hold water.) Prediction: make someone wear inverting glasses, and they will see [an] upside down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!
Again, I agree with the general point: we don't need any "identity" here. But the wording seems to leave some of the generating assumptions in place. The lens of the eye does indeed "deliver an inverted image" to the retina (we can even see it, back there). But why say that that image "is subsequently righted by the brain"? Does it need to be "righted" ... in order for us to see it? Again we have an incipient Cartesian Theater. Surely what we see is not the image, but the objects in front of us. Inverting glasses make it seem as if everything is upside down; but after a while we get better at (re-)coordinating our various sensory inputs (primarily vision, touch, and proprioception), and that impression fades (not that one image is replaced by another). There: I told the story "without giving a separate explanation" of the visual anomaly; isn't that what we were supposed to do, rather than demanding a distinct something lacking from the materialist account? It's not a detailed story, as the solely neural one would be; but so what? We can give details on request, depending on the point of the question, and maybe this will require one sort of abstraction from our unified picture, and maybe it'll require a different one. That doesn't mean they're separable (as the qualophile demands).

Churchland also claims, Monk tells us, that "trained musicians 'hear' a piece differently than average audiences." I don't see how this was supposed to help Churchland's case, but Monk objects to the very idea:
That is also a predictable phenomenological fact, but it involves a change in the mental software, through accustomization and training, and does not obviously involve any sensual change. To see a new color or to have fewer distinct sounds reach the brain from the cochlea are sensual changes; to hear more deeply those sounds that do reach the ear, to organize them more efficiently and recognize more relationships between [them] is not a sensual change but an intellectual one that we might metaphorically characterize as "hearing more than others". In fact musicians hear the same thing others hear but understand what they hear in a more lucid way. The sensual phenomena I have mentioned are actual changes in what reaches the brain for processing or in processing at a subliminal level, and do not depend on how we train ourselves to organize the information we receive.
Again, I don't see why we need a dualism between "sensual changes" (i.e. in the sound that reaches the ear) and "hearing [these sounds] more deeply". (Isn't that the fallacy behind the philosophical chestnut "if a tree falls and no-one hears it, does it make any sound?"). I don't see any reason that we are required to say that "musicians hear the same thing others hear," simply because there's a sense (the "objective" one?) in which it's trivially true. As an English speaker, I find it perfectly straightforward (i.e. not necessarily metaphorical) to say that musicians "hear more than others." Nor do I feel obliged to characterize the difference as "intellectual" rather than "sensual," even if the latter sort of change is due to one of the former sort.

So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually distinct thing (so the issue is not substance dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we should say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.

Surely there's a way to make this point in the Wittgensteinian language of aspect-seeing, but I haven't got it just right yet. How about this: that I see the picture-duck only in seeing the drawing – that the former doesn't ontologically "transcend" the latter, if you like – doesn't mean I have to say they're identical (as the materialist-analogue would have it). If I do that, then I have to tell the same story about the picture-rabbit. But the picture-duck isn't "identical" with the picture-rabbit, which it would have to be if both were identical with the drawing.

But now the solution to this is not to say instead that the picture-duck is different (i.e. a distinct thing) from the drawing, while yet being careful to say (now as the Block-analogue would have it) that the former doesn't after all "transcend" the latter in a metaphysically unpleasant way. In a sense it was right to say that the picture-duck "is" the drawing; the problem was with the nature of that "is". (It depends on what the meaning of "is" is (heh heh).) It's not the "is" of "objective" metaphysical identity; it's the "is" of aspect recognition ("that drawing? It's a duck").

Try it with a different "is" still. The drawing is a picture-duck. Now we have the "is" of predication. It's also a picture-rabbit; but in each case we have the same drawing. That sounds okay at first; but it leaves us with a scheme-content dualism. The experience of aspect-dawning is one of seeing a different picture, not of seeing the same picture differently. Seeing the "same drawing" is too far in one direction, while seeing distinct entities is too far in the other. Or, to sound a Wittgensteinian note from another context: the difference between the picture-duck and picture-rabbit is "not a something, but not a nothing either." (I blush to admit that I can't remember exactly where this line occurs. My defense is that it applies to so much of what he says that it might occur anywhere. Anyone?)

Yet Wittgenstein himself sometimes seems dogmatically to close off what might, if properly conceived, be possible empirical/scientific investigations. This offends my pragmatist sensibilities (Peirce: thou shalt not put roadblocks on the path of inquiry). Here's an example from PI p. 211e:
The likeness makes a striking impression on me; then the impression fades.
It only struck me for a few minutes, and then no longer did.
What happened here?—What can I recall? My own facial expression comes to mind; I could reproduce it. If someone who knew me had seen my face he would have said "Something about his face struck you just now."—There further occurs to me what I say on such an occasion, out loud or to myself. And that is all.—And this is what being struck is? No. These are the phenomena of being struck; but they are 'what happens'.

Is being struck looking plus thinking? No. Many of our concepts cross here.
I like that; but right before this he says:
"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?"
In a way this is right too, in the way the first excerpt was right. Don't wonder about that if you thought it was going to provide the answer to our conceptual problem. But surely there is something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as simply dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place. (Wittgenstein credits this insight to Hertz, but we will leave that story for someone more qualified than I to tell.)

Wednesday, December 05, 2007

No fool of a Took

You may have heard about Robert Pippin's recent exchange with McDowell in the European Journal of Philosophy. Well, now Professor P. has put the relevant PDFs on his website, including the Postscript to "Leaving Nature Behind" (his contribution to Reading McDowell), which is his response to McD's response in that book. So it goes:
1. P: "Leaving Nature Behind" in Smith 2002
2. M: "Responses" in Smith 2002
3. P: "Postscript"
4. M: "On Pippin's Postscript"
5. P: "McDowell's Germans"
6. M: "Oh yeah? Sez you! (Pbbbbbt!)"
Okay, I made that last one up. Some heady stuff there! You may want to skim the B Deduction first. (There's an oxymoron for you: "skim the B Deduction".)

Here's a taste from #4, where McDowell lays it on the line:
The result of [what he's just been saying] is no longer Kantian in any but the thinnest sense. But that is no threat to anything I think. My proposal — whose shape I took from Pippin — was that we can understand at least some aspects of Hegelian thinking in terms of a radicalization of Kant. The radicalization need not be accessible to someone who would still be recognizably Kant. It is enough if there is a way to arrive at a plausibly Hegelian stance by reflecting on the upshot of the Deduction. It is no problem for this that, as I am suggesting, this reflection undermines the very need for a Transcendental Deduction — provided such a result emerges intelligibly from considering what is promising and what is unsatisfactory in Kant’s effort.

Monday, December 03, 2007

Dear Santa (I been good)

Brian Leiter announces with pride that his co-edited volume The Oxford Handbook of Continental Philosophy is now available. Based on the stellar list of contributors, I have to say it looks terrific. Of course, "available" is a relative term. Amazon has it listed for $153.

[Update (12/26)]: Rats.

It does not! (okay, some does)

The Philosophers' Carnival seems now to be biweekly; here's the latest edition, at Philosophy Sucks!

Saturday, December 01, 2007

D'Souza vs. Dennett (preview)

I write this before hearing anything about the 11/30/07 debate between these two, but even so let me say a few things (and perhaps get an idea beforehand of how it will go). Besides, in my recent post about D'Souza and his, um, encounter with Kant, I didn't get to Dennett's response. I'll assume familiarity here with what D'Souza says (or see here for a taste).

I'm actually a big Dennett fan. His naturalism bugs me sometimes, but he's been a tiger in the fight against the Cartesian conception of the mind. (I know that sounds funny – his naturalism is central to his thought – but you'd be surprised how often it doesn't come up.) In this context, though, he does have a bit of a tin ear, and I'm not at all sure he'll do well in a debate in which the stated topic is "Is God a Human Invention?"

First of all, he just asks for it with his ridiculous self-labelling as a "bright." He tells us we need a word for "a person with a naturalist as opposed to a supernaturalist world view." But we've got one already: it's "naturalist." Yes, there's another sense of the word – that in which John Muir is a "naturalist" – but that sense doesn't entail disbelief in "supernatural" entities, as Dennett wants. So how about "philosophical naturalist"? Anything but "bright," which is monumentally stupid, and does indeed sound, no matter how many times Dennett denies the implication, like "brights" are smarter than you.

Also, in responding to D'Souza (about Kant), Dennett strikes me as remarkably unable, given his committed anti-Cartesianism, to see Kant as an ally rather than an opponent. It's disappointing to see him let D'Souza bait him into dismissing Kant as a deluded mystic desperately trying to prove the existence of another world beyond the veil. I guess that's easy for me to say, having grown up with the "one-world" interpretation of Kant advocated by Henry Allison and Graham Bird, among others (not that that's the only issue by any means; but it sure helps). Still, I would have thought that the scorn Kant pours on traditional metaphysics in the Critique would be hard to miss. In reply to D'Souza's Kant, though, Dennett is all snark:
If Dinesh D'Souza knew just a little bit more philosophy, he would realize how silly he appears when he accuses me of committing what he calls "the Fallacy of the Enlightenment" and challenges me to refute Kant's doctrine of the thing-in-itself. I don't need to refute this; it has been lambasted so often and so well by other philosophers that even self-styled Kantians typically find one way or another of excusing themselves from defending it.
Ah yes, the famous "doctrine of the thing-in-itself." If you want to make Kant look ridiculous, it is indeed helpful to hang around his neck the "transcendental illusion" he explicitly rejects, together with the insinuation that "self-styled Kantians" (like Allison, presumably) have to resort to sophistry in order to wiggle out of their manifest obligation to attribute sheer virtually unadulterated Platonism to him.

But of course D'Souza has no intention of wiggling out of it. As we saw, he embraces it:
So powerful is Kant's argument here that his critics have been able to answer him only with derision. When I challenged Daniel Dennett to debunk Kant's argument, he posted an angry response on his website in which he said several people had already refuted Kant. But he didn't provide any refutations, and he didn't name any names. Basically Dennett was relying on the argumentum ad ignorantium – the argument that relies on the ignorance of the audience. In fact, there are no such refutations.
Now that's chutzpah. Committing the argumentum ad ignorantiam (excuse me, "ignorantium") in the same breath as attributing it to your opponent: priceless. But of course in the context D'Souza has provided, the only thing that would count as a "refutation" is an argument showing not (say) that "noumenalism" (or transcendental realism!) is incoherent, but that the "Enlightenment Fallacy" – that we can know everything – is true. And of course even Hegel (who famously argued against Kant for the possibility of "Absolute Knowledge") didn't believe that. So in that sense D'Souza gets to be right. No such "refutations" exist. But so what.

Let's move on. In his 11/30 post, right before the debate, D'Souza served up some trash-talk for the occasion. He gleefully quotes the late Stephen Jay Gould (who was, as you know, a Prominent Biologist) referring to Dennett as a "Darwinian fundamentalist":
[Gould] suggested that just as religious fundamentalists read Scripture in a literal and pig-headed way, and unimaginatively apply biblical passages to everything, so Dennett has a primitive understanding of evolution and, with the enthusiasm of the fire-breathing acolyte, tries to apply Darwinism to virtually every human social, cultural and religious practice, with disastrous and even comical results.
There is indeed a controversy here, and Dennett is indeed more likely to err on the ambitious side w/r/t evolutionary explanation. But D'Souza is being ridiculous. All he does is quote dismissive rhetoric from Gould (and H. Allen Orr) to make Dennett look bad. But as that New York Review exchange with Gould makes embarrassingly clear, Gould never understood Dennett's response, or at least didn't address it, and in fact resorted to personal attacks and name-calling in a most unprofessional manner. But put that aside (after all, that doesn't make Dennett right). Dollars to doughnuts D'Souza doesn't understand it either, and is just looking for another way to hurl abuse.

Let me try to clear it up a bit. No doubt I too will oversimplify; but we can take a few steps in. The issue concerns Dennett's "adaptationism" – his tendency to try to explain a biological phenomenon in terms of its evolutionary advantages. Gould was right to point out that we cannot simply assume that we can do this for every biological phenomenon. In his famous example, which I will not explain, some things are "spandrels": they arise because other things are evolutionarily advantageous and bring the first thing along with them. The "adaptationist" makes it sound like some evolutionary developments are inevitable – as if nature says, hey, wouldn't feathers be a great idea here! Let's evolve some feathers! (Or intelligence, or – more relevant to Gould's rejection of sociobiology – incest taboos, or matriarchy, or whatever.) Such "explanations" can (and in the case of sociobiology, often did) end up sounding like a bunch of ad hoc Just So Stories. In response, Gould emphasizes the radical contingency of the evolutionary path: re-run the tape 20 times and get 20 different results (think "A Sound of Thunder" here).

Fair enough. But Dennett never denied these things. (Gould's original attack was on earlier "adaptationists"; he turned his guns on Dennett later.) Somehow the debate got turned from an interesting one (about which particular sorts of appeals one can make to evolutionary advantage, and which particular such explanations work and which do not) into one about whether one could ever appeal to evolutionary advantage, or whether there could ever be what Dennett calls a "forced move in design space." But surely, even in the case of evolutionary psychology (where the danger of Just So Stories is very real) no such slam dunk is possible. I, at least, am willing to let the Ev Psychers make their case; as the name change indicates, they seem to have learned at least a bit of humility from the sociobiology debacle. Maybe this or that isn't so just-so a story after all.

But that's not the point. Let's abandon Ev Psych entirely, for the sake of argument, and take "adaptationism" in biology alone. Jerry Fodor recently claimed that the very idea of nature "selecting for" a particular trait is incoherent, because nature doesn't have desires that things be one way or another. We can't say that the polar bear's white fur was "selected for" – that it arose because of its evolutionary advantage, as "adaptationists" claim – because given the polar bear's white surroundings, nature can't distinguish between white fur and fur that matches the environment, so it can't "select" for either. Like a lot of what Fodor says, that sounds crazy to me. For one thing, not only would this render explanations in terms of evolutionary advantage necessarily insufficient, it seems to eviscerate the notion entirely, which is nuts. (Interestingly, for what it's worth, it's also reminiscent of Quine's argument for linguistic holism, which Fodor famously rejects.) For more on Fodor, see here and here.

As is their wont, Fodor and Dennett trade incredulous accusations of the other's not getting it at all (links at the previous link, at the bottom of the page). Granted, Dennett is on firmer ground (in this respect, i.e. that of accusing Fodor of not getting it at all) in the philosophy of mind than in biology; so maybe they're both not getting it. (Or I'm not, or nobody is.) But of course D'Souza is keen to set Dennett straight there too. Back in June he had a four-paragraph zinger which was very similar: he found some other authorities willing to dump on Dennett as a dogmatic ignoramus. (Actually, looking again, I see D'Souza brings up Dennett himself; but so do the authors in question, as I happen to know, so, no foul there.) Briefly, the idea is that Dennett is "committing a conceptual mistake" (as is Francis Crick, the original target) in ascribing intentional properties (believing, etc.) to the brain. According to D'Souza, "[b]rains aren't even conscious; the humans who have brains are conscious." How about that: that's my view as well. (I even go farther: in the sense with which we are concerned (though not in another), my brain isn't even alive.)

But again Dennett is perfectly well aware of the danger here, and is guilty at most of some loose talk and/or as yet uncashed promissory notes. Bennett and Hacker (for these are the authorities in question) claim that all such talk is necessarily loose and all such promissory notes knowable a priori to be uncashable. (I know Anton and N.N. disagree with me here, but I think Bennett and Hacker have been misled by Dennett's triumphally naturalist rhetoric into misconstruing his project, which seems to me to be construable (perhaps, I grant, against Dennett himself, at least to some degree) as perfectly acceptable, even (or even especially, qua anti-Cartesian) on Wittgensteinian grounds. But Hacker's Wittgenstein is not my own, as far as I can tell. I owe more explanation here, but this is not the place. See N.N.'s link for an exchange between Dennett and Bennett/Hacker.)

In any case, if the danger is one of misleading locutions, that charge cuts both ways. Bennett and Hacker are careful to deny dualism, but of course for D'Souza misleading locutions are mother's milk; he continues [I bold for emphasis]:
Crick and Dennett are erroneously ascribing qualities to brains that are actually possessed only by people. True, our thoughts occur because of the brain, and we use our brains to think just as we use our hands and rackets to play tennis. How foolish it would be, though, to say that "my arms are playing tennis," or even more absurdly, "My racket is playing tennis." In reality, I am the one who is playing, and arms and rackets are what I play the game with.

Crick and Dennett are guilty of a fallacy that has become quite common among cognitive scientists. This is the Pathetic Fallacy, the fallacy of giving human attributes to nonhuman objects. This practice is quite harmless if we do it in a whimsical, metaphorical way. I might write that "the stem of the oak raised its arms to the sun, searching for its warm embrace." The problem only arises if I actually start to believe that oak branches have intentions. Brains are very useful objects, but they aren't conscious and they don't know how to feel or think.
It's hard to tell, but I think he really thinks he has established dualism as true (I am not identical with my brain = My brain and I are distinct in the dualist sense). Wow. There's another related DD column I want to discuss (an amazing howler, which you may already have seen), but let's leave it for another time. Now let's hear about how the debate went!

Tuesday, November 27, 2007

Mistakes were made

My mom likes the Corrections section of the New York Times. It's interesting to see what they got wrong – sometimes it's a reasonable mistake; sometimes you can tell what happened (e.g., garbled phonemes from the informant), and sometimes they just don't check their "facts," or even, apparently, think about what they're saying. Everybody's got their favorites, apocryphal or not. I like the one (the former, I think, or at least not at the Times) that read:
[band name] compose their music according to Christian principles. They are not, as the article stated, "unrepentant headbangers."
So, repentant headbangers, then? Good name for a band, anyway.

On Sunday they had another good one. I didn't see it, but Powerline reports for us (HT: Roger Kimball):
A headline last Sunday about a Muslim man and an Orthodox Jewish woman who are partners in two Dunkin’ Donuts stores described their religions incorrectly. The two faiths worship the same God — not different ones.
Glad we got that cleared up. Incidentally, both blog citations pour scorn on the Times for presuming to dictate theology to us, but as far as I'm concerned the issue is semantic; the theological issue is orthogonal. We've gone over this before – and of course when I say an issue is "semantic" I mean not, as most people do, that it's not important, but the very opposite. (I'm weird that way.)

Tuesday, November 20, 2007

D'Souza vs. Dawkins

I hereby confess to finding fascinating the "debate" between those who believe that religious people are ipso facto irrational (i.e. that they are simply subject to "wishful thinking" or "blind faith"), and their pious opponents, who believe that while they themselves are composed of rubber, atheists, in contrast, are composed of glue, with predictable results where accusations of irrationality are concerned. Even so, the prospect of hearing a debate on this issue between Richard Dawkins and Dinesh D'Souza instills in me a profound desire to punch myself repeatedly in the face. That link, I should point out, takes you not to such a debate, but instead to D'Souza's challenge ("It's time to find out whose position is truly based on reason and evidence and skepticism and science").

I only mention it now because in recent bloviations derived from (and, not coincidentally, flogging) his new opus (What's So Great About Christianity), D'Souza invokes none other than Immanuel Kant in support of his position. Naturally our pedantic pundit ties himself into knots, but they are interesting knots nonetheless, clinically speaking, so I thought we might take a look at them.

According to D'Souza, dogmatic atheists such as Dawkins and Daniel Dennett are in the grip of a fatal fallacy, one with Capital Letters:
The Fallacy of the Enlightenment is the glib assumption that human beings can continually find out more and more until eventually there is nothing more to discover. The Enlightenment Fallacy holds that human reason and science can, in principle, unmask the whole of reality. In his Critique of Pure Reason, Kant showed that this premise is false. In fact, he argued, that human knowledge is constrained not merely by how much reality is out there but also by the limited sensory apparatus of perception we bring to that reality.
So the idea is that "atheists, agnostics and other self-styled rationalists" literally believe that they know everything, or they will once the upstate returns come in, and that what has been or will inevitably be discovered leaves no room for rational people to indulge themselves in the narcissistic fantasy of religious faith. Such dogmatic certainty is clearly fallacious; and, in fact, not even Richard "God Delusion" Dawkins claims otherwise. We hardly need the great Immanuel Kant to tell us this.

What is actually true is this: a) what science has already discovered, while indeed incomplete, leaves no room for rational people to indulge themselves in certain narcissistic fantasies often associated with religious faith; and b) far from being constituted by such inanities, religious faith is better off without them. Of course, to say this is not to end the debate (Dawkins would counter with an invocation of the No True Scotsman fallacy), but to begin it (i.e., a real one). My point in mentioning this here is instead to contrast it with D'Souza's response. As required by his polemical strategy, D'Souza has his naturalist flatly deny the existence of anything beyond our ken, while he himself is open-minded enough to allow the possibility. Typical debating tactic, but again, not exactly, well, enlightening (plus he's got stones calling other people glib).

The natural opponent of dogmatism is skepticism. (You're so sure that _______? Well, think again.) It is for this purpose – to counter fallacious Enlightenment dogmatism – that D'Souza turns to Kant. At first it's not clear why. If we want a skeptic to counter dogmatic overreaching, why Kant? Why not, say, Hume, or even Descartes? The idea that we can know reality whole is Hume's very target; indeed, it is rigorously empiricistic skepticism rather than naive dogmatism that most readily characterizes the scientific enterprise. All that reason demands, say Enlightenment types, is that evidence be submitted for claims about reality; and as Carl Sagan liked to say, "extraordinary claims require extraordinary evidence." This response to religious claims, rather than dogmatically affirming their negations ("we know there isn't anything non-material"), simply denies that they have been proven ("we don't know that there is"). This allows the religious person to parry the dogmatic-atheist thrust, even if to make a thrust of his own (whether of the form "here's your evidence" or "here's why I don't need any") requires some further work. Still, if our concern is to get scientists to acknowledge limits on their knowledge, Hume would seem to be the natural place to go.

Of course, Hume himself was a notorious atheist, or at least a freethinker of some sort; plus there's that stuff about "committing [metaphysics] to the flames, for it is nothing but sophistry and illusion." This renders him somewhat unappealing as an authority for the "greatness" of Christianity (unless you're playing the "even X admits ... " card). On the other hand, Descartes's skepticism is too sophomorically goofy to pass the giggle test in a debate involving non-philosophers (what if we were, like, in the Matrix, man?). Kant, however, is the very model of Protestant probity, and a recognized philosophical titan, with a firmly deontological moral theory to boot. (Not only that, as we'll see, his complex, poorly explicated views make it fairly easy to exploit his writings for rhetorical purposes. But let's not get ahead of ourselves.) Still, for deflecting dogmatic atheism, and thereby making room for the possibility of theism, it seems that some sort of skepticism would be required.

Now of course the problem with skepticism is our unshakeable conviction that we do in fact have real, objective knowledge of the external world. Naturally Kant believes this too, so it's not surprising to hear him say just that (this is the force of his "empirical realism"). D'Souza uses this to concede knowledge of a sort to science, thus adroitly cutting off a possible counterattack: as the moderate voice of sweet reason, he's a modern scientific fellow, neither a loony pre-modern fundamentalist nor a daffy postmodern relativist. Remember, all he needs for his burden-of-proof move is that such knowledge be conceded to be essentially incomplete, not that it be false or uncertain. But again, we hardly need Kant for this; this is standard-issue empirical skepticism, straight out of Hume (or common sense; after all, we'll never be omniscient). So why is D'Souza bringing Kant into it? Watch and learn.
It is essential to recognize that Kant isn't diminishing the importance of experience or what he called the phenomenal world. That world is very important, because it is the only one our senses and reason have access to. It is entirely rational for us to believe in this phenomenal world and to use science and reason to discover its operating principles. But Kant contended that science and reason apply to the world of phenomena, of things as they are experienced by us. Science and reason cannot penetrate what Kant termed the noumena: things as they are in themselves.
Perhaps for lack of an editor, but in any case obscurely, Kant does indeed speak in a number of places of phenomena and noumena as distinct ontological realms. As the first move in his rhetorical conjuring trick, D'Souza shamelessly palms this Kantian distinction off as the Platonic one between a life of mere deception in the Cave and true knowledge of a timeless reality which transcends and underlies human experience. Unlike Kant, Plato believed that the latter was to some degree possible for us through the philosophical use of reason. Kant is concerned instead to establish the limitations on reason; for him, belief in a Platonic realm is the result of "transcendental illusion."

Of course, D'Souza too is concerned with the limits of reason; but his limits are less Kantian than they are Cartesian. Unlike Plato, Kant and Descartes (in his skeptical mode, at least) both deny that reason can provide any knowledge of a realm "beyond appearances." The Cartesian skeptic limits our knowledge to sensory appearances, leaving us in doubt about what (if anything) lies beyond, while Kant's talk of "noumena" seemingly allows a blank affirmation of the existence of that mysterious realm. Thus the appeal to Kant instead, which sets up the theistic punch line to come. For all that's been said so far, though, the only consequence of our "limited sensory apparatus" is the skeptical one that we cannot simply rule out dogmatically the existence of a "supersensible" reality on the basis of our empirical knowledge to date. After all, that point constitutes D'Souza's criticism of "self-satisfied atheism" – that it is precisely by so doing that dogmatic materialists rule out the possibility of rational (i.e., non-irrational) religious faith. His conclusion, however, is significantly stronger.
[T]he new atheists and self-styled "brights" can do their strutting, but Kant has exposed their ignorant boast that atheism operates on a higher intellectual plane than theism. Rather, as Kant showed, reason must know its limits in order to be truly reasonable. The atheist foolishly presumes that reason is in principle capable of figuring out all that there is, while the theist at least knows that there is a reality greater than, and beyond, that which our senses and our minds can ever apprehend.
So dogmatic claims of (potentially) universal knowledge constitute an "ignorant boast," and skepticism rightly cuts them down to size by pointing to necessary limits on our knowledge. Fine; but in D'Souza's hands this hard-won knowledge that doxastic modesty is warranted – that our knowledge is limited to (as Kant puts it) the objects of possible experience – is magically transmuted into positive warrant for supposed knowledge of a distinct transcendent realm. Once this slide is made, Kant's criticism of metaphysics licenses a naked affirmation of metaphysical doctrines of the most unashamedly "pre-critical" kind:
Kant's philosophical vision is entirely congruent with the teachings of Hinduism, Buddhism, Islam, Judaism and Christianity. It is a shared doctrine of those religions that the empirical world we humans inhabit is not the only world there is. Ours is a world of appearances only in which we see things in a limited and distorted way, "through a glass darkly," as the apostle Paul writes in his first letter to the Corinthians 13:12. Ours is a transient world that is dependent on a higher, timeless reality. That reality is of a completely different order from anything we know, it constitutes the only permanent reality there is, and it sustains our world and presents it to our senses. Christianity teaches that while reason can point to the existence of this higher domain, this is where reason stops: it cannot on its own investigate or comprehend that domain.
Now we have our answer. We needed the skeptical moment ('scuse my Hegelian) in Kant to combat dogmatic scientism, but that only gets us a wash: neither side can prove its claims. To complete the slide to a positive argument for supernaturalism, we need Kant's positive reaction to Humean skepticism. However (as Kant shows!), a reaction to skepticism need not be a recoil into dogmatism. Kant indeed restores empirical knowledge (even of scientific laws, which it seemed Humean inductive skepticism had threatened) in the face of the acknowledged limitations of our senses, the significance of which Kant believes both skeptics and traditional metaphysicians misconstrue. But D'Souza goes farther: the Cartesian epistemic limitations of our senses (now curiously identified with "reason") – taken together with Kant's supposed demonstration of the existence of a world beyond them – leave the field open for knowledge to be provided by a distinct faculty not subject to the relevant limitations, one which, in providing knowledge of this transcendent realm, ipso facto shows things as they really are ("face to face," as Paul puts it), in a way that "merely" empirical science cannot. (I qualify this accusation a bit below.)

[UPDATE: I don't mean to imply that D'Souza gets Paul right here, or that Paul is indeed endorsing the Platonic picture. See the comments below for more about Paul.]

In other words, D'Souza's alleged "congruence" of Kant's transcendental idealism with traditional religious doctrine (so construed) is not in the Critique at all (Schopenhauer, maybe). In D'Souza's version, that "congruence" is instead a doctrine of full-on Platonist metaphysics with a Cartesian epistemological twist, complete with a characteristic equivocation on whether we have any real knowledge (resulting from equivocation on the nature of "appearances", to complement that above on "reason"). Even if it were Kant's view – and it's true that this was once a standard reading of Kant (one to which my undergraduate Kant teacher impatiently responded with "come on, read the book a little bit") – those who do attribute this view to Kant have near-universally taken its equivocation about knowledge to constitute what is obviously wrong with it. Here's Kant scholar Henry Allison on the matter:
The most basic and prevalent objection stemming from the standard picture is that by limiting knowledge to appearance, that is, to the subjective realm of representations, Kant effectively undermines the possibility of any genuine knowledge at all. In short, far from providing an antidote to Humean skepticism, as was his intent, Kant is seen as a Cartesian skeptic malgré lui. Some version of this line of objection is advanced by virtually every proponent of the standard picture, including Strawson. [For example,] Prichard construes Kant's distinction between appearances and things in themselves in terms of the classic example of perceptual illusion: the straight stick that appears bent to an observer when it is immersed in water. Given this analogy, he has little difficulty in reducing to absurdity Kant's doctrine that we know only appearances. His [...] main point is simply that this claim is taken to mean that we can know things only as they "are for us" or "seem to us" (in virtue of the distortion imposed by our perceptual forms), not as they "really are." Since to know something, according to Prichard, just means to know it as it really is, it follows that for Kant we cannot really know anything at all. Clearly, such a conclusion amounts to a reductio of the Kantian theory. (Kant's Transcendental Idealism: An Interpretation and Defense, 1983 ed., pp. 5-6, my emphasis)
If science discovers the truth about empirical reality, as all parties claim to admit, then we cannot take this reality qua reality and stick it back beyond the veil in the Platonic manner, let alone Cartesianize it into in-principle unknowability by "reason" (i.e. even by philosophy). Such epistemic nihilism is the very "scandal for philosophy" Kant is determined to overcome.

The essential point is that the incompleteness of our knowledge has nothing to do with its truth. Of course incompleteness is all D'Souza needs for his insipid anti-dogmatic conclusion. But, again, that wouldn't get him what he really wants, which is an a priori demonstration of the existence of a supernatural realm accessible only by "faith." I scare-quote that word because D'Souza here indulges the popular notion (shared, alas, by many of his scientific-rationalist interlocutors) of religious faith as a distinct mode of epistemic access to reality, differing from the senses only in being, well, extra-sensory, like ESP. In fact, that's exactly how he presents the issue. A tape recorder, he says, captures only sound and knows nothing of visible reality; so "[w]hat makes us think that there is no reality [which] lies beyond our perception, reality that simply cannot be apprehended by our five senses?". For D'Souza, what distinguishes the "intellectual plane" of theism from its rival can't amount only to a boring agnosticism about the unknowable; instead, once open to the possibility not of unknowable, but of extra-sensory reality, we may find ourselves presented with as yet unimagined experiences, with accompanying convictions of contact with ultimate reality. (Of course he doesn't say this, but that's the only way to make sense of his position.)

Again, the point about our cognitive limitations would be okay if all it entailed were the triviality that we cannot know that there's nothing out there that we simply can't know about; but then D'Souza spoils that point by conflating it with his Cartesian appeal to perceptual illusion, where what we're missing is not additional knowledge, but actual knowledge of the really real. This allows him to construe faith's independence of the five fallible bodily senses as showing it to be of another epistemic order entirely. The "supersensible" can thus be equated, as required, with the "supernatural," and that with the otherwise inaccessible "reality," and this, finally, with truth itself. Now of course we know that there is a truth about what we believe (that is, that our beliefs are true or false, the meaningful ones anyway), whether it is known to us or not. So what started out as a wash (science can't say whether or not there is a supernatural realm, so it's not irrational to regard the question as at least open, in spite of the methodological materialism of modern science) is now changed, in an instant as it were, to its being irrational to believe that there isn't such a thing – and thus not (at least potentially) to have faith. But of course it hardly takes religious faith to believe that our beliefs have truth values, and only a conjuring trick can make it look like it does – a conjuring trick in which Kant had no part. For Kant, no additional quasi-sensory modality can get us knowledge of things as they are in themselves, so D'Souza's Cartesian talk of tape recorders oblivious to vision is not at all to the point.

D'Souza will reply that he has been careful not to claim knowledge for faith. Kant's argument, he says, "is entirely secular: It does not employ any religious vocabulary, nor does it rely on any kind of faith. But in showing the limits of reason, Kant's philosophy 'opens the door to faith,' as the philosopher himself noted." He's been careful all right; but the question is not whether it takes faith to "open the door" to faith, but instead what faith does when it comes through that door. And of course the whole point of shoehorning Platonism into the Kantian argument is that the religious person is thereby entitled to a doctrine that "[o]urs is a transient world that is dependent on a higher, timeless reality [which] is of a completely different order from anything we know, it constitutes the only permanent reality there is, and it sustains our world and presents it to our senses." It's true that as religious doctrine goes, this is pretty abstract (if also highly speculative!); but the only way consistently to abjure knowledge of a realm "beyond reason" is to say, with the East, that "the way that can be spoken of is not the way," and respond to any attempt to say more (i.e. add any content whatsoever to our doctrine) with a smack upside the head. As he is committed to a particularly doctrine-laden version of neo-platonism, your typical Christian is in no position to do this. He "knows by faith" about all kinds of things. With respect to knowledge claims, the bumper sticker "God said it; I believe it; that settles it" is only an extreme version of religious dogmatism (that is, cognitivism).

If, as the religious person believes, we have experience of the divine, then Kant's limitation of knowledge to possible experience does not touch religious belief at all. That may seem to defend it against Kant's strictures, as required; but in fact what it means is that Kant's argument here is completely impotent for D'Souza's purposes. The Kantian connection of the intelligibility of belief to possible experience means that in the relevant sense, both religious belief (which if true amounts to knowledge) and empirical belief are on a par, as being knowledge of "appearances" rather than a "transcendent" realm. Only a further equivocation on the notion of "belief" (and the above one on "transcendent") can disguise this. Essentially, D'Souza wants to get for free – from the nature of reason itself – what only further detailed explanation of the nature of faith and knowledge (or, in Hegelian terms again, Glauben und Wissen) can accomplish. For his argument to work, faith needs both to supply knowledge unavailable otherwise, on the one hand, and on the other, not to supply knowledge at all. (Of course, Kant himself, as the author of Religion within the Boundaries of Mere Reason, has plenty more to say, but as I haven't read that book, I can't comment; but that title sure is suggestive, isn't it? Maybe someone more knowledgeable than I will help out ...)

I mentioned this earlier (plus it should be obvious by now), but it bears repeating. My intention is not to side with Dawkins here. Of course I agree with D'Souza's weaker claim, i.e., that religious faith is (religious believers are) not ipso facto irrational. I merely take issue with his crude and tendentious (and yet ingeniously slippery!) exploitation of subtle philosophical issues in order to score debate points. I see also that I have helped myself, toward the end there, to my own idiosyncratic views about belief qua doxastic commitment, among other things. Much more would have to be said, about faith and knowledge both; but such a real discussion could only begin by rejecting the very idea of this pointless debate.

Breaking news I: D'Souza will debate (not Dawkins, but) Dennett at Tufts on 11/30. Dare I hope that Kant will not be mentioned?

Breaking news II: D'Souza has just unloaded a new, well, load, about how of course we have free will ("I can knock [my] coffee mug onto the carpet if I choose") and how this proves the existence of the immaterial soul. Maybe later, eh?

Monday, November 19, 2007

Gobble gobble

A feast awaits you at the latest Philosophers' Carnival. Dig in here.

Incidentally, the link to this edition from Leiter Reports sports a disclaimer to the effect that linking to it does not mean that he thinks all the posts are good. Which ones do you think brought that on?

Saturday, November 17, 2007


I dare you

If you click here, you will be taken to a Flickr photoset. One commenter tells us that upon viewing these photos, his/her brain "alternately exploded and melted."

This can mean only one thing.

Yes, it's a (snarkily annotated) photo tour of the Creation Museum in Kentucky! With an accompanying snarky essay!

Toughened by years of exposure to the most ridiculous philosophical doctrines imaginable, my brain actually held out for some time. This photo is the one that finally compelled it to implode. (And compare this one.) Holy frijoles!

Go on. You know you want to.

[HT: Pharyngula]

Thursday, November 08, 2007

McDowell (and Cavell) on criteria and skepticism

The other day, Currence reported on Ed Witherspoon's talk entitled "Wittgensteinian Criteria and the Problem of Other Minds." In his post, he wondered
how Witherspoon's discussion of Wittgenstein on criteria and whether he was a McDowellian or a Cavellian -- or whether it makes a substantial difference if they're essentially the same -- ties in at all to the discussion of two kinds of skepticism, except that Witherspoon thought Wittgenstein took himself to be addressing both kinds of skepticism.
Here are some thoughts of mine on that subject which I hope will be of some use. (You should probably read his post first, if you haven't already.)

Let me start with the intuitive motivation for skepticism. When the evidence points to something, the natural thing to do (the everyday thing, the default thing) is to believe it ( = take it to be true). If one finds the evidence insufficient, one suspends judgment; but that's not philosophical skepticism. The philosophical skeptic is the guy who says "sure, I don't have any problem with your evidence – it looks that way to me too, so I'm not asking for more or better evidence – but still, we might be wrong; so we better not say we know." In other words, we are fallible, in that perfectly compelling appearances can cause us to believe falsely; the skeptical conclusion is that we can never know whether this is one of those times, and must therefore suspend judgment on the truth or falsity of (possibly "mere") appearances, no matter how compelling they may be.

In addressing this "scandal for philosophy," Kant diagnosed it as a variant of a much deeper problem. If subject and object are so radically distinct as to threaten the former's knowledge of the latter, as the Cartesian insists, it seems that this metaphysical gap threatens our ability not only to know, but even to think about the world. This "Kantian skepticism," as Conant terms it (as we will see if that darn book of his ever comes out), addresses the Cartesian confidence in the possibility of contentful yet radically false "appearances". Turning it around, we may say that if we are in close enough contact with the world to form contentful thoughts about how it might be, then the Cartesian epistemological scruples are pointless. You may of course bite the bullet and try to deny that your thoughts have content; but then why should I listen to your self-admittedly meaningless babble?

In the contemporary context, after the linguistic turn, we speak not simply of contentful thought but also of meaningful assertion. This is where "criteria" come in. The idea of appealing to criteria in replying to epistemological skepticism is to add to the original reason for belief (i.e. that the evidence points to the truth of P) the idea that to take P as true in such situations just is to use the term correctly. After all, generally speaking, that's how we learn the term in the first place: in the (apparent) presence of a yellow banana, nanny says "look at the nice yellow banana!", and we say "yehw bana," and so on (this is not to commit oneself to the "Augustinian" picture of the opening sections of Philosophical Investigations, but to the manifest facts which make that picture seem obligatory). Once we have learned the relevant terms, to withhold assent to "there's a yellow banana" in such situations is not to show virtuous epistemic scruple, as the skeptic has it, but instead to show vicious semantic incompetence. That's the thought anyway (and there is indeed something like this in Wittgenstein; the question is what exactly).

However, as both Cavell and McDowell point out, it would be wrong to suggest that "criteria" get you across the skeptical gap. When we answer the skeptical challenge, saying that we (do too) know that P because the criterion is satisfied, there are two ways in which the epistemic chain may yet break: either the criteria for asserting P actually apply but fail to entail P; or the criterion does indeed entail P, but we cannot know whether it actually applies here (that is, whether the "appearance" that P is veridical). We cannot have it both ways, on pain of dogmatism. Our manifest fallibility is as undeniable a fact as any.

Where does this leave us? Putting Wittgenstein himself to one side for now, let's compare Cavell and McDowell. According to Currence, Witherspoon suggested at the talk that the two views differ mainly terminologically, over the term "criterion." McDowell takes the satisfaction of criteria for X to entail the truth of X, but allows that we can take criteria to be satisfied even when they are not, while Cavell goes the other way: criteria themselves can mislead us, but we can and do know when they obtain. In any case, both reject the "criterial theorist's" view that criteria take one across the skeptical gap. So they agree; but even so, this difference reflects an important difference in emphasis and strategy, in explaining which I think we can make helpful reference to the various types of skepticism, on the one hand, and that confusing talk about types of doubt (and whether LW was a "fallibilist"), on the other. In any case that's what I'll try to do here.

Let's start with McDowell. One of McDowell's consistent concerns – becoming more explicit in his recent work on Kant and Hegel – has been to reject the Cartesian metaphysical opposition between subject and object, which is the source and stay of the corresponding epistemological skepticism. In Mind and World (MW), pressing the "Kantian skeptic" line (though not in those terms), he insists, following Wittgenstein, that the content of our concepts is not confined to the subjective side of the Cartesian gap, but instead "[does] not stop anywhere short of the fact" [PI §95]. In contemporary terms (MW p. 27):
[T]here is no ontological gap between the sort of thing one can mean, or generally the sort of thing one can think, and the sort of thing that can be the case. When one thinks truly, what one thinks is what is the case. [...] Of course thought can be distanced from the world by being false, but there is no distance from the world implicit in the very idea of thought.
The idea, then, is that when we turn to the more fundamental semantic consequences of radical subject-object dualism, the familiar fact of human fallibility no longer even seems to have the epistemological significance the skeptic claims it does. This can thus help us resist a perennial philosophical temptation, viz., to metaphysicalize that innocent epistemic gap and identify it with that same picture's supposed ontological gap between subject and object (thus reinforcing it, as well as the resulting skepticism). If we do this, when we turn back to epistemology, the Cartesian argument "effects a transition from sheer fallibility (which might be registered in a 'Pyrrhonian' scepticism) to a 'veil of ideas' scepticism" ("Criteria, Defeasibility, and Knowledge" [CDK], p. 386n).

Here's how that works. The skeptical argument gets its traction from the idea that deceptive appearance and veridical manifestation are phenomenologically indistinguishable, and thus that "one's experiential intake—what one embraces within the scope of one's consciousness—must be the same in both kinds of case" [CDK p. 386]. That "highest common factor" [HCF] between the deceptive and veridical cases is thus essentially incapable of providing evidence for one or the other. The difference between the two cases is something extra: the actual connection to the world which makes the veridical case veridical. But ex hypothesi we cannot tell which is which; the world's contribution to the veridical case, on this view, is "blankly external" to our experience.

But as we have seen, McDowell disputes the idea that the content of our experience must be construed as an HCF in this way. When I lack an actual connection to the world, it's not that my contentful perceptual experience was deceptive, but instead that I haven't had a perceptual experience at all, only an illusion of one. (I have conflated them here, but as I read it, the connection to the somewhat different linguistic version of this thought is this. My illusion of perceptual experience can result in my having a mistaken but contentful thought if on other occasions I have indeed had (veridical) perceptual experiences in the course of learning and using the concepts which make it up. The virtue of McDowell's focus on the perceptual-experience aspect instead of the contentful-concept aspect of his picture is that the latter, like Davidson's holistic view, makes it look as if it is only global skepticism which it renders ineffective: this thought can be a contentful mistake only if not all of them are. That is of course true too; but the perceptual-experience version is more powerful – and its metaphysical import more directly anti-dualistic – in that it applies even to single cases.)

McDowell's alternative conception of perceptual experience is "disjunctive":
[A]n appearance that such-and-such is the case can be either a mere appearance or the fact that such-and-such is the case making itself perceptually manifest to someone. [CDK p. 387]
I either have a perceptual connection to the world or I do not. Let's say it seems to me that I see a cat. Either I do, and a cat has manifested itself in my experience – a veridical appearance – or there is no cat to be seen. In neither case is there an epistemological or metaphysical "intermediary" (either standing between me and the cat, on the one hand, or disguising its actual absence, on the other), in the manner of the HCF.

When a fact is perceptually manifest to one, saying or believing that it is so is thus guaranteed to be true, and thus, in the skeptical context, a "criterion" of truth (though of course we may not take advantage of this fact: McDowell is careful to identify manifestation with the availability of knowledge, not its achievement, as even a veridical appearance may not lead to belief (perhaps because of skeptical scruple!)). Of course the skeptic, suspicious of this talk of epistemic "guarantees," makes the natural objection, as if McDowell were trying to sweep our fallibility under the rug in order to meet the skeptical challenge. This distinction, he says, doesn't help; the fact remains that we can't tell when we are deceived.

But McDowell's concern is not to deny epistemic fallibility, but to render it epistemologically, and thus metaphysically, uninteresting. His strategy is thus inherently anti-skeptical, affirming knowledge in the face of skeptical doubt. The skeptic can extract a concession that there remains a sense in which we cannot know when we are deceived (after all, to deny this is to deny that deception is even possible). Even where we are most sure, he says, we must leave room for doubt. But if this be doubt, it is an uninteresting – or perhaps "weak" or "imaginary" or "possible" – doubt, a mere footnote to our firm affirmation of belief. I am certain – beyond present doubt – that I am sitting before the fire at my computer; but every such belief is corrigible, in that I grant the conceptual, if not actual (or, as some pragmatists say, "serious") possibility of error. Give me more evidence, and I may come to change my view (so while I may be completely certain, I'm not "absolutely" certain – like I care about that anyway). That's all the "doubt" that human fallibility gets you – and it's not enough to underwrite a seriously skeptical position.

Currence reports that there was some discussion of this point at Witherspoon's talk:
Witherspoon did make a really odd claim, however, that everyone (Conant, Finkelstein, grad students) picked on during the discussion after the talk: Wittgenstein was a fallibilist. He said this in the context of a distinction he drew between "weak doubt" and "strong doubt". He switched between "weak doubt" and "imaginable doubt". This seemed massively confused to everyone; if being a fallibilist means no more than recognizing one has been wrong about things before, then every reasonable person is a fallibilist, and the claim is uninteresting. If, however, being a fallibilist means something substantial -- and I think it does, something along the lines of making intelligible assertions like 'I am justified in believing x, but I could be wrong' -- then Wittgenstein is about the last person I'd want to say was a fallibilist.

Conant made this point through a humorous example in which someone doubts that the room is not safe, saying, "This room is not safe!"; we ask, "Why isn't the room safe?", and they respond, "I don't know, it's just not!"; we wouldn't say they've offered a doubt at all. "Chicken Little worries" do not fall under the genus "doubt", and it is a sham to call them "weak" or "imaginary" or "possible" doubts.
My first reaction to this was that I don't see why anyone (anyone in that room, anyway) should be surprised if Wittgenstein thinks it important to stress something which "every reasonable person" believes, and which would thus constitute an "uninteresting" claim. On some views, that's all he ever does. But let's let that go.

Conant's Chicken Little example looks weird to me. (I assume that there is a typo here, and that it should be "doubts that the room is safe," not "not safe".) If someone says "This room is not safe!", he is indeed doubting that the room is safe; but he's also claiming that it isn't, which is not a skeptical thing to do. Chicken Little acted on his belief, insisting that the King be notified that the sky was falling. We have no difficulty attributing belief to him (and thus actual doubt about our safety). In fact his problem is not skepticism but gullibility, and our proper response to him is itself skeptical (i.e. garden-variety rather than philosophical): his evidence – a conk on the bean (or "I don't know, it just is") – is insufficient for such a remarkable claim. So Conant is right that this shouldn't count as "doubt" in the relevant sense; but that's only because it's a belief and thus not relevant to the issue of skeptical doubt. In particular, that's not what I take to be the point of talk of "weak" or "imaginary" or "possible" doubt.

The more relevant case, it seems, is this. I say, or assume, that the room is safe, but our friend demurs. Unlike Chicken Little, though, he is perfectly happy to remain here with us. Practically speaking, he says, the evidence is sufficient to warrant staying; yet he prefers (he says) to suspend judgment on the truth of "the room is safe," for familiar skeptical reasons: if we were deceived, and poison gas were about to seep from the ventilation, killing us all, things would look exactly as they do now. There's no evidence that this will happen, so there's no reason to leave. But we cannot claim knowledge that the room is safe.

This is what Peirce calls "paper doubt": philosophically motivated (e.g. Cartesian) demurral, conspicuously not backed up by action. You say you are in doubt; but you not only show no intention to leave the room (as you would if you were actually in doubt about your safety), but you're not even trying to allay your alleged doubt through inquiry. In claiming to doubt, you are simply registering your fallibility and drawing what seems to you to be the proper philosophical conclusion. But purported doubt (or belief) which has no connection with inquiry and deliberation is not doubt (or belief) at all. The problem is not that the "doubter" cannot support his purported doubt, but that given his actions there's no reason, beyond his mere assertion, to attribute it to him at all. So the Chicken Little case is not germane. No-one denies that C. L. actually believes the sky is falling (and thus doubts that we are safe); it's the best way to explain his actions, including his urgent desire to see the King.

"Weak doubt" is not a good word for the bare concession of fallibility, but there's nothing wrong with "possible" as opposed to "actual" doubt (again, pragmatists oppose "theoretical" to "serious" possibility of error; it is when the former aspires to the latter condition that they (we) expose it as "paper doubt"). As for what Wittgenstein thought, that's a thorny issue. His reflections on these and related matters in On Certainty are inconclusive at best. I do agree, though, that simply to state that he was a "fallibilist" (not that that simple view is Witherspoon's) is highly misleading. But I won't get into it here.

I've already gone on for ages, so let me defer extended discussion of Cavell's position. I'll just finish the comparison with McDowell re: skepticism and criteria. As we saw, Cavell too concedes the failure of criteria to bridge the skeptical gap. But his philosophical strategy is very different from McDowell's (even while sharing a great deal, in Wittgenstein and out). I quote from the back cover of Richard Eldridge's Cavell volume in the "Contemporary Philosophy in Focus" series:
At the core of [Cavell's] thought is the view that skepticism is not a theoretical position to be refuted by philosophical theory [i.e. "constructive philosophy"] but a reflection of the fundamental limits of human knowledge of the self, of others, and of the external world that must be accepted. Developing the resources of ordinary language philosophy and the discourse of thinkers as diverse as Wittgenstein, Heidegger, Thoreau, and Emerson, Cavell has explored the ineliminability of skepticism in philosophy, literature, drama, and the movies.
It's a good collection (check the link for an irresistible used price!), but unfortunately there's no article devoted to Cavell's views on skepticism in particular. Anyway, where both McDowell and Cavell grant human fallibility (where the "criterial theorists" had no room for it, as the skeptic shows), the former shrugs it off as uninteresting, on the way to reaffirming our metaphysical and epistemological connections to the world, while the latter instead allows it to bring in its train the skeptical point about fundamental limitations on our knowledge, famously granting "the truth in skepticism," i.e. that our relation to the world "may not be one of knowing – or at least what we think of as knowing." So the skeptic wins; but then the victory turns to ashes from a Cartesian point of view, as Cavell proceeds to reinterpret its significance profoundly. So while he and McDowell may not easily be seen to agree on doctrine, or even on Wittgenstein interpretation, their views may yet be seen as helpfully complementary.

Wednesday, November 07, 2007

What about "The Twit"?

Yesterday's post, about the archaic terms for various levels of mental deficiency, put me in mind of one of the funniest books I've ever seen. It's called The Book of Sequels, with excerpts from, or promotions for, everything from Brideshead Revisited Revisited to Pride and Extreme Prejudice. Here's the blurb (featuring, as do many of the books, appropriate cover art) for the latter:
The action-packed sequel to Pride and Prejudice introduced a new Bennet sister, "Dirty" Harriet, who won the hearts of Jane Austen fans by forestalling an insult from Elizabeth Bennet's old nemesis, Lady Catherine de Bourgh, with a cool "I have no objection, your ladyship, to your proceeding, since, by so doing, you shall render my afternoon quite agreeable."
Naturally, the sequel I was reminded of here was that to The Idiot. In fact, as the ad tells us, Dostoyevsky wrote no fewer than nineteen such sequels, and for a limited time only, you may subscribe to The American Sequel Society's series, starting with two representative volumes, and receiving a new volume every month. If you decide to cancel, you may return The Imbecile and keep The Fathead absolutely free! Here's the complete list:
The Idiot I: The Idiot
The Idiot II: The Imbecile
The Idiot III: The Moron
The Idiot IV: The Cretin
The Idiot V: The Lamebrain
The Idiot VI: The Dimbulb
The Idiot VII: The Nitwit
The Idiot VIII: The Fathead
The Idiot IX: The Numbskull
The Idiot X: The Dumb Bunny
The Idiot XI: The Yo-Yo
The Idiot XII: The Dolt
The Idiot XIII: The Clod
The Idiot XIV: The Chump
The Idiot XV: The Sap
The Idiot XVI: The Dunce
The Idiot XVII: The Boob
The Idiot XVIII: The Dope
The Idiot XIX: The Ninny
The Idiot XX: The Nincompoop
Interestingly, the first three titles follow the official categories from low to high, as I explained yesterday: idiot, imbecile, moron. Of course, in Dostoyevsky's time those terms, so construed, were not anachronistic.

By the way, the amendment passed, but only by 59 percent to 41. That looks crazy – who would bother to vote no? But what they think happened is that a lot of people voted the same way on all five questions (there were Republican signs saying "vote no on all questions"). That (the sign anyway) makes a bit of sense; it's easier to say (or remember) "vote no on all" than "vote no on numbers 1, 2, 3, and 5; 4 sounds okay". Of course, if you bothered to read the thing, you could see it doesn't raise your taxes.