Thursday, December 20, 2007

Another semi-cool widget

You will notice a new widget I have added in the sidebar, between the archives and the blogroll. It's kind of cool, but it'll get boring if it doesn't get updated for a long time (I'll try not to let that happen). I have started it off with a list of 15 fine films I saw this year. (I didn't limit it to 2007 releases because I didn't see, well, even 15 of them at all, let alone 15 worth mentioning.) 2004 seems to have been a reasonably good year, with four entries out of 15.

On a film's individual page at imdb, the title is always listed in the original language (which means that sometimes you can't recognize it: did you know that #57 on the imdb top 250 list, as voted by you the viewers, is Sen to Chihiro no kamikakushi?); but even on this list the title can be misleading. For instance ... oh heck, I'll just list them again here, as I don't have to link to them again, and I have a few comments anyway. This post will be far away down the page soon enough, I imagine. Click on the plus sign next to each title (in the widget) for more info! (Tell your browser to allow popups for this page in order to go to the corresponding imdb page.)

1. The Lives of Others

This is that East German movie about the Stasi. Sebastian Koch is excellent as the dissident playwright caught up in tragic circumstances, but the film really belongs to Ulrich Mühe as the Stasi agent with wavering loyalties. Koch is also good in Black Book, which just missed the cut (maybe later).

2. Kill!

Awesome samurai flick. Tatsuya Nakadai is the man. 'nuff said.

3. Downfall

Bruno Ganz was a fixture in German cinema during the 70's and 80's (e.g. Wings of Desire, Messer im Kopf). Here he plays Hitler. This film is monumental and I can't imagine what it must have been like to see it if you lived in Germany during those terrible days (kind of like The Lives of Others in that respect, but more so). I can't judge the accuracy of Ganz's Hitler impression, but he's totally convincing in any case. Fun fact: if you saw The Ninth Day, you'll remember Ulrich Matthes as the heroic priest; well, here he plays Goebbels. Yikes!

4. Together

This is a hilarious and moving film from Sweden (I laughed, I ... well, I was moved) about communal living (hey, the 70's happened in Sweden too, y'know) – portrayed with all its lumps, but lovingly all the same. Great comment at the imdb page ("the aesthetics of porridge").

5. Head-On

This starts out as what I guess is a culture-clash comedy, about Germans of Turkish descent, but then veers into deeper waters, with surprising impact (thus the title?). It takes a while to get where it's going, but it's definitely different. I don't know what else to say here so I'll leave it at that.

6. Sansho the Bailiff

I don't know how they get The Bailiff out of "Sansho dayu", but there it is. At least the DVD cover gets it right. This is a classic of Japanese cinema, newly out on DVD (thank you Criterion!!). More melodramatic than I expected (and the director himself was apparently a bit dismissive of it later on), but still jaw-dropping (e.g. the abduction scene – whew!).

7. The Taste of Tea

One of those quirky Japanese films about a quirky family. Yeah, sounds boring, but give it a chance and its weird rhythms will beguile you.

8. House of Fools

Another film that works better than its description: a lunatic asylum in Chechnya caught up in the madness of war. Look out for the bizarre, gutsy cameo by Bryan Adams the rock star.

9. F for Fake

That's what it says on the box; not sure where the listed title comes from (and the imdb page has "Vérités et mensonges"). Orson Welles giving us a lesson in art fakery. This one is totally out there.

10. 49th Parallel

Again, that's what it says on the box (in huge letters). Much better title than "The Invaders." One of the lesser-known Powell & Pressburger efforts, but if you like them (or if you are Canadian) you will definitely enjoy this unusual yet very entertaining film (w/Anton Walbrook!). Look out for Laurence Olivier in what is, well, not his best-known role. Another clutch Criterion release (so was Kill!, btw).

11. Mysterious Skin

Joseph Gordon-Levitt is riveting in this Gregg Araki film (warning: hard to watch at times). The big revelation is no surprise, but that doesn't matter as much as you'd think. Great cameo: Mary Lynn Rajskub (Chloe on 24) as ... what Chloe on 24 would be like if she were a UFO abductee instead of a CTU agent.

12. Pan's Labyrinth

Never got to it in the theater, but I finally caught up with it on DVD. Eye-popping fantasy (set during the Spanish Civil War this time) from Guillermo del Toro (director of Hellboy).

13. Prince of the City

Cop corruption drama in the Serpico vein, from ... the director of Serpico. The big breakthrough – and AFAICT the only big role – for Treat Williams, who is in virtually every scene. Cool time capsule.

14. Take My Eyes

Surprisingly involving Spanish drama about an abusive relationship.

15. The Cranes Are Flying

Just saw this one last night (thanks are once again due to Criterion for a marvelous transfer). Wow. Yet another war movie, this time from the Soviet angle. And once again, I cannot imagine what a Soviet audience must have thought, not only having lived through that time themselves, but also seeing it so freely portrayed on the screen (thanks to the "thaw" after Stalin's death). There's a scene in which two bright-eyed factory girls are about to regale the young hero's father with the customary patriotic bromides and he just cuts them off with a mocking laugh – the audience must just have gasped. Breathtaking cinematography throughout. See this one first.

All in all a good year at this end. Check back in 2009 for some 2007 releases!

Monday, December 17, 2007

A minor point about Dennett

I know I have other obligations (in the works), but here's one thing I had intended to get back to. And now my intention has been realized.

The other day, in comments, Daniel linked to some criticism Ben W. made of Dennett's explanation (in "Real Patterns") of the utility (indeed, inescapability) of the "intentional stance", using his beloved Life-plane example. I'll assume familiarity with the example (click the link for Ben's setup), but the general idea is that if we are presented with a Life-plane computer chess player, we will do much better in predicting its behavior, Dennett claims, by taking the intentional stance (i.e. taking into account what the best move is), rather than making a ridonkulous number of Life-rule calculations (i.e. predicting the entire bitmap down to the last cell for X generations).
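(If you've never actually cranked the Life rules by hand, here's a toy sketch, in Python, of what one generation of that calculation involves. It's my own illustration, not anything from Dennett's paper. Now scale it up to an astronomically large bitmap, run it for X generations, and you'll see why that stance is a non-starter here.)

from collections import Counter

# One generation of Life, computed naively: a dead cell with exactly three
# live neighbors comes alive; a live cell with two or three live neighbors
# survives; everything else dies.
def step(live_cells):
    # live_cells is a set of (x, y) coordinates
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# Sanity check with a glider: after four generations the same five cells
# reappear, shifted one square diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True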

There are two issues here, which we need to keep straight. First, how are we supposed to translate an intentional-stance prediction ("K x Q") "back into Game of Life terms," as Dennett claims is possible? Second, how do we know enough, given the impossibly vast bitmap, to make the intentional-stance prediction in the first place?

With respect to the first issue, the problem here, as I see it, is that there's an ambiguity in the idea of "translating that description back into Life terms". Of course you can't translate "he'll play K x Q" "back into" a bitmap. There are innumerable bitmaps which include (let's say) the stream of gliders that translates into "K x Q." (Consider for example that the move is so obvious that even the worst-playing chess programs will make it.) But there may be only one (suitably described) glider stream that corresponds to "K x Q". It's that description which is made in "Life terms" ("glider stream" is a Life term, right?).
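(A toy example of my own may make this vivid. Here's a crude "glider detector": its output, "a glider at such-and-such an offset," is a description in Life terms, and one and the same such description picks the same higher-level object out of any number of distinct bitmaps.)

# Crude sketch (mine, not Dennett's): recognize a glider, in one fixed phase
# and orientation, at any offset. Many different bitmaps, one Life-term
# description: "a glider at (ox, oy)".
GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def find_gliders(live_cells):
    hits = []
    for (ax, ay) in live_cells:
        ox, oy = ax - 1, ay  # try reading this cell as the glider's (1, 0) cell
        placed = {(x + ox, y + oy) for (x, y) in GLIDER}
        if placed <= live_cells:
            # insist on an empty one-cell border, so we've found an actual
            # glider and not five cells of some larger blob
            border = {(x + dx, y + dy) for (x, y) in placed
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - placed
            if not (border & live_cells):
                hits.append((ox, oy))
    return hits

# Two quite different bitmaps, same description in Life terms:
bitmap_a = {(x + 10, y + 3) for (x, y) in GLIDER} | {(50, 50), (50, 51), (51, 50), (51, 51)}
bitmap_b = {(x + 7, y + 7) for (x, y) in GLIDER}
print(find_gliders(bitmap_a), find_gliders(bitmap_b))  # [(10, 3)] [(7, 7)]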

Here's how Ben poses the second question:
You can't adopt the intentional stance towards [the Life-chessplayer] until you have some idea what it's doing beyond just playing Life. I don't see any real reason to grant this, but even if you grant that someone comes along and tells you "oh, that implements a universal Turing machine and it's playing chess with itself", you still can't do anything with that information to predict what the move will be until you know the state of the board.
Well, of course. But remember, the intentional stance is an interpretive stance; and we don't ask (say) Davidson's radical interpreter for his/her interpretation (ascription of beliefs and meanings) until a whole lot of observations and interactions have occurred. Pointing this out is not the same as locating a theoretical difficulty with the very idea of ascribing beliefs and meanings on the basis of observations which might very well have perfectly good descriptions in radically non-intentional terms. Nor does it show a problem with making predictions of events which can be described in some way at any number of levels (to wit: a long (!) list of particles and their momenta, a hand moving in this direction at that velocity, a man moving his hand, a man pulling a lever, a man voting, a man making a mistake – a bad mistake). That is, it doesn't mean interpretative strategies aren't worth it in predictive/explanatory terms. What choice do we have?

Now go back to the first question (or both together):
Even if someone came along and told the observer not only that it's playing chess with itself, and then also told h/h what the current state of the board is (in which case it's hard to see what the program itself has to do with anything anymore), he's still not in much of a position to predict future configurations of the Life board—not even those extremely few that do nothing but represent the state of play immediately after the move.
Again, when we "re-translate" our intentional-stance prediction "back into Life terms," we don't get 1) entire bitmaps; and we didn't get the intentional-stance prediction, and the "re-translation", 2) from nothing but a gander at the massive bitmap. We get 1) particular abstractions (say, a glider stream of such and such a description from that gun there) 2) after we already know, however we know it, that a chess game is in progress, the position, which glider streams are doing what, etc. That may sound very limited compared to what Ben is demanding. But why should we expect the miracle he's discussing? Or hold it against an interpretive procedure that fails to provide it? I don't predict the exact state of your neurons when I predict you'll say (i.e., make a token of one of the insanely large number of sound types corresponding to the English words) "Boise, why?" when prompted by (...) "What's the capital of Idaho?" This is true even if your "mental state" does indeed supervene on your neural state. And of course my predictions depend on a whole lot of assumptions not available to (say) a newly materialized alien being trying for prediction/explanation in its own terms – assumptions like that you speak English, and know the state capitals, and are in a question-answering (yet suspicious) mood, etc., etc., all of which are corrigible if my predictions keep failing (and all of which, for Davidsonian reasons, even such a very radical interpreter may eventually be able to make for itself).

Also, what we actually do get isn't chopped liver – unless you'd really rather crank out those trillions of calculations. And even if you did, your result wouldn't be in the right form for our purposes. It would have both (way) too much information and not enough. Too much, because I don't need to know the massive amount of information which distinguishes that particular bitmap from every other one in the set of bitmaps each of which instantiates a Turing machine chessplayer running a chess program good enough to make the move I predict (or not good enough not to make the mistake I also make). All I need – all I want – is the move (or, in Life terms, the glider stream).

But there's also not enough information there, because (as Ben points out) you could do the bitmap numbercrunching and still not know there was a chess game being played – which leaves you without an explanation for why I can predict the glider stream (qua glider stream) just as well as you; and of course you too have to know something about Life (more than the rules) to go from bitmap to glider stream. I think Nozick says something like this in Philosophical Explanations (without the Life part).

And of course "it's hard to see what the program itself has to do with anything anymore." Just as it's hard to see what your neurons have to do with the capital of Idaho. Once we take up the proper stance for the type of interpretation we're interested in, we can ignore the underlying one. In each case, though, we keep in mind that if things get messed up at the lower level (a puffer train from the edge of the plane is about to crash into our Turing machine, or (in the Boise case) someone clocks you with a shovel), all intentional-level bets are off. (But that doesn't mean we aren't also interested in an explanation of how exactly the bitmap-plus-Life-rules instantiates a chessplayer, or of how exactly neurons relate to cognition; those are just different questions than "what move?" or "what answer?").

So I disagree with Ben that (in this sense, anyway) "this stance stuff [seems not to] reap big benefits." I don't know what Daniel means by Ben's criticism's making the stances "look a whole lot more naturalistic." As opposed to what?

Knee-deep thoughts

What effect do lakes have on thought? Find out at the new Philosophers' Carnival!

Friday, December 14, 2007

Like, wow

I don't post about my dreams here (usually I forget them right away), but this one was amusing. Rorty had just given his last lecture before retiring, and he asks Habermas if he'll be teaching during the summer. No, Habermas replies, he has to work on his autobiography: "I'm only up to my Cosmic Hippie period." I don't know about you, but I find it hard to imagine ol' Jürgen grooving to Brüder des Schattens, Söhne des Lichts. On the other hand, the cover of my edition (1979) of Communication and the Evolution of Society is pretty psychedelic...

Thursday, December 13, 2007

One man's twaddle

National Review commenter George Leef spends 90% of his posts on the high cost of college, but the other 10% are on ... why it's not worth it anyway. Here he is the other day on academic "scholarship":
Professors at most colleges and universities these days have to publish their research in order to win tenure and impress fellow academics who might some day offer them a better job. Often that research is of extremely dubious value and only gets published by university presses. Mal Kline of Accuracy in Academia writes here about some examples.
I didn't include the link to AiA (use your imagination, or click through if you must). I draw your attention to the phrase I have bolded. You can tell, you see, that Professor Davidson's "research" is of dubious value because he could only get it published by Oxford U. Press, where it can hardly be expected to sell very many copies, instead of a real publisher – like, say, Regnery. They probably didn't even get him on Larry King, to promote that – what was it again? – "radical interpretation" business. Nobody's buying your radicalism, Dr. Smartypants!

What a tool. Oh, but we're not done:
In his recent book Education's End, Professor Anthony Kronman of Yale laments the damage that has been done by the "research ideal" that has come to dominate higher education. He writes, "In the natural and social sciences, the goal of an ever-closer approximation to the truth seems entirely reasonable....In the humanities, this is less clear." Kronman is too polite to blurt out the truth — a lot of academic research is just twaddle.
Damn humanities – they'll ruin it for everybody. (Yet I agree that some people would be better off not publishing ... )

Wednesday, December 12, 2007

'Tis the season, again

My two fave ambient mix purveyors have selected the best of 2007 (here and here). Thanks guys! Check 'em out!

Friday, December 07, 2007

Breaking news: Stockhausen RIP

Obituary here. Not to get into the metaphysics of naming or anything, but I happen to know that Karlheinz Stockhausen had already died some years ago, after being hit by a car. I wasn't present on that occasion, so I'm not sure whether he was wearing his leash at the time. He had spent most of his life as an indoor cat, so he probably never really developed the sort of road-awareness that outdoor cats have.

Anyway, this other Karlheinz Stockhausen will live forever, thanks to Gesang der Jünglinge, Stimmung (get the Hyperion recording, not the original DG version), Hymnen, Kontakte, and a few other things. Or perhaps he lives in his massive influence on others. The obituary names Miles Davis, Frank Zappa, and Björk; but I'm surprised not to see any German names there, like Klaus Schulze, Edgar Froese, Holger Czukay, Irmin Schmidt, Conny Plank, Peter Michael Hamel, or, well, anybody. (Or maybe I'm not surprised.)

Thursday, December 06, 2007

Monk in the land of churches

At Brain Scam, H. A. Monk takes on Paul Churchland. (For some reason, as I write this, anyway, the post is dated 11/14, but it appeared on my reader on 12/6; plus the post mentions its being two months since the last one, which was on 10/6, so it looks like Blogger is on drugs or something.) He deals effectively with Churchland's latest argument for the identity of color qualia (the "subjective aspects" of color experience) with something called "cone cell coding triplets". I don't know whether the latter is a brain structure or some abstraction about the behavior of some cells in the visual cortex, but it hardly matters as far as I can tell.
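(If a picture helps: as I understand it, the idea is very roughly something like the following, which is my own crude illustration with made-up numbers, not Churchland's actual model. A color stimulus gets coded as a triplet of cone-cell activation levels, and qualitative similarity between color experiences is then supposed to be just proximity in that three-dimensional activation space.)

import math

# My own toy illustration, not Churchland's model: colors coded as triplets
# of (L, M, S) cone activation levels, with "how alike two colors look"
# cashed out as distance in the activation space.
red    = (0.9, 0.2, 0.1)  # made-up activation levels
orange = (0.8, 0.5, 0.1)
blue   = (0.1, 0.2, 0.9)

print(math.dist(red, orange))  # small distance: red "looks like" orange
print(math.dist(red, blue))    # large distance: red does not "look like" blue

Whether those triplets pick out actual cells or points in an abstract state space, though, it hardly matters.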

This is because the problem is with the "identity theory" itself, not what exactly the "material" relatum amounts to, with which the "qualia" are supposed to be identical. As Monk rightly sums up:
One might say: qualia are a suspect kind of entity anyway, so why should I need a theory to account for them? Fine, but what you can't say is: these qualia you talk about, they just are these coding vectors, and then act like you've explained qualia [...] Similarly [i.e. to a deleted example, which doesn't seem quite right], one can say: I can explain everything there is to explain about sensation without reference to "qualia", so why should I be obliged to give you a separate explanation of them? But that is not what is being offered. Rather, we are told, color qualia exist; they are cone cell coding vectors.
Okay, but I think Monk actually lets Churchland off the hook. I would rather he had stayed with the promising line that explaining sensory experience isn't a matter of either

a) a complete neurophysical explanation, such as the materialist gives us; or

b) (a), corresponding to the "objective" part of the phenomenon in question, plus an explanation of "qualia," or the "subjective" part, where this latter part may simply be ineffable, but in any case necessary, such that without it we have an "explanatory gap".

But instead, he goes into Ned Block mode, telling us essentially that Churchland's proposal leaves something out.
When is Churchland going to wake up and smell the coffee? I'm not sure, but I don't think we should test it by asking him whether he's awake or not; better check his brain scan and let him know. Then do an EEG and see if he's smelling the coffee. With sufficient training he could be taught to look at the EEG and say, "Why, I was smelling coffee!" (This is the flip side of Churchland's utopia, in which we are all so well-informed about cognitive facts that introspection itself becomes a recognition of coding vectors and the like.) Now for the tricky part: turn off the switch in his brain that produces the coffee-smelling qual, and tell him that every morning, rather than having that phenomenological version of the sensation, he will recognize the coffee smell intellectually and be shown a copy of his EEG. And similarly, one by one, for all his other qualia.
This is the same thing everybody says about the Churchlands. (Not the same, but it reminds me of the joke about the behaviorists in the bedroom: "It was good for you; was it good for me too?") It wasn't to the point then, and it's not to the point now. It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over here in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.

But rejecting this suspect idea is not at all to return to saying, with the materialist, that (what I called) "registering" the smell and "subjectively experiencing" it are identical, such that we only need the former (so construed). Sensory experience (like cognition and every other "mental" phenomenon) has multiple aspects, of which the "purely" first-person and "purely" third-person aspects are merely the ones most ripe for illegitimate philosophical reification. Better far to tell a story such that these are more easily seen as useful abstractions from a more unitary phenomenon. (On the other hand, go too far in this direction, and you get a big monistic mess, in which subject and object disappear entirely. Opinions differ about Dewey, but Experience and Nature at least threatens to have this result. Subject talk and object talk are indispensable, and we need not render them inaccessible simply in order to block their dualistic reification. But that's another story for another day.)

Monk continues:
Don't say: well, he doesn't deny these qualia exist, after all; he just thinks they are identical to blah-blah-blah... If he thinks they are identical to blah-blah-blah then he should not object in the least if we can produce blah-blah-blah without those illusory folk-psychological phenomena we think are the essence of the matter.
Interestingly, I was at a talk one time where Churchland said just that very thing ("I love qualia!") to Block's predictable objection. (He and Block went back and forth a few times, neither understanding the other, until everybody else started fidgeting.) I'm not defending his answer, except to the extent that I agree that Block's objection is not the right response. Churchland will just say that you can't produce one without the other, because (as I also agree) there's no distinct "experiencer" in the brain; and we shouldn't let him get away with that. After all, we're after the conceptual issue, not the empirical one; and all the proposed experiment would get Churchland to do (if it "worked" at all) is to abandon his empirical proposal that that particular neural candidate was the correct one. All you will have shown is that that neural candidate wasn't sufficient for smelling, when it's the very idea of "identifying" subjective and objective aspects of experience which is incoherent.

Look also at how easy it is, even while saying something basically right, to slip into misleading ways of talking. Monk is rejecting the idea that Churchland's claimed ability to use his neural theory of sensory experience to make "phenomenological prediction from neurological facts" provides any support for it (i.e. for the idea that "the qualia-coding vector relationship is not a mere correlation but an actual identity"). We can do that already, without needing to posit a tendentious "identity" to account for it. Here's Monk's example:
We know that the lens of the eye delivers an inverted image, which is subsequently righted by the brain. This suggests that our brains, without our conscious effort, favor a perspective that places our heads above our feet. (It is also possible that it is simply hard-wired to invert the image 180 degrees, but for various reasons that theory does not hold water.) Prediction: make someone wear inverting glasses, and they will see [an] upside down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!
Again, I agree with the general point: we don't need any "identity" here. But the wording seems to leave some of the generating assumptions in place. The lens of the eye does indeed "deliver an inverted image" to the retina (we can even see it, back there). But why say that that image "is subsequently righted by the brain"? Does it need to be "righted" ... in order for us to see it? Again we have an incipient Cartesian Theater. Surely what we see is not the image, but the objects in front of us. Inverting glasses make it seem as if everything is upside down; but after a while we get better at (re-)coordinating our various sensory inputs (primarily vision, touch, and proprioception), and that impression fades (not that one image is replaced by another). There: I told the story "without giving a separate explanation" of the visual anomaly; isn't that what we were supposed to do, rather than demanding a distinct something lacking from the materialist account? It's not a detailed story, as the solely neural one would be; but so what? We can give details on request, depending on the point of the question, and maybe this will require one sort of abstraction from our unified picture, and maybe it'll require a different one. That doesn't mean they're separable (as the qualophile demands).

Churchland also claims, Monk tells us, that "trained musicians 'hear' a piece differently than average audiences." I don't see how this was supposed to help Churchland's case, but Monk objects to the very idea:
That is also a predictable phenomenological fact, but it involves a change in the mental software, through accustomization and training, and does not obviously involve any sensual change. To see a new color or to have fewer distinct sounds reach the brain from the cochlea are sensual changes; to hear more deeply those sounds that do reach the ear, to organize them more efficiently and recognize more relationships between [them] is not a sensual change but an intellectual one that we might metaphorically characterize as "hearing more than others". In fact musicians hear the same thing others hear but understand what they hear in a more lucid way. The sensual phenomena I have mentioned are actual changes in what reaches the brain for processing or in processing at a subliminal level, and do not depend on how we train ourselves to organize the information we receive.
Again, I don't see why we need a dualism between "sensual changes" (i.e. in the sound that reaches the ear) and "hearing [these sounds] more deeply". (Isn't that the fallacy behind the philosophical chestnut "if a tree falls and no-one hears it, does it make any sound?"). I don't see any reason that we are required to say that "musicians hear the same thing others hear," simply because there's a sense (the "objective" one?) in which it's trivially true. As an English speaker, I find it perfectly straightforward (i.e. not necessarily metaphorical) to say that musicians "hear more than others." Nor do I feel obliged to characterize the difference as "intellectual" rather than "sensual," even if the latter sort of change is due to one of the former sort.

So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually distinct thing (so the issue is not substance dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we should say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.

Surely there's a way to make this point in the Wittgensteinian language of aspect-seeing, but I haven't got it just right yet. How about this: that I see the picture-duck only in seeing the drawing – that the former doesn't ontologically "transcend" the latter, if you like – doesn't mean I have to say they're identical (as the materialist-analogue would have it). If I do that, then I have to tell the same story about the picture-rabbit. But the picture-duck isn't "identical" with the picture-rabbit, which it would have to be if both were identical with the drawing.

But now the solution to this is not to say instead that the picture-duck is different (i.e. a distinct thing) from the drawing, while yet being careful to say (now as the Block-analogue would have it) that the former doesn't after all "transcend" the latter in a metaphysically unpleasant way. In a sense it was right to say that the picture-duck "is" the drawing; the problem was with the nature of that "is". (It depends on what the meaning of "is" is (heh heh).) It's not the "is" of "objective" metaphysical identity; it's the "is" of aspect recognition ("that drawing? It's a duck").

Try it with a different "is" still. The drawing is a picture-duck. Now we have the "is" of predication. It's also a picture-rabbit; but in each case we have the same drawing. That sounds okay at first; but it leaves us with a scheme-content dualism. The experience of aspect-dawning is one of seeing a different picture, not of seeing the same picture differently. Seeing the "same drawing" is too far in one direction, while seeing distinct entities is too far in the other. Or, to sound a Wittgensteinian note from another context: the difference between the picture-duck and picture-rabbit is "not a something, but not a nothing either." (I blush to admit that I can't remember exactly where this line occurs. My defense is that it applies to so much of what he says that it might occur anywhere. Anyone?)

Yet Wittgenstein himself sometimes seems dogmatically to close off what might, if properly conceived, be possible empirical/scientific investigations. This offends my pragmatist sensibilities (Peirce: thou shalt not put roadblocks on the path of inquiry). Here's an example from PI p. 211e:
The likeness makes a striking impression on me; then the impression fades.
It only struck me for a few minutes, and then no longer did.
What happened here?—What can I recall? My own facial expression comes to mind; I could reproduce it. If someone who knew me had seen my face he would have said "Something about his face struck you just now."—There further occurs to me what I say on such an occasion, out loud or to myself. And that is all.—And this is what being struck is? No. These are the phenomena of being struck; but they are 'what happens'.

Is being struck looking plus thinking? No. Many of our concepts cross here.
I like that; but right before this he says:
"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?"
In a way this is right too, in the way the first excerpt was right. Don't wonder that if you thought that was going to provide the answer to our conceptual problem. But surely there is something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as simply dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place. (Wittgenstein credits this insight to Hertz, but we will leave that story for someone more qualified than I to tell.)

Wednesday, December 05, 2007

No fool of a Took

You may have heard about Robert Pippin's recent exchange with McDowell in the European Journal of Philosophy. Well, now Professor P. has put the relevant PDFs on his website, including the Postscript to "Leaving Nature Behind" (his contribution to Reading McDowell), which is his response to McD's response in that book. So it goes:
1. P: "Leaving Nature Behind" in Smith 2002
2. M: "Responses" in Smith 2002
3. P: "Postscript"
4. M: "On Pippin's Postscript"
5. P: "McDowell's Germans"
6. M: "Oh yeah? Sez you! (Pbbbbbt!)"
Okay, I made that last one up. Some heady stuff there! You may want to skim the B Deduction first. (There's an oxymoron for you: "skim the B Deduction".)

Here's a taste from #4, where McDowell lays it on the line:
The result of [what he's just been saying] is no longer Kantian in any but the thinnest sense. But that is no threat to anything I think. My proposal — whose shape I took from Pippin — was that we can understand at least some aspects of Hegelian thinking in terms of a radicalization of Kant. The radicalization need not be accessible to someone who would still be recognizably Kant. It is enough if there is a way to arrive at a plausibly Hegelian stance by reflecting on the upshot of the Deduction. It is no problem for this that, as I am suggesting, this reflection undermines the very need for a Transcendental Deduction — provided such a result emerges intelligibly from considering what is promising and what is unsatisfactory in Kant’s effort.

Monday, December 03, 2007

Dear Santa (I been good)

Brian Leiter announces with pride that his co-edited volume The Oxford Handbook of Continental Philosophy is now available. Based on the stellar list of contributors, I have to say it looks terrific. Of course, "available" is a relative term. Amazon has it listed for $153.

[Update (12/26)]: Rats.

It does not! (okay, some does)

The Philosophers' Carnival seems now to be biweekly; here's the latest edition, at Philosophy Sucks!

Saturday, December 01, 2007

D'Souza vs. Dennett (preview)

I write this before hearing anything about the 11/30/07 debate between these two, but even so let me say a few things (and perhaps get an idea, in advance, of how it will go). Besides, in my recent post about D'Souza and his, um, encounter with Kant, I didn't get to Dennett's response. I'll assume familiarity here with what D'Souza says (or see here for a taste).

I'm actually a big Dennett fan. His naturalism bugs me sometimes, but he's been a tiger in the fight against the Cartesian conception of the mind. (I know that sounds funny – his naturalism is central to his thought – but you'd be surprised how often it doesn't come up.) In this context, though, he does have a bit of a tin ear, and I'm not at all sure he'll do well in a debate in which the stated topic is "Is God a Human Invention?"

First of all, he just asks for it with his ridiculous self-labelling as a "bright." He tells us we need a word for "a person with a naturalist as opposed to a supernaturalist world view." But we've got one already: it's "naturalist." Yes, there's another sense of the word – that in which John Muir is a "naturalist" – but that sense doesn't entail disbelief in "supernatural" entities, as Dennett wants. So how about "philosophical naturalist"? Anything but "bright," which is monumentally stupid, and does indeed sound, no matter how many times Dennett denies the implication, like "brights" are smarter than you.

Also, in responding to D'Souza (about Kant), Dennett strikes me as remarkably unable, given his committed anti-Cartesianism, to see Kant as an ally rather than an opponent. It's disappointing to see him let D'Souza bait him into dismissing Kant as a deluded mystic desperately trying to prove the existence of another world beyond the veil. I guess that's easy for me to say, having grown up with the "one-world" interpretation of Kant advocated by Henry Allison and Graham Bird, among others (not that that's the only issue by any means; but it sure helps). Still, I would have thought that the scorn Kant pours on traditional metaphysics in the Critique would be hard to miss. In reply to D'Souza's Kant, though, Dennett is all snark:
If Dinesh D'Souza knew just a little bit more philosophy, he would realize how silly he appears when he accuses me of committing what he calls "the Fallacy of the Enlightenment" and challenges me to refute Kant's doctrine of the thing-in-itself. I don't need to refute this; it has been lambasted so often and so well by other philosophers that even self-styled Kantians typically find one way or another of excusing themselves from defending it.
Ah yes, the famous "doctrine of the thing-in-itself." If you want to make Kant look ridiculous, it is indeed helpful to hang around his neck the "transcendental illusion" he explicitly rejects, together with the insinuation that "self-styled Kantians" (like Allison, presumably) have to resort to sophistry in order to wiggle out of their manifest obligation to attribute sheer, virtually unadulterated Platonism to him.

But of course D'Souza has no intention of wiggling out of it. As we saw, he embraces it:
So powerful is Kant's argument here that his critics have been able to answer him only with derision. When I challenged Daniel Dennett to debunk Kant's argument, he posted an angry response on his website in which he said several people had already refuted Kant. But he didn't provide any refutations, and he didn't name any names. Basically Dennett was relying on the argumentum ad ignorantium – the argument that relies on the ignorance of the audience. In fact, there are no such refutations.
Now that's chutzpah. Committing the argumentum ad ignorantiam (excuse me, "ignorantium") in the same breath as attributing it to your opponent: priceless. But of course in the context D'Souza has provided, the only thing that would count as a "refutation" is an argument showing not (say) that "noumenalism" (or transcendental realism!) is incoherent, but that the "Enlightenment Fallacy" – that we can know everything – is true. And of course even Hegel (who famously argued against Kant for the possibility of "Absolute Knowledge") didn't believe that. So in that sense D'Souza gets to be right. No such "refutations" exist. But so what.

Let's move on. In his 11/30 post, right before the debate, D'Souza served up some trash-talk for the occasion. He gleefully quotes the late Stephen Jay Gould (who was, as you know, a Prominent Biologist) referring to Dennett as a "Darwinian fundamentalist":
[Gould] suggested that just as religious fundamentalists read Scripture in a literal and pig-headed way, and unimaginatively apply biblical passages to everything, so Dennett has a primitive understanding of evolution and, with the enthusiasm of the fire-breathing acolyte, tries to apply Darwinism to virtually every human social, cultural and religious practice, with disastrous and even comical results.
There is indeed a controversy here, and Dennett is indeed more likely to err on the ambitious side w/r/t evolutionary explanation. But D'Souza is being ridiculous. All he does is quote dismissive rhetoric from Gould (and H. Allen Orr) to make Dennett look bad. But as that New York Review exchange with Gould makes embarrassingly clear, Gould never understood Dennett's response, or at least didn't address it, and in fact resorted to personal attacks and name-calling in a most unprofessional manner. But put that aside (after all, that doesn't make Dennett right). Dollars to doughnuts D'Souza doesn't understand it either, and is just looking for another way to hurl abuse.

Let me try to clear it up a bit. No doubt I too will oversimplify; but we can take a few steps in. The issue concerns Dennett's "adaptationism" – his tendency to try to explain a biological phenomenon in terms of its evolutionary advantages. Gould was right to point out that we cannot simply assume that we can do this for every biological phenomenon. In his famous example, which I will not explain, some things are "spandrels": they arise because other things are evolutionarily advantageous and bring the first thing along with them. The "adaptationist" makes it sound like some evolutionary developments are inevitable – as if nature says, hey, wouldn't feathers be a great idea here! Let's evolve some feathers! (Or intelligence, or – more relevant to Gould's rejection of sociobiology – incest taboos, or matriarchy, or whatever.) Such "explanations" can (and in the case of sociobiology, often did) end up sounding like a bunch of ad hoc Just So Stories. In response, Gould emphasizes the radical contingency of the evolutionary path: re-run the tape 20 times and get 20 different results (think "A Sound of Thunder" here).

Fair enough. But Dennett never denied these things. (Gould's original attack was on earlier "adaptationists," but he later turned his guns on Dennett.) Somehow the debate got turned from an interesting one (about which particular sorts of appeals one can make to evolutionary advantage, and which particular such explanations work and which do not) into one about whether one could ever appeal to evolutionary advantage, or whether there could ever be what Dennett calls a "forced move in design space." But surely, even in the case of evolutionary psychology (where the danger of Just So Stories is very real) no such slam dunk is possible. I, at least, am willing to let the Ev Psychers make their case; as the name change indicates, they seem to have learned at least a bit of humility from the sociobiology debacle. Maybe this or that isn't so just-so a story after all.

But that's not the point. Let's abandon Ev Psych entirely, for the sake of argument, and take "adaptationism" in biology alone. Jerry Fodor recently claimed that the very idea of nature "selecting for" a particular trait is incoherent, because nature doesn't have desires that things be one way or another. We can't say that the polar bear's white fur was "selected for" – that it arose because of its evolutionary advantage, as "adaptationists" claim – because given the polar bear's white surroundings, nature can't distinguish between white fur and fur that matches the environment, so it can't "select" for either. Like a lot of what Fodor says, that sounds crazy to me. For one thing, not only would this render explanations in terms of evolutionary advantage necessarily insufficient, it seems to eviscerate the notion entirely, which is nuts. (Interestingly, for what it's worth, it's also reminiscent of Quine's argument for linguistic holism, which Fodor famously rejects.) For more on Fodor, see here and here.

As is their wont, Fodor and Dennett trade incredulous accusations of the other's not getting it at all (links at the previous link, at the bottom of the page). Granted, Dennett is on firmer ground (in this respect, i.e. that of accusing Fodor of not getting it at all) in the philosophy of mind than in biology; so maybe they're both not getting it. (Or I'm not, or nobody is.) But of course D'Souza is keen to set Dennett straight there too. Back in June he had a four-paragraph zinger which was very similar: he found some other authorities willing to dump on Dennett as a dogmatic ignoramus. (Actually, looking again, I see D'Souza brings up Dennett himself; but so do the authors in question, as I happen to know, so, no foul there.) Briefly, the idea is that Dennett is "committing a conceptual mistake" (as is Francis Crick, the original target) in ascribing intentional properties (believing, etc.) to the brain. According to D'Souza, "[b]rains aren't even conscious; the humans who have brains are conscious." How about that: that's my view as well. (I even go farther: in the sense with which we are concerned (though not in another), my brain isn't even alive.)

But again Dennett is perfectly well aware of the danger here, and is guilty at most of some loose talk and/or as yet uncashed promissory notes. Bennett and Hacker (for these are the authorities in question) claim that all such talk is necessarily loose and all such promissory notes knowable a priori to be uncashable. (I know Anton and N.N. disagree with me here, but I think Bennett and Hacker have been misled by Dennett's triumphally naturalist rhetoric into misconstruing his project, which seems to me to be construable (perhaps, I grant, against Dennett himself, at least to some degree) as perfectly acceptable, even (or even especially, qua anti-Cartesian) on Wittgensteinian grounds. But Hacker's Wittgenstein is not my own, as far as I can tell. I owe more explanation here, but this is not the place. See N.N.'s link for an exchange between Dennett and Bennett/Hacker.)

In any case, if the danger is one of misleading locutions, that charge cuts both ways. Bennett and Hacker are careful to deny dualism, but of course for D'Souza misleading locutions are mother's milk; he continues [I bold for emphasis]:
Crick and Dennett are erroneously ascribing qualities to brains that are actually possessed only by people. True, our thoughts occur because of the brain, and we use our brains to think just as we use our hands and rackets to play tennis. How foolish it would be, though, to say that "my arms are playing tennis," or even more absurdly, "My racket is playing tennis." In reality, I am the one who is playing, and arms and rackets are what I play the game with.

Crick and Dennett are guilty of a fallacy that has become quite common among cognitive scientists. This is the Pathetic Fallacy, the fallacy of giving human attributes to nonhuman objects. This practice is quite harmless if we do it in a whimsical, metaphorical way. I might write that "the stem of the oak raised its arms to the sun, searching for its warm embrace." The problem only arises if I actually start to believe that oak branches have intentions. Brains are very useful objects, but they aren't conscious and they don't know how to feel or think.
It's hard to tell, but I think he really thinks he has established dualism as true (I am not identical with my brain = My brain and I are distinct in the dualist sense). Wow. There's another related DD column I want to discuss (an amazing howler, which you may already have seen), but let's leave it for another time. Now let's hear about how the debate went!