The other day, in comments, Daniel linked to some criticism Ben W. made of Dennett's explanation (in "Real Patterns") of the utility (indeed, inescapability) of the "intentional stance," using his beloved Life-plane example. I'll assume familiarity with the example (click the link for Ben's setup), but the general idea is that if we are presented with a Life-plane computer chess player, we will do much better in predicting its behavior, Dennett claims, by taking the intentional stance (i.e. taking into account what the best move is) rather than by making a ridonkulous number of Life-rule calculations (i.e. predicting the entire bitmap down to the last cell for X generations).
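(An aside, to make the second route vivid: here's a minimal sketch, in Python, of what the Life-rule calculation amounts to – stepping the whole bitmap, cell by cell, for X generations. The representation and the names are my own, purely illustrative.)

```python
from itertools import product

def neighbors(cell):
    """The eight cells adjacent to `cell`."""
    x, y = cell
    return [(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)]

def step(live):
    """One generation of Conway's Life. `live` is the set of coordinates
    of live cells: a live cell survives with 2 or 3 live neighbors;
    a dead cell is born with exactly 3."""
    candidates = live | {n for c in live for n in neighbors(c)}
    new_live = set()
    for c in candidates:
        count = sum(n in live for n in neighbors(c))
        if count == 3 or (count == 2 and c in live):
            new_live.add(c)
    return new_live

def predict_bitmap(live, generations):
    """The non-intentional prediction: crank out every cell for
    `generations` steps. For a board big enough to implement a chess
    player, this is the ridonkulous number of calculations."""
    for _ in range(generations):
        live = step(live)
    return live
```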
There are two issues here, which we need to keep straight. First, how are we supposed to translate an intentional-stance prediction ("K x Q") "back into Game of Life terms," as Dennett claims is possible? Second, how do we know enough, given the impossibly vast bitmap, to make the intentional-stance prediction in the first place?
With respect to the first issue, the problem here, as I see it, is that there's an ambiguity in the idea of "translating that description back into Life terms". Of course you can't translate "he'll play K x Q" "back into" a bitmap. There are innumerable bitmaps which include (let's say) the stream of gliders that translates into "K x Q." (Consider for example that the move is so obvious that even the worst-playing chess programs will make it.) But there may be only one (suitably described) glider stream that corresponds to "K x Q". It's that description which is made in "Life terms" ("glider stream" is a Life term, right?).
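Here's a toy illustration of the point (my own, reusing the stepper sketched above): embed one and the same glider in two different bitmaps. The bitmaps differ, but the glider-level description of what happens – the glider reappears one cell over, four generations on – is the same in both. That shared, rest-of-the-board-indifferent description is the "Life terms" translation.

```python
# A glider (one of its four phases), as a set of live-cell coordinates:
#   .O.
#   ..O
#   OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# A "block" still life, parked far enough away not to interact:
block = {(20, 20), (21, 20), (20, 21), (21, 21)}

board_a = glider           # one bitmap
board_b = glider | block   # a different bitmap, same glider

# After 4 generations the glider reappears shifted one cell diagonally,
# in either bitmap: the glider-level prediction abstracts from the rest.
moved = {(x + 1, y + 1) for (x, y) in glider}
assert moved <= predict_bitmap(board_a, 4)
assert moved <= predict_bitmap(board_b, 4)
```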
Here's how Ben poses the second question:
You can't adopt the intentional stance towards [the Life-chessplayer] until you have some idea what it's doing beyond just playing Life. I don't see any real reason to grant this, but even if you grant that someone comes along and tells you "oh, that implements a universal Turing machine and it's playing chess with itself", you still can't do anything with that information to predict what the move will be until you know the state of the board.

Well, of course. But remember, the intentional stance is an interpretive stance; and we don't ask (say) Davidson's radical interpreter for his/her interpretation (ascription of beliefs and meanings) until a whole lot of observations and interactions have occurred. Pointing this out is not the same as locating a theoretical difficulty with the very idea of ascribing beliefs and meanings on the basis of observations which might very well have perfectly good descriptions in radically non-intentional terms. Nor does it show a problem with making predictions of events which can be described in some way at any number of levels (to wit: a long (!) list of particles and their momenta, a hand moving in this direction at that velocity, a man moving his hand, a man pulling a lever, a man voting, a man making a mistake – a bad mistake). That is, it doesn't mean interpretive strategies aren't worth it in predictive/explanatory terms. What choice do we have?
Now go back to the first question (or both together):
Even if someone came along and told the observer not only that it's playing chess with itself, and then also told h/h what the current state of the board is (in which case it's hard to see what the program itself has to do with anything anymore), he's still not in much of a position to predict future configurations of the Life board—not even those extremely few that do nothing but represent the state of play immediately after the move.

Again, when we "re-translate" our intentional-stance prediction "back into Life terms," we don't get 1) entire bitmaps; nor did we get the intentional-stance prediction, and the "re-translation," 2) from nothing but a gander at the massive bitmap. We get 1) particular abstractions (say, a glider stream of such and such a description from that gun there) 2) after we already know, however we know it, that a chess game is in progress, the position, which glider streams are doing what, etc. That may sound very limited compared to what Ben is demanding. But why should we expect the miracle he's discussing? Or hold it against an interpretive procedure that fails to provide it? I don't predict the exact state of your neurons when I predict you'll say (i.e., make a token of one of the insanely large number of sound types corresponding to the English words) "Boise, why?" when prompted by (...) "What's the capital of Idaho?" This is true even if your "mental state" does indeed supervene on your neural state. And of course my predictions depend on a whole lot of assumptions not available to (say) a newly materialized alien being trying for prediction/explanation in its own terms – assumptions like that you speak English, and know the state capitals, and are in a question-answering (yet suspicious) mood, etc., etc., all of which are corrigible if my predictions keep failing (and all of which, for Davidsonian reasons, even such a very radical interpreter may eventually be able to make for itself).
Also, what we actually do get isn't chopped liver – unless you'd really rather crank out those trillions of calculations. And even if you did, your result wouldn't be in the right form for our purposes. It would have both (way) too much information and not enough. Too much, because I don't need to know the massive amount of information which distinguishes that particular bitmap from every other one in the set of bitmaps each of which instantiates a Turing machine chessplayer running a chess program good enough to make the move I predict (or not good enough not to make the mistake I also make). All I need – all I want – is the move (or, in Life terms, the glider stream).
But there's also not enough information there, because (as Ben points out) you could do the bitmap number-crunching and still not know there was a chess game being played – which leaves you without an explanation for why I can predict the glider stream (qua glider stream) just as well as you; and of course you too have to know something about Life (more than the rules) to go from bitmap to glider stream. I think Nozick says something like this in Philosophical Explanations (without the Life part).
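For what it's worth, here's a sketch of what that extra knowledge might look like (again my own toy code, reusing neighbors and board_b from above): to pick a glider out of a bitmap you need the glider's shape itself as a template – knowledge about Life over and above the Life rules.

```python
def occurrences(live, shape):
    """Offsets at which `shape` appears in the bitmap `live`, isolated
    from other live cells (its surrounding halo is empty). Knowing
    `shape` is knowledge about Life beyond the update rules."""
    shape = set(shape)
    halo = {n for c in shape for n in neighbors(c)} - shape
    # Candidate offsets: align each shape cell with each live cell.
    candidates = {(x - sx, y - sy)
                  for (x, y) in live for (sx, sy) in shape}
    found = set()
    for ox, oy in candidates:
        placed = {(sx + ox, sy + oy) for (sx, sy) in shape}
        ring = {(hx + ox, hy + oy) for (hx, hy) in halo}
        if placed <= live and not (ring & live):
            found.add((ox, oy))
    return found

# The rules alone will step the bitmap for you, but only the template
# picks the glider out of it:
print(occurrences(board_b, glider))  # {(0, 0)}
```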
And of course "it's hard to see what the program itself has to do with anything anymore." Just as it's hard to see what your neurons have to do with the capital of Idaho. Once we take up the proper stance for the type of interpretation we're interested in, we can ignore the underlying one. In each case, though, we keep in mind that if things get messed up at the lower level (a puffer train from the edge of the plane is about to crash into our Turing machine, or (in the Boise case) someone clocks you with a shovel), all intentional-level bets are off. (But that doesn't mean we aren't also interested in an explanation of how exactly the bitmap-plus-Life-rules instantiates a chessplayer, or of how exactly neurons relate to cognition; those are just different questions than "what move?" or "what answer?").
So I disagree with Ben that (in this sense, anyway) "this stance stuff [seems not to] reap big benefits." And I don't know what Daniel means when he says Ben's criticism makes the stances "look a whole lot more naturalistic." As opposed to what?