The other day, in comments, Daniel linked to some criticism Ben W. made of Dennett's explanation (in "Real Patterns") of the utility (indeed, inescapability) of the "intentional stance" using his beloved Life-plane example. I'll assume familiarity with the example (click the link for Ben's setup), but the general idea is that if we are presented with a Life-plane computer chess player, we will do much better in predicting its behavior, Dennett claims, by taking the intentional stance (i.e. taking into account what the best move is), rather than making a ridonkulous number of Life-rule calculations (i.e. predicting the entire bitmap down to the last cell for X generations).
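To see what that second option actually involves, here is a minimal sketch of brute-force Life-rule prediction. (Python, and the set-of-live-cells representation, are my choices for illustration; Dennett of course specifies no implementation.)

```python
# A minimal sketch of "physical stance" prediction: crank every cell
# forward one generation at a time. `live` is the set of (row, col)
# coordinates of live cells on the (in principle unbounded) Life plane.

def neighbors(cell):
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}

def step(live):
    """One generation of Conway's rules: a dead cell with exactly 3 live
    neighbors is born; a live cell with 2 or 3 live neighbors survives."""
    candidates = live | {n for cell in live for n in neighbors(cell)}
    return {cell for cell in candidates
            if len(neighbors(cell) & live) == 3
            or (cell in live and len(neighbors(cell) & live) == 2)}

def predict(live, generations):
    """Compute the entire bitmap, down to the last cell, X generations out."""
    for _ in range(generations):
        live = step(live)
    return live

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}  # a standard glider
```

Run predict(glider, 4) and you get the same five-cell shape shifted one square diagonally, which already hints that the higher-level description ("a glider, heading that way") earns its keep.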
There are two issues here, which we need to keep straight. First, how are we supposed to translate an intentional-stance prediction ("K x Q") "back into Game of Life terms," as Dennett claims is possible? Second, how do we know enough, given the impossibly vast bitmap, to make the intentional-stance prediction in the first place?
With respect to the first issue, the problem here, as I see it, is that there's an ambiguity in the idea of "translating that description back into Life terms". Of course you can't translate "he'll play K x Q" "back into" a bitmap. There are innumerable bitmaps which include (let's say) the stream of gliders that translates into "K x Q." (Consider for example that the move is so obvious that even the worst-playing chess programs will make it.) But there may be only one (suitably described) glider stream that corresponds to "K x Q". It's that description which is made in "Life terms" ("glider stream" is a Life term, right?).
Here's how Ben poses the second question:
You can't adopt the intentional stance towards [the Life-chessplayer] until you have some idea what it's doing beyond just playing Life. I don't see any real reason to grant this, but even if you grant that someone comes along and tells you "oh, that implements a universal Turing machine and it's playing chess with itself", you still can't do anything with that information to predict what the move will be until you know the state of the board.

Well, of course. But remember, the intentional stance is an interpretive stance; and we don't ask (say) Davidson's radical interpreter for his/her interpretation (ascription of beliefs and meanings) until a whole lot of observations and interactions have occurred. Pointing this out is not the same as locating a theoretical difficulty with the very idea of ascribing beliefs and meanings on the basis of observations which might very well have perfectly good descriptions in radically non-intentional terms. Nor does it show a problem with making predictions of events which can be described in some way at any number of levels (to wit: a long (!) list of particles and their momenta, a hand moving in this direction at that velocity, a man moving his hand, a man pulling a lever, a man voting, a man making a mistake – a bad mistake). That is, it doesn't mean interpretative strategies aren't worth it in predictive/explanatory terms. What choice do we have?
Now go back to the first question (or both together):
Even if someone came along and told the observer not only that it's playing chess with itself, and then also told h/h what the current state of the board is (in which case it's hard to see what the program itself has to do with anything anymore), he's still not in much of a position to predict future configurations of the Life board—not even those extremely few that do nothing but represent the state of play immediately after the move.

Again, when we "re-translate" our intentional-stance prediction "back into Life terms," we don't get 1) entire bitmaps; and we didn't get the intentional-stance prediction, and the "re-translation", 2) from nothing but a gander at the massive bitmap. We get 1) particular abstractions (say, a glider stream of such and such a description from that gun there) 2) after we already know, however we know it, that a chess game is in progress, the position, which glider streams are doing what, etc. That may sound very limited compared to what Ben is demanding. But why should we expect the miracle he's discussing? Or hold it against an interpretive procedure that fails to provide it? I don't predict the exact state of your neurons when I predict you'll say (i.e., make a token of one of the insanely large number of sound types corresponding to the English words) "Boise, why?" when prompted by (...) "What's the capital of Idaho?" This is true even if your "mental state" does indeed supervene on your neural state. And of course my predictions depend on a whole lot of assumptions not available to (say) a newly materialized alien being trying for prediction/explanation in its own terms – assumptions like that you speak English, and know the state capitals, and are in a question-answering (yet suspicious) mood, etc., etc., all of which are corrigible if my predictions keep failing (and all of which, for Davidsonian reasons, even such a very radical interpreter may eventually be able to make for itself).
Also, what we actually do get isn't chopped liver – unless you'd really rather crank out those trillions of calculations. And even if you did, your result wouldn't be in the right form for our purposes. It would have both (way) too much information and not enough. Too much, because I don't need to know the massive amount of information which distinguishes that particular bitmap from every other one in the set of bitmaps each of which instantiates a Turing machine chessplayer running a chess program good enough to make the move I predict (or not good enough not to make the mistake I also make). All I need – all I want – is the move (or, in Life terms, the glider stream).
But there's also not enough information there, because (as Ben points out) you could do the bitmap numbercrunching and still not know there was a chess game being played – which leaves you without an explanation for why I can predict the glider stream (qua glider stream) just as well as you; and of course you too have to know something about Life (more than the rules) to go from bitmap to glider stream. I think Nozick says something like this in Philosophical Explanations (without the Life part).
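Just to make that last point vivid: the Life rules never mention "glider," so reading glider streams off the bitmap requires extra equipment, a template and a matcher, over and above the physics. A toy sketch (using the set-of-cells representation from above; a real detector would need all four phases and four orientations of the glider, where this handles exactly one):

```python
# Hypothetical toy glider detector. The template is one phase of the
# glider in one orientation; nothing in the update rule supplies it.

GLIDER_PHASE = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

def find_gliders(live):
    """Return anchor coordinates of isolated gliders (this phase and
    orientation only) in the set `live` of live cells."""
    anchors = {(r - dr, c - dc)
               for (r, c) in live for (dr, dc) in GLIDER_PHASE}
    hits = []
    for (r, c) in anchors:
        pattern = {(r + dr, c + dc) for (dr, dc) in GLIDER_PHASE}
        # isolation check: nothing else alive in the surrounding 5x5 box
        box = {(r + dr, c + dc) for dr in range(-1, 4) for dc in range(-1, 4)}
        if pattern <= live and (live & box) == pattern:
            hits.append((r, c))
    return sorted(hits)
```

Even this crude matcher builds in knowledge (the template, the isolation check) that no amount of rule-cranking provides by itself.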
And of course "it's hard to see what the program itself has to do with anything anymore." Just as it's hard to see what your neurons have to do with the capital of Idaho. Once we take up the proper stance for the type of interpretation we're interested in, we can ignore the underlying one. In each case, though, we keep in mind that if things get messed up at the lower level (a puffer train from the edge of the plane is about to crash into our Turing machine, or (in the Boise case) someone clocks you with a shovel), all intentional-level bets are off. (But that doesn't mean we aren't also interested in an explanation of how exactly the bitmap-plus-Life-rules instantiates a chessplayer, or of how exactly neurons relate to cognition; those are just different questions than "what move?" or "what answer?").
So I disagree with Ben that (in this sense, anyway) "this stance stuff [seems not to] reap big benefits." I don't know what Daniel means by Ben's criticism's making the stances "look a whole lot more naturalistic." As opposed to what?
I just spent like half an hour writing a comment in response to this and then closed the tab, thinking that I had posted it, but in fact having failed to do so because of the g-d d-mn "word verification" shite. Now, of course, I'm in no mood to do it again, so you get the Executive Summary, guaranteed to be even less clear than the original. It had two parts.
1: p. 41 leads me to believe that Dennett really is talking about predicting configurations of pixels.
2: I see no reason whatever to believe that gliders, sliders, or whatever will "correspond to" or "translate" moves like KxQ, rather than corresponding to or translating things like bits of tape, logic gates, read heads, and other extremely low-level computational things. Both you and Dennett talk as if, when the program is running, you'll be able to point at part of it (at a glider or clump of dots, as you like it) and say "there's the move".
I can imagine something like this being true: that there is a bit of the life board which acts as an interface between the two players and which is therefore mostly not touched during one player's processing, and whose being updated to reflect the new state of the chess board one could call "the move". But then one can only predict a very small part of the life board's changes, whether in terms of pixels, gliders, tape and tape head, or whatever.
As for the program having nothing to do with things, like my neurons: that's fine, but when you ask me about the capital of Idaho, you aren't presented with my neurons, nor does anyone claim that you will better be able to predict the state of my neurons (or whatever stands to neurons as gliders stand to pixels) if you assume I'll answer "Boise" than if you trace out the interconnections between them by hand.
Sounds nearly interesting. When you learn how to write and re-do, maybe it'll like go over at a schmooze-fest somewhere. Maybe not.
Ben – thanks for your comment. Sorry about the word verification; I turned it off once and right away there were three comments providing links to cheap prescription drugs. I lost a comment exactly once, and now I always always always compose in a text editor and copy/paste.
I don't have the original version, but on my p. 109 (i.e., in Brainchildren) he says "But from the perspective of one who had the hypothesis that this huge array of dots was a chess-playing computer, enormously efficient ways of predicting the future of that configuration are made available." I grant that sounds like it might mean what you say. But it continues: "As a first step one can shift from an ontology of *gliders and eaters* to an ontology of machine states [...]". That emphasized part is the lowest level, the one one shifts from. No bitmap.
But a glider is (just) "a configuration of pixels" too, you say:
Both you and Dennett talk as if, when the program is running, you'll be able to point at part of it (at a glider or clump of dots, as you like it) and say "there's the move".
That's right. And either you would be able to do so or you wouldn't. If you can (and I see no reason to believe that that *couldn't* happen), then you can take advantage of the intentional strategy. If you can't (and of course you couldn't without a lot more information than just (as he puts it) "having the hypothesis" that chess is being played), then ... you're screwed. Go calculate the bitmap; and even then it's a wash, as you wouldn't be able to figure out any more than I that chess is being played.
But that's not the point. That's why I brought in Davidson. Let's say the tribe I'm interpreting never lets me hear them speak. I approach them and they clam up. They speak among themselves, in their tents. I sneak up, and then I can hear them. But I can't see what they're pointing at. I don't have enough information to interpret them. But we don't want to conclude "so much for your stupid 'radical interpretation'". The point is that it can work, and indeed it can be the only thing that will work.
Back to Dennett. On my p. 103 (your p. 35 or so): "A pattern exists in some data—is real—if there is a description of the data that is more efficient than the bit map, whether or not anyone can concoct it." Or detect it without help, I would imagine. This is part of his reply to reductive physicalists like Churchland, who think that the neural level is the real level, and that "mental" talk can only be justified if it can be strictly reduced to the neural (or, for other people, observable behavior). If chess is being played, then there really were chess moves being made, and saying so is not simply a shorter version of an impractically long description of a bitmap.
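The "more efficient than the bit map" criterion is, at bottom, a point about compressibility. Here is a crude illustration, with run-length encoding standing in, purely for the sake of example, for "a description of the data":

```python
import random

def run_length_encode(bits):
    """Compress a bit string into (bit, run_length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

patterned = "0" * 50 + "1" * 50                            # a real pattern
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(100))   # no pattern

print(len(run_length_encode(patterned)))  # 2 runs: hugely compressible
print(len(run_length_encode(noisy)))      # ~50 runs: barely compressible
```

On Dennett's criterion the first string contains a real pattern whether or not anyone actually runs the encoder; the description exists to be had.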
I think your problems with Dennett here are (or should be) less with the interpretive stance idea than with functionalism. That seems to be the point of your worries about the correspondence of gliders to chess moves, or one's ability to predict only "a very small part of the life board's changes." Functionalism (some types anyway) can depend on overly strong correlations (falling short of "reduction", but still too ambitious) between things at different levels. To the extent that you can convict Dennett of needing something he can't have, then I'll agree with you (he's modified his position since the Brainstorms era; and of course I'm not defending him completely, even now). The part about "real patterns" seems okay to me. But Haugeland makes some good points in response too ("Pattern and Being").
when you ask me about the capital of Idaho, you aren't presented with my neurons, nor does anyone claim that you will better be able to predict the state of my neurons (or whatever stands to neurons as gliders stand to pixels) if you assume I'll answer "Boise" than if you trace out the interconnections between them by hand.
Well, in the context of reductive "identity theories", that is indeed what we are "presented with" (i.e. in theory): some thought (type or token) is supposedly identical with some neural event (type or token). This makes the intentional description optional or redundant, ontologically speaking. Again, I take Dennett to reject this for something less ambitious. But to show the ineliminability of the intentional, while still granting the dependence on the neural level, he may indeed want to make the analogous claim – that even in predicting neural-level events, as opposed to utterances, strictly neural descriptions can be bypassed without significant loss. All he needs is for this to be *possible*. (Of course the reductionist can still bite the bullet and say: even if that way of talking is ineliminable, that doesn't mean its referents are real. But come on.)
But maybe "what's the capital of Idaho" isn't a good example for this purpose (too cognitive – like functionalism itself). Maybe what we are to predict is something very general like "activity in the amygdala" (the neural equivalent of a glider?). Let's say I show you the final scene of Casablanca or play you the trio from Der Rosenkavalier and predict such activity (because I think that's the part of the brain in some way responsible for emotion). Not too impressive, perhaps (and not exactly an example of the intentional stance, with its emphasis on rationality); but have fun getting there from the neural activity in the visual cortex or auditory center, however you want your data (that neuron fired at time t, etc.), or even if we give you the rest of the brain as well. No thank you.
Wow, that got long. But it helped me to think about it. Thanks again for your post.
I think what really got me is that there is (I believe) a time element as well as a typing element; if you take too long to submit, you'll have to redo the word verification even if you got it right—presumably this is to prevent spam programs from being able to capture the image and farm it out to distributed humans (usually consumers of porn) to decode.
That's right. And either you would be able to do so or you wouldn't.
My reasons for thinking that you wouldn't be able to do so aren't really that philosophical, I think. I don't think you'd be able to do so because I don't think that's how the program would work. And if you could, I suspect it would be something analogous to the way Haugeland changes the example, so you actually do have stable images on a monitor while the computer churns away somewhere else. But while that happens you can say something like "I predict this pattern of pixels (representing a knight) will move from this square to that". But the rest of the board might still be completely opaque to you. You might infer that it's playing the chess game you see in your little corner, but you wouldn't be able to predict it in terms of moves, because predicting it would be in terms of predictions and analyses of possible positions, or however chess programs work, and the data structures in which such info is stored, the algorithms by which they're traversed, etc., etc., etc. That would already be a lower level, of course.
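(For concreteness: the textbook version of "however chess programs work" is minimax search over possible positions, something like the sketch below, where legal_moves, apply_move, and evaluate are hypothetical placeholders for a real engine's move generator, position representation, and heuristics.)

```python
# Generic minimax sketch: recursive analysis of possible positions.
# The helpers passed in are placeholders, not any particular engine's.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Return (score, best_move), searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Even this is several abstraction layers above gliders, let alone pixels; and nothing in it wears "the move" on its sleeve until the returned value gets interpreted as one.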
(It would be interesting to get the opinion of a computer architect, someone who designs chips and uses various high- and low-level languages.)
The only things I've read about this are "Real Patterns" and "Pattern and Being", actually, so Dennett's broader position is completely unknown to me in a way that is no doubt deleterious. And I'm generally poorly read in phil mind. So, you know. Kid gloves?
that is indeed what we are "presented with" (i.e. in theory): some thought (type or token) is supposedly identical with some neural event (type or token)
What you are presented with when, having mailed me a letter in which you ask me what the capital of Idaho is, you open my reply and read "Boise", is not identical with a neural event, I hope.
'"having the hypothesis" that chess is being played'
ReplyDeleteI think I want to say that such a hypothesis is nonsense. There are criteria for playing chess, and the physical system (a galaxy "at play," was it?) cannot (i.e., grammatical impossibility) meet any of them. The criteria (all of which are behavioral) for playing chess are constitutive of what it means to say of someone that they are "playing chess," i.e., they are constitutive of what it is to play chess.
And if it is nonsense to say that the galaxy (was it a galaxy?) is playing chess, then it is nonsense to say that any part of the galactic state is a "translation" of a move in chess.
I don't think you'd be able to do so because I don't think that's how the program would work.
This is why I think your objection is to functionalism rather than the intentional stance. I agree that functionalism was way overambitious, in fact merely a revamping of the identity theory (yet there are some insights to be gleaned from it). I haven't read Putnam's "Why Functionalism Didn't Work" recently, but I think I generally agreed with it. In this case (for the purposes of evaluating Dennett's point about "real patterns") I think we should allow the possibility he discusses.
No, a letter isn't a neural event; but receiving a letter rather than a brain scan doesn't allow us (as Dennett's example intends) to show the advantage of the intentional stance. Paradoxically, it's because here I *really* have no choice but to assume that someone (presumably you) made those marks, intending them to spell out the name of the city in question, etc.
N. N.: I think I want to say that Hacker (not Wittgenstein) is requiring you to miss the point. For what it's worth, I agree that there is indeed an important sense in which computers can't "play chess". But all that means is that it would be misleading to say that Deep Blue is "intelligent". That's not the point here. If I'm playing chess using my computer's chess program, it would be crazy to object to my wondering whether, if I exchange rooks, he will retake with the remaining rook or with his queen, by saying "well, neither of those, obviously: computers can't play chess." I think you're conflating your "grammatical" scruples with Ben's objection to functionalism (that is, you're making his point only more dogmatically, as if it were analytically true).
Plop plop, fizz fizz: Duck trades Weltanschauungs again. That humans intend certain acts, make decisions, and have some degree of "freedom" hardly suffices as a refutation of physicalism, or even determinism. You could decide to stop eating, but that hardly means you won't be hungry very quickly. There's no contradiction in saying "normal human brains do something we call thinking" (including intending acts, or volition, etc.). Or in older terms, "matter thinks."
Chess is a human invention, as is the computer: and both show that some human-primates have "higher-order" cognitive skills: Kasparov may be a genius, but that genius-intellect developed via observation, habit, perception, study. Garry wasn't "Kasparov" when born.
Additionally, the chess program does not become conscious because it has been programmed to perform or duplicate those higher-order skills. (Perhaps if the Deep Blue programmers could design a virtual mouth for DB, and it starts to speak..........)
Dave,
I’m not yet in a position to judge whether Hacker is faithfully articulating Wittgenstein’s position. One reason for this is that I’m not yet in a position to sort out the similarities and differences between Wittgenstein and Ryle. Obviously, Ryle was influenced by Wittgenstein (and Hacker by Ryle). But how faithful is, say, Concept of Mind to Wittgenstein’s psychological remarks? I’m not sure. Personally, I’ve found Ryle to be a great help in getting a handle on parts of Wittgenstein’s thought. But maybe this is a mistake. So, instead of claiming a Wittgensteinian position, I will claim a ‘Hackensteinian’ position. If the two turn out to be the same, great. If not, I can still make various points, and we can see what merit they have.
You say, “I agree that there is indeed an important sense in which computers can’t ‘play chess’. But all that means is that it would be misleading to say that Deep Blue is ‘intelligent’.” Isn’t this sufficient to make my point? Deep Blue cannot be intelligent, it cannot think. And ‘cannot’ here indicates a grammatical impossibility. PI, 360: “But a machine surely cannot think! – Is that an empirical statement? No. We only say of a human being and what is like one that it thinks.” And to be like a human being, to resemble a human being, is to behave like a human being (PI, 281). This is because our psychological concepts are grammatically connected with the behavior that manifests mental properties such as intelligence. Deep Blue, much less a galaxy, does not behave like a human chess player. Consequently, it does not play chess. And this sense of ‘playing chess’ is the only relevant one.
You say, “If I’m playing chess using my computer’s chess program, it would be crazy to object to my wondering whether, if I exchange rooks, he will retake with the remaining rook or with his queen, by saying ‘well, neither of those, obviously: computers can’t play chess.’” Why not say something like, ‘If I exchange rooks, will my rook be taken by the opposing queen?’ It’s the attribution of chess moves to the computer that’s nonsensical. Whether the opposing queen takes your rook or not, the computer isn’t playing chess. As Hacker puts it, “The appearance of an appropriate [move] on the screen is a product of the behaviour of the programmer who designed the program, but not a form of human behavior of the machine” (Wittgenstein, p. 53).
N. N.:
You are right not to worry too much about exactly how faithful Ryle or Hacker is (or you are) to Wittgenstein, and to concentrate instead on which ideas are of use to you, wherever they may come from. I do find "Hackenstein" to be at odds with my own use of Wittgenstein, but what has been striking me recently is not so much his antipathy toward Dennett (which is to be expected, and not entirely wrong-headed) but his, shall we say, incompatibility with Davidson. I hope to get into that some other time. Incidentally Ryle was Dennett's teacher at Oxford, and he definitely says Rylean things from time to time, but he also recognizes differences in their views.
Deep Blue, much less a galaxy, does not behave like a human chess player. Consequently, it does not play chess. And this sense of ‘playing chess’ is the only relevant one.
That last part just looks needlessly dogmatic to me. I don't see how thinking of meaning in terms of use is compatible with such categorical restrictions on what people are allowed to say. We don't have to defer to the man in the street (as people accuse "ordinary language philosophers" of doing -- where did they get that idea anyway??), but they don't have to defer to us either. Again, I won't go into it here (I've mentioned it before), but this use of the distinction between "empirical" and "grammatical" looks weird to us Davidsonians (as well as, as I would have put it before encountering "Hackenstein," Wittgenstein himself -- yes, even given those quotations).
Why not say something like, ‘If I exchange rooks, will my rook be taken by the opposing queen?’
You could say that; but it sounds forced. Like General Semanticists, who won't use the present tense of the verb "to be" because nothing is ever *really* identical with anything else. Okay, maybe not *that* crazy; and I do of course agree (how could I not?) that careless ways of talking can get you into trouble. I just don't see how Hacker can justify a distinction (if he wants to make one at all) between a "grammatical" *requirement* (such that breaking it leaves one speaking *nonsense*) and (supposedly) helpful advice about where one might be inviting confusion in speaking a certain way. I would agree with Dennett in noting the scrupulous – if schoolmarmish – warning, and rejecting the absolute prohibition as putting roadblocks on the path of inquiry.
But maybe you should read "Real Patterns" to get a sense of what Dennett is after here. (Your references to "the galaxy" as the supposed agent here make me nervous.)
Dink-rabbit even botches general semantics! The gen. semantics people objected to "to be" mainly on olde appearance/reality grounds: say one encounters the blog of some bitter, backwater academic, Schmutzstein, known as a liar, who has struggled to stay in some graduate Marxist rhetoric program, and notes that he says he "was" expelled. The gen. semanticist, knowing of the academic's reputation as a liar, says, tentatively, "The liar Schmutzstein appears to have been expelled" rather than a somewhat indelicate assertion (and "conclusionary" as some legalbeagles belch) such as "The liar Schmutzstein was expelled." Schmutzstein may very well have flunked, or perhaps he never was in the program to start with but really works as a bookie or somethin'.
(most semi-bright philo-hacks who scrolled through some cliffsnotes to Analytical Phil. if not Aristotle, also realize that "to be" indicates not only identity or assertions of various sorts but inclusion: "All A are B" IS not identity but inclusion. All cats are mammals, but cats are hardly identical with mammals as a whole.)
May I jump in?
I think it's worth reading PI 359 and 360 in full here:

"Could a machine think?--Could it be in pain?--Well, is the human body to be called such a machine? It surely comes as close as possible to being such a machine.

"But a machine surely cannot think!--Is that an empirical statement? No. We only say of a human being and what is like one that it thinks. We also say it of dolls and no doubt of spirits too. Look at the word "to think" as a tool."
Some Wittgensteinians tend to treat the statement here of what we say as both a fact about ordinary language and as a rule prescribed by Wittgenstein for what we are to say. This is the kind of thing that leads people to think that ordinary language philosophers take orders from the man on the bus.
But what we say is not necessarily right. "What we say" need not have much normative force for us (if it is more what they say than what I say). Dolls don't really think, and there aren't any spirits. Or so I would say. In saying so, though, I am making a kind of value judgment, refusing to go along with certain games played with dolls and certain kinds of religious beliefs.
I think Dennett would emphasize the last sentence of 360. If we think of words as tools then we can decide to use them in new ways, for new purposes. This is what Dennett has in mind, and I don't see Wittgenstein saying that he can't do it. But we don't have to go along with Dennett. And if we want to keep a moral distinction between human beings and machines (as is perhaps hinted at in 359) then this might be a good reason not to play his game.
Thanks DR, that's very helpful. Of course you're right that "we don't have to go along with Dennett." But if every time he says something like "and then when the visual cortex tells the hypothalamus ... " we say "stop right there: that's strictly nonsensical (and by the way it's the homunculus fallacy)" then we'll never find out whether his view actually does erode any vital moral distinctions (an issue he addresses at length – for example in Freedom Evolves, which I'm part way through).
So there are two ways in which we don't have to "go along with Dennett." We can close our ears because of linguistic scruple, or we can decide at the end that talking that way wasn't worth it after all. But we can do *that* in two ways: we can reject his views (for whatever reason, of which one might indeed be that he hadn't redeemed his conceptual promissory notes); or we can accept the substantive core of what he says but then translate it (back) into our preferred idiom. For example, if AI succeeds beyond everyone's expectations (and no, I'm not holding my breath), I don't have to admit that "computers can be conscious after all" – I can say (and I think this is what I *would* say) not that a computer had become conscious but that an artificial agent had been created. (My own worry here is that if an artificial entity could speak, we would not understand it.)
I'll talk more specifically about Dennett later. In short I just can't understand why purported anti-Cartesians see him as virtually indistinguishable from (say) Churchland – or as not getting just as much from Ryle and Wittgenstein as he does from Quine (himself hardly the devil incarnate!). Yes, he's a "materialist" – but so what? That just means he's not a "dualist." In this sense (as on Manuel Delanda's reading) Deleuze is a "materialist" too (although OTOH I can see why that might not help!).
I think I agree, although I haven't yet read the whole post so I won't commit to that. Refusing to listen to Dennett because of a linguistic scruple seems like a confusion to me (but I'm curious about what N.N. or Hacker would say about this--I must read more). Refusing to listen because of some kind of moral scruple is more acceptable to me than it seems to be to you. I wonder how finely we can distinguish kinds of scruple though. "Nonsense" is not simply a descriptive term, after all.
I don't mind refusing to listen because of moral scruples, say to arguments purporting to show the inherent superiority of one race or sex, or the justifiability of torture (though I think I don't want to *demand* turning a deaf ear there either – I just won't object if you do).
The point here is that I see no reason to believe that we can even *tell* that our moral scruples would be violated, before the argument even gets going. I do agree though about the possible (even necessary) overlap of types of scruple.
And of course I myself am happy to turn a deaf ear to similar things on equally non-empirical grounds. For example I don't want to hear about how we have finally "located" consciousness in some particular brain structure. I already know ("grammatically", if you like, I suppose) that consciousness doesn't have a "location" in the brain, and no empirical result is likely to convince me otherwise.
We must all read more.
Yes. I was really just quibbling over an apparent implicit consequentialism in your suggestion that we couldn't know whether our scruples were in danger until we saw the results of speaking a certain way. Speaking that way might in itself be a violation of certain ideas of right and wrong. One possible source of these ideas would be glassy-eyed devotion to the words of Wittgenstein, but I'm sure there are others. I don't think Hacker is glassy-eyed in that way. He might be confused, but I'm in no position to accuse him of that either.
ReplyDeleteDuncan,
Instead of saying that the ordinary language philosopher “takes orders from the man on the bus,” I would say that he “takes orders” from language. For if they both speak English, the philosopher and the man on the bus speak the same language. Before he ever learned philosophy, the philosopher learned to use the same ordinary language as the man on the bus. So when the philosopher, in his everyday discourse, talks about ‘thinking’ or ‘playing chess,’ he means the same as other English speakers.
“But what we say is not necessarily right.” Isn’t this like saying “What we call ‘chess’ is not necessarily chess”? Chess is what we say it is, i.e., that’s just what we mean by ‘chess.’ “Grammar tells us what kind of object anything is” (PI, 373).
I don’t think I’m refusing to listen to Dennett because of linguistic scruples, or mere linguistic scruples. As I see it, linguistic scruples are conceptual scruples. We get our concepts from ordinary language, and we understand the world with our concepts.
Sorry, this is too brief, but I don’t have much time at the moment. Perhaps this will provoke some response, and then I can fill in any missing parts in the next round.
I'll be brief too. Not too brief, I hope, but we'll see.
I agree that ordinary language philosophers should take orders from language rather than from anyone on a bus, if they take orders at all. I was trying to explain a common criticism of ordinary language philosophy, that's all.
As for "What we call chess is not necessarily chess," I would want to know the context, use, etc. before saying whether I agree with this or not. It sounds absurd, but I suppose people do sometimes make mistakes, so if those people count as "we" then it would be true. I take it that your point is precisely that it is absurd, but I was talking about Wittgenstein's assertion that we say of dolls that they think. This is something people say, most obviously children playing certain kinds of games, but this doesn't make it true. That was the point I was trying to make. It makes sense for a child to say "My dolly thinks you're horrid!", but we don't have to agree (or even agree to play this game) just because some people talk this way.
As for Dennett, I don't know. He's not my cup of tea, but I'm not familiar enough with Hacker's (and your) criticisms of him to know whether I agree with them or not.
Nothing I wrote was intended as a criticism of your position, but of course that doesn't mean you have to agree with me.
Duncan,
Sorry. I thought some of your comments were directed against the 'Hackensteinian' position I am attempting to defend.
Let me return to the chess example, because I think the point there is central to my argument. It is certainly possible for a person not to know what chess is or to be confused over some of the rules, etc. In that case, if they describe what they mean by 'chess,' they will be wrong. And to be wrong here is to be at odds with the rest (or the vast majority) of the speakers of the language. This, however, is not the sort of case that I'm interested in.
If we leave aside those who have not properly learned the use of the word 'chess,' and focus on those who have, it makes no sense to say of the latter that they are 'wrong' about what sort of game chess is. The word 'chess' has the meaning that the language community has given it. That is, this is just what 'we' (i.e., English speakers) call 'chess.'
(One qualification: when speaking about the meaning of the word 'chess,' Wittgenstein does sometimes talk of two equivocal senses. (1) The rules that constitute the game, etc., and (2) the history of the game. In the second sense, it would be possible to talk about our whole culture being wrong about chess, e.g., if the game's ancient inventors didn't play it on a board, or didn't allow pawns to initially move two spaces. For present purposes, we need not bother with this sense of wrong.)
It makes no sense (so say I) to say that a language community, as a whole, is 'wrong' in its use of a word. Put another way, it makes no sense to say that the meaning a word has in ordinary language is 'wrong.' They mean whatever they mean by the word, and there is no higher court of appeal (e.g., a scientific inquiry) in which a member of that community could find out the real meaning of the word.
In this sense, ordinary language is normative.
N.N.,
I think I agree with all of that. Thanks.
O.K. How about this, then: What goes for ‘chess’ goes for ‘playing chess,’ ‘being struck,’ ‘thinking,’ etc. These expressions have the meanings that we (i.e., English speakers) have given them, and therefore, we cannot be wrong about what it means to ‘play chess,’ ‘be struck,’ ‘think,’ etc. That is, we cannot be wrong about what it is to play chess, etc. For what it is to play chess is constituted by the rules for the use of the expression ‘play chess.’ Again, “Grammar tells us what kind of object anything is.”
It seems to me that the linguistic fact that ‘we only say …’ is itself prescriptive. For if someone means by ‘think’ what we mean by ‘think,’ then he cannot say that a machine thinks, i.e., he would be talking nonsense. This is not to say that a sense cannot be given to the form of words ‘A machine thinks.’ But to do so would be to use ‘think’ in a way that is different from, and contrary to, its normal use. In that case, the statement ‘A man thinks and a machine thinks’ would involve an equivocation.
I'm not so sure. I agree that we can't all be wrong about what playing chess is, what thinking is, etc., in the sense that we English-speakers collectively define what counts as playing chess, thinking, etc.. But what if someone then comes along and says of something that it plays chess, thinks, or whatever too, when this thing has never before been said to do those things? Now, the person might admit that this is a new use and try to make a case for using the same word. Or she might insist that this new case is really just the same as the old, familiar ones. How do we determine what is the same and what is not the same? Isn't it partly a matter of what strikes (in this case) English-speakers as being the same? And many English-speakers happily say that computers play chess. I'm not saying that they are right to do so. Other competent speakers deny that computers really play chess. I think we have a decision to make about what we ought to say, and this decision should (in my opinion) be thought about carefully. It might be harmless and convenient to talk about machines the way we used only to speak about human beings (and dolls and spirits), but it might really not be harmless at all. I don't think we can reasonably avoid thinking about this by pointing out that when machines "play chess" they do not literally do what human beings do when they play chess, because it can always be insisted that what machines do is the same in all relevant respects. Judgments of relevance, as I see it, are a kind of value judgment. And we don't have to be conservative or majoritarian about value judgments. It is possible and permissible to innovate. It is our language, after all, not just theirs.
Hmm. That's probably pretty obscure. What I'm most unsure about in your most recent post is the bit where you say that someone who says that a machine thinks is using the word 'thinks' in a way that is different from and contrary to the established use. The existing use was already flexible enough to allow dolls and spirits to be said to think too. Why not machines? (I don't mean that no reason could be given--I'm simply denying that it's a no-brainer.) And how do we get to the claim that the new use is not only different from but contrary to the existing use? Not to mention the claim that saying that machines think really is new or different in any significant sense. Claiming that a tractor thinks would be at least close to nonsensical, but computers are significantly different from tractors. How do we apply our old rules to this new technological development? I think that is up to us, and not simply determined by the old rules.
Duncan,
If you get a chance, check out this relevant article by Klagge, "Wittgenstein and Neuroscience":
http://www.phil.vt.edu/JKlagge/Neuroscience.pdf
Happy Holidays to all.
Thanks N.
I'll read that as soon as the kids are back in school.
Klagge's article is excellent (although I have some reservations, just as he seems to, about section 5). He treats the Churchlands' position as potentially dangerous, though, not simply as nonsensical or grammatically/conceptually mistaken. Perhaps it's a mixture of both. I'd rather see more exploration of the moral dangers than a short, sharp denunciation of certain claims as nonsensical. The latter kind of response strikes me as unlikely to work, and as failing to address what is most interesting and significant about the issue. (I'm not accusing anyone in particular of offering no more than such denunciations, but it is the kind of thing one sometimes hears from Wittgensteinians--and I have in mind here more Swansea-types, e.g. me when I'm being lazy, than n.n. or Hacker, by the way.)