Mad Belief?

 

Eric Schwitzgebel

Department of Philosophy

University of California at Riverside

Riverside, CA  92521

 

eschwitz at domain: ucr.edu

 

May 12, 2011

 


Mad Belief?

 

 

Abstract:

“Mad belief” (in analogy with Lewisian “mad pain”) would be a belief state with none of the causal role characteristic of belief – a state not caused or apt to have been caused by any of the sorts of events that usually cause belief and involving no disposition toward the usual behavioral or other manifestations of belief.  On token-functionalist views of belief, mad belief in this sense is conceptually impossible.  Cases of delusion – or at least some cases of delusion – might be cases of belief gone half-mad, cases in which enough of the functional role characteristic of belief is absent that the subject is in an “in-between” state regarding the delusive content, such that it is neither quite right to say the subject determinately believes the delusive content nor quite right to say that she determinately fails to believe that content.  Although Bortolotti (2010) briefly mentions such “sliding scale” approaches to the relationship of delusion and belief, she dismisses such approaches on rather thin grounds and then later makes some remarks that seem consonant with sliding scale approaches.


 

Mad Belief?

 

David Lewis writes:

There might be a strange man who sometimes feels pain, just as we do, but whose pain differs greatly from ours in its causes and effects.  Our pain is typically caused by cuts, burns, pressure, and the like; his is caused by moderate exercise on an empty stomach.  Our pain is generally distracting; his turns his mind to mathematics, facilitating concentration on that but distracting him from anything else.  Intense pain has no tendency whatever to cause him to groan or writhe, but does cause him to cross his legs and snap his fingers.  He is not in the least motivated to prevent pain or to get rid of it.  In short, he feels pain but his pain does not at all occupy the typical causal role of pain.  He would doubtless seem to us to be some sort of madman, and that is what I shall call him, though of course the sort of madness I have imagined may bear little resemblance to the real thing (1980, p. 216).

The current essay concerns Lisa Bortolotti’s excellent book, Delusions and Other Irrational Beliefs (2010); I shall begin by considering the possibility not of mad pain but rather of mad belief.  Might there be a person who has beliefs, just as we do, but whose beliefs have entirely different causes and effects?

Daiyu, let’s suppose – or at least let’s try to suppose – believes that most pearls are white.  However, this belief was not caused in the normal way.  It was not caused, for example, by having seen white pearls, nor by hearing testimony to the effect that pearls are white, nor by inferring that pearls are white from some other facts about pearls and whiteness.  It was caused, let’s say, by having spent four seconds watching the sun set over the Pacific Ocean.  And, for her, that is just the sort of event that would cause that belief: Daiyu would never have formed that belief by any normal means like those described above; rather, the kinds of events that cause that belief in her, in all “nearby possible worlds” or across the relevant range of counterfactual circumstances, are perceptions of setting-sun events of a certain sort, and maybe also eating spicy radish salad on a Wednesday.  The kinds of events that would cause her to cease believing that most pearls are white are also atypical: watching the sun rise over the Atlantic, perhaps, or putting daisies in her hair.  Furthermore, Daiyu’s belief that most pearls are white has entirely atypical effects.  It does not cause her to say anything like “most pearls are white” (which she would deny; she’d say instead that most pearls are black) or to think to herself in inner speech that most pearls are white.  She would not feel surprise were she to see a translucent purple pearl.  If a friend were to say to Daiyu that she was looking for white jewelry to accompany a dress, Daiyu would not at all be inclined to recommend a pearl necklace.  Nor is she disposed to infer from her belief that most pearls are white that there is a type of precious object used in jewelry that is white.  Instead, Daiyu’s belief that most pearls are white causes her to flush on the left side of her body when talking on the telephone and to say “seventeen” whenever someone asks her to pick a number between one and twenty.

Lewis’s mad pain may or may not be conceivable.  However, mad belief of this radical sort, I hope you will agree, is inconceivable – or at least inconceivable in any sense that can serve as a guide to possibility.  If Daiyu does in fact have that causal-functional structure, she doesn’t really believe that most pearls are white.  Functional role or causal-dispositional structure is essential to belief; and it is essential on a token-token basis.  Unlike being in pain (as Lewis portrays it) or having a heart (as I imagine it), it is not sufficient for believing that P that you be in a state, or possess an entity, that typically, for members of your population, plays a certain functional role.  That state has to actually play that functional role, for you.  Call this view token functionalism about belief.  My own version of token functionalism puts all the weight on the effects of the state in question (or, as I prefer to conceive of it, the dispositional profile of the person in that state) and none on the causes: Beliefs can arise in any old weird way, but – if they are to be beliefs – they cannot have just any old effects.  They must have, broadly speaking, belief-like effects; the person in that state must be disposed to act and react, to behave, to feel, and to cognize in the way characteristic of a normal believer-that-P (Schwitzgebel 2002).  I hope that seems plausible to you, but for the purposes of this commentary I don’t want to assume that specific version of token functionalism; any token functionalism will do.

On token functionalism, then, mad belief – reading “mad” in the strong, Lewisian sense – is impossible.  But what about mad belief in a more attenuated, more realistic sense?  What about cases in which the subject’s state plays some but not all of the relevant functional role, has some but not all of the appropriate belief-ish causes and effects?  Delusions would appear to be just such a case.  Delusions, as Bortolotti emphasizes repeatedly in her book, often fail to integrate in the usual way with other aspects of the subject’s mental life and often fail to manifest in action.  For example, the person suffering from Capgras delusion, who asserts, apparently sincerely, that a loved one has been replaced by an imposter, may not react in what would seem the normal way: she may, for example, make little effort to find the supposedly missing loved one; she may continue living with the “imposter”; she may recognize the implausibility of her delusive claim and yet do little to revise or defend it.  A patient can have the delusional view that her doctors and nurses are out to poison her and yet happily eat the food they provide.  A delusional person might say that he is to be married that evening and yet make no preparations.  (See Bortolotti 2010, esp. pp. 164-167; also Bleuler 1911/1950; Schneider 1950/1959; Sass 2001; Gallagher 2009.)  And of course, too, delusive attitudes tend not to have the usual causes characteristic of belief: One does not come to endorse the delusional content “I am dead” (a version of Cotard’s syndrome) in the usual way that one comes to believe that someone is dead, such as by reading an obituary or hearing it from a relative.

Since the match to a functional profile is a matter of degree, it seems natural to suppose that possession of belief will also, at least sometimes, be a matter of degree.  We can support this conclusion with a slippery slope argument: If Daiyu does not believe that most pearls are white because the candidate state does not play the right functional role, and if I do have that belief, possessing a state that mostly plays the right functional role, it seems that we can create a series of cases between me and Daiyu; and if there is no bright line such that case N is determinately a case of belief while case N + 1 is determinately not a case of belief, then it would appear that “believes that most pearls are white” is a vague predicate admitting of vague cases, what I would call “in-between” cases (see also Schwitzgebel 2001, 2002, 2010).  In in-between cases of canonically vague predicates like “tall”, the appropriateness of ascribing the predicate varies contextually, and often the best approach is to refuse either simply to ascribe or simply to deny the predicate, and instead to specify more detail (e.g., “well, he’s five foot eleven”); so too, I would argue, in in-between cases of belief.

Is it possible, then, that cases of delusion are, at least sometimes (when the functional role or dispositional profile is weird enough), cases in an in-betweenish gray zone – not quite belief and not quite failure to believe?  Bortolotti raises this possibility in her discussion of “sliding scale” approaches on pp. 20-21, but offers only slender reason to dismiss it: She suggests that the sliding scale approach makes it less than straightforward to answer questions about whether an action is intentional, which complicates ethical and policy applications.  In reply, of course, the friend of the sliding scale might suggest that in many cases of delusion it shouldn’t be straightforward to assess intentionality, and that the ethical and policy applications are complicated, so that a philosophical approach that renders these matters straightforward is misleadingly simplistic.

Through virtually the whole book, Bortolotti presents herself as defending the view that delusions are beliefs against the view that they are not beliefs, without – it seems to me – much recognition of the possibility that at least some of them might be vague, in-betweenish cases, in some respects belief-like and in other respects not very belief-like.  However, near the end of the book, Bortolotti comes close to endorsing the in-between approach.  After discussing disowned thoughts, which she characterizes as not beliefs, and contrasting them with delusional reports that the subject fully endorses, which she does characterize as beliefs, she writes:

Rarely do we have these clear-cut cases: delusional reports that are fully endorsed versus disowned thoughts.  Most of the delusions we read about, and we come across, are integrated in the subject’s narrative, to some extent, and with limitations.  They may be excessively compartmentalised, for instance, or justified tentatively.  That is what makes it so difficult to discuss the relationship between delusions, subjects’ commitment to the content of the delusion, and autonomy.  As authorship comes in degrees, so does the capacity to manifest the endorsement of the delusional thought in autonomous thought and action (p. 252).

Since Bortolotti appears to regard authorship and endorsement as necessary for belief (e.g., p. 242), it seems to follow that she is acknowledging here that many actual delusions are in-between or gray-area cases of belief.  Thus, I would like to invite Bortolotti, in her reply to this commentary, to clarify the relationship between what she says on page 252 and her repeated assertions throughout the book – including in the title – that delusions are beliefs, against all those who would say that they are not.  It seems, rather, that her position should be an intermediate one, and that she should endorse the sliding scale view that she appears to reject early in the book.

It doesn’t, of course, follow from the abstract recognition that “believes” is a vague predicate admitting of in-between cases that most, or any, delusions belong among those in-between cases.  It appears to be characteristic of delusion, for example – though often with some wavering – that when the deluded person attempts to sincerely express her opinion about the subject matter at hand, she asserts the delusional content.  And maybe we should regard that dispositional fact about delusional people, when it is a fact, as sufficient for belief, even if the rest of the causal-functional structure characteristic of belief is absent.  I don’t know whether Bortolotti would accept this view.

A good test case of this idea, I think – one that doesn’t turn on delusion – is the case of implicit bias.  Suppose that Juliet is someone who consistently endorses the intellectual equality of the races: Whenever the question explicitly arises, Juliet says, repeatedly and without feeling any uncertainty, that all the races are intellectually equal.  Perhaps, even, she is a psychological researcher who takes her own research to have shown that fact beyond doubt.  And suppose that nonetheless Juliet is racially biased in her assumptions about particular individuals she meets; suppose she tends to be surprised when a black person says something smart; suppose that her spontaneous actions and reactions generally accord with the view that black people are intellectually inferior.  Should we say that Juliet believes that all the races are intellectually equal?  I propose that we regard Juliet as an in-between case: It’s not quite right to say that Juliet believes, and it’s not quite right to say that she fails to believe, although in different contexts, with different purposes, one or another ascription may serve well enough.  (For a more detailed discussion of this case and related cases, see Schwitzgebel 2010.)  What makes belief important – what gives the term “belief” its central role in philosophy of mind, philosophy of action, and epistemology – is that our beliefs give shape to our actions, our reactions, and our cognitions generally, including our spontaneous assumptions and inferences as well as our more thoughtful reflections.  To borrow a metaphor from Frank Ramsey (1931), beliefs are the maps by which we steer.  Or better: To believe that P just is to steer a certain kind of P-ish path through the world.  Sincere linguistic endorsement is one important kind of steering, but it is not by itself important enough to justify giving a concept defined entirely in its terms the central role that belief has in philosophy; what matters more is how we steer generally.  And in the case of Juliet, and also in many cases of delusion, the steering is a bit mixed up; there isn’t a single consistent map.  In such cases the most careful ascriptions refrain from either attributing or denying belief in the relevant proposition, as though that proposition were simply on the map or simply absent from the map; such cases are indeterminate and in-between.

Delusions and Other Irrational Beliefs appears to have two central theses: (a) delusions are beliefs; and (b) the irrationality characteristic of delusions is not different in kind from (though perhaps it is more extreme than) the irrationality often found in ordinary beliefs.  While I have challenged Bortolotti regarding (a), I find her case for (b) compelling.  Among its many virtues, the book is an awesome catalog of a broad range of psychological research on human irrationality.  I don’t see how a careful reader can emerge from this book and still think that rationality is any kind of hard constraint on belief ascription.  Thus, it seems to me, this book is a major contribution to the literature on irrationality, and I hope that it will be read widely by people interested in irrationality in general and not only in delusion in particular.

And yet I think we can deny the rationality constraint without surrendering the core insight of Davidson (1985) and Dennett (1987): When things go too haywire, the rules of interpretation start to fail; when a person deviates too much from the causal-functional patterns in behavior and cognition characteristic of belief, the assumptions inherent in the practice of belief ascription start to break down; and then we have to either abandon belief talk or allow for some indeterminacy in it.[1]


 

References

Bayne, Tim, and Elisabeth Pacherie (2005).  In defence of the doxastic conception of delusions.  Mind and Language, 20, 163-188.

Bleuler, Eugen (1911/1950).  Dementia praecox or the group of schizophrenias, trans. J. Zinkin.  New York: International Universities.

Bortolotti, Lisa (2010).  Delusions and other irrational beliefs.  Oxford: Oxford.

Davidson, Donald (1985).  Incoherence and irrationality.  Dialectica, 39, 345-354.

Dennett, Daniel C. (1987).  The intentional stance.  Cambridge, MA: MIT.

Gallagher, Shaun (2009).  Delusional realities.  In M. R. Broome and L. Bortolotti, eds., Psychiatry as cognitive neuroscience.  Oxford: Oxford.

Lewis, David (1980).  Mad pain and Martian pain.  In N. Block, ed., Readings in philosophy of psychology, vol. 1.  Cambridge, MA: Harvard.

Ramsey, Frank P. (1931).  The foundations of mathematics and other logical essays.  London: Routledge.

Sass, Louis Arnorsson (2001).  Self and world in schizophrenia: Three classic approaches.  Philosophy, Psychiatry, and Psychology, 8, 251-270.

Schneider, Kurt (1950/1959).  Clinical psychopathology, trans. M. W. Hamilton.  New York: Grune and Stratton.

Schwitzgebel, Eric (2001).  In-between believing.  Philosophical Quarterly, 51, 76-82.

Schwitzgebel, Eric (2002).  A phenomenal, dispositional account of belief.  Noûs, 36, 249-275.

Schwitzgebel, Eric (2010).  Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief.  Pacific Philosophical Quarterly, 91, 531-553.



[1] For useful discussion and comments on the topic, thanks especially to Lisa Bortolotti, Andy Egan, Dylan Murray, Bill Robinson, Maura Tumulty, and readers of my blog, The Splintered Mind.  I was influenced in my thinking about the relationship between my 2002 account of belief and the literature on delusions by Bayne and Pacherie 2005 and by Maura Tumulty’s 2009 Society for Philosophy and Psychology presentation on the topic.