Belief

Most contemporary philosophers of mind regard belief as a propositional attitude.  Propositional attitudes are held to be mental states having the canonical formulation ‘S A that P’, where ‘S’ refers to the subject possessing the mental state, ‘A’ expresses the general type of attitude, and ‘P’ expresses a proposition by means of a complete sentence.  For example: ‘Roshini [the subject] believes [the attitude] that snow is white [the proposition]’; or ‘Jake [the subject] hopes [the attitude] that Kelli kissed Ahmed [the proposition]’.  Besides belief, other widely discussed propositional attitudes include desire and intention.  The full list of propositional attitudes is generally viewed as extensive.

Philosophers willing to describe belief as a propositional attitude hold a wide variety of views about the nature of propositions (see propositions, singular; propositions, structured), including the view that propositions don’t actually exist; and to a considerable extent the debate about the nature of propositions runs separately from the debate about the nature of propositional attitudes such as belief.  At a minimum, a proposition is supposed to be whatever it is that is expressed by a sentence, such that, when two people say the same thing by uttering different sentences, they are expressing the same proposition.  Thus, if Roshini says ‘snow is white’ in English and Uta says ‘Schnee ist weiss’ in German, they are generally thought to be saying the same thing or expressing the same proposition.  Correspondingly, ‘Roshini believes that snow is white’ is supposed to attribute the same belief to Roshini that ‘Uta glaubt, dass Schnee weiss ist’ attributes to Uta.  Likewise, if ‘Kelli kissed Ahmed’ and ‘Ahmed was kissed by Kelli’ express the same proposition, ‘Jake believes that Kelli kissed Ahmed’ and ‘Jake believes that Ahmed was kissed by Kelli’ attribute the same belief.

Although ‘belief’ is not a high-frequency word in everyday English and may ordinarily connote considered opinion on a topic of broad significance (as in ‘religious beliefs’ or ‘the belief that all people are created equal’), analytic philosophers of mind generally use the term more broadly to capture the attitude often expressed by English sentences of the form ‘S thinks that P’.  This broad usage of ‘belief’ avoids the ambiguity inherent in the word ‘thinks’ between actively reflecting on something (often expressed by the progressive ‘is thinking’, as in ‘Xinyan is thinking about Beijing’) and taking a particular proposition to be true (as in ‘Eli thinks that waking early is a healthy habit’, which can be true even if Eli is not currently pondering the matter).  The nominal form ‘thought’ may then be reserved for thinking in the first sense and the nominal form ‘belief’ for thinking in the second sense.


General Approaches to Belief

The Representational Approach

It is natural to think of believing as involving entities -- beliefs -- that are in some sense contained in the mind.  When someone learns a particular fact, for example, when Kai reads that many astronomers no longer classify Pluto as a planet, he acquires a new belief (in this case, Kai acquires the belief that many astronomers no longer classify Pluto as a planet).  The fact in question -- or, more accurately, a representation, symbol, or marker of that fact -- may be stored in memory and accessed or recalled when necessary.  To possess a belief is to have something stored in the mind in this way.

It is also natural to suppose that beliefs play a causal role in the production of behavior.  Continuing the example, we might imagine that after learning about the potential demotion of Pluto, Kai naturally gets absorbed in other interests and does not consciously consider the matter for several days, until, while reading his son's science textbook, he encounters the sentence "our solar system contains nine planets".  Involuntarily, his new knowledge about Pluto is called up from memory.  He finds himself doubting the truth of the textbook's claim, and he says, "actually, there's some dispute about that".  It seems plausible to say that Kai's belief about Pluto, or his possession of that belief, caused, or figured in a causal explanation of, his utterance.

Various elements of this intuitive characterization of belief have been challenged by philosophers, but it is probably fair to say that the majority of contemporary philosophers of mind accept the bulk of this picture.  A cluster of approaches here termed representational tend to comport rather naturally with it.

Representational approaches to belief hold that central cases of belief involve S's having in her head or mind a representation with the content P.  (But see Implicit Versus Explicit Belief, below, for some caveats.)  As described in the entry on mental representation, and as will be illustrated briefly below, representationalists may diverge in their accounts of the nature of representation, and they need not agree about what further conditions, besides possessing such a representation, are necessary if a being is to qualify as having a belief.  Representationalism is probably the dominant approach to belief in contemporary philosophy of mind, advocated by Jerry Fodor (1975, 1981, 1987, 1990), Ruth Millikan (1984, 1993), Fred Dretske (1988), and Robert Cummins (1996), among many others.

Fodor's version of representationalism takes mental representations to be sentences in an internal language of thought.  To get a sense of what this view amounts to, it is helpful to start with an analogy.  Computers are sometimes characterized as operating by manipulating sentences in "machine language" in accordance with certain rules.  Consider a simplified description of what happens as one enters numbers into a spreadsheet.  Inputs from the keyboard cause the computer, depending on the programs it is running and its internal state, to instantiate or "token" a sentence (in machine language) with the content (translated into English) of, for example, "numerical value 4 in cell A1".  In accordance with certain rules, the machine then displays the shape "4" in a certain location on the monitor, and perhaps, if it is implementing the rule "the values of column B are to be twice the values of column A", it tokens the sentence "numerical value 8 in cell B1" and displays the shape "8" in another location on the monitor.
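
To make the analogy concrete, here is a minimal sketch of this sort of rule-governed symbol manipulation, written in Python purely for illustration; the names and the rendering of the "machine-language sentences" are invented here and do not appear in Fodor's discussion.

    # Toy illustration only: tokening a sentence-like state in response to input
    # and applying the rule "the values of column B are to be twice the values
    # of column A".  Every name here is invented for the sake of the example.

    cells = {}  # the machine's tokened states, e.g. {"A1": 4}

    def display(cell, value):
        print(f"display the shape '{value}' at {cell}")

    def enter_value(cell, value):
        # Token the sentence "numerical value <value> in cell <cell>".
        cells[cell] = value
        display(cell, value)
        # Apply the column rule, tokening a further sentence if it is triggered.
        if cell.startswith("A"):
            row = cell[1:]
            cells["B" + row] = 2 * value
            display("B" + row, 2 * value)

    enter_value("A1", 4)  # displays "4" at A1 and then "8" at B1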

Perhaps eventually we will be able to construct a robot whose behavior resembles that of a human being (but, for objections, see Dreyfus 1992; see also mind, computational theory of and the forthcoming entry on artificial intelligence).  One might also imagine that this robot operates along broadly the lines described above -- that is, by manipulating machine-language sentences in accordance with rules, in connection with various potential inputs and outputs.  If we can imagine this, then perhaps we can also imagine the robot to store a variety of sentences in machine language corresponding to the ordinary beliefs of human beings.  For example, it might somewhere store the machine-language sentence whose English translation is "the chemical formula for water is H2O".  The robot is able to act as does a human who possesses this belief because it is disposed to access this sentence appropriately on relevant occasions: When asked "of what chemicals is water compounded?", the robot accesses the water sentence and manipulates it and other relevant sentences in such a way that it produces a human-like response.

According to the language of thought hypothesis, our cognition proceeds rather like such a robot's.  The formulae we manipulate are not in "machine language", of course, but rather in a species-wide "language of thought".  A sentence in the language of thought with the content P is a "representation" of P.  S stands in the "belief" relation to such a representation just in case the representation plays the right kind of role in S's cognition.  That is, it must not merely be instantiated somewhere in the mind or brain, but it must be deployed, or apt to be deployed, in the kinds of roles that we regard as characteristic of belief.  For example, it must be apt to be called up for use in theoretical inferences toward which it is relevant.  It must be ready for appropriate deployment in deliberation about means to desired ends.  It is sometimes said, in such a case, that S has the represented proposition tokened in her “belief box” (though of course it is not assumed that there is any literal box-like structure in which S stores her beliefs).  (For a more detailed presentation of this view, see the entry on the language of thought hypothesis.)

Dretske (1988) suggests another way of characterizing mental representations, not necessarily incompatible with regarding mental representations as sentences in the language of thought.  His view centers on the idea of representational systems as systems with the function of tracking features of the world (for a similar view, see Millikan 1984, 1993).  Organisms, especially mobile ones, generally need to keep track of features of their environment to be evolutionarily successful.  Consequently, they generally possess internal systems whose function it is to covary in certain ways with the environment.  For example, certain marine bacteria contain internal magnets that align with the Earth's magnetic field.  In the northern hemisphere, these bacteria, guided by the magnets, propel themselves toward magnetic north.  Since in the northern hemisphere magnetic north tends downward, they are thus carried toward deeper water and sediment, and away from toxic, oxygen-rich surface water.  We might thus say that the magnetic system of these bacteria is a representational system that functions to indicate the direction of benign or oxygen-poor environments.  In general, an organism can be said to represent P just in case that organism contains a subsystem whose function it is to enter state A only if P holds, and that subsystem is in state A.

Dretske grants that it may be inappropriate to regard very simple creatures, such as magnetosome bacteria, as literally possessing beliefs.  Perhaps to have genuine beliefs, he suggests, an organism must possess a manifold of related representational systems -- but exactly how rich the representational structure must be, and in what ways, Dretske does not address.  Dretske's treatment is typical of the representationalist literature on belief in devoting considerably more attention to the nature of mental representation in general than to belief in particular.

Dispositional and Interpretational Approaches

Another group of philosophers treats the internal structure of the mind as of only incidental relevance to the question of whether a being is properly described as believing.  One way to highlight the difference is this: Imagine that we discover an alien being, of unknown constitution and origin, whose actual behavior and overall behavioral dispositions are perfectly normal by human standards.  "Rudolfo", say, emerges from a spacecraft and integrates seamlessly into our society, becoming a tax lawyer, football fan, and Democratic Party activist.  Even if we know next to nothing about what is going on inside his head, it seems natural to say that Rudolfo has beliefs much like ours -- for example, that the 1040 is normally due April 15, that a field goal is worth 3 points, and that labor unions tend to support Democratic candidates.  Perhaps we can coherently imagine that Rudolfo does not manipulate sentences in a language of thought or possess internal representational structures of the right sort.  Perhaps it is conceptually, even if not physically, possible that he has no complex, internal, cognitive organ, no real brain.  But even if it is granted that a creature must have human-like representations in order to behave thoroughly like a human being, one might still think that it is the pattern of actual and potential behavior that is fundamental in belief -- that representations are essential to belief only because, and to the extent that, they underwrite such a pattern.

Traditional dispositional views of belief assert that for S to believe that P is for S to possess one or more particular behavioral dispositions pertaining to P.  Often cited is the disposition to assent to utterances of P in the right sorts of circumstances (if one understands the language, wishes to reveal one's true opinion, is not physically incapacitated, etc.).  Other relevant dispositions might include the disposition to exhibit surprise should the falsity of P make itself evident, the disposition to assent to Q if one is shown that P implies Q, and the disposition to depend on P's truth in executing one's plans.  Perhaps all such dispositions can be brought under a single heading, which is, most generally, being disposed to act as though P is the case.  Such actions are normally taken to be at least pretty good prima facie evidence of belief in P; the question is whether being disposed, overall, so to act is tantamount to believing P, as the dispositionalist thinks, or whether it is merely an outward sign of belief.  R.B. Braithwaite (1932-1933) and Ruth B. Marcus (1990) are prominent advocates of the traditional dispositional approach to belief (though Braithwaite emphasizes in his analysis another form of belief, rather like "occurrent" belief as described in Occurrent Versus Dispositional Belief below).

The principal challenge for traditional dispositional views is to account for the evident connection between belief and other mental states.  For example, the behavior of someone with a particular belief will depend, at a minimum, on her relevant desires and related beliefs.  A person who believes that it will rain will only be disposed to take an umbrella if she also believes that the umbrella will help ward off the water and if she desires not to get wet.  Change the surrounding beliefs and desires, and very different behavior may result.  Furthermore, some beliefs may be only remotely connected to outward behavior, for example, on matters about which the subject wants to keep a private opinion (e.g., the belief that Stalin's purges are morally wrong, held by a Muscovite in 1937) or matters of very little practical relevance (e.g., an American homebody's belief that there is at least one church in Nice).  The traditional dispositionalist can respond to these points by loading the dispositions with conditions: Someone who believes that it will rain will be disposed to get an umbrella only if she has such-and-such related desires and beliefs; the Russian would reveal his beliefs about Stalin if, in addition to all the usual conditions for the expression of belief, the political climate changed drastically.  However, most recent philosophers sympathetic with the view described in the first paragraph of this section have not taken that approach.  They divide into roughly two classes, which we may call broad dispositionalists and interpretationists.

Broad dispositionalists do not reject the traditional dispositionalist's emphasis on dispositions, but they do broaden the range of dispositions considered relevant to the possession of a belief so as to include at least some dispositions to undergo private mental episodes that do not manifest in outwardly observable behavior -- dispositions, for example, for S to feel (and not just exhibit) surprise should she discover the falsity of P, for S privately to draw conclusions from P, to feel confidence in the truth of P, to utter P silently to herself in inner speech, and so forth.  Thus, the Russian possesses his belief about Stalin's purges at least as much in virtue of the things he says silently to himself and the disapproval he privately feels as in virtue of his disposition to express that opinion were the political climate to change.  Advocates of views of this sort include H.H. Price (1969), Robert Audi (1972), Lynne R. Baker (1995), Eric Schwitzgebel (2002), and arguably Gilbert Ryle (1949) (though Baker characterizes her view in terms of conditional statements rather than dispositions).

However, a philosopher approaching belief with the specific goal of defending physicalism (or materialism) -- the view that everything in the world, including the mind, is wholly physical (or material) -- would have reason to be dissatisfied with broad dispositionalism.  Although broad dispositional accounts of belief are consistent with physicalism, they do not substantially advance that thesis, since they relate belief to other mental states that may or may not be seen as physical.  The same holds for traditional dispositionalism if it loads other mental states into the conditions of behavioral manifestation.  Since the defense of physicalism has been one of the driving forces in philosophy of mind since the late twentieth century, and one of the principal reasons philosophers have been interested in accounts of propositional attitudes such as belief, this failure to advance the physicalist thesis may be seen as a substantial drawback.

Interpretationism, in contrast, characterizes belief, jointly with desire (and possibly other mental states), wholly in terms of patterns of observable behavior -- as interpretable by an outside observer -- and thus can more easily be seen as advancing the physicalist project, since behavior is widely assumed to be physical.  The two most prominent interpretationists have been Daniel C. Dennett (1978, 1987, 1991) and Donald Davidson (1984).

To gain a sense of Dennett's view, consider three different methods we can use to predict the behavior of a human being.  The first method, which involves what Dennett calls taking the "physical stance", is to apply our knowledge of physical law.  We can predict that a diver will trace a roughly parabolic trajectory to the water because we know how objects of approximately that mass and size behave in free fall near the surface of the Earth.  The second method, which involves taking the "design stance", is to attribute functions to the system or its parts and to predict that the system will function properly.  We can predict that a particular jogger's pulse will increase as she heads up the hill because of what we know about exercise and the proper function of the circulatory system.  The third method, which involves taking the "intentional stance", is to attribute beliefs and desires to the person, and then to predict that he will behave rationally, given those beliefs and desires.  Much of our prediction of human behavior appears to involve such attribution.  Treating people as mere physical bodies or as biological machines will not, as a practical matter, get us very far in predicting what is important to us.

On Dennett's view, a system with beliefs is a system whose behavior, while complex and difficult to predict when viewed from the physical or the design stance, falls into patterns that may be captured with relative simplicity and substantial if not perfect accuracy by means of the intentional stance.  The system has the particular belief that P if its behavior conforms to a pattern that may be effectively captured by taking the intentional stance and attributing the belief that P.  For example, we can say that Heddy believes that a hurricane may be coming because attributing to her that belief (along with other related beliefs and desires) helps reveal the pattern, invisible from the physical and design stances, behind her boarding up her windows, making certain phone calls, stocking up on provisions, etc.  All there is to having beliefs, according to Dennett, is embodying patterns of this sort.  Dennett acknowledges that his view has the unintuitive consequence that a sufficiently sophisticated chess-playing machine would have beliefs if its behavior is very complicated from the design stance (which would involve appeal to its programmed strategies) but predictable with relative accuracy and simplicity from the intentional stance (attributing the desire to defend its queen, the belief that you won't sacrifice a rook for a pawn, etc.).

Davidson also characterizes belief in terms of practices of belief attribution.  He invites us to imagine encountering a being with a wholly unfamiliar language and then attempting the task of constructing, from observation of the being's behavior in its environment, an understanding of that language.  Success in this enterprise would necessarily involve attributing beliefs and desires to the being in question, in light of which its utterances make sense.  An entity with beliefs is a being for whom such a project is practicable in principle -- a being that emits, or is disposed to emit, a complex pattern of behavior that can productively be interpreted as linguistic, rational, and expressive of beliefs and desires.

Although dispositional and interpretational approaches to belief treat behavioral (and possibly cognitive or experiential) patterns, rather than mental representations, as fundamental, representational and dispositional or interpretational approaches can be married if one holds that the existence of the relevant patterns implies, or is implied by, the possession of appropriate mental representations.  Dennett, for example, regards it as likely that the intentional stance works because cognition operates somewhat as defenders of the language of thought hypothesis suppose.

Eliminativism

Some philosophers have denied the existence of beliefs altogether.  This view, generally known as eliminativism, has been most prominently advocated by Paul M. Churchland (1981) and Stephen Stich (in his 1983 book; he subsequently moderated his opinion).  On this view, people's everyday conception of the mind, their "folk psychology", is a theory on par with folk theories about the origin of the universe or the nature of physical bodies.  And just as our pre-scientific theories on the latter topics were shown to be radically wrong by scientific cosmology and physics, so also will folk psychology, which is essentially still pre-scientific, be overthrown by scientific psychology and neuroscience once they have advanced far enough.

According to eliminativism, once folk psychology is overthrown, strict scientific usage will have no place for reference to most of the entities postulated by folk psychology, such as belief.  Beliefs, then, like "celestial spheres" or "phlogiston", will be judged not actually to exist, but rather to be the mistaken posits of a radically false theory.  We may still find it convenient to speak of "belief" in informal contexts, if scientific usage is cumbersome, much as we still speak of "the sun going down", but if the concept of belief does not map onto the categories described by a mature scientific understanding of the mind, then, literally speaking, no one believes anything.

For further discussion of eliminativism and the considerations for and against it, see eliminative materialism.

Forms of Belief

Occurrent Versus Dispositional Belief

Philosophers often distinguish dispositional from occurrent believing.  This distinction depends on the more general distinction between dispositions and occurrences.  Examples of dispositional statements include: (1a.) Corina runs a six-minute mile; (1b.) Leopold is excitable; (1c.) salt dissolves in water.  These statements can all be true even if, at the time they are uttered, Corina is asleep, Leopold is relaxed, and no salt is actually dissolved in any water.  They thus contrast with statements about particular occurrences, such as (2a.) Corina is running a six-minute mile, (2b.) Leopold is excited, or (2c.) some salt is dissolving in water.  However, although (1a-c) can be true while (2a-c) are false, (1a-c) cannot be true unless there are conditions under which (2a-c) would be true.  We cannot say that Corina runs a six-minute mile unless there are conditions under which she would actually do so.  A dispositional claim is thus a claim, not about anything that is actually occurring at the time, but rather about what is prone to occur under certain circumstances.

Suppose Harry thinks plaid ties are hideous.  Only rarely does the thought or judgment that they are hideous actually come to the forefront of his mind.  When it does, he possesses the belief occurrently.  The rest of the time, Harry possesses the belief only dispositionally.  The occurrent belief comes and goes, depending on whether circumstances elicit it; the dispositional belief endures.  The common representationalist warehouse model of memory and belief suggests a way of thinking about this.  A subject dispositionally believes P if a representation with the content P is stored in her memory or "belief box" (in the central, "explicit" case: see Implicit Versus Explicit Belief).  When that representation is retrieved from memory for active deployment in reasoning or planning, the subject occurrently believes P.  As soon as she moves to the next topic, the occurrent belief ceases.

As the last paragraph suggests, one needn't take a dispositional approach to belief in general to regard some beliefs as dispositional in the sense here described.  In fact, a strict dispositionalism may entail the impossibility of occurrent belief: If to believe something is to embody a particular dispositional structure, then a thought or judgment might not belong to the right category of things to count as a belief.  The thought or judgment, P, may be a manifestation of an overall dispositional structure characteristic of the belief that P, but it itself is not that structure.

Though the distinction between occurrent and dispositional belief is widely employed, it is rarely treated in detail.  A few important discussions are Price (1969), Armstrong (1973), Lycan (1986), Searle (1992), and Audi (1994).

Implicit Versus Explicit Belief

It seems natural to say that you believe that the number of months in the year is less than 13, and also that the number of months is less than 14, and also that the number of months is less than 15, and so on, for any number greater than 12 that one cares to name.  On a simplistic reading of the representational approach, this presents a difficulty.  If each belief is stored individually in representational format somewhere in the mind, it would seem that we must have a huge number of stored representations relevant to the number of months -- more than it seems plausible or necessary to attribute to an ordinary human being.  And of course this problem generalizes easily.

This difficulty suggests that the representationalist should draw a distinction between explicit and implicit belief.  One believes P explicitly if a representation with that content is actually present in the mind in the right sort of way -- for example, if a sentence with that content is inscribed in the "belief box" (see above).  One believes P implicitly (or tacitly or "dispositionally", though the latter term may promote confusion with the occurrent-dispositional distinction) if one believes P, but the mind does not possess, in a belief-like way, a representation with that content.

Perhaps it is sufficient for implicit belief that the relevant content be swiftly derivable from something one explicitly believes (Dennett 1978, 1987).  Thus, in the months case, we may say that you believe explicitly that the number of months is 12 and only implicitly that the number of months is less than 13, less than 14, etc.  Of course, if swift derivability is the criterion, then although there may be a sharp line between explicit and implicit beliefs (depending on whether the representation is stored or not), there will not be a sharp line between what one believes implicitly and what, though derivable from one's beliefs, one does not actually believe, since swiftness is a matter of degree (see Field 1978; Lycan 1986).  
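
A schematic rendering of the swift-derivability proposal, applied to the months example, may be helpful here.  The sketch below is purely illustrative (its names and data structure are invented for illustration, not drawn from Dennett or Field): a single representation is stored explicitly, and the "less than 13", "less than 14", etc. contents count as implicitly believed because each follows from the stored representation in one quick step.

    # Purely illustrative sketch of explicit storage plus swift derivation.
    explicit_beliefs = {"number of months in the year": 12}  # the one stored representation

    def explicitly_believes(topic, value):
        # Explicit belief: a representation with exactly this content is stored.
        return explicit_beliefs.get(topic) == value

    def implicitly_believes_months_less_than(n):
        # Implicit belief: not stored, but derivable in a single quick step
        # from what is stored explicitly.
        stored = explicit_beliefs.get("number of months in the year")
        return stored is not None and stored < n

    print(explicitly_believes("number of months in the year", 12))  # True: stored
    print(implicitly_believes_months_less_than(13))                 # True: derived, not stored
    print(implicitly_believes_months_less_than(11))                 # False: not derivable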

The representationalist may also grant the possibility of implicit belief, or belief without explicit representation, in cases of the following sort (discussed in Dennett 1978; Fodor 1987).  A chess-playing computer is explicitly programmed with a large number of specific strategies, in consequence of which it almost always ends up trying to get its queen out early; but nowhere is there any explicitly programmed representation with the content "get the queen out early" (or any explicitly programmed representation from which "get the queen out early" is swiftly derivable).  The pattern emerges as a product of various features of the hardware and software, despite its not being explicitly encoded.  While most philosophers would not want to say that any currently existing chess-playing computer literally has the belief that it should get its queen out early, it is clear that an analogous possibility could arise in the human case and thus threaten representationalism, unless representationalism makes room for a kind of emergent, implicit belief that arises from more basic structural facts in this way.  (However, if the representationalist grants the presence of belief whenever there is a belief-like pattern of actual or potential behavior, regardless of underlying representational structure, then the position risks collapsing into something more like dispositionalism or interpretationism.)

Empirical psychologists have drawn a contrast between implicit and explicit memory or knowledge, but this distinction does not map neatly onto the implicit/explicit belief distinction just described.  In the psychologists' sense, explicit memory involves the conscious recollection of previously presented information, while implicit memory involves the facilitation of a task, or a change in performance as a result of previous exposure to information, without, or at least not as a result of, conscious recollection (Schacter 1987; Schacter and Tulving 1994).  For example, if a subject is asked to memorize a list of word pairs -- bird/truck, stove/desk, etc. -- and is then cued with one word and asked to provide the other, the subject's explicit memory is being tested.  If the subject is brought back two weeks later, and has no conscious recollection of most of the word pairs on the list, then she has no explicit memory of them.  However, implicit memory of the word-pairs would be revealed if she found it easier to learn the "forgotten" pairs a second time.  A memory that is "implicit" in this sense may nonetheless (depending on one's view of memory) be stored "explicitly" in the sense of the previous paragraphs.

De Re Versus De Dicto Belief

W.V.O. Quine (1956) introduced contemporary philosophy of mind to the distinction between belief de re and belief de dicto (as it is now generally called) by means of examples like the following.  Ralph sees a suspicious-looking man in a trenchcoat, and concludes that that man is a spy.  Unbeknownst to him, however, the man in the trenchcoat is the newly elected mayor, Bernard J. Ortcutt, and Ralph would sincerely deny the claim that "the mayor is a spy".  So does Ralph believe that the mayor is a spy?  There appears to be a sense in which he does and a sense in which he does not.  Philosophers have attempted to characterize the difference between these two senses by saying that Ralph believes de re, of that man (the man in the trenchcoat who happens also to be the mayor), that "he is a spy", while he does not believe de dicto that "the mayor is a spy".

The standard test for distinguishing de re from de dicto attributions is referential transparency or opacity.  A sentence, or more accurately a position in a sentence, is held to be referentially transparent if terms or phrases in that position that refer to the same object can be freely substituted without altering the truth of the sentence.  The (non-belief attributing) sentence "Jill kicked X" is naturally read as referentially transparent in this sense.  If "Jill kicked the ball" is true, then so also is any sentence in which "the ball" is replaced by a term or phrase that refers to that same ball, e.g., "Jill kicked Davy's favorite birthday present", "Jill kicked the thing we bought at Toys-R-Us on July 2".  Sentences, or positions, are referentially opaque just in case they are not transparent, that is, if the substitution of co-referring terms or phrases could potentially alter their truth value.  De dicto belief attribution is held to be referentially opaque in this sense.  On the de dicto reading of belief, "Ralph believes that the man in the trenchcoat is a spy" may be true while "Ralph believes that the mayor is a spy" is false.  Likewise, "Lois Lane believes that Superman is strong" may be true while "Lois believes that Clark Kent is strong" is false, even if Superman and Clark Kent are, unbeknownst to Lois, one and the same person.  (Regarding the Lois example, however, see Frege's Puzzle below.)

Yet there are certainly also cases in which we liberally substitute co-referentials in ascribing belief.  Shifting examples, suppose Davy is a preschooler who has just met a new teacher, Mrs. Sanchez, who is Mexican, and he finds her pretty.  Davy's mother, in reporting this fact to his father, might say "Davy thinks Mrs. Sanchez is pretty" or "Davy thinks the new Mexican teacher is pretty", even though Davy does not know the teacher's name or that she is Mexican.  Similarly, if Ralph eventually discovers that the man in the trenchcoat was Ortcutt, he might, in recounting the incident to his friends later, laughingly say, "For a moment, I thought the mayor was a spy!" or "For a moment, I thought Ortcutt was a spy".  In a de re mood, then, we can say that Davy believes, of X, that she is pretty and Ralph believes, of Y, that he is a spy, where X is replaced by any term or phrase that picks out Mrs. Sanchez and Y is replaced by any term or phrase that picks out Ortcutt -- though of course, depending on the situation, pragmatic considerations will favor the use of some terms or phrases over others.  In a strict de re sense, we can even say that Lois believes, of Clark Kent, that he is strong, though there may be pragmatic reasons not to phrase things in that way.

The standard view, then, takes the term "belief" to be systematically ambiguous between a referentially opaque, de dicto sense and a referentially transparent, de re sense.  Sometimes this view is conjoined with the view that de re but not de dicto belief requires some kind of direct acquaintance with the object of belief.

The majority of the literature on the de re / de dicto distinction since at least the 1980's has challenged this standard view in one way or another.  The challenges are sufficiently diverse that they resist brief classification, except perhaps to remark that a number of them invoke pragmatics or conversational context, instead of an ambiguity in the term "belief", to explain the fact that it seems in some way appropriate and in some way inappropriate to say that Ralph believes the mayor is a spy.

Among the more influential discussions of the de re / de dicto distinction are Quine (1956), Kaplan (1968), Burge (1977), Lewis (1979), Stich (1983), Dennett (1987), Crimmins (1992), Brandom (1994), and Taylor (2002).

Degree of Confidence

Jessie believes that Stalin was originally a Tsarist mole among the Bolsheviks, that her son is at school, and that she is eating a tomato.  She feels different degrees of confidence with respect to these different propositions.  The first she recognizes to be a speculative historical conjecture; the second she takes for granted, though she knows it could be false; the third she regards as a near-certainty.  Consequently, Jessie is more confident of the second proposition than the first and more confident of the third than the second.  We might suppose that every subject holds each of her beliefs with some particular degree of confidence.  In general, the greater the confidence one has in a proposition, the more willing one is to depend on it in one's actions.

One common way of formalizing degree of confidence is by means of a scale from 0 to 1, where 0 indicates absolute certainty in the falsity of a proposition, 1 indicates absolute certainty in its truth, and a degree of confidence of .5 indicates that the subject regards the proposition as just as likely to be true as false.  Standard approaches treat degree of confidence as equal to the maximum amount the subject would, or alternatively should, be willing to wager on a bet that pays nothing if the proposition is false and 1 unit if the proposition is true.  So, for example, if the subject thinks that the proposition "the restaurant is open" is three times more likely to be true than false, she should be willing to pay no more than $0.75 for a wager that pays nothing if the restaurant is closed and $1 if it is open.  Consequently, the subject's degree of confidence is .75, or 75%.  Such a formalized approach to degree of confidence has proven useful in decision theory, game theory, and economics.  Standard philosophical treatments of this topic include Jeffrey (1983) and Skyrms (2000).
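
The arithmetic behind the restaurant example can be set out explicitly.  The following lines are an illustrative sketch only (the function names are invented): odds of three to one in favor of a proposition correspond to a degree of confidence of .75, which is then the most the subject should pay for a bet that returns $1 if the proposition is true and nothing otherwise.

    # Illustrative only: converting odds in favor of P into a degree of
    # confidence, and from there into a maximum stake on a unit bet.

    def confidence_from_odds(odds_for):
        # Odds of k-to-1 in favor of P correspond to a confidence of k / (k + 1).
        return odds_for / (1.0 + odds_for)

    def max_stake(confidence, payout_if_true=1.0):
        # The most it is worth paying for a bet that pays `payout_if_true`
        # if P is true and nothing if P is false.
        return confidence * payout_if_true

    p = confidence_from_odds(3.0)  # "three times more likely to be true than false"
    print(p)                       # 0.75
    print(max_stake(p))            # 0.75 -> pay at most $0.75 for a $1 payout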

Degree of confidence is often called "degree of belief", but the latter phrase may be misleading, because the relationship between confidence and belief is not straightforward (see, e.g., Harman 1986).  For example, some people find it intuitive to say that a rational person holding a ticket in a fair lottery may not actually believe that she will lose, but instead regard it as an open question, despite having a degree of confidence of, say, .9999 that she will lose.  Assuming that this person genuinely believes some other propositions, such as that her son is at school, with a degree of confidence considerably less than .9999, then it follows that a rational person may in some cases have a higher degree of confidence in a proposition that she does not believe than in a proposition she does believe.

Relatives of Belief

Acceptance

Philosophers have sometimes drawn a distinction between acceptance and belief.  Generally speaking, acceptance is held to be more under the voluntary control of the subject than belief and more directly tied to practical action in a context.  For example, a scientist, faced with evidence supporting a theory, evidence acknowledged not to be completely decisive, may choose to accept the theory or not to accept it.  If the theory is accepted, the scientist ceases inquiring into its truth and becomes willing to ground her own research and interpretations in that theory; the contrary if the theory is not accepted.  If one is about to use a ladder to climb to a height, one may check the stability of the ladder in various ways.  At some point, one accepts that the ladder is stable and climbs it.  In both of these examples, acceptance involves a decision to cease inquiry and to act as though the matter is settled.  This does not, of course, rule out the possibility of re-opening the question if new evidence comes to light.

The distinction between acceptance and belief can be supported by appeal to cases in which one accepts a proposition without believing it and cases in which one believes a proposition without accepting it.  Bas C. van Fraassen (1980) has argued that the former attitude is common in science: the scientist often does not think that some particular theory on which her work depends is the literal truth, and thus does not believe it, but she nonetheless accepts it as an adequate basis for research.  The ladder case, due to Michael Bratman (1999), may involve belief without acceptance: One may genuinely believe, even prior to checking it, that the ladder is stable, but because so much depends on it and because it is good general policy, one nonetheless does not accept that the ladder is stable until one has checked it more carefully.

Influential discussions of acceptance include van Fraassen (1980), Harman (1986), Cohen (1989), Lehrer (1990), Bratman (1999), and Velleman (2000).

Knowledge

The traditional analysis of knowledge, brought into contemporary discussion (and famously criticized) by Edmund Gettier (1963), takes knowledge to be a species of belief -- specifically, justified true belief.  Most contemporary treatments of knowledge are modifications or qualifications of the traditional analysis and consequently also treat knowledge as a species of belief.  (For a detailed treatment of this topic see knowledge, analysis of.)

There may also be types of knowledge that are not types of belief, though they have received less attention from epistemologists.  Gilbert Ryle (1949), for example, emphasizes the distinction between knowing how to do something (e.g., ride a bicycle) and knowing that some particular proposition is true (e.g., that Paris is the capital of France).  In contemporary psychology, a similar distinction is sometimes drawn between procedural knowledge and semantic, or declarative, knowledge (see Squire 1987; Schacter, Wagner, and Buckner 2000; also memory).  Although knowledge-that or declarative knowledge may plausibly be a kind of belief, it is not easy to see how procedural knowledge or knowledge-how could be so, unless one holds that people have a myriad of beliefs about minute and non-obvious procedural details.  At least, there is no obvious relation between knowledge-how and "belief-how" that runs parallel to the relation epistemologists generally accept between knowledge-that and belief-that.

Some Issues of Debate

Atomism Versus Holism

Ani believes that salmon are fish; not knowing that whales are mammals, she also believes that whales are fish.  Sanjay, like Ani, believes that salmon are fish, but he denies that whales are fish.  Do Ani and Sanjay share exactly the same belief about salmon -- namely, that they are fish -- or is the content of their belief somehow subtly different in virtue of their different attitude toward whales?  With certain caveats, the atomist will say the former, the holist the latter.  In general, atomism is the view that people who sincerely and comprehendingly accept the same sentence normally have exactly the same belief -- the belief expressed by that sentence -- and consequently that the content of one's beliefs does not depend in any general way on the contents of related beliefs (though it may depend on the contents of a few specially related beliefs such as definitions).  Holism is the contrary view that the accuracy of belief ascriptions is almost always a matter of degree, affected by a broad range of the subject's related beliefs.

Holism may be defended by a slippery-slope argument.  It seems that we can imagine Sanjay's and Ani's beliefs about the nature of fish and the members of the class of fish slowly diverging.  At some point, it will seem plainly correct to say that even though they may both say "salmon are fish", they are not expressing the same belief by that sentence.  As an extreme case, we might imagine Ani to be so misguided as to hold that to be a fish is neither more nor less than to be an Earthly animal in regular contact with Martians, and that only salmon, whales, leopards, and banana slugs are in such contact; and furthermore, she recognizes no classification of beings, under any label, remotely resembling Sanjay's fairly conventional classification of fish.  But if we deny, in the extreme case, that Ani and Sanjay share the same belief, expressed by the sentence "salmon are fish", it seems artificial to draw a sharp line anywhere in the progression of divergence, on one side of which they share exactly the same belief about salmon and on the other side of which they have divergent beliefs.  One is thus led to the conclusion that similarity in belief is a matter of degree, and it may then be difficult to avoid accepting that even a relatively small divergence in surrounding beliefs may be sufficient to generate subtle differences between two beliefs expressed in the same words.  Similar slippery slope arguments can be constructed that emphasize gradual belief change in concept acquisition ("Leibniz was a metaphysician" agreed to before and after learning philosophy) or gradual change in surrounding theory or in the meaning of a term ("electrons have orbits" as uttered by Niels Bohr in 1913 and as uttered by a physicist in 2000).

Dispositional and interpretational approaches to belief tend to be holist.  On these views, recall, to believe is to be disposed to exhibit patterns of behavior interpretable or classifiable by means of various belief attributions (see above).  It is plausible to suppose that a subject's match to the relevant patterns will generally be a matter of degree.  There may be few actual cases in which two subjects exactly match in their dispositional patterns regarding P, even if it gets matters approximately right to attribute to each of them the belief that P.  Since behavioral dispositions are interlaced in a complex way, divergence in any of a variety of attitudes related to P may be sufficient to ensure divergence in the dispositional patterns relevant to P itself.  As Ani's associated beliefs grow stranger, her overall dispositional structure begins to look less and less like one that we would associate with believing that salmon are fish.

It is sometimes objected to holism that, intuitively, both Shakespeare and contemporary physicians believe that blood is red, while on the holist view it is hard to see how their beliefs could even be similar, given that they have so many different surrounding beliefs about both blood and redness.  Although in principle a holist could respond to this objection by describing what sort of differences in surrounding belief create only minor divergences and what differences create major ones, there have been no influential attempts at such a project.  An atomist might address the Shakespeare case by suggesting that the content of both Shakespeare's and the contemporary physician's belief is determined externalistically by the actual nature of blood and the actual nature of redness, despite the different conceptualizations over time (see Internalism and Externalism below).  Since neither blood nor redness has changed much since Shakespeare's day, Shakespeare's and the contemporary physician's belief may be exactly the same.

Holism appears to be incompatible with a certain variety of straightforward representationalism about belief.  If beliefs are stored as symbolic representations in the mind, somewhat like sentences on a chalkboard or objects in a box (to use standard Fodorian metaphors), then it is natural to suppose that those beliefs can, in principle, exist independently of each other.  Whether one believes P depends on whether a representation with the content 'P' is present in the right sort of way in the mind, which would not seem to be affected by whether Q or not-Q, or R or not-R, is also represented.  If there is, in addition, an innate language of thought of the sort advocated by Fodor and others, then the basic terms of that language may also be exactly the same from person to person.  If a view of this sort about the mind can be sufficiently well supported, holism would have to be rejected.  Conversely, if holism is plausible, it cuts against the more atomistic forms of representationalism.

Fodor and Lepore (1992) contains an excellent review and critique of arguments for holism.  The foremost defenders of holism are probably Quine (1951) and Davidson (1984).

Belief Without Language

A number of philosophers have argued that beings without language, notably human infants and non-human animals, cannot have beliefs.  The most influential case for this view has been Davidson's (1982, 1984; cf. Heil 1992), though some of the points he raises have also been raised by others.  Three primary arguments in favor of the necessity of language for belief can be extracted from Davidson.

The first starts from the observation that if we are to ascribe a belief to a being without language -- a dog, say, who is barking up a tree into which he has just seen a squirrel run -- we must ascribe a belief with some particular content.  At first blush, it seems natural to say that, in the case described, the dog believes that the squirrel is in the tree.  However, on reflection, that attribution may seem to be not quite right.  The dog does not really have the concept of a squirrel or a tree in the human sense.  He may not know, for instance, that trees have roots and require water to grow.  Consequently, according to Davidson, it is not really accurate to say that he believes that the squirrel is in the tree (at least in the de dicto sense: see De Re Versus De Dicto Belief above).  However, neither does the dog have any other particular belief.  Embracing holism (see Atomism Versus Holism above), Davidson asserts that to have a belief with a specific content, that belief must be embedded in a rich network of other beliefs with specific contents, but a dog's cognitive life is not complex enough to support such a network.  "Belief" talk thus cannot get traction (cf. Dennett 1969; Stich 1983).

Several philosophers (e.g., Routley 1981; Smith 1982; Allen 1992) have objected to this argument on the grounds that the dog's cognition about things such as trees, while perhaps not much like ours, is nonetheless relatively rich, involving a number of elements generally neglected by us, such as their scent and their use in marking territory.  The dog's understanding of a tree may be at least as rich as the human understanding of some objects about which we seem to have beliefs.  For example, it seems that a chemically untrained person may believe that boron is a chemical element without knowing very much about boron apart from its position on the periodic table.  Since we have no language for doggy concepts, our belief ascriptions to dogs can only be approximate -- but if one accepts holism, then belief ascription to other human beings appears to be similarly approximate.

Davidson also argues that to have a belief one must have the concept of belief, which involves the ability to recognize that beliefs can be false or that there is a mind-independent reality beyond one's beliefs; and one cannot have all that without language.  However, Davidson offers little support for the claim that belief requires the concept of belief.  On the face of it, it is not evident why this should be so, any more than having a bad temper requires the concept of a bad temper.  Furthermore, developmental psychologists now generally say that children do not understand the appearance-reality distinction and do not recognize that beliefs can be false until they are at least three-and-a-half years of age, well after they have begun to talk (see Perner 1991; Wellman, Cross, and Watson 2001).  Davidson's view thus requires him either to reject this empirical thesis or embrace the seemingly implausible view that young three-year-olds have no beliefs.

The view that belief requires language is a natural consequence of the view that belief attribution and linguistic interpretation are inextricably intertwined.  Although Davidson leaves this point largely implicit in his discussion of non-human animals, it can be parlayed into an argument against belief without language.  Davidson, as described above (in Dispositional and Interpretational Approaches), argues that the interpretation of belief, desire, and language must come together as a package.  If this is plausible, then creatures without language are missing part of what is essential to a behavioral pattern of the sort that can underwrite proper belief ascription (and recall that on an interpretational view, all there is to having a belief is having a pattern of behavior that is interpretable in that way to an outside observer).  Any view that ties belief attribution and language as closely together as Davidson's does -- Wilfrid Sellars (1956) and Robert Brandom (1994) also offer views of this sort -- will have difficulty accommodating the possibility of belief in creatures without language.  Thus, whatever draws us to such views will also provide reason to deny belief in languageless creatures.

Positive arguments for attributing beliefs to (at least) human infants and non-linguistic mammals have tended to focus on the general biological and behavioral similarity between adult human beings, human infants, and non-human mammals; the intuitive naturalness of describing the behavior of infants and non-linguistic mammals in terms of their beliefs and desires; and the difficulty of usefully characterizing their mental lives without relying on the ascription of propositional attitudes (e.g., Routley 1981; Allen and Bekoff 1997).

Internalism and Externalism

A number of philosophers have suggested that the content of one's beliefs depends entirely on things going on inside one's head, and not at all on the external world, except via the effects of the latter on one's brain.  Consequently, if a genius neuroscientist were to create a molecule-for-molecule duplicate of your brain and maintain it in a vat, stimulating it artificially so that it underwent exactly the same sequence of electrical and chemical events as your actual brain, that brain would have exactly the same beliefs as you.  Those who accept this position are internalists about belief content.  Those who reject it are externalists.

Several arguments against internalism have prompted considerable debate in philosophy of mind recently.  Here is a condensed version of one argument, due to Hilary Putnam (1975).  Suppose that in 1750, in a far-off region of the universe, there existed a planet that was physically identical to Earth, molecule-for-molecule, in every respect but one: Where Earth had water, composed of H2O, Twin Earth had something else instead, "twater", coming down as rain and filling streams, behaving identically to water by all the chemical tests then available, but having a different atomic formula, XYZ.  Intuitively, it seems that the inhabitants of Earth in 1750 would have beliefs about water and no beliefs about twater, while the inhabitants of Twin Earth would have beliefs about twater and no beliefs about water.  By hypothesis, however, each inhabitant of Earth will have a molecularly identical counterpart on Twin Earth with exactly the same brain structures (except, of course, that their brains will contain XYZ instead of H2O, but reflection on analogous examples regarding chemicals not contained in the brain suggests that this fact is irrelevant).  Consequently, Putnam argues, the contents of one's beliefs do not depend entirely on internal properties of one's brain.

For extensive discussion of the debate between internalists and externalists, see content externalism and content, narrow.

Frege's Puzzle

Recall that in the de dicto sense (see De Re Versus De Dicto Belief above) it seemed plausible to say that Lois Lane, who does not know that Clark Kent is Superman, believes that Superman is strong but does not believe that Clark Kent is strong.  Despite the intuitive appeal of this view, some widely accepted "Russellian" views in the philosophy of language appear committed to attributing to Lois exactly the same beliefs about Clark Kent as she has about Superman.  On such views, the semantic content of a name, or the contribution it makes to the meaning or truth conditions of a sentence, depends only on the individual picked out by that name.  Since the names "Superman" and "Clark Kent" pick out the same individual, it follows that the sentence "Lois believes Superman is strong" could not have a different meaning or truth value from the sentence "Lois believes Clark Kent is strong".  This issue, known as "Frege's Puzzle", has occupied much of philosophy of language since the 1970's.  Consequently, an enormous literature addresses this aspect of belief ascription.  See propositional attitude reports.

Related Entries

artificial intelligence | behaviorism | cognitive science | consciousness | consciousness and intentionality | Davidson, Donald | functionalism | intentionality | language of thought hypothesis | materialism | materialism, eliminative | memory | mental causation | mental content | mental content, causal theories of | mental content, externalism about | mental content, narrow | mental content, nonconceptual | mental content, teleological theories of | mental representation | mind, computational theory of | physicalism | propositional attitude reports | propositions | propositions, singular | propositions, structured

Copyright © 2003 by
Eric Schwitzgebel

