The Unreliability of Naive Introspection

Eric Schwitzgebel

Department of Philosophy

University of California at Riverside

Riverside, CA  92521-0201

951 827 4288

eschwitz at domain: ucr.edu

 

September 7, 2007

 


The Unreliability of Naive Introspection

 

 

Abstract:

 

We are prone to gross error, even in favorable circumstances of extended reflection, about our own ongoing conscious experience, our current phenomenology.  Even in this apparently privileged domain, our self-knowledge is faulty and untrustworthy.  We are not simply fallible at the margins but broadly inept.  Examples highlighted in this essay include: emotional experience (for example, is it entirely bodily; does joy have a common, distinctive phenomenological core?), peripheral vision (how broad and stable is the region of visual clarity?), and the phenomenology of thought (does it have a distinctive phenomenology, beyond just imagery and feelings?).  Cartesian skeptical scenarios undermine knowledge of ongoing conscious experience as well as knowledge of the outside world.  Infallible judgments about ongoing mental states are simply banal cases of self-fulfillment.  Philosophical foundationalism supposing that we infer an external world from secure knowledge of our own consciousness is almost exactly backward.

 


The Unreliability of Naive Introspection

 

i.

 

Current conscious experience is generally the last refuge of the skeptic against his own uncertainty.  Though we might doubt the existence of other minds, that the sun will rise tomorrow, that the Earth existed five minutes ago, that there’s any “external world” at all, even whether two and three make five, still we can know, it’s said, the basic features of our ongoing stream of experience.  Descartes espouses this view in his first two Meditations.  So does Hume, in the first book of the Treatise, and – as I read him – Sextus Empiricus.[1]  Other radical skeptics like Zhuangzi and Montaigne, though they appear to aim at very general skeptical goals, don’t grapple specifically and directly with the possibility of radical mistakes about current conscious experience.  Is this an unmentioned exception to their skepticism?  Unintentional oversight?  Do they dodge the issue for fear that it is too poor a field on which to fight their battles?[2]  Where is the skeptic who says: We have no reliable means of learning about our own ongoing conscious experience, our current imagery, our inward sensations – we are as in the dark about that as about anything else, perhaps even more in the dark?

 Is introspection (if that’s what’s going on here) just that good?  If so, that would be great news for the blossoming – or I should say recently resurrected? – field of consciousness studies.  Or does contemporary discord about consciousness – not just about the physical bases of consciousness but seemingly about the basic features of experience itself – point to some deeper, maybe fundamental, elusiveness that somehow escaped the notice of the skeptics, that perhaps partly explains the first, ignoble death of consciousness studies a century ago?

 

ii.

 

One must go surprisingly far afield to find major thinkers who hold, as I do, that the introspection of current conscious experience is both (i.) possible, important, necessary for a full life, central to the development of a full scientific understanding of the mind, and (ii.) for the most part badly done.  In Eastern meditative traditions, I think this is a commonplace.  Also the fiercest advocates of introspective training in the first era of scientific psychology (circa 1900) endorsed both claims – especially E.B. Titchener.[3]  Both the meditators and Titchener, though, take comfort in optimism about introspection “properly” conducted – so they hardly qualify as general skeptics or pessimists.  It’s as though their advocacy of a regimen sets them free to criticize introspection as ordinarily practiced.  But might they be right in their doubts, less so in their hopes?  Might we need introspection, though the prospects are bleak?

I won’t say much to defend (i), which I take to be both common sense and the majority view in philosophy.  Of course we have some sort of attunement to our ongoing conscious experience, and we impoverish ourselves to try to do without it.  Part (ii) is the project.  In less abbreviated form: Most people are poor introspectors of their own ongoing conscious experience.  We fail not just in assessing the causes of our mental states or the processes underwriting them; and not just in our judgments about non-phenomenal mental states like traits, motives, and skills; and not only when we are distracted, or passionate, or inattentive, or self-deceived, or pathologically deluded, or when we’re reflecting about minor matters, or about the past, or only for a moment, or where fine discrimination is required.  We are both ignorant and prone to error.  There are major lacunae in our self-knowledge that are not easily repaired; and we make gross, enduring mistakes about even the most basic features of our currently ongoing conscious experience (or “phenomenology”), even in favorable circumstances of careful reflection, with distressing regularity.  We either err or stand perplexed, depending – rather superficially, I suspect – on our mood and caution.  (This essay will focus on error, but sufficient restraint can always transform error to mere ignorance.)

Contemporary philosophers and psychologists often doubt the layperson’s talent in assessing such non-conscious mental states as her personality traits, her motivations and skills, her hidden beliefs and desires, the bases of her decisions; and they may construe such doubts as doubts about “introspection”.  But it’s one thing not to know why you chose a particular pair of socks (to use an example from Nisbett and Wilson 1977), and quite another to be unable accurately to determine your currently ongoing visual experience as you look at those socks, your auditory experience as the interviewer asks you the question, the experience of pain in your back making you want to sit down.  Few philosophers or psychologists express plain and general pessimism about the latter sorts of judgment.  Or, rather, I should say this: I have heard such pessimism only from behaviorists, and their near cousins, who nest their arguments in a theoretical perspective that rejects the psychological value, sometimes even the coherence, of attempting to introspect conscious experiences at all – and thus reject claim (i) above – though indeed even radical behaviorists often pull their punches when it comes to ascribing flat-out error.[4]

Accordingly, though infallibilism – the view that we cannot err in our judgments about our own current conscious experience – is now largely out of favor, mainstream philosophical criticism of it is surprisingly meek.  Postulated mistakes are largely only momentary, or about matters of fine detail, or under conditions of stress or pathology, or at the hands of malevolent neurosurgeons.[5]  Fallibilists generally continue to assume that, in favorable circumstances, careful introspection can reliably reveal at least the broad outlines of one’s currently ongoing experience.  Even philosophers most of the community sees as radical are, by my lights, remarkably tame and generous when it comes to assessing our accuracy in introspecting current conscious experience.  Paul Churchland (1985, 1988) puts it on a par with the accuracy of sense perception.  Daniel Dennett (2002) says that we can come close to infallibility when charitably interpreted.[6]  Where are the firebrands?

A word about “introspection”.  I happen to regard it as a species of attention to currently ongoing conscious experience, but I won’t defend that view here.  The project at hand stands or falls quite independently.  Think of introspection as you will – as long as it is the primary method by which we normally reach judgments about our experience in cases of the sort I’ll describe.[7]  That method, whatever it is, is unreliable as typically executed.  Or so I will argue in this essay.

 

iii.

 

I don’t know what emotion is, exactly.  Neither do you, I’d guess.  Is surprise an emotion?  Comfort?  Irritability?  Is it more of a gut thing, or a cognitive thing?  Assuming cognition isn’t totally irrelevant, how is it involved?  Does cognition relate to emotion merely as cause and effect, or is it somehow, partly, constitutive?

I’m not sure there’s a single right answer to these questions.  The empirical facts seem ambiguous and tangled.[8]  Probably we need to conjecture and stipulate, simplify, idealize, to have anything workable.  So also, probably, for most interesting psychological concepts.  But here’s one thing that’s clear: Whatever emotion is, some emotions – joy, anger, fear – can involve or accompany conscious experience.

Now, you’re a philosopher, or a psychologist, presumably interested in introspection and consciousness and the like, or you wouldn’t be reading this article.  You’ve had emotional experiences and you’ve thought about them, reflected on how they feel as they’ve been ongoing or in the cooling moments as they fade.  If such experiences are introspectible, and if introspection is the diamond clockwork often supposed, then you have some insight.  So tell me: Are emotional states like joy, anger, and fear always felt phenomenally – that is, as part of one’s stream of conscious experience – or only sometimes?  Is their phenomenology, their experiential character, always more or less the same, or does it differ widely from case to case?  For example, is joy sometimes in the head, sometimes more visceral, sometimes a thrill, sometimes an expansiveness – or, instead, does joy have a single, consistent core, a distinctive, identifiable, unique experiential character?  Is emotional consciousness simply the experience of one’s bodily arousal, and other bodily states, as William James (1890/1981) seems to suggest?  Or, as most people think, can it include, or even be exhausted by, something less literally visceral?  Is emotional experience consistently located in space (for example, particular places in the interior of one’s head and body)?  Can it have color – for instance, do we sometimes literally “see red” as part of being angry?  Does it typically come and pass in a few moments (as Buddhists sometimes suggest) or does it tend to last awhile (as my English-speaking friends more commonly say)?

If you’re like me, you won’t find all such questions trivially easy.  You’ll agree that someone – perhaps even yourself – could be mistaken about some of them, despite sincerely attempting to answer them, despite a history of introspection, despite – maybe – years of psychotherapy or meditation or self-reflection.  You can’t answer these questions one-two-three with the same easy confidence that you can answer similarly basic structural questions about cars – how many wheels? hitched to horses? travel on water?  If you can – well heck, I won’t try to prove you wrong!  But if my past inquiries are indicative, you are in a distinct minority.

It’s not just language that fails us – most of us? – when we confront such questions (and if it were, we’d have to ask, anyway, why this particular linguistic deficiency?) but introspection itself.  The questions challenge us not simply because we struggle for the words that best attach to a patently obvious phenomenology.  It’s not like perfectly well knowing what particular shade of tangerine your Volvo is, stumped only about how to describe it.  No, in the case of emotion the very phenomenology itself – the “qualitative” character of our consciousness – is not entirely evident, or so it seems to me.  But how could this be so, if we know the “inner world” of our own experience so much better than the world outside?  Even the grossest features of emotional experience largely elude us.  Reflection doesn’t remove our ignorance, or it delivers haphazard results.

Relatedly, most of us have a pretty poor sense, I suspect, of what brings us pleasure and suffering.  Do you really enjoy Christmas?  Do you really feel bad while doing the dishes?  Are you happier weeding or going to a restaurant with your family?  Few people make a serious study of this aspect of their lives, despite the lip service we generally pay to the importance of “happiness”.  Most people feel bad a substantial proportion of the time, it seems to me.[9]  We are remarkably poor stewards of our emotional experience.  We may say we’re happy – overwhelmingly we do – but we have little idea what we’re talking about.[10]

 

iv.

 

Still, you might suggest, when we attend to particular instances of ongoing emotional experience, we can’t go wrong, or don’t, or not by far.  We may concede the past to the skeptic, but not the present.  It’s impossible – nearly impossible? – to imagine my being wrong about my ongoing conscious experience right now, as I diligently reflect.

Well, philosophers say this, but I confess to wondering whether they’ve really thought it through, contemplated a variety of examples, challenged themselves.  You’d hope they would have, so maybe I’m misunderstanding or going wrong in some way here.  But to me at least, on reflection, the possibility that I could be infallible in everything I’m inclined to say about my ongoing consciousness – even barring purely linguistic errors, and even assuming I’m being diligent and cautious and restricting myself to simple, purely phenomenal claims arrived at (as far as I can tell) “introspectively” – well, unfortunately that just seems blatantly unrealistic.

Let’s try an experiment.  You’re the subject.  Reflect on, introspect, your own ongoing emotional experience at this instant.  Do you even have any?  If you’re in doubt, vividly recall some event that still riles you, until you’re sure enough you’re suffering some renewed emotion.  Or maybe your boredom, anxiety, irritation, or whatever, in reading this essay is enough.  Now let me ask: Is it completely obvious to you what the character of that experience is?  Does introspection reveal it to you as clearly as visual observation reveals the presence of the text before your eyes?  Can you discern its gross and fine features through introspection as easily and confidently as you can, through vision, discern the gross and fine features of nearby external objects?  Can you trace its spatiality (or nonspatiality), its viscerality or cognitiveness, its involvement with conscious imagery, thought, proprioception, or whatever, as sharply and infallibly as you can discern the shape, texture, and color of your desk?  (Or the difference between 3 and 27?)  I cannot, of course, force a particular answer to these questions.  I can only invite you to share my intuitive sense of uncertainty.  (Perhaps I can buttress this sense of uncertainty by noting, in passing, the broad range of disputes and divergences within the literature on the experiential character of emotion – disputes that at least seem to be about emotional phenomenology itself, not merely about its causes and connections to non-experiential states, or about how best to capture it in a theory.[11])

Or consider this: My wife mentions that I seem to be angry about being stuck with the dishes again (despite the fact that doing the dishes makes me happy?).  I deny it.  I reflect, I sincerely attempt to discover whether I’m angry – I don’t just reflexively defend myself but try to be the good self-psychologist my wife would like me to be – and still I don’t see it.  I don’t think I’m angry.  But I’m wrong, of course, as I usually am in such situations: My wife reads my face better than I introspect.  Maybe I’m not quite boiling inside, but there’s plenty of angry phenomenology to be discovered if I knew better how to look.  Or do you think that every time we’re wrong about our emotions, those emotions must be nonconscious, dispositional, not genuinely felt?  Or felt and perfectly apprehended phenomenologically but somehow nonetheless mislabeled?  Can’t I also err more directly?

Surely my “no anger” judgment is colored by a particular self-conception and lack of coolness.  To that extent, it’s less than ideal as a test of my claim that even in the most favorable circumstances of quiet reflection we are prone to err about our experience.  However, as long as we focus on judgments about emotional phenomenology, such distortive factors will probably be in play.  If that’s enough consistently to undermine the reliability of our judgments, that rather better supports my thesis than defeats it, I think.

Infallible judges of our emotional experience?  I’m baffled.  How could anyone believe that?  Do you believe that?  What am I missing?

 

v.

 

Now maybe emotional experience is an unusually difficult case.  Maybe, though we err there, we are generally quite accurate in our judgments about other aspects of our phenomenology.  Maybe my argument even plays on some conceptual confusion about the relation between emotion and its phenomenology, or relies illegitimately on introspection’s undercutting the emotion introspected.  I don’t think so, but I confess I have no tidy account to eradicate such worries.

So let’s try vision.  Suppose I’m looking directly at a nearby, bright red object in good light, and I judge that I’m having the visual phenomenology, the “inward experience”, of redness.  Here, perhaps – even if not in the emotional case – it seems rather hard to imagine that I could be wrong in that judgment (though I could be wrong in using the term “red” to label an experience I otherwise perfectly well know).

I’ll grant that.  Some aspects of visual experience are so obvious it would be difficult to go wrong about them.  So also would it be difficult to go wrong in some of our judgments about the external world – the presence of the text before your eyes, the existence of the chair in which you’re sitting and are now (let’s suppose) minutely examining.  Introspection may admit obvious cases, but that in no way proves that it’s more secure than external perception – or even as secure.

Now of course many philosophers have argued plausibly that one could be wrong even in “obvious” judgments about external objects, if one allows that one may be dreaming, or allows that one’s brain may have been removed at night and teleported to Alpha Centauri to be stimulated by genius neuroscientists with inputs mimicking normal interaction with the world.  Generally, philosophers have supposed (with Descartes) that such thought experiments don’t undermine judgments about visual phenomenology.  So perhaps obvious introspective judgments are more secure than obvious perceptual ones, after all, since they don’t admit even this peculiar smidgen – usually it only seems like a smidgen – of doubt?

But in dreams we make baldly incoherent judgments, or at least very stupid ones.  I think I can protrude my tongue without its coming out; I think I see red carpet that’s not red; I see a seal as my sister without noticing any difficulty about that.  In dream delirium, these judgments may seem quite ordinary, or even insightful.  If you admit the possibility that you’re dreaming, I think you should admit the possibility that your judgment that you are having reddish phenomenology is a piece of delirium, unaccompanied by any actual reddish phenomenology.  Indeed, it seems to me not entirely preposterous to suppose that we have no color experiences at all in our sleep – or have them only rarely – and our judgments about the colors of dream-objects are on par with the seal-sister judgment, purely creative fiction unsupported by any distinctive phenomenology.[12]  If so, the corresponding judgments about the coloration of our experiences of those dream-objects will be equally unsupported.

Likewise, if we allow malevolent neurosurgeons from Alpha Centauri to massage and stoke our brains, I see no reason to deny them the power to produce directly the judgment that one is having reddish phenomenology, while suppressing the reddish phenomenology itself.  Is this so patently impossible?[13]

Absolute security, and immunity to skeptical doubt, thus elude even “obvious” introspective judgments, just as they elude perceptual ones.  If we rule out radically skeptical worries, then we’re left with judgments on a par (“red phenomenology now”, “paper in my hands”) – judgments as obvious and as secure as one could reasonably wish.  The issue of whether the introspection of current visual experience warrants greater trust than the perception of nearby objects must be decided on different grounds.

 

vi.

 

Look around a bit.  Consider your visual experience as you do this.  Does it seem to have a center and a periphery, differing somehow in clarity, precision of shape and color, richness of detail?  Yes?  It seems that way to me, too.  Now consider this: How broad is that field of clarity?  Thirty degrees?  More?  Maybe you’re looking at your desk, as I am.  Does it seem that a fairly wide swath of the desk – a square foot? – presents itself to you clearly in experience at any one moment, with the shapes, colors, textures all sharply defined?  Most people endorse something like this view when I ask them.[14]  They are, I think, mistaken.

Consider, first, our visual capacities.  It’s firmly established that the precision with which we detect shape and color declines precipitously outside a central, foveal area of about 1-2 degrees of arc (about the size of your thumbnail held at arm’s length).  Dennett (1991) has suggested a way of demonstrating this to yourself.  Draw a card from a normal deck without looking at it.  Keeping your eyes fixed on some point in front of you, hold the card at arm’s length just beyond your field of view.  Without moving your eyes, slowly rotate the card toward the center of your visual field.  How close to the center must you bring it before you can determine the color of the card, its suit, and its value?  Most people are quite surprised at the result of this little experiment.  They substantially overestimate their visual acuity outside the central, foveal region.  When they can’t make out whether it’s a Jack or a Queen, though the card is nearly (but only nearly) dead center, they laugh, they’re astounded, dismayed.[15]  You have to bring it really close.
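(For the numerically inclined, a quick back-of-the-envelope visual-angle calculation bears out the thumbnail comparison.  The particular measurements here – a thumbnail roughly $w = 1.8$ cm wide, held about $d = 65$ cm from the eye – are merely assumed for illustration, not drawn from the experiments above:

$$\theta \;=\; 2\arctan\!\left(\frac{w}{2d}\right) \;=\; 2\arctan\!\left(\frac{1.8\ \text{cm}}{2 \times 65\ \text{cm}}\right) \;\approx\; 1.6^\circ,$$

comfortably within the central foveal region of about 1-2 degrees of arc.)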

By itself, this says nothing about our visual experience.  Surprise and dismay may reveal error in our normal (implicit) assumptions about our visual capacities, but it’s one thing to mistake one’s abilities, quite another to misconstrue phenomenology.  Our visual experience depends on the recent past, on general knowledge, on what we hear, think, and infer, as well as on immediate visual input – or so it’s plausible to suppose.  Background knowledge could thus fill in and sharpen our experience beyond the narrow foveal center.  Holding our eyes still and inducing ignorance could artificially crimp the region of clarity.

Still, I doubt visual experience is nearly as sharp and detailed as most untutored introspectors seem to think.  Here’s the root of the mistake, I suspect: When the thought occurs to you to reflect on some part of your visual phenomenology, you normally move your eyes (or “foveate”) in that direction.  Consequently, wherever you think to attend, within a certain range of natural foveal movement, you find the clarity and precision of foveal vision.  It’s as though you look at your desk and ask yourself: Is the stapler clear?  Yes.  The pen?  Yes.  The artificial wood grain between them and the mouse pad?  Yes – each time looking directly at the object in question – and then you conclude that they’re all clear simultaneously.[16]

But you needn’t reflect in this way.  We can prize foveation apart from introspective attention.  Fixate on some point in the distance, holding your eyes steady while you reflect on your visual experience outside the narrow fovea.  Better, direct your introspective energies away from the fovea while your eyes continue to move around (or “saccade”) normally.  This may require a bit of practice.  You might start by keeping one part of your visual field steadily in mind, allowing your eyes to foveate anywhere but there.  Take a book in your hands, and let your eyes saccade around its cover, while you think about your visual experience in the regions away from the precise points of fixation.

Most of the people I’ve spoken to who attempt these exercises eventually conclude, to their surprise, that their experience of clarity decreases substantially even a few degrees from center.  Through more careful and thoughtful introspection, they seem to discover – in fact, I think they really do discover – that visual experience does not consist of a broad, stable field flush with precise detail, hazy only at the borders.  They discover that, instead, the center of clarity is tiny, shifting rapidly around a rather indistinct background.  My interlocutors – most of them – confess to error in having originally thought otherwise.

If I’m right about this, then most naive introspectors are badly mistaken about their visual phenomenology when they first reflect on it, when they aren’t warned and coached against a certain sort of error, even though they may be patiently considering that experience as it occurs.  And the error they make is not a subtle one: The two conceptions of visual experience differ vastly.  If naive introspectors are as wrong as they seem to be, as wrong as they later confess they are, about the clarity and stability of visual experience, they’re wrong about an absolutely fundamental and pervasive aspect of their sensory consciousness.

I’m a pretty skeptical guy, though.  I’m perfectly willing to doubt myself.  Maybe I’m wrong: Visual experience is a plenum.  But if so, I’m not the only person who’s wrong about this.  So also are most of my interlocutors (whom I hope I haven’t browbeaten too badly) and probably a good number of philosophers and psychologists.[17]  We – I, my friends and cobelievers – have been seduced into error by some theory or preconception, perhaps, some blindness, stupidity, oversight, suggestibility.  Okay, let’s assume that.  I need only, now, turn my argument on its head.  We tried to get it right.  We reflected, sincerely, conscientiously, in good faith, at a leisurely pace, in calm circumstances, without external compulsion, and we got it wrong.  Introspection failed us.  Since what I’m trying to show is the aptitude of introspection to lead to just such errors, that result would only further my ultimate thesis.  Like other skeptical arguments that turn on our capacity for disagreement, it can triumph in partial defeat.

I do have to hold this, though: Our disagreement is real and substantial.  My interlocutors’ opinions about their ongoing visual experience change significantly as a result of their reflections.  The mistake in question, whichever side it’s on, though perhaps understandable, is large – no minuscule, evanescent detail, no mere subtlety of language.  Furthermore, opinions on both sides arise from normal introspective processes – the same types of process (whatever they are) that underwrite most of our “introspective” claims about consciousness.  And finally, I must hold that those who disagree don’t differ in the basic structure of their visual experience in such a way as to mirror precisely their disagreements.  Maybe you can successfully attack one of these premises?

 

vii.

 

In 2002, David Chalmers and David Hoy ran a summer seminar in Santa Cruz, California, for professional philosophers of mind.  They dedicated an entire week of the seminar to the “phenomenology of intentionality”, including most centrally the question of whether thought has a distinctive experiential character.

There can be little doubt that sometimes when we think, reflect, ruminate, dwell, or what have you, we simultaneously, or nearly so, experience imagery of some sort: maybe visual imagery, such as of keys on the kitchen table; maybe auditory imagery, such as silently saying “that’s where they are”.  Now here’s the question to consider: Does the phenomenology of thinking consist entirely of imagery experiences of this sort, perhaps accompanied by feelings (emotions?) such as discomfort, familiarity, confidence?  Or does it go beyond such images and feelings?  Is there some distinctive phenomenology specifically of thought, additional to or conjoined with the images, perhaps even capable of transpiring without them?

Scholars disagree.  Research and reflection generate dissent, not convergence, on this point.  This is true historically,[18] and it was also true at the Santa Cruz seminar: Polled at the week’s end, seventeen participants endorsed the existence of a distinctive phenomenology of thought, while eight disagreed, either disavowing the phenomenology of thought altogether or saying that imagery exhausts it.[19]

If the issue were highly abstract and theoretical, like most philosophy, or if it hung on recondite empirical facts, we might expect such disagreement.  But the introspection of current conscious experience – that’s supposed to be easy, right?  Thoughts occupied us throughout the week, presumably available to be discerned at any moment, as central to our lives as the seminar table.  If introspection can guide us in such matters – if it can guide us, say, at least as reliably as vision – shouldn’t we reach agreement about the existence or absence of a phenomenology of thought as easily and straightforwardly as we reach agreement about the existence of the table?

Unless people diverge so enormously that some have a phenomenology of thought and others do not, someone is quite profoundly mistaken about her own stream of experience.  Disagreement here is no matter of fine nuance.  If there is such a thing as a conscious thought, then presumably we have them all the time.  How could you go looking for them and simply not find them?  Conversely, if there’s no distinctive phenomenology of thought, how could you introspect and come to believe that there is – that is, invent a whole category of conscious experiences that simply don’t exist?  Such fundamental mistakes almost beggar the imagination; they plead for reinterpretation as disagreements only in language or theory, not real disagreements about the phenomenology itself.

I don’t think that’s how the participants in these disputes see it, though; and, for me at least, the temptation to recast it this way dissipates when I attempt the introspection myself.  Think of the Prince of Wales.  Now consider: Was there something it was like to have that thought?  Set aside any visual or auditory imagery you may have had.  The question is: Was there something further in your experience, something besides the imagery, something that might qualify as a distinctive phenomenology of thinking?  Try it again, if you like.  Is the answer so obvious you can’t imagine someone going wrong about it?  Is it as obvious as that your desk has drawers, your shirt is yellow, your shutters are cracked?  Must disagreements about such matters necessarily be merely linguistic or about philosophical abstracta?  Or, as I think, might people genuinely misjudge even this very basic, absolutely fundamental and pervasive aspect of their conscious experience, even after putting their best introspective resources to work?

 

viii.

 

In my view, then, we’re prone to gross error, even in favorable circumstances of extended reflection, about our ongoing emotional, visual, and cognitive phenomenology.  Elsewhere, I’ve argued for a similar ineptitude in our ordinary judgments about auditory experience and visual imagery.  I won’t repeat those arguments here.[20]  All this is evidence enough, I think, for a generalization: The introspection of current conscious experience, far from being secure, nearly infallible, is faulty, untrustworthy, and misleading – not just possibly mistaken, but massively and pervasively.  I don’t think it’s just me in the dark here, but most of us.  You too, probably.  If you stop and introspect now, there’s likely very little you should confidently say you know about your own current phenomenology.  Perhaps the right kind of learning, practice, or care could largely shield us from error – an interesting possibility that merits exploration! – but I see as yet no robust scientific support for such hopes.[21]

What about pain, a favorite example for optimists about introspection?  Could we be infallible, or at least largely dependable, in reporting ongoing pain experiences?  Well, there’s a reason optimists like the example of pain – pain and foveal visual experience of a single bright color.  It is hard, seemingly, to go too badly wrong in introspecting really vivid, canonical pains and foveal colors.  But to use these cases only as one’s inference base rigs the game.  And the case of pain is not always as clear as sometimes supposed.  There’s confusion between mild pains and itches or tingles.  There’s the football player who sincerely denies he’s hurt.  There’s the difficulty we sometimes feel in locating pains precisely or in describing their character.  I see no reason to dismiss, out of hand, the possibility of genuine introspective error in these cases.  Psychosomatic pain, too: Normally, we think of psychosomatic pains as genuine pains, but is it possible that some, instead, involve sincere belief in a pain that doesn’t actually exist?

Inner speech – “auditory imagery” as I called it above – can also seem hard to doubt – that I’m silently saying to myself “time for lunch”.  But on closer inspection, I find it slipping my grasp.  I lean toward thinking that there is a conscious phenomenology of imageless thought (as described in §vii) – but as a result, I’m not always sure whether some cogitation that seems to be in inner speech is not, instead, imageless.  And also: Does inner speech typically involve not just auditory images but also motor images in the vocal apparatus?  Is there an experiential distinction between inner speaking and inner hearing?  I almost despair.

Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way?  Here’s my suspicion: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of error, we become cavalier.  This lack of corrective feedback encourages a hypertrophy of confidence.  Who doesn’t enjoy being the sole expert in the room whose word has unchallengeable weight?  In such situations, we tend to take up the mantle of authority, exude a blustery confidence – and genuinely feel that confidence (what professor doesn’t know this feeling?), until we imagine possibly being shown wrong later by another authority or by unfolding events.  About our own stream of experience, however, there appears to be no such humbling danger.

 

ix.

 

But wait: Suppose I say “I’m thinking of a pink elephant” – or even, simply, “I’m thinking”.  I’m sincere, and there’s no linguistic mistake.  Aren’t claims of this sort necessarily self-verifying?  Doesn’t merely thinking such thoughts or reaching such judgments, aloud or silently, guarantee their truth?  Aren’t, actually, their truth conditions just a subset of their existence conditions? – and if so, mightn’t this help us out somehow in making a case for the trustworthiness of introspection?

I’ll grant this: Certain things plausibly follow from the very having of a thought: that I’m thinking, that I exist, that something exists, that my thought has the content it in fact has.  Thus, certain thoughts and judgments will be infallibly true whenever they occur – whatever thoughts and judgments assert the actuality of the conditions or consequences of having them.  But the general accuracy of introspective judgments doesn’t follow.

Infallibility is, in fact, cheap.  Anything that’s evaluable as true or false, if it asserts the conditions or consequences of its own existence or has the right self-referential structure, can be infallibly true.  The spoken assertion “I’m speaking” or “I’m saying ‘blu-bob’” is infallibly true whenever it occurs.  The sentence “This sentence has five words” is infallibly true whenever uttered.  So is the semaphore assertion “I’m holding two flags”.  So, sure, certain thoughts are infallibly true – true whenever they occur.  This shouldn’t surprise us; it’s merely an instance of the more general phenomenon of self-fulfillment.  It has nothing whatsoever to do with introspection; it implies no perfection in the art of ascertaining what’s going on in one’s mind.  If introspection happens to be the process by which thoughts of this sort sometimes arise, that’s merely incidental: Infallibly self-fulfilling thoughts are automatically true whether they arise from introspection, from fallacious reasoning, from evil neurosurgery, quantum accident, stroke, indigestion, divine intervention, or sheer frolicsome confabulation.

And how many introspective judgments, really, are infallibly self-fulfilling?  “I’m thinking” – okay.  “I’m thinking of a pink elephant” – well, maybe, if we’re liberal about what qualifies as “thinking of” something.[22]  But “I’m not angry”, “my emotional phenomenology right now is entirely bodily”, “I have a detailed image of the Taj Mahal in which every arch and spire is simultaneously well defined”, “my visual experience is all clear and stable 100 degrees into the periphery”, “I’m having an imageless thought of a pink elephant”.  Those are a different matter entirely, I’d say.

And, anyway, I’m not so sure we haven’t changed the topic.  Does the thought “I’m thinking” or “I’m thinking of a pink elephant” really express a judgment about current conscious experience?  Philosophers might reasonably take different stands here, but it’s not clear to me that I’m committed to believing anything, or anything particular, about my conscious experience in accepting such a judgment.  I’m certainly not committed to thinking I have a visual image of a pink elephant, or an “imageless thought” of one, or that the words “pink elephant” are drifting through my mind in inner speech.  I might hold “I’m thinking of a pink elephant” to be true while I suspect any or all of the latter to be false.  Am I committed at least to the view that I’m conscious?  Maybe.  Maybe this is one fact about our conscious experience we infallibly know (could I reach the judgment that I’m conscious nonconsciously?).[23]  But your ambitions for introspection must be modest indeed if that satisfies you.

 

x.

 

I sometimes hear the following objection: When we make claims about our phenomenology, we’re making claims about how things appear to us, not about how anything actually is.  The claims, thus divorced from reality, can’t be false; and if they’re true, they’re true in a peculiar way that shields them from error.  In looking at an illusion, for example, I may well be wrong if I say the top line is longer; but if I say it appears or seems to me that the top line is longer, I can’t in the same way be wrong.  The sincerity of the latter claim seemingly guarantees its truth.  It’s tempting, perhaps, to say this: If something appears to appear a certain way, necessarily it appears that way.  Therefore, we can’t misjudge appearances, which is to say, phenomenology.

This reasoning rests on an equivocation between what we might call an epistemic and a phenomenal sense of “appears” (or, alternatively, “seems”).  Sometimes, we use the phrase “it appears to me that such-and-such” simply to express a judgment – a hedged judgment, of a sort – with no phenomenological implications whatsoever.  If I say, “It appears to me that the Democrats are headed for defeat”, ordinarily I’m merely expressing my opinion about the Democrats’ prospects.  I’m not attributing to myself any particular phenomenology.  I’m not claiming to have an image, say, of defeated Democrats, or to hear the word “defeat” ringing in my head.  In contrast, if I’m looking at an illusion in a vision science textbook, and I say that the top line “appears” longer, I’m not expressing any sort of judgment about the line.  I know perfectly well it’s not longer.  I’m making instead, it seems, a claim about my phenomenology, about my visual experience.[24]

Epistemic uses of “appears” might, under certain circumstances, be infallible in the sense of the previous section.  Maybe, if we assume that they’re sincere and normally caused, their truth conditions will be a subset of their existence conditions – though a story needs to be told here.[25]  But phenomenal uses of “appears” are by no means similarly infallible.  This is evident from the case of weak, nonobvious, or merely purported illusions.  Confronted with a perfect cross and told there may be a “horizontal-vertical illusion” in the lengths of the lines, one can feel uncertainty, change one’s mind, and make what at least plausibly seem to be errors about whether one line “looks” or “appears” or “seems” in one’s visual phenomenology to be longer than another.  You might, for example, fail to notice – or worry that you may be failing to notice – a real illusion in your experience of the relative lengths of the lines; or you might (perhaps under the influence of a theory) erroneously report a minor illusion that actually isn’t part of your visual experience at all.  Why not?[26]

Philosophers who speak of “appearances” or “seemings” in discussing consciousness invite conflation of the epistemic and phenomenal senses of these terms.  They thus risk breathing an illegitimate air of indefeasibility into our reflections about phenomenology.  “It appears that it appears that such-and-such” may have the look of redundancy; but on disambiguation the redundancy vanishes: “It epistemically seems to me that my phenomenology is such-and-such”.  No easy argument renders this statement self-verifying.

 

xi.

 

Suppose I’m right about one thing – about something that appears, anyway, hard to deny: that people reach vastly different introspective judgments about their conscious experience, their emotional experience, their imagery, their visual experience, their thought.  If these judgments are all largely correct, people must differ immensely in the structure of their conscious experience.

You might be happy to accept that, if the price of denying it is skepticism about introspective judgments.  Yet I think there’s good reason to pause.  Human variability, though impressive, usually keeps to certain limits.  Feet, for example – some are lean and bony, some fat and square, yet all show a common design: skin on the outside, stout bones at the heel, long bones running through the middle into toes, nerves and tendons arranged appropriately.  Only in severe injury or mutation is it otherwise.  Human livers may be larger or smaller, better or worse, but none is made of rubber or attached to the elbow.  Human behavior is wonderfully various, yet we wager our lives daily on the predictability of drivers and no one shows up to department meetings naked.  Should phenomenology prove the exception by varying radically from person to person – some of us experiencing 100 degrees of visual clarity, some only 2 degrees, some possessed of a distinctive phenomenology of thought, some lacking it, and so forth – with as little commonality as these diverse self-attributions seem to suggest?  Of course, if ocular physiology differed in ways corresponding to the differences in report, or if we found vastly different performances on tests of visual acuity or visual memory, or if some of us possessed higher cognition or sympathetic emotional arousal while others did not, that would be a different matter.  But as things are, two people walk into a room, their behavioral differences are subtle, their physiologies are essentially the same, and yet phenomenologically they’re so alien as to be like different species?  Hm!

Here’s another possibility: Maybe people are largely the same except when they introspect.  Maybe we all have basically the same visual phenomenology most of the time, for example, until we reflect directly on that phenomenology – and then some of us experience 100 degrees of stable clarity while others experience only two degrees.  Maybe we all have a phenomenology of thought, but introspection amplifies it in some people, dissipates it in others; analogously for imagery, emotions, and so forth.

That view has its attractions.  But to work it so as to render our introspective judgments basically trustworthy, one must surrender many things.  The view concedes to the skeptic that we know little about ordinary, unintrospected experience, since it hobbles the inference from introspected experience to experience in the normal, unreflective mode.  It threatens to make a hash of change in introspective opinion: If someone thinks a previous introspective opinion of hers was mistaken – a fairly common experience among people I interview (see, for example, §vi) – she must, it seems, generally be wrong that it was mistaken.  She must, generally, be correct, now, that her experience is one way, and also correct, a few minutes ago, that it was quite another way, without having noticed the intervening change.  This seems an awkward coupling of current introspective acumen with profound ignorance of change over time.  The view renders foolish whatever uncertainty we may sometimes feel when confronted with what might have seemed to be introspectively difficult tasks (as in §§iv, vii, and x).  Why feel uncertain if the judgment one reaches is bound to be right?  It also suggests a number of particular – and I might say rather doubtful – empirical commitments (unless consciousness is purely epiphenomenal): major differences in actual visual acuity, while introspecting, between those reporting broad clarity and those reporting otherwise; major differences in cognition, while introspecting, between people reporting a phenomenology of thought and those denying it; and so on.  The view also requires an entirely different explanation of why theorists purporting to use “immediate retrospection”[27] also find vastly divergent results – since immediate retrospection, if successful, postpones the act of introspection until after the conscious experience to be reported, when presumably it won’t have been polluted by the introspective act.

Is there some compelling reason to take on all this?

 

xii.

 

There are two kinds of unreliability.  Something might be unreliable because it often goes wrong or yields the wrong result, or it might be unreliable because it fails to do anything or yield any result at all.  A secretary is unreliable in one way if he fouls the job, unreliable in another if he neglects it entirely.  A program for delivering stock prices is unreliable in one way if it tends to misquote, unreliable in another if it crashes.  Either way, they can’t be depended on to do what they ought.[28]

Introspection is unreliable in both ways.  Reflection on basic features of ongoing experience leads sometimes to error and sometimes to perplexity or indecision.  Which predominates in the examples of this essay is not, I think, a deep matter, but rather a matter of context or temperament.  Some introspectors will be more prone to glib guesswork than others.  Some contexts – for example, a pessimistic essay on introspection – will encourage restraint.  But whether the result is error or indecision, introspection will have failed – if we suppose that introspection ought to yield trustworthy judgments about the grossest contours of ongoing conscious experience.

You might reject that last idea.  Maybe we shouldn’t expect introspection to reveal (for example) the bodily or non-bodily aspects of emotion, the presence or absence of a distinctive cognitive phenomenology.  It wouldn’t, then, tell against the reliability of introspection if such cases baffle us.  It doesn’t tell against the reliability of a stock quote program if it doesn’t describe the weather.  A passenger car that overheats going 120 m.p.h. isn’t thereby unreliable.  Maybe I’ve pushed introspection beyond its proper limits, illegitimately forcing it into failure.

What, then, would be the proper domain of introspection, narrowly enough construed to preserve its reliability?  Our ongoing beliefs and desires?  That changes the topic away from current conscious experience: When I report believing that a body-builder is governor of California, I’m not, I think – at least not directly and primarily – reporting introspectively on an ongoing episode of consciousness.[29]  Our current thoughts and emotions – but only their contents, not their form or structure?  That, too, might be changing the topic.  Thought and emotion may not be best construed as purely phenomenal.  The self-attribution of current thought contents and emotional states (as opposed to the phenomenal form and structure of those thoughts and emotions) may be more expressive or reactive (like a spontaneous “I hate you!”) or simply self-fulfilling (§ix) than introspective, if we’re going to be strict about what properly falls in the domain of introspection.  And of course the accuracy of emotional self-attribution is disputable (§iv); as, I think, is the accuracy of our self-attribution of recently past thought contents.[30]

We may generally be right about foveal visual experience of color and the presence or absence of canonical pains, but it’s arbitrary to call such reports introspective and not similar-seeming reports about the overall clarity of the visual field or the presence or absence of bodily aspects of emotion.  In both formal and informal interviews with me, and in the experiments of early introspective psychologists like Titchener (1901-1905), and in the recent explorations of psychologists like Hurlburt (1990; Hurlburt and Schwitzgebel forthcoming), subjects confidently pronounce on the features of experience discussed in this essay.  Neither I, nor they, nor Titchener, nor Hurlburt, nor anyone else I’m aware of, sees any obvious difference in mechanism.  These basic facts of experience are the proper targets of introspection, if anything is.  If introspection regularly fails to discern them correctly, it is not a reliable process.

 

xiii.

 

Descartes, I think, had it quite backwards when he said the mind – including especially current conscious experience – was better known than the outside world.  The teetering stacks of paper around me, I’m quite sure of.  My visual experience as I look at those papers; my emotional experience as I contemplate the mess; my cognitive phenomenology as I drift in thought, staring at them – of these, I’m much less certain.  My experiences flee and scatter as I reflect.  I feel unpracticed, poorly equipped with the tools, categories, and skills that might help me dissect them.  They are gelatinous, disjointed, swift, shy, changeable.  They are at once familiar and alien.

The tomato is stable.  My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color.  My thoughts, my images, my itches, my pains, bound away as I think about them, or remain only as self-conscious, interrupted versions of themselves.  Nor can I hold them still, even as artificial specimens – as I reflect on one aspect of the experience it alters and grows, or it crumbles.  The unattended aspects undergo their own changes too.  If outward things were so evasive, they’d also mystify and mislead.

I know better what’s in the burrito I’m eating than I know my gustatory experience as I eat it.  I know it has cheese.  In describing my experience, I resort to saying, vaguely, that the burrito tastes “cheesy”, without any very clear idea what this involves.  Maybe, in fact, I’m just – or partly – inferring: The thing has cheese, so I must be having a taste experience of “cheesiness”.  Maybe also, if I know that the object I’m seeing is evenly red, I’ll infer a visual experience of uniform “redness” as I look at it.  Or if I know that weeding is unpleasant work, I’ll infer a negative emotion as I do it.  Indeed, it can make great sense as a general strategy to start with judgments about plain, easily knowable facts of the outside world, then infer to what is more foreign and elusive, our consciousness as we experience that world.[31]  I doubt we can fully disentangle such inferences from more “genuinely introspective” processes.

Descartes thought, or is often portrayed as thinking, that we know our own experience first and most directly, and then infer from that to the external world.[32]  If that’s right – if our judgments about the outside world, to be trustworthy, must be grounded in sound judgments about our experiences – then our epistemic situation is dire indeed.  However, I see no reason to accept any such introspective foundationalism.[33]  Indeed, I suspect the opposite is nearer the truth: Our judgments about the world to a large extent drive our judgments about our experience.  Properly so, since the former are the more secure.[34]


References:

Armstrong, D.M. 1963.  Is Introspective Knowledge Incorrigible?  Philosophical Review 72:417-32.

Bar-On, Dorit. 2004.  Speaking My Mind.  Oxford: Oxford.

Bayle, Pierre. 1702/1734-8.  The Dictionary Historical and Critical of Mr. Peter Bayle, translated by des Maizeaux. London: Knapton et al.

Bem, Daryl J. 1972.  Self-Perception Theory.  Advances in Experimental and Social Psychology 6:1-62.

Berkeley, George. 1710/1965.  A Treatise Concerning the Principles of Human Knowledge.  In Principles, Dialogues, and Philosophical Correspondence, edited by C.M. Turbayne.  New York: Macmillan.

Blackmore, Susan. 2002.  There Is No Stream of Consciousness.  Journal of Consciousness Studies 9 (numbers 5-6):17-28.

Boring, E.G. 1921.  The Stimulus Error.  American Journal of Psychology 32:449-71.

Brandstätter, Hermann. 2001.  Time Sampling Diary: An Ecological Approach to the Study of Emotion in Everyday Life Situations.  In Persons, Situations, and Emotions, edited by H. Brandstätter and A. Eliasz, 20-52.  Oxford: Oxford.

Burge, Tyler. 1988.  Individualism and Self-Knowledge.  Journal of Philosophy 85:649-63.

Burge, Tyler. 1996.  Our Entitlement to Self-Knowledge.  Proceedings of the Aristotelian Society 96:91-116.

Chalmers, David J. 1996.  The Conscious Mind.  New York: Oxford.

Chalmers, David J. 2003.  The Content and Epistemology of Phenomenal Belief.  In Consciousness: New Philosophical Essays, edited by Q. Smith and A. Jokic.  Oxford: Clarendon.

Chisholm, Roderick M. 1957.  Perceiving.  Ithaca, NY: Cornell.

Chuang Tzu. 3rd c. B.C.E./1964.  Basic Writings, translated by B. Watson.  New York: Columbia.

Churchland, Paul M. 1985.  Reduction, Qualia, and the Direct Introspection of Brain States.  Journal of Philosophy 82: 8-28.

Churchland, Paul M. 1988.  Matter and Consciousness, rev. ed.  Cambridge, MA: MIT.

Dennett, Daniel C. 1969.  Content and Consciousness.  New York: Humanities.

Dennett, Daniel C. 1991.  Consciousness Explained.  Boston: Little, Brown, and Co.

Dennett, Daniel C. 2001.  Surprise, Surprise.  Behavioral and Brain Sciences 24:982.

Dennett, Daniel C. 2002.  How Could I Be Wrong?  How Wrong Could I Be?  Journal of Consciousness Studies 9 (numbers 5-6):13-6.

Descartes, René. 1641/1984.  Meditations on First Philosophy.  In The Philosophical Writings of Descartes, translated by J. Cottingham, R. Stoothoff, and D. Murdoch. Cambridge: Cambridge.

Dretske, Fred. 1995.  Naturalizing the Mind.  Cambridge, MA: MIT.

Dretske, Fred. 2000.  Perception, Knowledge, and Belief.  Cambridge: Cambridge.

Dretske, Fred. 2003.  How Do You Know You Are Not a Zombie?  In Privileged Access, edited by B. Gertler, 1-13.  Aldershot, England: Ashgate.

Ericsson, K. Anders and Herbert A. Simon. 1984/1993.  Protocol Analysis: Verbal Reports As Data, rev. ed.  Cambridge, MA: MIT.

Gertler, Brie. 2001.  Introspecting Phenomenal States.  Philosophy & Phenomenological Research 63:305-28.

Gertler, Brie, editor. 2003.  Privileged Access.  Aldershot, England: Ashgate.

Goldman, Alvin. 1986.  Epistemology and Cognition.  Cambridge, MA: Harvard.

Goldman, Alvin. 2004.  Epistemology and the Evidential Status of Introspective Reports.  Journal of Consciousness Studies 11 (numbers 7-8):1-16.

Goldman, Alvin. 2006.  Simulating Minds.  New York: Oxford.

Gordon, Robert M. 1995.  Simulation Without Introspection or Inference from Me to You.  In Mental Simulation, edited by M. Davies and T. Stone, 53-67.  Oxford: Blackwell.

Haybron, Dan. Forthcoming.  Do We Know How Happy We Are?  Noûs.

Hintikka, Jaakko. 1962.  Cogito Ergo Sum: Inference or Performance?  Philosophical Review 71:3-32.

Horgan, Terence and John Tienson. 2002.  The Intentionality of Phenomenology and the Phenomenology of Intentionality.  In Philosophy of Mind, edited by D. J. Chalmers, 520-33.  New York: Oxford.

Horgan, Terence, John Tienson, and George Graham. 2005.  Internal-World Skepticism and the Self-Presentational Nature of Phenomenal Consciousness.  In Experience and Analysis: Proceedings of the 27th International Wittgenstein Symposium, edited by M. Reicher and J. Marek, 191-207.  Wien: ÖBV & HPT.

Hume, David. 1739/1978.  A Treatise of Human Nature, edited by L.A. Selby-Bigge and P.H. Nidditch.  Oxford: Clarendon.

Hume, David. 1748/1975.  An Enquiry Concerning Human Understanding.  In Enquiries Concerning Human Understanding and Concerning the Principles of Morals, edited by L.A. Selby-Bigge and P.H. Nidditch.  Oxford: Clarendon.

Humphrey, George. 1951.  Thinking.  London: Methuen.

Hurlburt, Russell T. 1990.  Sampling Normal and Schizophrenic Inner Experience.  New York: Plenum.

Hurlburt, Russell T. and Eric Schwitzgebel.  Forthcoming.  Describing Inner Experience? Proponent Meets Skeptic.  Cambridge, MA: MIT.

Jack, Anthony I., and Tim Shallice. 2001.  Introspective Physicalism As an Approach to the Science of Consciousness.  Cognition 79:161-96.

Jackson, Frank. 1977.  Perception.  Cambridge: Cambridge.

James, William. 1890/1981.  The Principles of Psychology.  Cambridge, MA: Harvard.

Kornblith, Hilary. 1998.  What Is It Like to Be Me?  Australasian Journal of Philosophy 76:48-60.

Kriegel, Uriah. 2006.  The Same-Order Monitoring Theory of Consciousness.  In Self-Representational Approaches to Consciousness, edited by U. Kriegel and K. Williford.  Cambridge, MA: MIT.

Lambie, John A. and Anthony J. Marcel. 2002.  Consciousness and the Varieties of Emotion Experience: A Theoretical Framework.  Psychological Review 109:219-59.

Lawlor, Krista. 2006.  Knowing What One Believes.  Oral presentation, U.C. Riverside.

Locke, John. 1690/1975.  An Essay Concerning Human Understanding, edited by P. H. Nidditch.  Oxford: Clarendon.

Lutz, Antoine, John D. Dunne, and Richard J. Davidson. Forthcoming.  Meditation and the Neuroscience of Consciousness.  In Cambridge Handbook of Consciousness, edited by P. Zelazo, M. Moscovitch, and E. Thompson.  Cambridge: Cambridge.

Lycan, William G. 1996.  Consciousness and Experience.  Cambridge, MA: MIT.

Mack, Arien and Irvin Rock. 1998.  Inattentional Blindness.  Cambridge, MA: MIT.

McGeer, Victoria. 1996.  Is “Self-Knowledge” an Empirical Problem?  Renegotiating the Space of Philosophical Explanation.  Journal of Philosophy 93:483-515.

Montaigne, Michel de. 1580/1948.  The Complete Essays of Montaigne, translated by D. M. Frame.  Stanford, CA: Stanford.

Moran, Richard. 2001.  Authority and Estrangement.  Princeton, NJ: Princeton.

Nichols, Shaun and Stephen P. Stich. 2003.  Mindreading.  Oxford: Clarendon.

Nisbett, Richard E. and Timothy DeCamp Wilson. 1977.  Telling More Than We Can Know: Verbal Reports on Mental Processes.  Psychological Review 84:231-59.

Noë, Alva. 2004.  Action in Perception.  Cambridge, MA: MIT.

O’Regan, J. Kevin. 1992.  Solving the “Real” Mysteries of Visual Perception: The World As an Outside Memory.  Canadian Journal of Psychology 46:461-88.

Pitt, David. 2004.  The Phenomenology of Cognition, or What Is It Like to Think That P?  Philosophy and Phenomenological Research 69:1-36.

Prinz, Jesse J. 2004.  Gut Reactions.  Oxford: Oxford.

Rensink, Ronald A., J. Kevin O’Regan, and James J. Clark. 2000.  On the Failure to Detect Changes in Scenes Across Brief Interruptions.  Visual Cognition 7:127-45.

Robinson, William S. 2005.  Thoughts Without Distinctive Non-Imagistic Phenomenology.  Philosophy and Phenomenological Research 70:534-60.

Rosenthal, David M. 1986.  Two Concepts of Consciousness.  Philosophical Studies 49:329-59.

Ryle, Gilbert. 1949.  The Concept of Mind.  New York: Barnes & Noble.

Sanches, Francisco. 1581/1988.  That Nothing Is Known, edited and translated by E. Limbrick and D. F. S. Thomson.  Cambridge: Cambridge.

Schooler, Jonathan, and Charles A. Schreiber. 2004.  Experience, Meta-Consciousness, and the Paradox of Introspection.  Journal of Consciousness Studies 11 (numbers 7-8):17-39.

Schwitzgebel, Eric and Michael S. Gordon. 2000.  How Well Do We Know Our Own Conscious Experience? The Case of Human Echolocation.  Philosophical Topics 28:235-46.

Schwitzgebel, Eric. 2002a.  How Well Do We Know Our Own Conscious Experience? The Case of Visual Imagery.  Journal of Consciousness Studies 9 (numbers 5-6): 35-53.

Schwitzgebel, Eric. 2002b.  Why Did We Think We Dreamed in Black and White?  Studies in History and Philosophy of Science 33:649-60.

Schwitzgebel, Eric. 2004.  Introspective Training Apprehensively Defended: Reflections on Titchener’s Lab Manual.  Journal of Consciousness Studies 11 (numbers 7-8): 58-76.

Schwitzgebel, Eric. 2006.  Do Things Look Flat?  Philosophy and Phenomenological Research 72:589-99.

Schwitzgebel, Eric, Changbing Huang, and Yifeng Zhou. 2006. Do We Dream in Color? Cultural Variations and Skepticism.  Dreaming 16:36-42.

Schwitzgebel, Eric. 2007a.  Do You Have Constant Tactile Experience of Your Feet in Your Shoes? Or Is Experience Limited to What’s in Attention?  Journal of Consciousness Studies 14 (number 3): 5-35.

Schwitzgebel, Eric. 2007b.  No Unchallengeable Epistemic Authority, of Any Sort, Regarding Our Own Conscious Experience – Contra Dennett?  Phenomenology and the Cognitive Sciences 6:107-13.

Sextus Empiricus. c. 200/1994.  Outlines of Skepticism, translated by J. Annas and J. Barnes.  Cambridge: Cambridge.

Shoemaker, Sydney. 1994.  Self-Knowledge and “Inner Sense”.  Philosophy and Phenomenological Research 54:249-314.

Siewert, Charles P. 1998.  The Significance of Consciousness.  Princeton, NJ: Princeton.

Skinner, B. F. 1945.  The Operational Analysis of Psychological Terms.  Psychological Review 52:270-7.

Spener, Maja. 2007.  Phenomenal Adequacy and Introspective Evidence.  Unpublished MS.

Thomas, Nigel. 1999.  Are Theories of Imagery Theories of Imagination?  Cognitive Science 23:207-45.

Titchener, Edward Bradford. 1899.  A Primer of Psychology.  New York: Macmillan.

Titchener, Edward Bradford. 1901-1905.  Experimental Psychology.  New York: Macmillan.  [Note: The widely available 1971 reprint omits the instructor’s part of the first volume.]

Titchener, Edward Bradford. 1909.  Lectures on the Experimental Psychology of the Thought-Processes.  New York: Macmillan.

Titchener, Edward Bradford. 1910.  A Text-Book of Psychology.  New York: Macmillan.

Titchener, Edward Bradford. 1912.  The Schema of Introspection.  American Journal of Psychology 23:485-508.

Tye, Michael. 2003.  Representationalism and the Transparency of Experience.  In Privileged Access, edited by B. Gertler.  Aldershot, England: Ashgate.

Unger, Peter. 1975.  Ignorance.  Oxford: Clarendon.

Watson, John B. 1913.  Psychology As the Behaviorist Views It.  Psychological Review 20:158-77.

Wilson, Robert A. 2003.  Intentionality and Phenomenology.  Pacific Philosophical Quarterly 84:413-31.



[1] For Descartes, see especially his Second Meditation (1641/1984, 19).  For Hume, see the first Book of his Treatise (1739/1978), especially I.IV.II, 190, 212 and I.IV.V, 232.  (Hume may change his mind in the Enquiries: See the first Enquiry [1748/1975], §1, 13 and §7, 60.)  For Sextus, see Outlines of Skepticism (c. 200/1994), especially Ch. VII and X.  Pierre Bayle takes a similar position in the entry on Pyrrho in his Dictionary (1702/1734-8, vol. 4, especially remark B, 654).

[2] For Zhuangzi, see the second of his “Inner Chapters” (Chuang Tzu 3rd c. BCE/1964).  For Montaigne, see “Apology for Raymond Sebond” (1580/1948).  Sanches’ brief treatment of the understanding of the mind in That Nothing is Known (1581/1988, especially 243-5 [57-9]) is at most only a partial exception to this tendency.  So also is Unger 1975, III.§9, who seems to envision only the possibility of linguistic error about current experience and whose skepticism in this instance seems to turn principally upon an extremely demanding criterion for knowledge.  Huet’s Against Cartesian Philosophy (1694/2003) is nicely explicit in extending its skepticism to internal matters of ongoing thought, though the examples and arguments differ considerably from mine here.

[3] See especially his Primer of Psychology (1899) and his Experimental Psychology (1901-1905).  I discuss Titchener’s views about introspective training at length in Schwitzgebel 2004.

[4] Consider: Watson 1913; Skinner 1945; Ryle 1949; Bem 1972.

[5] For example: Armstrong 1963; Churchland 1988 – even Kornblith 1998, reading with a careful eye to distinguish error about current conscious experience from other sorts of error.  See also, recently: Shoemaker 1994; Lycan 1996; Dretske 2000; Jack and Shallice 2001; Nichols and Stich 2003; Goldman 2004, 2006; Horgan, Tienson, and Graham 2005; and most of the essays collected in Gertler 2003, among many others.  Gertler (2001) and Chalmers (2003) have recently attempted to revive restricted versions of (something like) infallibilism.  Chalmers’s infallibilism is so restricted I’m not sure how much useful substance remains.  See §ix for a discussion of the range and nature of infallible judgments.

[6] For more on Dennett’s granting people unchallengeable authority regarding their own experience, see Schwitzgebel 2007b.

[7] I see no necessary conflict between the current view of introspection and views on which conscious experience involves a “same order” (for example, Kriegel 2006) or “higher-order” (for example, Rosenthal 1986; Lycan 1996) representation of the conscious state.  Such views can allow – and to be plausible, I think they must allow – erroneous judgments of the sort to be discussed in this essay.  For example, a non-conscious “higher-order thought” that I am having experience E might conflict with a conscious judgment that I am not having experience E.  Of course, only the conscious judgment is a reportable result of an introspective process.

I am assuming the falsity of a strongly “self-presentational” view of consciousness (as, perhaps, in Horgan, Tienson, and Graham 2005).  The examples in the present essay, I think, reveal the implausibility of such an approach.

Views characterizing us as constantly and effortlessly introspecting must either generate unreportable, non-conscious judgments, or they must in some other way differ, in mechanism or result, from the sort of self-conscious introspective efforts that are the topic of this essay and to which the term “introspection” is here meant to refer.

[8] Prinz 2004 helpfully reviews a variety of positions and evidence pertinent to them.

[9] See, for example, Brandstätter 2001.  It wouldn’t surprise me in the least if positive mood even in studies such as this is considerably overreported.

[10] Haybron forthcoming presents an impressive array of evidence suggesting that we don’t know how (un-)happy we are.

[11] James 1890/1981 and Lambie and Marcel 2002 may be good places to start on this topic.  In principle, of course, one could attempt to resolve such disputes by attributing vast individual differences in phenomenology to the participants, differences that perfectly mirror the divergences in their general claims; see §xi for a discussion of this.

[12] On skepticism about color in dreams see Schwitzgebel 2002b; Schwitzgebel, Huang, and Zhou 2006.

[13] I take this argument to be in the spirit of Armstrong 1963.  It needn’t require that the phenomenology and the judgment be entirely “distinct existences” in the sense Shoemaker 1994 criticizes, though of course it assumes that the one state is possible without the other.  The only reason I see to reject such a possibility is a prior commitment to infallibilism.

[14] For example, “Melanie” in Hurlburt and Schwitzgebel forthcoming.

[15] See also Dennett 2001, 982.

[16] In addition to this type of “refrigerator light” error (Thomas 1999), an implicit analogy between visual experience and pictures or photographs may also sway us to overascribe detail in visual experience (see Noë 2004).  Consider also Dennett 1969, 139-141.

[17] Among recent authors, Dennett (1991), O’Regan (1992), Mack and Rock (1998), Rensink, O’Regan, and Clark (2000), and Blackmore (2002) come to mind – though we differ somewhat in our positive views.  Some of these authors believe we do not visually experience what we don’t attend to.  I mean to take no stand here on that particular question, which I explore in depth in Schwitzgebel 2007a.

[18] The British empiricists (most famously, Locke 1690/1975; Berkeley 1710/1965; Hume 1739/1978) appear to have believed that conscious thought is always imagistic.  So did many later introspective psychologists influenced by them (notably Titchener 1909, 1910), against advocates of “imageless thought” (notably the “Würzburg group”, whose work is reviewed in Humphrey 1951).  Recent philosophers participating in the controversy include Siewert 1998; Horgan and Tienson 2002; Wilson 2003; Pitt 2004; Robinson 2005.  See also Aristotle De Anima 431a; Hurlburt and Schwitzgebel forthcoming.

[19] These and related poll results were published at http://consc.net/neh/pollresults.html (accessed May 2005).  I am inclined to read the disagreement between the “no phenomenology of thought” and the “imagery exhausts it” camps as a disagreement about terms or concepts rather than about phenomenology – a disagreement about whether having an image should count as “thinking”.  However, I see no similarly easy terminological explanation of the central dispute.

As I recall (though this number is not recorded on the website) only two participants (Maja Spener and I) said they didn’t know. 

[20] See Schwitzgebel and Gordon 2000; Schwitzgebel 2002a.  See also Schwitzgebel 2006 for a discussion of people’s divergent judgments about the experience of visual perspective, and Schwitzgebel 2007a for a discussion of our divergent judgments about whether we have a constant flow of peripheral experiences (of our feet in our shoes, the refrigerator hum, etc.).

[21] I explore the possibility of classical introspective training, along the lines of early introspective psychology, in Schwitzgebel 2004 and the possibility of careful interview about randomly sampled experiences in Hurlburt and Schwitzgebel forthcoming.  Schooler and Schreiber 2004 assesses the current scientific situation reasonably, if not quite as pessimistically.  Very recently, there has been some promising work on meditation: See Lutz, Dunne, and Davidson forthcoming.

[22] Compare Hintikka 1962; Burge 1988, 1996.

[23] But see Chalmers 1996 and Dretske 2003 on the possibility that we could be experienceless “zombies” without knowing it.  Both Chalmers and Dretske think we do know that we are conscious, but that it’s not straightforward to see how we know that.

[24] Compare Chisholm 1957; Jackson 1977.  Naturally, ordinary and philosophical usage of “appears” is rather more complex than this simple portrayal suggests, if one looks at the details; but I don’t think that affects the basic observation of this section.

[25] See Moran 2001 and Bar-On 2004 for versions of this story.

[26] For more on mistakes in the introspection of nonobvious or fictional illusions, see Schwitzgebel 2004.

[27] For example, James 1890/1981, 189; Titchener 1912, 491; Hurlburt 1990, ch. 2.

[28] Epistemologists often define “reliability” so that only the first type of failure counts as a failure of reliability (for example, Goldman 1986, who calls the second sort of failure a lack of “power”).  It’s a semantic issue, but I think ordinary language is on my side.

[29] See Gordon 1995; McGeer 1996; Moran 2001; Bar-On 2004; Lawlor 2006.

[30] Ericsson and Simon (1984/1993) are optimistic about the accuracy of descriptions of one’s thought processes when one “thinks aloud”, expressing the thought concurrently with having it.  They are considerably less optimistic about retrospective reports if the subject is not primed and trained in advance to express and reflect on her thoughts as they occur.

Burge (1996) argues that, to be successful, “critical reasoning” requires knowledge of recently past thought contents.  But I doubt much of our reasoning is “critical” in the relevant sense.  (Usually, it is spontaneous and un-self-reflective; often it is entirely hidden.)  Nor is it clear that when we try to reflect critically on our stream of reasoning we are reliably successful in doing so.

[31] Titchener thinks this strategy common among untutored introspectors, and he repeatedly warns against it as “stimulus error” or “R-error”: Titchener 1901-1905; Boring 1921.  This strategy bears some relation to the strategy that “transparency theorists” such as Dretske (1995, 2000) and Tye (2003) think we always use in reaching judgments about our experience (though they hardly think of experience as “elusive”).

[32] Whether this is the best interpretation of Descartes, I am uncertain.  My impression is that Descartes is not entirely clear on this point, and sympathetic interpretations of him shift with the mood of the times.  The view is also associated with Locke (1690/1975).

[33] Of course, if it were possible to draw a clear line between the trustworthy and untrustworthy introspective judgments, then maybe a version of introspective foundationalism could be salvaged.  I’m not optimistic that such a line could be drawn, or that, if it were, enough trustworthy judgments would remain to be of much use.

[34] For helpful comments, criticism, and discussion, thanks to Donald Ainslie, Alvin Goldman, David Hunter, Tony Jack, Tori McGeer, Jennifer Nagel, Shaun Nichols, Gualtiero Piccinini, Josh Rust, Charles Siewert, Maja Spener (whose 2007 is similar in spirit to this essay), Aaron Zimmerman, and audiences at Washington University in St. Louis, Cal State Long Beach, University of Redlands, U.C. Santa Barbara, University of Toronto, and the Philosophy of Science Association.