Definition of Self-Deception:

Self-deception refers to the act of deceiving oneself: convincing oneself of something that is not true, or distorting reality in order to maintain a particular belief, feeling, or behavior. It involves sustaining a false picture of reality and ignoring evidence that conflicts with one’s beliefs or desires. Self-deception can occur in many areas of life, including personal relationships, professional settings, and even one’s own thoughts and emotions.

Characteristics of Self-Deception:

Self-deception is characterized by several key traits:

  • Delusion: Self-deception often involves creating or maintaining delusions, in which an individual forms a false belief or perception of reality shaped by personal biases or desires.
  • Denial: Individuals engaging in self-deception often deny or overlook evidence that contradicts their preferred beliefs, disregarding facts or downplaying their significance.
  • Justification: Self-deception is often accompanied by justifications or rationalizations that support the individual’s false beliefs or actions. These justifications serve to protect self-esteem or to maintain a certain level of comfort.
  • Selective Attention: People practicing self-deception tend to focus selectively on information that confirms their pre-existing beliefs or desires, while ignoring anything that challenges or contradicts them.
  • Emotional Bias: Self-deception is often driven by emotional biases: individuals allow their emotions or desires to color their perception of reality, leading to distorted interpretations of events or situations.

Effects and Consequences of Self-Deception:

Self-deception can have a range of effects and consequences for individuals and those around them, including:

  • Interpersonal Challenges: Self-deception can strain personal relationships, leading to misunderstandings, loss of trust, and an inability to communicate effectively.
  • Impaired Decision Making: Self-deception can undermine sound, rational decisions, since it often involves ignoring important information or overlooking potential consequences.
  • Mental Health Implications: Sustained self-deception can erode mental well-being, contributing to increased stress, anxiety, and even depression.
  • Stagnation and Growth Limitations: By resisting or distorting reality, self-deception can hinder personal growth, prevent individuals from acknowledging their flaws or weaknesses, and restrict their potential for self-improvement.
  • Loss of Objectivity: Self-deception can cloud a person’s ability to perceive and assess situations objectively, producing biased and flawed perspectives.

Overcoming Self-Deception:

While self-deception can be deeply ingrained and difficult to overcome, individuals can develop self-awareness and practice critical thinking to combat it. Strategies include:

  • Honest Self-Reflection: Engaging in introspection and regularly questioning one’s beliefs and actions can expose instances of self-deception and open the door to personal growth.
  • Seeking External Perspectives: Actively seeking feedback and differing viewpoints from trusted individuals can provide alternative insights and challenge one’s self-deceptive tendencies.
  • Examining Evidence Objectively: Making a conscious effort to evaluate evidence without allowing personal biases or desires to sway judgment helps counter self-deception.
  • Cultivating Emotional Intelligence: Developing emotional intelligence enhances self-awareness and enables individuals to recognize and manage the emotional biases that feed self-deception.
  • Embracing Vulnerability: Being open to vulnerability and acknowledging one’s limitations or mistakes can facilitate growth, reduce defensiveness, and help combat self-deception.


Philos Trans R Soc Lond B Biol Sci. 2010 Jan 27; 365(1538).

Self-deception as self-signalling: a model and experimental evidence

Danica Mijović-Prelec

1 Sloan School of Management and Neuroeconomics Center, MIT, Cambridge, MA 02139, USA

Dražen Prelec

2 Department of Economics, MIT, Cambridge, MA 02139, USA

3 Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA

Self-deception has long been the subject of speculation and controversy in psychology, evolutionary biology and philosophy. According to an influential ‘deflationary’ view, the concept is an over-interpretation of what is in reality an instance of motivationally biased judgement. The opposite view takes the interpersonal deception analogy seriously, and holds that some part of the self actively manipulates information so as to mislead the other part. Building on an earlier self-signalling model of Bodner and Prelec, we present a game-theoretic model of self-deception. We propose that two distinct mechanisms collaborate to produce overt expressions of belief: a mechanism responsible for action selection (including verbal statements) and an interpretive mechanism that draws inferences from actions and generates emotional responses consistent with the inferences. The model distinguishes between two modes of self-deception, depending on whether the self-deceived individual regards his own statements as fully credible. The paper concludes with a new experimental study showing that self-deceptive judgements can be reliably and repeatedly elicited with financial incentives in a categorization task, and that the degree of self-deception varies with incentives. The study also finds evidence of the two forms of self-deception. The psychological benefits of self-deception, as measured by confidence, peak at moderate levels.

1. Introduction

Any definition of self-deception is likely to be controversial, so we start with an actual incident, witnessed by one of us a number of years ago.

It was sherry hour, a casual gathering of a few doctoral students, all good friends. A veteran student had just finished a lengthy disquisition on her recent scholarly progress and post-graduation aspirations. Warming to the topic, she asserted that she would complete her dissertation within the year. ‘Are you kidding, you're never going to finish it,’ remarked another with a smile, his guard down on account of the drink. The comment was not unjust; the student had nothing to show for some half dozen years in the programme. Yet, it hit the mark a bit too well, and in an instant its author found himself wiping the contents of a full glass of sherry from his face and shirt.

Like many true events, this one allows multiple interpretations. Two are relevant here, as picking out two modes of self-deception. To begin with, one could take the student's claim at face value: she is convinced that the dissertation will be completed on schedule, all evidence to the contrary. In the construction of this conviction, periodic extravagant affirmations played a key role, substituting for the absence of actual progress. Words became evidence, following the logic—‘if it wasn't true, then why would I say it?’ (and if true, how perverse to deny it?).

This would be one interpretation. On a second interpretation, the student understood very well that her scholarly prospects were dim. Yet, almost as a matter of personal ritual, she felt compelled to state a contrary belief, and perhaps for the moment she did entertain it. However, the belief was fragile, easily punctured by the offhand remark. She expressed conviction, but did not experience conviction, not in an authentic way. Tossing the sherry was a way of saying—‘Don't treat me like a fool, I have an idea how things stand, but why must you spell it out’.

Regardless of which reading is more faithful to the actual event, each refers to a genuine psychological possibility, requiring explanation. Here we present a formal theory of self-deception that relies on a single psychological mechanism—self-signalling—to generate self-deception in both of these alternative modes. The theory distinguishes among three levels of belief: deep belief, stated belief and experienced belief. Deep belief drives action, including overt statements of belief; experienced belief determines the emotional state following the statement. When stated belief does not match deep belief, we have attempted self-deception. The attempt succeeds if experienced belief matches stated belief. It misfires to the extent that the person discounts her own statement, with emotional response falling short of what might be expected on the basis of the words alone.

Deep beliefs are presumed to be largely inaccessible. This psychological opacity endows statements with self-signalling value, and creates a motive for self-deception. The formal model casts these assumptions into a signalling game, leading to predictions about how incentives and self-knowledge jointly determine whether self-deception is attempted and whether it succeeds. The two modes of self-deception arise as consequences of different levels of psychological awareness about the self-deception mechanism. According to the model, awareness should reduce the credibility of stated beliefs, as one might expect, but it need not eliminate the gap between stated and deep beliefs. In the full-awareness case, a person may be compelled to utter self-deceptive statements even though they have no effect on experienced belief (Shapiro 1996). This would correspond to the ritualistic interpretation of the earlier incident.

We introduce the model in §§3 and 4. It extends the self-signalling model developed by R. Bodner & D. Prelec (Bodner & Prelec 1995, 2003), and is also broadly related to recent economic models of intra-personal psychological interactions (Benabou & Tirole 2002, 2004; Bernheim & Thomadsen 2005; Brocas & Carrillo 2008). This is followed by a new experimental test, presented in §5. In the study, subjects are asked to provide repeated assessments of their own performance in a competitive decision task. Self-assessments cannot affect actual performance, but can affect the subjects' expectations of winning the contest, leading potentially to self-deception. Consistent with the model, we find that financial incentives influence the degree of self-deception, and that the benefits of self-deception, as measured by confidence ratings, accrue to subjects exhibiting an intermediate level of self-deception, who are presumably unaware of their self-deception.

2. Background

Self-deception is an ancient subject. Classical philosophers, beginning with Aristotle and St Augustine, treated it at length, focusing especially on the connections between self-deception, morality and the emotions (Elster 1999). Two thousand years of speculation and commentary have failed to exhaust the topic or forge a consensus interpretation. The notion of self-deception remains integral to Western understanding of human character, as shown by religious moralistic literature, drama and fiction, and by secular world-views such as Marxism, psychoanalysis and atheism, which promise to strip the scales from our self-deceiving eyes.

The modern scholarly literature on self-deception is similarly large, and rife with controversy. According to Gur & Sackeim's (1979) influential formulation, a self-deceived individual (i) holds two contradictory beliefs, p and not-p, (ii) holds them simultaneously, (iii) is unaware of holding one of the beliefs, and (iv) is motivated to remain unaware of that belief. There is an analogy here to inter-personal deception, where one party (the deceiver) knows or believes something and has a reason for inducing opposite beliefs in another party (the deceived). The interpersonal analogy highlights the distinction between those false beliefs that are arrived at by chance or through error, and those for which some intentional agency is responsible.

Moving from the inter-personal to the intra-personal level, the definition raises two paradoxes (Mele 1997). The static paradox concerns the state of mind of the self-deceived individual: how can he hold two incompatible beliefs, p and not-p? The dynamic paradox concerns the process of becoming self-deceived: how can a person intentionally acquire a belief or remain unaware of a belief? Recognition that one is generating or suppressing beliefs would seem to destroy the effectiveness of the effort itself.

An influential ‘deflationary’ response to these two paradoxes has been to deny both, and to assimilate self-deception to the general category of motivationally biased judgements (Mele 1997, 1998). On this view, the interpersonal metaphor is misguided, and most if not all self-deception is not intentional. The opposite view takes the interpersonal analogy seriously, and holds that some part of the self actively manipulates information so as to mislead the other part. The psychoanalytic tradition falls squarely in this camp.

Some manifestations of self-deception lend themselves naturally to deflationary interpretations. Consider the finding that most people rate themselves as superior on virtually any desirable characteristic (Brown & Dutton 1995; Dunning & Hayes 1996). For example, 94 per cent of university professors rate themselves as above average in professional accomplishment relative to their peers (Gilovich 1991). Such findings may only show that most people give special weight to criteria favouring their own case. Once the self-serving bias is in place, the better-than-average conclusion can emerge even if specific pieces of evidence are evaluated in an impartial way. At no moment is it necessary for the individual to believe both p and not-p. Indeed, even rational inference can give rise to the better-than-average effect in some circumstances (J.-P. Benoit & J. Dubra 2009, unpublished data).

Self-serving beliefs can also be generated ad hoc through contrived cover stories, as shown by Kunda in a series of elegant demonstrations (Kunda 1990). In one case, subjects were asked to evaluate the credibility of a (fake) scientific study linking coffee consumption and breast cancer. Female subjects who also happened to be heavy coffee drinkers were especially critical of the study, and the least persuaded by the presented evidence. This is only a sample of the literature documenting how evidence consistent with the favoured hypothesis receives preferential treatment (Ditto & Lopez 1992; Dawson et al. 2002; Norton et al. 2004; Balcetis & Dunning 2006). Moreover, this phenomenon occurs largely outside of awareness (Kunda 1987; Pyszczynski & Greenberg 1987; Pronin et al. 2004). No one questions the reality of motivated reasoning or perception. The critical issue is whether motivational biases are sufficient to explain self-deception.

From the perspective of the ‘real self-deception’ side, motivated reasoning explanations seem to ignore three critical aspects of self-deception. First, they do not account for the strong emotions generated when self-deceptive beliefs are challenged. What prevents the self-deceived from enjoying their false beliefs with smug complacency? There is no explanation for the brittle quality of self-deception (Audi 1985; Bach 1997),¹ and the defensiveness associated with a self-deceptive personality.

Second, the motivated reasoning view denies the special significance of mistaken beliefs about the self. Yet, the concept of self-deception and the most salient examples of self-deception have historically been restricted to beliefs about the self (Holton 2000). To reinforce this intuition, let us suppose that the student in our story had not been talking about the prospects for her dissertation but about some impersonal issue. Let us say that she believes that the 1969 Apollo moon landing is a gigantic hoax, and that she derived these views from a highly motivated interpretation of the evidence. In that case, we might call her biased, but it would be odd to accuse her of self-deception.

Finally, and perhaps most tellingly, under the motivated reasoning view it is hard to make sense of the notion of failed self-deception, a point made by Funkhouser in his provocatively entitled article, ‘Do the self-deceived get what they want?’ (Funkhouser 2005). If self-deception is merely the manifestation of a bias, then the self-deceived will by definition get what they want. A bias that misfires, i.e. one that leaves beliefs unchanged, is no bias at all.

In their original study of self-deception, Gur and Sackeim attempted to demonstrate the coexistence of two incompatible beliefs by exploiting the fact that people dislike the recorded sound of their own voice. In their experiment, subjects heard fragments of speech and were asked to identify the speaker (Gur & Sackeim 1979). Non-recognition of one’s own voice was often accompanied by physiological indications (galvanic skin response) suggestive of detection. Hence, the verbal assessment—‘this is not my own voice’—was in conflict with the physiologically based assessment—‘this is indeed my own voice’.

This interpretation has been criticized on grounds that physiological signs do not necessarily rise to the level of belief (Mele 1997). Similar objections were raised by Mele against arguments from blindsight cases (the phenomenon where a patient claims blindness but is able to detect visual stimuli above chance; Weiskrantz 1986). An ideal demonstration would be one where a single voluntary response conveys two incompatible propositions. A neuropsychological case study indicates how this may be done in principle (Mijovic-Prelec et al. 1994). The patient in question suffered from unilateral visual neglect following a right hemisphere stroke, and to all appearances was unaware of details in the left visual space. However, under experimentally controlled conditions, when asked to judge the presence or absence of a randomly placed target, his verbal denial of left-side targets was suspiciously fast, much faster than his tentative response to null trials when no target was present—the two response time distributions were essentially non-overlapping. The speed of response matched the speed of detection of right-side targets, showing that the left-side target was noticed and that the patient realized the futility of searching for it elsewhere. A single response thus conveyed two contradictory propositions simultaneously: one voluntary response dimension (search time) conveyed p, while the other, equally voluntary, semantic dimension conveyed not-p.²

Among studies with normal human subjects, an experiment by Quattrone & Tversky (1984) provides perhaps the cleanest challenge to deflationary accounts. Their experiment took place at a medical facility, adding credibility to the unusual cover story. Subjects were first asked to keep their hand submerged in a container of cold water until they could no longer tolerate the pain. This was followed by a debriefing, which explained that a certain inborn heart condition could be diagnosed by the effect of exercise on cold tolerance. The consequences of this condition included a shorter lifespan and reduced quality of life. Some subjects were told that having a bad heart would increase cold tolerance, while the others were told the opposite. Backing this up were charts showing different lifespan distributions associated with the two types of heart. Having absorbed this information, subjects were put on an exercycle for a minute after which they repeated the same cold water tolerance test. The majority showed changes in tolerance on the second cold trial in the direction correlated with ‘good news’. In effect, they were cheating on their own diagnosis.

Apart from the Quattrone–Tversky experiment, several other studies provide support for self-signalling. For example, respondents adjust answers to personality questionnaires so as to obtain a profile diagnostic of a good outcome (Kunda 1990; Sanitioso et al. 1990; Dunning et al. 1995); they also adjust problem solving strategies (Ginossar & Trope 1987), and charitable pledges in a diagnostically favourable direction (Bodner 1995). In a recent paper, Dhar & Wertenbroch assess self-signalling directly in the context of consumer choices between goods that could be perceived as virtues (apples, organic pasta) or vices (cookies, steak) (R. Dhar & K. Wertenbroch 2007, unpublished data). They manipulate whether the choice set is homogeneous (containing only vices or only virtues) or mixed, the idea being that selections from mixed sets are diagnostic for self-control, whereas selections from a homogeneous set are not diagnostic. Consistent with the self-signalling hypothesis, they find that consumers are willing to pay relatively more for a virtuous good in a mixed set, when its selection would also generate positive diagnostic utility, but relatively more for a vice good in a homogeneous set, when its selection would avoid negative diagnostic utility.

3. Self-deception as self-signalling

One can attempt to provide a motivated reasoning interpretation of self-signalling. Thus, for example, Mele (1997) states that

One can hold (i) that sincere deniers (in the Quattrone–Tversky experiment), due to a desire to live a long, healthy life, were motivated to believe that they had a healthy heart; (ii) that this motivation (in conjunction with a belief that an upward/downward shift in tolerance would constitute evidence for the favoured proposition) led them to try to shift their tolerance; and (iii) that this motivation also led them to believe that they were not purposely shifting their tolerance …

According to this view, the trying and the false belief that one is not trying are both motivated by the desire for good news, but it does not follow that either the trying or the belief is intentional. However, to assimilate the results of Quattrone and Tversky to this deflationary point of view, one has to expand the powers ascribed to the concept of motivation. The mechanism responsible for trying to shift tolerance must register the difference between the natural tolerance level, corresponding to an absence of trying, and the shifted tolerance level obtained as a result of the trying. In other words, it must register both the true and the fake tolerance. It must not only be able to bias the interpretation of evidence, it must also be able to manufacture the evidence itself.

There is clearly a need to explain how a person can simultaneously try to do something and be unaware of so trying. We will shortly provide an interpretation of self-deception that treats it as a special case of self-signalling. Because the model draws on Bayesian game theory, we first say a few words about this modelling technology.

The basic building block is a rational agent, defined by preferences (utility function), beliefs (subjective probabilities), and an action or choice set. Faced with alternative actions, the agent is presumed to select the one that maximizes expected utility. New information is incorporated into his beliefs according to Bayes' rule. Strategic interactions among agents are modelled with Bayesian game theory. The standard solution concept here is the Nash equilibrium, which characterizes mutual consistency among different players' strategies. Briefly, strategies are in equilibrium if every player is maximizing expected utility, on the assumption that other players are following strategies specified by the equilibrium.
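As a concrete illustration of these building blocks (our own sketch, not part of the paper's model), the fragment below sets up a discrete belief over an unknown characteristic, selects the expected-utility-maximizing action, and applies Bayes' rule to a piece of evidence; the grid, utility function and likelihood are arbitrary assumptions:

```python
import numpy as np

# Sketch of the rational-agent building block: discrete beliefs,
# expected-utility maximization, and Bayesian updating. All numbers and
# functions are illustrative assumptions, not the paper's specification.

theta = np.array([0.2, 0.5, 0.8])    # possible values of an unknown characteristic
prior = np.array([0.3, 0.4, 0.3])    # subjective probabilities p(theta)

def utility(action, th):
    # arbitrary stand-in for u(action, theta)
    return -(action - th) ** 2

def expected_utility(action, p):
    return float(np.dot(p, utility(action, theta)))

# action selection: maximize expected utility over a grid of candidate actions
actions = np.linspace(0, 1, 101)
best = max(actions, key=lambda a: expected_utility(a, prior))

# belief revision: Bayes' rule with an assumed likelihood P(evidence | theta)
likelihood = np.array([0.9, 0.5, 0.1])
posterior = likelihood * prior / np.dot(likelihood, prior)
print(best, posterior)
```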

With these tools one can model self-deception in roughly three ways. The first is to adjust the Bayesian model of belief formation. For example, in a model by G. Mayraz (2009, unpublished data) subjective probabilities of outcomes are inflated or reduced in direct proportion to their utilities. In effect, the valuation of an uncertain outcome is treated as if it were an additional piece of information bearing on the likelihood of the outcome. The second is to treat the individual as a series of temporal selves, with earlier selves manipulating the beliefs of the later selves, e.g. by suppressing information directly or by exploiting future selves' recall of earlier actions but not of the motives that gave rise to those actions (Caplin & Leahy 2001; Benabou & Tirole 2002, 2004; Bernheim & Thomadsen 2005; Koszegi 2006a,b; Gottlieb 2009). The third approach is to add psychological structure by partitioning the decision maker into several simultaneously interacting entities, which could be called selves or modules depending on how much true agency and self-awareness they have (Thaler & Shefrin 1981; Bodner & Prelec 2003; Brocas & Carrillo 2008; Fudenberg & Levine 2008).

The self-signalling model takes the behaviour revealed in the Quattrone and Tversky experiment as prototypical for self-deception. It was introduced by Bodner & Prelec (1995),³ as a formal decision model for non-causal motivation, that is, motivation to generate actions that are diagnostic of good outcomes but that have no causal ability to affect those outcomes. With respect to our threefold classification, it is a psychological structure model, partitioning the decision maker into two collaborative entities, one responsible for action selection and the other responsible for action interpretation. We first provide a short summary of the original model and then discuss how it accounts for self-deception as a byproduct of the self-signalling process.

Self-signalling presumes the existence of an underlying characteristic that is (i) personally important, (ii) introspectively inaccessible, and (iii) potentially revealed through actions. We let the parameter θ represent this characteristic, with θ° indicating its actual value, x a possible outcome, and u(x, θ) the utility (reward or satisfaction) generated by the outcome x in the absence of any choice (i.e. a forced receipt of x). Uncertainty about θ is defined by a probability distribution, p(θ), which may be taken as the current self-image with respect to this characteristic. The value of the self-image is, in turn, determined by a second function, v(θ), which indicates how much pleasure or pain a person would feel from discovering the true θ.

By intentionally choosing one outcome over others, a person learns something about his or her inaccessible characteristics. Hence, an action leads to an updating of the self-image, from p(θ) to p(θ | x). The change in self-image generates a second form of utility, called diagnostic utility: Σ_θ v(θ) p(θ | x) − Σ_θ v(θ) p(θ), produced by replacing p(θ) with the updated p(θ | x). Diagnostic utility captures the extent to which one's own choice provides good or bad news about θ.

In the context of the Quattrone–Tversky experiment, θ would correspond to cold sensitivity, u(x, θ) to the (dis)pleasure associated with x seconds of exposure to cold water in the context of the experimental instructions, and v(θ) to the relief or anxiety associated with discovering one's cold sensitivity level. The total utility of choosing to hold one's hand in cold water for x seconds would then be the sum of outcome and diagnostic utility:

V(x, θ) = u(x, θ) + λ Σ_θ v(θ) p(θ | x),        (3.1)

where λ represents the weight of diagnostic utility. For notational simplicity we omit the constant term −Σ_θ v(θ) p(θ).
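A minimal numeric sketch of equation (3.1) may help fix ideas. Everything here is an illustrative assumption: the utility u, the value function v, the weight λ, and the posterior p(θ | x) are placeholders, not the authors' specification.

```python
import numpy as np

# Sketch of equation (3.1): total utility = outcome utility plus
# lambda-weighted diagnostic utility (constant term omitted, as in the text).

theta = np.array([0.0, 0.5, 1.0])        # candidate values of the characteristic
p_prior = np.array([0.25, 0.50, 0.25])   # current self-image p(theta)
v = np.array([-1.0, 0.0, 1.0])           # pain/pleasure of discovering theta
lam = 1.5                                # weight of diagnostic utility

def outcome_utility(x, th):
    # arbitrary u(x, theta): the action x pays off more for higher theta
    return th * x - 0.5 * x ** 2

def total_utility(x, theta_actual, p_posterior):
    # V(x, theta) = u(x, theta) + lam * sum_theta v(theta) p(theta | x)
    return outcome_utility(x, theta_actual) + lam * float(np.dot(v, p_posterior))

# an action interpreted as signalling the top type raises diagnostic utility:
flattering = np.array([0.0, 0.2, 0.8])   # assumed p(theta | x) after choosing x = 1
print(total_utility(1.0, 0.5, flattering))  # 1.2: outcome 0 plus diagnostic boost
print(total_utility(1.0, 0.5, p_prior))     # 0.0: no change in self-image
```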

This is the model as stated in Bodner & Prelec (2003). However, in a self-deception scenario, what is at stake is a desired deep belief, e.g. that one's spouse is not having an affair. A husband may recognize certain problematic pieces of evidence but remain unsure about his own reading of them. Self-signalling is extended to such cases by treating one's interpretation of evidence as the relevant inaccessible characteristic. Formally, θ_S is the probability of event S, and u(x, θ) an expectation over these events: u(x, θ) = Σ_S θ_S U(x, S), where U(x, S) is the utility of x if the event S occurs. The self-signalling equation then becomes⁴

V(x, θ) = Σ_S θ_S U(x, S) + λ Σ_θ v(θ) p(θ | x).        (3.2)

In §5, we will apply this equation to the explicit financial incentives that are set up by our experiment. But first we need to complete the model by specifying p(θ | x).

4. Two modes of self-deception

Previously, we had referred to the static and dynamic paradoxes of self-deception as central to the debate on the subject. The present model addresses the static paradox, on the coexistence of different beliefs, by postulating three levels of belief. Deep belief is associated with the inaccessible characteristic, whose actual value is θ°. Stated belief is associated with the signalling action x, which either directly or indirectly expresses belief. Experienced belief is associated with the self-inference that follows the statement, p(θ | x).

Regarding the second, dynamic paradox, the model allows resolution in one of two ways, both of which have psychological plausibility. Observe that to complete the model we need to specify how p(θ | x) is derived from the choice and from p(θ). There are two endogenous rules for computing this distribution (Bodner & Prelec 2003), that is, rules that require no new parameters beyond the ones already given: u(x, θ) and p(θ). These rules generate the two variants of self-signalling.

The first, face-value rule assumes that the inferential mechanism operates without awareness of diagnostic motivation. The updated inferences, p(θ | x), are then based on the assumption that an action reveals the characteristic that maximizes only the outcome-utility component of total utility, ignoring the diagnostic component. Formally, this corresponds to the requirement that p(θ | x) > 0 implies u(x, θ) ≥ u(y, θ), for any other choice y. That is, by choosing x I demonstrate deep beliefs such that x maximizes standard expected utility given these deep beliefs (with ties resolved by Bayes' rule). There is no discounting for diagnostic motivation. Diagnostic utility would be experienced as an unintentional byproduct of choice, not something that consciously affected choice.

The second, rational rule assumes full awareness of the self-signalling motive expressed in equation (3.1). p(θ | x) must then fully reflect the fact that actions are motivated by the anticipated inferences that flow from them. The signalling value of an ostensibly virtuous action is thereby reduced, or ‘discounted’, for diagnostic motivation. Formally, this corresponds to the requirement that p(θ | x) > 0 implies V(x, θ) ≥ V(y, θ), for any other choice y. This carries to a logical conclusion the basic idea in self-perception theory (Bem 1972), namely, that the process of inferring underlying beliefs and desires from external behaviour is the same irrespective of whether the inferences pertain to someone else or to ourselves. Just as we might discount someone else's good behaviour as being due only to a desire to impress, so too we could discount our own behaviour for ulterior motives, according to the true interpretation assumption.⁵
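To see how the two rules generate different posteriors, here is a toy sketch (our own construction; the binary action set, outcome utilities, uniform prior and λ are all assumed). The face-value rule restricts the posterior to types for which the action maximizes outcome utility alone; the rational rule requires a fixed point in which the supports and the expectations they induce are mutually consistent:

```python
# Toy sketch of the face-value and rational inference rules; everything
# numeric here is an illustrative assumption.

thetas = [0.2, 0.5, 0.8]          # deep-belief types (prob. that 'm' pays off)
actions = ["m", "f"]
lam = 1.0

def u(x, th):
    # assumed outcome utility: 'm' pays off with probability th, 'f' with 1 - th
    return th if x == "m" else 1 - th

def posterior_mean(support):
    # uniform prior over types, restricted to the support
    return sum(support) / len(support) if support else 0.0

# face-value rule: p(theta | x) > 0 only for types where x maximizes u alone
fv_support = {x: [th for th in thetas
                  if all(u(x, th) >= u(y, th) for y in actions)]
              for x in actions}

# rational rule: iterate to a fixed point where supports are defined by total
# utility V(x, theta) = u(x, theta) + lam * E(theta | x)
supports = {x: list(thetas) for x in actions}
for _ in range(20):
    E = {x: posterior_mean(supports[x]) for x in actions}
    V = lambda x, th: u(x, th) + lam * E[x]
    supports = {x: ([th for th in thetas
                     if all(V(x, th) >= V(y, th) for y in actions)]
                    or supports[x])           # keep old support if none qualify
                for x in actions}

print(fv_support)   # {'m': [0.5, 0.8], 'f': [0.2, 0.5]}
print(supports)     # rational supports after discounting
```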

Now we can indicate how the model resolves the dynamic paradox of self-deception. Recall that the paradox centres on the question whether the attempt to self-deceive destroys the credibility of the resulting belief. The paradox disappears if there is consistency between the choice of x as a function of θ, and the inference about θ as a function of the observed x. This is what the equilibrium requires: the experienced beliefs p(θ | x) place positive probability only on those characteristics θ that maximize utility in light of p(θ | x)—total utility for the rational variant, or outcome utility for the face-value variant. Regardless of which inferential rule is used, the beliefs experienced following the self-deceptive action will be consistent with the level of insight one has into one's tendency to self-deceive.

Self-deception is attempted whenever a person selects an action that does not maximize u(x, θ); the attempt is successful to the extent that p(θ | x) changes relative to p(θ). Which situation obtains depends crucially on awareness. With face-value interpretations, self-deception, if attempted, always succeeds. There is no discounting for self-deceptive motivation. In contrast, rational interpretations lead to a discounting of self-deceptive actions and statements. The crucial point, however, is that discounting does not eliminate the motive to self-signal, even in the extreme case where the self-deceptive statement has no self-credibility. Intuitively, this is because discounting affects positive and negative statements asymmetrically. Self-serving statements and predictions may be weakly believed or not believed at all, while the pessimistic ones may remain totally credible. For example, ‘I will finish my dissertation on schedule’ may provide little reassurance that the dissertation will be finished. However, the opposite statement, that ‘I will not finish my dissertation on schedule’, is clear evidence that the dissertation will indeed not be finished. In that case, a positive statement becomes mandatory not because it will be believed, but because of fear of the all-too-credible power of a negative statement. The function of the positive statement is not to convince but merely to preserve uncertainty about deep beliefs.

The self-signalling model allows, therefore, for two modes of self-deception. In the first mode, self-deceptive statements lead to changes in experienced belief, which is consistent with the traditional understanding of self-deception. In the second mode, self-deceptive statements have a ritualistic quality, leaving little or no trace on experienced belief. One might call this an ideological or ‘personal-correctness’ mode, by analogy with political correctness in the social domain.⁶ A ‘correctness regime’—whether personal or social/political—is characterized by rigid standards of expression and an intolerance of minor deviations from ‘official belief’. But in neither case is public conformity solid evidence of underlying support or conviction. This residual uncertainty about deep belief may be the source of the defensiveness and touchiness associated with self-deception.

5. A self-deception experiment

Much of the lay interest in self-deception derives from its alleged destructive consequences, from the feeling that people engage in self-deception in spite of the evident harm. Yet, the issue of cost is rarely addressed in experiments on self-deception, or in experiments on motivated reasoning (for an exception in the context of negotiations, see Babcock & Loewenstein 1997). It is generally considered sufficient to show that a particular manipulation biases judgements away from the truth. Subjects generally do not suffer any loss as a result of their experimentally induced self-deception.

A second unresolved issue is the link between awareness and self-deception. Indeed, the conceptual distinction between attempted and successful self-deception is not always observed. The impact of awareness is shown by an intriguing subsidiary result reported by Quattrone and Tversky. In the debriefing to the main experiment, they found that a significant minority of subjects acknowledged trying to influence the test after the fact, and were pessimistic about their heart condition. These subjects were evidently trying to self-deceive, but were not successful in the attempt.

These two issues motivate the study that we now describe. The specific experimental setting is also meant to capture some of the characteristics present in the dissertation incident. If one were to abstract from the details, these characteristics could be expressed as follows:

  • There is a remote, important goal, such as the success of a research programme or dissertation.
  • Interim signs of progress arrive regularly. They are ambiguous and require explicit assessment.
  • There are costs to providing over-optimistic assessments, but these costs will only be revealed at the end of the enterprise.
  • While optimistic assessments of interim progress may provide momentary psychological relief, they do nothing to increase the chances that the goal will actually be achieved. There are no benefits of the ‘self-fulfilling prophecy’ kind.

Self-signalling implies that if the desire for good news is strong enough, it will bias interim assessments even if such biasing reduces overall chances of achieving the long-run goal. Moreover, we should observe the bias even if the judgemental task is novel, and incentives purely financial, i.e. unrelated to any chronic self-esteem concerns that subjects might bring to the laboratory. In other words, we should be able to generate self-deception repeatedly, reliably, and with arbitrary stimuli and incentives.

(a) Procedure

The subjects were 85 students at Princeton University, recruited through PLESS, the Princeton Experimental Economics Lab. The experiment involved many repetitions of a difficult categorization and prediction task; the ‘large remote goal’ was a chance of winning a $40 bonus if their overall performance was exceptionally good, according to criteria described below.

The experiment had two phases. In the first phase, subjects saw a series of 100 Korean characters on the computer screen and, following the presentation of each character, were asked to classify it as more ‘male-like’ or ‘female-like’ in appearance. Individuals who had some familiarity with Korean characters were excluded from the study. The subjects therefore could only view the characters as abstract figures. They were given no special instructions about how to make this judgement, except to try to use their intuition and to take into account the entire configuration of the sign. Following each classification, they also rated their confidence on a five-point scale.

To create incentives for careful responding, they were told (truthfully) that there is a correct answer for each sign, determined by the majority opinion of a group of previously tested subjects. They were told nothing about the composition, size, or incentives of this group, except that it was given the same instructions to use intuition and take into account the entire configuration of the sign.

Having been informed about the consensus-based answer key, subjects were told that they would receive $0.02 for each correct binary gender classification, with correctness defined according to this key (there were no separate incentives for confidence ratings). In economic terms, the incentives corresponded to a ‘beauty contest’ game, where the winning answer is the one that matches majority opinion. Importantly, subjects never received any feedback on the accuracy of their classifications. While deliberately ambiguous, these instructions nevertheless generated considerable agreement in classifications (60–65% on average). The sorting largely conformed to conventional stereotypes; for example, ‘female-like’ signs were more likely to contain circles or numerous smaller diagonal strokes. Examples of signs eliciting high consensus are shown in figure 1.

Figure 1. Examples of four signs judged to be more female-like (a) or more male-like (b) by a clear majority of respondents. There was no significant bias towards one or the other gender category in C1 or C2 classifications. However, there was a slight bias towards male anticipations: 51.9% for male versus 48.1% for female (p < 0.001 by χ²-test). Subjects' gender (41% female, 59% male) had no impact on classifications or anticipations.

The sole purpose of the initial classification in phase I was to create a subjective answer key, one for each participant, capturing that participant's best guess of how the peer group would assign gender. These answers could then be compared against subsequent classifications under incentive conditions designed to promote self-deception.

In phase II, subjects encountered the same set of signs in a different order, and were again asked to classify them according to gender (and to rate confidence on the same five-point scale). However, at the beginning of each trial, before the sign was displayed, subjects were asked to anticipate (by pressing the M or F key) whether the next sign would be more male-like or female-like. Because the signs arrived in random order, the gender of the next sign was unpredictable, and subjects were forced to guess blindly. As in phase I, each correct response (anticipation and classification) was credited with $0.02, with the total only revealed at the end of the experiment. In summary, a subject who somehow managed to respond with perfect accuracy would receive $2 in phase I and $4 in phase II ($2 for the 100 perfect anticipations and $2 for the perfect classifications).

This incentive structure was set up to generate a potential motive for self-deception. Suppose, for example, that a participant anticipated that the next sign would be ‘male’. If the next sign had a more female-like shape, then the participant would face a dilemma, namely, whether to acknowledge the anticipation error or to reinterpret the sign as in fact looking more male.

To modulate the strength of the self-deception motive, we added to these piece-rate accuracy incentives an additional bonus of $40, which depended on overall performance relative to other subjects. The criteria for assigning the bonus differed across the two treatment groups. In the classification bonus group, the bonus was reserved for the top three subjects according to ex-post classification accuracy in phase II. In the anticipation bonus group, it was reserved for the top three subjects according to anticipation accuracy. As a result, the motive for self-deceptive (i.e. anticipation-confirming) classifications was relatively weaker in the classification bonus condition and relatively stronger in the anticipation bonus condition.

We refer to this as a self-deception ‘motive’ rather than ‘incentive’ because the experiment in fact provides no financial incentives for self-deception. A subject who indulges in self-deceptive classifications will not thereby increase the actual accuracy of his or her anticipations, but will probably decrease the accuracy of classifications. Hence, a self-deceptive response pattern in the anticipation bonus condition purchases spurious psychological benefits (the feeling that one has a higher chance of winning the $40 bonus) with real financial costs.

(b) Predictions

These benefits would not appear in the analysis if one applied standard decision theory to the second classification decision. However, they do figure in the self-signalling equation. Suppose that the subject has anticipated that the next sign would be male. Upon observing the sign, she has to decide whether to classify it as male (m) or female (f). The financial rewards of either response depend on actual gender, whether the sign (S) is male (S = M) or female (S = F), and are shown in the decision matrix below, where a is the reward for correct anticipation and c the reward for correct classification (table 1).

Table 1.

                 S = M    S = F
  classify m     a + c    0
  classify f     a        c

The payoff matrix for the classification response, following an anticipation that the sign will be male. The reward for correct classification is c, while the reward for correctly having anticipated that the sign will be male is a. The terms a and c include both the $0.02 piece-rate payment for accuracy and any subjective impact on the expectation of winning the $40 bonus. In the classification bonus condition, the bonus increases the value of c, while in the anticipation bonus condition, it increases the value of a.

The subject gets credit for classifying correctly, but also wishes to believe that the stimulus is male, to validate the correctness of the preceding anticipation. Given these incentives, the self-signalling equation (3.2) yields the utilities for the two responses:

V(m, θ) = θ_M (a + c) + λ a E(θ_M | x = m),
V(f, θ) = θ_M a + (1 − θ_M) c + λ a E(θ_M | x = f).

Previously, we mentioned two rules for specifying the inferences that a person might draw from her own actions. With face-value interpretations, the subject falsely believes that she is not affected by diagnostic considerations, and therefore assumes that if she classified the stimulus as male, this must mean that she indeed believes deep down that θ_M > θ_F, which is to say that θ_M > 0.5. This implies that E(θ_M | x = m) = E(θ_M | θ_M > 0.5) > E(θ_M | θ_M < 0.5) = E(θ_M | x = f).

With rational interpretations, the situation is more complex, because awareness of diagnostic motivation discounts the signal; the subject appreciates that there is now a lower bar θ* < 0.5 for classifying the sign as male, and consequently that E(θ_M | x = m) = E(θ_M | θ_M > θ*). However, discounting preserves the basic directional implication, namely, that a male classification provides positive information that the sign was in fact male, i.e. E(θ_M | x = m) = E(θ_M | θ_M > θ*) > E(θ_M | θ_M < θ*) = E(θ_M | x = f).

V(m, θ) − V(f, θ) = c (2θ_M − 1) + λ a [E(θ_M | x = m) − E(θ_M | x = f)].

Therefore, under rational interpretations, if the weight of diagnostic utility exceeds the threshold λ > 2c/a, the only possible response is the confirming one, even though this response has no impact on experienced beliefs.

With either face-value or rational interpretations, the diagnostic utility of a male categorization, following a male anticipation, should be positive. The model thus predicts that anticipation-confirming classifications will increase with anticipation incentives (a) and decrease with classification incentives (c).
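A worked numeric instance of this prediction may be useful. All numbers here are our own illustrative assumptions (not estimates from the experiment), and the diagnostic value of believing the anticipation correct is taken to be a·E(θ_M | x), as in the utilities above:

```python
# Worked numeric instance of the prediction. Every value is an assumption
# chosen for illustration, not data from the experiment.
a, c = 0.10, 0.02       # stakes on a correct anticipation / classification
lam = 1.0               # weight of diagnostic utility
theta_m = 0.4           # deep belief that the sign is male (it looks female-ish)
E_m, E_f = 0.75, 0.25   # assumed E(theta_M | x = m) and E(theta_M | x = f)

V_m = theta_m * (a + c) + lam * a * E_m                 # confirm the anticipation
V_f = theta_m * a + (1 - theta_m) * c + lam * a * E_f   # acknowledge the error

# V_m - V_f = c*(2*theta_m - 1) + lam*a*(E_m - E_f) = -0.004 + 0.05 = 0.046 > 0:
# the anticipation-confirming 'male' classification wins even though the sign
# looks more female (theta_m < 0.5). Raising c (classification bonus) or
# lowering a (anticipation stake) flips the sign of the difference.
print(V_m, V_f)
```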

(c) Results

To summarize, participants made five responses in connection with each sign: an initial classification in phase I (C1) followed by a confidence rating (R1), and in phase II a blind anticipation (A) followed by a second classification (C2) and confidence rating (R2). The responses can be mapped onto the theoretical variables in the following way. C2 corresponds to x. If we let θ_C denote the probability that a classification is correct, then R1 is an ordinal indicator of the prior expectation that C1 is correct, E(θ_C1) = Σ_θ θ_C1 p(θ_C1), and R2 is an indicator of the posterior expectation E(θ_C2 | C2) = Σ_θ θ_C2 p(θ_C2 | C2). Therefore, the difference R2 − R1 will be our proxy measure of diagnostic utility.

Collapsing across the male/female categories and ignoring the confidence ratings, trials can be sorted into one of four types. A consistent trial corresponds to the pattern C2 = A = C1, where all three responses coincide. An honest pattern corresponds to C2 ≠ A and C2 = C1; that is, the subject acknowledges that the preceding anticipation was incorrect, and confirms the original gender classification from phase I. A self-deceptive pattern corresponds to C2 = A ≠ C1; that is, the sign changes gender so as to make the preceding anticipation seem correct. An inconsistent pattern corresponds to C2 ≠ A = C1; that is, the subject changes his or her mind about the gender even though the anticipation was consistent with the original classification. The frequency of inconsistent patterns provides a baseline for assessing whether there is statistically significant self-deception, or whether the trials labelled as self-deceptive reflect simple variability in classifications.
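The four-way sorting is mechanical, so a small sketch may be clearer than prose (the response coding, 'M'/'F', is ours):

```python
# Sketch of the four-way trial sorting described above.

def trial_pattern(c1, a, c2):
    if c2 == a and c2 == c1:
        return "consistent"       # C2 = A = C1
    if c2 != a and c2 == c1:
        return "honest"           # anticipation error acknowledged
    if c2 == a and c2 != c1:
        return "self-deceptive"   # sign 'changes gender' to fit the anticipation
    return "inconsistent"         # C2 != A = C1, the error baseline

# the MFF pattern from table 2: male at C1, female anticipated, female at C2
assert trial_pattern("M", "F", "F") == "self-deceptive"
```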

Table 2 presents the breakdown of trial patterns by treatment group. Two results stand out. First, the proportion of self-deceptive patterns is greater than the proportion of inconsistent patterns, which define the error baseline. Hence, the second classification judgement is influenced by the preceding anticipations at the aggregate level.⁷ Second, this impact of anticipations is greater in the anticipation bonus condition, relative to the classification bonus condition. The table provides two measures of impact, the absolute and the relative per cent increase in self-deceptive patterns over the inconsistent baseline. Depending on which measure one adopts, the gap between the self-deceptive and inconsistent shares is two to three times greater in the anticipation bonus condition. This confirms that the impact of anticipations on subsequent classifications is controlled in large measure by the financial incentives.

Table 2.

Distribution of trial patterns for the two different treatment groups. The labelling MFF, for example, refers to an initial classification of male in phase I, and an anticipation of female followed by a classification as female in phase II.

Figure 2 displays the self-deceptive and inconsistent pattern percentages for all 85 subjects, indicating treatment by colour. The impact of treatment is evident here as well. This can be confirmed statistically by counting the number of subjects with significant self-deception at the individual level, and then comparing between groups. A logistic regression of C2 against C1 and A simultaneously provides a sensitive individual-level test (the inclusion of C1 in the regression controls for bias towards one or the other gender classification, as well as for chance correlation between C1 and A). In the absence of self-deception, the coefficient on A should be non-significant. In the classification bonus treatment, 53 per cent of subjects are significantly self-deceptive at the 0.05 level, and 27 per cent at the 0.001 level; these percentages rise to 73 and 45 per cent, respectively, in the anticipation bonus condition. Comparing treatments, the difference in proportions is significant (χ² = 5.93, p < 0.02 for the 0.05 cutoff; χ² = 3.13, p < 0.08 for the 0.001 cutoff).
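A sketch of this individual-level test on simulated data (statsmodels' Logit is one standard way to run it; the effect sizes below are invented for the demonstration, not taken from the paper):

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the individual-level test: logistic regression of C2 on C1 and A.
# A significant coefficient on A indicates self-deception; C1 controls for
# gender bias and for chance C1-A correlation. Data are simulated.

rng = np.random.default_rng(0)
n = 100
c1 = rng.integers(0, 2, n)            # phase I classification (0 = F, 1 = M)
ant = rng.integers(0, 2, n)           # blind anticipation
logit = -0.5 + 2.0 * c1 + 1.0 * ant   # assumed data-generating process
c2 = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([c1, ant]))
fit = sm.Logit(c2, X).fit(disp=False)
print(fit.params)       # third entry is the coefficient on A
print(fit.pvalues[2])   # compare against the 0.05 and 0.001 cutoffs
```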

Figure 2. The impact of incentive condition on self-deception. The per cent of inconsistent patterns gives the baseline for assessing self-deception. The majority of subjects with strong self-deception come from the anticipation bonus condition. The ovals are approximate (green circles, subjects with the $40 classification bonus; red circles, subjects with the $40 anticipation bonus).

In what follows, we will refer to subjects with self-deception at the 0.001 level as the high self-deception (SD) group (N = 30), and those with self-deception at only the 0.05 level as the moderate SD group (N = 20).⁸

There are no indications that self-deception is associated with lower effort; if anything, the relationship runs the other way. The average accuracy at C1 (according to the peer group answer key) increases from 61.5 to 63.2 and 66.2 per cent for the non-, moderate and high SD groups (the difference between high and none is significant, t(63) = 2.10, p < 0.05, as is the difference between high and the rest, t(83) = 2.00, p < 0.05). However, this difference disappears at C2, where the average accuracies are 62.3, 63.8 and 62.8 per cent. The change in accuracy is significant for the high SD group only (matched-pair t-test, t(2973) = 3.38, p < 0.0005). It appears that the high SD subjects exhibit greater motivation and engagement with the task initially, but their advantage disappears in the second phase, as a result of self-deception.

(d) What psychological benefits are obtained for the reduction in objective accuracy?

According to self-signalling theory there is a diagnostic utility benefit, which we cannot measure directly but which should be revealed through the confidence ratings that follow each classification response. The benefit is modulated by awareness: it should be higher with face-value interpretations, and lower or nonexistent with rational interpretations. A plausible proxy for awareness is the overall rate of anticipation-confirming responses.⁹ These rates vary in a predictable manner across the groups: 53 per cent (non-SD), 63 per cent (moderate SD), and 76 per cent (high SD).¹⁰ High confirmation rates ought to raise doubts about the integrity of the confirming response. The subjects presumably understand that their anticipations are random guesses, and that being correct three times out of four is simply not sustainable.

The average confidence ratings (1–5 scale) are not significantly different for the three groups, at 3.08, 3.32 and 2.92, respectively, but are directionally consistent with the hypothesis that the benefits of self-deception peak at moderate levels. Moreover, among subjects with statistical self-deception (pooling the moderate and high groups), the correlation between confidence and confirmation rate is significantly negative (r = −0.40, t(48) = −3.05, p < 0.005).

A more appropriate indicator of diagnostic utility is the difference between the second and the first confidence ratings, R2 − R1. This removes variation in the intrinsic confidence that subjects might have with respect to the classification task, as well as variation in how they use the rating scale. On normative grounds, one would expect confidence to increase following C2 = C1, suggestive of a less ambiguous sign, and no change in confidence following C2 = A, because the anticipation has no information value. What one observes, instead, is that confirming responses (C2 = A) increase confidence and disconfirming responses decrease it (matched-pairs, t(82) = +1.66 for C2 = A, p < 0.05; t(82) = −1.96, p < 0.03 for C2 ≠ A). In contrast, classification-confirming responses (C2 = C1) have no impact on confidence.
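For readers who want the mechanics, this is the shape of the matched-pairs comparison (simulated per-subject means; the effect sizes are invented, not the paper's data):

```python
import numpy as np
from scipy import stats

# Sketch of the confidence-change analysis: R2 - R1 as a proxy for diagnostic
# utility, compared on confirming (C2 = A) and disconfirming (C2 != A) trials
# with matched-pairs t-tests. Arrays are simulated stand-ins.

rng = np.random.default_rng(1)
n = 83                                          # one mean pair per subject
r1_conf = rng.normal(3.1, 0.4, n)               # mean R1 on confirming trials
r2_conf = r1_conf + rng.normal(0.05, 0.3, n)    # small assumed boost after C2 = A
r1_disc = rng.normal(3.1, 0.4, n)
r2_disc = r1_disc + rng.normal(-0.06, 0.3, n)   # assumed drop after C2 != A

print(stats.ttest_rel(r2_conf, r1_conf))   # confirmation: does confidence rise?
print(stats.ttest_rel(r2_disc, r1_disc))   # disconfirmation: does it fall?
```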

Looking at the three groups separately, the moderate SD group experiences an increase in confidence following confirmation (t(19) = +2.11, p < 0.05), the high SD group experiences a marginally significant decrease in confidence following disconfirmation (t(27) = −1.76, p < 0.05 one-tailed), and the non-SD group does not register significant changes in confidence following either type of response. Hence, one could say that the moderate SD group is motivated by the benefits of confirmation, and the high SD group by the costs of disconfirmation, which is consistent with discounting of the confirming judgements as predicted by the model.

The net benefits of confirmation are highest at moderate rates, as shown in figure 3, which displays a quadratic regression of change in confidence on confirmation rate. As expected, the quadratic term is significant, but only following a confirming response. According to the estimated fit, the boost in confidence reaches a maximum at about a 65 per cent confirmation rate, which is presumably high enough to have impact but not so high as to raise suspicion. This relationship is driven by the changes in confidence experienced after a confirmation, and specifically among subjects in the anticipation bonus condition.
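The peak-finding step amounts to fitting a parabola and reading off its vertex; here is a sketch with simulated data (the true peak is planted at 0.65 purely for illustration):

```python
import numpy as np

# Sketch of the quadratic fit behind figure 3: regress per-subject confidence
# change (after confirming responses) on confirmation rate and locate the
# vertex of the fitted parabola. Data are simulated.

rng = np.random.default_rng(2)
rate = rng.uniform(0.4, 0.95, 85)                        # confirmation rates
boost = -2.0 * (rate - 0.65) ** 2 + 0.1 + rng.normal(0, 0.05, 85)

b2, b1, b0 = np.polyfit(rate, boost, 2)                  # quadratic fit
peak = -b1 / (2 * b2)                                    # vertex at -b1/(2*b2)
print(f"estimated peak confirmation rate: {peak:.2f}")   # ~0.65 by construction
```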

Figure 3. Average change in the 1–5 confidence rating (R2 − R1) following a disconfirming (C2 ≠ A, a) or confirming response (C2 = A, b), plotted by subject against the subject's confirmation rate. The solid line is the best-fitting quadratic, with shaded 95% confidence interval. The linear term is not significant in either (a) or (b); the negative quadratic term is significant in (b) (p < 0.002). If the analysis is conducted on the difference between (a) and (b) (which corresponds to the diagnostic utility of confirming rather than disconfirming), the negative quadratic remains highly significant (t = −3.47, p < 0.001), and the linear term becomes positively significant (t = +2.52, p < 0.02).

Response time data provide additional evidence of different processing at high self-deception levels. Figure 4 displays C2 response time as a percentage of C1 response time, by trial pattern and level of self-deception. This nets out differences in response time between subjects, as well as stimulus-specific differences due to the differential difficulty of classifying different stimuli.
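
A sketch of this normalization (hypothetical column names), producing the group-by-pattern means plotted in figure 4:

    import pandas as pd

    def rt_by_pattern(trials: pd.DataFrame) -> pd.DataFrame:
        # C2 response time as per cent of C1 response time for the same stimulus,
        # averaged by self-deception group and trial pattern
        t = trials.assign(rt_pct=100.0 * trials["rt_C2"] / trials["rt_C1"])
        return t.pivot_table(index="sd_group", columns="pattern",
                             values="rt_pct", aggfunc="mean")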

Figure 4. C2 response times expressed as per cent of C1 response times, plotted separately for subjects with high, moderate and no self-deception, and for different types of trials produced by the C2 response. Levels connected by the same letter are not significantly different (p < 0.05, t-test). Blue bar, no self-deception (n = 35); green bar, moderate SD, 0.001 < p < 0.05 (n = 20); red bar, high SD, p < 0.001 (n = 20).

Subjects without statistical self-deception show no difference in C2 response times as a function of trial pattern. Moderate self-deception is associated with longer C2 response times on honest and inconsistent trials. The pattern that clearly emerges with high self-deception subjects is fast confirming response times, that is, whenever C2 = A.

To better understand the significance of this, we computed individual subject correlations between change in log-response time and change in confidence. Normally, one would interpret response time as an (inverse) indicator of response confidence. However, a fast response time could also indicate higher motivation without confidence, or a desire to move away quickly from a problematic stimulus to the next task.

The fraction of subjects showing an anomalous positive coefficient, implying lower confidence for faster response times, is higher (non-significantly) in the high SD group (27%, compared with 20% and 14% for the moderate SD and non-SD). The difference in correlation coefficients between the high SD group and the remaining subjects is significant ( t (83) = 2.40, p < 0.02), as is the correlation between the correlation coefficients and individual confirmation rates (Spearman ρ = 0.21, p < 0.07). The standard relationship between fast response time and confidence appears to deteriorate at high confirmation rates. Subjects with high SD have a higher propensity to confirm and they confirm more quickly, but these faster response times are no longer a reliable indicator of confidence.
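
The per-subject coupling between speed and confidence could be computed roughly as below (hypothetical column names; Pearson correlations per subject, then a two-sample comparison of the high SD group against everyone else):

    import numpy as np
    from scipy.stats import pearsonr, ttest_ind

    def rt_confidence_coupling(trials, group_of):
        # group_of maps a subject id to "high SD", "moderate SD" or "non-SD"
        coefs = {}
        for subj, t in trials.groupby("subject"):
            d_logrt = np.log(t["rt_C2"]) - np.log(t["rt_C1"])
            coefs[subj] = pearsonr(d_logrt, t["R2"] - t["R1"])[0]
        high = [r for s, r in coefs.items() if group_of[s] == "high SD"]
        rest = [r for s, r in coefs.items() if group_of[s] != "high SD"]
        return ttest_ind(high, rest)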

This suggests that the self-deception we observe here is probably not a biased sifting of perceptual evidence. A sifting of evidence would presumably prolong response time on self-deceptive trials, which is the opposite of the pattern we observe in figure 4. Rapid response times associated with self-deception suggest suppression of evidence, rather than a second look at the evidence.

(e) Discussion

Two main findings emerge from the experiment. First, it is possible to induce costly self-deception in a repeated decision task, by presenting subjects with the prospect of a large and essentially non-contingent financial bonus. Actions that provide favourable news about the chances of winning the bonus are selected more often than they should be. The extent of this self-deception is in turn sensitive to the financial parameters, as predicted by the self-signalling model. A majority of subjects exhibit some statistical self-deception, but some avoid it altogether, even with high incentives.

Second, among subjects with self-deception there is great variation in the extent of statistical bias towards the diagnostically favourable response. Subjects with moderate levels of bias appear to derive some psychological benefit from self-deception, reflected in their higher confidence ratings. In contrast, subjects with the most severe bias show no improvement in confidence relative to subjects without any bias at all.

These results strongly point to the conclusion that differences in levels of bias are associated with differences in awareness that one is biased. While we do not measure awareness directly, common sense suggests that a self-assessed success rate of 60 per cent (rather than the unbiased 50%) can sneak by under the radar, like small rates of cheating (von Hippel et al. 2005; Mazar et al. 2008); however, a rate of, say, 80 per cent is definitely too good to be true. Confirming responses will deliver the psychological benefit in confidence only if the overall bias in confirmation rate does not stray outside of some reasonable margin.

Granting this, one still needs to explain the extravagant bias observed in so many subjects. As a group, these subjects are certainly not careless, as shown by their greater accuracy in the initial phase of the experiment, before the bonus opportunity is revealed. If they are strongly motivated to win, they will also appreciate that they have to do exceptionally well to have any chance; being right a little more than half the time is not enough. In the absence of feedback, a high self-estimated success rate, while not necessarily credible, preserves some possibility of winning the bonus, while a more candid, average rate would subjectively rule it out. This would explain the briskness of the confirming responses, and the lack of any boost in confidence following them. 11

An interesting question is whether subjects ‘see’ the characters differently, as a result of their anticipations. This would be consistent with the findings on motivated perception of ambiguous letters and animal drawings by Balcetis & Dunning (2006) , and with the Berns et al . (2005) fMRI replication of Asch's classic experiment on conformity. A potentially important difference between our paradigm and that of Balcetis and Dunning is that in our case the desired response category was changing rapidly from one trial to the next. In that sense, the task frustrated the development of a stable bias towards one or the other category. (We also find little evidence that more ambiguous signs are more vulnerable to self-deceptive reclassification, whether ambiguity is measured by initial confidence rating, initial response time, or concordance across subjects.) Therefore, while we cannot address the perceptual question conclusively, it seems that some other mechanism must be responsible for the very highest rates of confirmation observed in our study.

6. Concluding remarks

We have proposed here a theory of genuine self-deception, that is, self-deception conceived strictly on the interpersonal model. The equations of the model could apply equally well to the interaction of two individuals, each with distinct beliefs, actions, and objectives, with one individual attempting to deceive the other. From the equations, one cannot tell whether this is a model of self-deception or just plain deception. In the presentation of the theory, we have not emphasized the interpersonal interpretation, because the postulated personae or ‘selves’ are necessarily speculative. In this concluding section we will comment on the interpersonal interpretation in more detail. This will clarify the psychological architecture implicit in the model and allow us to comment briefly on the potential evolutionary benefits of this architecture.

At the formal level, the self-signalling model represents the interaction between two agents or ‘interests’ (Ainslie 1986), whom we may identify as actor and interpreter. The actor has private motives that are hidden from the interpreter. He makes a choice, potentially revealing something about these motives. The interpreter observes the choice, infers the underlying motive, and then grades the motive according to a preset formula. The grade matters to the actor: it enters into his utility function as the diagnostic utility component. It does not matter to the interpreter, who does not care what grade he gives as long as it is the correct grade. So there is conflict, but it is not a conflict over ultimate objectives or ongoing behaviour, only over interpretations of actions and underlying motives. The actor wishes to extract a good grade, if possible a better grade than he deserves, while the interpreter strives for objectivity.
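
To make the actor/interpreter structure concrete, here is a toy numerical sketch. All payoffs, the prior, and the candidate strategy are invented for illustration; the interpreter grades a choice by the Bayesian posterior expectation of the hidden disposition theta, and the actor adds that grade, weighted by lam, to his ordinary outcome utility. (In equilibrium the assumed strategy would also have to be consistent with the incentives it creates; the sketch evaluates only one candidate profile.)

    prior = {0.0: 0.5, 1.0: 0.5}      # p(theta): low vs high disposition
    strategy = {0.0: 0.2, 1.0: 0.8}   # assumed p(x = 1 | theta)

    def grade(x):
        # Interpreter's posterior expectation E[theta | x] under the strategy
        like = {th: strategy[th] if x == 1 else 1 - strategy[th] for th in prior}
        z = sum(prior[th] * like[th] for th in prior)
        return sum(th * prior[th] * like[th] for th in prior) / z

    def actor_utility(x, theta_true, lam):
        outcome = -abs(x - theta_true)    # invented outcome utility
        return outcome + lam * grade(x)   # plus diagnostic utility (the grade)

    # With enough weight on the grade, a low-theta actor prefers the
    # high-theta action purely for its diagnostic value:
    print(actor_utility(1, 0.0, lam=2.0), actor_utility(0, 0.0, lam=2.0))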

Returning to the psychological, intrapersonal level, the same model now refers to the interaction of two optimizing modules, one responsible for behaviour selection and the other for accurate online behaviour evaluation. The interpreting module has some characteristics of a conscience or superego in that it scrutinizes behaviour impartially for underlying motivation. However, it falls short of being a conscience in lacking intrinsic values or preferences.

What might be the reason for this psychological architecture? Why split the mind into two elements and render one element ignorant? Trivers developed a provocative evolutionary rationale for self-ignorance in his theory of self-deception (Trivers 1985). According to him, we are unaware of our true motives so as to be better able to deceive others. The sincere deceiver is presumed to have an advantage in not having to pretend, in not having to hold two distinct attitudes in mind at the same time. This would be especially true of emotions, which are notoriously difficult to disguise. Complete unawareness of one's true motives would make deception of others effortless.

Trivers has in mind an unconcerned ignorance of motive. In contrast, the model developed here deals with concerned ignorance: the person is unsure about his characteristics, and precisely this is the source of worry. Uncertainty makes actions informative, and sustains diagnostic motivation. This leads to a different rationalization of self-ignorance, as a means of enhancing the motivational significance of actions.

It is notoriously hard to assess the significance of an additional day's progress, whether for a dissertation or some other remote goal. Assessed coldly, the impact of even a very good day may be negligible. Yet success requires persistence, and persistence must be rewarded before the final outcome is known. Such rewards cannot come from the outside but only from the organism itself. If the organism acquires the ability to self-reward, then it must also acquire an objective, external attitude towards its own actions. This argues for the structural separation of modules responsible for action selection from those responsible for interpreting and rewarding those actions. It also argues for denying internal information to the interpretational mechanisms. As custodian of self-reward, the interpreter should take into account the external evidence that actions would provide to an outside observer, and not the internal, corruptible evidence of feelings and intentions. The less the interpreter knows about internal parameters, the greater the chances that it will enforce objective criteria for delivering self-reward.

On this view, genuine self-deception, as opposed to mere bias, is a byproduct of this specific modular architecture. Like ordinary deception, it is an external, public activity, involving overt statements or actions directed towards an audience, whether real or imagined. Modelling this process as a signalling game, as we have done in this paper, provides benefits that we hope will be exploited further in future work. First, the formal theory raises conceptual possibilities that might otherwise be overlooked. In particular, it draws attention to the possibility of a stable state of inauthentic belief, characterized by a chronic mismatch between what a person says and what they truly believe and experience. Second, the theory motivates experimental studies, such as the one presented here. Finally, it guides search for brain mechanisms that might in principle carry out the computations required by the model.

Acknowledgements

We are grateful to Ravi Dhar, Guy Mayraz, Trey Hedden, Stephanie Carpenter, and Arnaldo Pereira-Diaz for extensive comments on the manuscript; to Dan Ariely, Jiwoong Shin, and Andreja Bubic for experimental help and advice; to the Institute for Advanced Study for hospitality and financial support; to the Psychology Department of Zagreb University for hosting a pilot study; and to Tom Palfrey and the Princeton Laboratory for Experimental Social Science for hosting the experiment reported here. We also wish to acknowledge numerous discussions with our MIT colleagues and collaborators, John Gabrieli, Richard Holton, Nina Wickens, Kristina Fanucci, Paymon Hosseini and Alexander Huang, as well as comments by seminar participants at the Robinson College (University of Cambridge) Workshop on Rationality and Emotions, the Institute for Advanced Study, GREQAM-Marseille, Sorbonne, Zurich Institute for Research in Experimental Economics, Toulouse School of Economics and Brown University, among others.

One contribution of 12 to a Theme Issue ‘Rationality and emotions’ .

1 Bach (1997) expresses this nicely: ‘For example, what makes the betrayed husband count as self deceived is not merely that his belief that his wife is faithful is sustained by a motivationally biased treatment of his evidence. He could believe this even if he had no tendency to think about the subject ever again. He counts as a self-deceiver only because sustaining his belief that his wife is faithful requires an active effort to avoid thinking that she is not. In self-deception, unlike blindness or denial, the truth is dangerously close at hand. His would not be a case of self-deception if it hardly ever occurred to him that his wife might be playing around and if he did not appreciate the weight of the evidence, at least to some extent. If self-deception were just a matter of belief, then once the self-deceptive belief was formed, the issue would be settled for him; but in self-deception it is not. The self-deceiver is disposed to think the very thing he is motivated to avoid thinking, and this is the disposition he resists’.

2 See also Levy's (2008) arguments about anosognosia for hemiplegia (denial of paralysis) as a real case of self-deception.

3 As a coauthored chapter of Bodner's (1995) doctoral dissertation.

4 It is important not to confuse self-signalling with evidential decision theory (EDT; Gibbard & Harper 1978). The decision criterion in EDT is Σ_S U(x, S) p(S|x), which resembles the second part of (2), Σ_θ u(x, θ) p(θ|x). The key difference is that the actual deep beliefs θ° do not appear in the EDT formula. From a formal standpoint, closest to the present approach is the memory-anticipation model of Bernheim & Thomadsen (2005). In their model, at time-zero the individual selects an action affecting outcomes at time-two, in light of information that she knows will later be forgotten. At interim time-one the person tries to retroactively infer this information from actions already taken, leading to anticipatory emotions about outcomes at time-two. The individual at time-zero then has a reason to take actions supportive of positive interim emotions, knowing full well that these emotions will be disappointed later. In the philosophical debate, Mele (1997) mentions this type of scenario and allows it to be a genuine, albeit rare, example of intentional self-deception; for Audi (1997) it is a distinct phenomenon, more properly termed ‘self-caused deception’.

5 We are sidestepping important details, namely: (i) What inferences follow from an action that is suboptimal for any θ and thus, strictly speaking, should not occur (this is the problem of beliefs ‘off-the-equilibrium-path’)? (ii) When does an equilibrium exist, and when is it unique? See Cho & Sobel (1990) for a general treatment of these issues.

6 For an analogous treatment of social conformity see Bernheim (1994) .

7 Note that a disconfirming response strategy (C2 ≠ A) would guarantee that one of the two responses is always correct. This would provide a hedging benefit for subjects who are risk averse at the level of a single trial. We find no evidence of hedging in the data.

8 It is interesting that the high SD group includes some subjects from the classification bonus treatment. These subjects may have been motivated by the $0.02 reward for anticipations. Alternatively, this may reflect intrinsic motivation associated with self-confirming responses (or, equivalently, to a disinclination to acknowledge error, even if the financial consequences are minor).

9 If Prob(A = C1) = 0.5, then the confirmation rate equals the combined frequency of consistent and self-deceptive trials. However, Prob(A = C1) could deviate from 0.5 through chance, or if a subject favours one category. To compensate for unequal base rates of A = C1 and A ≠ C1 we work with the corrected rate: (Prob(C2 = A|A = C1) + Prob(C2 = A|A ≠ C1))/2. The correlation between this index and the raw frequency of consistent and self-deceptive trials is +0.97, so for practical purposes we can regard them as the same.

10 They also vary between treatments: 58.5 per cent versus 66.5 per cent for classification and anticipation bonus groups, respectively, t (83) = 3.33, p < 0.002.

11 The notion that moderate levels of self-deception are beneficial for self-esteem and mental health has been debated extensively (Lockard & Paulhus 1988; Taylor & Brown 1988).

  • Ainslie G. 1986 Beyond microeconomics: conflict among interests in a multiple self as a determinant of value. In The multiple self (ed. Elster J.), pp. 133–175. Cambridge, UK: Cambridge University Press.
  • Audi R. 1985 Self-deception and rationality. In Self-deception and self-understanding (ed. Martin M. W.), pp. 169–194. Lawrence, KS: University of Kansas.
  • Audi R. 1997 Self-deception vs. self-caused deception: a comment on Professor Mele. Behav. Brain Sci. 20, 104. (doi:10.1017/S0140525X97230037)
  • Babcock L., Loewenstein G. 1997 Explaining bargaining impasse: the role of self-serving biases. J. Econ. Perspect. 11, 109–126.
  • Bach K. 1997 Thinking and believing in self-deception. Behav. Brain Sci. 20, 105. (doi:10.1017/S0140525X97240033)
  • Balcetis E., Dunning D. 2006 See what you want to see: motivational influences on visual perception. J. Pers. Soc. Psychol. 91, 612–625. (doi:10.1037/0022-3514.91.4.612)
  • Bem D. 1972 Self-perception theory. In Advances in experimental social psychology (ed. Berkowitz L.). New York, NY: Academic Press.
  • Benabou R., Tirole J. 2002 Self-confidence and personal motivation. Q. J. Econ. 117, 871–915. (doi:10.1162/003355302760193913)
  • Benabou R., Tirole J. 2004 Willpower and personal rules. J. Polit. Econ. 112, 848–887. (doi:10.1086/421167)
  • Bernheim B. 1994 A theory of conformity. J. Polit. Econ. 102, 841–877. (doi:10.1086/261957)
  • Bernheim B., Thomadsen R. 2005 Memory and anticipation. Econ. J. 115, 271–304. (doi:10.1111/j.1468-0297.2005.00989.x)
  • Berns G., Chappelow J., Zink C., Pagnoni G., Martin-Skurski M., Richards J. 2005 Neurobiological correlates of social conformity and independence during mental rotation. Biol. Psychiatry 58, 245–253. (doi:10.1016/j.biopsych.2005.04.012)
  • Bodner R. 1995 Self-knowledge and the diagnostic value of actions: the case of donating to a charitable cause. Doctoral dissertation, Sloan School, Massachusetts Institute of Technology, Cambridge, MA.
  • Bodner R., Prelec D. 1995 The diagnostic value of actions and the emergence of personal rules in a self-signaling model. In Self-knowledge and the diagnostic value of one's actions (ed. Bodner R.), ch. 2, pp. 53–67. Doctoral dissertation, Sloan School, Massachusetts Institute of Technology, Cambridge, MA.
  • Bodner R., Prelec D. 2003 Self-signaling in a neo-Calvinist model of everyday decision making. In Psychology of economic decisions, vol. I (eds Brocas I., Carrillo J.), pp. 105–126. London, UK: Oxford University Press.
  • Brocas I., Carrillo J. D. 2008 The brain as a hierarchical organization. Am. Econ. Rev. 98, 1312–1346. (doi:10.1257/aer.98.4.1312)
  • Brown J. D., Dutton K. A. 1995 Truth and consequences—the costs and benefits of accurate self-knowledge. Pers. Soc. Psychol. Bull. 21, 1288–1296. (doi:10.1177/01461672952112006)
  • Caplin A., Leahy J. 2001 Psychological expected utility theory and anticipatory feelings. Q. J. Econ. 116, 55–79. (doi:10.1162/003355301556347)
  • Cho I., Sobel J. 1990 Strategic stability and uniqueness in signaling games. J. Econ. Theory 50, 381–413. (doi:10.1016/0022-0531(90)90009-9)
  • Dawson E., Gilovich T., Regan D. T. 2002 Motivated reasoning and performance on the Wason selection task. Pers. Soc. Psychol. Bull. 28, 1379–1387. (doi:10.1177/014616702236869)
  • Ditto P. H., Lopez D. F. 1992 Motivated skepticism—use of differential decision criteria for preferred and nonpreferred conclusions. J. Pers. Soc. Psychol. 63, 568–584. (doi:10.1037/0022-3514.63.4.568)
  • Dunning D., Hayes A. 1996 Evidence for egocentric comparison in social judgment. J. Pers. Soc. Psychol. 71, 213–229. (doi:10.1037/0022-3514.71.2.213)
  • Dunning D., Leuenberger A., Sherman D. 1995 A new look at motivated inference—are self-serving theories of success a product of motivational forces? J. Pers. Soc. Psychol. 69, 58–68. (doi:10.1037/0022-3514.69.1.58)
  • Elster J. 1999 Alchemies of the mind: rationality and the emotions. Cambridge, UK: Cambridge University Press.
  • Fudenberg D., Levine D. 2006 A dual-self model of impulse control. Am. Econ. Rev. 96, 1449–1476. (doi:10.1257/aer.96.5.1449)
  • Funkhouser E. 2005 Do the self-deceived get what they want? Pacific Phil. Q. 86, 295–312.
  • Gibbard A., Harper W. 1978 Counterfactuals and two kinds of expected utility. In Foundations and applications of decision theory (eds Hooker C. A., Leach J. J., McClennen E. F.), pp. 125–162. Dordrecht, The Netherlands: Reidel.
  • Gilovich T. 1991 How we know what isn't so: the fallibility of human reason in everyday life. New York, NY: Free Press.
  • Ginossar Z., Trope Y. 1987 Problem solving in judgment under uncertainty. J. Pers. Soc. Psychol. 52, 464–474. (doi:10.1037/0022-3514.52.3.464)
  • Gottlieb D. 2009 Imperfect memory and choice under risk. Doctoral dissertation, Department of Economics, Massachusetts Institute of Technology.
  • Gur R. C., Sackeim H. A. 1979 Self-deception: a concept in search of a phenomenon. J. Pers. Soc. Psychol. 37, 147–169. (doi:10.1037/0022-3514.37.2.147)
  • Holton R. 2000 What is the role of the self in self-deception? Proc. Aristotelian Soc. 101, 53–69.
  • Koszegi B. 2006a Ego utility, overconfidence, and task choice. J. Eur. Econ. Assoc. 4, 673–707.
  • Koszegi B. 2006b Emotional agency. Q. J. Econ. 121, 121–156.
  • Kunda Z. 1987 Motivated inference: self-serving generation and evaluation of evidence. J. Pers. Soc. Psychol. 53, 636–647. (doi:10.1037/0022-3514.53.4.636)
  • Kunda Z. 1990 The case for motivated reasoning. Psychol. Bull. 108, 480–498. (doi:10.1037/0033-2909.108.3.480)
  • Levy N. 2008 Self-deception without thought experiments. In Delusions and self-deception: affective and motivational influences on belief-formation (eds Bayne T., Fernández J.), pp. 227–242. Hove, UK: Psychology Press.
  • Lockard J., Paulhus D. 1988 Self-deception: an adaptive mechanism? Englewood Cliffs, NJ: Prentice-Hall.
  • Mazar N., Amir O., Ariely D. 2008 The dishonesty of honest people. J. Market. Res. 45, 633–644. (doi:10.1509/jmkr.45.6.633)
  • Mele A. R. 1997 Real self-deception. Behav. Brain Sci. 20, 91–136.
  • Mele A. R. 1998 Motivated belief and agency. Phil. Psychol. 11, 353–369. (doi:10.1080/09515089808573266)
  • Mijovic-Prelec D., Shin L. M., Chabris C. F., Kosslyn S. M. 1994 When does ‘no’ really mean ‘yes’? A case study in unilateral visual neglect. Neuropsychologia 32, 151–158. (doi:10.1016/0028-3932(94)90002-7)
  • Norton M. I., Vandello J. A., Darley J. M. 2004 Casuistry and social category bias. J. Pers. Soc. Psychol. 87, 817–831. (doi:10.1037/0022-3514.87.6.817)
  • Prelec D., Bodner R. 2003 Self-signaling and self-control. In Time and decision (eds Loewenstein G., Read D., Baumeister R. F.), pp. 277–300. New York, NY: Russell Sage Press.
  • Pronin E., Gilovich T., Ross L. 2004 Objectivity in the eye of the beholder: divergent perceptions of bias in self versus other. Psychol. Rev. 111, 781–799. (doi:10.1037/0033-295X.111.3.781)
  • Pyszczynski T., Greenberg J. 1987 Toward an integration of cognitive and motivational perspectives on social inference—a biased hypothesis testing model. Adv. Exp. Soc. Psychol. 20, 297–340. (doi:10.1016/S0065-2601(08)60417-7)
  • Quattrone G., Tversky A. 1984 Causal versus diagnostic contingencies: on self-deception and on the voter's illusion. J. Pers. Soc. Psychol. 46, 237–248. (doi:10.1037/0022-3514.46.2.237)
  • Sanitioso R., Kunda Z., Fong G. T. 1990 Motivated recruitment of autobiographical memory. J. Pers. Soc. Psychol. 59, 229–241. (doi:10.1037/0022-3514.59.2.229)
  • Shapiro D. 1996 On the psychology of self-deception—truth-telling, lying and self-deception. Soc. Res. 63, 785–800.
  • Taylor S., Brown J. 1988 Illusion and well-being: a social psychological perspective on mental health. Psychol. Bull. 103, 193–210. (doi:10.1037/0033-2909.103.2.193)
  • Thaler R., Shefrin H. M. 1981 An economic theory of self-control. J. Polit. Econ. 89, 392–406.
  • Trivers R. 1985 Social evolution. Menlo Park, CA: Benjamin/Cummings.
  • von Hippel W., Lakin J. L., Shakarchi R. J. 2005 Individual differences in motivated social cognition: the case of self-serving information processing. Pers. Soc. Psychol. Bull. 31, 1347–1357. (doi:10.1177/0146167205274899)
  • Weiskrantz L. 1986 Blindsight: a case study and implications. Oxford, UK: Oxford University Press.


Self-Deceived Individuals Are Better at Deceiving Others

Shakti Lamba and Vivek Nityananda (these authors contributed equally to this work)

Affiliations: Centre for Ecology and Conservation, College of Life and Environmental Sciences, University of Exeter, Penryn Campus, Cornwall, United Kingdom; Department of Anthropology, University College London, London, United Kingdom

Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom; Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, United Kingdom

Published: August 27, 2014. https://doi.org/10.1371/journal.pone.0104562

Self-deception is widespread in humans even though it can lead to disastrous consequences such as airplane crashes and financial meltdowns. Why is this potentially harmful trait so common? A controversial theory proposes that self-deception evolved to facilitate the deception of others. We test this hypothesis in the real world and find support for it: Overconfident individuals are overrated by observers and underconfident individuals are judged by observers to be worse than they actually are. Our findings suggest that people may not always reward the more accomplished individual but rather the more self-deceived. Moreover, if overconfident individuals are more likely to be risk-prone then by promoting them we may be creating institutions, including banks and armies, which are more vulnerable to risk. Our results reveal practical solutions for assessing individuals that circumvent the influence of self-deception and can be implemented in a range of organizations including educational institutions.

Citation: Lamba S, Nityananda V (2014) Self-Deceived Individuals Are Better at Deceiving Others. PLoS ONE 9(8): e104562. https://doi.org/10.1371/journal.pone.0104562

Editor: Judith Korb, University of Freiburg, Germany

Received: April 17, 2014; Accepted: July 8, 2014; Published: August 27, 2014

Copyright: © 2014 Lamba, Nityananda. This is an open-access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper and its Supporting Information files.

Funding: SL and VN jointly received funding from the Centre for Ecology and Evolution: http://www.ucl.ac.uk/cee . The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Self-deception, individuals' false beliefs about their abilities, is widespread in humans. People consistently overrate their capabilities [1], [2], suffer from positive illusions [3] and deny their disabilities [4]. We are remarkably prone to both overconfidence, reflecting inflated beliefs about our abilities, and underconfidence, arising from a negative self-image [5]–[7]. These biased beliefs can lead to costly errors with disastrous consequences including airplane crashes, financial meltdowns and war [5], [6], [8], [9]. Why is this potentially harmful trait so common? A controversial theory proposes that self-deception has evolved to facilitate the deception of others [5], [6], [8], [10]. Self-deceived individuals may be less likely to produce cues, such as stress, that reveal deception [5]. Here, we provide the first direct test of this hypothesis in a real-world setting and find support for it. We demonstrate that individuals who overestimate their abilities at a task are overrated at that task by observers. Equally, individuals who falsely believe that they are not good at the task are judged by observers to be worse at it than they actually are. Our findings suggest that people may not always reward the more accomplished individual but rather the more self-deceived. Moreover, if overconfident individuals are more likely to be risk-prone [11] then by promoting such individuals we may be creating institutions, including banks, trading floors, emergency services and armies, that are also more vulnerable to risk.

Many authors argue that the intra-personal gains of positive self-deception provide an adequate account for its prevalence [3] , [12] . For example, positive beliefs about oneself are associated with increased well-being and enhanced status [3] , [13] , [14] . Overconfidence may also be advantageous when competitors are uncertain about their relative abilities [15] . An alternative theory suggests that self-deception first evolved in the context of inter-personal relations because it facilitates the deception of others by eliminating cues that reveal deception [5] , [6] , [8] , [10] . According to this view, the intrapersonal advantages of self-deception are a by-product rather than the driving force for the evolution of this trait. While this idea has theoretical traction [16] , it remains empirically untested. We present the first direct evidence suggesting that fooling oneself helps fool others.

Our study was conducted within the context of the tutorial system implemented at some universities, where students meet in small groups on a weekly basis to review, debate and discuss course material with a tutor and each other. In these tutorials, students interact freely with each other and the tutor. At the end of the first tutorials for a first-year undergraduate course held in the first term, students were asked to privately predict the performance of each of their peers from the tutorial group; they were asked to predict the absolute grade and relative rank they thought each of their classmates would obtain for the next assignment that they would complete for the course. Similarly, they assessed their own performance. Participants received one British pound for each correct prediction that they made. Seventy-one of the 73 participants did not know anyone in their tutorial group before enrolling at university, which they had done only three weeks before the first tutorial was held. They were thus limited to basing their predictions solely on their interactions in a single tutorial. We later obtained participants' actual grades from the lecturer for the course. All assignments were marked double-blind, i.e. the lecturer did not know the identity of the students while grading them.

We measure self-deception as the difference between the self-estimate and the actual grade of an individual. We measure deception as the difference between the median estimate made by peers and the actual grade of an individual. We measure the susceptibility to being deceived as the median of the difference between the grades that an individual predicted for peers and the actual grades that those peers received from the tutor. Table 1 provides a summary of these behavioural measures and how they are calculated.

Table 1. Summary of the behavioural measures and how they are calculated (https://doi.org/10.1371/journal.pone.0104562.t001).
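
A minimal sketch of these three measures in Python, assuming a hypothetical long-format table with one row per (predictor, target) pair and columns predictor, target, predicted_grade and actual_grade (the target's actual grade); the rank-based versions would be analogous:

    import pandas as pd

    def behavioural_measures(df: pd.DataFrame) -> pd.DataFrame:
        actual = df.drop_duplicates("target").set_index("target")["actual_grade"]
        self_rows = df[df["predictor"] == df["target"]]
        peer_rows = df[df["predictor"] != df["target"]]
        # self-deception: self-estimate minus actual grade
        self_dec = self_rows.set_index("target")["predicted_grade"] - actual
        # deception: median peer estimate minus actual grade
        dec = peer_rows.groupby("target")["predicted_grade"].median() - actual
        # susceptibility: median error in one's own predictions about peers
        err = peer_rows["predicted_grade"] - peer_rows["target"].map(actual)
        susc = err.groupby(peer_rows["predictor"]).median()
        # indices are participant ids
        return pd.DataFrame({"self_deception": self_dec,
                             "deception": dec,
                             "susceptibility": susc})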

If self-deception facilitates deception then we expect the measures of self-deception and deception to be positively associated with each other. Concurrently, if self-deception diminishes individuals' ability to detect deception by others then we expect our measure of self-deception to be positively associated with the susceptibility to being deceived.

The study was run at two universities in London, University College London and Queen Mary University of London, and the average number of students in each tutorial group was about 8. Table 2 presents summary statistics for the tutorial groups included in our study and Table 3 provides a demographic description of our study sample. The study was run two times, once at the end of the first tutorial and a second time at the end of a tutorial about six weeks later to test whether the association between self-deception and deception weakened with extended interaction between participants.

Table 2. Summary statistics for the tutorial groups included in the study (https://doi.org/10.1371/journal.pone.0104562.t002).

Table 3. Demographic description of the study sample (https://doi.org/10.1371/journal.pone.0104562.t003).

Results and Discussion

1. Is self-deception about one's ability associated with how deceived others are about one's ability?

Individuals who rated themselves higher were rated higher by others, irrespective of their actual performance. There is a significant positive correlation between the measures of self-deception and deception based on both absolute grades and relative ranks, after controlling for actual grade and rank respectively (Grades: partial correlation coefficient = 0.31, one-tailed p = 0.01, df = 69; Ranks: partial correlation coefficient = 0.42, one-tailed p<0.001, df = 63; Figure 1a). The significant positive relationship between self-deception and deception is unaffected by an individual's sex, age, family income, tutorial group size or university.

Figure 1. Scatterplots with best-fit lines for residuals of deception (median estimate of focal individual's performance by peers minus focal individual's actual performance) plotted against residuals of self-deception (self-estimate of focal individual's performance minus focal individual's actual performance) based on absolute grades (red circles and red bold lines) and relative ranks (blue squares and blue dotted lines) in (a) week one and (b) week six. The residuals were obtained via a partial correlation analysis that regressed (i) self-deception against actual grade and (ii) deception against actual grade. Mean ± s.d. of the absolute level of self-deception was 1.93±1.54 grades and 2.11±1.70 ranks in week one, and 1.72±1.42 grades and 2.04±1.99 ranks in week six. Mean ± s.d. of the absolute level of deception was 1.90±1.48 grades and 1.80±1.30 ranks in week one, and 1.27±1.03 grades and 1.86±1.59 ranks in week six.

https://doi.org/10.1371/journal.pone.0104562.g001

2. Does extended interaction with individuals diminish a self-deceived individual's ability to deceive those individuals?

Extended interaction may diminish or eliminate a self-deceived individual's ability to deceive another individual. This is because deception only works as long as the deceived individual has incomplete information about the deceiver, and extended interaction is likely to provide the deceived individual with more information about the deceiver's true abilities. We therefore repeated the exercise at the end of a tutorial about six weeks later to investigate whether the association between self-deception and deception weakened with extended interaction between participants. We find that the measures of self-deception and deception remain significantly correlated (Grades: partial correlation coefficient = 0.40, one-tailed p = 0.001, df = 57; Ranks: partial correlation coefficient = 0.47, one-tailed p<0.001, df = 51; Figure 1b), suggesting that there was little effect of interaction on this timescale. It is worth noting that at one of the universities, levels of self-deception changed significantly in week six compared to week one (Wilcoxon signed-rank test, Z = −3.311, n = 18, p = 0.001). Since the association between self-deception and deception remains intact in week six, together these results suggest that as individuals' levels of self-deception change, their peers' judgements of them also change.

3. Is the degree to which individuals are self-deceived constrained by how believable their self-deception is to others?

Two factors are likely to constrain the degree to which individuals are self-deceived. First, the extent to which individuals' self-deception is believed by others, and second, the amount of error and risk it exposes them to. Self-deception is therefore expected to be anchored by an individual's actual capabilities to represent “believable deviations from reality” [5] . For instance, a B-grader should be more likely to believe that she will get an A or a C grade than an E grade. Similarly, a D-grader should be more likely to believe that she will get a C or an E grade than an A grade. In other words, we should observe a positive correlation between participants' self-predictions and their actual performance. We find that self-prediction and actual performance show no correlation based on absolute grades (Week 1 - Spearman rank correlation coefficient = 0.03, one-tailed p = 0.40, n = 72; Figure S1a ) but a significant positive correlation based on relative ranks (Week 1 - Spearman rank correlation coefficient = 0.39, one-tailed p = 0.001, n = 66; Figure S1a ). Concurrently, we find that individuals' peers' predictions about them do not correlate with their actual performance based on absolute grades but do so based on relative ranks (Week 1 - Grades: Spearman rank correlation coefficient = −0.01, one-tailed p = 0.46, n = 73; Ranks: Spearman rank correlation coefficient = 0.39, one-tailed p<0.001, n = 73; Figure S1b ). Figure S2 displays results for week six.

The above results suggest that self-deception may be anchored by actual performance only when individuals evaluate themselves within a relative framework (ranks) and not in an absolute framework (grades). The finding that peers' predictions are only anchored around an individual's actual performance when her self-predictions are too, further supports the idea that an individual's beliefs about herself influenced her peers' impressions of her. Moreover, it suggests that self-deception is believed by others even if it is not anchored around real ability. Thus, exposure to risk and error may be the more important constraint on levels of self-deception than how believable it is to others.

4. Are individuals who are self-deceived poor at detecting deception by others?

There may be several ways to detect deception by others such as relying on bodily cues and signals of deception (e.g. pitch of voice, fidgeting [6] , [17] ). One could also infer deception based on knowledge of the state of the world. In the latter case, we need to compare our own knowledge of the state of the world to the one that is being presented to us by the deceiver. Holding an erroneous representation of the state of the world may, therefore, interfere with our ability to detect deception. Since self-deception involves holding inaccurate beliefs about our abilities and the state of the world it may consequently diminish our ability to detect deception by others; in other words, it may make us more susceptible to being deceived.

We measured individuals' susceptibility to being deceived as the median difference between the grades/ranks that they predicted for peers and the actual grades/ranks that the peers received from the tutor. In week one there is a significant positive correlation between the measures of self-deception and susceptibility to being deceived based on absolute grades but not based on relative ranks, after controlling for actual grade and rank respectively (Grades: partial correlation coefficient = 0.302, one-tailed p = 0.01, df = 70; Ranks: partial correlation coefficient = 0.066, one-tailed p = 0.30, df = 63; Figure 2a). Thus, overconfident individuals tended to overestimate the abilities of others while underconfident individuals underestimated the abilities of others. However, in week six, self-deception is no longer correlated with susceptibility to being deceived based on grades or ranks (Grades: partial correlation coefficient = 0.008, one-tailed p = 0.48, df = 57; Ranks: partial correlation coefficient = −0.01, one-tailed p = 0.47, df = 51; Figure 2b).

Figure 2. Scatterplots with best-fit lines for residuals of susceptibility to being deceived (median of the difference between a focal individual's estimate of peer performance and the actual performance of peers) plotted against residuals of self-deception (self-estimate of focal individual's performance minus focal individual's actual performance) based on absolute grades (red circles and red bold lines) and relative ranks (blue squares and blue dotted lines) in (a) week one and (b) week six. The residuals were obtained via a partial correlation analysis that regressed (i) self-deception against actual grade and (ii) susceptibility to being deceived against actual grade.

https://doi.org/10.1371/journal.pone.0104562.g002

Our results suggest the possibility that self-deception diminishes an individual's ability to accurately estimate the abilities of others when they use an absolute criterion (grades) to do so, but not when they use a relative criterion (ranks). Hence, another important cost of (and therefore constraint on) being self-deceived may be an impaired ability to detect deception by others. However, we also find that the association between self-deception and the susceptibility to being deceived disappears with extended interaction, perhaps because individuals gather more information about their peers and become less prone to being deceived. This is supported by the finding that the absolute level of deception based on grades is significantly lower in week six compared to week one (Wilcoxon signed-rank test, Z = −2.94, n = 58, p = 0.003).

Our results support the idea that self-deception facilitates the deception of others. Overconfident individuals were overrated and underconfident individuals were underrated. While the benefits of being overconfident are apparent, it is less clear whether underconfidence can also be advantageous. There may, however, be situations in everyday life where individuals underplay their abilities to their competitors in order to either avoid immediate conflict or to steal an advantage at the right moment, the “underdog” effect. “Dummying up” or appearing less knowledgeable than you are may also be a way to avoid working as hard as others (pg 167 in [6] ).

Since students hardly knew each other, they had little information about what the other members of their tutorial group thought about their academic abilities, and we did not tell them the predictions their peers made for them. Thus, their peers' ratings of them could not have influenced their ratings of themselves. The study was conducted among Psychology and Anthropology students, both programmes with higher female enrolment, making our study sample female-biased. While we find no effect of sex on the relationship between self-deception and deception, previous studies have found that men are more likely to be overconfident and women to be underconfident [18]. It is therefore notable that overconfident women are as likely as overconfident men to create a false positive impression on observers.

On a practical level, we find that a relative framework of evaluation (e.g. ranks) may be superior to an absolute framework (e.g. grades) in terms of individuals' ability to both evaluate themselves and others. Individuals' evaluations of themselves are anchored around reality when they use ranks but not when they use grades ( Results and Discussion Section 3). Concurrently, their estimations of others' abilities are unaffected by their own self-deception when using ranks but not when using grades ( Results and Discussion Section 4). This may simply be because ranking individuals is a computationally easier exercise than predicting grades since each individual can only be assigned a unique rank but can be assigned any of a set of grades. Alternatively, directly comparing individuals with each other may allow people to form more accurate evaluations of their abilities compared to when they evaluate them in isolation. Our results also advocate the use of double-blind assessment wherever possible, such as in educational establishments and the scientific peer-review system, in order to circumvent the influence of self-deception by the assessee on the assessor.

Our findings have implications for many types of social interactions but especially for those involving partner-choice (e.g. choosing mates, hiring people for jobs), suggesting that we may be rewarding overconfidence and penalizing underconfidence irrespective of an individual's capability. Furthermore, if overconfident individuals are more likely to be risk-prone [11] then by promoting such individuals we may be creating institutions such as banks, trading floors and armies, that are also more vulnerable to risk. From our smallest interactions to the institutions we build, self-deception may play a profound role in shaping the world we inhabit.

Materials and Methods

This study has approval from the Ethics Committees at University College London (UCL) and Queen Mary University of London (QMUL). Informed consent was obtained from all participants.

Study set-up

The tutor in charge conducted the tutorials and students were unaware that they would be requested to participate in our study during the tutorials. We entered the tutorial room once the tutor had finished the tutorial. Students were informed that we were conducting a study on people's ability to evaluate themselves and their peers but the precise research question and hypothesis being tested were not disclosed. All students were then provided an information sheet (see Information Sheet S1 ) and students who did not want to participate in our study were allowed to leave. We then handed out nametags to the participating students (so that they could clearly identify each other) and an evaluation sheet on which they recorded the absolute grades and relative ranks that they expected each of the participants in their tutorial group (including themselves) to receive for the next assignment that they completed for the course. Participants were instructed not to predict the grades and ranks of members of their tutorial who had declined to participate in this study.

Participants were not informed which predictions were correct and were paid £1 for each grade or rank that they predicted correctly. Participants were informed about their earnings from both tutorials and all payments were made only after all data collection was complete. Participants were only paid for a prediction if it was exactly correct and not paid based on how close the predicted grade/rank was to the actual grade/rank, thus incentivizing individuals to be as accurate as possible in their predictions.

We obtained the partial correlation between self-deception and deception as well as self-deception and susceptibility to being deceived controlling for actual grades/ranks. We repeated these analyses controlling for age, sex, family income, tutorial group size and university. Non-parametric statistics were used to analyse the overall correlation between self and other predictions and actual performance. All analyses were run in SPSS version 20.0.0 [19] .
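
As a rough equivalent of this analysis outside SPSS (a sketch, not the authors' code), the partial correlation controlling for actual grade or rank can be computed by correlating the residuals of two single-covariate regressions:

    import numpy as np
    from scipy.stats import pearsonr

    def partial_corr(x, y, control):
        # Residualize x and y on the control variable, then correlate the residuals
        x, y, c = (np.asarray(v, dtype=float) for v in (x, y, control))
        X = np.column_stack([np.ones_like(c), c])
        resid = lambda v: v - X @ np.linalg.lstsq(X, v, rcond=None)[0]
        return pearsonr(resid(x), resid(y))

For the overall (non-partial) associations, scipy.stats.spearmanr and scipy.stats.wilcoxon would provide the Spearman correlations and Wilcoxon signed-rank tests of the kind reported above.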

Supporting Information

Figure S1. Correlations between self and peer predictions and actual performance in week one. Scatterplots with best-fit lines for a) self-predictions and b) peer predictions plotted against actual performance based on absolute grades (red circles and red bold lines) and relative ranks (blue squares and blue dotted lines) in week one.

https://doi.org/10.1371/journal.pone.0104562.s001

Figure S2. Correlations between self and peer predictions and actual performance in week six. Scatterplots with best-fit lines for a) self-predictions and b) peer predictions plotted against actual performance based on absolute grades (red circles and red bold lines) and relative ranks (blue squares and blue dotted lines) in week six.

https://doi.org/10.1371/journal.pone.0104562.s002

Data File S1.

Grades and ranks predicted by participants during tutorials in week one and week six, actual grades and ranks obtained in the subsequent assignment and calculated measures of self-deception and deception based on these grades and ranks.

https://doi.org/10.1371/journal.pone.0104562.s003

Information Sheet S1.

Information and evaluation sheets provided to participants during the experiment.

https://doi.org/10.1371/journal.pone.0104562.s004

Acknowledgments

We thank students at UCL and QMUL for participating in this study. Many thanks to staff at UCL and QMUL, Vera Sarkol, Emily Emmott and Kathleen Bryson for their assistance in conducting this study. The Centre for Ecology and Evolution funded this study. S.L. is currently an ESRC Future Research Leaders Fellow. S.L. and V.N. designed the study, collected the data, performed the analyses and co-wrote the paper.

Author Contributions

Conceived and designed the experiments: SL VN. Performed the experiments: SL VN. Analyzed the data: SL VN. Contributed reagents/materials/analysis tools: SL VN. Contributed to the writing of the manuscript: SL VN.

  • 6. Trivers R (2011) Deceit and Self-Deception: Fooling Yourself the Better to Fool Others. London, United Kingdom: Penguin Books Ltd. 397 p.
  • 19. SPSS Inc. (2009) SPSS 20.0.0 for Mac OS X.
  • 20. Hollingshead AB (1975) Four factor index of social status. New Haven, CT: Department of Sociology, Yale University.


Self-Deception

Virtually every aspect of the current philosophical discussion of self-deception is a matter of controversy including its definition and paradigmatic cases. We may say generally, however, that self-deception is the acquisition and maintenance of a belief (or, at least, the avowal of that belief) in the face of strong evidence to the contrary motivated by desires or emotions favoring the acquisition and retention of that belief. Beyond this, philosophers divide over whether this action is intentional or not, whether self-deceivers recognize the belief being acquired is unwarranted on the available evidence, whether self-deceivers are morally responsible for their self-deception, and whether self-deception is morally problematic (and if it is in what ways and under what circumstances). The discussion of self-deception and its associated puzzles gives us insight into the ways in which motivation affects belief acquisition and retention. And yet insofar as self-deception represents an obstacle to self-knowledge, which has potentially serious moral implications, self-deception is more than an interesting philosophical puzzle. It is a problem of particular concern for moral development, since self-deception can make us strangers to ourselves and blind to our own moral failings.

1. Definitional Issues


What is self-deception? Traditionally, self-deception has been modeled on interpersonal deception, where A intentionally gets B to believe some proposition p , all the while knowing or believing truly ~ p . Such deception is intentional and requires the deceiver to know or believe ~ p and the deceived to believe p . One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to distinguish self-deception from mere error, since the acquisition and maintenance of the false belief is intentional not accidental. If self-deception is properly modeled on such interpersonal deception, self-deceivers intentionally get themselves to believe p , all the while knowing or believing truly ~ p . On this traditional model, then, self-deceivers apparently must (1) hold contradictory beliefs, and (2) intentionally get themselves to hold a belief they know or believe truly to be false.

The traditional model of self-deception, however, has been thought to raise two paradoxes: One concerns the self-deceiver's state of mind—the so-called ‘static’ paradox. How can a person simultaneously hold contradictory beliefs? The other concerns the process or dynamics of self-deception—the so-called ‘dynamic’ or ‘strategic’ paradox. How can a person intend to deceive herself without rendering her intentions ineffective? (Mele 1987a; 2001)

The requirement that the self-deceiver holds contradictory beliefs raises the ‘static’ paradox, since it seems to pose an impossible state of mind, namely, consciously believing p and ~p at the same time. As the deceiver, she must believe ~p, and, as the deceived, she must believe p. Accordingly, the self-deceiver consciously believes p and ~p. But if believing both a proposition and its negation in full awareness is an impossible state of mind to be in, then self-deception as it has traditionally been understood seems to be impossible as well.

The requirement that the self-deceiver intentionally gets herself to hold a belief she knows to be false raises the ‘dynamic’ or ‘strategic’ paradox, since it seems to involve the self-deceiver in an impossible project, namely, both deploying and being duped by some deceitful strategy. As the deceiver, she must be aware she's deploying a deceitful strategy; but, as the deceived, she must be unaware of this strategy for it to be effective. And yet it is difficult to see how the self-deceiver could fail to be aware of her intention to deceive. A strategy known to be deceitful, however, seems bound to fail. How could I be taken in by your efforts to get me to believe something false, if I know what you're up to? But if it's impossible to be taken in by a strategy one knows is deceitful, then, again, self-deception as it has traditionally been understood seems to be impossible as well.

These paradoxes have led a minority of philosophers to be skeptical that self-deception is possible (Paluch 1967; Haight 1980). In view of the empirical evidence that self-deception is not only possible, but pervasive (Sahdra & Thagard 2003), most have sought some resolution to these paradoxes. These approaches can be organized into two main groups: those that maintain that the paradigmatic cases of self-deception are intentional, and those that deny this. Call these approaches intentionalist and non-intentionalist respectively. Intentionalists find the model of intentional interpersonal deception apt, since it helps to explain the apparent responsibility of self-deceivers for their self-deception, its selectivity and difference from other sorts of motivated belief such as wishful thinking. Non-intentionalists are impressed by the static and dynamic paradoxes allegedly involved in modeling self-deception on intentional interpersonal deception and, in their view, the equally puzzling psychological models used by intentionalists to avoid these paradoxes, such as semi-autonomous subsystems, unconscious beliefs and intentions and the like.

2. Intentionalist Approaches

The chief problem facing intentional models of self-deception is the dynamic paradox, namely, that it seems impossible to form an intention to get oneself to believe what one currently disbelieves or believes is false. To carry out an intention to deceive oneself, one must know what one is doing; to succeed, one must be ignorant of this same fact. Intentionalists agree on the proposition that self-deception is intentional, but divide over whether it requires the holding of contradictory beliefs, and thus over the specific content of the alleged intention involved in self-deception. Insofar as even the bare intention to acquire the belief that p for reasons not having to do with one's evidence for p seems unlikely to succeed if directly known, most intentionalists introduce temporal or psychological divisions that serve to insulate self-deceivers from awareness of their deceptive strategy. When self-deceivers are not consciously aware of their beliefs to the contrary or their deceptive intentions, no paradox seems to be involved in deceiving oneself. Many approaches utilize some combination of psychological and temporal division (e.g., Bermúdez 2000).

Some intentionalists argue that self-deception is a complex process that is often extended over time, and as such a self-deceiver can consciously set out to deceive herself into believing p, knowing or believing ~p, and along the way lose her belief that ~p, either forgetting her original deceptive intention entirely or regarding it as having, albeit accidentally, brought about the true belief she would have arrived at anyway (Sorensen 1985; Bermúdez 2000). So, for instance, an official involved in some illegal behavior might destroy any records of this behavior and create evidence that would cover it up (diary entries, emails and the like), knowing that she will likely forget having done these things over the next few months. When her activities are investigated a year later, she has forgotten her tampering efforts and, based upon her falsified evidence, comes to believe falsely that she was not involved in the illegal activities of which she is accused. Here the self-deceiver need never simultaneously hold contradictory beliefs, even though she intends to bring it about that she believe p, which she regards as false at the outset of the process of deceiving herself and true at its completion. The self-deceiver need not even forget her original intention to deceive, so an unbeliever who sets out to get herself to believe in God (since she thinks such a belief is prudent, having read Pascal) might well remember such an intention at the end of the process and deem that by God's grace even this misguided path led her to the truth. It is crucial to see here that what enables the intention to succeed in such cases is the operation of what Johnston (1988) terms ‘autonomous means’ (e.g., the normal degradation of memory, the tendency to believe what one practices, etc.), not the continued awareness of the intention. Some non-intentionalists take this to be a hint that the process by which self-deception is accomplished is subintentional (Johnston 1988). In any case, while such temporal partitioning accounts avoid the static and dynamic paradoxes, many find that such cases fail to capture the distinctive opacity, indirection and tension associated with garden-variety cases of self-deception (e.g., Levy 2004).

Another strategy employed by intentionalists is the division of the self into psychological parts that play the roles of the deceiver and deceived respectively. These strategies range from positing strong division in the self, where the deceiving part is a relatively autonomous subagency capable of belief, desire and intention (Rorty 1988); to more moderate division, where the deceiving part still constitutes a separate center of agency (Pears 1984, 1986, 1991); to the relatively modest division of Davidson, where there need only be a boundary between conflicting attitudes (1982, 1985). Such divisions are prompted in large part by acceptance of the contradictory-belief requirement. It isn't simply that self-deceivers hold contradictory beliefs, which, though strange, isn't impossible. One can believe p and believe ~p without believing p & ~p, which would be impossible. The problem such theorists face stems from the appearance that the belief that ~p motivates, and thus forms a part of, the intention to bring it about that one acquire and maintain the false belief p (Davidson 1985). So, for example, the Nazi official's recognition that his actions implicate him in serious evil motivates him to implement a strategy to deceive himself into believing he is not so involved; he can't intend to bring it about that he holds such a false belief if he doesn't recognize it is false, and he wouldn't want to bring such a belief about if he didn't recognize the evidence to the contrary. So long as this is the case, the deceptive subsystem, whether it constitutes a separate center of agency or something less robust, must be hidden from the conscious self being deceived if the self-deceptive intention is to succeed. While these psychological partitioning approaches seem to resolve the static and dynamic puzzles, they do so by introducing a picture of the mind that raises many puzzles of its own. On this point, there appears to be consensus even among intentionalists that self-deception can and should be accounted for without invoking divisions not already used to explain non-self-deceptive behavior, what Talbott (1995) calls ‘innocent’ divisions.

Some intentionalists reject the requirement that self-deceivers hold contradictory beliefs (Talbott 1995; Bermúdez 2000). According to such theorists, the only thing necessary for self-deception is the intention to bring it about that one believe p, where lacking such an intention one would not have acquired that belief. The self-deceiver thus need not believe ~p. She might have no views at all regarding p, possessing no evidence either for or against p; or she might believe p is merely possible, possessing evidence for or against p too weak to warrant belief that p or ~p (Bermúdez 2000). Self-deceivers in this minimal sense intentionally acquire the belief that p, despite recognizing at the outset that they do not possess enough evidence to warrant it, by selectively gathering evidence supporting p or otherwise manipulating the belief-formation process to favor belief that p. Even on this minimal account, such intentions will often be unconscious, since a strategy to acquire a belief in violation of one's normal evidential standards seems unlikely to succeed if one is aware of it.

3. Non-Intentionalist Approaches

A number of philosophers have moved away from modeling self-deception on intentional interpersonal deception, opting instead to treat it as a species of motivationally biased belief. These non-intentionalists allow that phenomena answering to the various intentionalist models available may be possible, but hold that everyday or ‘garden-variety’ self-deception can be explained without adverting to subagents or unconscious beliefs and intentions, which, even if they resolve the static and dynamic puzzles of self-deception, raise many puzzles of their own. If such non-exotic explanations are available, intentionalist explanations seem unwarranted.

The main paradoxes of self-deception seem to arise from modeling self-deception too closely on intentional interpersonal deception. Accordingly, non-intentionalists suggest the intentional model be jettisoned in favor of one that takes ‘to be deceived’ to be nothing more than to believe falsely or be mistaken in believing (Johnston 1988; Mele 2001). For instance, Sam mishears that it will be a sunny day and relays this misinformation to Joan, with the result that she believes it will be a sunny day. Joan is deceived in believing it will be sunny, and Sam has deceived her, albeit unintentionally. Initially, such a model may not appear promising for self-deception, since simply being mistaken about p, or accidentally causing oneself to be mistaken about p, doesn't seem to be self-deception at all but some sort of innocent error—Sam doesn't seem self-deceived, just deceived. Non-intentionalists, however, argue that in cases of self-deception the false belief is not accidental but motivated by desire (Mele 2001), anxiety (Johnston 1988; Barnes 1997) or some other emotion regarding p or related to p. So, for instance, when Allison believes, against the preponderance of evidence available to her, that her daughter is not having learning difficulties, the non-intentionalist will explain the various ways she misreads the evidence by pointing to such things as her desire that her daughter not have learning difficulties, her fear that she has such difficulties, or her anxiety over this possibility. In such cases, Allison's self-deceptive belief that her daughter is not having learning difficulties fulfills her desire, quells her fear or reduces her anxiety, and it is this function (not an intention) that explains why her belief-formation process is biased. Allison's false belief is not an innocent mistake, but a consequence of her motivational states.

Some non-intentionalists suppose that self-deceivers recognize at some level that their self-deceptive belief p is false, contending that self-deception essentially involves an ongoing effort to resist the thought of this unwelcome truth, or is driven by anxiety prompted by this recognition (Bach 1981; Johnston 1988). So, in Allison's case, her belief that her daughter is having learning difficulties, along with her desire that it not be the case, motivates her to employ means to avoid this thought and to believe the opposite. Others, however, argue that the needed motivation can as easily be supplied by uncertainty or ignorance whether p, or suspicion that ~p (Mele 2001; Barnes 1997). Thus, Allison need not hold any opinion regarding her daughter's having learning difficulties for her false belief that she is not experiencing difficulties to count as self-deception, since it is her treating the evidence in a motivationally biased way in the face of evidence to the contrary, not her recognition of this evidence, that makes her belief self-deceptive. Accordingly, Allison need not intend to deceive herself, nor believe at any point that her daughter is in fact having learning difficulties. If we think someone like Allison is self-deceived, then self-deception requires neither contradictory beliefs nor intentions regarding the acquisition or retention of the self-deceptive belief.

On such deflationary views of self-deception, one need only hold a false belief p, possess evidence that ~p, and have some desire or emotion that explains why p is believed and retained. In general, if one possesses evidence that one normally would take to support ~p and yet believes p instead, due to some desire, emotion or other motivation one has related to p, then one is self-deceived.
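
One way to compress this deflationary schema, in shorthand of my own rather than any particular author's notation (with B_S(p) for ‘S believes p’, E_S for the evidence S possesses, and M_S for some desire or emotion of S's related to p):

$$ B_S(p) \;\wedge\; \lnot p \;\wedge\; \big(E_S \text{ supports } \lnot p \text{ by } S\text{'s usual standards}\big) \;\wedge\; \big(M_S \text{ biases } S\text{'s treatment of } E_S \text{ in favor of } p\big) \;\Longrightarrow\; S \text{ is self-deceived that } p $$

Deflationists typically offer conditions of this sort as jointly sufficient rather than as necessary and sufficient.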

Intentionalists contend that these deflationary accounts do not adequately distinguish self-deception from other sorts of motivated believing, cannot explain the peculiar selectivity associated with self-deception, and lack a compelling explanation for why self-deceivers are typically thought to be responsible and open to censure (see section 5.1 on this last item).

What distinguishes wishful thinking from self-deception, according to intentionalists, is just that the latter is intentional while the former is not (e.g., Bermúdez 2000). Non-intentionalists respond that what distinguishes them is that self-deceivers recognize evidence against their self-deceptive belief whereas wishful thinkers do not (Bach 1981; Johnston 1988), or that self-deceivers merely possess, without recognizing, greater counterevidence than wishful thinkers do. Some contend that wishful thinking is a species of self-deception, but that self-deception includes unwelcome as well as wishful belief, and thus may be motivated by something other than a desire that the target belief be true (see section 4 for more on this variety of self-deception).

Another objection raised by intentionalists is that deflationary accounts cannot explain the selective nature of self-deception, termed the ‘selectivity problem’ by Bermúdez (1997, 2000). Why is it, such intentionalists ask, that we are not rendered biased in favor of the belief that p in many cases where we have a very strong desire that p (or anxiety or some other motivation related to p)? Intentionalists argue that an intention to get oneself to acquire the belief that p offers a relatively straightforward answer to this question. Mele (2001), drawing on empirical research regarding lay hypothesis testing, argues that selectivity may be explained in terms of the agent's assessment of the relative costs of erroneously believing p and ~p. So, for example, Josh would be happier believing falsely that the gourmet chocolate he finds so delicious isn't produced by exploited farmers than believing falsely that it is, since he desires that it not be so produced. Because Josh considers the cost of erroneously believing his favorite chocolate is tainted by exploitation to be very high (no other chocolate gives him the same pleasure), it takes a great deal more evidence to convince him that his chocolate is so tainted than it does to convince him otherwise. It is the low subjective cost of falsely believing the chocolate is not tainted that facilitates Josh's self-deception. But we can imagine Josh having the same strong desire that his chocolate not be tainted by exploitation and yet assessing the cost of falsely believing it is not tainted differently. Say, for example, he works for an organization promoting fair trade and non-exploitive labor practices among chocolate producers, believes he has an obligation to represent accurately the labor practices of the producer of his favorite chocolate, and would, furthermore, lose credibility if the chocolate he himself consumes were tainted by exploitation. In these circumstances, Josh is more sensitive to evidence that his favorite chocolate is tainted, despite his desire that it not be, since the subjective cost of being wrong is higher for him than it was before. It is the relative subjective costs of falsely believing p and ~p that explain why desire or other motivation biases belief in some circumstances and not others. Challenging this solution, Bermúdez (2000) suggests that the selectivity problem may reemerge, since it isn't clear why, in cases where there is a relatively low cost for holding a self-deceptive belief favored by our motivations, we frequently do not become self-deceived. Mele (2001), however, points out that intentional strategies have their own ‘selectivity problem’, since it isn't clear why some intentions to acquire a self-deceptive belief succeed while others do not.
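
This error-cost picture lends itself to a simple illustration. The sketch below is a toy model in Python, not Mele's own formulation: the threshold rule, the function name, and all the numbers are assumptions introduced for illustration, loosely in the spirit of the confidence-threshold treatments of lay hypothesis testing that Mele draws on.

```python
def accepts_p(evidence_for_p, cost_false_p, cost_false_not_p, scale=10.0):
    """Toy confidence-threshold model of motivated hypothesis testing.

    The agent accepts p only when the evidence clears a threshold that
    rises with the subjective cost of falsely believing p and falls as
    the cost of falsely believing ~p grows.
    """
    threshold = scale * cost_false_p / (cost_false_p + cost_false_not_p)
    return evidence_for_p >= threshold

# Let p = "my favorite chocolate is produced by exploited farmers".
evidence = 4.0  # moderately strong evidence that p (units are arbitrary)

# Josh the consumer: falsely believing p costs him his favorite pleasure,
# falsely believing ~p costs him little -> high threshold, belief resisted.
print(accepts_p(evidence, cost_false_p=9.0, cost_false_not_p=1.0))  # False

# Josh the fair-trade advocate: falsely believing ~p now carries a high
# professional cost -> low threshold, the same evidence persuades him.
print(accepts_p(evidence, cost_false_p=2.0, cost_false_not_p=8.0))  # True
```

On this toy rule, the very same evidence yields different beliefs as the error costs shift, which is just the selectivity the objection demands be explained.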

4. Twisted Self-Deception

Self-deception that involves the acquisition of an unwanted belief, termed ‘twisted self-deception’ by Mele (1999, 2001), has generated a small but growing literature of its own (see, most recently, Barnes 1997; Mele 1999, 2001; Scott-Kakures 2000, 2001). A typical example of such self-deception is the jealous husband who believes on weak evidence that his wife is having an affair, something he doesn't want to be the case. In this case, the husband apparently comes to hold this false belief in the face of strong evidence to the contrary, in ways similar to those by which ordinary self-deceivers come to believe what they want to be true.

One question philosophers have sought to answer is how a single unified account of self-deception can explain both welcome and unwelcome beliefs. If a unified account is sought, then it seems self-deception cannot require that the self-deceptive belief itself be desired. Pears (1984) has argued that unwelcome belief might be driven by fear or jealousy. My fear of my house burning down might motivate my false belief that I have left the stove burner on. This unwelcome belief serves to ensure that I avoid what I fear, since it leads me to confirm that the burner is off. Barnes (1997) argues that the unwelcome belief must serve to reduce some relevant anxiety, in this case my anxiety that my house is burning. Scott-Kakures (2000, 2001) argues, however, that since the unwelcome belief itself in many cases serves not to reduce but rather to increase anxiety or fear, their reduction cannot be the purpose of that belief. Instead, he contends that we should think of the belief as serving to make the satisfaction of the agent's goals and interests more probable, in my case, the preservation of my house. My testing and confirming an unwelcome belief may be explained by the costs I associate with being in error, which are determined in view of my relevant aims and interests. If I falsely believe that I have left the burner on, the cost is relatively low: I am inconvenienced by confirming that it is off. If I falsely believe that I have not left the burner on, the cost is extremely high: my house being destroyed by fire. The asymmetry between these relative costs alone may account for my manipulation of evidence confirming the false belief that I have left the burner on. Drawing upon recent empirical research, both Mele (2001) and Scott-Kakures (2000) advocate a model of this sort, since it helps to account for the roles desires and emotions apparently play in cases of twisted self-deception.

Nelkin (2002) argues that the motivation for self-deceptive belief formation should be restricted to a desire to believe p. She points out that the phrase “unwelcome belief” is ambiguous, since a belief itself might be desirable even if its being true is not. I might want to hold the belief that I have left the burner on, but not want it to be the case that I have left it on. The belief is desirable in this instance, because holding it helps ensure that it will not be true. In Nelkin's view, then, what unifies cases of self-deception—both twisted and straight—is that the self-deceptive belief is motivated by a desire to believe p; what distinguishes them is that twisted self-deceivers do not want p to be the case, while straight self-deceivers do. Restricting the motivating desire to a desire to believe p, according to Nelkin, makes clear what twisted and straight self-deception have in common, as well as why other forms of motivated belief formation are not cases of self-deception. Though non-intentional models of twisted self-deception dominate the landscape, whether desire, emotion or some combination of these attitudes plays the dominant role in such self-deception, and whether their influence merely triggers the process or continues to guide it throughout, remain matters of controversy.
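
The burner case reduces to a crude expected-cost comparison; the symbols here are mine, introduced only for illustration. Writing $c_{\text{check}}$ for the minor cost of a needless trip to the kitchen and $c_{\text{fire}}$ for the cost of a house fire,

$$ \underbrace{c_{\text{check}}}_{\text{cost of falsely believing “on”}} \;\ll\; \underbrace{c_{\text{fire}}}_{\text{cost of falsely believing “off”}} $$

so an agent who weighs errors this way will accept ‘the burner is on’ on far weaker evidence than ‘the burner is off’, arriving at the unwelcome belief without any desire that it be true.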

5. Morality and Self-Deception

Despite the fact that much of the contemporary philosophical discussion of self-deception has focused on epistemology, philosophical psychology and philosophy of mind, the morality of self-deception has historically been the central focus of discussion. As a threat to moral self-knowledge, a cover for immoral activity, and a violation of authenticity, self-deception has been thought to be morally wrong or, at least, morally dangerous. Some thinkers, however, belonging to what Martin (1986) calls ‘the vital lie tradition’, have held that self-deception can in some instances be salutary, protecting us from truths that would make life unlivable (e.g., Rorty 1972, 1994). There are two major questions regarding the morality of self-deception: First, can a person be held morally responsible for self-deception, and if so, under what conditions? Second, is there anything morally problematic with self-deception, and if so, what and under what circumstances? The answers to these questions are clearly intertwined. If self-deceivers cannot be held responsible for self-deception, then their responsibility for whatever morally objectionable consequences it might have will be mitigated if not eliminated. Nevertheless, self-deception might be morally significant even if one cannot be taxed for entering into it. To be ignorant of one's moral self, as Socrates saw, may represent a great obstacle to a life well lived, whether or not one is at fault for such ignorance.

5.1 Moral Responsibility for Self-Deception

Whether self-deceivers can be held responsible for their self-deception is largely a question of whether they have the requisite control over the acquisition and maintenance of their self-deceptive belief. In general, intentionalists hold that self-deceivers are responsible, since they intend to acquire the self-deceptive belief, usually recognizing the evidence to the contrary. Even when the intention is indirect, such as when one intentionally seeks evidence in favor of p or avoids collecting or examining evidence to the contrary, self-deceivers seem intentionally to flout their own normal standards for gathering and evaluating evidence. So, minimally, they are responsible for such actions and omissions.

Initially, non-intentionalist approaches may seem to remove the agent from responsibility by rendering the process by which she is self-deceived subintentional. If my anxiety, fear, or desire triggers a process that ineluctably leads me to hold the self-deceptive belief, I cannot be held responsible for holding that belief. How can I be held responsible for processes that operate without my knowledge and which are set in motion without my intention? Most non-intentionalist accounts, however, do hold self-deceivers responsible for individual episodes of self-deception, or for the vices of cowardice and lack of self-control from which they spring, or both. To be morally responsible in the sense of being an appropriate target for praise or blame requires, at least, that agents have control over the actions in question. Mele (2001), for example, argues that many sources of bias are controllable and that self-deceivers can recognize and resist the influence of emotion and desire on their belief acquisition and retention, particularly in matters they deem to be important, morally or otherwise. The extent of this control, however, is an empirical question.

Other non-intentionalists take self-deceivers to be responsible for certain epistemic vices, such as cowardice in the face of fear or anxiety and lack of self-control with respect to the biasing influences of desire and emotion. Thus, Barnes (1997) argues that self-deceivers “can, with effort, in some circumstances, resist their biases” (83) and “can be criticized for failing to take steps to prevent themselves from being biased; they can be criticized for lacking courage in situations where having courage is neither superhumanly difficult nor costly” (175). Whether self-deception is due to a character defect or not, ascriptions of responsibility depend upon whether the self-deceiver has control over the biasing effects of her desires and emotions.

Levy (2004) has argued that non-intentional accounts of self-deception that deny the contradictory-belief requirement should not suppose that self-deceivers are typically responsible, since it is rarely the case that self-deceivers possess the requisite awareness of the biasing mechanisms operating to produce their self-deceptive belief. Lacking such awareness, self-deceivers do not appear to know when or on which beliefs such mechanisms operate, rendering them unable to curb the effects of these mechanisms, even when they operate to form false beliefs about morally significant matters. Levy also argues that if self-deceivers typically lack the control necessary for moral responsibility in individual episodes of self-deception, they also lack control over being the sort of person disposed to self-deception. Non-intentionalists may respond by claiming that self-deceivers often are aware of the potentially biasing effects their desires and emotions might have and can exercise control over them. They might also challenge the idea that self-deceivers must be aware in the ways Levy suggests. One well-known account of control, employed by Levy, holds that a person is responsible just in case she acts on a mechanism that is moderately responsive to reasons (including moral reasons), such that were she to possess such reasons this same mechanism would act upon them in at least one possible world (Fischer and Ravizza 1998). Guidance control, in this sense, requires that the mechanism in question be capable of recognizing and responding to moral and non-moral reasons sufficient for acting otherwise. In cases of self-deception, deflationary views may suggest that the biasing mechanism, while sensitive and responsive to motivation, is too simple to be itself responsive to reasons. However, the question isn't whether the biasing mechanism itself is reasons-responsive but whether the mechanism governing its operation is, that is, whether self-deceivers typically could recognize and respond to moral and non-moral reasons to resist the influence of their desires and emotions and instead exercise special scrutiny of the belief in question. At the very least, it isn't obvious that they could not. Moreover, that some people overcome their self-deception seems to indicate such a capacity, and thus control at least over ceasing to be self-deceived.

Insofar as it seems plausible that in some cases self-deceivers are apt targets for censure, what prompts this attitude? Take the case of a mother who deceives herself into believing her husband is not abusing their daughter because she can't bear the thought that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved and breached in cases of this sort will help to clarify the circumstances in which ascriptions of responsibility are appropriate.

5.2 The Morality of Self-Deception

While some instances of self-deception seem morally innocuous, and others may even be thought salutary in various ways (Rorty 1994), the majority of theorists have thought there to be something morally objectionable about self-deception or its consequences in many cases. Self-deception has been considered objectionable because it facilitates harm to others (Linehan 1982) and to oneself, undermines autonomy (Darwall 1988; Baron 1988), corrupts conscience (Butler 1726), violates authenticity (Sartre 1943), and manifests a vicious lack of courage and self-control that undermines the capacity for compassionate action (Jenni 2003). Linehan (1982) argues that we have an obligation to scrutinize the beliefs that guide our actions that is proportionate to the harm to others such actions might involve. When self-deceivers induce ignorance of moral obligations, of the particular circumstances, of likely consequences of actions, or of their own engagements by means of their self-deceptive beliefs, they are culpable; they are guilty of negligence with respect to their obligation to know the nature, circumstances, and likely consequences of their actions (Jenni 2003). Self-deception, accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny and change (Baron 1988). If I am self-deceived about actions or practices that harm others or myself, my ability to take responsibility and to change is severely restricted. Joseph Butler, in his well-known sermon “Upon Self-Deceit”, emphasizes the ways in which self-deception about one's moral character and conduct, ‘self-ignorance’ driven by inordinate ‘self-love’, not only facilitates vicious actions but hinders the agent's ability to change by obscuring her character and conduct from view. Such ignorance, claims Butler, “undermines the whole principle of good … and corrupts conscience, which is the guide of life” (“Upon Self-Deceit”). Existentialist philosophers such as Kierkegaard and Sartre, in very different ways, viewed self-deception as a threat to ‘authenticity’, insofar as self-deceivers fail to take responsibility for themselves and their engagements past, present and future. By alienating us from our own principles, self-deception may also threaten moral integrity (Jenni 2003). Furthermore, self-deception manifests certain weaknesses of character that dispose us to react to fear, anxiety, or the desire for pleasure by biasing our belief acquisition and retention in ways that serve these emotions and desires rather than accuracy. Such epistemic cowardice and lack of self-control may inhibit the ability of self-deceivers to stand by or apply the moral principles they hold, by biasing their beliefs regarding particular circumstances, consequences or engagements, or by obscuring the principles themselves. In these ways and a myriad of others, philosophers have found some self-deception objectionable in itself or for the consequences it has on our ability to shape our lives.

Those who find self-deception morally objectionable generally assume that self-deception, or at least the character that disposes us to it, is under our control to some degree. This assumption need not entail that self-deception is intentional, only that it is avoidable in the sense that self-deceivers could recognize and respond to reasons for resisting bias by exercising special scrutiny (see section 5.1). It should be noted, however, that self-deception still poses a serious worry even if one cannot avoid entering into it, since self-deceivers may nevertheless have an obligation to overcome it. If exiting self-deception is under the guidance control of self-deceivers, then they might reasonably be blamed for persisting in self-deceptive beliefs regarding matters of moral significance.

But even if agents don't bear specific responsibility for entering that state, self-deception may nevertheless be morally objectionable, destructive and dangerous. If radically deflationary models of self-deception do turn out to imply that our own desires and emotions, in collusion with social pressures toward bias, lead us to hold self-deceptive beliefs and cultivate habits of self-deception of which we are unaware and from which we cannot reasonably be expected to escape on our own, self-deception would still undermine autonomy, manifest character defects, obscure our moral engagements, and the like. For these reasons, Rorty (1994) emphasizes the importance of the company we keep. Our friends, since they may not share our desires or emotions, are often in a better position to recognize our self-deception than we are. With the help of such friends, self-deceivers may, with luck, recognize and correct morally corrosive self-deception.

Evaluating self-deception and its consequences for ourselves and others is a difficult task. It requires, among other things: determining the degree of control self-deceivers have; what the self-deception is about (Is it important morally or otherwise?); what ends the self-deception serves (Does it serve mental health or as a cover for moral wrongdoing?); how entrenched it is (Is it episodic or habitual?); and, whether it is escapable (What means of correction are available to the self-deceiver?). In view of the many potentially devastating moral problems associated with self-deception, these are questions that demand our continued attention.

Bibliography

  • Ames, R. T. and Dissanayake, W. (eds.), 1996, Self and Deception, Albany: State University of New York Press.
  • Barnes, A., 1997, Seeing through Self-Deception, New York: Cambridge University Press.
  • Dupuy, J.-P. (ed.), 1998, Self-Deception and Paradoxes of Rationality (CSLI Lecture Notes, 69), Cambridge: Cambridge University Press.
  • Elster, J. (ed.), 1985, The Multiple Self, Cambridge: Cambridge University Press.
  • Fingarette, H., 1969 [2000], Self-Deception, Berkeley: University of California Press.
  • Haight, M. R., 1980, A Study of Self-Deception, Sussex: Harvester Wheatsheaf.
  • Lockard, J. and Paulhus, D. (eds.), 1988, Self-Deception: An Adaptive Mechanism?, Englewood Cliffs: Prentice-Hall.
  • Martin, M., 1986, Self-Deception and Morality, Lawrence: University Press of Kansas.
  • ––– (ed.), 1985, Self-Deception and Self-Understanding, Lawrence: University Press of Kansas.
  • McLaughlin, B. and Rorty, A. O. (eds.), 1988, Perspectives on Self-Deception, Berkeley: University of California Press.
  • Mele, A., 1987a, Irrationality: An Essay on Akrasia, Self-Deception, Self-Control, Oxford: Oxford University Press.
  • –––, 2001, Self-Deception Unmasked, Princeton: Princeton University Press.
  • Pears, D., 1984, Motivated Irrationality, New York: Oxford University Press.
  • Audi, R., 1976, “Epistemic Disavowals and Self-Deception”, The Personalist, 57: 378-385.
  • –––, 1982, “Self-Deception, Action, and Will”, Erkenntnis, 18: 133-158.
  • –––, 1989, “Self-Deception and Practical Reasoning”, Canadian Journal of Philosophy, 19: 247-266.
  • Bach, K., 1981, “An Analysis of Self-Deception”, Philosophy and Phenomenological Research, 41: 351-370.
  • –––, 1997, “Thinking and Believing in Self-Deception”, Behavioral and Brain Sciences, 20: 105.
  • Baron, M., 1988, “What is Wrong with Self-Deception”, in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Bermúdez, J., 1997, “Defending Intentionalist Accounts of Self-Deception”, Behavioral and Brain Sciences, 20: 107-108.
  • –––, 2000, “Self-Deception, Intentions, and Contradictory Beliefs”, Analysis, 60(4): 309-319.
  • Bird, A., 1994, “Rationality and the Structure of Self-Deception”, in European Review of Philosophy (Volume 1: Philosophy of Mind), G. Soldati (ed.), Stanford: CSLI Publications.
  • Bok, S., 1980, “The Self Deceived”, Social Science Information, 19: 923-935.
  • –––, 1989, “Secrecy and Self-Deception”, in Secrets: On the Ethics of Concealment and Revelation, New York: Vintage.
  • Brown, R., 2003, “The Emplotted Self: Self-Deception and Self-Knowledge”, Philosophical Papers, 32: 279-300.
  • Butler, J., 1726, “Upon Self-Deceit”, in The Works of Bishop Butler, D. E. White (ed.), 2006, Rochester: University of Rochester Press.
  • Chisholm, R. M. and Feehan, T., 1977, “The Intent to Deceive”, Journal of Philosophy, 74: 143-159.
  • Cook, J. T., 1987, “Deciding to Believe without Self-Deception”, Journal of Philosophy, 84: 441-446.
  • Dalton, P., 2002, “Three Levels of Self-Deception (Critical Commentary on Alfred Mele's Self-Deception Unmasked)”, Florida Philosophical Review, II(1): 72-76.
  • Darwall, S., 1988, “Self-Deception, Autonomy, and Moral Constitution”, in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Davidson, D., 1982, “Paradoxes of Irrationality”, in Philosophical Essays on Freud, R. Wollheim and J. Hopkins (eds.), Cambridge: Cambridge University Press.
  • –––, 1985, “Deception and Division”, in Actions and Events, E. LePore and B. McLaughlin (eds.), New York: Basil Blackwell.
  • Demos, R., 1960, “Lying to Oneself”, Journal of Philosophy, 57: 588-595.
  • Dennett, D., 1992, “The Self as a Center of Narrative Gravity”, in Consciousness and Self: Multiple Perspectives, F. Kessel, P. Cole, and D. Johnson (eds.), Hillsdale, NJ: L. Erlbaum.
  • de Sousa, R., 1970, “Self-Deception”, Inquiry, 13: 308-321.
  • –––, 1978, “Self-Deceptive Emotions”, Journal of Philosophy, 75: 684-697.
  • DeWeese-Boyd, I., 2007, “Taking Care: Self-Deception, Culpability and Control”, teorema, XXVI/3: 161-176.
  • Dunn, R., 1994, “Two Theories of Mental Division”, Australasian Journal of Philosophy, 72: 302-316.
  • –––, 1995, “Motivated Irrationality and Divided Attention”, Australasian Journal of Philosophy, 73: 325-336.
  • –––, 1995, “Attitudes, Agency and First-Personality”, Philosophia, 24: 295-319.
  • Fairbanks, R., 1995, “Knowing More Than We Can Tell”, The Southern Journal of Philosophy, 33: 431-459.
  • Fingarette, H., 1998, “Self-Deception Needs No Explaining”, The Philosophical Quarterly, 48: 289-301.
  • Fischer, J. and Ravizza, M., 1998, Responsibility and Control, Cambridge: Cambridge University Press.
  • Funkhouser, E., 2005, “Do the Self-Deceived Get What They Want?”, Pacific Philosophical Quarterly, 86(3): 295-312.
  • Gendler, T. S., 2007, “Self-Deception as Pretense”, Philosophical Perspectives, 21: 231-258.
  • Hales, S. D., 1994, “Self-Deception and Belief Attribution”, Synthese, 101: 273-289.
  • Hauerwas, S. and Burrell, D., 1977, “Self-Deception and Autobiography: Reflections on Speer's Inside the Third Reich”, in Truthfulness and Tragedy, S. Hauerwas with R. Bondi and D. Burrell, Notre Dame: University of Notre Dame Press.
  • Hernes, C., 2007, “Cognitive Peers and Self-Deception”, teorema, XXVI/3: 123-130.
  • Jenni, K., 2003, “Vices of Inattention”, Journal of Applied Philosophy, 20(3): 279-295.
  • Johnston, M., 1988, “Self-Deception and the Nature of Mind”, in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Kirsch, J., 2005, “What's So Great about Reality?”, Canadian Journal of Philosophy, 35(3): 407-428.
  • Lazar, A., 1997, “Self-Deception and the Desire to Believe”, Behavioral and Brain Sciences, 20: 119-120.
  • –––, 1999, “Deceiving Oneself Or Self-Deceived?”, Mind, 108: 263-290.
  • Levy, N., 2004, “Self-Deception and Moral Responsibility”, Ratio (new series), 17: 294-311.
  • Martínez Manrique, F., 2007, “Attributions of Self-Deception”, teorema, XXVI/3: 131-143.
  • Mele, A., 1983, “Self-Deception”, Philosophical Quarterly, 33: 365-377.
  • –––, 1987b, “Recent Work on Self-Deception”, American Philosophical Quarterly, 24: 1-17.
  • –––, 1997, “Real Self-Deception”, Behavioral and Brain Sciences, 20: 91-102.
  • –––, 1999, “Twisted Self-Deception”, Philosophical Psychology, 12: 117-137.
  • –––, 2000, “Self-Deception and Emotion”, Consciousness and Emotion, 1: 115-139.
  • Moran, R., 1988, “Making Up Your Mind: Self-Interpretation and Self-Constitution”, Ratio (new series), 1: 135-151.
  • Nelkin, D., 2002, “Self-Deception, Motivation, and the Desire to Believe”, Pacific Philosophical Quarterly, 83: 384-406.
  • Nicholson, A., 2007, “Cognitive Bias, Intentionality and Self-Deception”, teorema, XXVI/3: 45-58.
  • Noordhof, P., 2003, “Self-Deception, Interpretation and Consciousness”, Philosophy and Phenomenological Research, 67: 75-100.
  • Paluch, S., 1967, “Self-Deception”, Inquiry, 10: 268-278.
  • Patten, D., 2003, “How Do We Deceive Ourselves?”, Philosophical Psychology, 16(2): 229-246.
  • Pears, D., 1991, “Self-Deceptive Belief Formation”, Synthese, 89: 393-405.
  • Pihlström, S., 2007, “Transcendental Self-Deception”, teorema, XXVI/3: 177-189.
  • Räikkä, J., 2007, “Self-Deception and Religious Beliefs”, Heythrop Journal, XLVIII: 513-526.
  • Rorty, A. O., 1972, “Belief and Self-Deception”, Inquiry, 15: 387-410.
  • –––, 1980, “Self-Deception, Akrasia and Irrationality”, Social Science Information, 19: 905-922.
  • –––, 1983, “Akratic Believers”, American Philosophical Quarterly, 20: 175-183.
  • –––, 1994, “User-Friendly Self-Deception”, Philosophy, 69: 211-228.
  • Sahdra, B. and Thagard, P., 2003, “Self-Deception and Emotional Coherence”, Minds and Machines, 13: 213-231.
  • Sartre, J.-P., 1943, L'être et le néant, Paris: Gallimard; trans. H. E. Barnes, 1956, Being and Nothingness, New York: Washington Square Press.
  • Scott-Kakures, D., 2000, “Motivated Believing: Wishful and Unwelcome”, Noûs, 34: 348-375.
  • –––, 2001, “High Anxiety: Barnes on What Moves the Unwelcome Believer”, Philosophical Psychology, 14: 348-375.
  • –––, 2002, “At Permanent Risk: Reasoning and Self-Knowledge in Self-Deception”, Philosophy and Phenomenological Research, 65: 576-603.
  • Sorensen, R., 1985, “Self-Deception and Scattered Events”, Mind, 94: 64-69.
  • Talbott, W. J., 1995, “Intentional Self-Deception in a Single Coherent Self”, Philosophy and Phenomenological Research, 55: 27-74.
  • –––, 1997, “Does Self-Deception Involve Intentional Biasing?”, Behavioral and Brain Sciences, 20: 127.
  • Tversky, A., 1985, “Self-Deception and Self-Perception”, in The Multiple Self, J. Elster (ed.), Cambridge: Cambridge University Press.
  • Van Fraassen, B., 1984, “Belief and Will”, Journal of Philosophy, 81: 235-256.
  • –––, 1995, “Belief and the Problem of Ulysses and the Sirens”, Philosophical Studies, 77: 7-37.
  • Whisner, W., 1989, “Self-Deception, Human Emotion, and Moral Responsibility: Toward a Pluralistic Conceptual Scheme”, Journal for the Theory of Social Behaviour, 19: 389-410.
  • –––, 1993, “Self-Deception and Other-Person Deception”, Philosophia, 22: 223-240.
  • Self-Deception Bibliography, compiled by David Chalmers and David Bourget, Australian National University.



The Noble Art of Self-Deception

Peter Gärdenfors, Ph.D.

Self-deception is not always a bad thing.

  • Psychologists Justin Kruger and David Dunning revealed people seriously overestimate their abilities.
  • Metacognition is the ability to reflect on and assess one's own thought processes.
  • Incompetent individuals demonstrate less efficient metacognition compared to competent individuals.


Where I grew up, there was a lumberjack who was an oddball. He was stingy, surviving on coarse bread, grease, and salted herring. The remaining money was spent on vodka.

It is said that during his evening meal, he would spread grease on a slice of bread and place a piece of herring from a jar at one end of the bread. As he ate, he moved the herring farther away from the bread. Finally, when he finished the last piece of bread, he would return the herring to the jar and exclaim aloud to himself: "I fooled you again, you stupid bastard."

Who was fooling whom? The lumberjack was not schizophrenic, but like everyone else, he sometimes had a dialogue with himself: Should he eat the herring now or save it for his future self? The longer he favored the future, the more rancid the herring became.

How We Deceive Ourselves

How is it possible to deceive oneself? Wouldn't one immediately recognize the trickery if attempted? In reality, we are surprisingly adept at deceiving ourselves, often unconsciously. Psychologists have long understood that people live with various kinds of life lies, but self-deception manifests in many more contexts.


Self-deception operates because the self is not an indivisible entity: The unconscious side of the self can deceive the conscious one. One form of self-deception involves expressing a desire to achieve a particular goal while unconsciously working towards another. This strategy is succinctly summarized by the French philosopher Blaise Pascal's aphorism: "The heart has its reasons which reason does not know at all."

We overestimate ourselves to prioritize ourselves over others and thus survive. If we were to perceive our objectively true selves, we would likely become despondent.

Deception does not always involve outright lies; it can also involve an exaggeration of certain characteristics. Literal self-embellishment—makeup, hairstyling, clothing choices—is an everyday form of self-deception that most people engage in. Rarely do we desire to reveal our authentic selves.

Most individuals harbor illusions about themselves and believe that they possess above-average positive qualities. We tend to think we are more intelligent, honest, friendly, original, and reliable than average. We also believe that we will live longer than average and drive better than average (even those who have been hospitalized for traffic accidents hold this belief). Moreover, these illusions extend to self-reflection: Most people perceive themselves as less influenced by such illusions than the average person.

Overestimating Our Own Abilities

Naturalist Charles Darwin long ago observed that self-confidence more often stems from ignorance than knowledge. For example, drivers who have been involved in accidents or people who have failed a driving test are worse at judging their own performance on a reaction test than are experienced drivers.

Social psychologists Justin Kruger and David Dunning conducted a series of tests revealing that people who are among the worst in terms of reasoning logically, writing grammatically, or understanding humor, for example, seriously overestimate their own abilities. On average, the lowest-performing quarter of the participants rated themselves as being in the top 40 percent.

Metacognition and the Two-Fold Burden

Kruger and Dunning explain this self-assessment by asserting that incompetent individuals possess poorer metacognition compared to competent ones. Metacognition is the ability to reflect on and assess one's own thought processes.

For example, the ability to write a grammatically correct sentence is closely akin to the ability to recognize a grammatical error in a sentence. Hence, if people fail to recognize their mistakes, they will grossly overestimate their ability to write grammatically.

Incompetent individuals, therefore, bear a two-fold burden: Not only do they draw the wrong conclusions and make the wrong decisions, but their incompetence also robs them of the metacognitive ability to recognize their shortcomings.

On the other hand, the top quarter of the subjects in the experiment slightly underestimated their competence on average. This aligns with research demonstrating that experts in a field have much more developed metacognition when it comes to problem-solving than do novices.
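
To make the mechanism vivid, here is a toy simulation in Python; it is my illustration, not Kruger and Dunning's analysis or data, and all the numbers are arbitrary assumptions. Agents judge their own percentile with noise that shrinks as skill grows (standing in for better metacognition), on a scale bounded at 0 and 100, and the qualitative pattern of the experiments emerges: the bottom quartile overestimates itself, while the top quartile slightly underestimates itself.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_pct = rng.uniform(0, 100, n)  # each agent's true skill percentile

# Toy assumption: self-assessment noise shrinks as skill grows,
# standing in for the better metacognition of competent agents.
noise_sd = 5 + 30 * (1 - true_pct / 100)
self_est = np.clip(true_pct + rng.normal(0, noise_sd, n), 0, 100)

for label, mask in [("bottom quartile", true_pct < 25),
                    ("top quartile   ", true_pct >= 75)]:
    print(f"{label}: true mean {true_pct[mask].mean():5.1f}, "
          f"self-estimate mean {self_est[mask].mean():5.1f}")
```

Nothing in the toy model requires vanity: noisy self-judgment on a bounded scale is enough to push the least skilled agents' estimates upward and pull the most skilled agents' estimates slightly downward.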


Positive Illusions and the Natural Urge to Embellish

A beneficial effect of overestimating oneself is that positive illusions lead to better health and longer life. Studies conducted on HIV-positive people revealed that those with excessively positive perceptions of themselves exhibited a significantly slower progression of the disease.

Similarly, patients who perceived no risk in an upcoming operation tended to recover more quickly after surgery compared to those who were concerned about the procedure. Furthermore, women who denied problems associated with a breast cancer diagnosis had fewer recurrences of the disease compared to others.

I travel extensively, both for work and vacation. Most of the time, I carry a camera to capture people, places, and moments that I want to remember. Often, I find myself attempting to embellish the pictures, for example, only capturing people when they appear happy or deliberately excluding an ugly house in a beach photograph. I believe many amateur photographers can relate to this behavior.

Why do I really want to embellish the images? My perception has been that I want to present others with more appealing depictions of my experiences than they actually were, much like dressing up to look good. However, I rarely show the pictures to others; instead, I mostly deceive myself.

In fact, what happens when I embellish a photo is that I design a memory. My memory of the trip will be largely colored by the images I choose to preserve. I deceive myself into thinking that the trip was more golden than it really was.

Apart from everyday self-aggrandizement, one of the most prevalent forms of self-deception involves selectively choosing which information to acknowledge. “What I don't know can't hurt me” is a prime example of self-deception.

The common understanding of self-deception posits hidden urges and other unconscious forces that actually drive our actions, while we believe it is our conscious motives that guide them. In cases of self-deception, the unconscious does not align with the conscious.

Hence, the paradox of self-deception lies in the question of how we can avoid discovering that the interpretations we make of our actions are, in the long run, so poorly aligned with our actual behavior. Our consciousness never encourages us to be honest with ourselves. A life free from self-deception can only be attained through an unadulterated understanding of our actions.

Peter Gärdenfors, Ph.D., is a professor of cognitive science at Lund University, Sweden.

  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Neuroscience
  • Cognitive Psychology
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business History
  • Business Ethics
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic Methodology
  • Economic History
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Politics and Law
  • Public Administration
  • Public Policy
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Moral Psychology

16 Self-Deception and the Moral Self

Richard Holton is Professor of Philosophy at the University of Cambridge.

Published: 20 April 2022

Self-deception is rife in moral thinking. Some have even argued that moral behaviour is fundamentally driven by self-signalling: we want to see ourselves as good, and we make use of self-deception to achieve it. What form does this self-deception take? A rough division is made between two sorts of account: the proactive, wherein the agent is seen as having in place a prior strategy to avoid unwanted knowledge; and the reactive, wherein the agent is held to initially register the unwanted knowledge before responding to block it. Refining this distinction takes us to the issue of whether there are ‘tension triggers’: states that are in tension with the agent’s existing beliefs and that serve to trigger a self-deceptive response. The empirical literature does not conclusively show that there are tension triggers; but it provides plausible examples.

16.1 Introduction

Suppose that we are motivated by the moral judgments that others make about us: we want others to think well of us as moral beings. That may move us to act well. But equally, when we act badly, it may move us to deceive those who see our transgression. We may deceive them in straightforward terms about what we did. Or, if that is not possible, we may deceive them about how to categorize our action, about our motivation, or about what we knew. We may say that this wasn’t really a case of dishonesty but of tact; that we acted, not for any personal benefit, but for the benefit of others; or that we had no idea, when we acted, of the harm that we would cause.

Suppose, however, that another story is true: we are primarily motivated, not by how others judge us, but by how we morally judge ourselves. Suppose, that is, that we want to judge ourselves as morally good. Here again we may be motivated to act well; and again, when we do not, we may be motivated to deceive. Now, though, rather than deceiving others, the deception will be self-deception. Deceiving oneself about what one has done may be hard, at least in the immediate aftermath when memories are clear. But deceiving oneself about such murky issues as how to classify one’s actions, about one’s motives, or about the prior evidence one had for certain outcomes, may be much easier.

This idea, that we are moved by wanting to see ourselves as good, and that we use self-deception to achieve it, is an old one; most pre-twentieth-century discussions of self-deception were focused on its moral importance. 1 More recent philosophical discussions of self-deception have tended to lose this moral focus, but it has remained at centre stage in much recent thought in a variety of disciplines. Some see the maintenance of our moral self-image as providing the essence of moral motivation ( Bénabou and Tirole 2011 ); others see self-deception as an essential but instrumental step in the deception of others ( von Hippel and Trivers 2011 ). Whether or not we want to make such sweeping claims, we are certainly very accustomed to the idea that self-deception plays an important role in our moral lives: that even if we genuinely want to do the right thing, our parallel desire to maintain our moral self-image means that when we behave badly we frequently fail to realize that we are doing so.

This idea has been central to the thesis of the banality of evil. Roy Baumeister (1999) records these attitudes on the part of many of those involved in the atrocities of the twentieth century. 2 For a compelling example—admittedly not at the most atrocious end—consider the results of Timothy Garton-Ash’s quest, after the fall of the Berlin Wall, to interview those who had kept a Stasi file on him. Almost without fail he was met with a mixture of denial, minimization, and self-justification. ‘What you find is less malice than human weakness, a vast anthology of human weakness. And when you talk to those involved, what you find is less deliberate dishonesty than our almost infinite capacity for self-deception’ ( Garton-Ash 1997 : 223).

How well has this approach fared under psychological scrutiny? Our concerns here will be twofold. The first is with the evidence that we do indeed go in for moral self-deception, either for some of the reasons just sketched, or for other reasons. The second is with how what we find here fits with the perennial issue of the nature of self-deception. A literal-minded approach models it on the deception of others: it holds that in central cases of self-deception we know the truth, but we succeed in hiding it from ourselves. On such an approach, self-deception would involve the simultaneous holding of contradictory beliefs, with a purposive manipulation of what is made available to consciousness. That immediately raises the problem of how it would be possible: how we can be at once clever enough to arrange the deception and then gullible enough to fall for it.

An increasingly influential deflationary alternative holds that nothing like this is going on. There are two different ideas here. The first is that in self-deception the part that deceives doesn’t have to be seen as a homunculus, a full-blown knowing agent with intentional projects of its own (Johnston 1988). This is not so controversial now; in fact it is plausible that even Freud, often seen as the paradigmatic proponent of the inflated approach, didn’t really see the unconscious self as anything like a separate agent ( Gardner 1993 ). More controversial is the second idea, that self-deceived agents need have no awareness, at any level, of the facts from which they are screening themselves. The alternative model here involves the kind of self-serving bias that work in social psychology has shown enables us to persist in self-ignorance in many spheres: we somehow divert our gaze to avoid the uncomfortable facts (Mele 1997 ; 2001 ; Barnes 1997 ). Such a bias may be bad enough for our ordinary view that we possess a reasonable degree of moral self-awareness ( Doris 2015 ), but it certainly doesn’t amount to anything like a knowing self-manipulation, analogous to that involved in the deceit of others.

I say that this view of self-deception as mere self-serving bias is controversial, since the self-deception displayed in moral cases frequently looks to be reactive. That is, it seems to involve a tactical tuning of response to any threat to the picture that we have of our moral self. Such a reactivity requires there to be some recognition of the threat. This process can still be described as involving bias, but this is not a purely prophylactic bias, one put in place pre-emptively to ensure that the gaze will be averted. Rather it is a bias that is, in part at least, shaped in response to the threat, so it requires, to some degree at least, that the threat be recognized. In examining the empirical work, much of the focus will be on whether self-deception really does display this reactive dimension, or whether it can be fully explained using only the machinery of pre-existing bias: whether it is reactive, or proactive, as I shall, somewhat stipulatively, put it.

Section 16.2 explores, at some length, the empirical evidence for self-deception in the moral domain; readers might wish to skip ahead once they judge that they have seen enough. Section 16.3 describes existing accounts of self-deception, distinguishing the broadly deflationary accounts from those that involve more. Section 16.4 proposes a new understanding of exactly what the main fault lines are, along the lines of the reactive/proactive distinction. Section 16.5 applies this understanding to the examples that were presented in Section 16.2 .

16.2 Self-deception in the moral domain

We start with the work of two economists, Bénabou and Tirole (2011). Their central idea is that moral behaviour is largely driven by self-signalling: we act morally to convince ourselves that we are moral, since our actions provide our primary source of information about what we are really like. Self-signalling—that is, behaviour that is motivated at least partly by a quest to form beliefs about oneself—is not particularly exotic, nor does it require self-deception: for instance, we routinely try things out to see if we like them ( Bodner and Prelec 2002 ; Holton 2016 ). And even in cases in which the behaviour is performed solely in order to show that one can do it, there may be nothing problematic. If I stand up straight in order to show to myself that I have good posture, that is one way of getting myself to have good posture.

But in the case of moral behaviour, things are less straightforward. For a start, unlike in the case of posture, motivation matters. We do not normally think of ourselves as acting morally in order to form beliefs about our own moral rectitude; indeed, it may be that such a motivation would be inconsistent with truly moral behaviour. Morality requires doing things for the right reasons, and trying to show oneself that one is good is plausibly not amongst them. 3 So if this is my motivation in acting well, it had better not be clear to me that it is. I will need, at the very least, to be self-ignorant. More substantially, if I am acting well simply to convince myself that I am good, then any time that I can achieve that conviction without paying the costs of acting well—by avoiding challenging circumstances, or by telling a story that will put my actions in a better light—I am likely to take the less costly course. Here it seems that I will need to move beyond self-ignorance to self-deception, for we might think that a more active policy would be needed to keep me ignorant throughout such manoeuvres. Whether this requires reactive or merely proactive self-deception is a question to which we shall return in due course; for now, let us look at the alleged phenomena.

Bénabou and Tirole want to accommodate three kinds of findings; I group them under the useful headings they provide, fitting in other research along the way. In each case their argument is that the findings are best explained if we understand the agent as involved in self-signalling.

Unstable altruism: Rather than being robust across different circumstances, moral behaviour diverges in the face of apparently morally insignificant differences.

This is a large and diverse class; readers should feel free to skip to the next section when they have had enough. Bénabou and Tirole cite findings that subjects are less likely to cheat if they are paid in cash rather than with tokens, or if they have read the Ten Commandments or a university honour code before acting; they are more likely to steal a can of Coke from a fridge than a dollar bill, and so on ( Mazar, Amir, and Ariely 2008 ). Such behaviour might be explained as self-signalling: in these contexts the consequences for one’s self-conception might be more salient, and less amenable to excuse. However, the findings might equally be explained by subjects wanting to be good, and not simply wanting to believe that they are: they may need reminding that this is what they want, or what it is that good behaviour requires. Bénabou and Tirole also cite the findings on moral credentialling, where earlier bad behaviour gives rise to a subsequent tendency to compensatory better behaviour (e.g. Carlsmith and Gross 1969 ), and, conversely, earlier good behaviour licenses worse behaviour later ( Monin and Miller 2001 ; Mazar and Zhong 2010 ; Zhong et al. 2010 ). Again this is compatible with self-signalling, but it is also compatible with simply thinking that subjects want to be good enough. It is also complicated by converse findings from the cognitive dissonance literature that performing small good acts will subsequently make subjects more likely to perform larger good acts—the so-called ‘foot in the door’ effect ( DeJong 1979 ; see Cooper 2007 for the materials to fit this into the current complexities of cognitive dissonance theory). Bénabou and Tirole aim to explain this discrepancy by saying that in these latter cases it is a weaker aspect of identity that is challenged; but they give no independent reason for thinking that, and the traditional cognitive dissonance explanation (once I have started to conceive of myself in a certain way I will tend to act in accordance with that conception) has strong support (though we have no account of quite how this is supposed to interact with moral credentialling).

Still under the general heading of ‘unstable altruism’, there is more persuasive support for self-signalling from findings that subjects will seek to avoid information that could put them in a bad light, or will act in worse ways if they can seem to offload some of the responsibility onto others. For instance, subjects in a ‘dictator game’ who can choose to allocate a sum of money equally between themselves and another ($5:$5), or to increase their share marginally at great cost to the other ($6:$1), will normally choose the more equal option. But now consider a second game in which the share going to the subject is openly stated, but in which the share going to the second person is hidden, although it can be costlessly revealed by the subject. You would expect a subject who was genuinely concerned with behaving well to reveal that information before choosing how to act; but around half chose not to, opting for the greater benefit to themselves, while preserving their ignorance of the consequence for the second person ( Dana, Weber, and Kuang 2007 ; see also Lazear, Malmendier, and Weber 2012 ). So it seems that sometimes people will make sure they avoid knowing things so that they can persist in activities with a clean conscience.

Such motivations can easily be overstated; in a further condition only around 25 per cent of subjects showed what looked to be morally self-deceptive behaviour ( Dana, Weber, and Kuang 2007 : table 4, ‘plausible deniability’); and in a different experiment, it was found that subjects were primarily concerned to deceive others, not themselves ( Dana, Cain, and Dawes 2006 ). So there are almost certainly varied motives here, and probably mixed motives within any one individual. Nevertheless, some people, in some circumstances, seem to be primarily motivated by self-signalling.

Other studies lend broad support to this picture. A relatively early US study ( Gaertner 1973 ) looked for different levels of racial bias between Liberal and Conservative Party members in New York. Experimenters with identifiably White or Black accents telephoned subjects, pretending to have broken down on a freeway, and to have dialled the wrong number while trying to contact a garage. Explaining that they had used up their last coin, they then asked for assistance in getting through to the garage. Gaertner found that Liberals were more likely than Conservatives to help Black callers once the request had been made; but that they were more likely than Conservatives to hang up on Black callers before this point. Discussing the experiment, Miller and Monin (2016) suggest that Liberal subjects were more likely to identify the evolving situation as a potential test of their moral self-image and, foreseeing the required behaviour as costly, to withdraw from it; Conservative subjects, less concerned that maintaining their self-image would require them to help, were less likely to hang up. Other interpretations of the result are possible, but it does seem plausible that, in some subjects at least, moral self-signalling was playing a role here. 4

Consider next studies on how much people are prepared to pay for things when they know that a proportion of what they pay goes to charity. In one study ( Jung et al. 2017 ), reusable bags were offered to shoppers outside a supermarket. Shoppers could choose how much they paid for the bag, but they could not choose what proportion of their payment went to charity—in different conditions this would be 0, 1, 50, 99, or 100 per cent. The move from 0 to 1 per cent more than doubled the average amount paid, but further increases had very little effect. It seems that what mattered most in determining what people were prepared to pay was whether there was something going to charity; how much mattered far less. If they were primarily concerned with the benefit to the charity, that is odd. It makes more sense on a signalling model if the value of the signal is relatively coarse: that is, if the benefit to self-image is much the same however much the charity receives.

Suggestive findings also come from the much-discussed phenomenon of ‘crowding out’, although here the issues are complex. The central idea is that adding a financial incentive for some behaviour can crowd out a prior moral motivation for it. The classic case for this was made by Titmuss (1970), who argued that having a system where payments were given for human blood, as in the US, would yield poorer-quality blood than a purely voluntary system, as in the UK. Titmuss canvassed various arguments for this (e.g. that payment would encourage those with diseases to conceal them), but central was the idea that a moral motivation to donate would be crowded out once payment was provided. This might seem surprising: you might think that if it is a good thing to give blood when you are not paid, it is still good to give it when you are. Here self-signalling might provide the explanation: if the aim is to show that you are morally motivated, then payment greatly obscures it.

Titmuss’s claims about blood provision in response to payment have been contested (his evidence was very thin), and they are still not fully clear, but a recent meta-analysis suggests that, at the very least, adding a financial incentive does not increase provision, which is itself contrary to standard economic models ( Niza, Tung, and Marteau 2013 ). Still, other explanations need to be excluded before we conclude that it provides evidence for self-signalling. One is that blood donation might provide signalling to others. Another, more radical, is that offering a financial inducement does not just change the information about motivation, but changes the agent’s perception of the act itself: once you are paid for your blood, the act of giving it is no longer seen as a moral act. If that were the case, then there need not be any self-signalling involved: agents could be simply motivated to do the right thing, independently of any signal given. Various other findings do point in this direction. For instance, a famous Israeli childcare study found that adding a fine to discourage the late collection of children actually had the reverse effect: the explanation given was that parents came to see the fine as a fee that could be blamelessly paid, rather than understanding lateness as a moral issue. (See Gneezy and Rustichini 2000 ; the framework comes from Fiske 1992 ; for experimental support, see Heyman and Ariely 2004 .) Strikingly, removal of the fines did not return the number of late collections to the earlier level, a finding consistent with the ‘intrinsic/extrinsic motivation’ research ( Deci and Ryan 2000 ), which finds that once someone moves to a framework of extrinsic motivations (in this case, financial) it is hard to get back to intrinsic ones (in this case, moral). This is hard to explain if there is only self-signalling going on: once the financial reward is removed, it should be clear that the motivation cannot be driven by it. Nevertheless, the findings are perfectly compatible with a mixed account, one that combines self-signalling with moral categorization: it is only once an agent perceives an act as moral that performing it provides signalling information. Clearly more work is needed here to distinguish the various possibilities; let us move on to the second and third of Bénabou and Tirole’s categories.

Social and antisocial punishments: Agents will punish others for not being moral enough, but equally they will punish them for being too moral.

A fairly extensive experimental literature indicates that subjects in trust games and the like are prepared to punish those who have behaved badly, even at cost to themselves ( Fehr and Fischbacher 2003 ). But this enforcement of morality only goes so far. Consider the familiar case of people who are vegetarian for moral reasons: rather than admiring them as moral exemplars, non-vegetarians often treat them with a mixture of scorn and resentment ( Minson and Monin 2012 ). One could imagine various explanations for this. The non-vegetarians might genuinely disagree with the vegetarian moral position; or if they have some secret sympathy with it, they might be concerned that the vegetarians are raising the moral bar too high. But studies on this and similar cases suggest that a powerful factor here is self-signalling. It is hard to maintain a view of oneself as morally good if it is clear that there are people who are morally better around; an easier course than changing one’s own behaviour is to deny the moral standing of the would-be exemplars. It is easier to scorn vegetarianism than to give up meat oneself.

So, for instance, consider a case in which subjects were given a task to do that was itself morally worrisome ( Monin et al. 2008 ). They were asked to imagine themselves as detectives investigating a burglary, with the job of identifying the most likely culprit among three suspects. The descriptions were designed so that far and away the most plausible culprit was the sole African American. Almost all subjects dutifully followed the instructions and identified the African American as the culprit. They were then shown a response purportedly from another subject (a ‘rebel’) who, rather than identifying the African American, had written on the form ‘I refuse to make a choice here—this task is obviously biased. [ … ] Offensive to make black man the obvious suspect. I refuse to play this game.’ A second group did things the other way round: first they were given the rebel response to look at, and then they were asked to make the assessment themselves. Subjects in the first group, those who had themselves acted before they saw the rebel response, did not judge the rebel as morally better; when asked for comments they described them as ‘self-righteous’, ‘defensive’, and the like. In contrast, those who acted after seeing the rebel response typically judged the rebel as morally better, describing them as ‘strong-minded’, ‘independent’, or suchlike.

The experiment nicely rules out the obvious alternative explanations. If subjects see the rebel before they themselves act, they tend to approve of the rebel’s behaviour: it is not typically judged as morally misguided, nor as raising the moral bar too high. It is only after they have already acted, and so implicitly committed themselves, that it tends to be denigrated. It is hard to see what could drive this if not a desire to maintain their own relative standing. Of course this might be signalling to others as much as to themselves: they want to demote the actions of the rebel in the eyes of the experimenters. But it seems unlikely just to be signalling to experimenters: if subjects genuinely thought that the rebel was morally justified, they would surely think there was a good chance that others would think likewise. If so, a negative public assessment of the rebel would backfire: it would reflect badly on them. It is much more plausible that self-signalling and signalling to others go hand in hand here.

Taboo thoughts and trade-offs: There are certain thoughts that we judge it would be wrong even to entertain.

A final set of findings that Bénabou and Tirole invoke concerns the unthinkable. A number of psychological studies have examined ‘protected values’, the violation of which people are reluctant even to contemplate: the price at which one would sell one’s children, for instance ( Tetlock et al. 2000 ; Schoemaker and Tetlock 2012). There may be good reason to put limits on thinkability; it may well be, as some philosophers have urged, that not being prepared to think about something is a good first defence against doing it ( Williams 1973 : 93–4; 1992). But reluctance here is certainly not understood in pragmatic terms. Rather, people who have been incited to transgress against thought taboos tend to see themselves as having been corrupted, and to seek ‘moral cleansing behaviour’, such as performing other good tasks, in response.

It is possible still to see this as driven by a concern to be good: if the prohibition can be costlessly violated, it is not going to work very well. But there is also plausibility in seeing this as (at least partly) self-signalling behaviour: ‘Good people would not normally have such thoughts; since I have had them, I had better do something to prove that they were anomalous.’

So, taking these three sets of considerations together, there is good evidence that people are frequently in the business of moral self-signalling. Perhaps there is more, but this is good enough to be going on with. 5 Note that this falls far short of Bénabou and Tirole’s contention that this is the primary source of moral motivation; there is also plenty of reason to doubt that. But it looks to be an important part of it. If such behaviour is to be effective, subjects had better not realize that this is what they are doing, since a signal is hardly effective once it is known that it has been manipulated. At the very least, then, subjects will need to be self-ignorant: they will need to fail to realize that they are engaged in self-signalling. But the processes that we have sketched certainly have an air of self-deception. Our next task is to understand what this would involve.

16.3 Accounts of self-deception

A natural place to start on understanding self-deception is to model it on the deception of others. There is a predictably complex literature on the exact requirements for deception, but a reasonable starting point is that I deceive you if and only if I intentionally get you to believe something I know to be false. Such an account applied to self-deception brings us to the corresponding idea that people are self-deceived if and only if they intentionally get themselves to believe things they know to be false. Yet that has been widely held to be problematic with respect both to process and to outcome (see Mele 1997 , where these are termed the dynamic and the static paradoxes respectively).

Taking outcome first. If I come to believe something I know to be false, then presumably I both believe it and disbelieve it, which, if not impossible, seems to involve a very radical failure indeed. That might be avoided by thinking that self-deception involves a shift in belief, so that what I once believed to be false I now, by my own hand, believe to be true. But that concentrates the problem at the level of process. For how can I at once be manipulative enough to engineer my own deception and credulous enough to fall for it? It is not simply that I will need to change my mind on the subject matter of the deception itself; if the deception is to be successful, I will have to arrive in a state of belief without realizing how I put myself there.

In response, deflationary theorists want to understand self-deception along other lines, dropping the parallel with the deception of others. There are independent reasons for worrying about that parallel. The deception of others is often achieved by speech, by straightforward verbal lying, yet presumably no one achieves self-deception in that way. So self-deception is going to have to involve more subtle expedients, such as the selection of evidence and the construction of rationalizing hypotheses. Once we focus on them, it becomes more plausible that self-deception can be achieved without believing contradictions, and without intentionally engineering one’s own deception. In a number of very influential pieces, Al Mele has argued that self-deception needs no more than the acquisition of a false belief as the result of the operation of a pre-existing motivated bias. More specifically, he wants to explain central cases of self-deception using what he calls the ‘Friedrich–Trope–Liberman (FTL) model’, according to which agents require greater evidence to believe a proposition that they find aversive than they would to believe one they find sympathetic ( Mele 2001 : 31ff.).
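The shape of the model can be sketched in a few lines of Python. The thresholds below are illustrative assumptions of mine, not values drawn from Mele or from the FTL literature; the point is only that a single pre-set asymmetry in evidential thresholds yields motivated belief without the agent ever needing to register the unwelcome truth.

```python
# A minimal sketch of the FTL idea, with hypothetical thresholds.
# Belief formation demands stronger evidence for aversive propositions
# than for sympathetic ones; no registration of the truth is required.

SYMPATHETIC_THRESHOLD = 0.4  # assumed value, for illustration only
AVERSIVE_THRESHOLD = 0.8     # assumed value, for illustration only

def forms_belief(evidence_strength: float, aversive: bool) -> bool:
    """Whether the agent comes to believe a proposition supported by
    evidence of the given strength (0.0 = none, 1.0 = overwhelming)."""
    threshold = AVERSIVE_THRESHOLD if aversive else SYMPATHETIC_THRESHOLD
    return evidence_strength >= threshold

# The same middling evidence produces belief when the proposition is
# welcome, and suspension of belief when it is not:
print(forms_belief(0.6, aversive=False))  # True
print(forms_belief(0.6, aversive=True))   # False
```

Note that the asymmetry here is wholly proactive: it is fixed in advance, and nothing in it requires the agent first to identify which propositions are true.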

To see how this might work, we’ll consider two experiments, one, by Quattrone and Tversky (1984), that has received a fair bit of philosophical discussion, the other, by Mijovic-Prelec and Prelec (2010), that has received rather less. In the Quattrone and Tversky experiment, subjects were told that they were involved in a study on the effects of cold showers after exercise. They were first asked to hold their forearms in a vat of iced water until they were not prepared to tolerate it any longer. Then their pulse was taken and they were asked to exercise on a stationary bicycle, after which they were asked to repeat the iced-water test, again until they could tolerate it no longer. In each case subjects were made aware of how long they had kept their arms in the water. Crucially, though, in the period between the two iced-water tests, they were given a mini-lecture on psychophysics, during which they were told (falsely!) that people fall into two broad groups, those with Type 1 hearts, with shorter life expectancies, and those with Type 2 hearts, with longer. The distinction was allegedly revealed by the degree of tolerance shown for cold water after exercise. Half the subjects were told that increased tolerance was a sign of a Type 1 heart, and hence of shorter life expectancy; whereas the others were told that it was a sign of a Type 2 heart, and hence of longer.

Quattrone and Tversky found that most subjects (around 70 per cent) changed their tolerance in the second test relative to the first, in a way that gave them good news. That is, those who believed that increased tolerance was a sign of longer life expectancy showed increased tolerance; whereas those who thought increased tolerance was a sign of shorter life expectancy showed decreased tolerance. Central to our concerns, when asked whether they had tried to shift their tolerance, the majority said that they had not. Those who denied that they had tried to shift were much more likely to conclude that they had the healthy Type 2 hearts than those who admitted that they had.

Does this show self-deception? Quattrone and Tversky were quite cautious in the conclusions they drew. They followed Gur and Sackeim (1979) in defining the self-deceived agent as someone with contradictory beliefs who engages in the motivated act of bringing the more favourable of these to their attention. They concluded that in this sense ‘[a] certain degree of self-deception was probably involved’ (p. 247), though ‘[t]o be sure, self-deception and denial are not all-or-none matters. Even subjects who indicated no attempt to shift may have harbored a lingering doubt to the contrary’ (p. 243). We can summarize their conclusion as being that (i) most of their subjects were probably modifying their tolerance ‘purposefully’ to obtain a better diagnosis; that (ii) most to some degree both believed they were doing this and believed they were not; and that (iii) most were more aware of the second of these beliefs than of the first.

In some ways this experiment looks like a good parallel to the kinds of self-deceptive behaviour shown in the moral case: subjects seem to be doing something to provide themselves with good news. Nevertheless, and even though it has been the focus of much discussion—Mele devotes several pages to explaining how the FTL model can explain it ( Mele 2001 : 85–91; see also Mele 2019 )—there is something unsatisfactory about it. Most cases of self-deception involve a shift in belief, or at least a shift from what the subject would have believed without the deception, to what they believe with it. But in this case we have a shift in desire: in the second trial, the subjects want to take their arms out of the iced water earlier, or later, than in the first trial, depending on the information they have. The only problematic belief in question is the belief about whether they have shifted their tolerance. Clearly here in many cases they are mistaken—they believe they have not shifted their tolerance when they have. But Quattrone and Tversky do not give us any reasons for thinking that they also believe that they have shifted their tolerance. This looks less like self-deception and more like straightforward self-ignorance.

If we are to provide a proper test for the FTL model, then, we need a case in which we really have good evidence to think that there is something more than self-ignorance going on. So let’s move to the second experiment, by Mijovic-Prelec and Prelec, which does involve straightforward modification of beliefs to achieve self-signalling in what looks like a self-deceiving way. The experiment involved asking subjects, who knew no Korean, to classify 100 Korean characters as either ‘male-like’ or ‘female-like’. More specifically, the subjects were asked to classify the characters on the basis of how they thought others would classify them given similar instructions. (The task is thus what Keynes called a ‘beauty contest’: the right answer is that which matches the majority opinion. 6 )

In the first round, subjects were told that they would be rewarded with two cents for every classification that they got right. This was designed to give a baseline in which people were simply trying to do as well as they could. The second round was designed to provide a situation in which they might display self-deception. The central idea was to provide a more complex reward structure, but not to provide information about how well subjects were doing. Self-deception would be shown if subjects acted in ways that would in fact be irrational, but that they could easily take to provide evidence that they were doing well.

The details of the second round were as follows: before characters were shown, subjects were asked to predict whether they would be male or female. Since no information was given, this would be a pure guess. Then the character was shown, and subjects were asked to determine whether it was male or female, as in the first round. And as in the first round, subjects were rewarded with two cents every time they got it right, though in this round this meant two cents for a correct guess, and two cents for a correct identification. In addition they were told that in this round there was a substantial bonus prize of $40 that would be awarded to the subjects who did best. For this they were divided into two groups. One group was told that the bonus would go to the three people who made the best guesses prior to the characters being displayed; call this the ‘guess-bonus’ group. The other was told that it would go to the three who made the best assessments once they had seen them; call this the ‘assessment-bonus’ group.

Obviously the best strategy to maximize financial return would be to guess randomly (or to always predict one gender if one thought that was more highly represented), and then to make the most accurate assessment that one could when actually presented with the character. But recall that the subjects were getting no feedback on how well they were doing. They could, however, provide themselves with some apparent good news about the accuracy of their guesses if they skewed their assessments so that they tended to line up with them: if you have guessed that a character will be male, be more prepared to assess it as male when you get to see it. That of course will probably cost you money, since your assessments will be less accurate than they could have been, and accuracy will bring you more overall reward. But it will provide you with some (short-term) good news, news that you are doing well. The value of that good news will differ depending on which group you were in. If you were in the assessment-bonus group, where the bonus was offered for the greatest accuracy of assessment, then it would merely indicate that you would pick up more two-cent rewards for lucky guesses, something that would not amount to very much—even if you got them all right, you would only win $2. But if you were in the guess-bonus group, where the bonus goes to the best guessers, the good news would be much more significant: it would show that you were more likely to win $40. So if the self-deception were motivated by the value of the good news, you would expect to see more of it in the guess-bonus group than in the assessment-bonus group.
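To make the incentives concrete, here is a toy expected-payoff comparison in Python. Only the 100 trials and the two-cent rewards come from the setup described above; the accuracy figures are hypothetical placeholders of mine, chosen solely to display the direction of the trade-off.

```python
# Toy expected-payoff comparison for the second round. All accuracy
# figures are assumed for illustration; only the 100 trials and the
# two-cent rewards come from the experimental setup.

N_TRIALS = 100
CENTS_PER_CORRECT = 2
GUESS_ACCURACY = 0.5      # guesses are made with no information
HONEST_ACCURACY = 0.7     # assumed accuracy of unbiased assessments
SKEWED_ACCURACY = 0.55    # assumed accuracy once skewed toward guesses

def expected_payoff_cents(assessment_accuracy: float) -> float:
    """Expected cents from guesses plus assessments over all trials."""
    from_guesses = N_TRIALS * GUESS_ACCURACY * CENTS_PER_CORRECT
    from_assessments = N_TRIALS * assessment_accuracy * CENTS_PER_CORRECT
    return from_guesses + from_assessments

print(expected_payoff_cents(HONEST_ACCURACY))  # 240.0: honest strategy
print(expected_payoff_cents(SKEWED_ACCURACY))  # 210.0: skewing loses money
```

On these assumed numbers, skewing sacrifices thirty cents and buys nothing except the apparent good news that one’s guesses are accurate; that news matters little in the assessment-bonus group but a lot in the guess-bonus group, which is where the self-deceptive pattern should show up.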

That is exactly what Mijovic-Prelec and Prelec found. There are three ways in which a subject’s assessments in the second round might diverge from their first-round baseline assessments. They might diverge so that they systematically stand in line with the guesses; this would be providing good news about the guesses. They might diverge so that they systematically stand out of line with the guesses; this would be providing bad news about them. Or they might diverge equally in both directions; this would be providing no news either way. Mijovic-Prelec and Prelec found no subjects who gave themselves bad news; subjects were split between those who gave themselves good news, and those who gave themselves no news either way. Strikingly, the proportion giving themselves good news was much larger in the guess-bonus group—where the good news was more significant—than in the assessment-bonus group. 7

How should we understand this case? We start with the self-deceived subjects’ first-order judgments about the gender of the characters. Here the judgments clearly changed as a result of the changing reward structure and, presumably, the desire to get good news about the chance of winning the bonus. But there is no evidence that the judgments are reactive, rather than merely proactive, in the sense discussed in Section 16.1 . Recall that distinction: self-deception about some subject matter will be reactive if the subject needs to register the truth about that subject matter in order to deploy their strategy of self-deception. It will be proactive if they can put in place a self-deception strategy that avoids the need to recognize the truth. The first-order judgments here look to be explicable as proactive, very much along the lines of the FTL model. Once there was reason to want the character to be, say, female, then the evidence that it was female was given greater weight than the evidence that it was male in all the subsequent perceptions. Subjects didn’t need to identify whether the characters were really male or female in each case; a pre-existing general-purpose FTL strategy would do the job.

What of their judgments at the second-order level? Presumably if they had known that they were skewing their assessments in this way, those assessments would have failed to deliver any good news. But there is no reason to think that they did know; as with the Quattrone and Tversky experiment, this looks like simple self-ignorance. And the FTL approach looks to be able to explain other features of the case too. No subject altered all of their assessments to give themselves good news. Of the 80 subjects, only two showed a self-deceptive pattern in over 40 per cent of trials; most of those who were self-deceiving kept it at a level of between 20 and 40 per cent of trials, a level where the pattern would not have been so obvious. This too looks to be explicable: some subjects are simply more prone to the FTL effect than others. It reduces the tendency to believe what is unpalatable, but doesn’t remove it altogether, so that even the strongly self-deceived are left with a broadly credible picture, especially where things are vague enough to allow for flexibility in interpretation ( Sloman, Fernbach, and Hagmayer 2010 ). 8

So we have a plausible illustration of the FTL approach explaining a case. Our question now is whether all the cases of moral self-deception can be explained in these terms, or something like them. In the moral cases it certainly can seem as though there is some reactive manipulation going on—manipulation in response to unwelcome knowledge, something that goes beyond anything countenanced by the FTL approach. Gur and Sackeim tried to capture this idea with the claim that self-deceived agents have simultaneous contradictory beliefs, and then engage in the motivated act of bringing the more favourable of these to attention. But that is to make a particularly strong claim. Perhaps there are features somewhat less stark than those, but which nonetheless cannot be explained by the FTL approach—features that bring back the idea of a reactive process. There is, after all, a great deal of space between the idea of pre-existing bias and that of the intentional inducing of a contradictory belief; self-deception might sit somewhere in this space. Let’s explore quite what such a space would look like.

To start, we need to be clearer on what is really at issue between the proactive deflationary approach that Mele and others have championed, and the approach that sees self-deception as reactive. It will be helpful to step back from the details of the debate around the Gur and Sackeim proposal and the FTL approach, to see things a little more broadly.

16.4 Beyond contradiction

Let’s suppose that there is some property—the bad property—that I do not want to know is ever instantiated. It may be instantiated; it may not be; I simply do not want to know. Here are two naive strategies I might take:

Blanket strategy: I close my eyes to everything. I take in no new information whatsoever.

Fine-tuned strategy: I keep a careful watch on the world. Whenever I see that the bad property is instantiated, I turn my eyes away and vehemently deny that it is.

Clearly both of these strategies are problematic. The blanket strategy will do the job; since I take in no information whatsoever, a fortiori I take in no information that the bad property is instantiated. But for most people in most situations it is clearly far too strong: in keeping myself ignorant of the bad property, I keep myself ignorant of everything that I need to know. In particular, when the bad property is not instantiated, I won’t have the good news that it isn’t.

In contrast the fine-tuned strategy is perfectly discriminating. I only close my eyes to cases where the bad property is instantiated, and maintain my knowledge of everything else. Its problem is the opposite. Knowledge cannot be so easily lost. Once I have seen the bad property is instantiated, no amount of avoidance and denial will undo my knowledge. If my denials are vigorous enough, I might come to believe them; but that will take me to contradiction rather than ignorance.

In response to these problems, either strategy might be refined. The blanket strategy might be made somewhat less blanket: I might refuse to look in certain pre-ordained places, or refuse to give any credibility to certain sources of evidence. Or, when I do get evidence, I might weight it differently using certain pre-assigned criteria. The fine-tuned strategy might involve less than full recognition of the bad property before I turn away: I might register it only unconsciously, or I might turn away when my assessment of its likelihood is high enough. Nonetheless, the distinction between the two approaches is reasonably clear: in the first, I put in place a strategy that works without my needing to register the bad property in any way; in the second, I register the bad property in some way, and then react on the basis of that.
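The contrast between the two refined strategies can be put schematically; the framing below is my own illustration, not anything from the chapter. The proactive strategy filters evidence by its source and so never inspects content; the reactive strategy must inspect content, registering the bad property before suppressing it.

```python
# A schematic contrast between refined proactive and reactive strategies.
# Evidence items are (source, indicates_bad) pairs; `indicates_bad` marks
# evidence that the bad property is instantiated.

from typing import List, Set, Tuple

Evidence = Tuple[str, bool]  # (source, indicates_bad)

def proactive(stream: List[Evidence], distrusted: Set[str]) -> List[Evidence]:
    """Refined blanket strategy: discard evidence from pre-ordained
    sources, without ever checking what the evidence says."""
    return [item for item in stream if item[0] not in distrusted]

def reactive(stream: List[Evidence]) -> List[Evidence]:
    """Refined fine-tuned strategy: inspect each item and suppress exactly
    those that register the bad property; registration comes first."""
    return [item for item in stream if not item[1]]

stream = [("friend", False), ("rival", True), ("friend", True)]
print(proactive(stream, distrusted={"rival"}))  # bad news from a trusted source slips through
print(reactive(stream))                         # every bad item suppressed, but only by registering it
```

The proactive filter is imperfectly discriminating, since unwelcome evidence from a trusted source gets through; the reactive filter is perfectly discriminating, but only because it registers the threat, which is precisely the feature at issue.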

I suggest that this distinction is what is centrally at stake in the debate between the deflationary approach and that which sees self-deception as involving a more responsive self-manipulation. It is what I have tried to articulate in the introduction with the distinction between proactive and reactive strategies. Defenders of the deflationary approach see all self-deception as involving descendants of the blanket strategy: the proactive strategies. In contrast, those who think that the deflationary strategy cannot explain all cases of self-deception think that this is because some involve descendants of the fine-tuned strategy: the reactive strategies. Their central idea is that it is the belief that something is the case, or at least the suspicion that it might be, that brings on the self-deception. It is exactly because I start to believe that things are bad—I form a certain triggering belief—that I come to self-deceptively believe that they are fine. I react to defend myself, but in order to do this I need to identify the threat, and identify the kind of response that would work. The simplest approach is to understand this in terms of contradictory beliefs—I continue to believe both the triggering belief and a self-deceptive belief that contradicts it. But there are other, less extreme ways of deploying a reactive strategy.

A first refinement is this: as Quattrone and Tversky point out, belief can be more or less certain. There is no contradiction in thinking that p is possible, and that not-p is also possible. But we do not escape something like contradiction just in virtue of having partial beliefs. If I think that p is very likely, and that not-p is also very likely, or that p is certain and that not-p is possible, then I may not be strictly contradicting myself, but I will be guilty of the probabilistic analogue: I will have violated the requirement of the probability calculus that the probability of p and the probability of not-p must sum to one. The self-deceived person will have something analogous to contradictory beliefs if they categorically maintain the belief that p, while thinking that not-p is a real possibility, or accept some other inadmissible combination. 9
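Put in symbols (a standard statement of the coherence requirement, included here only by way of illustration, with numbers invented for the example), the constraint and a violating assignment of the kind just described look like this:

\[
\Pr(p) + \Pr(\neg p) = 1, \qquad \text{whereas, e.g.,} \quad \Pr(p) = 0.9, \ \Pr(\neg p) = 0.4, \ \text{so} \ \Pr(p) + \Pr(\neg p) = 1.3 > 1.
\]

The particular numbers do not matter; any assignment on which the two credences sum to more than one is incoherent in the way just described, the probabilistic analogue of contradiction.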

A second refinement: deception does not fundamentally concern individual propositions, but subject matters. This shows up in the grammar—we do not say that A was deceived that p, but rather that A was deceived about some subject or topic—but the issue goes deeper than that. If I tell you that my friend has gone overseas, when really he is hiding upstairs, I deceive you about where my friend is, but I also deceive you about a host of other things: about what I believe, about how many people there are in my house, about whether you will be able to vent your rage on my friend here and now, and so on.

Some of these further things will be strictly entailed by what I say, but they do not all need to be. If you ask whether a company is solvent and I, knowing that the receivers have just been called in, tell you quite truthfully that it has the highest possible credit rating, then I have certainly acted to deceive you. What I have said—that it has the highest possible credit rating—is consistent with the claim that it is not solvent; indeed, in this case both are true, at least for now. But they are in tension, in the sense that believing the former would, in the absence of further information, naturally lead you to reject the latter. So deception can extend to other items in the relevant subject matter even when they are not entailed by what I say.

There is a parallel phenomenon in the case of self-deception. If I know that my son has been killed, but I convince myself that he is still alive, then I have contradictory beliefs. But if someone whom I would normally trust tells me that he has been killed, and I become all the more sure that he is still alive, my two beliefs—the triggering belief that A said he is dead, and my self-deceptive belief that he is alive—are not contradictory. Again, though, they are in tension: were it not for my self-deception, the triggering belief would have led me to the opposite conclusion.

Issues here are delicate, for we need to distinguish this from a deflationary, proactive approach. If I decide in advance not to believe in any talk about the health of my son, that is a proactive strategy, explained by the deflationary approach. If I hear talk that he is dead, and my self-deception is a response to my realization that the talk is credible, then it is not. The crucial difference is whether or not I have a pre-existing strategy for blocking certain types of inference. If I do, that is compatible with a proactive strategy; if I have to tune which inferences I make in the light of my evidence that the bad property may be instantiated, that is, in contrast, reactive.

A third issue concerns the timing of the different beliefs that one might have. To believe a contradiction is to believe two contradictory things at once. Even in the second-person case, deception does not require the simultaneous holding of contradictory beliefs by the deceiver and the deceived: the deceiver might in the meantime have forgotten what they once believed; indeed, their deception may be all the more effective if they succeed in deceiving themselves alongside their victim (von Hippel and Trivers 2011). All that matters in general is the causal influence of the deceiver's belief on that of the deceived; the contradiction may be temporally dissociated. But particular cases may require more. If I am planning an elaborate deception, one that requires constant manoeuvring in response to changing circumstances, I may well need to keep track of how things actually are as the deception unfolds. Suppose I decide, Iago-like, to convince you that your devoted lover is unfaithful. Getting your lover to protest their innocence, when I have contrived to stack the odds against them, is part of my plan, since it will make them appear all the more duplicitous; but my confidence that they will protest is grounded in my knowledge that they are innocent. If for some reason I come to believe that they will not protest, I will need to change my plan. Here, then, the successful execution of the deception requires an ongoing awareness of relevant facts about the subject matter about which I am deceiving you, ongoing in that it continues while the deception is operative.

The facts about timing are similar in the case of self-deception if we understand it as reactive. Again there is no general need for the agent to simultaneously hold contradictory beliefs. What is needed, on the reactive model, is the causal influence of the triggering belief on the ensuing, conflicting, self-deceptive belief; we can think of that as giving rise to something approaching an extended contradiction, even if there is never a simultaneous one.

Is there an analogue, in the case of self-deception, to the need for an ongoing awareness of how things actually stand on the part of the deceiver? It is easy to sketch one (though recall that we are not at this point asking whether such a thing really happens). Suppose that I want to maintain a good impression of myself, and suppose that I do this by filtering the information that comes to me. The flattering information I attend to; the derogatory I ignore. How do I know what is flattering and what is not? It could be that it is marked in some independent way that enables a prior filter: information that is flattering is likely to come just from these sources, so to them I attend. But I may be living in an environment with no such useful indicators. Then I will need to attend to each piece of information closely enough to see whether it is flattering or not. I will need, in an ongoing way, to know the truth in order to self-deceive.

To summarize then: while avoiding straight-out contradiction, agents engaged in reactive self-deception might have beliefs (or partial beliefs) that are in probabilistic tension; beliefs that are in tension within a subject matter; and beliefs that are in tension over time, either in a one-off or an ongoing way. And these three of course can combine. Rather than spelling out all of the possibilities, I will speak broadly of a triggering state that is, by the agent’s own lights, in tension with the self-deceptive belief, adding details as need be. Call this a tension-trigger. If Mele is right that states of self-deception result from bias, he will still think that they are triggered. But if his account is to avoid these weaker forms of contradiction, if he wants to keep them broadly in the proactive camp, he will not want to accept that they are tension-triggered, since he will not want the subject to recognize the triggers to be in tension.

We can now sketch three different possible types of mechanism. The first is the only sort of mechanism that a pure motivated bias account, following the proactive strategy and eschewing any kind of contradictory belief, could countenance:

(i) No tension-trigger: the state of self-deception does not involve any triggering state that is in tension with it.

So, for instance, I might be born with a tendency to discount the critical remarks of others. If this bias is to count as motivated, there will presumably be a beneficial defensive explanation for it, but that doesn't proceed by means of a defensive reaction to the realization that others are thinking badly of me.

At the other extreme, the self-deceived agent might need to keep track, in an ongoing way, of the very facts that are in tension with those that they are deceiving themselves about—the first-person analogue of the Iago strategy described above:

(ii) Running tension-trigger: maintaining the state of self-deception requires the constant monitoring of triggering states that are in tension with it.

In between these two we have a mixed strategy. Here the tension-trigger provides the cue to put a strategy in place, and influences the nature of that strategy; but the strategy itself is a local blanket strategy, not requiring ongoing monitoring of the trigger:

(iii) Up-front tension-trigger: the state of self-deception does involve a triggering state that is in tension with it, but the triggering state need only be entertained before the self-deception takes place, and so does not need to be maintained through it.

So, for instance, a certain source of information might be identified as providing bad news, which results in a blanket decision not to monitor that source. This might result in first-order self-deception: the state whose recognition prompts putting the policy in place might be in tension with the first-order beliefs that the self-deception engenders. It is because I hear you telling me bad news that I resolve to avoid you in the future. But the clash is likely to be more salient at the second-order level. In many situations, putting an effective policy in place will require some careful thought; but if it is to be effective, that thought, and the policy that results, had better not be transparent.

Corresponding to these mechanisms I’ll speak of trigger-free self-deception, running self-deception, and up-front self-deception. Trigger-free self-deception is the kind of proactive self-deception of which Mele talks, where there is no need for the subject to register the facts about which they are self-deceived. Running self-deception requires an ongoing registering of those facts. And again up-front self-deception falls between the two, requiring a registration of the facts initially to put the defence in place, but not thereafter. Clearly, though, if the aim is to deflate, both running self-deception and up-front self-deception will provide a challenge, even if the latter is less dramatic, for to get the proactive self-deceptive strategy in place will require just the kind of reactive self-deception that the deflator wishes to deny.
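The difference between the three mechanisms can be made vivid with a toy model. The sketch below is purely illustrative: the representation of incoming reports, and every name in it, is invented for the purpose, and nothing in the argument turns on this implementation. It simply encodes where the bad property gets registered: the trigger-free agent filters by a preassigned rule and never assesses content; the up-front agent registers bad news once, in an initial phase, to put a blanket policy in place; the running agent must assess every incoming report.

# Toy model of the three mechanisms distinguished above. Illustrative only;
# names and data representation are invented, not drawn from the chapter.

def is_bad(report: str) -> bool:
    """Stand-in for registering, in some way, that the bad property is
    instantiated. Wherever this is called, a tension-trigger occurs."""
    return "bad" in report


def trigger_free(reports, distrusted_sources):
    """(i) No tension-trigger: filtering by a preassigned rule about
    sources; the content of a report is never assessed, so the bad
    property is never registered."""
    return [text for source, text in reports if source not in distrusted_sources]


def up_front(setup_reports, later_reports):
    """(iii) Up-front tension-trigger: bad news is registered once, in an
    initial phase, and used to put a local blanket policy in place;
    thereafter filtering is blind to content."""
    blacklist = {source for source, text in setup_reports if is_bad(text)}
    return [text for source, text in later_reports if source not in blacklist]


def running(reports):
    """(ii) Running tension-trigger: every incoming report must be
    assessed, so the agent continually registers the very facts avoided."""
    return [text for _, text in reports if not is_bad(text)]


if __name__ == "__main__":
    stream = [("A", "flattering news"), ("B", "bad news"), ("B", "more bad news")]
    print(trigger_free(stream, distrusted_sources={"B"}))  # ['flattering news']
    print(up_front(stream[:2], stream[2:]))                # []
    print(running(stream))                                 # ['flattering news']

The point of the sketch is just that is_bad is never called in trigger_free, is called only in the setup phase for up_front, and is called on every report in running; that is precisely the difference that matters to the deflator.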

16.5 What kinds of mechanism are involved in moral self-deception as we actually see it?

The last section was highly theoretical: the aim was to show the different sorts of self-deception that might be possible. The focus in this section is empirical: what grounds do we have for thinking that any of these are actual? I take it that we have plenty of evidence of pre-existing bias; trigger-free self-deception is not in question. What is contentious is whether there are cases where there is a tension-trigger: cases of running self-deception or, less radically, of up-front self-deception.

Given the multiple interpretations available of any real-world example, it is only in controlled studies that we can hope for an answer; and even in such studies, it is hard to be sure that alternative interpretations are not available. Let us start with the more radical case.

16.5.1 Running self-deception

There is evidence for running self-deception, but from cases that are in some way abnormal. I start with a striking one, but with two caveats: the subject was suffering from hemispatial neglect, and there was only one of him. The lead author of the study was again Mijovic-Prelec (Mijovic-Prelec et al. 1994).

Hemispatial visual neglect is a not uncommon effect of strokes and other brain injuries. Patients are apparently unable to see objects in one side (typically the left side) of the visual field. But the visual processing areas of the brain remain undamaged; the problem lies somewhere else. Quite what the problem is remains contentious, and will not be addressed here (see Robertson 2009 for a general introduction). What is important for us is that the neglect is often not complete. In a famous example, a patient shown two pictures of houses whose right sides were identical but whose left sides differed, in that one was on fire and the other wasn't, judged them to be the same, but expressed a preference to live in the one that was not burning (Marshall and Halligan 1988; see Doricchi and Galati 2000 for replication and development; and compare the similar phenomenon in Volpe et al. 1980). So clearly there is some tension between the subject's explicit beliefs and some awareness that is influencing their desires.

The Mijovic-Prelec study provides a clear case of this sort of tension, but instantiating more closely a pattern that looks like self-deception. FC, the subject involved, was showing left-side visual neglect as the result of a stroke a month before. He was told that a dot might, or might not, be displayed on a screen in front of him; he was asked to say whether or not it was. There were three conditions: a dot on the right-hand side; a dot on the left; or no dot. Unsurprisingly given his neglect, FC was able to correctly identify the presence of the dot on the right-hand side; able to correctly identify its absence; but normally unable to identify its presence on the left (he said it was absent). What was surprising was the reaction times. When the dot was present on the right-hand side his response was twice as fast as when it was absent: seeing the dot enabled him to stop a more laborious search. But when the dot was present on the left-hand side, his response—that is, his denial that a dot was present—was as fast as his recognition when the dot was presented on the right. It seems that at some level he saw the dot on the left, which was enough to tell him that it wouldn't be present in the right-hand field that he could consciously see.

Is this self-deception? It doesn't fit a certain paradigm, in that it isn't obviously motivated. 10 But structurally it looks to be: in some way FC registered that the dot was there, and then he sincerely denied that it was. If so, this is clearly a case of running self-deception. There is no systematic bias that would enable FC to do what he was doing. Instead, he had to register, each time, that the dot was there on the left-hand side, in order to conclude so quickly that it was not.

Could it be that FC didn't register the dot, only the fact that he didn't need to go on looking? That is certainly possible; but it is equally possible, and more in line with current thinking, that he saw the dot but failed to attend to it (Bartolomeo 2014), or perhaps, and more controversially, that he 'saw' with one of his visual systems and not with another (Milner and Goodale 2006; for some concerns, Schenk and McIntosh 2010). Clearly FC provides just one case, but it does fit with other results from hemispatial neglect as mentioned above (see also Bisiach, Berti, and Vallar 1985). Stroke or other brain injury can give rise to other conditions that are naturally seen as self-deceptive. Neil Levy makes a plausible case for this in anosognosia—the denial of illness by those suffering from it—more generally, although in many cases the phenomena he reports look more like up-front than running self-deception (Levy 2009).

Even if these cases are widespread, there is an obvious concern that the afflictions facing those with brain injuries are hardly indicative of the capacities of those without. Perhaps that is right; but it would be somewhat surprising if brain injury, which typically and understandably depletes capacity, in this case generated a new one. What looks more likely is that there are mechanisms that normally keep distinct systems in line, and that these are damaged here. In the FC case, as we have seen, it seems plausible that some attentional failure caused him to register the dot without attending to it.

If so, then this raises the possibility that the separate operation of those distinct systems could give rise to self-deception in the more normal cases. Moreover, cases other than brain damage can give rise to similar features. Patients suffering from visual conversion disorder (what was once known as hysterical blindness, and is often now called functional blindness) sincerely claim not to be able to see anything, but their visual systems are undamaged, and their performance on visual discrimination tasks differs from that of organically blind subjects: sometimes worse than chance, sometimes better (Bryant and McConkey 1989; 1999).

Nevertheless, when we look for clear documented examples of running self-deception in otherwise normal subjects, none are obvious. It would be good to have experiments designed expressly to test whether it can occur. But none of the cases of moral self-deception documented here seem to need it. That is not to say, though, that they can all be explained by the FTL approach; for there is reason to think that they require up-front self-deception. To this we now turn.

16.5.2 Up-front self-deception

The FTL model that Mele proposed was based on the idea that self-deceived subjects received evidence that could have given them knowledge about how things really stood, but did not because of their biased belief-forming practices. (Whether that is the best way of characterizing what is happening—whether we can distinguish knowledge and evidence in this way—is controversial; see e.g. Williamson 1997. But presumably some sense can be made of the distinction.) But what about cases in which the subject avoids gaining evidence, presumably because, were they to get it, they would not be able to avoid forming the unwelcome belief? We saw this in the experiment by Dana, Weber and Kuang (2007). Recall that there, subjects in a game were presented with a choice between an option which gave them $5 and a co-player $5, or an option which gave them $6 and the co-player $1. Most took the former option: they sacrificed $1 to significantly improve the other's lot. Other subjects, also told that they could choose between $5 or $6 for themselves, were told that the other player's share, again either $5 or $1, had been attached to one or the other of these options by the toss of a coin (so that e.g. choosing the $6 option might bring the other player $1 or, with an equal chance, $5). The other player's share was hidden, but could be revealed by the press of a button; and yet around half chose not to reveal it, taking the $6 without knowing what the other got. Or recall the Gaertner experiment in which liberal subjects, who were generally more likely to help out a Black caller once his request had been made, were also more likely to hang up on Black callers before any request could be made.
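To fix the structure of the hidden-information condition, here is a schematic rendering of the set-up just described. It is a sketch only: the dollar amounts come from the study as reported above, but the code and all of its names are invented for exposition and are not the authors' materials.

import random

# Schematic payoffs in the Dana, Weber and Kuang (2007) game as described
# above; amounts in dollars. Illustrative sketch, not the authors' code.

# Transparent baseline: most subjects chose the (5, 5) option.
BASELINE = {"fair": (5, 5), "selfish": (6, 1)}

def hidden_condition(reveal, rng=random):
    """A coin toss attaches the other player's $5 or $1 to the dictator's
    $5 and $6 options; pressing a button reveals the mapping at no cost."""
    if rng.random() < 0.5:
        payoffs = {"$5 option": (5, 5), "$6 option": (6, 1)}
    else:
        payoffs = {"$5 option": (5, 1), "$6 option": (6, 5)}
    if reveal:
        return payoffs  # the dictator now knows what the other player gets
    # Unrevealed: the dictator sees only their own payoff for each option.
    return {option: (own, None) for option, (own, _other) in payoffs.items()}

What the sketch makes graphic is that the reveal is costless; around half of the subjects nonetheless declined it, taking the $6 without learning what the other player got.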

In these cases there is an up-front policy. Here it is a simple one, an easy blocking of a certain piece of information. But presumably in many real-world cases the policy will have to be rather more adaptive. Can it be achieved by the FTL approach? There is a problem for that approach, since the subject will have to recognize, at some level, that a certain source of information will bear on the point at issue. In the Dana study, they do not know that revealing what the other gets will show them that taking $6 for themselves is wrong, but they know that it might show them that it is. That, it seems very plausible, is why they choose not to reveal it.

Would a prior blanket strategy enable them to decide which sources to ignore? Mele, in discussing a similar worry, suggests that people might ignore certain sources of information 'because they found exposure to it very unpleasant' (Mele 2001: 48). That may be so, but why do they find it unpleasant? Because, in this case, they judge that it might give them information that would preclude them from taking the $6 and maintaining their moral image. But that involves acting on information which is itself in tension with the belief that they are trying to maintain: they want to believe that they are moral agents, with the openness to relevant information that that requires, but they are now acting to avoid information that they know such a moral agent should seek. There may not be an absolute contradiction here, but there will be tension along the lines discussed above. The FTL approach made a distinction between evidence and belief; this is not available here, since the agent needs to process the relevant evidence to know where to look and where to avoid.

If there is a way of blocking the idea that there are tension-triggers at work here, it is by denying that there is false belief at the second-order level. In the Dana study, the subjects must realize that they are avoiding information. So perhaps they are thinking that doing so is perfectly compatible with being a moral agent. Here we need more information about quite what they were thinking. It might seem surprising that someone could think that a certain course of action would be precluded once its nature were known, while at the same time thinking that there is no requirement to get information about its nature, even if doing so costs nothing. Yet that approach is far from absurd. Subjects may be drawing a doing/allowing distinction here: it is one thing to be guided by information that one has, another to require that one gets it. 11 Likewise in the Gaertner experiment: they may think that it is one thing to refuse a request, another to wilfully make it impossible for the request to be made. Such distinctions may look like sophistry when clearly spelled out, but even if they are ultimately indefensible, this may indicate moral ignorance on the part of the subjects and not up-front self-deception.

In conclusion, then, there remains much to do at the empirical level. We have plenty of evidence that moral behaviour is pervaded with something like self-deception; moreover, there are good grounds for thinking that this extends beyond the moral. 12 And we have plenty of evidence, most clearly from the pathological cases, that human subjects have sufficient divisions within them to enable this to happen in a highly reactive way. But discovering whether this is in fact happening in cases of moral self-deception, or whether this can be explained in more deflationary ways, is going to need some more work. 13

References

Barnes, Annette. 1997. Seeing Through Self-Deception. New York: Cambridge University Press.

Bartolomeo, Paolo. 2014. Attention Disorders after Right Brain Damage: Living in Halved Worlds. Berlin: Springer.

Baumeister, Roy. 1999. Evil. New York: Henry Holt.

Bénabou, Roland, and Jean Tirole. 2011. Identity, morals and taboos: beliefs as assets. Quarterly Journal of Economics 126: 805–55.

Bermudez, Jose Luis. 2003. Self-deception, intentions, and contradictory beliefs. Analysis 60: 309–19.

Bisiach, E., A. Berti, and G. Vallar. 1985. Analogical and logical disorders underlying unilateral neglect of space. In Attention and Performance, vol. 3, ed. M. Posner and O. Marin. Hillsdale, NJ: Lawrence Erlbaum.

Bodner, Ronit, and Drazen Prelec. 2002. Self-signaling and diagnostic utility in everyday decision making. In Collected Essays in Psychology and Economics, ed. I. Brocas and J. Carillo. New York: Oxford University Press.

Bryant, Richard, and Kevin McConkey. 1989. Visual conversion disorder: a case analysis of the influence of visual information. Journal of Abnormal Psychology 98: 326–9.

Bryant, Richard, and Kevin McConkey. 1999. Functional blindness: a construction of cognitive and social influences. Cognitive Neuropsychiatry 4: 227–41.

Carlsmith, J. M., and A. E. Gross. 1969. Some effects of guilt on compliance. Journal of Personality and Social Psychology 11: 232–9.

Cooper, Joel. 2007. Cognitive Dissonance: Fifty Years of a Classic Theory. London: Sage.

Cushman, Fiery, Joshua Knobe, and Walter Sinnott-Armstrong. 2008. Moral appraisals affect doing/allowing judgments. Cognition 108: 353–80.

Dana, Jason, Daylian Cain, and Robyn Dawes. 2006. What you don't know won't hurt me: costly (but quiet) exit in a dictator game. Organizational Behavior and Human Decision Processes 100: 193–201.

Dana, Jason, Roberto Weber, and Jason Xi Kuang. 2007. Exploiting moral wiggle room: experiments demonstrating an illusory preference for fairness. Economic Theory 33: 67–80.

Deci, Edward. 1971. Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology 18: 105–15.

Deci, Edward, and Richard Ryan. 2000. Self-determination theory. American Psychologist 55: 68–78.

DeJong, W. 1979. An examination of self-perception mediation of the foot-in-the-door effect. Journal of Personality and Social Psychology 37: 2221–39.

Doricchi, F., and G. Galati. 2000. Implicit semantic evaluation of object symmetry and contralesional visual denial in a case of left unilateral neglect with damage of the dorsal paraventricular white matter. Cortex 36: 337–50.

Doris, John. 2015. Talking to Our Selves. Oxford: Oxford University Press.

Dyke, Daniel. 1614. The Mystery of Selfe-Deceiving. London: Griffin.

Fehr, Ernst, and Urs Fischbacher. 2003. The nature of human altruism. Nature 425: 785–91.

Fiske, A. P. 1992. The four elementary forms of sociality: framework for a unified theory of social relations. Psychological Review 99: 689–723.

Gaertner, Samuel. 1973. Helping behavior and racial discrimination among liberals and conservatives. Journal of Personality and Social Psychology 25: 335–41.

Gardner, Sebastian. 1993. Irrationality and the Philosophy of Psychoanalysis. Cambridge: Cambridge University Press.

Garrett, Aaron. 2017. Self-knowledge and self-deception in modern moral philosophy. In Self-Knowledge: A History, ed. Ursula Renz. New York: Oxford University Press.

Garton-Ash, Timothy. 1997. The File. New York: HarperCollins.

Gneezy, Uri, and Aldo Rustichini. 2000. A fine is a price. Journal of Legal Studies 29: 1–18.

Gur, Ruben, and Harold Sackeim. 1979. Self-deception: a concept in search of a phenomenon. Journal of Personality and Social Psychology 37: 147–69.

Heyman, J., and Daniel Ariely. 2004. Effort for payment: a tale of two markets. Psychological Science 15: 787–93.

Hirstein, William. 2005. Brain Fiction: Self-Deception and the Riddle of Confabulation. Cambridge, MA: MIT Press.

Holton, Richard. 2016. Addiction, self-signalling, and the deep self. Mind and Language 31: 300–313.

Huber, Franz, and Christoph Schmidt-Petri (eds). 2009. Degrees of Belief. New York: Springer.

Johnson, Mark. 1988. Self-deception and the nature of mind. In Perspectives on Self-Deception, ed. B. McLaughlin and A. Rorty. Berkeley: University of California Press.

Jung, Minah, Leif Nelson, Uri Gneezy, and Ayelet Gneezy. 2017. Signaling virtue: charitable behavior under consumer elective pricing. Marketing Science 36: 187–94.

Lazear, E., U. Malmendier, and R. Weber. 2012. Sorting in experiments with application to social preferences. American Economic Journal: Applied Economics 4: 136–64.

Levy, Neil. 2009. Self-deception without thought experiments. In Delusion and Self-Deception, ed. T. Bayne and J. Fernández. New York: Psychology Press.

Lifton, Robert Jay. 1988. The Nazi Doctors: Medical Killing and the Psychology of Genocide. New York: Basic Books.

Marshall, J. C., and P. W. Halligan. 1988. Blindsight and insight in visuo-spatial neglect. Nature 336: 766–7.

Mazar, Nina, On Amir, and Dan Ariely. 2008. The dishonesty of honest people: a theory of self-concept maintenance. Journal of Marketing Research 45: 633–44.

Mazar, Nina, and Chen-Bo Zhong. 2010. Do green products make us better people? Psychological Science 21: 494–8.

Mele, Alfred. 1997. Real self-deception. Behavioral and Brain Sciences 20: 91–102.

Mele, Alfred. 2001. Self-Deception Unmasked. Princeton, NJ: Princeton University Press.

Mele, Alfred. 2019. Self-deception and selectivity. Philosophical Studies 177: 2697–2711.

Mijovic-Prelec, Danica, L. M. Shin, C. F. Chabris, and S. M. Kosslyn. 1994. When does 'no' really mean 'yes'? A case study in unilateral visual neglect. Neuropsychologia 32: 151–8.

Mijovic-Prelec, Danica, and Drazen Prelec. 2010. Self-deception as self-signalling: a model and experimental evidence. Philosophical Transactions of the Royal Society B 365: 227–40.

Miller, Dale, and Benoît Monin. 2016. Moral opportunities versus moral tests. In The Social Psychology of Morality, ed. Joseph Forgas, Lee Jussim, and Paul van Lange. New York: Routledge.

Milner, A. D., and M. A. Goodale. 2006. The Visual Brain in Action, 2nd edn. Oxford: Oxford University Press.

Minson, Julia, and Benoît Monin. 2012. Do-gooder derogation: disparaging morally motivated minorities to defuse anticipated reproach. Social Psychological and Personality Science 3(2): 200–207.

Monin, Benoît, and Dale Miller. 2001. Moral credentials and the expression of prejudice. Journal of Personality and Social Psychology 81: 33–43.

Monin, Benoît, Pamela Sawyer, and Matthew Marquez. 2008. The rejection of moral rebels: resenting those who do the right thing. Journal of Personality and Social Psychology 95: 76–93.

Moriarty, Michael. 2011. Disguised Vices: Theories of Virtue in Early Modern French Thought. Oxford: Oxford University Press.

Niza, C., B. Tung, and T. Marteau. 2013. Incentivizing blood donation: systematic review and meta-analysis to test Titmuss' hypotheses. Health Psychology 32: 941–9.

O'Connor, Kieran, and Benoît Monin. 2016. When principled deviance becomes moral threat: testing alternative mechanisms for the rejection of moral rebels. Group Processes & Intergroup Relations 19: 676–93.

Quattrone, George, and Amos Tversky. 1984. Causal versus diagnostic contingencies. Journal of Personality and Social Psychology 46: 237–48.

Robertson, Lynn. 2009. Spatial deficits and selective attention. In The Cognitive Neurosciences, 4th edn, ed. Michael Gazzaniga. Cambridge, MA: MIT Press.

Schenk, Thomas, and Robert D. McIntosh. 2010. Do we have independent visual streams for perception and action? Cognitive Neuroscience 1: 52–62.

Schoemaker, P., and P. Tetlock. 2012. Taboo scenarios: how to think about the unthinkable. California Management Review 54(2): 5–24.

Sloman, Steven, Philip Fernbach, and York Hagmayer. 2010. Self-deception requires vagueness. Cognition 115: 268–81.

Strohminger, Nina, and Shaun Nichols. 2014. The essential moral self. Cognition 131: 159–71.

Styles, Suzy, and Lauren Gawne. 2017. When does maluma/takete fail? Two key failures and a meta-analysis suggest that phonology and phonotactics matter. i-Perception 8(4).

Tesser, Abraham. 1988. Toward a self-evaluation maintenance model of social behavior. Advances in Experimental Social Psychology 21: 181–227.

Tetlock, Philip, et al. 2000. The psychology of the unthinkable: taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology 78: 853–70.

Titmuss, Richard. 1970. The Gift Relationship: From Human Blood to Social Policy. London: Allen and Unwin.

van Dellen, Michelle, W. Keith Campbell, Rick H. Hoyle, and Erin K. Bradfield. 2010. Compensating, resisting, and breaking: a meta-analytic examination of reactions to self-esteem threat. Personality and Social Psychology Review 15: 51–74.

Volpe, Bruce, Joseph LeDoux, and Michael Gazzaniga. 1980. Information processing of visual stimuli in an 'extinguished' field. Nature 282: 722–4.

von Hippel, William, and Robert Trivers. 2011. The evolution and psychology of self-deception. Behavioral and Brain Sciences 34: 1–56.

Williams, Bernard. 1973. Utilitarianism: For and Against. Cambridge: Cambridge University Press.

Williams, Bernard. 1992. Moral incapacity. Proceedings of the Aristotelian Society 92: 59–70.

Williams, Daniel. 2021. Socially adaptive belief. Mind and Language 36: 333–54.

Williamson, Timothy. 1997. Knowledge as evidence. Mind 106: 717–42.

Zhong, Chen-Bo, Gillian Ku, Robert Lount, and J. Keith Murnighan. 2010. Compensatory ethics. Journal of Business Ethics 92: 323–39.

Notes

1. Dyke (1614), often cited as the first work on self-deception, is actually more about self-ignorance. Something closer to the contemporary notion develops in the 17th century, and is refined through the 18th; highlights include works by La Rochefoucauld, Nicole, Hobbes, Butler, Hume, and Smith. For discussion, see Moriarty (2011: ch. 8); Garrett (2017). Note that for these thinkers the idea is not that we merely want to believe that we are doing the right thing; instead the stress is on our genuinely wanting to do the right thing, but being over-ready to believe that we are doing so when we are not.

2. There may be other factors at work too, most obviously a sincere belief in utterly implausible moral principles. This may involve self-deception too. For a thoughtful discussion of something that is certainly in the self-deception family, see Lifton's account of 'doubling' as practised by certain Nazi doctors (Lifton 1988: 418ff.).

3. Of course, moral philosophers differ on quite how important this is, from Kant, at one end, who held that impure motivations destroy virtue, to Hume, at the other, who held that an impure attitude, one involving pride, can, on the contrary, provide a buttress to virtue (Treatise I iv).

4. Miller and Monin make a general distinction between situations that provide opportunities for self-signalling—which they gloss as those that could enhance the agent's self-image—and those that provide tests—those that could diminish it. Put like that, the distinction surely doesn't partition: most cases will provide both possibilities, of enhancing and of diminishing, depending on how the agent acts. Presumably the point is that the net effect on self-image can be compared to the cost of acting. Sometimes performing an expensive signalling act will bring only a small gain to self-image, whereas failing to perform it will bring a large reduction; situations involving such tests should be avoided by self-signallers. Conversely, sometimes a relatively cheap act will bring a large gain to self-image, and failing to perform it will bring a small reduction; situations involving such opportunities should be sought out. Others fall somewhere in between. The distinction is a nice one, but it is not clear that many of the cases discussed by Miller and Monin, with the plausible exception of Gaertner's, really address it.

5. And there is much more work that draws rather similar conclusions. See e.g. ideas of self-evaluation maintenance (Tesser 1988), self-esteem threat (van Dellen et al. 2010), and self-enhancement (Doris 2015: 92–4). For reasons for thinking that defending the moral self might be particularly important in all of these, see Strohminger and Nichols (2014).

6. As with other such tasks that require apparently meaningless classifications (e.g. Köhler's maluma/takete task; see Styles and Gawne 2017), the authors found considerable convergence—between 60% and 65%—in the classifications made. There was nothing particularly surprising about the features involved: more rounded characters were judged as more female, and so on.

7. Measured at the 0.05 confidence level, 73% of the guess-bonus group and 53% of the assessment-bonus group gave themselves good news; at the 0.001 level, this fell to 45% and 27% respectively. For a perspicuous representation, see Mijovic-Prelec and Prelec (2010: 235, graph).

8. The account thus seems able to have something to say in response to the 'selectivity problem', which is concerned with the idea that an account needs to be able to explain how agents are selective in the self-deception that they exhibit (Bermudez 2003; Mele 2019). At least it can say something about differences in when the bias does and doesn't lead to belief. We will return later to the issue of whether it can account for when subjects do and don't turn a blind eye.

9. Here again I skate over various issues about how we should understand partial belief, all-out belief, and the like: whether we should think of partiality as affecting the content of the attitude, or the attitude itself. For discussion of the options, see the papers collected in the first part of Huber and Schmidt-Petri (2009). My contention is simply that however we think of this, we have to find space for quasi-contradictory states along these lines.

10. Of course it is possible that it is motivated in some less obvious way. For discussion of the ways in which confabulation may serve an agent's purposes, see Hirstein (2005) and Doris (2015).

11. Such a distinction does seem to be fairly naturally drawn by many subjects, though quite how it should be understood, and whether it is independent of moral evaluation or in some sense a function of it, remains controversial. See Cushman et al. (2008).

12. For a review of a wealth of recent literature arguing that belief in general is often formed in response to incentives that are not straightforwardly epistemic, see Williams (2021).

13. Thanks to John Doris, Eleanor Holton, Rae Langton, Neil Levy, Sanjay Manohar, Al Mele, Danica Mijovic-Prelec, Hanna Pickard, Drazen Prelec, and Anna Wehofsits for comments and discussion.
