The science of social influence: advances and future progress

Anthony R. Pratkanis, 2007

Chapter 2

Social Influence Analysis: An Index of Tactics.

   
   

Association

Another way to change the meaning of a concept is through association – the linking of an issue, idea, or cause to another positive or negative concept in order to transfer the meaning from the second to the first. For example, Staats and Staats (1958) paired national names and masculine names with either positive or negative words and found that the positive or negative meaning tended to transfer to the original names. In a similar vein, Lott and Lott (1960) found that receiving a reward in the presence of a previously neutral person was sufficient to increase the probability of liking that person – the positive aspects of the reward became associated with the person. One particularly effective means of association is to make the object similar to another object on irrelevant attributes (Warlop & Alba, 2004; see Farquhar & Herr, 1993 for a discussion of associations and brand equity).

 

Change the Meaning of an Object Category

Objects (say, people) typically belong to one or more categories (say, ideal job candidate). By changing the meaning (or range) of a category, any given object can be made to look better or worse. For example, Rothbart, Davis-Stitt, and Hill (1997) presented subjects with numerical ratings of job candidates along with an arbitrary categorization of those scores as representing ideal, acceptable, or marginal applicants. Subjects’ ratings of the similarity of job candidates increased when the job candidates were in the same category as opposed to when they appeared on opposite sides of the category boundary. Salancik and Conway (1975) demonstrated another way of changing the perceptions of an object (in this case, the self) by changing the meaning of a category (in this case, religiosity). In their experiment, students endorsed pro- and anti-religious survey questions that used either the word “frequently” or “occasionally” in the stem. Those subjects who rated themselves using the “occasionally” questions perceived themselves as more religious (because they endorsed more items) compared to those who responded to stems with the word “frequently.”

 

Set Expectations

An expectation is a belief about the future. As such, expectations can sculpt the influence landscape in at least two ways. First, expectations serve as a reference point by which options are judged. For example, much research shows that customer satisfaction with a product is a function of whether the product met or failed to meet expectations (e.g., Ross & Kraft, 1983). Second, expectations guide interpretations and perceptions to create a picture of reality that is congruent with expectations (Kirsch, 1999). For example, Pratkanis, Eskenazi, and Greenwald (1994) had subjects listen to subliminal self-help tapes designed either to improve memory or to build self-esteem. Half of the tapes were mislabeled so that the subject received a memory tape labeled as self-esteem or vice versa. The results showed no therapeutic effects of the subliminal messages but that subjects thought there was an improvement based on the tape label. In other words, expectations had created a reality that didn’t really exist. In some cases, when expectation-driven perceptions are acted on, a placebo effect (Shapiro & Shapiro, 1997) or a self-fulfilling prophecy (Darley & Fazio, 1980) can result.

 

Valence Framing

Critical information concerning a decision can be cast in a positive (gain) or negative (loss) way. In general, people seek to avoid losses; it is more painful to lose $20 than it is pleasurable to gain $20. Issues framed in terms of losses (as opposed to gains) will thus generate motivation to avoid the loss. Tversky and Kahneman (1981) developed a classic demonstration of this phenomenon using what has become known as the Asian disease problem. In this problem, subjects are asked to imagine preparations for the outbreak of a disease expected to kill 600 people and to decide which course of action to take. In one set of actions, Program A is framed as a gain:

Which program do you favor to solve the disease epidemic?

If Program A is adopted, 200 people will be saved.

 If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

 

 When the problem is framed in this manner, most subjects (78%) select Program A. However, consider this framing of the same actions but with Program A framed as a loss:

If Program A is adopted, 400 people will die.

If Program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 will die.

 

In this frame, most subjects (78%) select Program B.
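
The two framings describe statistically identical programs; only the reference point shifts. A quick check of the expected values, taking the 600 people at risk in the problem as given:

$$E[\text{A}_{\text{gain}}] = 200 \text{ saved}, \qquad E[\text{B}_{\text{gain}}] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200 \text{ saved}$$

$$E[\text{A}_{\text{loss}}] = 400 \text{ deaths}, \qquad E[\text{B}_{\text{loss}}] = \tfrac{1}{3}(0) + \tfrac{2}{3}(600) = 400 \text{ deaths}$$

Saving 200 of 600 is the same outcome as losing 400 of 600, so the reversal of preferences (78% choosing the certain program in the gain frame, 78% choosing the gamble in the loss frame) is driven entirely by whether the options are described as gains or losses.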

Tversky and Kahneman’s findings generated considerable subsequent research, which found mixed results for their type of risky choice frames (see Levin, Schneider, & Gaeth, 1998 for a review). However, consistent support has been obtained for attribute framing, as in research by Levin and Gaeth (1988) showing that consumers prefer beef that is 75% lean to beef that is 25% fat, and for goal framing, as in research by Meyerowitz and Chaiken (1987) showing that women who read pamphlets about breast self-examinations emphasizing the negative consequences of not performing the exam were more likely to later do the exam than those receiving messages about the positive consequences of performing the exam.

 

 Set the Decision Criterion

In Figure 2.1, Option A would be readily selected if A-ness were seen as the primary criterion that needs to be maximized. Eiser, Eiser, Patterson, and Harding (1984) provide a straightforward example of setting the decision criterion. In their experiment, five groups of subjects were asked to memorize how 21 common foods were rated on one of five possible nutrients (such as iron, fat, or protein). A subsequent evaluation of each food revealed that the nutritional quality of the food was determined by how the food was rated on the previously memorized attribute. In organizations, the decision criteria are set by the rules and policy of the organization and, perhaps more importantly, by asserting criteria on an as-needed basis (Pfeffer, 1981; Uhlmann & Cohen, 2005). One devious means for setting a decision criterion is the hot potato – the creation of a sensational event, incident, or situation that must be dealt with and in the process determines the decisive factors to be used in making a decision (Lee, 1952).

 

Decoys

A decoy is an inferior option that no one would choose. Including a decoy in a choice set makes other options appear superior in comparison and thus more likely to be chosen. For example, in Figure 2.1, a decoy shadowing Object A would be represented by Object D, located at the same level of B-ness as Object A but with slightly less A-ness. The inclusion of Object D in the choice set would increase the probability of A being selected over B. The impact of decoys on choice was first identified by Tadeusz Tyszka in 1977 (see Tyszka, 1983) and has since been replicated in a variety of ways (see Huber, Payne, & Puto, 1982; Pratkanis & Aronson, 2001).
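
Because Figure 2.1 is not reproduced here, a small numerical sketch may help; the attribute values below are illustrative assumptions, not values taken from the figure:

$$A = (\text{A-ness} = 8,\ \text{B-ness} = 4), \qquad B = (4,\ 8), \qquad D = (7,\ 4)$$

Here D matches A on B-ness but falls slightly short on A-ness, so A dominates D while B does not. The dominated decoy makes A look better by comparison and thereby raises the probability that A is chosen over B, which illustrates the structure described above.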

 

Phantoms

A phantom alternative is an option that looks real, is typically superior to other options, but is unavailable at the time a decision is made (see Farquhar & Pratkanis, 1993; Pratkanis & Farquhar, 1992). For example, in Figure 2.1, a phantom is represented by an unavailable option (Object P), which is slightly better than Object A on A-ness and the same on B-ness. The inclusion of a phantom in a choice set (a) decreases the positive evaluation of other options, (b) alters the relative importance of decision criteria so that attributes on which the phantom is strong are seen as more important, and (c) serves as a reference point in decision-making. The impact of a phantom on choice depends on the exact magnitude of these processes. For example, Object P would increase the probability of selecting A in this choice set since it produces a mild contrast effect and a strong change in attribute weights such that A-ness is now of increased importance in decision-making. In addition to these landscaping effects, a phantom also has the ability to create strong emotional effects. When the denial of a phantom implicates the self, it can produce such emotions as frustration, relative deprivation, and self-threat. These emotions can be used for propaganda purposes such as motivating a person to attempt to obtain the phantom (phantom fixation; see 1-in-5 prize tactic below) or blaming others for the denial of a phantom (scapegoating).
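
Using the same illustrative (assumed) attribute values as in the decoy sketch above, a phantom shadowing Object A might be:

$$P = (\text{A-ness} = 9,\ \text{B-ness} = 4), \quad \text{unavailable at the time of choice}$$

P slightly exceeds A on A-ness and matches it on B-ness; by drawing attention and weight to A-ness while remaining unattainable, it tends to favor the selection of A over B through the contrast and weight-change processes just described.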

 

Metaphor

The use of metaphor can constrain and focus thought about an issue, thereby impacting how that issue will be decided. For example, Gilovich (1981) found that comparing a military crisis to Nazi Germany invites thoughts about intervention whereas a comparison to Vietnam elicits thoughts about avoiding involvement. Metaphors are effective influence devices because metaphors guide information processing (selective attention to details) and suggest solutions for resolving the issue (Mio, 1996; Sopory & Dillard, 2002).

 

Storytelling

A story is a narrative that provides a causal structure to facts and evidence. Plausible stories serve to guide thought, determine the credibility of information, and ultimately direct evaluation and choice about story-related decisions (Hastie & Pennington, 2000). For example, Pennington and Hastie (1992) presented mock jurors with either a murder or a hit-and-run case in which the preponderance of evidence argued for either the guilt or innocence of the defendant. When information was organized in a story format (a sequence of events), the mock jurors were more likely to render verdicts consistent with the preponderance of evidence compared to a mere listing of that evidence. Similarly, Slusher and Anderson (1996) were much more effective in arguing that the AIDS virus is not spread by casual contact when they used facts embedded in a causal structure on how the disease is transmitted compared to when they used statistical facts. Such causal stories or social theories tend to persist even in the face of strong, discrediting information (Anderson, Lepper, & Ross, 1980).

 

(Mis)Leading Questions

Question asking is a way to structure information and to imply certain answers or solutions. How a question is asked can determine the range of thought about an issue. For example, Loftus and Palmer (1974) found higher estimates of vehicle speed when people were asked, “How fast were the cars going when they smashed?” as opposed to “hit.” Ginzel (1994) found that interviewers who asked biased questions (designed to promote a positive or negative view of a speech) tended to bias their impressions consistent with the questions (see also Snyder & Swann, 1978). Questioning is a powerful influence device because it is capable of directing attention and inferences about the situation.

 

Innuendo

An innuendo is an insinuation (often subtle or hidden) of a fact, especially concerning reputation and character. As such, innuendoes set up expectations, which serve to filter future information. For example, in courtroom settings, inadmissible evidence, pretrial publicity, and accusatory questioning can all impact jury verdicts (e.g., Kassin, Williams, & Saunders, 1990; Sue, Smith, & Caldwell, 1973). In the political domain, Wegner, Wenzlaff, Kerker, and Beattie (1981) found that merely asking about the possible wrongdoing of a political candidate can result in negative perceptions of that politician. Of course, intensive lies and character attacks can be quite coercive for the victim.

 

Projection Tactic

A more specific form of innuendo is based on projection – accusing another person of the negative traits and behaviors that one possesses and exhibits with the goal of deflecting attention from one’s own misdeeds and towards the accused. In four experiments, Rucker and Pratkanis (2001) found that projection was effective in increasing the blame placed on the target of projection and decreasing the culpability of the accuser. In addition, the effects of projection persisted despite attempts to raise suspicions about the motives of the accuser and despite evidence that the accuser was indeed guilty of the deeds.

 

Debias Techniques

We have seen how landscaping tactics such as decoys, phantoms, storytelling, and comparison points can influence judgment and choice. Suppose you don’t want that choice. What can be done to disrupt landscaping tactics? In 1897, Chamberlin suggested the “method of multiple working hypotheses,” that is, bringing up all rational explanations, perspectives, and alternatives for consideration as a means of avoiding premature closure on a suggested option. Anderson and Sechler (1986) demonstrated the wisdom of this advice in their research showing that subjects asked to provide a counterexplanation of a relationship evidenced less bias based on initial theories. Mussweiler, Strack, and Pfeiffer (2000) found that inducing a “consideration of the opposite” reduced the impact of a comparison point. Similarly, Maier’s problem-solving discussion techniques such as the risk technique, structured discussion guidelines, developmental discussion, two-column method, and second-solution are all designed to generate multiple hypotheses and perspectives to overcome landscaping biases (Maier, 1952; Maier & Hoffman, 1960a, 1960b).

 

Committee-Packing

Determining who will make the decision will often determine the outcome. The tactic gets its name from attempts to obtain a desired outcome by putting supporters on the appropriate committee (as in Franklin Roosevelt’s attempt to pack the Supreme Court), although it can also refer to any attempt to control who will make a decision. One’s desired outcome can be obtained by assigning decision-making to friends and supporters, denied by assigning the decision to enemies, and never voted upon by assigning the responsibility to incompetents. In negotiations and mediation, the selection of a third-party intervener and the characteristics of that intervention can impact the ultimate results (Rubin, 1981). In organizations, assigning a task to a committee has the effects of legitimizing the outcome (i.e., procedural justice) as well as distancing an administrator from a potentially adverse result (Pfeffer, 1981).

 

Coalition Formation

Whenever a decision involves more than two participants, there is the strong possibility that the matter will not be decided by principle, reason, or the self-interest of the whole, but by coalition formation. Whether we are discussing outcomes in the US House of Representatives, balance of power in 19th-century Europe, or the winner in a game of Parcheesi in a laboratory setting, the combination of one group of actors against another will determine the allocation of resources and the resolution of conflicting interests (Groennings, Kelley, & Leiserson, 1970; for discussion of theories on how coalitions form see Kahan & Rapoport, 1984; Komorita & Chertkoff, 1973).

 

Be a Credible Source

One of the most prominent demonstrations of Aristotle’s “good character” rule of influence was conducted by Hovland and Weiss (1951; Hovland, Janis, & Kelley, 1953). In this experiment, expert and trustworthy sources (e.g., Robert J. Oppenheimer, New England Journal of Biology and Medicine) were more effective in securing persuasion to various issues (e.g., future of atomic submarines, sale of antihistamine drugs) compared to communicators lacking in expertise and trust (e.g., Pravda, a pictorial magazine). The explanation for this effect given by Hovland et al. (1953) assumed that people desire to hold a correct attitude (see also Petty & Cacioppo, 1986) and that relying on an expert and trustworthy source is rewarding in terms of meeting this goal. In addition to communicators who are expert and trustworthy, researchers have identified a number of other attributes of credible (or effective) communicators. These include sources that are physically attractive (Chaiken, 1979), similar to the target (Brock, 1965), likeable (Cialdini, 2001), an authority (Bickman, 1974), of high social status (Lefkowitz, Blake, & Mouton, 1955), and members of ingroups (Abrams, Wetherell, Cochrane, Hogg, & Turner, 1990). Some divide the nature of credibility into two general types: the hard (expert, authority, high social status) and the soft (attractive, likeable, similar) sell. For example, Cialdini (2001) describes the two influence principles of liking (the friendly thief) and authority (directed deference). Additional source-related principles might be added, such as a warm or benefic sell (taking the role of someone who is dependent and needs help to induce compliance with a request for aid). Nevertheless, some of these attributes strain the Hovland et al. (1953) explanation of source credibility. For example, the desire to hold a correct attitude toward shaving would suggest that the independent assessment of a barber or dermatologist would be a more effective source than a high-status pro football player paid for his endorsement. An alternative explanation of credibility is that the credible communicator is one who holds a prominent, positive status in the web of relationships in a social system – the prestige hypothesis of research conducted in the 1930s and 1940s (Lorge, 1936; Wegrocki, 1934).

Much research has been devoted to identifying techniques for manufacturing source credibility. Some of these techniques are: (a) put on the trappings (ornamentation) of authority and attractiveness (make-up, clothing, symbols, stories and narratives, etc.; Cialdini, 2001; Pratkanis & Aronson, 2001); (b) do a favor for the target (Lott & Lott, 1960); (c) get the target to do a favor for you (Jecker & Landy, 1969); (d) agree (attitude similarity) with the target (Byrne, 1971); (e) show or demonstrate liking for the target (Curtis & Miller, 1986); (f) personalize or individuate the source (Garrity & Degelman, 1990); (g) be critical and then praise the target (Sigall & Aronson, 1967); (h) commit a blunder or pratfall (to appear human) if you are a competent source (Aronson, Willerman, & Floyd, 1966); (i) be confident in tone and manner (Leippe, Manion, & Romanczyk, 1992); (j) create a sense that future anticipated interaction is inevitable or a fait accompli (Darley & Berscheid, 1967); (k) increase familiarity and proximity (Segal, 1974); (l) admit a small flaw to establish overall credibility (Settle & Golden, 1974); (m) surround yourself with beautiful people (Sigall & Landy, 1973); (n) punish a target’s enemy or reward a target’s friend (Aronson & Cope, 1968); (o) imitate the target (Thelen & Kirkland, 1976); (p) share a secret (Wegner, Lane, & Dimitri, 1994); (q) reciprocate self-disclosures (Derlega, Harris, & Chaikin, 1973); and (r) be perceived as empathetic, warm, and genuine (Girard, 1977; Rogers, 1942).

In general, the advice to “be credible” should be heeded by all who seek to persuade. However, there are cases when low-credible sources are more effective. For example, Walster, Aronson, and Abrahams (1966) found that a low-credible source (a hardened criminal) was more effective than a high-credible source (a judge) when arguing for tougher judicial sentencing. Aronson and Golden (1962) found that an outgroup member (an African-American for whites) was more effective under certain conditions in arguing for the value of arithmetic than a more prestigious ingroup member (see also White & Harkins, 1994). In addition, there are a number of cases where sources with differing bases of credibility (say, expertise versus attractiveness) produce differential persuasion under varying treatments (see Pratkanis, 2000).

To account for these cases, I have proposed an altercasting theory of source credibility as an extension of the hypothesis (from prestige research) that credibility resides in the relationship between the source and the message recipient (Pratkanis, 2000; Weinstein & Deutschberger, 1963). According to an altercasting theory, source credibility (effectiveness) is a function of the roles taken by the source and recipient of a message. Altercasting describes a social interaction in which an ego (e.g., source of the message) adopts certain lines of action (e.g., self-descriptions, mannerisms, impression management) to place alter (e.g., message recipient) into a social role that specifies an interpersonal task (e.g., message acceptance or rejection). A role is “a set of mutual (but not necessarily harmonious) expectations of behavior between two or more actors, with reference to a particular type of situation” (Goode, 1968, p. 249). In other words, a set of roles provides the occupants of those roles with certain responsibilities and privileges that then structure and shape future interaction. Once a person accepts a role, a number of social pressures are brought to bear to ensure the role is enacted, including the expectations of self and others, the possibility of sanctions for role violations, selective exposure and processing of information consistent with role constraints, and the formation of an identity that provides the actors with a stake in a given social system. Any influence attempt is more or less effective depending on what roles are invoked and how it makes use of the responsibilities and privileges inherent in each role. The next seventeen tactics illustrate some of the more common uses of altercasting to secure influence.

 

Tact Altercast

 In tact altercasting, alter (the target of an influence attempt) is placed in a role through mere contact with others in the social world. (The term is based on the Skinnerian term “tact,” which is derived from contact.) In other words, the agent of influence takes a social role to place the target in a complementary role. Pratkanis and Gliner (2004–2005) conducted simple experiments to illustrate tact altercasting. In one of their experiments, a child or an expert argued in favor of either nuclear disarmament or the presence of a tenth planet in the solar system. Traditional theories of source credibility (Hovland et al., 1953; Petty & Cacioppo, 1986) would predict that the expert should always be more effective than the child in terms of holding a correct belief. However, Pratkanis and Gliner found differential persuasion based on the social roles invoked. Specifically, a child was more effective than the expert when arguing for nuclear disarmament whereas the expert was more effective than the child when arguing for a tenth planet. A child places the message recipient in the role of “protector” and thus gains an advantage when arguing for protection-themed messages such as nuclear disarmament. An expert places the message recipient in the role of “unknowing public” and thus is most effective when advocating for technical issues such as a tenth planet. Pratkanis (2000) lists a number of role-pairs and role sets frequently used in influence attempts. The next ten tactics describe common tact altercasts.

 

Social norms

A norm is a rule that states expectations about the appropriate and correct behavior in a situation – for example, tip 15%, don’t urinate in public, and African-Americans shouldn’t be CEO. As Goldstein and Cialdini (this volume) point out, norms can be either descriptive (a summary of what most people do) or injunctive (an expectation of what ought to be done). As such, a norm represents an implied social consensus and thus carries both informational influence (especially descriptive norms) and social pressures (especially injunctive norms) useful for influencing behavior. For example, Sherif (1936) found that groups quickly developed norms that would in turn guide judgment and perceptions of the autokinetic effect. Pettigrew (1991) has repeatedly observed the power of norms in the regulation of interracial beliefs and behavior. Perkins (2003) describes a number of studies showing that informing college students about the actual norm regarding substance abuse (fewer students engage in substance abuse than most students think) may result in a decrease in such abuse. Goldstein and Cialdini (this volume) review a number of experiments illustrating the use of norms in social influence.

 

Social modeling

The presence of a person (either live or on film) demonstrating a given behavior generally increases the probability of the emission of that behavior by observers. In other words, social models are a source of social proof on what to do in any given situation. For example, Bryan and Test (1967) found that passersby were more likely to contribute to the Salvation Army or help a distressed motorist with the presence of a helping model. Bandura and Menlove (1968) found that children who were afraid of dogs reduced their avoidance of dogs after watching models interacting nonanxiously with dogs. Phillips (1986) observed that watching highly publicized prizefights increased the homicide rate in the viewing area. The tendency to follow and imitate social models is especially likely for models who are high in prestige, power, and status, are rewarded for performing a behavior to be imitated, provide information on how to perform the behavior, and are attractive and competent (Pratkanis & Aronson, 2001).

 

Social reinforcement

Insko (1965) demonstrated the power of a verbal reinforcer to influence attitudes. In his experiment, students were contacted via phone to take a survey of campus attitudes. On this survey, students were asked if they agreed or disagreed with 14 statements concerning a campus Aloha week. The survey-taker then positively reinforced agreement (or disagreement) with each statement with the word “good.” A week later, Insko surveyed the students in an unrelated class and found that those who were reinforced for agreeing with Aloha week statements evaluated it more favorably than those reinforced for disagreeing with such statements. Insko and Cialdini (1969; Cialdini & Insko, 1969) advanced a two-factor theory of verbal reinforcement: verbal reinforcement (a) provides information about the survey-taker’s opinion (or social proof) and (b) indicates that the survey-taker likes or approves of the respondent (social pressure).

 

Multiple sources

An increase in the number of sources for a communication can, under certain conditions, result in an increase in persuasion. For example, Harkins and Petty (1981a; see also Harkins & Petty, 1981b) found that three different speakers delivering three different cogent arguments were more effective than one source delivering the same three arguments (see Lee & Nass, 2004 for a replication using synthetic voices). Through experimental analysis, Harkins and Petty showed that increasing the number of sources of a communication increases thinking about each argument. This leads to more persuasion when the arguments are strong and compelling, but less persuasion when the arguments are weak.

 

Public audience

The presence of an audience can increase concerns for maintaining a positive public image; this can result in increased compliance when the request is one that is socially approved. For example, Rind and Benjamin (1994) asked male shoppers to purchase raffle tickets to support the United Way; male shoppers with a female companion purchased almost twice as many tickets compared to when they were alone. Similarly, Froming, Walker, and Lopyan (1982) found that subjects were more or less willing to use shocks as punishment in an experiment depending on the perceived beliefs of an evaluative audience. In contrast, when compliance is not socially approved (say, when the person would look wishy-washy or weak), the presence of a public audience may hinder persuasion. (As an aside, the presence of an audience can also facilitate or hinder performance on a task; see Guerin, 1993; Zajonc, 1965.)

 

Fleeting interactions

A number of studies demonstrate that having a fleeting, brief social interaction with the target of a request increases compliance with that request. Such fleeting interactions have included introducing yourself (Garrity & Degelman, 1990), a gentle touch (Gueguen & Fisher-Lokou, 2002; see Segrin, 1993 for a review), asking about how a person feels (Howard, 1990), engaging in a short dialogue before making the request (Dolinski, Nawrat, & Rudak, 2001), personalizing a message (Howard & Kerin, 2004), and just sitting in a room with someone (Burger, Soroka, Gonzaga, Murphy, & Somervell, 2001). There are a number of explanations for these effects, including the invoking of a liking heuristic (Burger et al., 2001), mimicking friendship (Dolinski et al., 2001), inducing positive mood, desire to maintain a social relationship, reciprocity, and individuating the requester and thus making her or him seem more human. Sorting out these explanations will be a fruitful research endeavor (see Burger, this volume, for an introduction to the issues).

 

Self-generated persuasion

One of the most effective means of influence is to subtly design the situation so that the target generates arguments in support of a position and thereby persuades her or himself. Lewin’s (1947) work during World War II provides a classic demonstration of the effectiveness of self-generated persuasion. In this research, Lewin attempted to get housewives to serve sweetbreads (intestinal meats) by either giving a lecture on the value of serving sweetbreads or having the housewives generate their own reasons for serving sweetbreads. The results showed that those housewives who generated their own arguments were nearly 11 times more likely to serve sweetbreads than those who received the lecture.

Miller and Wozniak (2001) provide a contemporary example. In their experiment, after listening to a lecture on the ineffectiveness of subliminal influence, students either summarized the points made in the lecture or generated the arguments they thought were most effective. The results showed that those students who self-generated arguments were least likely to believe in the effectiveness of subliminal influence. Self-generated persuasion typically results in persistence of attitude change (Boninger, Brock, Cook, Gruder, & Romer, 1990; Miller & Wozniak, 2001; Watts, 1967).

 

Imagery sells

Imagining the adoption of an advocated course of action increases the probability that that course of action will indeed be adopted. For example, Gregory, Cialdini, and Carpenter (1982) sent salespersons door-to-door to sell cable TV subscriptions. Some potential customers were merely informed of the advantages of cable TV. Others were asked to “take a moment and imagine how cable television will provide you with broader entertainment” followed by inducing the potential customer to imagine how he or she would enjoy each benefit of cable TV. The results showed that those customers who were asked to imagine the benefits of cable TV were 2.5 times more likely to purchase a subscription compared to those who were merely given the information (see also Anderson, 1983; Taylor, Pham, Rivkin, & Armor, 1998).

 

Rhetorical questions

Don’t you think you should be using rhetorical questions in your communications? A rhetorical question is one that is asked for effect and for which an answer is not expected. In general, rhetorical questions motivate more intensive processing of message content (Burnkrant & Howard, 1984). This increased message attention results in an increase in persuasion when the message is strong, but a decrease in persuasion when the message is weak. (However, when message recipients are already highly motivated to process a message, rhetorical questions can disrupt thinking, resulting in less persuasion for a strong message; Petty, Cacioppo, & Heesacker, 1981). Recently, Ahluwalia and Burnkrant (2004) developed a model of rhetorical question effects. In their model, rhetorical questions can also draw attention to the source of the message, resulting in an increase in persuasion for positive sources and a decrease for negative ones.

 

Pique technique

According to Santos, Leve, and Pratkanis (1994), the pique technique consists of the disruption of a mindless refusal script by making a strange or unusual request so that the target’s interest is piqued, the refusal script is disrupted, and the target is induced to think positively about compliance. To test this tactic, Santos et al. had panhandlers ask for money in either a strange (e.g., “Can you spare 17 cents?”) or typical (e.g., “Can you spare a quarter?”) manner. Subjects receiving the strange request were almost 60% more likely to give money than those receiving the typical plea. Davis and Knowles (1999) have also found evidence that a strange request can promote compliance, but hypothesized that such requests operate through a different process and gave the technique a different name (disrupt-then-reframe). According to Davis and Knowles, an odd request disrupts resistance and creates confusion that makes the target more susceptible to a reframe that leads to influence; this process is similar to the role of distraction in persuasion (see below). In contrast, Santos et al. argue that an odd request disrupts a refusal script and then induces the target to wonder why the strange request was made. Compliance is then dependent on the nature of cognitive responses (e.g., disruption of counterarguments and promotion of support arguments) that result from this attempt to understand the strange request; this process is similar to that invoked by rhetorical questions (see above). Cognitive responses can be internally generated (in the Santos et al. study the strange request prompted targets to like the panhandler) or supplied externally (as in the Davis and Knowles reframe). Santos et al. provide process data in support of their hypothesized mechanism. They find that the pique technique results in more question asking and that these questions are specifically addressed to understanding the nature of the strange request. More recently, Fennis, Das, and Pruyn (2004) have provided experimental data to understand the process involved when making a strange request. In three experiments, they found that odd requests increased compliance. In their third experiment, they coupled an odd request designed to promote a college fee increase with either a weak goal-incongruent or a strong goal-congruent message. Davis and Knowles would predict either that both messages would produce the same results or that the weak message would gain an advantage over the strong (as in distraction research). Santos et al. would predict the opposite. The results showed that the odd request produced significantly more compliance when paired with the strong as opposed to the weak argument, as predicted by Santos et al.

 

Tailor the message to the pre-existing beliefs and experiences of the recipient

Both Plato (in Gorgias) and Aristotle advised would-be influence agents to link their arguments and appeals to the beliefs and experiences of their audiences. The Institute for Propaganda Analysis referred to this as tabloid thinking – reducing complex issues to one simple, widely accepted slogan, commonplace, or truism (Werkmeister, 1948). Similarly, Pratkanis and Shadel (2005) find that fraud criminals often tailor a scam or pitch to the psychological and other characteristics of their target of victimization. Considerable research in a variety of domains demonstrates the effectiveness of this technique. For example, Cacioppo, Petty, and Sidera (1982) presented messages based on religious and legalistic arguments to religious- and legalistic-oriented subjects and found that message arguments were rated as more convincing when those messages fit the subject’s orientation. Snyder and DeBono (1989) demonstrated that high self-monitors found ads emphasizing image and appearance to be more appealing and convincing whereas low self-monitors found ads emphasizing argument quality to be most persuasive. Howard (1997) obtained increased persuasion for messages that used familiar phrases or slogans (compared to unfamiliar phrases conveying the same meaning), especially in limited-thinking situations. In implementing this tactic, the message can be tailored to fit a variety of pre-existing beliefs, experiences, and knowledge including the attitude heuristic (Pratkanis, 1988), balancing processes (Heider, 1958), ideal and ought self (Evans & Petty, 2003), experimentally-induced needs (Julka & Marsh, 2000), slogans (Bellak, 1942; Sherif, 1937), commonplaces (or widely accepted arguments; Pratkanis, 1995), prejudice and stereotypes (Ruscher, 2001), wishful thinking (Lund, 1925), the natural heuristic (Rozin, Spranca, Krieger, Neuhaus, Surillo, Swerdlin, & Wood, 2004), tendency for egocentric thought (e.g., Barnum statements; Petty & Brock, 1979), laws of sympathetic magic such as physical contagion (Rozin & Nemeroff, 2002), and a host of cognitive biases such as representativeness, availability, accessibility, fundamental attribution error, illusory correlation, naïve realism, and hindsight (Gilovich, Griffin, & Kahneman, 2002; Kahneman, Slovic, & Tversky, 1982; Nisbett & Ross, 1980).

 

Placebic reasons

A placebic argument is a reason that appears to make sense but is really vacuous and without information. Langer, Blank, and Chanowitz (1978) attempted to cut in line to make photocopies of either a small or large number of papers. The request was made with no information (e.g., “Excuse me, … May I use the Xerox machine?”), with a real reason added (“because I am in a rush?”), or with a placebic reason (“because I have to make copies?”). Langer et al. found that real reasons increased compliance with both small and large requests (compared to controls) whereas placebic arguments increased compliance for a small but not a large request. However, it should be noted that Folkes’s (1985) attempted replication of the Langer et al. findings yielded inconsistent results, and thus additional research is required before we fully understand the effectiveness of placebic arguments.

 

Misleading inference

Another way to increase the effectiveness of an argument is to induce a misleading inference – in other words, “say what you don’t mean, and mean what you don’t say” while giving the appearance of “saying what you mean, and meaning what you say.” For example, Harris (1977) found that consumers would falsely assume that the juxtaposition of two imperative statements in an ad implied a causal relationship, and Shimp (1978) found that incomplete comparatives in an ad are often used by consumers to incorrectly infer that a product is superior to competitors. Such misleading inferences can result in inflated product beliefs, evaluations, and purchase intentions (Burke, DeSarbo, Oliver, & Robertson, 1988; Olson & Dover, 1978). Preston (1994) presents a scheme for classifying misleading inferences and deception in advertising (see also Geis, 1982).

 

Negativity effect

 In general, negative information receives more attention and weight than positive information when making judgments about persons, issues, and things (Kanouse & Hanson, 1972). For example, Hodges (1974) gave subjects personality descriptors varying in the amount of positive and negative information and found that negative information had a greater impact on evaluation. Lau (1982) found that negative information was more influential than positive information about U.S. Presidential candidates in the 1968, 1972, and 1980 elections. Rozin and Royzman (2001) review evidence to conclude that this negativity bias is manifested in both animals and humans and may be innate.

 

Discrepancy of a message

Should a message ask the target for a small or a big change in belief? The answer is that it depends on how easy it is to disparage the communication. When the source of the message is of high credibility (and thus difficult to disparage), asking for a large opinion change is most effective (cf. Zimbardo, 1960). On the other hand, when the communicator is of low credibility or the issue is involving (or any other factor that makes the extreme request appear implausible), asking for a large opinion change is not as effective as asking for a smaller change (a curvilinear result) and may backfire (Hovland, Harvey, & Sherif, 1957; for an illustration of both processes see Aronson, Turner, & Carlsmith, 1963; Brewer & Crano, 1968).

 

Message length = message strength

A simple rule of thumb for accepting the conclusion of a message is “the longer the message (the larger the number of arguments), the more it appears that the message has something to say.” Petty and Cacioppo (1984) varied the number of arguments (3 or 9), the cogency of those arguments (weak or strong), and the level of involvement of the message recipient. When involvement was low (and the recipient was not carefully processing the message), a long message increased persuasion whereas when involvement was high (and the recipient was motivated to scrutinize the message), persuasion was dependent on the cogency of the arguments (see also Friedrich et al., 1996).

 

Vivid appeals

A vivid appeal is a message that is (a) emotionally interesting, (b) concrete and image-provoking, and (c) immediate (Nisbett & Ross, 1980). Such messages can be compelling. For example, Gonzales, Aronson, and Costanzo (1988) taught energy auditors to speak in vivid language (e.g., instead of saying, “the attic needs insulation,” they said such things as, “you have a naked attic that is facing winter without any clothes on”) and found an increase in compliance with recommendations for making homes more energy efficient. Similarly, Borgida and Nisbett (1977) found that students’ selection of courses was much more dependent on receiving a vivid comment from another person than on average ratings of the courses by previous students (see also Hamill, Wilson, & Nisbett, 1980). Although vividness is an effective tactic, there are conditions when it is ineffective or may boomerang, such as when it is paired with a weak argument (Pratkanis & Aronson, 2001) or when vividness becomes distracting (Frey & Eagly, 1993; see Taylor & Thompson, 1982 for a review).

 

Distraction

A mild distraction, such as keeping track of lights on a display while processing a persuasive message, disrupts dominant cognitive responses (see Festinger & Maccoby, 1964 for the original finding). Thus, it can result in more persuasion when the message is weak or counterattitudinal (and likely to provoke counterarguments) and less persuasion when the message is strong (and likely to elicit supporting arguments; see Petty, Wells, & Brock, 1976 for the definitive experiments on this topic). In other words, distraction can be viewed as a response de-amplification.

 

Overt behavior movements

Overt behavior movements such as smiles, frowns, body positions, and head movements can result in social influence consistent with the meaning of those movements. For example, in a test of the facial feedback hypothesis, Strack, Martin, and Stepper (1988) had subjects hold a pen in their mouths (under the guise of testing procedures for use with paraplegics) in a manner that inhibited or facilitated the muscles used in smiling. The results showed that cartoons were rated as more humorous when a smile was facilitated as opposed to inhibited. Recently, Briñol and Petty (2003) identified what may be the nature of the feedback in the feedback hypothesis: overt behavior movements serve to self-validate (increase or decrease confidence in) one’s thoughts. In the first of a series of experiments, Briñol and Petty had subjects engage in head nodding or shaking (under the guise of testing the quality of equipment) as they listened to a strong or weak message. For a strong message, nodding produced more attitude change than shaking; for a weak message the results were reversed, indicating that the head movement served to validate what the target was thinking during message processing.

 

 Overheard communication

In a series of clever experiments, Walster and Festinger (1962) invited subjects to tour the psychology labs and especially the one-way mirror room. As part of this tour, the subjects listened in on a conversation about smoking, living in dorms, or student husbands spending more time with their wives. Some of the subjects thought they had merely overheard (eavesdropped on) a conversation (the participants didn’t know they were there) whereas others thought the conversationalists knew that the subject was listening. The results showed that the overheard communication produced more opinion change for subjects who found the topic to be important and involving. Brock and Becker (1965) replicated these results and added a limiting condition to the findings: the overheard message must be agreeable to the subject and not counterattitudinal. The principal reason advanced for the overheard communication effect is that listeners will not attribute self-serving motives to the communicator, although there is disagreement on whether this is the mechanism or not (Brock & Becker, 1965).

 

Hostile audience effect

The knowledge that a communicator previously delivered a message to an audience that opposed and was hostile to the message conclusion increases the acceptance of that message. For example, Mills and Jellison (1967) gave subjects a message arguing for the tripling of tractor trailer license fees and told the subjects that the message was given at a meeting of either a union of railroad workers or long-haul truck drivers (a hostile audience). The subjects were much more likely to endorse the tripling of license fees when they thought the message was given to truck drivers as opposed to railroaders (see Eagly, Wood, & Chaiken, 1978 for a replication).

 

Heckling

Heckling refers to attempts by an audience member or members to disrupt a speech and to make it clear to others that the speaker is wrong and not to be listened to. Four research efforts have all converged on the finding that heckling, in general, is an effective means for countering a speaker. For example, Sloan, Love, and Ostrom (1974) found that heckling caused listeners who were neutral to a speaker to disagree with the speaker’s views (relative to no-heckling controls; partisans showed a complex relationship to heckling). Similarly, Silverthorne and Mazmanian (1975) found that booing a speaker resulted in less persuasion (compared to controls), regardless of whether the speech was given live, on audiotape, or on videotape (see also Ware & Tucker, 1974). The best way to respond to a heckler is with a calm and relevant reply (Petty & Brock, 1976). Although heckling seems to produce consistent results, there is as yet no agreed-upon theory to account for these findings, with distraction, variation in response range, identification with the heckler, and negative associations proposed as mediators of the effect.

 

Repetition of a message

Repeating a message over and over again generally increases believability and acceptance of the communication. Message repetition works by increasing liking for the object through the mere exposure effect (Zajonc, 1968) and by increasing the perceived validity of “facts” stated in the message (Boehm, 1994). However, when a target carefully attends to a message, repetition can result in no increase and sometimes a decrease in persuasion as tedium sets in and the target becomes motivated to counterargue the message. Such “wear-out” effects can be reduced by using repetition with variation (Schumann, Petty, & Clemons, 1990).

 

Inoculation

Another tactic for preventing persuasion is inoculation – a target receives a brief, opposing message that can be easily refuted and thus immunizes the target against a subsequent attack. This technique was pioneered by McGuire (1964) in a series of research investigations. In these experiments, McGuire created effective messages capable of changing attitudes about various cultural truisms (e.g., one should brush after every meal and get a routine chest x-ray). He then developed effective inoculation messages in which he taught possible responses (counterarguments) to these attack messages, with the result that the target of the communication could resist a later, stronger influence attempt (see An & Pfau, 2004 for a recent application to political communications).

 

Stealing thunder

Another tactic for mitigating or reducing the impact of an opponent’s persuasive message is the technique of stealing thunder, or revealing potentially damaging information before it can be stated by an opponent. The effectiveness of this tactic was demonstrated in two experiments by Williams, Bourgeois, and Croyle (1993). In these experiments, mock jurors received trial transcripts in which negative information was presented by the opposing side about the defendant (Experiment 1) or a witness (Experiment 2). This information had strong, negative effects on the target. However, for some of the mock jurors the “thunder was stolen” by having the negative information presented by the defendant’s attorney or the witness himself (before it was given by the opposing side). In such cases, the negative effects of the information were mitigated (Experiment 1) and eliminated (Experiment 2; for a summary of stealing thunder research see Williams & Dolnik, 2001).

 

IV. Emotional Tactics

An emotional appeal is one that uses the message recipient’s subjective feelings, affect, arousal, emotions, and tension-states as the basis for securing influence (see Lewis, 1993 for a discussion of the definition of emotion). Aristotle urged communicators who want to be effective to control the emotions (or pathos) of the audience and to use emotions such as pity, pleasure, fear, and anger to bring about the desired effects. In order to use an emotion effectively in persuasion, Aristotle believed that one must know the names of the emotions, understand what produces them and who is most likely to experience each emotion, and comprehend the way each emotion is excited (its course and effects). Since Aristotle, those who seek to persuade have advocated the use of emotions, while those concerned about misguided influence have warned us of the power of emotions to propagandize. There are two general reasons why emotions are effective as an influence device.

First, emotions are relatively easy to create and marshal in any given influence situation. Emotions can be aroused directly by appeals to fear, laying on a “guilt trip,” piling on the flattery, and similar techniques. Emotions can also be aroused indirectly by placing a target in a situation that is likely to invoke emotions – for example, providing a gift to invoke a sense of obligation, having the person behave for no apparent reason to create a need for self-justification, or attacking the target’s self-esteem. Second, when an emotion is aroused and experienced, it can involve a number of psychological processes that can then be used as a platform for promoting and securing influence and compliance. For example, emotions have been shown to (a) provide valenced information that can be used to interpret the situation and guide behavior (Clore, 1992; Schwarz, 1990), (b) supply emotion-specific influences that impact judgment and choice (Lerner & Keltner, 2000), (c) change information processing priorities in the sense that dealing with the emotion becomes paramount (Simon, 1967), (d) reduce attentional capacity and narrow attention to goal-relevant information, especially when strong emotions are involved (Baron, 2000; Easterbrook, 1959; Kahneman, 1973), (e) motivate behavior to avoid or reduce negative feelings, especially in the case of negative tension-states or dissonance (Aronson, this volume; Festinger, 1957), and (f) regulate behavior for the survival and adaptation of a social structure (Kemper, 1984).

The following tactics are designed to allow a communicator to control the emotions of the target for desired effects. These tactics follow a simple rule: arouse an emotion and then offer the target a way of responding to that emotion that just happens to be the desired course of action. The emotion comes to color the target’s world. The target becomes preoccupied with dealing with the emotions, is unable to critically analyze the issue, and thus complies with the request in hopes of escaping a negative emotion or maintaining a positive one.

 

Fear appeals

A fear appeal is one that creates fear by linking an undesired action (e.g., smoking) with negative consequences or a desired action (e.g., brushing teeth) with the avoidance of a negative outcome. Fear as an emotion creates an avoidance tendency – a desire to shun the danger. As an influence device, fear has proven to be effective in changing attitudes and behavior when the appeal (a) arouses intense fear, (b) offers a specific recommendation for overcoming the fear, and (c) the target believes he or she can perform the recommendation (Leventhal, 1970; Maddux & Rogers, 1983). In other words, the arousal of fear creates an aversive state that must be escaped. If the message includes specific, doable recommendations for overcoming the fear, then it will be effective in encouraging the adoption of that course of action. Without a specific, doable recommendation, the target of the communication may find other ways of dealing with the fear, such as avoidance of the issue and message, resulting in an ineffective appeal. Propagandists find fear to be a particularly useful influence device because it is easy to create “things that go bump in the night” along with a ready, doable solution – namely, supporting the propagandist.

 

Guilt sells

Guilt is the feeling of responsibility for some wrongdoing or transgression. Guilt induces a desire to make restitution and to repair a self-image. It can be used as an influence tactic by turning the act of restitution and image-repair into an act of compliance. For example, Carlsmith and Gross (1969) induced students to perceive that they had given a series of painful shocks to another person as part of a learning experiment. These guilty students were more likely to comply (relative to controls) with a subsequent request to make phone calls for “Save the Redwoods” when asked either by the person they supposedly shocked or by another person who knew nothing about the shocks (see Kassin & Kiechel, 1996 for an example of how guilt can induce false confessions). In cases where restitution is not possible, guilt for a transgression can result in self-justification for the wrongdoing (Glass, 1964).

 

Jeer pressure

Ridicule and insults can increase compliance with a request. Steele (1975) found that insulting (name-calling) the target of a request increased the rate of completing a survey regardless of whether the insult was relevant (the target was uncooperative and selfish) or irrelevant (the target wasn’t a safe driver) to the request. More recently, Janes and Olson (2000) found that merely having a target observe another person being ridiculed increased the target’s rate of conformity. Such jeer pressure increases compliance because the target seeks to repair a self tarnished by the attack and hopes to avoid future ridicule by going along with the request. Abelson and Miller (1967) have identified one limiting factor for jeer pressure: an insult on a specific, previously-held belief (especially in a public setting) can result in a boomerang effect or an increase in the original opinion.

 

Flattery (ingratiation)

It is a widely held belief that flattery is a powerful influence device (see Pratkanis & Abbott, 2004 for a review). There is a considerable amount of research showing that we like those who flatter us, as illustrated by Gordon’s (1996) meta-analysis of 106 effect sizes. However, only two experiments have looked explicitly at the effects of flattery on compliance with a direct request. Hendrick, Borden, Giesen, Murray, and Seyfried (1972) found that flattery (compliments on the goodness and kindness of the target) increased compliance with a request to complete a seven-page questionnaire relative to a control condition. Pratkanis and Abbott (2004) asked passersby on a city street to participate in a “stop junk mail” crusade after they were flattered about an article of clothing or asked the time of day (control treatment). We found that flattery increased compliance by 10 percentage points over control. Interestingly, these effects were obtained regardless of whether the “stop junk mail” request was made by the flatterer or a different person (immediately after the flattery was given), indicating that in this study flattery was working primarily through intrapersonal (e.g., mood and disposition of the target) as opposed to interpersonal (e.g., liking of the flatterer) processes.

 

Empathy

Empathy consists of two aspects: a cognitive awareness of another person’s internal states (thoughts, feelings, perceptions, intentions) and a vicarious affective response of concern and distress for another person. Empathy can be induced by instructions to “put yourself in another’s shoes” or assessed using standard measures (Davis, 1996). In general, empathetic concern for another person increases the likelihood of agreeing to requests to help that person (e.g., Batson, Duncan, Ackerman, Buckley, & Birch, 1981). For example, Archer, Foushee, Davis, and Aderman (1979) found that increased empathy with a defendant in a legal trial (e.g., imagine how you would feel if you were on trial) resulted in more favorable decisions for the defendant.

 

Norm of reciprocity

Every human society (and a few chimpanzee ones too) has a simple rule of reciprocity: If I do something for you, then you should do something for me in return. Invoking this rule triggers a feeling of indebtedness or obligation to the person who has given a gift or performed a favor. A tension state is thus created: Do I live up to my social obligation or not? The norm of reciprocity is one of the glues of primate society. It can be employed for influence purposes when the compliance agent supplies a desired course of action for resolving the indebtedness tension state. For example, Regan (1971) staged an experiment in which a confederate of the experimenter either gave the subject a soft drink as a favor or provided no favor. The subject was more likely to purchase raffle tickets when the favor had been rendered than when no drink was provided.

 

Commitment trap

Commitment is defined as the binding of an individual to a behavior or course of action. In other words, the person becomes identified with a certain behavior; commitments are strongest when that behavior is visible, irreversible, and perceived to be freely chosen (Salancik, 1977). Breaking this bond produces a negative tension of not living up to one’s promises and a concern that one will look inconsistent and untrustworthy (e.g., a need to save face). As such, securing a commitment increases the likelihood that the target will comply and perform that behavior. A commitment can be secured through a number of devices, including a public verbal commitment (Wang & Katzev, 1990), investment in a course of action (Brockner & Rubin, 1985), sunk costs, self-selection of goals, and the pretense that a commitment has been made (e.g., the presumptive close in sales). Commitment can lead to disastrous results when negative setbacks result in escalating commitment to a failing course of action (Staw, 1976).

Foot-in-the-door

In the foot-in-the-door tactic (FITD), a target is first asked to do a small request (which most people readily perform) and then is asked to comply with a related and larger request (which was the goal of influence all along). For example, Freedman and Fraser (1966) asked suburbanites to put a big, ugly sign stating “Drive Carefully” in their yard. Less than 17% of the homeowners did so. However, 76% of the homeowners agreed to place the sign in their yards if, two weeks earlier, they had agreed to post in their homes a small, unobtrusive 3-inch sign urging safe driving. Burger (1999) has conducted a thoughtful analysis of over 55 published research reports on the FITD and concludes that it has the potential to invoke a number of psychological processes that may increase (self-perceptions that one is the type of person to perform an action, commitment, and a desire for consistency) or decrease (reactance, the norm of reciprocity, and other social pressures) the magnitude of compliance.

 

Low-balling

A common sales tactic is low-balling, or throwing the low-ball. In this tactic, the target first makes a commitment to perform a course of action (say, purchase a car for $20,000) and then this action is switched for a more costly behavior (oops, the car really costs $20,859). The target is more likely to perform this costlier task as a result of the earlier commitment. For example, Cialdini, Cacioppo, Bassett, and Miller (1978) found that securing students’ agreement to sign up for a psychology experiment before telling them that the experiment was at 7 am (a high-cost behavior for most students) resulted in more compliance compared to asking them to sign up for a 7 am experiment. Burger and Petty (1981) have replicated the low-balling effect and argue that it is based on commitment, not necessarily to the task, but to the requester. Low-balling bears a similarity to the FITD in that both involve a commitment to an initial request or requester followed by a less attractive second request. In addition, the effectiveness of both tactics is reduced if there is no commitment to the first task (Burger, 1999; Burger & Cornelius, 2003). Low-balling differs from the FITD in that the first request is the actual target behavior (only later made less attractive by adding costs), whereas in the FITD the first request may be related to the second request but is not itself the target behavior.

 

Bait-and-switch

Joule, Gouilloux, and Weber (1989) demonstrated a tactic they called “the lure,” which is similar to what is called bait-and-switch in sales. In their experiment, subjects volunteered to participate in an exciting study on film clips. This experiment was then cancelled, and the subjects were asked to switch to a boring experiment involving word memorization. These subjects were three times more likely to continue with the boring experiment relative to a control treatment. Bait-and-switch is based on commitment processes. It differs from low-balling in that the bait or lure becomes unavailable in bait-and-switch, as opposed to merely being made less desirable in low-balling.

 

Effort justification

In general, requiring a person to expend a large amount of effort to obtain an object leads to a justification of this expenditure through increased liking of the object. For example, Aronson and Mills (1959) required students to engage in a severe initiation (reciting obscene words to an opposite-sex experimenter) in order to join what turned out to be a very boring discussion of sex. Compared to those students who engaged in a mild or no initiation, those who expended effort in the form of a severe initiation liked the boring discussion and found it interesting and worthwhile (see Axsom & Cooper, 1985 for an application of this technique to weight loss). In addition, the mere expectation of expending effort can lead to attitude change (Wicklund, Cooper, & Linder, 1967). Recent research also shows that humans possess an effort heuristic – the more effort it takes to produce an object, the higher that object is rated in terms of quality and liking (Kruger, Wirtz, Van Boven, & Altermatt, 2004).

 

Hypocrisy reduction

 Hypocrisy is aroused by having a person make a public commitment (say, tell teenagers to practice safe sex) and then make that person mindful of past failures to meet the commitment (say, complete a questionnaire on past sexual practices). To reduce the negative feelings of hypocrisy, the person is more likely to adopt the advocated behavior (in this case, practice safe sex; see Stone, Aronson, Crain, Winslow, & Fried, 1994 who conducted this experiment). In addition to increasing the use of condoms, the induction of hypocrisy has been shown to encourage water conservation (Dickerson, Thibodeau, Aronson, & Miller, 1992) and increase recycling (Fried & Aronson, 1995). 

 

Question-behavior effect

The question-behavior tactic entails asking a person to make a self-prediction about his or her intention to perform a certain behavior; the result is an increase in the likelihood of performing that action. For example, Greenwald, Carnot, Beach, and Young (1987) asked potential voters before an election, “What do you expect to do between now and the time the polls close tomorrow? Do you expect that you will vote or not?” Those voters who answered this question (relative to a no-question control) voted at a 20% higher rate. The technique has been applied to a wide variety of issues, including recycling, fund-raising, and nutrition (Spangenberg & Greenwald, 2001). The self-prophecy appears to work through one of two mechanisms: (a) cognitive dissonance arousal – the respondent seeks to reduce the discrepancy between what was predicted and his or her behavior (Spangenberg, Sprott, Grohmann, & Smith, 2003) – and (b) the evocation of a cognitive script for the behavior that then increases performance through imaging processes (Williams, Block, & Fitzgerald, in press).

 

Self-affirmation

When a person receives a threatening message, say one that presents a disturbing conclusion or contains counterattitudinal information, a common response is to act defensively – to ignore, reject, or otherwise resist the message. One technique for overcoming this defensiveness is to have the target affirm the value of her or his self by engaging in such tasks as endorsing important values or writing an essay about one’s values. For example, Sherman, Nelson, and Steele (2000) found that self-affirmations increased the likelihood that a person would accept threatening health information about the causes of breast cancer and about the practice of safe sex. Similarly, Cohen, Aronson, and Steele (2000) found that self-affirmations increased acceptance of counterattitudinal information concerning capital punishment and abortion. Blanton, Cooper, Skurnik, and Aronson (1997) have identified an important limiting condition of this technique: The self-affirmation should be unrelated to the content of the message since topic-relevant self-affirmations may increase defensiveness.

 

Self-efficacy

Another approach for changing high-risk, defensive, and anxiety-producing behavior is to increase the target’s perceived self-efficacy, or the beliefs about one’s capability to organize and execute the courses of action required to reach given goals (Bandura, 1997). Perceived self-efficacy can be increased by such procedures as teaching skills, guided mastery, vicarious learning, and verbal persuasion (Maddux & Gosselin, 2003). The development of self-efficacy has been instrumental in inducing at-risk targets to accept a persuasive communication and change behavior in such areas as exercise, diet, smoking cessation, HIV prevention, and alcohol abuse (see Bandura, 1997; Maddux & Gosselin, 2003 for reviews).

 

Scarcity

As long ago as Aristotle, people have realized that making an alternative appear scarce or rare increases its perceived value. Scarcity invokes a number of psychological processes. Consider an experiment conducted by Worchel, Lee, and Adewole (1975), in which subjects were asked to rate the attractiveness of cookies. The cookies were deemed more attractive when there were only two cookies in a jar as opposed to ten. This finding illustrates one of the reasons that scarcity is an effective influence device: we humans possess a rule in our heads, “if it is rare, it must be valuable.” Worchel et al. included another treatment in which the subject began with a jar of ten cookies, but an experimenter replaced that jar with one containing only two cookies under the pretense that he needed the cookies because subjects in his experiment had eaten more than expected. In this case, subjects rated the cookies as even more attractive than the constant two cookies in a jar, illustrating the ability of scarcity to create a sense of urgency and panic that increases its effectiveness as an influence device. Scarcity also has the power to implicate the self, for better or worse. The failure to possess or obtain a scarce object can create frustration and imply that the self is lacking in some regard (see the summary by Pratkanis & Farquhar, 1992 of early work on barriers conducted in the Lewin tradition), as well as inducing reactance (see the next tactic). In contrast, possessing a rare item may result in increased feelings of uniqueness and self-worth (Fromkin, 1970) that can serve as the basis for conspicuous consumption (Braun & Wicklund, 1989).

 

Psychological reactance

Reactance occurs when an individual perceives that his or her freedom of behavior is restricted; it is an aversive tension state that motivates behavior to restore the threatened freedom (Brehm, 1966; Brehm & Brehm, 1981). Reactance can be aroused by a number of threats to freedom including the elimination of an alternative, social pressure to take a course of action, physical barriers, censorship, requiring everyone in a group to agree with another’s decision, and authorities overstepping their mandates. The exact response to reactance may vary with the situation; however two common responses designed to restore threatened freedoms include increased attractiveness and desire for an eliminated or threatened alternative and an oppositional response (boomerang) of attempting to do the reverse of the reactance-arousing social pressure. Once reactance is created, it can be used as an influence device by directing responses to restore freedom in a manner consistent with the goals of the influence agent. Brehm’s (1966) theory of psychological reactance is important for understanding social influence in that it places a limiting condition (the production of reactance) on the effectiveness of many tactics listed in this chapter.

 

Evocation of freedom (But you are free to… technique)

In contrast to reactance, reminding the target of her or his freedom to choose with a simple statement such as “But you are free to accept or refuse” can increase compliance. For example, Gueguen and Pascual (2000) solicited money for a bus in a shopping mall. When they included the statement “But you are free to accept or to refuse” at the end of the request, compliance increased almost 5-fold (see also Gueguen & Pascual, 2002; Horwitz, 1968).

 

Anticipatory regret

Regret is the negative feeling or emotion that a decision or choice may not work out as you want it to and that you will not be able to reverse it later. Anticipating such regret can lead to attempts to minimize the chances of self-blame (e.g., “I blew that decision”) and of experiencing regret (Bell, 1982; Festinger, 1964). For example, Hetts, Boninger, Armor, Gleicher, and Nathanson (2000) staked subjects with $10 and then had them play a game with a 50% chance of losing this stake. Before playing the game, subjects could purchase insurance against the loss. The critical manipulation consisted of emphasizing either the regret that might be experienced if a disaster occurred and there was no insurance or the regret of having purchased insurance when no disaster occurs. Subjects purchased insurance consistent with their anticipated regrets (see also Crawford, McConnell, Lewis, & Sherman, 2002; Wicklund, 1970).

 

1-in-5 prize tactic

The 1-in-5 prize tactic is commonly used in telemarketing scams and other swindles. The con criminal tells a mark that he or she has won one of five prizes (such as an automobile, a vacation, a Van Gogh lithograph, a beachfront home, or $50,000 in cash). In order to claim the prize, all the target needs to do is send in a fee – ostensibly to pay for shipping, taxes, or some other seemingly plausible reason. The prize is a phantom and is rarely won; on those occasions when an award is given, it is usually a gimme prize (such as the Van Gogh lithograph, which sounds good in the context of the other prizes but is in reality a cheap reproduction). Surveys reveal that over 90% of Americans have been targeted by this pitch, with over 30% responding to the appeal. Horovitz and Pratkanis (2002) conducted an experimental analog of the 1-in-5 prize tactic by telling subjects at the end of another experiment that they had won one of five prizes: a TV, a CD player, a university mug (the gimme prize), a VCR, or a $50 mall gift certificate. In order to claim the prize, the subject had to agree to write essays for about 2 hours. In a control condition in which subjects were merely asked to write essays, 20% of the subjects complied with the request. In the 1-in-5 prize treatment, 100% of the subjects across two experiments agreed to write essays. Horovitz and Pratkanis suggest that phantom fixation (Pratkanis & Farquhar, 1992), along with other psychological processes, is the reason that the 1-in-5 prize tactic is so effective.

 

Self-threat

Students of the history of propaganda repeatedly observe that when many members of a society feel their selves to be threatened (e.g., experience relative deprivation, fear devaluation of self), fertile ground is established for the seeds of propaganda to grow and flourish (Pratkanis & Aronson, 2001). A similar relationship has been found in numerous experiments. For example, van Duüren and di Giacomo (1996) showed that failure on a test increased the chances of complying with a request to commit a theft. Kaplan and Krueger (1999) observed that giving subjects a negative personality profile generally increased compliance with participation in a charity food drive. Zeff and Iverson (1966) demonstrated that subjects faced with downward mobility were more likely to privately conform to a group. A self-threat appears to induce a state of social dependency and a desire to re-establish the positive aspects of the self, thus making the individual vulnerable to influence which appeals to these goals.

 

Emotional see-saw

What happens when a person experiences an emotion that is then rapidly withdrawn? Dolinski and his colleagues (this volume; Dolinski & Nawrat, 1998; Dolinski, Ciszek, Godlewski, & Zawadzki, 2002) have conducted a program of research to show that when people experience an emotion that is then removed, they are more likely to comply with a request. For example, in one set of experiments, subjects experienced a fear that was then immediately removed – the subjects thought they had received a parking ticket or were caught jaywalking, but it turned out to be a false alarm. In such cases, subjects were more likely to comply with a request to fill out a burdensome questionnaire or help out an orphanage. Similarly, in another set of experiments, subjects experienced happiness and delight that was quickly eliminated – the subjects thought they had found some money or received a high grade on a test only to find that the money was really an advertisement and the grade was only average. In such cases, subjects were more likely to comply with a request to watch a bag or help their school. Dolinski explains his findings by noting that emotions invoke specific plans of action and that when the emotion is removed the plan is no longer operative but the person has not yet invoked a new plan. In this state of confusion and disorientation, the person is more likely to comply with a request.

Sensory deprivation

In sensory deprivation experiments, mostly conducted in the 1950s and 1960s, an individual is placed for between 24 and 96 hours in a room designed to eliminate, as much as possible, such sensations as light, sound, and touch. The research showed that individuals in such environments experienced decrements in cognitive performance, hallucinations (after an extended period), and a desire for stimulation (Zubek, 1969). In addition, individuals experiencing sensory deprivation also showed an increase in susceptibility to influence as evidenced by increased (a) suggestibility as measured by the Hull body-sway test and responses to the autokinetic effect, (b) desire to listen to counterattitudinal propaganda, (c) attitude change in response to propaganda, and (d) conformity (although the latter two effects may be lessened for those high in cognitive complexity and intelligence; see Suedfeld, 1969 for a review). Sensory deprivation results in a “cocktail” of cognitive (e.g., performance decrements) and emotional (e.g., boredom, tedium, anxiety, arousal) effects, making it difficult to identify the causal nexus of influence. Nevertheless, the research is significant in that it was originally motivated by reports of the brainwashing of American soldiers by the Red Chinese during the Korean conflict and of confession extraction during the Stalinist purges in the Soviet Union. (Interestingly, preliminary research is emerging to show that increased cognitive stimulation or interest may also result in higher influence rates; Rind, 1997).

 

Positive happy mood

Isen and her colleagues have found that people placed in a positive mood (e.g., by the discovery of a dime in a coin return or a gift of cookies) are more likely to comply with a request to render help and assistance (e.g., volunteering to meet a request, picking up papers, mailing a letter; see Isen & Levin, 1972; Levin & Isen, 1976). In a review of the positive mood and helping research, Carlson, Charlin, and Miller (1988) found consistent evidence that a happy mood leads to helpfulness and identified a variety of mechanisms for why this is so. Positive mood also impacts the processing of a persuasive message. For example, Petty, Schumann, Richman, and Strathman (1993) found that positive mood results in persuasion through one of two routes: (a) when a target is not motivated to think about an issue, a positive mood directly impacts the positivity of the attitude, and (b) when a target is motivated to think about an issue, a positive mood results in more positive thoughts about the issue, resulting in a more positive attitude. In addition, Wegener, Petty, and Smith (1995) found that happiness can produce more message scrutiny when message processing is useful for maintaining a positive mood and less scrutiny when processing might mean the reduction of a positive mood, with persuasion dependent on how well the message stands up to this scrutiny. Thus, we end our tour of influence tactics on a happy note.

 
