In his book “The Story of Psychology,” Morton Hunt writes of the social psychologists:
“Q: What busy and productive field of modern psychology has no clear-cut identity and not even a generally accepted definition?
A: Social psychology. It is less a field than a no man's land between psychology and sociology, overlapping each and also impinging on anthropology, criminology, several other social sciences, and neuroscience. Ever since the emergence of social psychology, its practitioners have had trouble agreeing on what it is. Psychologists define it one way, sociologists another, and most textbook writers, seeking to accommodate both views and to cover the field's entire gallimaufry of topics, offer nebulous definitions that say everything and nothing. An example: "[Social psychology is] the scientific study of the personal and situational factors that affect individual social behavior." A better definition: "Social psychology is the study of the ways in which thoughts, feelings, perceptions, motives, and behavior are influenced by interactions and transactions between people." Better, but it still leaves one with a multiform and even bewildering impression of the field. As Brenda Major, president in 2006 of the Society for Personality and Social Psychology, admits, "It's hard to pigeonhole social psychology. In cognitive neuroscience you can say, 'I study the brain,' but in social psychology you can't say anything clear-cut like that."
The problem is that social psychology has no unifying concept; it did not develop from the seed of a theoretical construct (as did behaviorism and Gestalt psychology) but grew like crabgrass in uncultivated regions of the social sciences. In 1965, Roger Brown of Harvard, in the introduction to his well-known social psychology textbook, noted that he could list the subjects generally considered to belong to social psychology but could see no common denominator among them:
I myself cannot find any single attribute or any combination of attributes that will clearly distinguish the topics of social psychology from topics that remain within general experimental psychology or sociology or anthropology or linguistics. Roughly speaking, of course, social psychology is concerned with the mental processes (or behavior) of persons insofar as these are determined by past or present interaction with other persons, but this is rough and it is not a definition that excludes very much.
More than two decades later, in his second version of the book, Brown did not bother to say any of this but simply began, without a definition, in medias res. A good idea; let us do so, too. Here, as a first dip into the field, is a handful of famous examples of sociopsychological research:
An undergraduate volunteer - call him U.V. - arrives at a laboratory in the psychology building to take part in an experiment in "visual perception"; six other volunteers are there already. The researcher says the experiment has to do with the discrimination of the length of lines. At the front of the room is a card with a single vertical line several inches long (the standard), and to the right, on another card, three more lines, numbered 1, 2, and 3. The volunteers are to say which of the numbered lines is the same length as the standard. U.V. can easily see that 2 matches the standard and that 1 and 3 are both shorter. The other volunteers announce their choices, each speaking up for 2, as does U.V. in his turn. The experimenter changes the cards, and the procedure is repeated, with similar results.
But with the next card, the first volunteer says, "One," although to U.V.'s eye 1 is clearly longer than the standard. As each of the other volunteers, in turn, inexplicably says the same thing, U.V. becomes more disconcerted. By the time it is his turn, he is squirming, hesitant, nervous, and a little disoriented, and does not know what to say. When he, and others who are subjected to the same experience, do finally speak up, 37 percent of the time they go along with the majority and name as the matching line one they think is either shorter or longer than the standard.
In reality, only one person present at each session - in this case, U.V. - is an experimental subject; the other supposed volunteers are accomplices of Solomon Asch, the researcher, who has instructed them to name the wrong lines on certain trials. The aim of this classic experiment, conducted in the early 1950s, was to determine the conditions producing conformity - the tendency to yield to actual or imagined pressure to agree with the majority view of one's group. Research on conformity has continued ever since, and many experiments have identified its various causes, among them the desire to be correct (if others all agree, maybe they're right) and the wish not to be considered a dissident or "oddball."
Two student volunteers, after spending some time discussing and performing a routine clerical chore together, are asked by the experimenter to play a game called the Prisoner's Dilemma. Its premise:
Two suspects are taken into custody and separated. The district attorney is certain that together they committed a crime but he has insufficient evidence to convict them. He tells each one that if neither confesses, he can convict them on a lesser charge and each will get a year in prison. But if one confesses and the other does not, the confessing one will get special treatment (only half a year in prison) and the other the most severe treatment possible - almost surely a twenty-year sentence. Finally, if both confess, he will ask for lenient sentencing and each will get eight years.
Since Prisoner 1 cannot reach Prisoner 2 to agree on a plan, he thinks through the possibilities. If he confesses and 2 does not, he (1) will get only six months, the best possible result for himself, and 2 will get twenty years, the worst outcome for him. But 1 recognizes that it is risky to take that chance; if he and 2 both own up, each will get eight years. Perhaps he'd be better off not confessing. If he doesn't, and 2 also doesn't, each gets one year, not a bad outcome. But suppose he doesn't and 2 does; then 2 will get a mere six months and he a terrible twenty years.
Clearly, rational thinking cannot yield the best answer for either prisoner unless each trusts the other to do what is best for both. If one of them chooses on the basis of fear or of greed, both will lose. Yet it makes no sense to choose on the basis of what is best for both unless each is certain that the other will do likewise. And so the volunteers play, with any of a number of results, depending on the conditions and instructions laid down by the researcher. (Achieving what is best for both is only sometimes the outcome.)
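The prisoners' reasoning can be checked mechanically by writing the sentencing scheme down as a payoff table. The short Python sketch below is only an illustration (the sentences are the ones given in the text; the function and variable names are my own):

```python
# Payoff table for the Prisoner's Dilemma as described above.
# Values are years in prison for (Prisoner 1, Prisoner 2); lower is better.
SENTENCES = {
    ("silent", "silent"):   (1.0, 1.0),    # neither confesses: lesser charge
    ("confess", "silent"):  (0.5, 20.0),   # confessor rewarded, holdout punished
    ("silent", "confess"):  (20.0, 0.5),
    ("confess", "confess"): (8.0, 8.0),    # both confess: eight years each
}

def best_reply(other_choice):
    """Prisoner 1's self-interested best reply to a fixed choice by Prisoner 2."""
    return min(("silent", "confess"),
               key=lambda mine: SENTENCES[(mine, other_choice)][0])

# Confessing is the better reply whatever the other prisoner does...
assert best_reply("silent") == "confess"    # 0.5 years beats 1 year
assert best_reply("confess") == "confess"   # 8 years beats 20 years

# ...yet if both follow that logic, each fares worse than under mutual silence.
assert SENTENCES[("confess", "confess")][0] > SENTENCES[("silent", "silent")][0]
```

The assertions make the dilemma concrete: self-interested reasoning pushes each prisoner toward confessing, even though mutual silence would leave both better off.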
The Prisoner's Dilemma has been used, in various forms, by many researchers for five decades to study trust, cooperation, and the conditions that create them and their opposites.
A college student rings the doorbells of a number of homes in Palo Alto, California, introduces himself as a representative of Citizens for Safe Driving, and makes a preposterous request: permission to place on the front lawn a billboard bearing the message DRIVE CAREFULLY (preposterous because a photograph he produces shows a lovely house partly obscured by a huge, poorly lettered sign). Not surprisingly, most of the residents refuse. But some agree. Why do they? Because for them this was not the first request. Two weeks earlier, a different student, claiming to be a volunteer with the Community Committee for Traffic Safety, had asked them to display a neatly lettered three-inch-square sign reading BE A SAFE DRIVER, and they had agreed to this innocuous request. Of the residents who had not been softened up by the previous modest request, only 17 percent said yes to the billboard; of those who had previously agreed to display the three-inch sign, 55 percent did so.
The experiment, carried out in 1966, was the first of many to explore the foot-in-the-door technique, well known to fund raisers, of asking for a very small contribution and later returning to ask for a much larger one. The researchers, however, were not interested in raising funds or in safe driving but in the reasons that this method of persuasion works. They concluded that the people who agree to a first small request see themselves, in consequence, as helpful and civic-minded, and that this self-perception makes them more likely to help the next time, when the request is for something much larger. (The foot-in-the-door technique is still being used in experiments exploring the subtleties of motivation.)
The staff of a large mental hospital says that Mr. X is schizophrenic. A well-dressed middle-aged man, he came in complaining of hearing voices; he told the admitting psychiatrist that they were unclear but that "as far as I can tell, they were saying 'empty,' 'hollow,' and 'thud.' " Since being admitted, he has said nothing more about the voices and has behaved normally, but the staff continues to consider him mentally ill. The nurses even make note in his chart of one frequent abnormal activity: "Patient engages in writing behavior." Several of his fellow inmates see him differently; as one of them says, "You're not crazy. You're a journalist or a professor. You're checking up on the hospital."
The patients are right, the staff wrong. In this 1973 study of how staffs of mental hospitals interact with their patients, a professor of psychology and seven research assistants got themselves admitted to twelve East Coast and West Coast hospitals by using the story about voices and, once they had been admitted, acting normally. As patients, they covertly observed staff attitudes and actions toward patients that they would never have had the chance to witness had they been identified as researchers. Among their disturbing findings:
- Once staff members had identified a patient as schizophrenic, they either failed to see, or misinterpreted, everyday evidence that he was sane. On the average, it took the pseudo-patients nineteen days of totally normal behavior to get themselves released.
- The staff, having come to think of the pseudo-patients as schizophrenic, spent as little time as possible in contact with them. Typically, they would react to a patient's direct question by ignoring it and moving on, eyes averted.
- Staff members often went about their work or talked to each other as if the patients were not present. As David Rosenhan, the senior author of the study, wrote: "Depersonalization reached such proportions that pseudo-patients had the sense that they were invisible or at least unworthy of account."
In a campus psychological laboratory, six male sophomores sit in separate cubicles, each wearing a headset. Participant A, through his, hears the researcher say that at the countdown, participants A and D are to shout "RAH!" as loudly as possible, holding it for a few seconds. After the first round, A hears that now he alone is to shout at the countdown; next, that all six are to shout; and so on. Part of the time, these instructions are transmitted to all six students, but part of the time one or another is fed false instructions. Participant A, for instance, may be told that all six are to shout, although, in fact, all the others hear messages telling them not to. To conceal what is happening, all six hear recorded shouting over their headsets during each trial. (The experiment, like many others in social psychology, would not even have been conceived of before the development of modern communications equipment.)
All this bamboozlement has a serious purpose: it is part of a series of studies of "social loafing," the tendency to do less than one's best in group efforts unless one's output is identifiable and known to the others. The evidence in this case is the measured volume of each student's shouting (each student is separately miked). When a student believes he and one other are shouting together, he shouts, on average, only 82 percent as loudly as when he thinks he alone is shouting. And when he thinks all six are shouting, his average output drops to 74 percent of his solo performance. In their report the research team concludes, "A clear potential exists in human nature for social loafing. We suspect that the effects of social loafing have far-reaching and profound consequences... [It] can be regarded as a kind of social disease." A number of recent studies have explored ways to combat the disease by such means as instilling a sense of importance and responsibility in each person, making it clear that individual as well as group performance will be evaluated, and so on.
No such sampling, however varied, can do justice to the range of subjects and research methods of social psychology, but perhaps these specimens give some idea of what the field is about - or at least what it is not about. It is not about what goes on strictly within one's head, as in Cartesian, Jamesian, or Freudian introspection, nor is it about large sociological phenomena, like stratification, social organization, and social institutions.
It is about everything in between-whatever an individual thinks or does as a result of what other individuals think or do, or what the first person thinks the others are thinking or doing. As Gordon Allport wrote many years ago, social psychology is "an attempt to understand and explain how the thought, feeling, and behavior of individuals are influenced by the actual, imagined, or implied presence of others." That's less a definition than a thumbnail description, but having looked at some examples, we begin to see what he meant and to appreciate the difficulty of putting it into words.
A Case of Multiple Fatherhood
Social psychology is both a recent area of knowledge and an ancient one. It emerged in its modern form more than eighty years ago and did not catch on until the 1950s, but philosophers and protopsychologists had long been constructing theories about how our interactions with others affect our mental life and, conversely, how our mental processes and personality affect our social behavior. One could make the case, according to Allport, that Plato was the founder of social psychology, or if not he, then Aristotle, or if not he, then any of a number of later political philosophers such as Hobbes and Bentham, although what all these ancestors contributed was thoughtful musing, not science. The claims of paternity grow more numerous but equally shaky in the nineteenth and early twentieth centuries: Auguste Comte, Herbert Spencer, Émile Durkheim, the American sociologists Charles Horton Cooley, William Sumner, and many others all wrote about social psychological issues, but their work was still largely armchair philosophizing, not empirical science.
In 1897, however, an American psychologist named Norman Triplett conducted the first empirical test of a commonsense sociopsychological hypothesis. He had read that bicycle racers reach higher top speeds when paced by others than when cycling alone, and it occurred to him that perhaps it is generally true that an individual's performance is affected by the presence of others. To test his hypothesis, he had children of ten and twelve wind fishing reels alone and in pairs (but did not tell them what he was looking for) and found that many of them did indeed wind faster when another child was present.
Triplett did more than verify his hypothesis; he created a crude model of social psychological investigation. His method, an experiment that simulates a real-world situation, conceals from the volunteers what the researcher is looking for, and compares the effects of the presence and absence of a variable (in this case, observers), became the dominant mode of social psychological research. Moreover, his topic, "social facilitation" (the positive effect of observers on an individual's performance), remained the major problem-Allport even said the only one-studied by social psychologists for three decades.
(The basic problem-the "situational norm" induced by the presence of some variable in the environment-has continued to be of interest to the present. In studies reported in 2003, a research team found that participants who were told they would be visiting a library and then were asked to read words on a screen spoke softly; when told they would be visiting a railroad station, they spoke more loudly. When participants expected to be eating in a fancy restaurant, they ate more politely than usual, even biting a biscuit more neatly than other participants who did not expect to be going to the fancy restaurant.)
Social psychology gained a foothold in psychology in 1924 with the publication of Floyd Allport's Social Psychology, a book that became widely used in social psychology classes at American universities. Either because of that book or because of a spontaneous expansion of interest, social psychology research caught on. By the 1930s the new discipline was clearly distinguished from its sociological origins when Experimental Social Psychology by Gardner and Lois Barclay Murphy and the Handbook of Social Psychology edited by Carl Murchison both defined it as an experimental discipline separate from the more naturalistic observational techniques used in sociology.
Up to this point, social facilitation (Triplett's interest) had remained the central topic of social psychology research, but the field expanded significantly in the 1930s when Muzafer Sherif (1906-1988), a Turk who took graduate training in psychology at Harvard and Columbia, studied the influence of other people on one's judgment, not on one's performance. Sherif had his subjects, one at a time, sit in a dark room, stare at a tiny light, and tell him when it started to move and how far it moved. (They were unaware that the apparent movement is a common visual illusion.) Sherif found that each person, when tested alone, had a characteristic impression of how far the light moved, but when exposed to the opinions of others tended to be swayed by the group norm. His experiments strikingly showed the vulnerability of individual judgment to social opinion and pointed the way for hundreds of conformity experiments in the following two decades. (Asch's famous length-of-lines conformity experiment, described above, came nearly twenty years later.)
An even more significant expansion of the domain of social psychology was a result of the rise of Nazism in Germany. A number of Jewish psychologists immigrated to America in the 1930s, among them some who had broader views of social psychology than those in the American tradition. Among the refugees was the man generally acknowledged to be the real father of the field, Kurt Lewin, of whom we heard earlier; he was the Gestaltist at the University of Berlin whose graduate student, Bluma Zeigarnik, conducted an experiment to test his hypothesis that uncompleted tasks are remembered better than completed ones. (He was right.) Although Lewin's name never became familiar to the public and is unknown today except to psychologists and psychology students, Edward Chace Tolman said of him after his death in 1947:
Freud the clinician and Lewin the experimentalist-these are the two men whose names will stand out before all others in the history of our psychological era. For it is their contrasting but complementary insights which first made psychology a science applicable to real human beings and real human society.
Lewin, heavily bespectacled and scholarly looking, was a rarity: a genius who was extremely sociable and friendly. He loved and encouraged impassioned, free-wheeling group discussions of psychological problems with colleagues or graduate students; at such times his mind was an intellectual flintstone that cast off showers of sparks - hypotheses that he freely handed to others and ideas for intriguing experiments that he often was happy to have them carry out and take credit for.
Lewin was born in 1890 in a village in Posen (then part of Prussia, today part of Poland), where his family ran a small general store. He did poorly in school and showed no sign of intellectual gifts, perhaps because of the anti-Semitism of his schoolmates, but when he was fifteen his family moved to Berlin, and there he blossomed intellectually, became interested in psychology, and eventually earned a doctorate at the University of Berlin. Much of the course work in psychology, however, was in the Wundtian tradition. Lewin found the problems it dealt with petty and dull, yielding no understanding of human nature, and he hungered for a more meaningful kind of psychology. Shortly after he returned to the university from military service in World War I, Köhler became head of the institute and Wertheimer a faculty member, and Lewin found what he was looking for in the form of Gestalt theory.
His early Gestalt studies dealt with motivation and aspiration, but he soon moved on to apply Gestalt theory to social issues. Lewin conceived of social behavior in terms of "field theory," a way of visualizing the total Gestalt of forces that affect a person's social behavior. Each person, in this view, is surrounded by a "life space" or dynamic field of forces within which his or her needs and purposes interact with the influences of the environment. Social behavior can be schematized in terms of the tension and interplay of these forces and of the individual's tendency to maintain equilibrium among them or to restore equilibrium when it has been disturbed.
To portray these interactions, Lewin was forever drawing "Jordan curves" - ovals representing life spaces - on blackboards, scraps of paper, in the dust, or in the snow, and diagramming within them the push and pull of the forces in social situations. His students at Berlin called the ovals "Lewin's eggs"; later, his students at MIT called them "Lewin's bathtubs"; still later those at the University of Iowa called them "Lewin's potatoes." Whether eggs, bathtubs, or potatoes, they pictured the processes taking place within the small, face-to-face group, the segment of reality that Lewin saw as the territory of social psychology.
Although students at Berlin flocked to Lewin's lectures and research programs, like many another Jewish scholar he made little progress up the academic ladder. But his brilliant writing about field theory, particularly as applied to interpersonal conflicts and child development, brought him an invitation in 1929 to lecture at Yale and another in 1932 to spend six months as a visiting professor at Stanford. In 1933, shortly after Hitler became chancellor of Germany, Lewin resigned from the University of Berlin and with the help of American colleagues got an interim appointment at Cornell and later a permanent one at the University of Iowa.
In 1944, realizing a long-held ambition, he set up his own social psychology institute, the Research Center for Group Dynamics, at the Massachusetts Institute of Technology, and there assembled a first-rate staff and a group of top-notch students. It became the primary training center for mainstream American social psychology. In 1947, only three years later, Lewin, then fifty-seven, died of a heart attack; the Research Center for Group Dynamics soon moved to the University of Michigan, and there and elsewhere his former students continued to promulgate his ideas and methods.
Lewin's boldly imaginative experimental style, going far beyond that of earlier social psychologists, became the most salient characteristic of the field. A study inspired by his experience of Nazi dictatorship and passionate admiration of American democracy illustrates the point. To explore the effects of autocratic and democratic leadership on people, Lewin and two of his graduate students, Ronald Lippitt and Ralph White, created a number of clubs for eleven-year-old boys. They supplied each club with an adult leader to help with crafts, games, and other activities, and had each leader adopt one of three styles: autocratic, democratic, or laissez-faire. The boys in groups with autocratic leaders soon became either hostile or passive; those with democratic leaders became friendly and cooperative, and those with laissez-faire leaders became friendly but apathetic and disinclined to get things done. Lewin was unabashedly proud of the results, which confirmed his belief in the deleterious effect of autocratic leadership and the salutary effect of democratic leadership on human behavior.
It was topics and experiments like this that account for Lewin's powerful impact on social psychology. (Field theory enabled him to conceive of such research, but it never became central to the discipline.) Leon Festinger (1919-1989), Lewin's student, colleague, and intellectual heir, has said that Lewin's major contribution was twofold. One part was his gifted choice of interesting or important problems; it was largely through him that social psychology began exploring group cohesiveness, group decision making, authoritarian versus democratic leadership, techniques of attitude change, and conflict resolution. The other part was his "insistence on trying to create, in the laboratory, powerful social situations that made big differences" and his extraordinary inventiveness of ways to do so.
Despite Lewin's catalytic influence, for some years social psychology gained a foothold only in a handful of large metropolitan universities. Elsewhere, behaviorism was still king, and its adherents found social psychology too concerned with mental processes to be acceptable. But during World War II the needs of the military gave rise to several important social-psychological studies of soldier morale and behavior, and in the postwar years a number of social influences and problems brought about a surge of interest in the young discipline. Among them: the increasing mobility of the American population and the many social and interpersonal problems that it created; the search in the expanding business world for new and more persuasive sales techniques; the effort by social scientists to comprehend Nazi genocide and, more broadly, the sources and control of aggression; the gradual return of cognitivism to psychology; the rise of Senator McCarthy, which stimulated interest in the phenomenon of conformity; and incessant international negotiations, which turned social psychologists' attention to group dynamics and bargaining theory.
During the 1950s, social psychology expanded explosively and soon was offered by virtually every university psychology department in the United States. The rebelliousness of American youth in the 1960s, the disruptions caused by the Vietnam War, the activism of blacks, women, and gays, and other social problems made it an increasingly pertinent field of study. All too often, however, when businessmen and legislators turned to social psychologists for answers, they were exasperated at hearing that social psychologists were only beginning their work and had no ready answers. Yet it was not long before the data the researchers were gathering did have profound effects on American society, as a single example attests. The United States Supreme Court, in its 1954 Brown v. Board of Education decision, said that the evidence of "modern authority" showed that Negro children were harmed by segregated education, and cited numerous social-psychological studies demonstrating that segregated schooling, even if equal, left Negro children with a sense of inferiority, low self-esteem, and hostility toward themselves. Lewin, had he been alive, would surely have been proud of his offspring.
Many social psychologists feel that their field is unusually subject to fads; many "hot topics" have come and gone in its fifty-odd years as a leading discipline, and certain subjects that once seemed the very essence of social psychology have been relegated to storage.
The main reason, however, is not faddism so much as the nature of social psychology. In most other sciences knowledge about a particular group of phenomena accumulates and deepens, but social psychology deals with a range of problems that have little in common and do not add up. In consequence, many a phenomenon has captured the attention of social psychologists, been intensively studied, and essentially explained. When only details remain to be filled in, for all intents and purposes the file is marked "Solved" and the case closed.
Herewith four famous closed cases.
Cognitive dissonance was without question the most influential theory in social psychology and the dominating subject in the field's journals from the late 1950s to the early 1970s. Thereafter it slowly lost its position as the center of attention and today is an accepted body of knowledge but no longer an area of active research, although a number of recent studies apply the theory to special problems.
Cognitive dissonance theory says that the human being feels tension and discomfort when holding inconsistent ideas (for instance, "So-and-so is a windbag and a bore" but "I need So-and-so as a friend and ally"), and will seek ways to decrease that dissonance ("So-and-so isn't so bad, once you get to know him," or "I don't really need him; I can get along fine without him").
In the 1930s, Lewin had come close to the subject when he explored how a person's attitudes are changed by his or her being a member of a group that reaches a decision, and how such a person will tend to hold fast to that decision, ignoring later information that conflicts with it. Lewin's student Leon Festinger carried this line of inquiry further and developed the theory of cognitive dissonance.
As a young graduate student, Festinger had gone to the University of Iowa in 1939 expressly to study under Lewin - not social psychology, in which he had no interest, but Lewin's early work on motivation and aspiration. Under Lewin's spell, however, he was drawn into social psychology and in 1945 became an assistant professor at Lewin's new Research Center for Group Dynamics at MIT.
For some years after Lewin's death, Festinger, who moved to the University of Minnesota, wore Lewin's mantle, thanks to his fine intellect, the excitement he brought to teaching, and the daring with which he undertook research that overstepped the boundaries of propriety to obtain otherwise unavailable data. In part he was emulating Lewin's boldness, but in part expressing his own personality. A peppery fellow of moderate size and a lover of cribbage and chess, both of which he played with fierce competitiveness, Festinger had the tough, brash, aggressive spirit so often found in men who grew up between the world wars on the tempestuous Lower East Side of New York.
A prime instance of Festinger's boldness and unconventionality was a research project in which he and two young colleagues, Henry W. Riecken and Stanley Schachter (who had been his student at MIT), acted as undercover agents for seven weeks. They had read a newspaper story, in September 1954, about a Mrs. Marian Keech (not her real name), a housewife in a town not far from Minneapolis, who claimed that for nearly a year she had been receiving messages from superior beings she identified as the Guardians on the planet Clarion. (The messages came in the form of automatic writing that she produced while in a trance.) She revealed to the press that on December 21, according to the Guardians, a great flood would cover the northern hemisphere, and all who lived there, except a chosen few, would perish.
Festinger, who was already working out his theory, and his junior colleagues saw a golden opportunity to study cognitive dissonance at first hand. As they stated their hypothesis in When Prophecy Fails, the report they published in 1956:
Suppose an individual believes something with his whole heart; suppose further that he has a commitment to this belief, that he has taken irrevocable actions because of it; finally, suppose that he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong: what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before.
The three social psychologists felt that Mrs. Keech's public statements and the ensuing events would be an invaluable real-life demonstration of the development of a paradoxical response to contradictory evidence. They called on Mrs. Keech, introducing themselves as a businessman and two friends who were impressed by her story and wanted to know more. Riecken gave his real name, but Schachter, who had an irrepressible sense of humor, introduced himself as Leon Festinger, leaving a stunned Festinger no option but to say he was Stanley Schachter and maintain that identity in all his contacts with Mrs. Keech and her followers.
Mrs. Keech, they learned, had already gathered a small coterie that met regularly, was making plans for the future, and was awaiting final directions from the planet Clarion. The team drew up a research plan calling for the three of them, plus five student assistants, to be "covert participant observers." In the guise of true believers, they visited cult members and took part in their meetings sixty times over a seven-week period. Some visits lasted only an hour or two, but others involved nonstop séance-like sessions running twelve to fourteen hours. The research was physically and emotionally exhausting, partly because of the strain of concealing their reactions to the absurd goings-on at the meetings, and partly because of the difficulty of making a record of the words of a Guardian as voiced by Mrs. Keech and others in their trances. As Festinger later recalled:
At intervals infrequent enough not to arouse comment, each of us would go to the toilet to make notes in private-that was the only place in that house where there was any privacy. Periodically, one or two of us together would announce we were taking a short walk to get some fresh air. We would then dash madly to the hotel room to dictate from our notes. . . By the time the study was terminated we all literally collapsed from fatigue.
At last Mrs. Keech received the long-awaited message. Spaceships would come to a certain place at a specific time to rescue the believers and take them to safety. But the spaceships failed to arrive either then or at several later promised times, and December 21 came and went without any flood.
At that point, Mrs. Keech received word that, thanks to the goodness and light created by the believers, God had decided to call off the disaster and spare the world. Some of the members, particularly those who had been doubtful or unsure, could not reconcile the failure of the prophecies with their beliefs and dropped out, but the members who had been most deeply committed-some had even quit their jobs and sold their possessions-behaved just as the researchers had hypothesized. They came away more strongly convinced than ever of the truth of Mrs. Keech's revelations, thereby eliminating the conflict between what they believed and the disappointing reality.
Festinger went on to develop and publish his theory of cognitive dissonance in 1957. It immediately became the central problem of social psychology and remained the principal topic of experimental research for over fifteen years. In 1959 he and a colleague, J. Merrill Carlsmith, conducted what is usually cited as the classic cognitive dissonance experiment. They artfully deceived their volunteer subjects about the purpose of the study, since the subjects, had they known the researchers wanted to see whether they would change their minds about some issue to minimize cognitive dissonance, might well have felt embarrassed to do so.
Festinger and Carlsmith had their undergraduate male subjects perform an extremely tedious task: they had to put a dozen spools into a tray, remove them, put them back, and repeat the process for half an hour. Then they had to turn each of forty-eight pegs in a board a quarter turn clockwise, then another quarter turn, and so on, again for half an hour. After each subject had finished, one of the researchers told him that the purpose of the experiment was to find out whether people's expectation of how interesting a task is would affect how well they performed it, and that he had been in the "no-expectation group" but others would be told that the task was enjoyable. Unfortunately, the researcher went on, the assistant who was supposed to tell that to the next subject had just called in to say he couldn't make it. The researcher said he needed someone to take the assistant's place and asked the subject to help out. Some subjects were offered $1 to do so, others $20.
Nearly all of them agreed to tell what was obviously a lie to the next subject (who, in reality, was a confederate). After they had done so, the subjects were asked how enjoyable they themselves had found the task. Since it had unquestionably been boring, lying about it to someone else created a condition of cognitive dissonance ("I lied to someone else. But I'm not that kind of person"). The crucial question was whether the size of the payment they had received led them to reduce dissonance by deciding that the task had really been enjoyable.
Intuitively, one might expect that those who got $20-a substantial sum in 1959-would be more likely to change their opinion of the task than those who got a dollar. But Festinger and Carlsmith predicted the opposite. The subjects who got $20 would have a solid reward to justify their lying, but those who got a dollar would have so little justification for lying that they would still feel dissonance, and would relieve it by convincing themselves that the task had been interesting and they had not really lied. Which is exactly what the results showed.
Festinger and Carlsmith were exhilarated; social psychologists find it particularly exciting to discover something that is not obvious or that contradicts usual impressions. As Schachter has often told his students, it's a waste of time to study bubbe psychology; that's the kind that when you tell your grandmother (bubbe, in Yiddish) what you found, she says, "So what else is new? They pay you for this?"
Cognitive dissonance theory stirred up a good deal of hostile criticism, which Festinger scathingly dismissed as "garbage" and attributed to the fact that the theory presented a "not very idealistic" image of humankind. Whatever the motives of the critics, a flood of experiments showed cognitive dissonance to be a robust (consistently reproducible) finding, and a fertile theory as well. Reminiscing, the eminent social psychologist Elliot Aronson said, "All we had to do was sit around and we could generate ten good hypotheses in an evening. . . the kinds of hypotheses that no one would even have dreamed of a few years earlier." The theory also explained a number of kinds of social behavior that could not be accounted for within behaviorist theory. Here are a few examples, all verified by experiments:
- The harder it is to gain membership in a group (as, for instance, when there is grueling screening or hazing), the more highly the group is valued by a person who is accepted. We convince ourselves we love what has caused us pain in order to feel that the pain was worthwhile.
- When people behave in ways they are likely to see as either stupid or immoral, they change their attitudes so as to believe that their behavior is sensible and justified. Smokers, for instance, say that the evidence about smoking and cancer is inconclusive; students who cheat say that everyone else cheats and therefore they have to in order not to be at a disadvantage.
- People who hold opposing views are apt to interpret the same news reports or factual material about the disputed subject quite differently; each sees and remembers what supports his views but glosses over and forgets what would create dissonance.
- When people who think of themselves as reasonably humane are in a situation where they hurt innocent others, as soldiers often harm civilians in the course of combat, they reduce the resulting dissonance by derogating their victims ("Those SOBs are helping the enemy. They'd knife you in the back if they could"). When people benefit from social inequities that cause others to suffer, they often tell themselves that the sufferers aren't capable of anything better, are content with their way of life, and are dirty, lazy, and immoral.
Finally, one case of a "natural experiment" that illustrates the human tendency to reduce cognitive dissonance by rationalization:
- After a 1983 California earthquake, the city of Santa Cruz, in compliance with a new California law, commissioned Dave Steeves, a well-regarded engineer, to assess how local buildings would fare in a major earthquake. Steeves identified 175 buildings that would suffer severe damage, many of them in the prime downtown shopping area. The city council, aghast at the report and what it implied about the work that would have to be done, dismissed his findings and voted unanimously to wait for clarification of the state law. Steeves was called an alarmist and his report a threat to the well-being of the town, and no further action was taken. On October 17, 1989, an earthquake of magnitude 7.1 hit just outside Santa Cruz. Three hundred homes were destroyed and five thousand seriously damaged in Santa Cruz County; the downtown area was reduced to ruins; five people were killed and two thousand injured.
Because of its explanatory power, cognitive dissonance theory easily survived all attacks. Twenty-five years after Festinger first advanced it and sixteen years after he left social psychology to study archaeology, a survey of social psychologists found that 79 percent considered him the person who had contributed most to their field. Today, a generation later, Festinger's name and fame have dimmed, but cognitive dissonance remains a bedrock principle of social psychological theory. But one criticism of cognitive dissonance research has been difficult to rebut. The researchers almost always gulled the volunteers into doing things they would not ordinarily do (such as lying for money), subjected them without their consent to strenuous or embarrassing experiences, or revealed to them aspects of themselves that damaged their self-esteem. The investigators "debriefed" subjects after the experiment, explaining the real purpose, the reason deception had been necessary, and the benefit to science of their participation. This was intended to restore to them their sense of well-being, but critics have insisted that it is unethical to subject other people to such experiences without their knowledge and consent.
The Psychology of Imprisonment
These ethical problems were not peculiar to dissonance studies; they existed in more severe form in other kinds of sociopsychological research. A famous case in point is an experiment conducted in 1971 by Professor Philip G. Zimbardo, a social psychologist at Stanford University, and three colleagues. To study the social psychology of imprisonment, they enlisted undergraduate men as volunteers in a simulation of prison life, in which each would play the part of a guard or a prisoner. All volunteers were interviewed and given personality tests; twenty-one middle-class whites were selected after being rated emotionally stable, mature, and law-abiding. By the flip of a coin, ten were designated as prisoners, eleven as guards, for the duration of a two-week experiment.
The "prisoners" were "arrested" by police one quiet Sunday morning, handcuffed, booked at the police station, taken to the "prison" (a set of cells built in the basement of the Stanford psychology building), and there stripped, searched, deloused, and issued uniforms. The guards were supplied with billy clubs, handcuffs, whistles, and keys to the cells; they were told that their job was to maintain "law and order" in the prison and that they could devise their own methods of prisoner control. The warden (a colleague of Zimbardo's) and guards drew up a list of sixteen rules the prisoners had to obey: they were to be silent at meals, rest periods, and after lights out; they were to eat at mealtimes but no other time; they were to address one another by their ID number and any guard as Mr. Correctional Officer, and so on. Violation of any rule could result in punishment.
The relations between guards and prisoners quickly assumed a classic pattern: the guards began to think of the prisoners as inferior and dangerous, the prisoners to view the guards as bullies and sadists. As one guard reported:
I was surprised at myself. . . I made them call each other names and clean out the toilets with their bare hands. I practically considered the prisoners cattle, and I kept thinking I have to watch out for them in case they try something.
In a few days the prisoners organized a rebellion. They tore off their ID numbers and barricaded themselves inside their cells by shoving beds against the doors. The guards sprayed them with a fire extinguisher to drive them back from the doors, burst into their cells, stripped them, took away their beds, and in general thoroughly intimidated them.
The guards, from that point on, kept making up additional rules, waking the prisoners frequently at night for head counts, forcing them to perform tedious and useless tasks, and punishing them for "infractions." The prisoners, humiliated, became obsessed by the unfairness of their treatment. Some grew disturbed, one so much so that by the fifth day the experimenters began to consider releasing him before the end of the experiment.
The rapid development of sadism in the guards was exemplified by the comments of one of them who, before the experiment, said that he was a pacifist, was nonaggressive, and could not imagine himself maltreating another person. By the fifth day he noted in his diary:
I have singled him [one prisoner] out for special abuse both because he begs for it and because I simply don't like him. . . The new prisoner (416) refuses to eat his sausage. . . I decided to force feed him, but he wouldn't eat. I let the food slide down his face. I didn't believe it was me doing it. I hated myself for making him eat but I hated him more for not eating.
Zimbardo and his colleagues had not expected so rapid a transformation in either group of volunteers and later wrote in a report:
What was most surprising about the outcome of this simulated prison experience was the ease with which sadistic behavior could be elicited from quite normal young men, and the contagious spread of emotional pathology among those carefully selected precisely for their emotional stability.
On the sixth day the researchers abruptly terminated the experiment for the good of all concerned. They felt, however, that it had been valuable; it had shown how easily "normal, healthy, educated young men could be so radically transformed under the institutional pressures of a 'prison environment.' "
That finding may have been important, but in the eyes of many ethicists the experiment was grossly unethical. It had imposed on its volunteers physical and emotional stresses that they had not anticipated or agreed to undergo. In so doing, it had violated the principle, affirmed by the Supreme Court in 1914, that "every human being of adult years and sound mind has a right to determine what shall be done with his own body." Because of the ethical problems, the prison experiment has not been replicated; it is a closed case.
Even this was bland in comparison with another experiment, also of major value, and also now a closed case. Let us open the file and see what was learned, and by what extraordinary means.
In the aftermath of the Holocaust, many behavioral scientists sought to understand how so many normal, civilized Germans could have behaved toward other human beings with such incomprehensible savagery. A massive study published in 1950, carried out by an interdisciplinary team with a psychoanalytic orientation, ascribed prejudice and ethnic hatred to the "authoritarian personality," an outgrowth of particular kinds of parenting and childhood experience. But social psychologists found this too general an explanation; they thought the answer more likely to involve a special social situation that caused ordinary people to commit out-of-character atrocities.
It was to explore this possibility that an advertisement in a New Haven newspaper in the early 1960s called for volunteers for a study of memory and learning at Yale University. Any adult male not in high school or college would be eligible, and participants would be paid $4 (roughly the equivalent of $25 today) an hour plus carfare.
Forty men ranging from twenty to fifty years old were selected and given separate appointments. Each was met at an impressive laboratory by a small, trim young man in a gray lab coat. Arriving at the same time was another "volunteer," a pleasant middle-aged man of Irish-American appearance. The man in the lab coat, the ostensible researcher, was actually a thirty-one-year-old high school biology teacher, and the middle-aged man was an accountant by profession. Both were accomplices of the social psychologist conducting the experiment, Stanley Milgram of Yale, and would act the parts he had scripted.
The researcher explained to the two men, the real volunteer and the false one, that he was studying the effect of punishment on learning. One of them would be the "teacher" and the other the "learner" in an experiment in which the teacher would give the learner an electric shock whenever he made an error. The two volunteers then drew slips of paper to see who would be which. The slip drawn by the "naive" volunteer read "Teacher." (To ensure this result, both slips read "Teacher," but the accomplice discarded his without showing it.)
The researcher then led the two subjects into a small room, where the learner was seated at a table, his arms strapped down, and electrodes attached to his wrists. He said he hoped the shocks wouldn't be too severe; he had a heart condition. The teacher was then taken into an adjoining room from which he could speak to and hear the learner but not see him. On a table was a large shiny metal box said to be a shock generator. On the front was a row of thirty switches, each marked with the voltage it delivered (ranging from 15 to 450) plus descriptive labels: "Slight Shock," "Moderate Shock," and so on, up to "Danger: Severe Shock" at 435, and finally two switches marked simply "XXX."
The teacher's role, the researcher explained, was to read a list of word pairs (such as blue, sky and dog, cat) to the learner, then test his memory by reading the first word of one pair and four possible second words, one of which was correct. The learner would indicate his choice by pushing a button lighting one of four bulbs in front of the teacher. Whenever he gave a wrong answer, the teacher was to depress a switch giving him a shock, starting at the lowest level. Each time the learner made an error, the teacher was to give him the next stronger shock.
At first the experiment proceeded easily and uneventfully; the learner would give some right answers and some wrong ones, the teacher would administer a mild shock after each wrong answer, and continue. But as the learner made more mistakes and the shocks became greater in intensity (the apparatus was fake, of course, and no shocks were delivered), the situation grew unpleasant. At 75 volts the learner grunted audibly; at 120 he called out that the shocks were becoming painful; at 150 volts he shouted, "Get me out of here. I refuse to go on!" Whenever the teacher wavered, the researcher, standing beside him, said, "Please continue." At 180 volts the learner called, "I can't stand the pain!" and at 270 he howled. When the teacher hesitated or balked, the researcher said, "The experiment requires that you continue." Later, when the learner was banging on the wall, or still later, when he was screaming, the researcher said sternly, "It is absolutely essential that you continue." Beyond 330, when there was only silence from the next room-to be interpreted as equivalent to an incorrect answer-the experimenter said, "You have no other choice; you must go on."
Astonishingly (Milgram himself was amazed), 63 percent of the teachers did go on, all the way. But they did not continue because they were sadists who enjoyed the agony they thought they were inflicting (standard personality tests showed no difference between the fully obedient subjects and those who at some point refused to continue); on the contrary, many of them suffered acutely while obeying the researcher's orders. As Milgram reported:
In a large number of cases the degree of tension reached extremes that are rarely seen in sociopsychological laboratory studies. Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh. . . A mature and initially poised businessman entered the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck who was rapidly approaching a point of nervous collapse... yet he continued to respond to every word of the experimenter, and obeyed to the end.
Milgram did not, alas, report any symptoms he himself may have had while watching his teachers suffer. A spirited, feisty little man, he gave no indication in his otherwise vivid account that he was ever distressed by his subjects' misery.
His interpretation of the results was that the situation, playing on cultural expectations, produced the phenomenon of obedience to authority. The volunteers entered the experiment in the role of cooperative and willing subjects, and the researcher played the part of the authority. In our society and many others, children are taught to obey authority and not to judge what the person in authority tells them to do. In the experiment, the teachers felt obliged to carry out orders; they could inflict pain and harm on an innocent human being because they felt that the researcher, not they themselves, was responsible for their actions.
In Milgram's opinion, his series of experiments went far to explain how so many otherwise normal Germans, Austrians, and Poles could have operated death camps or, at least, accepted the mass murder of the Jews, Gypsies, and other despised groups. (Adolf Eichmann said, when he was on trial in Israel, that he found his role in liquidating millions of Jews distasteful but that he had to carry out the orders of authority.)
Milgram validated his interpretation of the results by varying the script in a number of ways. In one variation, a phone call would summon the researcher away before he said anything to the teacher about the importance of continuing to ever higher shock levels; his place would be taken by a volunteer (another confederate) who seemed to hit on the idea of increasing the shocks as far as needed and kept telling the teacher to continue. But he was a substitute, not the real authority; in this version of the experiment only 20 percent of the teachers went all the way. Milgram also varied the composition of the team. Instead of an affable, pudgy, middle-aged learner and a trim, stern, young researcher, he reversed the personality types. In that condition, the proportion of teachers going all the way decreased but only to 50 percent. Apparently, the roles of authority and victim, not the personalities of the persons who played the parts, were the crucial factor.
A disturbing adjunct to Milgram's results was his investigation of how people thought they would behave in the situation. He described the experimental setup in detail to groups of college students, behavioral scientists, psychiatrists, and laymen, and asked them at what level of shock people like themselves would refuse to go on. Despite the differences in their backgrounds, all groups said people like themselves would defy the experimenter and break off at about 150 volts, when the victim asked to be released. Milgram also asked a group of undergraduates at what level one should disobey; again the average answer was about 150 volts. Thus, neither people's expectations of how they would behave nor their moral views of how they should behave had anything to do with how they actually behaved in an authority-dominated situation.
Milgram's obedience study attracted immense attention and won the 1964 award of the American Association for the Advancement of Science for sociopsychological research. (In 1984, when Milgram died of a heart attack at fifty-one, Roger Brown called him "perhaps the most gifted experimentalist in the social psychology of our time.") Within a decade or so, 130 similar studies had been undertaken, including a number in other countries. Most of them confirmed and enlarged Milgram's findings, and for some years his procedure, or variations of it, was the principal one used in studies of obedience. But for more than two decades no researcher has used such methods, or would dare to, as a result of historical developments we'll look at shortly.
The Bystander Effect
In March 1964, a murder in Kew Gardens, in New York City's borough of Queens, made the front page of the New York Times and shocked the nation, although there was nothing memorable about the victim, murderer, or method. Kitty Genovese, a young bar manager on her way home at 3 A.M., was stabbed to death by Winston Moseley, a business machine operator who did not know her, and who had previously killed two other women. What made the crime big news was that the attack lasted half an hour (Moseley stabbed Genovese, left, came back a few minutes later and stabbed her again, left again, and returned to attack her once more), during which time she repeatedly screamed and called for help, and was heard and seen by thirty-eight people looking out the windows of their apartments. Not one tried to defend her, came to help when she lay bleeding, or even telephoned the police. (One finally did call-after she was dead.)
News commentators and other pundits interpreted the inaction of the thirty-eight witnesses as evidence of the alienation and inhumanity of modern city dwellers, especially New Yorkers. But two young social psychologists living in the city, neither one a native New Yorker, were troubled by these glib condemnations. John Darley, an assistant professor at New York University, and Bibb Latane, an instructor at Columbia University who had been a student of Stanley Schachter's, met at a party soon after the murder and found that they had something in common. Though unlike in many ways-Darley was a dark-haired, urbane, Ivy League type; Latane a lanky, thatch-haired fellow with a Southern country-boy accent and manner-they both felt, as social psychologists, that there had to be a better explanation of the witnesses' inactivity.
They talked about it for hours that night and had a joint flash of inspiration. As Latane recalls:
The newspapers, TV, everybody, was carrying on about the fact that thirty-eight people witnessed the crime and nobody did anything, as if that were far harder to understand than if one or two had witnessed it and done nothing. And we suddenly had an insight: maybe it was the very fact that there were thirty-eight that accounted for their inactivity. It's an old trick in social psychology to turn a phenomenon around and see if what you thought was the effect was actually the cause. Maybe each of the thirty-eight knew that a lot of other people were watching-and that was why they did nothing.
Late though it was, the two immediately began designing an experiment to test their hypothesis. Many weeks later, after much planning and preparation, they launched an extended investigation of the responses of bystanders, under varied circumstances, to an emergency.
In the study, seventy-two NYU students in introductory psychology courses took part in an unspecified experiment in order to fulfill a class requirement. Each arriving participant was told by Darley, Latane, or a research assistant that the experiment involved a discussion of the personal problems of urban university students. The session was to be conducted in two-person, three-person, or six-person groups. To minimize embarrassment when revealing personal matters, they would be in separate cubicles and would communicate over an intercom system, taking turns and talking in an arranged sequence.
Whether the naive participant was supposedly talking to only one other person or to two or five others-supposedly, because in fact everything he heard others say was a tape-recorded script-the first voice was always that of a male student who told of difficulty adjusting to life in New York and to his studies, and confided that under stress he was prone to epileptic seizures. The voice was that of Richard Nisbett, then a graduate student at Columbia University and today a professor at the University of Michigan, who in tryouts had proved the best actor. The second time it was his turn to talk, he started to sound disordered and incoherent; he stammered and panted, said that he had "one of these things coming on," started choking and pleading for help, gasped, "I'm gonna die-er-er-help-er-er-seizure-er," and, after more choking sounds, fell silent.
Of the participants who thought that they and the epileptic were the only ones talking to each other, 85 percent popped out of their cubicles to report the attack even before the victim fell silent; of those who thought four other people were also hearing the attack, only 31 percent did so. Later, when the students were asked whether the presence of others had influenced their response, they said no; they had been genuinely unaware of its powerful effect on them.
Darley and Latane now had a convincing sociopsychological explanation of the Kew Gardens phenomenon, which they called "the social inhibition of bystander intervention in emergencies," or, more simply, "the bystander effect." As they had hypothesized, it was the presence of other witnesses to an emergency that made for passivity in a bystander. The explanation of the bystander effect, they said, "may lie more in the bystander's response to other observers than in presumed personality deficiencies of 'apathetic' individuals."
They suggested later that three processes underlie the bystander effect: hesitancy to act in front of others until one knows whether helping or other action is appropriate; the feeling that the inactive others understand the situation and that nothing need be done; and, most important, "diffusion of responsibility," the feeling that, since others know of the emergency, one's own obligation to act is lessened. A number of later experiments by Latane and Darley, and by other researchers, confirmed that, depending on whether bystanders can see other bystanders, are seen by them, or merely know that there are others, one or another of these three processes is at work.
The Darley and Latane experiment aroused widespread interest and generated a crop of offspring. Over the next dozen years, fifty-six studies conducted in thirty laboratories presented apparent emergencies to a total of nearly six thousand naive subjects who were alone or in the presence of one, several, or many others. (Conclusion: The more bystanders, the greater the bystander effect.) The staged emergencies were of many kinds: a crash in the next room followed by the sound of a female moaning; a decently dressed young man with a cane (or, alternatively, a dirty young man smelling of whiskey) collapsing in a subway car and struggling unsuccessfully to rise; a staged theft of books; the experimenter himself fainting; and many others. In forty-eight of the fifty-six studies, the bystander effect was clearly demonstrated; overall, about half the people who were alone when an emergency occurred offered help, as opposed to 22 percent of those who saw or heard emergencies in the presence of others. Since there is less than one chance in fifty-one million that this aggregate result is accidental, the bystander effect is one of the best-established hypotheses of social psychology. And having been so thoroughly established, with the effects of so many conditions separately measured, it has ceased in recent years to be the subject of much research and has become, in effect, another closed case.
However, research on helping behavior in general (the social and psychological factors that either favor or inhibit nonemergency altruistic acts) continued to grow in volume until the 1980s and has only lately leveled off. Helping behavior is part of prosocial behavior, which, during the idealistic 1960s, began to replace social psychology's postwar obsession with aggressive behavior, and it remains an important area of research in the discipline.
A Note on Deceptive Research: One factor common to most of the closed cases dealt with above, and to a great many other research projects in social psychology, is the use of elaborately contrived deceptive scenarios. There is almost nothing of the sort in experimental research on personality, development, or most other fields of present-day psychology, but for many years deceptive experimentation was the essence of social psychological research.
In the years following the Nuremberg Trials, criticism of experimentation with human subjects without their knowledge and consent was on the rise, and deceptive experimentation by biomedical researchers and social psychologists came under heavy attack. The Milgram obedience experiment drew particularly intense fire, not only because it inflicted suffering on people without forewarning them and obtaining their consent, but because it might have done them lasting psychological harm by showing them a detestable side of themselves. Milgram, professing to be "totally astonished" by the criticism, asked a sample of his former subjects how they felt about the experience, and reported that 84 percent said they were glad they had taken part in the experiment, 15 percent were neutral, and only 1 percent regretted having participated.
But in the era of expanding civil rights, the objections on ethical grounds to research of this sort triumphed. In 1971 the Department of Health, Education, and Welfare adopted regulations governing eligibility for research grants that sharply curtailed the freedom of social psychologists and biomedical researchers to conduct experiments with naive subjects. In 1974 it tightened the rules still further; the right of persons to have nothing done to them without their informed consent was so strictly construed as to put an end not only to Milgram-type procedures but to many relatively painless and benign experiments relying on deception, and social psychologists abandoned a number of interesting topics that seemed no longer researchable.
Protests by the scientific community mounted all through the 1970s, and in 1981 the Department of Health and Human Services (successor to DHEW) eased the restrictions somewhat, allowing minor deception or withholding of information in experiments with human beings provided there was "minimum risk to the subject," the research "could not practicably be carried out" otherwise, and the benefit to humanity would outweigh the risk to the subjects. "Risk-benefit" calculations, made by review boards before a research proposal is considered eligible for a grant, have permitted deceptive research (though not of the Milgram obedience sort) to continue to the present. Deception is still used in about half of all social psychology experiments but in relatively harmless forms and contexts.
Still, many ethicists regard even innocuous deception as an unjustifiable invasion of human rights; they also claim it is unnecessary, since research can use nonexperimental methods, such as questionnaires, survey research, observation of natural situations, interviews, and so on. But while these methods are practical in many areas of psychology, they are less so, and sometimes are quite impractical, in social psychology.
For one thing, the evidence produced by such methods is largely correlational, and a correlation between factor X and factor Y means only that they are related in some way; it does not prove that one is the cause of the other. This is particularly true of sociopsychological phenomena, which involve a multiplicity of simultaneous factors, any of which may seem to be a cause of the effect under study but may actually be only a concurrent effect of some other cause. The experimental method, however, isolates a single factor, the "independent variable," and modifies it (for instance, by changing the number of bystanders present during an emergency). If this produces a change in the "dependent variable," the behavior being studied, one has rigorous proof of cause and effect. Such experimentation is comparable to a chemical experiment in which a single reagent is added to a solution and produces a measurable effect. As Elliot Aronson and two co-authors said in their classic Handbook of Social Psychology, "The experiment is unexcelled in its ability to provide unambiguous evidence about causation, to permit control over extraneous variables, and to allow for analytic exploration of the dimensions and parameters of a complex phenomenon."
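The correlation-versus-experiment argument can be made concrete with a small simulation (a sketch of the general principle, not of any study described here). A hidden third factor Z drives both X and Y, producing a strong correlation with no causation at all; when X is instead randomly assigned, as an experimenter would do with an independent variable, the spurious association vanishes.

```python
import random

random.seed(0)  # reproducibility

# A hidden factor Z drives both X and Y, so X and Y correlate
# strongly even though X has no causal effect on Y at all.
def observe(n=10_000):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)
        xs.append(z + random.gauss(0, 0.3))  # X caused by Z
        ys.append(z + random.gauss(0, 0.3))  # Y also caused by Z, not by X
    return xs, ys

def corr(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# An experiment manipulates the independent variable X directly,
# severing its link to Z; any association with Y that survived
# random assignment would be evidence of causation.
def experiment(n=10_000):
    xs, ys = [], []
    for _ in range(n):
        x = random.choice([0.0, 1.0])        # X randomly assigned
        z = random.gauss(0, 1)
        xs.append(x)
        ys.append(z + random.gauss(0, 0.3))  # Y still depends only on Z
    return xs, ys

print(round(corr(*observe()), 2))     # strong spurious correlation (about 0.9)
print(round(corr(*experiment()), 2))  # near zero: the correlation vanishes
```

The observational correlation is real but tells us nothing about what causes what; only the manipulation disentangles the two.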
For another thing, no matter how rigorously the experimenter controls and manipulates the experimental variables, he or she cannot control the multiple variables inside the human head unless the subjects are deceived. If the subjects know that the investigator wants to see how they react to the sound of someone falling off a ladder in an adjoining room, they are almost sure to behave more admirably than they otherwise might. If they know that the investigator's interest is not in increasing memory through punishment but in seeing at what point they refuse to inflict pain on another person, they are very likely to behave more nobly than they would if ignorant of the real purpose. And so, for many kinds of sociopsychological research, deceptive experimentation is a necessity.
Many social psychologists formerly prized it not just for this valid reason but for a less valid one. Carefully crafted deceptive experimentation was a challenge; the clever and intricate scenario was highly regarded, prestigious, and exciting. Deceptive research was in part a game, a magic show, a theatrical performance; Aronson has likened the thrill felt by the experimenter to that felt by a playwright who successfully recreates a piece of ordinary life. (Aronson and a colleague once even designed an experiment in which the naive subject was led to believe that she was the confederate playing a part in a cover story. In fact, her role as confederate was the actual cover story and the purportedly naive subject was the actual confederate.) In the 1960s and 1970s, by which time most undergraduates had heard about deceptive research, it was an achievement to be able still to mislead one's subjects and later debrief them.
During the 1980s and 1990s, however, the vogue for artful, ingenious, and daring deceptive experiments waned, although deceptive research remains a major device in the social psychologists' toolbox. Today most social psychologists are more prudent and cautious than were Festinger, Zimbardo, Milgram, Darley, and Latane, and yet the special quality of deceptive experimentation appeals to a certain kind of researcher. When one meets and talks to practitioners of such research, one gets the impression that they are a competitive, nosy, waggish, daring, stunt-loving, and exuberant lot, quite unlike such sobersides as Wundt, Pavlov, Binet, and Piaget.
Of the wide variety of topics in the vast, amorphous field of social psychology, some, as we have seen, are closed cases; others have been actively and continuously investigated for many decades; and many others have come to the fore more recently. The currently ongoing inquiries, though they cover a wide range of subjects, have one characteristic in common: relevance to human welfare. Nearly all are issues not only of scientific interest but of profound potential for the improvement of the human condition. We will look closely at two examples and briefly at a handful of others.
Over half a century ago social psychologists became interested in determining which factors promote cooperation rather than competition and whether people function more effectively in one kind of milieu than another. After a while, they redefined their subject as "conflict resolution" and their concern as the outcome when people compete, or when they cooperate, to achieve their goals.
Morton Deutsch, now a professor emeritus at Teachers College, Columbia University, was long the doyen of conflict-resolution research. He suspects that his interest in the subject may have its roots in his childhood. The fourth and youngest son of Polish-Jewish immigrants, he was always the underdog at home, an experience he transmuted into the lifelong study of social justice and methods for the peaceful resolution of conflict.
It took him a while to discover that this was his real interest. He became fascinated by psychology as a high school student when he read Freud and responded strongly to descriptions of emotional processes he had felt going on in himself, and in college he planned to become a clinical psychologist. But the social ferment of the 1930s and the upheavals of World War II gave him an even stronger interest in the study of social problems. After the war he sought out Kurt Lewin, whose magnetic personality and exciting ideas, particularly about social issues, convinced Deutsch to become a social psychologist. For his doctoral dissertation he studied conflict resolution, and continued to work in that area throughout his long career. The subject was congenial to his personality: unlike many other social psychologists, he is soft-spoken, kindly, and peace-loving, and as an experimenter relied largely on the use of games that involved neither deception nor discomfort for the participants.
A particular focus of his research was the behavior of people in "mixed-motive situations," such as labor-management disputes or disarmament negotiations, where each side seeks to benefit at the other's cost yet has interests in common with, and does not want to destroy, the other. In the 1950s he studied such situations intensively in the laboratory by means of his own modification of the Prisoner's Dilemma game. In Deutsch's version, each player seeks to win imaginary sums by making one of two choices, with results that depend on which of two choices the other player makes at the same time. Specifically, Player 1 can choose either X or Y, and Player 2 simultaneously can choose either A or B. Neither, in deciding what to do, knows what the other is going to do, but both know that every combination of their choices (XA, XB, YA, and YB) has different consequences. Player 1, for instance, thinks: "If I do X and he does A, we each win $9, but if he does B, I lose $10 and he wins $10. What if I do Y? If I do, and he does A, I win $10 and he loses $10, but if he does B, we each lose $9." Player 2 is confronted by similar dilemmas.
Since neither knows what the other is doing, each has to decide for himself what move might be best. But as in the original Prisoner's Dilemma, logical reasoning doesn't help; only if both players trust each other to do what is best for both will they choose X and A respectively, and each win $9. If either mistrusts the other or tries to do the best for himself without regard to the other's welfare, he may win $10 while the other loses that much, but is equally likely to lose $10 while the other wins that much, or, along with the other player, lose $9.
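The payoff structure can be written out directly from the dollar amounts above. A few lines of Python (a sketch using only the figures in the text) show why the game is a genuine dilemma: defecting (Y for Player 1, B for Player 2) is each player's best reply no matter what the other does, and yet the mutually trusting choice is the only one that is best for both together.

```python
# Payoff matrix for Deutsch's modified Prisoner's Dilemma, taken from
# the dollar amounts described in the text.
# Keys: (Player 1's choice, Player 2's choice) -> (P1 payoff, P2 payoff)
PAYOFFS = {
    ("X", "A"): (9, 9),     # mutual trust: both win $9
    ("X", "B"): (-10, 10),  # Player 1 trusts, Player 2 defects
    ("Y", "A"): (10, -10),  # Player 1 defects, Player 2 trusts
    ("Y", "B"): (-9, -9),   # mutual defection: both lose $9
}

def best_reply_p1(p2_choice):
    """Player 1's payoff-maximizing choice against a fixed choice by Player 2."""
    return max(["X", "Y"], key=lambda c: PAYOFFS[(c, p2_choice)][0])

# Y is Player 1's best reply whatever Player 2 does (the game is symmetric,
# so B is likewise Player 2's best reply) ...
assert best_reply_p1("A") == "Y" and best_reply_p1("B") == "Y"

# ... yet if both players follow that self-protective logic, the outcome
# (Y, B) leaves each $9 poorer, while mutual trust (X, A) yields the
# highest joint payoff of any cell.
joint_best = max(PAYOFFS, key=lambda k: sum(PAYOFFS[k]))
print(joint_best, PAYOFFS[("Y", "B")])  # ('X', 'A') (-9, -9)
```

This is exactly the tension Deutsch's instructions to the players manipulated: cooperative motivation steers both toward (X, A), individualistic or competitive motivation toward (Y, B).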
Deutsch varied the conditions under which his student volunteers played so as to simulate and test the effects of a number of real-life circumstances. To induce cooperative motivation, he told some volunteers, "You should consider yourself to be partners. You're interested in your partner's welfare as well as your own." To induce individualistic motivation, he told others, "Your only motivation should be to win as much as you can for yourself. You are to have no interest whatever in whether the other person wins or loses. This is not a competitive game."
Finally, to induce a competitive mind-set, he told still others, "Your motivation should be to win as much money as you can for yourself and also to do better than the other person. You want to make rather than lose money, but you also want to come out ahead of the other person."
Usually, players made their choices simultaneously without knowing each other's choice, but sometimes Deutsch had the first player choose and then transmit his choice to the second player, who would then make his choice. At other times, one or both players were allowed to change their choice when they heard what the other had chosen. And sometimes both were allowed to pass each other notes stating their intentions, such as, "I will cooperate, and I would like you to cooperate. That way we can both win."
As Deutsch had hypothesized, when the players were oriented to think of each other's welfare, they behaved in a trusting fashion (they chose X and A)-and did the best, collectively, even though either one would have been the big loser if the other had double-crossed him. But when they were told to try to win the most and to best the other, each usually assumed that the other was also out to win at his expense and made choices that were good for only one and bad for the other, or bad for both.
An encouraging result, Deutsch has said, is that "mutual trust can occur even under circumstances in which the people involved are clearly unconcerned with each other's welfare, provided that the characteristics of the situation are such that they lead one to expect one's trust to be fulfilled." That is the case when, for instance, one player is able to propose to the other a system of cooperation, with rules and penalties for infractions; or when one knows, before committing himself to a choice, what the other was going to do; or when one can influence the outcome for the other, with the result that it is not in the other's interest to violate an agreement.
Deutsch's use of the modified Prisoner's Dilemma game was a seminal event in social psychology. It led to hundreds of similar studies by others who modified and varied the conditions of play in order to explore a range of other factors that encouraged either cooperative or competitive styles of conflict resolution.
Deutsch himself soon moved on to another game that he and a research assistant, Robert M. Krauss, constructed to investigate how threats affect conflict resolution. Many people, during conflicts, believe they can induce the other side to cooperate by making threats. Embattled spouses hint at separation or divorce in an effort to change each other's behavior; management warns strikers that unless they come to terms it will close down the company; nations in conflict mass troops on the border or conduct weapons tests in the attempt to wrest concessions from the other side.
In Deutsch and Krauss's Acme-Bolt Trucking Game there are two players, both "truck drivers," one with the Acme Company, the other with the Bolt Company. A map represents the world in which they interact.
[Figure caption: Which works better, toughing it out or cooperating?]
Time is of the essence for each player. Quick trips mean profit; slow ones, loss. Each begins moving his truck at the same time and at the same speed (the positions appear on control panels), and each can choose to go by the circuitous route or the short one. The latter, although obviously preferable, involves a stretch of one-lane road that accommodates only one truck at a time. If both players choose that route at the same time, they reach a bumper-to-bumper deadlock and one or both have to back out, losing money. Obviously, the best course is for them to agree to take turns on the one-lane road, thus allowing both to make maximum and nearly equal profits.
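The arithmetic of taking turns can be made visible with a toy model. The dollar values below are hypothetical (the text gives only the qualitative structure: short trips profit, long trips barely pay, deadlocks lose money), but the comparison holds for any values with that shape.

```python
# Toy model of the Acme-Bolt Trucking Game. All dollar values are
# hypothetical; only the qualitative structure comes from the text.
SHORT_PROFIT = 6    # a fast trip via the one-lane road
LONG_PROFIT = 2     # a slow trip via the circuitous route
DEADLOCK_COST = 8   # lost backing out of a bumper-to-bumper standoff

def play_round(acme_short, bolt_short):
    """Return (acme, bolt) profit for one round, given each route choice."""
    if acme_short and bolt_short:  # both enter the one-lane road: deadlock
        return (SHORT_PROFIT - DEADLOCK_COST,
                SHORT_PROFIT - DEADLOCK_COST)
    return (SHORT_PROFIT if acme_short else LONG_PROFIT,
            SHORT_PROFIT if bolt_short else LONG_PROFIT)

def total(rounds, strategy):
    """Sum profits over many rounds; strategy(i) -> (acme_short, bolt_short)."""
    a = b = 0
    for i in range(rounds):
        pa, pb = play_round(*strategy(i))
        a, b = a + pa, b + pb
    return a, b

greedy = lambda i: (True, True)                  # both always take the short road
take_turns = lambda i: (i % 2 == 0, i % 2 == 1)  # alternate use of the one-lane strip

print(total(20, greedy))      # (-40, -40): twenty straight deadlocks
print(total(20, take_turns))  # (80, 80): near-maximal and equal profits
```

Under any such payoffs, stubbornly contesting the one-lane road is ruinous for both drivers, while alternating turns yields the "maximum and nearly equal profits" the text describes.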
To simulate threat making, Deutsch and Krauss gave each player control of a gate at his end of the one-lane strip. Each player, when bargaining, could threaten to close his gate to the other's truck unless the other agreed to his terms. The experiment consisted of twenty rounds of play in each of three conditions: bilateral threat (both players controlled gates), unilateral threat (only Acme controlled a gate), and no threat (neither player controlled a gate). Another important variable was communication. In the first experiment, the players communicated their intentions only by the moves they made; in a second one, they could talk to each other; in a third, they had to talk to each other at every trial. Since the goal of both players was to make as much money as possible, the total amount of money they made in twenty rounds of play was a direct measure of their success in resolving the conflict. The major findings:
- The players made the greatest profit (collectively) when neither could make a threat; fared less well in the unilateral threat condition; and, contrary to common belief, did worst when each could threaten the other. (Could our former belief in "mutual deterrence" as the way to avoid nuclear war have been an unthinkably expensive misjudgment from which, through luck, we did not suffer?)
- Freedom to communicate helped little toward reaching an agreement, particularly if each could threaten the other. Nor did the obligation to communicate if both could threaten, although it did if only one could.
- If the players were coached about communicating and told to try to offer fair proposals to each other, they reached agreement more swiftly than when not tutored.
- When both players could make threats, verbal communication following a deadlock led to a useful agreement more quickly than if they were allowed to communicate only before the deadlock. Apparently, becoming deadlocked was a motivating experience.
- The higher the stakes, the more difficulty they had reaching agreement.
- Finally, when the experiment was run by an attractive female research assistant instead of a male, the players-male undergraduates-acted in a more macho fashion, used their gates more frequently, and had significantly more trouble reaching cooperative agreements.
The Acme-Bolt Trucking Game instantly became a classic, was widely cited, and won the prestigious MAS award for social science research. Like many another ground-breaking study, it was the target of criticism, much of which questioned whether the variables it was based on are found in real life. But with time that question has been fairly well settled. The notion that a conflict can be thought of as a problem, and approached by thinking "What is the best way for us to solve it?" has been borne out by many other studies and has been turned into a number of programs of practical training. In 1986 Deutsch founded the International Center for Cooperation and Conflict Resolution at Teachers College, and this institute, the Program on Negotiation at Harvard Law School, the Conflict Resolution Consortium at the University of Colorado, and other similar centers have had considerable success in teaching constructive methods of settling disputes to negotiators for management and labor, divorce and corporate lawyers, government officials and legislators, teachers and students, tenants and landlords, family members, and others in conflict situations. If unresolved conflict is all too rife in our world, it's because all too few embattled individuals and peoples know about, or care about, peaceful resolutions of their disputes.
Research on the topic continues. Heidi Burgess, co-director with Guy Burgess of the Colorado Consortium, says that currently the areas of special interest are "the way people frame conflicts" and how this affects "the way the conflict process is conducted and/or resolved" (thus carrying on Deutsch's original work), and, branching out to other aspects of the field, "the impact of humiliation, anger, fear, and other strong emotions on conflicts and their resolution, the social-psychological effects of trauma, and approaches to trauma healing."
In the 1970s, cognitive dissonance was displaced as the leading topic of social psychology by a new subject, attribution. The term refers to the process by which we make inferences about the causes of events in our lives and the behavior of others.
Our attributions, whether correct or incorrect, are more responsible than objective reality for how we think, what we feel, and how we behave. Studies have shown, for example, that we commonly attribute greater warmth, sexiness, and other desirable traits to good-looking people than to homely people, and behave toward them accordingly. Again, those who ascribe women's lower employment status and pay scales to their fear of success and lack of assertiveness treat them differently from those who believe the causes are male prejudice, male dominance in the workplace, and traditional attitudes about woman's proper role. All these are examples of what social psychologists call the "fundamental attribution error," namely, "the strong tendency to interpret other people's behavior as due to internal (dispositional) causes rather than external (situational) ones."
The phenomenon of attribution is captured in an old joke. Two men, one a Protestant and the other a Catholic, see a priest entering a brothel. The Protestant smiles sourly at the evidence of the hypocrisy of Catholics, the Catholic smiles proudly at the evidence that a priest will go anywhere, even into a brothel, to save the soul of a dying Catholic.
For those who prefer a more serious example, attribution is illustrated by an early experiment conducted by two former students of Lewin's, John Thibaut and Henry Riecken. They assigned naive volunteers, one at a time, to work on a laboratory project, in the course of which each realized that he needed the help of two other people present, one a graduate student, the other a freshman. (Both were accomplices of the researchers.) Each volunteer sought their help and eventually got it. When the volunteers were later asked why they thought the others had helped them, most said the graduate student had helped because he wanted to, the freshman because he felt obliged to. These attributions were based not on anything they had experienced but on the volunteers' preconceptions about social status and power.
Much other research has examined an extremely serious form of attribution error-the reasons given by people as to why other people tolerated or committed acts of hatred against groups and even accepted genocide of the hated people. A 2003 study asked Jewish and German visitors to Anne Frank's home in Amsterdam, now a museum, whether the behavior of Germans during the Holocaust was due to their aggressive nature (an internal cause) or to the historical context in which the events occurred (external factors). By a considerable margin, the Jewish respondents attributed the German behavior to German aggressiveness, the German respondents to external factors (thus more or less absolving themselves of inner evil).
Fritz Heider, an Austrian psychologist, had suggested the concept of attribution as early as 1927, but little notice was taken of it for many years. In 1958, Heider, who had long since immigrated to the United States, broadened the concept, proposing in his Psychology of Interpersonal Relations that our perceptions of causality affect our social behavior, and that we respond not to actual stimuli but to what we think caused them. An example: If a wife is trying to annoy her husband by not talking to him, he may think either that she is worried or that he has done something to offend her, and his actions will depend not on the real reason for her behavior but on what he attributes it to. Heider also made a valuable distinction between those attributions which point toward external causes and those which point toward internal ones. This preceded by eight years Julian Rotter's important work on the attribution of internal versus external locus of control as a key personality trait.
Psychologists found Heider's ideas exciting, since knowledge of the factors that lead people to make attributions would greatly increase the predictability of human behavior. Interest in attribution grew throughout the 1960s, and by the 1970s it had become one of the hot topics in social psychology.
But more a topic than a theory; indeed, it was a mass of small theories, each a reworking in attributional terms of some previous explanation of a sociopsychological phenomenon. Cognitive dissonance was reinterpreted as the self-attributing of one's behavior to what one supposed one's beliefs and feelings must be. (If circumstances compel me to behave badly toward someone, I tell myself that he deserves it and attribute my behavior to my perception of his "real" nature.) The foot-in-the-door phenomenon was similarly explained anew: if I give a little to a fund-raiser the first time, and therefore give more a second time, it is because I attribute the first donation to my being a good and kindly person. And so on. Large areas of the territory of social psychology were invaded and laid claim to by the attributionists.
More important than the reinterpretation of previous findings was the multitude of new discoveries resulting from attribution research. A few notable examples:
- Lee Ross and two colleagues asked pairs of student volunteers to play a "quiz show game." One was appointed questioner, the other contestant. Questioners were asked to make up ten fairly difficult questions to which they knew the answers, then pose them to the contestants. (Contestants averaged about six correct answers.) Afterward, all participants were asked to rate one another's "general knowledge." Nearly all the contestants said they considered questioners more knowledgeable than themselves; so did impartial observers of the experiment. Even though they knew that questioners had asked questions they knew the answers to, they attributed superior general knowledge to them because of the role they had played.
- Investigators discovered that we commonly attribute the behavior of highly noticeable, different-looking, or strikingly dressed people to inherent qualities, and the behavior of forgettable or ordinary-looking people to external (situational) forces.
- People's reactions to the poor, alcoholics, accident victims, rape victims, and other unfortunates were explained in terms of the "just world hypothesis" -the need to believe that the world is orderly and just, and that it rewards us according to our deserts. This leads to the attribution of victims' misfortunes to their own carelessness, sloth, risk taking, seductiveness, and the like. Some studies have found that the worse the plight of the victim, the more he or she is seen as responsible for it.
- Male college students were asked by the psychologist Stuart Valins to look at slides of nude women and rate their attractiveness. While looking at them, each man, through earphones, heard what was supposedly his own heartbeat but was in reality recorded sound controlled by Valins. The lub-dub, lub-dub the volunteers heard was speeded up when they looked at certain slides but not others. When they later rated the appeal of the women, they named as particularly attractive those who seemed to have caused their heartbeat to speed up.
- Volunteers given false reports of how well they had done on tests tended to attribute supposed success to their own efforts or abilities, supposed failure to external causes such as the unfairness of the test, distracting noises, and so on.
- Researchers asked a group of nursery school children who had previously enjoyed drawing with multicolored felt-tip pens to play with them in order to receive Good Player awards. They asked a control group to play with the pens but said nothing about an award. Some time later, both groups were given access to the pens during free-play periods. The children who had received awards were much less interested in using them than the no-award group. The attributional interpretation: children who had expected a reward implicitly thought, "If I do it for the reward, I must not find drawing with the pens very interesting."
Since the 1980s attribution theory has been largely absorbed into the broader field of "social cognition," or the study of how people think about social issues, an expansive domain that includes such intriguing topics as self-fulfilling prophecies, how attitudes affect behavior, persuasion and attitude change, stereotyping and prejudice, and much more. Within that framework attribution remains a central concept in contemporary social psychology. It has added substantially to psychology's patchwork explanation of human behavior.
It has also yielded a number of practical applications in education (students are led to attribute their failures to lack of effort rather than inability), the treatment of depression (depressed persons are induced to minimize their sense of personal responsibility for negative events in their lives), the improvement of performance and motivation of fearful and defeatist persons (they are led to attribute feared failures to lack of practice and skill rather than to character defects), and so on.
Many other topics of both scientific interest and potential practical value have been explored by social psychologists in recent years and continue to be actively researched. Here are some of them, along with a few sample findings of each:
Interpersonal relations: Communication between spouses, friends, coworkers, and others, often ambiguous and misinterpreted, is usually much improved by experience in T-groups (T for training), therapy groups, and marital counseling. Participants are alerted to their own communication flaws and made more sensitive to what others are saying . . . Rules for clear and fair argument, taught to spouses in conflict, can considerably improve their communication and relationship. . . Only a fraction (possibly less than a tenth) of the information in emotional communications is conveyed by the words, the rest by body language, eye contact or avoidance, distance maintained between persons, and the like; nonverbal communication skills, too, can be taught. . . Guilt has social benefits; it protects and strengthens interpersonal relationships by, among other things, keeping people from acting in ways that would harm their relationships. . . Jealousy has adaptive functions, serving to keep mates together (signals of jealousy by one partner may inhibit the other from straying).
Mass communication and persuasion: Political, sales, and other presentations that do not indicate in advance that they will attempt to persuade are more successful than those which honestly announce what they're about to do . . . Two-sided presentations, offering and refuting the opposition's view, then offering and supporting one's own view, are far more persuasive than powerful presentations of a single view. . . Forthright arguments on any controversial topic are listened to chiefly by the already convinced and shunned by those who hold an opposite view; indirect, emotionally appealing, deceptive, and unfair methods are, regrettably, more effective in changing attitudes than straight talk about issues. . . People can be persuaded via the central route (rational thinking about a rational argument) or the peripheral route (being distracted by, say, a sexy celebrity while the message is being delivered), obviously the favored and more effective choice of many advertisers.
Attraction: An unromantic reality: Physical proximity and membership in groups are major determinants of romantic preferences and of friendships. . . Within the parameters of nearness and group membership, physical beauty is by far the strongest factor in the initial attraction toward dating partners, yet persons with low or moderate self-esteem avoid approaching the most desirable partners out of fear of rejection . . . In both friendships and mate choices, similarity of personality and background has far more power to attract than the legendary appeal of opposite traits.
Attitude change (or persuasion): Persons low in self-esteem are more readily made to change their attitudes than persons with high self-esteem. . . People are more influenced by the statement of an authority than by an equally or even better documented statement of a nonauthority . . . They are also more easily persuaded by overheard information than by information directed at them, and by actions they have been induced to perform (as in Festinger's cognitive dissonance experiment) than by logical reasoning. . . Simply being repeatedly exposed to something (a name, a product, a slogan) often changes one's attitude toward it, generally in a favorable way (again, obviously, a psychological reality well known to advertisers and politicos).
Prejudice: When people are assigned to or belong to a group, generally they come to think of it as better than other groups in order to maintain their self-esteem and positive self-image. . . People assume that others who share one of their tastes, beliefs, or attitudes are like them in other ways, and that those who differ with them on some issue are unlike them in other ways. . . The mutual antipathy of people in rival or hostile groups dissipates if the groups have to cooperate to achieve some goal valuable to both of them. . . Stereotyping can lead to prejudice, which may be conscious and intentional, conscious and unintentional, and, perhaps most serious, unconscious and unintentional.
Group decision making: Groups make either riskier or more conservative decisions than individuals, largely because group discussion and the airing of opinions frees many of the members to take a more extreme position than they would have on their own. . . Groups perform better than individuals on tasks where everyone's effort adds to the result but not on tasks where there is only one correct solution and where, if one member discovers it but is not supported by at least one other, the group may ignore the correct solution. . . In groups organized to solve a particular problem, two people assume particular importance: the task specialist, who speaks most, has the most ideas, and is seen as the leader; and the socioemotional specialist, who does the most to promote harmony and morale.
Altruism: The bystander effect, discussed above, can be counteracted by knowing about it. In an experiment, students who had heard a lecture on the bystander effect were helpful to a hurt stranger in a situation where normally they would have been passive. . . Self-interest is the major motivation of many altruistic acts (one helps a person in distress to relieve one's own discomfort or guilt at seeing that person's pain), but some altruistic acts are motivated solely by a perception of the other person's needs and by empathy that social experience has transformed into true compassion. . . Altruism, or at least empathy, can be successfully taught in the classroom by role playing in little psychodramas, projective completion of stories, group discussions, and other methods.
Social neuroscience: Many social psychological processes are now being investigated by means of brain scans to see if observable differences in neural activity and blood flow occur when certain interpersonal events take place. In one study, for instance, photos of whites, blacks, females, and males were shown for one second each to participants, almost all of whom were white. Recordings of several kinds of brain potentials showed that photographs of black persons elicited more attention than those of white persons, and females more than males, and that the differences were manifested within one hundred milliseconds of seeing each photo, an indication that we very swiftly assign people we see to categories.
This is only a sample of the active fields and topics in social psychology. Others range from excuse-making and self-handicapping (arranging things so that one is likely to fail and has an excuse for failure) to the effects of TV violence on behavior; from changing patterns of love and marriage to the decision-making processes of juries; and from territoriality and crowding to race relations and social justice. No wonder it is all but impossible to draw the boundaries of social psychology; like the former British Empire, it sprawls across a vast world of human thought, feeling, and behavior.
The Value of Social Psychology
Like that empire and many another, social psychology has undergone attacks from without and rebellions from within. Its hodgepodge of topics, overextended battle lines, bold and sometimes offensive experimental methods, and lack of integrating theory have all made it an inviting target.
The most intense attack came from within. For half a dozen or more years beginning in the early 1970s, during the so-called Crisis of Social Psychology, social psychologists were engaged in an orgy of self-criticism. Among the sundry charges they lashed themselves with were that their field paid too little attention to practical applications (but conversely that it paid too little attention to theory); that it devoted far too much effort to studies of trivial details (but conversely that it hopped from one big issue to another without completing studies of the details); and that it made unjustifiable generalizations about human nature on the basis of mini-experiments with American college undergraduates.
This last criticism was the most troubling. In 1974, when self-criticism was at its peak, college students were the experimental subjects in 87 percent of the studies reported in one leading journal and 74 percent of those in another. Such laboratory research, critics said, might be internally valid (it showed what it said it showed), but might not be and probably was not externally valid (what it showed did not necessarily apply to the outside world). A laboratory situation as highly artificial and special as the Milgram obedience experiment, and the behavior it elicited, could hardly be compared, they said, with a Nazi death camp and the confident, unfaltering barbarity of the officers and guards who daily herded crowds of naked Jews into the "showers" and turned on the poison gas.
The most disturbing assault, expanding the charge that the findings of sociopsychological research lack external validity, was made by Kenneth Gergen of Swarthmore College in 1973. In a journal article that torched his own profession, he asserted that social psychology is not a science but a branch of history. It claims to discover principles of behavior that hold true for all humankind but that really account only for phenomena pertaining to a given sample of people in a specific cultural setting at a particular time in history.
As examples, Gergen said the Milgram obedience experiment was dependent on contemporary attitudes toward authority but that these were not universal; cognitive dissonance claims that human beings find inconsistency unpleasant, but early existentialists welcomed it; and conformity research reports that people are swayed more by the views of friends than of others, a conclusion that may hold good in America but not in societies where friendship plays a different role. Gergen's drastic conclusion:
It is a mistake to consider the processes in social psychology as basic in the natural science sense. Rather, they may be largely considered the psychological counterpart of cultural norms. . . Social psychological research is primarily the systematic study of contemporary history.
For some years following the publication of Gergen's scathing critique, social psychologists held many soul-searching symposia devoted to his thesis. Edward Jones said that since Gergen's pessimistic conclusions were not especially novel, "one can wonder why contemporary social psychologists paid such lavish attention to them," and suggested that "a widespread need for self-flagellation, perhaps unique to social psychologists, may account for some of the mileage of the Gergen message." Whence that special need? Jones does not say, but perhaps it was penance for the brashness, egotism, and chutzpa characteristic of the profession up to that point.
Eventually, the debate did yield sound answers to the barbed questions hurled by Gergen and others, and restored the image of social psychology as a science.
To the charge that what is true of college undergraduates may not be true of the rest of humankind, methodologists replied that for purposes of testing a hypothesis, the population being studied is not a critical issue. If variable X leads to variable Y, and in the absence of X there is no Y, the causal connection between X and Y is proven for that group; to the extent that it is also found true of other groups, it is likely to be a general truth. (The recent emphasis on cross-cultural psychology has proven that to be the case with many a finding, including the Milgram obedience phenomenon and Latane's social-loafing principle, each of which has been demonstrated in varied groups of experimental subjects in this country and in other countries.)
In a thoroughgoing rebuttal of Gergen's charges, Barry Schlenker of the University of Florida pointed out that the physical sciences, too, began with limited and contradictory observations and gradually developed general theories that harmonized their seeming inconsistencies. In the same way, the social sciences have identified, in limited contexts, what seem to be human universals, and brought together wider-ranging proof. Anthropologists and sociologists, for instance, first supposed and later demonstrated that all societies have incest taboos, some form of the family, and some system for maintaining order. Social psychology, said Schlenker, was following the same route, and the principles of social learning, conformity, and status dominance were among the findings that have already been shown to have multicultural validity.
By the end of the 1970s the crisis was abating, and a few years later Edward Jones could view it and the future of the field with optimism:
The crisis of social psychology has begun to take its place as a minor perturbation in the long history of the social sciences. The intellectual momentum of the field has not been radically affected. . . The future of social psychology is assured not only by the vital importance of its subject matter but also by its unique conceptual and methodological strengths that permit the identification of underlying processes in everyday life.
Nonetheless, from that time to this, again and again some wannabe proclaims, usually in an obscure, offbeat journal, that social psychology is wrongly oriented and points out which way it should go, not that anyone pays such preachments any attention. It remains true that social psychology has no unifying theory, but many of its middle-range theories have been widely validated, and their jumbled mass of findings impressively adds to humankind's understanding of its own nature and behavior.
But from Triplett's day to the present, the value of social psychology has been as much a matter of practical application to real-life concerns as of deeper understanding of fundamental principles. The beneficial uses of social psychology are remarkable: among them are ways to get better compliance by medical patients; the use of cooperative rather than competitive classroom methods; social support groups and networks for the widowed and divorced, substance abusers, and others in crisis; training in interpersonal communication in T-groups; the improving of the mood and mental functioning of nursing home patients by giving them greater control and decision-making power; new ways of treating depression, loneliness, and shyness; classroom training in empathy and prosocial behavior; control of family conflict by means of small-group and family therapy.
Some years ago, after the Crisis in Social Psychology had passed and the discipline was back in good health, Elliot Aronson voiced what he and many other social psychologists felt about their field:
[It] is my belief that social psychology is extremely important, that social psychologists can play a vital role in making the world a better place. . . [and can have] a profound and beneficial impact on our lives by providing an increased understanding of such important phenomena as conformity, persuasion, prejudice, love, and aggression.
Today, nearly two decades later, social psychologists retain that passionate affirmative belief in the value of their discipline. As the authors of a leading textbook proclaimed in 2006:
Virtually everything we do, feel, or think is related in some way to the social side of life. In fact, our relations with other people are so central to our lives and happiness that it is hard to imagine existing without them. . . Survivors of shipwrecks or plane crashes who spend long periods of time alone often state that not having relationships with other people was the hardest part of their ordeal, more difficult to bear than lack of food or shelter. In short, the social side of life is, in many ways, the core of our existence. It is this basic fact that makes social psychology, the branch of psychology that studies all aspects of social behavior and social thought, so fascinating and essential.
That view of the discipline may be why, despite the compelling attractions of the glamorous newer fields of cognitive science, evolutionary psychology, and cognitive neuroscience, the membership of the Society for Personality and Social Psychology has grown by 50 percent in just the past dozen years and now has 4,500 members.
What matter, then, if social psychology has no proper boundaries, no agreed-upon definition, and no unifying theory?”