Appendix 1

How We Did the Survey

This study takes up a subject that is at the heart of the historian's inquiry (the past) but investigates it at a moment in time (the present) that historians generally leave to sociologists and anthropologists. In our foray across conventional temporal and disciplinary borders, we were guided by our talented colleagues at the Center for Survey Research (CSR) at Indiana University, who helped us design the survey, carried it out, and ensured that it followed the highest professional standards of academic survey research. Indeed, throughout the book, we use the generic "we" to denote not only our own collaboration but also our collaboration with the CSR's director, John M. Kennedy; its field director, Nancy Bannister; its assistant field director, Kevin Tharp; the project managers for this survey, Barbara Hopkins and Christopher Botsko; and the more than sixty energetic interviewers who made the hundreds of phone calls so essential to our study. That "we" also includes research assistant Andy Draheim, our day-to-day intermediary with the CSR, who worked intensely with them on every phase of the study and then worked closely with us on the analysis of the data, especially the quantitative analysis embodied in the tables.

Because we wanted this book to be inviting to readers not trained in social science, we have tried not to burden it with excessive methodological or statistical discussions. After reading the overview in this appendix, some readers might want more detailed information on the survey methodology and results. For them, we offer two additional sources of information. First, we invite them to consult a Web site that we have set up (http://chnm.gmu.edu/survey), which contains additional statistical tables as well as the full text of the survey questionnaires. Second, we are happy to respond to further questions, which can be directed to rrosenzw@gmu.edu or thelen@indiana.edu.

Discovering What to Ask and How to Ask It

To move from the idea of a survey to implementing it required figuring out the questions to ask--and the way they should be asked--about a subject that no one had seriously studied. We assembled the first list of possible questions in October 1990 in a meeting at the Smithsonian Institution's National Museum of American History, and early the next year a venturesome group of 15 public history graduate students at Arizona State University tried out and refined the questions in one- to two-hour face-to-face interviews with 135 people in the Phoenix area; subsequently, a smaller group of graduate students at the University of Toledo did the same. We put together a written questionnaire based on this initial project and sent it to 75 people who, in turn, administered it to 75 history professionals and 75 nonprofessionals. Lois Silverman, then the director of the Center for History-Making in America, played an important part in organizing and summarizing the written questionnaire. After the Spencer Foundation and the National Endowment for the Humanities agreed to fund the survey in 1993, we further revised the questions and systematically tested them in seven nights of telephone interviews carried out by the CSR. To widen the circle of discussion, we solicited comments on our draft questionnaire from survey experts as well as more than 40 public and academic historians.

This extensive piloting and pretesting taught us two crucial lessons. First, we needed to ask broadly framed questions if we were to learn what people were thinking and doing. That meant, in turn, that we had to ask people about "the past" and not just about "history." Our pilot survey showed that three quarters of those we interviewed thought of "the past" and "history" as different concepts, with most people defining the past in more inclusive terms and history as something more formal, analytical, official, or distant. As Melissa Keane, one of the Arizona graduate students, later observed, most people "drew a clear line between 'history' and their own lives. 'History' was often remote 'book learning'--Columbus, Abe Lincoln, Henry VIII, the Norman Conquest--the 'boring stuff from school.' "1

Such comments persuaded us that if we were genuinely interested in the relationship between the past and the present in Americans' lives, we should use the term that they found more immediate and meaningful. When it came time to do the survey in the late spring of 1994, we announced this as clearly as possible to the people we called. The interviewers began by explaining: "We're conducting a nationwide study funded by the National Endowment for the Humanities to find out how people understand and use the past in their everyday lives." Then when interviewers began the questioning they explained, "I am going to ask you some questions about the past. By the past, we mean everything from the very recent past to the very distant past, from your personal and family past to the past of the United States and other nations."

Framing our survey in these inclusive terms gave us different answers than if we had focused more narrowly on "history," but since we set out to understand the past as our respondents understood and defined it--and not as professional historians did--this was the best phrasing. Our pilot surveys, which sought reactions to a number of terms ("history," "tradition," "heritage," "the past"), had shown us that "the past" was the most accessible term--one that left respondents with the greatest latitude to describe their engagements on their own terms. We are aware, of course, that some readers, perhaps professional historians in particular, will not share the capacious definition of the past used in this study, but our respondents think about the past in these elastic terms. To the charge that our instructions and questions encouraged people to talk about the past in more expansive and less professionally conventional terms, we plead guilty--by design.

Our piloting and pretesting taught us a second, related lesson. On the one hand, questions whose answers could be easily quantified--ones that were answered with a "yes" or a "no" or a numeric score--were valuable in developing benchmarks for participation in and attitudes toward past-related activities. But on the other hand, a study that restricted itself to such closed-ended inquiries would never yield the richer and deeper (if sometimes perplexing) insights into how the past figured in people's lives that had emerged in the open-ended interviews during the piloting of the survey. Although we asked a number of conventional, closed-ended questions (e.g., "During the last 12 months, have you visited any history museums or historic sites?"), we confined them to only about one third of a typical interview. We realized that we had to find out why someone had gone to a museum and what they had gotten from the experience.

Follow-up questions told us more, but they required some compromises in the survey design. We could not ask follow-up questions about everything (for instance, why people did each of ten different activities) without exhausting our relatively modest budget for long-distance telephone calls and interviewers' wages--or, for that matter, exhausting the patience of our respondents. In such situations, our survey research colleagues advised us, it was conventional to ask certain questions of only a limited--but randomly selected--portion of our larger sample. For example, only 27 percent of people who said they had been to a history museum were also asked to talk about the occasions or reasons for doing so. In addition, we included some open-ended questions--for instance, "What about the past do you think is important for children to know?"--that went to a portion of the sample (about one tenth in this case).2 Some follow-up questions only went to people who offered specific answers; for instance, those who rated a source as particularly trustworthy or particularly untrustworthy were asked about the basis of their rating, but those who gave a middling rating weren't.
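As an illustration of the random routing just described, the sketch below shows how a fraction of respondents might be assigned a given follow-up question. The question labels and rates simply echo the examples above; the actual assignments were generated inside the CSR's computer-assisted interviewing system, not by code like this.

```python
import random

# Hypothetical follow-up questions and the share of eligible respondents who
# receive each one; the labels and rates only echo the examples in the text.
FOLLOW_UP_RATES = {
    "museum_occasion": 0.27,        # why or when the respondent visited a museum
    "children_should_know": 0.10,   # open-ended question asked of about one tenth
}

def assign_follow_ups(rng: random.Random) -> list[str]:
    """Return the follow-up questions randomly assigned to one respondent."""
    return [question for question, rate in FOLLOW_UP_RATES.items() if rng.random() < rate]

rng = random.Random(1994)  # fixed seed so the illustration is reproducible
print(assign_follow_ups(rng))
```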

Even so, merely asking conventional survey follow-up questions frequently evoked brief and vague answers. In collaboration with the staff at the CSR, we developed less conventional probing strategies that borrowed from techniques of ethnography and oral history. We encouraged interviewers to use their own experiences and natural curiosity in questioning respondents. If a respondent talked about a hobby that the interviewer was familiar with, he or she could use that familiarity to prompt further comments. As one interviewer later explained, "by simply saying 'any other?' we usually do not get to the heart of the matter, and this study, with its open probing, allowed us to do so. Because the tone was so conversational, I feel people opened up to us more. . . . I think they felt I was genuinely interested in what they had to say."

Our goal was to convince the people we called that we were interested in their perspectives. We sought to elicit what mattered to respondents, not interviewers, even when interviewers shared their own experiences and feelings as part of the strategy for getting people to talk more openly. As Christopher Botsko and John M. Kennedy later observed in a paper given at the annual meeting of the American Association for Public Opinion Research: "The interviewing process was far more collaborative between the interviewing staff, the study managers, and the investigators than a standardized survey. It was also more labor intensive." But the results seemed to justify the effort. In the same paper, Botsko and Kennedy recommended these innovative techniques to other survey researchers, arguing that "nonstandardized techniques permit the use of survey research beyond the method's reach when only standardized techniques are used."3

A postsurvey survey of interviewers found that they too endorsed the open-ended questions and the nonstandard probing. On the one hand, every interviewer described the interviewing techniques and the "experience with respondents" as different from those of other surveys in which they had participated. And about three quarters of the interviewers called their work on the survey "much" or "somewhat" "more difficult than most other surveys" (only one said it was "less difficult"). Yet on the other hand, all but two found it "more rewarding than other studies you have worked on." Most interviewers specifically praised the open-ended questions and the unconventional probing techniques. "The main difference" between this and other surveys, explained an interviewer who had worked on at least eight other CSR surveys, "was that the respondents in history were sharing a lot of themselves, the different things they did and why they did these things, while in other studies we just are getting facts or numbers to be analyzed. Many times at the end of an interview I would feel like I had actually met the respondent." He and many other interviewers attributed this quality of "real sharing" to the "more personal" nature of the questions, to the open-ended follow-up questions, and to their ability to ask more natural questions as part of the probing. One interviewer nicely summarized the "different probing technique" as "the ability to respond to the respondent in a more human fashion. For example, if the respondent told a hilarious joke, we could laugh, which would make the respondent open up more, instead of giving generic feedback and going on."

Not surprisingly, some interviewers at first worried (as did we) that this interview approach would be "directive" or less "scientific." Yet after they were finished, they decided it wasn't. One noted, for example, that despite the "more personal, even intimate conversations," interviewers could maintain "a comparable professionalism (respect, neutrality, listening skills, and consideration) with other surveys." Some argued that this approach actually made the study more objective and complete. What one called "history-style probing" gave "more complete and more specific information." The ability to "follow up on what they [respondents] were saying," explained another experienced interviewer, "made them realize more acutely that I was actually listening and not just hoping that they would fit their answer into my category so that I could finally move on." Still another long-term CSR interviewer commented that this study differed from the nine others that she had done because she could converse "with the respondent as if we were both human beings" and didn't have to force all "answer[s] into a category like we usually make people do." Over and over, interviewers used words like "rapport," "sharing," "conversation," and "communication" to describe these interviews. One older interviewer who had taken part in ten different CSR surveys spoke for many when she concluded, "I always felt I got more information for the client than in any other study."

Because of the particularly demanding nature of this survey and the use of unconventional probing techniques, the CSR selected experienced interviewers with the strongest performance records on previous projects. More than half had worked at the CSR for over a year; one third had been there for two or more years. Even so, we and the CSR intensively trained the interviewers for this project. They learned about the background of the study and about the techniques of probing for answers. Then they watched supervisors role-play interviews and practiced interviewing each other. In addition, the CSR took unusual steps to monitor the interviewers and ensure the quality of the data collected. For example, both we and project manager Barbara Hopkins reviewed answers to open-ended questions and gave the interviewers feedback on which probes were most successful. In the first few weeks of the study, the interviewers and their supervisors held regular postshift meetings to talk about problems they were encountering and approaches they found effective in getting respondents to report on their own thoughts and experiences. Supervisors also held individual discussions with interviewers to talk about the pros and cons of different approaches. The interviewers' insights from the field played a central role in developing survey procedures that would encourage interviewees to open up.

Formulating the Questions

These general lessons about the need for framing questions in broad terms and for following up and probing guided us as we composed the actual survey, but that process still required difficult decisions. Our questions reflected our interest in two general areas: activities and attitudes. We wanted to carry out a census of behaviors, attitudes, and experiences related to the past. Then we wanted to map and understand the multiple ways that Americans value, use, and experience their pasts. After much discussion and debate, we decided that we could get at activities and attitudes through questions organized in four areas: "activities related to the past," "trustworthiness of sources of information about the past," "how connected to the past people feel on certain occasions," and "the importance of various pasts." (See Web site for the complete questionnaire.)

Our method for deciding what to ask about activities was typical of how we developed all the questions. In piloting, we had questioned people in a more open-ended way, asking in the Arizona study, for example, "What activities in your life are related to the past?" From such questioning, we made a checklist of more than thirty items. Since we did not want to devote more than one third of the interview to questions about activities, we had to limit ourselves to ten questions that incorporated the range of what people do but also emphasized the most important areas of activities. Thus, we asked whether they had encountered the past as it was synthesized and presented by others in historical films and television programs, in history books, and at historic sites and museums. We also asked whether they had constructed, documented, or preserved the past themselves by working on hobbies or collections, participating in historical groups or organizations, writing in diaries and journals, taking photos, or investigating the history of their family. And because we knew from piloting that the past was often encountered in intimate circles of family and friends, we asked whether people had, in the past twelve months, looked at photos with family or friends or attended a family reunion or a reunion of some other group of people with whom they shared a common experience.

These ten items embodied the broad definition of historymaking that we had sought, but they also reflected some inevitable compromises. We would have preferred to ask separately about fiction and nonfiction books, documentary films and Hollywood features, hobbies and collections, and historic sites and history museums. But such distinctions would have greatly lengthened the interviews. Moreover, one advantage to asking the questions in a less specific format was that respondents could, in follow-up questions, make the distinctions among books and films that mattered to them. Other, more specific activities--membership in a reenactment group or attendance at historical lectures, for instance--would have been interesting to know about, but piloting and pretesting told us that the number of positive responses would have been too small to be genuinely useful. Information on frequency of participation--hours per week or number of times per year--would also have been valuable, though very time-consuming to collect and perhaps of questionable reliability, given the difficulty respondents have in remembering such specific information. Our general remedy was the use of follow-up questions to gather more impressionistic data on things that we could not ask systematically of the entire sample.

The most important demonstration of the relevance of our checklist to popular historymaking is found in what our respondents said rather than in what we thought. When we followed up on our simple questions about whether or not people had done any of these ten things in the past year and asked why, when, or how they pursued the past in these ways, their answers indicated that most implicitly shared our broad definition of historymaking. Some respondents even worried that we might not share their own expansive view of significant historical work. A 39-year-old lawyer from Alabama, for example, described in detail his hobby of building model warships, which he had become interested in through his father and his grandfather. "You might not consider it the past, but I do," he noted.

The questions about "trustworthiness" of sources were also directed at learning about historymaking broadly construed. We began with a problem familiar to all professional historians: How do you evaluate different sources from and perspectives on the past? Then, in the pilot, we questioned people in a more open-ended way that might allow them to tell us whether their historical methodology was similar to or different from that of professional historians. For example, we simply asked people to "list what source(s) you trust for information about the past."

From the long list we compiled, we selected for the phone survey seven areas in which people reported they had encountered the past--museums, movies, television programs, personal accounts from relatives, conversations with eyewitnesses, high school history teachers, and college history teachers. (College history classes were a less widely shared arena of experience than the other six, but we decided that once we, as college professors ourselves, opened high school history to critical scrutiny, we could hardly exempt ourselves.) In order to facilitate the comparisons that we thought were essential to this section of the survey, we asked people to rate the trustworthiness of each source on a 10-point scale--a standard measuring stick in phone surveys.

Our piloting also suggested that the word "trust" (as compared, for example, to "accurate" or "valuable" or "authentic") was most likely to evoke our respondents' sense of what they did and didn't value about a particular source or authority. This was especially important in the follow-up questions, where we asked a percentage of those who rated a source as especially trustworthy or untrustworthy why they had given that rating.

Our questions about "connectedness" were also intended to make comparisons among different modes and arenas for historymaking. Using a 10-point scale, a conventional approach for survey research, allowed us to compare the intensity of engagement with the past that people felt celebrating holidays, visiting history museums or historic sites, gathering with their family, watching a movie or TV program about the past, studying history in school, and reading a book about the past. In part, these questions were meant to build upon the activities questions, gauging intensity of participation and degree of emotional investment. We asked about some of the same arenas (e.g. movies, museums, books), but also added others (e.g., holiday celebrations and schools) that would not have made sense in an inventory of activities over the past year.

As with other closed-ended questions, we asked follow-up questions in a proportion of the cases--between one eighth and one half. The precise wording of these questions varied depending on what the pretesting had shown to be most effective. For example, we asked simply, "How does gathering with your family make you feel connected to the past?" But for museums, films, and books, we found that we needed a more concrete follow-up like "Please give me an example of a museum or historic site that made you feel connected to the past. Why did that museum or historic site make you feel that way?" Perhaps because our phrasing probed the emotional content of the experience, the follow-up questions on the connectedness series elicited some of the richest and most detailed answers in the survey.

The final group of questions focused on the relative importance of various pasts. The only one of these queries that was phrased in a closed-ended way asked people to say which of four areas of the past (family, racial or ethnic group, current community, and the United States) was "most important" to them. We later added an open-ended question about whether the "past of any other area or group" is "very important to you." This proved an economical way of picking up answers that we might have otherwise missed.

We also tried a different way of getting at the same subject by asking people "What about the past is important for children to know?" Our advisers had generally responded enthusiastically to this question, but the answers in pretesting were somewhat disappointing. As a result, we decided to ask the question of only about one tenth of our sample. In general, our guiding principle in deciding which questions to ask (and how often to ask them) was whether or not they seemed to provide useful insights into popular historymaking. (See tables on Web site for information on how many people were asked each question.)

Still another way of assessing what respondents found important about the past was to ask who or what had affected them. We deliberately phrased this question as openly as possible so that people would have the opportunity to describe either Lincoln or their grandmother as the person from the past "who has particularly affected you" and either the Vietnam War or the divorce of their parents as the event from "the past that has most affected you." The follow-ups to these two questions, which asked what the respondent had learned from the person or event, allowed us to probe the ways that people both used and interpreted the past. We knew from piloting, moreover, that the concept of "learning from the past" emerged frequently in discussions about the past, and these questions offered a focused way of investigating what people meant by that phrase.

One question that greatly interested us was whether popular historymaking had changed over time. Unfortunately, surveys conducted at one point in time are not particularly useful for measuring change and continuity. Nevertheless, we tested out various questions--e.g., "In what ways do you believe your knowledge or understanding of the past is similar to and different from your parents?"--that might give us clues. None of them provided particularly telling responses. We did get some useful answers to the question we ultimately asked: "In what ways do you think differently about the past now than you did when you were younger?" That respondents generally answered this question in terms of changes in their personal development instead of changes in the world around them was itself an intriguing finding that could be the subject of another survey.

In deciding on the questions, we consulted extensively with historians and sociologists who study African Americans, Mexican Americans, and American Indians. They encouraged us to develop some additional questions for the minority samples, which are described below. Thus, in those samples we asked African Americans about their sense of connection to the past on Martin Luther King Jr.'s birthday, and we questioned Mexican Americans about Cinco de Mayo, which commemorates the Mexican victory over French forces at the Battle of Puebla on May 5, 1862. (We tried, however, to phrase these questions in a neutral fashion that would not suggest an expected answer.) We also asked both groups (as well as American Indians) about their views on the relationship of their history to the history of other Americans--for example, "How much of a common history do you think you share with other Americans?" Trying to get at the relationship of African Americans to Africa and Mexican Americans to Mexico, we asked minority respondents: "Is the past of any other place in the world more important to you than the past of the United States?" and then followed up if they said "yes." These additional questions--as well as our decision to ask certain questions of a higher percentage of minority respondents--partially explain why interviews in the three minority samples averaged between forty-seven and fifty-nine minutes, whereas interviews in the national sample averaged thirty minutes.4

Choosing Respondents

Since one of our most important goals was to use a survey as a baseline for future investigations, we needed a random and representative national sample. Here we followed standard survey practice, which considers a national sample as small as 800 people adequate for making statistically valid comparisons among large subgroups within the population--e.g., men and women, high school graduates and college graduates, people under 30 and those over 65. Although earlier chapters discuss those statistical variations, some readers will be particularly interested in the more detailed tables of cross-tabulations and regressions provided in appendix 2 and especially on our Web site.
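For readers who want a rough sense of the precision such samples provide, the back-of-the-envelope calculation below gives the conventional 95 percent margin of error for a proportion estimated from a simple random sample. It is our own illustration of the rule of thumb, not a figure reported in the study, and it ignores design effects.

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

print(f"n = 808: +/- {margin_of_error(808):.1%}")  # full national sample, about 3.4 points
print(f"n = 200: +/- {margin_of_error(200):.1%}")  # a minority sample of roughly 200, about 6.9 points
```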

To ensure that a random sample of Americans would be called, the CSR did random-digit dialing to a list of phone exchanges randomly selected from all phone exchanges in the United States. This approach is more expensive than calling numbers drawn from a phone directory, but it does not exclude people with unpublished or new listings. In other words, any household in the continental United States with a telephone could have been called as part of the national sample.5 In order to make sure that anyone within that household could be part of the sample, we took additional steps. First, we called to determine that it was a residence; many of the randomly generated phone numbers, as expected, were businesses or numbers not in service. Second, we asked the person who answered the phone to tell us the number of people over the age of eighteen in the household. The interviewer's computer then randomly selected the actual respondent from this group. Since some people are more likely to answer the phone than others, this procedure enabled us to have a group of respondents that better represented the overall population in terms of age, sex, and household status.
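The two random steps described in this paragraph, generating numbers within sampled exchanges and then selecting one adult from the household roster, can be sketched roughly as follows. The exchange prefix is hypothetical, and the real procedure was built into the CSR's call-management software rather than written this way.

```python
import random

rng = random.Random(42)  # reproducible illustration

def random_number_in_exchange(prefix: str) -> str:
    """Append four random digits to a sampled area-code-plus-exchange prefix."""
    return prefix + "".join(str(rng.randrange(10)) for _ in range(4))

def select_adult(num_adults: int) -> int:
    """Randomly choose one adult (numbered 1..num_adults) from the household roster,
    so that people who rarely answer the phone still have an equal chance of selection."""
    return rng.randrange(1, num_adults + 1)

print(random_number_in_exchange("812339"))  # hypothetical sampled prefix
print(select_adult(num_adults=3))
```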

Our piloting had suggested that members of "minority" groups had some distinctive responses to our questions about the past. Melissa Keane, for example, described the Arizona pilot interviews with nonwhite respondents as particularly "compelling."6 Moreover, because debates about the place of history in our national culture--specifically about how history should be taught in the schools--have focused, in part, on the relationship between minority and majority cultures, we thought it especially important to hear the voices of members of minority groups. We wanted to know about bases for developing a common curriculum as well as, more generally, about the possible relationship between cultural background on the one hand, and attitudes toward and experiences of the past on the other hand. Unfortunately, a nationwide random sample will not provide sufficient numbers to do that fully. Our 808 national interviews, for example, included 76 African Americans, but only 20 who identified themselves as Native Americans, 33 as Latinos, and 13 as Asian Americans.

We decided, therefore, to develop three minority samples (African Americans, Mexican Americans, and American Indians), using procedures that would give us the most representative samples within the constraints of our budget. For each group we wanted to interview about 200 additional people, since samples of this size provide sufficient statistical precision to compare different minority groups, though not to make fine-grained analyses of them. For African Americans, our initial plan was to keep the same random-digit dialing we used for the national sample but to make a higher percentage of calls in areas with high concentrations of African Americans. Interviewers would begin by asking about age, education, and race (questions that came at the end of the national sample questionnaire). If we learned that the interviewee was not an African American, Mexican American, or American Indian, we would ask four brief questions about the past and then terminate the interview. This approach would make it possible for any African American with a telephone to be included in the survey, even though some (those living in areas with lesser concentrations of black residents) would have a smaller chance of being included.

Using a system developed by a commercial firm, Genesys Sampling Systems of Ft. Washington, Pennsylvania, we randomly generated a list of telephone exchanges and then used demographic data to divide them into three groups--exchanges where the black population was greater than 75 percent; exchanges where it was between 50 and 75 percent; and exchanges where it was below 50 percent. Unfortunately, making calls to phone exchanges with black concentrations of under 50 percent proved prohibitively expensive. We completed only one interview after 400 phone calls in low-concentration exchanges. As a result, we decided to focus our efforts on the medium- and high-concentration exchanges, and as time and money pressures became more intense, we did most of our calling in the high-concentration exchanges. Of the 224 completed interviews in the black sample, 161 were done in high-concentration exchanges and 48 in the medium-density areas. (We also included 14 African Americans whom we reached while making calls in Mexican American and American Indian areas.)
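A rough sketch of that stratification logic appears below. It assumes each exchange comes tagged with an estimated share of black households drawn from demographic data, as in the Genesys system; the example prefixes and shares are invented.

```python
def concentration_stratum(black_share: float) -> str:
    """Assign an exchange to a calling stratum using the cutoffs described in the text."""
    if black_share > 0.75:
        return "high"
    if black_share >= 0.50:
        return "medium"
    return "low"

# Hypothetical exchange prefixes with estimated shares of black households.
exchanges = {"202555": 0.82, "317555": 0.61, "812555": 0.08}
for prefix, share in exchanges.items():
    print(prefix, concentration_stratum(share))
```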

The African Americans in our minority sample were thus atypical in that they were more likely than African Americans in general to live in areas with high concentrations of black residents. (About 15.9 percent of African American households in the United States with telephones are in areas of high-concentration African American telephone exchanges; another 13.6 percent are in medium-concentration exchanges.) Because patterns of black-white segregation extend across class lines, our sample was not dominated by poorer and "inner-city" blacks. We did not, for example, interview any black residents of Fairfax County, a middle-class suburb of Washington, D.C. with a heavily white population, but we did interview residents of Prince George's County, a middle-class suburb of Washington with a majority black population. A more systematic way of demonstrating this point is to compare the 224 black respondents in the minority sample with the black population of the United States. Kennedy and Botsko did this in their AAPOR paper and concluded that our "sampling methods resulted in a diverse group of respondents that came fairly close to approximating the national population of African Americans."7

Our sample and the overall population differed most in place of residence. Whereas only one fifth of the American black population lives in the Midwest, one third of our respondents came from that region. On the other hand, 9 percent of blacks live in the West, but only 4 percent of our respondents were from the West. These differences reflect the greater concentration of high-density black areas in the Midwest as well as the generally greater difficulty in reaching respondents in the West.8 To a lesser degree, our minority sample underrepresented men, younger and older people, the less educated, and the poor. Many of these differences, however, were relatively modest. For example, 47.7 percent of our sample had household incomes under $25,000 per year versus 59.3 percent of the overall black population; 12.2 percent of our sample (versus 17.7 percent of the African American population) was 18-24 years old; 62.8 percent of our sample (compared to 54.5 percent of the black population) was female. More important, these differences are typical of all phone surveys. The bias toward the better educated and more affluent, for example, results partly from the large proportion (an estimated 15.6 percent) of African American households without telephone service.9

The respondents in our special black sample were not only reasonably representative of the nation's black population in sociodemographic terms, they also seem likely to have answered our questions in the same ways as the more broadly drawn black sample. Here, our basis of comparison is the 76 black Americans we reached as part of the national sample. Although African Americans who lived in high concentrations with other African Americans might be expected to think differently about the past (or experience the past differently) than those who lived primarily among whites, they actually gave strikingly similar answers to those of blacks in the national sample. We read and analyzed the text of the 76 interviews with black respondents before the special black sample was undertaken. The subsequent interviews confirmed our initial hypotheses based on reading the open-ended answers given in the national sample.

The answers that black respondents in the two different samples gave to closed-ended, quantifiable questions were also quite similar. In 18 of the 24 questions for which we have quantifiable answers, black respondents in both samples stood in the same relation to the national sample. For example, black respondents in the national sample gave an 8.0 "trust" score to history museums and those in the minority sample gave an 8.1 score; both were lower than the overall result in the national sample (8.4). In six cases there were divergences; they were usually small. For instance, black respondents in the national sample gave a 7.1 "connectedness" score to holiday celebrations, which was higher than the overall national sample (7.0) and blacks in the minority sample (6.9), but obviously not by very much.

Even when the differences between the samples appear greater, they do not turn out to be statistically significant except in one case. For example, more blacks in the national sample than in the black sample (81.8 percent versus 71.3 percent) said that they had seen a film or television program about the past in the previous year. But using a "difference of proportions test"--a standard technique for measuring the statistical significance of the difference between independent samples--we find that this difference is not significant at the .05 level. (To be significant at the .05 level would mean that in only 5 out of 100 samples like the ones used here would we find a difference as large as the one we observe if the populations from which the samples were drawn were, in fact, identical. In other words, a statistically significant difference in our data is one that is unlikely to have occurred simply because of the random fluctuation inherent in survey samples.)

That same test also belies the statistical significance of seeming differences in answers to the question about which area of the past respondents considered most important. In the national sample 18.4 percent put "the past of your ethnic or racial group" first; 28.1 percent of the respondents in the African American sample did the same. It is plausible to speculate that on this question the different manner of creating the samples affected the answers and that those who live in areas with higher concentrations of black residents are more likely to see the past of black America as important. But the difference of proportions test tells us that we would need a difference of 11.3 percent (rather than the observed 9.7 percent) for it to be considered statistically significant.10
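For readers who want to see the mechanics, the sketch below applies a standard two-sample difference of proportions test to the two comparisons just discussed. It assumes, purely for illustration, that all 76 black respondents in the national sample and all 224 in the special black sample answered each question, which may not match the exact bases used in our calculations.

```python
from math import sqrt

def two_proportion_test(p1: float, n1: int, p2: float, n2: int):
    """Two-sample difference of proportions test with a pooled estimate.
    Returns the z statistic and the smallest gap significant at the .05 level."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se, 1.96 * se

# Film/TV question: 81.8% (blacks in the national sample) vs. 71.3% (black sample).
z, min_gap = two_proportion_test(0.818, 76, 0.713, 224)
print(round(z, 2), round(min_gap, 3))   # z is about 1.8, short of 1.96, so not significant

# "Most important past" question: 18.4% vs. 28.1%; the observed 9.7-point gap
# falls short of the roughly 11-point gap required for significance.
z, min_gap = two_proportion_test(0.184, 76, 0.281, 224)
print(round(z, 2), round(min_gap, 3))   # required gap is about 0.11, close to the 11.3% cited
```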

The connectedness and trust questions, because they were done on a 1-10 scale, permit us to use a more sensitive "difference of means" test to compare the samples. Even so, the differences in the samples turn out to be statistically significant in only one case--the question about how connected to the past respondents felt when studying history in school. Blacks in the minority sample gave schools a lower connectedness score than the national mean (4.9 versus 5.7), but blacks in the national sample gave a slightly higher number (6.0). Could this be the result of the different schooling experiences that African Americans in heavily black areas might have? This is highly speculative, since we don't know for sure that people who currently live in integrated neighborhoods went to school in similar neighborhoods.
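The sketch below shows the mechanics of that difference of means comparison. Because the book reports neither standard deviations nor the exact number of respondents who answered this item in each group, those figures are hypothetical stand-ins, and the resulting t value is illustrative only.

```python
from math import sqrt

def difference_of_means_t(mean1: float, sd1: float, n1: int,
                          mean2: float, sd2: float, n2: int) -> float:
    """Two-sample t statistic with a pooled variance estimate."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / sqrt(pooled_var * (1 / n1 + 1 / n2))

# School "connectedness": the means (6.0 and 4.9) come from the text;
# the standard deviations and group sizes are assumptions for illustration.
t = difference_of_means_t(6.0, 2.8, 76, 4.9, 2.9, 224)
print(round(t, 2))  # compare against roughly 1.96 for significance at the .05 level
```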

Another possible explanation for this difference had to do with some small differences in the questionnaires. For example, because the screening process required us to ask the respondent's race early in the questionnaire and because the minority sample included an additional question comparing Martin Luther King Jr.'s birthday with other holidays, the respondent might have been thinking in more racially inflected terms by the time we got to this question.11 It is also possible, of course, that this difference is just a statistical accident--in comparing differences between the samples on twenty-four different questions, it would not be implausible for one to be a "false positive."12

Because the special black sample was not particularly different (either in their demographic composition or their answers) from the African Americans in the national sample, we have not distinguished between the two groups in most of the book. When we report on the open-ended responses to questions, we don't indicate whether they came in the national sample or the minority sample. This procedure seems defensible because we are most interested in giving a flavor of different black voices that we heard, because each of the black respondents is offering an individual perspective, and because we didn't find any systematic differences among the black respondents in the two samples. We have also combined the quantifiable answers of the two groups, because this makes the data much easier to follow and separating the answers would not add much to the argument.

We had initially planned a sample of Latinos, but after consulting with survey specialists and experts on Latino Americans, we decided that it made more sense to focus on a particular Latino group. We selected Mexican Americans, who make up 61 percent of the nation's Hispanic population. We used a sampling approach similar to the one employed for African Americans, but focused on phone exchanges in Arizona, New Mexico, Colorado, California, and Texas with high concentrations (70 percent or more) of Hispanic-surnamed households. We chose the states and this approach based on the sampling strategy used in the Latino National Political Survey. That survey spent about $1,000,000 to develop a plan for efficiently surveying Mexican Americans.13 Although we did most of the calls in the high-concentration phone exchanges, we made some calls to a sample of numbers from all exchanges in the five states.

Once again, the greatest difference between our Mexican American respondents and the Mexican American population of the United States involved place of residence. About 16 percent of the nation's 13.5 million Mexican Americans live outside the five states where we surveyed, but only 2 percent of our sample did. (These six people were picked up in doing the African American and Indian samples and then included in the overall Mexican American sample.) But a more important difference had to do with the five states where we made our calls. Although only 29 percent of all Mexican Americans live in Texas, 63 percent of our survey respondents did. By contrast, whereas California is home to 45 percent of all Mexican Americans, only 29 percent of our respondents lived there. One reason for the discrepancy is the large number of high-concentration Mexican American areas in Texas, particularly south Texas. Another reason was the higher response rate in Texas than in California. The lower response rate in California probably stems from the generally greater difficulty in reaching respondents in California and the concurrent campaign for the passage of anti-illegal-immigrant Proposition 187 in that state, which probably made Mexican Americans less responsive to calls from non-Hispanic interviewers.

Despite these geographic discrepancies, our Mexican American sample matched the overall Mexican American population in a number of other ways. The proportion of low-income respondents was similar to that in the overall population (about 31 percent in both cases had incomes under $15,000, for example). And the match in ages was quite close. Our survey included fewer high-income Mexican Americans, perhaps a result of doing more interviews in south Texas and fewer in California. Women, as in the black sample, were overrepresented (61 percent versus 48 percent). People born in the United States also appear to have been overrepresented (62 percent versus 54 percent), but given general uneasiness over questions of citizenship, our numbers may overstate the proportion of native-born respondents.

In order to include the experiences of recent Mexican immigrants, we asked Mexican American respondents whether they were more comfortable conversing in Spanish or English and had a Spanish speaker call those who preferred Spanish. We did more than one quarter of the 196 Mexican American interviews in Spanish.

We faced a more serious challenge as we tried to implement our original plan of doing a nationwide sample of American Indians. As we consulted with specialists, we learned that this would require an extraordinary number of phone calls. We also gradually realized that such a sampling would distort the experience of distinctive and often widely scattered tribes.14 On the advice of leading American Indian scholars, we decided to focus instead on a single reservation and selected the Pine Ridge reservation in southwestern South Dakota, home of the Oglala Sioux.

Pine Ridge, the second-largest reservation in the United States, covers two million acres--an expanse bigger than Delaware or Rhode Island--and is home to 25,000 people.15 We developed a technique for randomly calling phone numbers in one major section of Pine Ridge. (We are not specifying that area here in order to preserve as much anonymity as possible for our respondents.) Our sample differed from the overall Pine Ridge population in some ways that are similar to the other samples--for example, we talked to more women, fewer people under 30, and more people over 65. More significantly, three fifths of our sample reported that they had household incomes under $25,000 per year, whereas the census described more than four fifths as being in this category. (The disparity was even greater for those with household incomes under $15,000 per year.)

Moreover, the general population of Pine Ridge seems to include three times as many non-high school graduates as our sample, and our sample had five times as many college graduates as the overall population. We would speculate that lack of telephones may have had a greater effect in biasing results on Pine Ridge toward the better educated and more affluent than in any of our other samples. We don't have figures for phone service, but about 16 percent of Pine Ridge homes have no electricity and 20 percent have no plumbing; probably the number without phone service is higher.16 Even so, the notion of affluence is relative; most of our respondents, like most Pine Ridge residents, were poor. The Pine Ridge residents we interviewed were more than twice as likely as respondents in the national sample to have income under $25,000 a year. Indeed, Pine Ridge reservation is one of the poorest places in the United States; estimates of unemployment run as high as 75 percent, and Shannon County, South Dakota (which lies entirely within the reservation) is the fourth poorest county in the nation.17

Although this sample can only claim to be reasonably representative of the Oglala Sioux residents of Pine Ridge in telephone households and not of all American Indians, this was the most cost-effective way to incorporate the perspective of at least one major group of American Indians. Further, we thought readers might find the perspectives of the Pine Ridge Sioux particularly intriguing, since that reservation had acquired much national visibility through protests and films and the presence of the Wounded Knee memorial.

To have a limited basis of comparison with the 186 Sioux residents of Pine Ridge, we also called 28 American Indian residents of the Minneapolis area because Minneapolis is the city with the highest proportion of Indians among its inhabitants. (In addition, we interviewed eleven people who identified themselves as American Indians from other parts of the country whom we reached when calling Mexican Americans and African Americans, as well as another twenty Native Americans we called in the national sample.) Indeed, another reason for selecting Pine Ridge was that it was near the only major city that had particular census tracts where more than 50 percent of the population was made up of American Indians. In addition to calling people from those census tracts (using reverse phone directories), we also called people from some lists of Minneapolis-area Indians that we were able to obtain. It had been our original intention to call 100 Indian residents of Minneapolis, but the calls to the census tracts proved more difficult to complete than we had anticipated, and we decided that it would instead be worthwhile to have a larger sample of Pine Ridge residents.

We had also initially planned on surveying a sample of Asian Americans, possibly focusing on Chinese Americans. But we could not readily come up with a cost-effective way to sample this group. One problem is that Chinese Americans are a much smaller group (about 1.6 million people in 1990) than Mexican Americans (13.5 million) or African Americans (30 million). And while there are concentrations of Chinese Americans in particular areas, the data on such concentrations in specific telephone exchanges are not readily available and the differences between the Chinese American populations of particular areas (in terms of income, education, and place of birth) are often great. Moreover, we faced the additional problem of locating a sufficient number of qualified survey interviewers in Bloomington, Indiana, who could conduct interviews in Mandarin and Cantonese. We reluctantly canceled that portion of the study.

Besides close studies of other groups within the United States, we also need to find out whether these patterns are replicated in other nations and cultures. Attempts by colleagues in Britain (Patricia Clavin) and Australia (Paula Hamilton and her team) to develop surveys of how people in those countries understand and use the past are encouraging. The largest significance of our findings can only become clear when similar studies have been done elsewhere.18

Facing Up to the Realities of Telephone Surveys

Although our survey topic and some of our survey methods were unconventional, we faced many of the same problems and limitations as any telephone survey. The questions you ask and the way you ask them will affect the answers you receive. That was just as true in our survey as in any other. For instance, asking about "the past" rather than "history" emphasized certain answers over others. If we had asked people to name an important historical figure we would have received a different set of answers than we did from asking people to "name a person, either a historical figure or one from your personal past, who has particularly affected you."

As with all surveys, responses were also shaped by the context in which the questions were asked. Without another systematic study, there is no way to know which contextual factors may have affected the responses we received. Some of those factors are common to all phone surveys--for example, what confidences are people prepared to offer a distant stranger? Others would have been shared by almost any survey carried out by the Center for Survey Research at Indiana University--for example, did it matter that the calls came from a university in the Midwest or that most of the interviewers sounded young (almost three quarters were under 25), white, and female (more than three quarters were women), and started out by speaking in English?19

Probably the best known of these potential "interviewer effects" are those involving cross-racial interviewing. We were concerned about that since only one of our interviewers was nonwhite (an African American woman), and we did a substantial number of interviews among nonwhite populations. We have no way of knowing whether the racial composition of the interviewing staff influenced the answers we received. Survey researchers who have studied the effect of the interviewer's race generally conclude that it makes little difference in regard to nonracial subjects. Thus, it is unlikely that the answers to our questions about, for example, what activities people did in the previous year or how connected to the past they felt on different occasions were affected by having a largely white interviewing staff. A few of our questions (e.g., Which area of the past is most important to you?) did touch on race, and respondents did talk extensively about race in their open-ended answers. But they did not necessarily get into the areas that researchers find most affected by the race of the interviewer--for instance, expressions of hostility or closeness to whites by black respondents.20

Since we are arguing that black and Sioux respondents expressed a significant amount of racial feeling and racial distinctiveness, the use of white interviewers probably only served to understate the degree of that feeling. In other words, we suspect that the use of African American or Indian interviewers would further strengthen rather than undercut our conclusions. It is possible that these interviewer effects also understated ethnic distinctiveness in the answers given by Mexican Americans, but we suspect that the larger impact in that sample, as we note in chapter 5, was to encourage more patriotic answers among people worried about current anti-immigrant feeling.

Presumably, the particular time that we called also made a difference. (Almost all the calls in the national sample were made in April and May of 1994. We began the minority samples in June 1994, but did not complete the last of those interviews until mid-November of that year.) Calling in 1994 probably meant that we were a bit more likely to hear about Schindler's List, which won the Academy Award for Best Picture in March of that year, or about Richard Nixon and Jacqueline Kennedy, both of whom died while we were doing the survey. Similarly, people in the national sample, who were generally called in the spring, might have been more likely to talk about Easter, just as Mexican Americans, some of whom were called in the fall, might have been more likely to mention Thanksgiving. The appearance of Proposition 187 on the California ballot in the fall of 1994 may have influenced some of the Mexican Americans' answers, even though only one person mentioned it explicitly.21 There is no way to do survey research insulated from particular historical or social contexts, and this survey, like all others (and indeed like all forms of social and historical research) was inevitably shaped by the people who carried it out and the times in which they lived.

Another important way that our survey was shaped by the realities of the moment had to do with the problem of getting people to spend time on the phone with strangers. As survey researchers know all too well (and anyone with a telephone probably recognizes), this problem has gotten much worse in the recent past along with the explosion in telemarketing and telephone soliciting for charities and causes. Americans have grown increasingly irritated with being interrupted by unrequested phone calls, and their response has increasingly been to simply hang up. We admit to having done the same thing.

Since one of the goals of surveys like ours is to get a random sample of views, these hang-ups ("refusals" as the survey researchers call them) are a potential problem. The worry, of course, is that the people who are willing to talk are in some way unrepresentative. We don't think that this was a particular problem in our survey, but we should acknowledge that many people refused to talk with us.

In the national sample, for example, the CSR called 2,105 different phone numbers (not counting the calls to numbers that were determined to be either businesses or out of service) to complete the 808 interviews. They classified these 1,300 uncompleted interviews into four categories. Despite at least eight different attempts (including two attempts each on weekday mornings, afternoons, and evenings, and two on the weekend), 250 people "never answered." Another 121 people were away during the entire survey period, were ill, or didn't speak English or Spanish.22 The CSR categorized another 211 people as "persistently unavailable" when repeated attempts (at different times of the day) to contact the selected respondent failed. Finally, in 772 cases, people who answered the phone refused to participate despite at least two attempts by the CSR to win their cooperation. These included 187 hang-ups that occurred before our interviewers were even able to establish that they were speaking to a residence.
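To make those dispositions easier to follow, the short sketch below simply redoes the arithmetic on the counts reported above. The two rates it prints are common summary measures of our own choosing, not necessarily the formulas the CSR used in its reporting.

```python
# Call dispositions for the national sample, taken from the counts above.
dispositions = {
    "completed interviews": 808,
    "refusals": 772,
    "never answered": 250,
    "persistently unavailable": 211,
    "away, ill, or no English/Spanish": 121,
}

residential_numbers_dialed = 2105  # businesses and out-of-service numbers excluded

completion_rate = dispositions["completed interviews"] / residential_numbers_dialed
refusal_share = dispositions["refusals"] / (
    dispositions["refusals"] + dispositions["completed interviews"]
)
print(f"completions per residential number dialed: {completion_rate:.1%}")  # about 38%
print(f"refusals per (refusal + completion):       {refusal_share:.1%}")    # about 49%
```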

In these and other cases, it is difficult to discern the reasons for the refusals, even though the interviewers recorded detailed notes on their efforts. In our rough count of responses, the most common explanation for refusing to participate was a generic comment like "not interested" that people often use when trying to get off the phone quickly. For example, on Sunday, April 24, 1994, an interviewer reached a woman, who listened to the background information about the confidentiality of the survey and then responded that "she wasn't interested." The interviewer tried to explain the importance of getting everyone's opinions and then offered to call back if this was a bad time. She said it was, and the interviewer arranged to call back the next day. The interviewer's notes record her frustration: "I identified myself, and the female respondent remembered talking to us before. However, after she stated remembering our previous calls, she politely, but firmly, said she still wasn't interested and hung up. I was not able to explain the purpose of the interview or do much in the way of conversion." The interviewer tried one more time five days later; the respondent repeated once again that she "wasn't interested" and hung up.

Overall, minority respondents were more likely to refuse to be interviewed--a pattern well known to survey researchers.23 Some of these refusals no doubt reflected the suspicion that a Mexican American in south Texas or an African American in Brooklyn might have about a call from an obviously white interviewer from Indiana. And, indeed, many of the minority refusals came almost immediately. We had 348 refusals (and 165 completions) in calls to high-density African American phone exchanges, but 170 of those refusals came before we could establish the identity of the potential respondent. Similarly, the Mexican American sample included 207 completions with 407 refusals--168 of these coming before residency was established. American Indians from Pine Ridge were, however, more receptive to our calls; we had 186 completions and only 126 refusals.

Our immediate concern was whether our refusal rate was "high." John Kennedy told us that the refusal rate for this survey was the same as for other surveys of this length that the CSR conducted at the same time. Our survey was unusually long; respondents were told that they would likely devote thirty minutes to it, and this scared away some people. After "not interested," probably the next most common comment among those who refused to be interviewed was "not enough time."24 And we did our interviews at a time when refusal rates were rising, presumably because of the boom in telemarketing. Our refusal rates might have seemed high by the standards of five years earlier, but not by those of 1994.25 Although we would have liked to talk to all the people we called, we have no reason to suspect that those who were willing to talk with us differed significantly from those who refused.

Our interviewers used some methods employed by ethnographers and oral historians, but this study was neither a true ethnography nor an oral history. Such studies supply enormously rich insight into particular people, but they cannot readily offer the representativeness of a national telephone sample, which reaches a much wider range of people. Still, that search for representativeness meant that we had to follow some of the standard practices of phone interviewing, in which interviewers rapidly type out respondents' answers and repeat them back to check the accuracy of the record. Not surprisingly, such techniques result in a less freely flowing conversation than would occur in an oral history interview.

Moreover, the pressure of getting down respondents' words quickly inevitably results in typing errors. Because this survey required considerable typing by interviewers, "keeping up with what the respondent was saying in typing down text" was one of the difficulties most frequently cited in our postsurvey interviews with the interviewers. As a result, we have occasionally taken the liberty of silently adding missing words and correcting errors in order to bring the transcripts closer to what respondents actually said and to improve the flow of their words. We tried hard, however, not to alter the meanings or even the nuances of the answers.

Although our survey shared the limitations of all telephone survey research, and the possible biases introduced by the method should always be kept in mind, we do not believe that the answers we received were especially unrepresentative or distorted. On the contrary, we (and the interviewers as well) were often surprised by the fullness and candor with which people answered our questions.

Notes

1. "The Pilot Study Report: People and the Past," and Melissa Keane, "Asking Questions About the Past," both in Mosaic (Spring/Summer 1992). Back.

2. We sometimes raised or lowered the percentage of people getting particular questions as we began to see which questions produced the richest answers. Back.

3. Christopher Botsko and John M. Kennedy, "Exploring the Past Using Survey Research: Procedures and Problems." Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Fort Lauderdale, Florida, May 1995. Back.

4. The average times were: Oglala Sioux--59 minutes; African American--48 minutes; Mexican American--47 minutes. Back.

5. Because of time zone differences, we did not include Hawaii in the sample. Back.

6. Keane, "Asking Questions About the Past," 8. Back.

7. Botsko and Kennedy, "Exploring the Past," 5.

8. Survey researchers find that response rates for surveys are significantly lower in California; they attribute this pattern to the greater degree of mobility among Californians, the tendency to spend more time out of doors, and a general attitude of "noncompliance" toward surveys.

9. Owen T. Thornberry and James T. Massey, "An Overview of Telephone Coverage," in R. M. Groves et al., eds., Telephone Survey Methodology (New York: Wiley, 1988), 30. (Only 5.8 percent of white households lack phone service.)

10. We recognize that because the samples we are comparing are relatively small, the power of our tests of significance is fairly low. Thus, observed differences among the samples must be large in order to be judged as statistically significant.

11. Minority respondents were also asked how much of a common history they thought they shared with other Americans and whether the past of any other place in the world was more important to them than the past of the United States. But these questions came after the question about how connected to the past respondents felt when studying history in school.

12. One other factor may have affected the differences. Blacks who did not graduate from high school gave a higher rating on connectedness to the past when studying history in school, and the national sample had a higher proportion of non-high school graduates (22 percent) than the black sample (12.4 percent).

13. See "A Methodological Report on the Latino National Political Survey (Partial Draft)" prepared by Robert Santos (University of Michigan) and Carolyn Rahe and Ann Shinefeld (Temple University). See also Rodolfo del Garza et al., Latino Voices: Mexican, Puerto Rican, and Cuban Perspectives on American Politics (Boulder: Westview Press, 1992). Back.

14. For a discussion of the distortions introduced by aggregating data from different groups of Native Americans as well as some of the other methodological and ethical problems in surveying Native Americans, see Fred Solop, "Surveying American Indians: Rethinking the Boundaries of Western Methodology," unpublished paper given to authors by Professor Solop, Department of Political Science, Northern Arizona University.

15. Peter T. Kilborn, "For Poorest Indians, Casinos Aren't Enough," New York Times, 11 June 1997, A1.

16. Jim Naughton, "The Sioux and the Soil: The Black Hills Battle," Washington Post, 24 March 1988, C1. Solop, "Surveying American Indians," reports that "according to tribal officials, 79 percent of homes on the Navajo reservation do not have telephones."

17. Kilborn, "For Poorest Indians."

18. As this book was in press, we learned of the very interesting study of "youth and history" in Europe, based on 30,000 interviews with adolescents in twenty-seven countries. The findings have some intriguing similarities to ours. European adolescents, for example, were much more interested in the pasts of their families than of their nations. They trusted museums and historical places more than other sources; found school textbooks less engaging than other sources; and were highly critical of the same features of their history classes that troubled our respondents: rote memorization of textbooks and facts. See Magne Angvik and Bodo von Borries, eds., Youth and History: A Comparative European Survey on Historical Consciousness and Political Attitudes Among Adolescents (2 vols., Hamburg: Koerber-Stiftung, 1997). References are to Volume A, pp. 74, 76, 86, 87, 88, 96.

19. Twenty-six of the 36 interviewers on whom we have information were under 25; most of them were college or graduate students.

20. See Barbara Anderson, Brian D. Silver, and Paul Abramson, "The Effects of the Race of the Interviewer on Race-Related Attitudes of Black Respondents in SRC/CPS National Election Studies," Public Opinion Quarterly 52 (Fall 1988): 289-342. See also, for a brief summary of previous research, Steven E. Finkel, Thomas Guterbock, and Marian J. Borg, "Race-of-Interviewer Effects in a Pre-election Poll, Virginia 1989," Public Opinion Quarterly 55 (Fall 1991): 313-30.

21. The interviewers who did most of the Spanish-language interviews thought that immigration concerns affected the response rate among Mexican Americans.

22. Janice Sebold, "Survey Period Length, Unanswered Numbers, and Non-response in Telephone Surveys," in Groves, Telephone Survey Methodology, 251-53, cites studies showing that 38 to 47 percent of unanswered calls are to nonresidential numbers. According to John Kennedy, language problems accounted for very few uncompleted interviews.

23. See Thornberry and Massey, "Trends in United States Telephone Coverage," 48, for evidence of lower response rates among African Americans as well as men, those under 25 and over 65, and those with less education.

24. One British study found refusal rates of 14 percent for a forty-minute questionnaire and 9 percent for a twenty-minute version. Martin Collins et al., "Nonresponse: The UK Experience," in Groves, Telephone Survey Methodology, 229. See also Robert M. Groves and Lars E. Lyberg, "An Overview of Non-response Issues in Telephone Surveys," in the same volume, 191-211.

25. For a study showing "disturbingly high" and increasing refusal rates for surveys in the mid-1990s, see "CMOR Refusal Rates and Industry Image Survey (Summary of Results)," Survey Research 28 (Winter-Spring 1997): 1-4.
