Surveys represent one of the most common types of quantitative social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person being surveyed, an online questionnaire, a face-to-face interview, or a telephone interview. Using surveys, it is possible to collect data from large or small populations (sometimes referred to as the universe of a study).
Different types of surveys draw on several research techniques developed by a variety of disciplines. For instance, interviewing began as a tool primarily for psychologists and anthropologists, while sampling got its start in the field of agricultural economics (Campbell and Katona, 1953, p. 15).
Survey research does not belong to any one field, and it can be employed by almost any discipline. According to Campbell and Katona, "It is this capacity for wide application and broad coverage which gives the survey technique its great usefulness..." (p. 16).
Surveys come in a wide range of forms and can be distributed using a variety of media.
General Instructions: We are interested in your writing and computing experiences and attitudes. Please take a few minutes to complete this survey. In general, when you are presented with a scale next to a question, please put an X through the number that best corresponds to your answer. For example, if you strongly agreed with the following question, you might put an X through the number 5. If you agreed moderately, you might put an X through number 4; if you neither agreed nor disagreed, you might put an X through number 3.
Example Question:

I like to read magazines like TIME or Newsweek.
Strongly Disagree   1   2   3   4   5   Strongly Agree
As is the case with all of the information we are collecting for our study, the information you provide will be kept completely confidential. Your teacher will not be made aware of any of your responses. Thanks for your help.
Your Name: ___________________________________________________________
Your Instructor's Name: __________________________________________________
Expectations about Writing (1 = Very Little, 5 = Very Much):

1. In general, how much writing do you think will be required in your classes at CSU?   1   2   3   4   5
2. How much writing do you think you will be required to do after you graduate?   1   2   3   4   5
3. How important do you think writing will be to your career?   1   2   3   4   5
Grades:

4. In this class, I expect to receive a grade of . . .   A   B   C   D   F
5. In previous writing classes, I have usually received a grade of . . .   A   B   C   D   F
Attitudes about Writing (1 = Strongly Disagree, 5 = Strongly Agree):

6. Good writers are born, not made.   1   2   3   4   5
7. I avoid writing.   1   2   3   4   5
8. Some people have said, "Writing can be learned but it can't be taught." Do you believe it can be learned?   1   2   3   4   5
9. Do you believe writing can be taught?   1   2   3   4   5
10. Practice is the most important part of being a good writer.   1   2   3   4   5
11. I am able to express myself clearly in my writing.   1   2   3   4   5
12. Writing is a lot of fun.   1   2   3   4   5
13. Good teachers can help me become a better writer.   1   2   3   4   5
14. Talent is the most important part of being a good writer.   1   2   3   4   5
15. Anyone with at least average intelligence can learn to be a good writer.   1   2   3   4   5
16. I am no good at writing.   1   2   3   4   5
17. I enjoy writing.   1   2   3   4   5
18. Discussing my writing with others is an enjoyable experience.   1   2   3   4   5
19. Compared to other students, I am a good writer.   1   2   3   4   5
20. Teachers who have read my writing think I am a good writer.   1   2   3   4   5
21. Other students who have read my writing think I am a good writer.   1   2   3   4   5
22. My writing is easy to understand.   1   2   3   4   5
Experiences in Previous Writing Classes (1 = Strongly Disagree, 5 = Strongly Agree):

23. On some of my past writing assignments, I have been required to submit rough drafts of my papers.   1   2   3   4   5
24. I've taken some courses that focused primarily on spelling, grammar, and punctuation.   1   2   3   4   5
25. In previous writing classes, I've had to revise my papers.   1   2   3   4   5
26. Some of my former writing teachers were more interested in my ideas than in my spelling, punctuation, and grammar.   1   2   3   4   5
27. In some of my former writing classes, I've commented on other students' papers.   1   2   3   4   5
28. In some of my former writing classes, I spent a lot of time working in groups.   1   2   3   4   5
29. Some of my former teachers acted as though the most important part of writing was spelling, punctuation, and grammar.   1   2   3   4   5
Please indicate the TIMES PER MONTH or HOURS PER WEEK you engage in the following activities:

Writing Activities: How many TIMES PER MONTH do you ...

30. Write in your journal   0   1   2   3   4+
31. Write poetry on your own   0   1   2   3   4+
32. Write letters to friends or family   0   1   2   3   4+
33. Write fiction   0   1   2   3   4+
34. Write papers for class   0   1   2   3   4+
35. Write for publication   0   1   2   3   4+

Reading Activities: How many HOURS PER WEEK do you ...

36. Read the newspaper   0   1   2   3   4+
37. Read fiction for pleasure   0   1   2   3   4+
38. Read magazines   0   1   2   3   4+
39. Read for class   0   1   2   3   4+
Attitudes about Computers (1 = Strongly Disagree, 5 = Strongly Agree):

40. The challenge of learning about computers is exciting.   1   2   3   4   5
41. I am confident that I can learn computer skills.   1   2   3   4   5
42. Anyone can learn to use a computer if they are patient and motivated.   1   2   3   4   5
43. Learning to operate computers is like learning any new skill--the more you practice, the better you become.   1   2   3   4   5
44. I feel apprehensive about working with computers.   1   2   3   4   5
45. I have difficulty in understanding the technical aspects of computers.   1   2   3   4   5
46. It scares me to think that I could cause the computer to destroy a large amount of information by hitting the wrong key.   1   2   3   4   5
47. You have to be a genius to understand all the special commands used by most computer programs.   1   2   3   4   5
48. If given the opportunity, I would like to learn about and use computers.   1   2   3   4   5
49. I have avoided computers because they are unfamiliar and somewhat intimidating to me.   1   2   3   4   5
50. I feel computers are necessary tools in both educational and work settings.   1   2   3   4   5
51. I own my own computer.   No   Yes
52. I don't own my own computer, but I regularly use my parents' or a friend's computer.   No   Yes
Mail Surveys

Imagine that you are interested in exploring the attitudes college students have about writing. Since it would be impossible to interview every student on campus, a mail-out survey would enable you to reach a large sample of college students. You might choose to limit your research to your own college or university, or you might extend your survey to several different institutions. If your research question demands it, the mail survey allows you to sample a very broad group of subjects at small cost.
Strengths
Cost: Mail surveys are low in cost compared to other methods of surveying. This type of survey can cost up to 50% less than the self-administered survey, and almost 75% less than a face-to-face survey (Bourque and Fielder, p. 9). Mail surveys are also substantially less expensive than drop-off and group-administered surveys.
Convenience: Since many of these types of surveys are conducted through a mail-in process, the participants are able to work on the surveys at their leisure.
Bias: Because the mail survey does not allow for personal contact between the researcher and the respondent, there is little chance that personal bias based on first impressions will alter the responses to the survey. This is an advantage: in an interview, an unlikeable interviewer can unfavorably affect the results. The lack of personal contact can be a disadvantage as well, however, since no one is present to clarify questions or encourage a response.
Sampling: Because this type of survey does not require personal contact between the researcher and the respondents, it is possible to reach a greater population and have a larger universe of respondents.
Weaknesses
Low Response Rate: One of the biggest drawbacks to written surveys, especially the mail-in, self-administered kind, is the low response rate. Compared to a telephone survey or a face-to-face survey, the mail-in written survey has a response rate of just over 20%.
Ability of Respondent to Answer Survey: Another problem with self-administered surveys is threefold: it requires assumptions about the physical ability, literacy level, and language ability of the respondents. Because most surveys pull participants from a random sampling, it is impossible to control for such variables. Some members of a survey group may have a primary language different from that of the survey. Others may be illiterate or read at a low level and therefore may not be able to answer the questions accurately. Along the same lines, persons with conditions that make reading difficult, such as dyslexia, visual impairment, or failing eyesight in old age, may not be able to complete the survey.
Group-Administered Questionnaires

Imagine that you are interested in finding out how instructors who teach composition in computer classrooms at your university feel about the advantages of teaching in a computer classroom over a traditional classroom. You have a very specific population in mind, so a mail-out survey would probably not be your best option. You might try an oral survey, but if you are doing this research alone, that might be too time consuming. The group-administered questionnaire would allow you to collect your survey results in a single sitting and would ensure a very high response rate (higher than if you stuck a survey into each instructor's mailbox). Your challenge would be to get everyone together. Perhaps your department holds monthly technology support meetings that most of your chosen sample would attend. Your task at this point would be to get permission to use part of the meeting time to administer the survey, or to convince the instructors to stay and fill it out after the meeting. Despite the challenges, this type of survey might be the most efficient for your specific purposes.
Strengths of Group-Administered Questionnaires

Rate of Response: This second type of written survey is generally administered to a sample of respondents in a group setting, guaranteeing a high response rate.
Specificity: This type of written survey is very versatile, allowing for a spectrum of open- and closed-ended questions, and can serve a variety of specific purposes, particularly if you are trying to survey a very specific group of people.
Weaknesses of Group-Administered Questionnaires
Sampling: This method requires a small sample and as a result is not well suited to surveys that would benefit from a large one. It is only useful in cases that call for very specific information from specific groups.
Scheduling: Since this method requires a group of respondents to answer the survey together, it demands a block of time that is convenient for all respondents.
Drop-Off Surveys

Imagine that you would like to find out how the dorm dwellers at your university feel about the lack of vegetarian cuisine in their dorm dining halls. You have prepared a questionnaire that requires quite a few long answers, and since you suspect that the students in the dorms may not have the motivation to take the time to respond, you might want a chance to tell them about your research and the benefits that might come from their responses, and to answer their questions about your survey. To ensure the highest response rate, you would probably pick a time of day when the majority of the dorm residents are home, and then work your way from door to door. If you don't have time to interview the number of students you need in your sample, but you don't trust the response rate of mail surveys, the drop-off survey might be the best option for you.
Strengths
Convenience: Like the mail survey, the drop-off survey allows the respondents to answer the survey at their own convenience.
Response Rates: The response rates for the drop-off survey are better than those of the mail survey because the method allows the researcher to make personal contact with the respondents, to explain the importance of the survey, and to answer any questions or concerns they might have.
Weaknesses
Time: Because of the personal contact this method requires, it takes considerably more time than the mail survey.
Sampling: Because of the time it takes to make personal contact with respondents, the universe of this kind of survey will be considerably smaller than a mail survey's pool of respondents.
Response: The response rate for this type of survey, although considerably better than that of the mail survey, is still not as high as the response rate you will achieve with an oral survey.
Oral Surveys

Oral surveys are a more personal form of surveying than written or electronic methods. They are generally used to gather thorough opinions and impressions from respondents.
Oral surveys can be administered in several different ways. For instance, in a group interview, as opposed to a group-administered written survey, each respondent is not given an individual instrument (questionnaire). Instead, the respondents work in groups to answer the questions together while one person takes notes for the whole group. Another more familiar form of oral survey is the phone survey. Phone surveys can be used to get short, one-word answers (yes/no) as well as longer answers.
Strengths
Personal Contact: Oral surveys conducted either on the telephone or in person give the interviewer the ability to answer questions from the participant. If the participant, for example, does not understand a question or needs further explanation on a particular issue, it is possible to converse with the participant. According to Glastonbury and MacKean, "interviewing offers the flexibility to react to the respondent's situation, probe for more detail, seek more reflective replies and ask questions which are complex or personally intrusive" (p. 228).
Response Rate: Although obtaining a certain number of respondents who are willing to take the time to do an interview is difficult, the researcher has more control over the response rate in oral survey research than with other types of survey research. As opposed to mail surveys where the researcher must wait to see how many respondents actually answer and send back the survey, a researcher using oral surveys can, if the time and money are available, interview respondents until the required sample has been achieved.
Weaknesses
Cost: The most obvious disadvantage of face-to-face and telephone surveys is the cost. It takes time to collect enough data for a complete survey, and time translates into payroll costs and sometimes payment for the participants.
Bias: Using face-to-face interviews for your survey may also introduce bias, from either the interviewer or the interviewee.
Types of Questions Possible: Certain types of questions are not convenient for this type of survey, particularly for phone surveys where the respondent does not have a chance to look at the questionnaire. For instance, if you want to offer the respondent a choice of 5 different answers, it will be very difficult for respondents to remember all of the choices, as well as the question, without a visual reminder. This problem requires the researcher to take special care in constructing questions to be read aloud.
Attitude: Anyone who has ever been interrupted during dinner by a phone interviewer is aware of the negative feelings many people have about answering a phone survey. Upon receiving these calls, many potential respondents will simply hang up.
Electronic Surveys

With the growth of the Internet (and in particular the World Wide Web) and the expanded use of electronic mail for business communication, the electronic survey is becoming a more widely used survey method. Electronic surveys can take many forms. They can be distributed as electronic mail messages sent to potential respondents. They can be posted as World Wide Web forms on the Internet. They can be loaded onto publicly available computers in high-traffic areas such as libraries and shopping malls, or onto laptops, so that respondents fill out the survey on the machine rather than on paper.
Strengths
Cost-savings: It is less expensive to send questionnaires online than to pay for postage or for interviewers.
Ease of Editing/Analysis: It is easier to make changes to the questionnaire, and to copy and sort data.
Faster Transmission Time: Questionnaires can be delivered to recipients in seconds, rather than in days as with traditional mail.
Easy Use of Preletters: Advance invitations (preletters) can be sent and responses received in a very short time, giving you early estimates of participation levels.
Higher Response Rate: Research shows that response rates on private networks are higher with electronic surveys than with paper surveys or interviews.
More Candid Responses: Research shows that respondents may answer more honestly with electronic surveys than with paper surveys or interviews.
Potentially Quicker Response Time with Wider Magnitude of Coverage: Due to the speed of online networks, participants can answer in minutes or hours, and coverage can be global.
Weaknesses
Sample Demographic Limitations: The population and sample are limited to those with access to a computer and an online network.
Lower Levels of Confidentiality: Due to the open nature of most online networks, it is difficult to guarantee anonymity and confidentiality.
Layout and Presentation Issues: Constructing the format of a computer questionnaire can be more difficult the first few times, due to a researcher's lack of experience.
Additional Orientation/Instructions: More instruction and orientation to the computer online systems may be necessary for respondents to complete the questionnaire.
Potential Technical Problems with Hardware and Software: As most of us (perhaps all of us) know all too well, computers have a much greater likelihood of "glitches" than oral or written forms of communication.
Response Rate: Even though research shows that e-mail response rates are higher, Oppermann (1995) warns that most of these studies found response rates higher only during the first few days; thereafter, the rates were not significantly higher.
Initial planning of the survey design and survey questions is extremely important in conducting survey research. Once surveying has begun, it is difficult or impossible to adjust the basic research questions under consideration or the tool used to address them since the instrument must remain stable in order to standardize the data set. This section provides information needed to construct an instrument that will satisfy basic validity and reliability issues. It also offers information about the important decisions you need to make concerning the types of questions you are going to use, as well as the content, wording, order and format of your survey questionnaire.
Four key issues should be considered when designing a survey or questionnaire: respondent attitude, the nature of the items (or questions) on the survey, the cost of conducting the survey, and the suitability of the survey to your research questions.
Respondent attitude: When developing your survey instrument, it is important to try to put yourself into your target population's shoes. Think about how you might react when approached by a pollster while out shopping or when receiving a phone call from a pollster while you are sitting down to dinner. Think about how easy it is to throw away a response survey that you've received in the mail. When developing your instrument, it is important to choose the method you think will work for your research, but also one in which you have confidence. Ask yourself what kind of survey you, as a respondent, would be most apt to answer.
Nature of questions: It is important to consider the relationship between the medium that you use and the questions that you ask. For instance, certain types of questions are difficult to answer over the telephone. Think of the problems you would have in attempting to record Likert scale responses, as in closed-ended questions, over the telephone--especially if a scale of more than five points is used. Responses to open-ended questions would also be difficult to record and report in telephone interviews.
Cost: Along with decisions about the nature of the questions you ask, expense issues also enter into your decision making when planning a survey. The population under consideration, the geographic distribution of this sample population, and the type of questionnaire used all affect costs.
Ability of instrument to meet needs of research question: Finally, there needs to be a logical link between your survey instrument and your research questions. If it is important to get a large number of responses from a broad sample of the population, you obviously will not choose to do a drop-off written survey or an in-person oral survey. Because of the size of the needed sample, you will need to choose a survey instrument that meets this need, such as a phone or mail survey. If you are interested in getting thorough information that might require a large amount of interaction between the interviewer and respondent, you will probably pick an in-person oral survey with a smaller sample of respondents. Your questions, then, will need to reflect both your research goals and your choice of medium.
Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.
All researchers must make two basic decisions when designing a survey: 1) whether to employ an oral, written, or electronic method, and 2) whether to use questions that are open-ended or closed-ended.
Closed-Ended Questions: Closed-ended questions limit respondents' answers to the survey. The participants are allowed to choose from either a pre-existing set of answers--dichotomous options such as yes/no or true/false, or multiple choice with an option for "other" to be filled in--or rating scale response options. The most common of the rating scale questions is the Likert scale question. This kind of question asks respondents to look at a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then "rate" this statement according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").
Open-Ended Questions: Open-ended questions do not give respondents answers to choose from, but rather are phrased so that the respondents are encouraged to explain their answers and reactions to the question with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to find information on the same topic as asked above (the future of elementary education), but would like to find out what respondents would come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic, but would still not like to limit the participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."
Note: Keep in mind that you do not have to use closed-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use closed-ended questions at the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed up."
Rating scales: ask respondents to rate an idea, concept, individual, program, product, etc. on a closed-ended scale, usually a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and the respondents are asked to indicate the degree to which they agree or disagree.
Ranking scales: ask respondents to rank a set of ideas or things, etc. For example, a researcher can provide respondents with a list of ice cream flavors, and then ask them to rank these flavors in order of which they like best, with the rank of "one" representing their favorite. These are more difficult to use than rating scales. They will take more time, and they cannot easily be used for phone surveys since they often require visual aids. However, since ranking scales are more difficult, they may actually increase appropriate effort from respondents.
Magnitude estimation scales: ask respondents to provide numeric estimation of answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, these scales have been found to help increase variance explanations over ordinal scaling.
Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.
Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.
Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or when they are not expected to have an articulate opinion regarding an issue.
Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.
The wording of survey questions is a tricky endeavor. It is difficult to develop shared meanings or definitions between researchers and the respondents, and among respondents.
In The Practice of Social Research, Keith Crew, a professor of Sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage which is closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).
This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.
To write effective questions, researchers need to keep in mind these four important techniques: directness, simplicity, specificity, and discreteness.
When considering the content of your questionnaire, obviously the most important consideration is whether the content of the questions will elicit the kinds of responses necessary to answer your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also consider the following questions as you are creating your initial questionnaire:
Although there are no hard and fast rules for ordering survey questions, there are a few general guidelines researchers can follow when setting up a questionnaire:
Before developing a survey questionnaire, Converse and Presser (1986) recommend that researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. This will not only give you some ideas on how to develop your questionnaire, but you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, you will need to take care to only use questions that are relevant to your study, and you will usually have to develop some questions on your own.
While designing questions for a survey, researchers should be aware of a few problems and how to avoid them:
"Everyone has an opinion": It is incorrect to assume that each respondent has an opinion regarding every question. Therefore, you might offer a "no opinion" option to avoid this assumption. Filters can also be created. For example, researchers can ask respondents if they have any thoughts on an issue, to which they have the option to say "no."
Agree and disagree statements: according to Converse and Presser (1986), these statements suffer from "acquiescence," or the tendency of respondents to agree despite question content (p. 35). Researchers can avoid this problem by using forced-choice questions with these statements.
Response order bias: this occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate. Typically, the respondent chooses the last or first response option. This problem might occur if researchers use long lists and/or rating scales.
Response set: this problem can occur when using a close-ended question format with response options like yes/no or agree/disagree. Sometimes respondents do not consider each question and just answer no or disagree to all questions.
Telescoping: occurs when respondents report that an event took place more recently than it actually did. To avoid this problem, Frey and Mertens (1995) say researchers can use "aided recall"--using a reference point or landmark, or a list of events or behaviors (p. 101).
Forward telescoping: occurs when respondents include events that actually happened before the established time frame. This results in overreporting. According to Converse and Presser (1986), researchers can use "bounded recall" to avoid this problem (p. 21). In bounded recall, researchers interview respondents several months or so after the initial interview to inquire about events that have happened since then. This technique, however, requires more resources. Converse and Presser say that researchers can also simply narrow the reference points used, which has been shown to reduce this problem too.
Fatigue effect: happens when respondents grow bored or tired during the interview. To avoid this problem, Frey and Mertens (1995) say researchers can use transitions, vary questions and response options, and put easy-to-answer questions at the end of the questionnaire.
Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording and order.
There are two types of survey pretests: participating and undeclared.
General Applications of Pretesting:
Whether you use a participating or an undeclared pretest, pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if they have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.
Pretesting for reliability and validity:
Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered the same way by respondents each time it is asked. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with their answers in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Both convergent validity and divergent validity can be checked by comparing a respondent's answer to another question measuring the same concept, and then comparing it to the response to a question that asks for the exact opposite answer.
For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
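As a sketch of how such pretest checks might be coded once responses have been entered and numerically coded, consider the following; all data, question wordings, and variable names here are hypothetical, and the divergent-validity check simply looks for a negative correlation between the two opposing questions.

```python
# A minimal sketch of the pretest checks described above, assuming responses
# from two pretest waves have already been coded numerically (1-5).
from statistics import correlation  # Python 3.10+

# Same five respondents answering the same question in two pretest waves.
wave1 = [4, 2, 5, 3, 4]
wave2 = [4, 2, 4, 3, 4]

# Reliability: how often do respondents give the same answer both times?
agreement = sum(a == b for a, b in zip(wave1, wave2)) / len(wave1)
print(f"Test-retest agreement: {agreement:.0%}")

# Divergent validity: answers to a question asking for the opposite position
# should correlate negatively with the original question.
republican_next = [5, 1, 4, 2, 5]   # "Next president should be a Republican"
vote_democrat   = [1, 5, 2, 4, 1]   # "Will you vote Democrat next election?"
print("Divergent check r =", round(correlation(republican_next, vote_democrat), 2))
```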
Once you have constructed a questionnaire, you'll need to make a plan that outlines how and to whom you will administer it. There are a number of options available in order to find a relevant sample group amongst your survey population. In addition, there are various considerations involved with administering the survey itself.
This section attempts to answer the question: "How do I go about getting my questionnaire answered?"
For all types of surveys, some basic practicalities need to be considered before the surveying begins. For instance, you need to determine the most convenient time to carry out the data collection (this becomes particularly important in interview surveying and group-administered surveys) and how long the data collection is likely to take. Finally, you need to make practical arrangements for administering the survey. Pretesting your survey will help you determine the time it takes to administer, process, and analyze your survey, and will also help you work out some of the bugs.
Written surveys can be handled in several different ways. A research worker can deliver the questionnaires to the homes of the sample respondents, explain the study, and then pick the questionnaires up on a later date (or, alternately, ask the respondent to mail the survey back when completed). Another option is mailing questionnaires directly to homes and having researchers pick up and check the questionnaires for completeness in person. This method has proven to have higher response rates than straightforward mail surveys, although it tends to take more time and money to administer.
It is important to put yourself into the role of respondent when deciding how to administer your survey. Most of us have received and thrown away a mail survey, and so it may be useful to think back to the reasons you had for not filling it out and returning it. Here are some ideas for boosting your response rate:
Face-To-Face Surveys
Oftentimes, conducting oral surveys requires a staff of interviewers; to control this variable as much as possible, the presentation and preparation of the interviewers are important considerations.
When actually administering the survey, you need to make decisions about how much of the participants' responses will be recorded, how much the interviewer will need to "probe" for responses, and how much the interviewer will need to account for context (the respondent's age, race, gender, reaction to the study, etc.). If you are administering a closed-ended question survey, these may not be considerations. On the other hand, when recording more open-ended responses, the researcher needs to decide beforehand on each of these factors:
Phone Surveys
Phone surveys certainly involve all of the preparation of face-to-face surveys, but they encounter new problems because of their reputation. It is much easier to hang up on a phone surveyor than it is to slam the door in someone's face, so the sheer number of calls needed to complete a survey can be staggering. Computer innovation has tempered this problem a bit by allowing for quick, random-digit dialing and by letting interviewers type answers directly into programs that automatically set up the data for analysis. Systems like CATI (computer-assisted telephone interviewing) have made phone surveys a more cost- and time-effective method, and therefore a popular one, although respondents are becoming more and more reluctant to answer phone surveys because of the increase in telemarketing.
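To make the random-digit dialing idea concrete (this is an illustration of the general technique, not of any particular CATI system), here is a minimal sketch; the area code and exchange prefixes are made up.

```python
# An illustrative sketch of random-digit dialing: fix the area code and
# local exchanges, then randomize the last four digits.
import random

random.seed(42)  # reproducible output for this illustration

def random_numbers(n, area_code="970", exchanges=("491", "221", "484")):
    """Generate n random telephone numbers within known local exchanges."""
    return [f"({area_code}) {random.choice(exchanges)}-{random.randint(0, 9999):04d}"
            for _ in range(n)]

for number in random_numbers(5):
    print(number)
```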
Before conducting a survey, you must choose a relevant survey population. And, unless a survey population is very small, it is usually impossible to survey the entire relevant population. Therefore, researchers usually just survey a sample of a population from an actual list of the relevant population, which in turn is called a sampling frame. With a carefully selected sample, researchers can make estimations or generalizations regarding an entire population's opinions, attitudes or beliefs on a particular topic.
There are two different types of sampling procedures--probability and nonprobability. Probability sampling methods ensure that each person in the population has a known chance of being selected, whereas nonprobability methods target specific individuals. Nonprobability sampling methods include approaches such as convenience, quota, purposive, and snowball sampling.
Clearly, there can be an inherent bias in nonprobability methods. Therefore, according to Weisberg, Krosnick, and Bowen (1989), it is not surprising that most survey researchers prefer probability sampling methods. Commonly used probability sampling methods for surveys include simple random, systematic, stratified, and cluster sampling.
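As a sketch of the simplest probability method, simple random sampling, drawn from a sampling frame (here a hypothetical list of 2,000 enrolled students):

```python
# A minimal sketch of simple random sampling from a sampling frame.
import random

sampling_frame = [f"student_{i:04d}" for i in range(1, 2001)]  # 2,000 students

random.seed(1)
sample = random.sample(sampling_frame, k=100)  # every student has an equal chance

print(len(sample), "respondents selected, e.g.:", sample[:3])
```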
Directly related to sample size are the concepts of sampling and nonsampling errors. According to Fox and Tracy (1986), surveys are subject to both sampling errors and nonsampling errors.
A sampling error arises from the fact that samples inevitably differ from their populations. Therefore, survey sample results should be seen only as estimations. Weisberg et al. (1989) note that sampling errors cannot be calculated for nonprobability samples, but they can be determined for probability samples. To determine sampling error, look first at the sample size and then at the sampling fraction--the percentage of the population that is being surveyed. The more people surveyed, the smaller the error. This error can also be reduced, according to Fox and Tracy (1986), by increasing the representativeness of the sample.
There are also two different kinds of nonsampling error--random and nonrandom errors. Fox and Tracy (1986) say random errors decrease the reliability of measurements; these errors can be reduced through repeated measurements. Nonrandom errors result from a bias in survey data, which is connected to response and nonresponse bias.
Any statement of sampling error must contain two essential components: the confidence level and the confidence interval. These two components are used together to express the accuracy of the sample's statistics in terms of the level of confidence that the statistics fall within a specified interval from the true population parameter. For example, a researcher may be "95 percent confident" that the sample statistic (that 50 percent favor candidate X) is within plus or minus 5 percentage points of the population parameter. In other words, the researcher is 95 percent confident that between 45 and 55 percent of the total population favor candidate X.
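To make the arithmetic in the candidate X example concrete, here is a minimal sketch using the standard normal-approximation formula for the margin of error of a proportion; the sample size of 384 is a hypothetical value, chosen because it yields roughly a plus-or-minus 5 point margin at the 95 percent confidence level.

```python
# A worked version of the candidate X example above.
import math

p = 0.50   # sample statistic: 50 percent favor candidate X
n = 384    # sample size (hypothetical)
z = 1.96   # z-score for a 95 percent confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% confidence interval: {p - margin:.1%} to {p + margin:.1%}")
# With n = 384, the margin is about +/- 5 points: 45% to 55%.
```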
Lauer and Asher (1988) provide a table that gives the confidence interval limits for percentages based upon sample size (p. 58):
Sample Size and Confidence Interval Limits
(95% confidence intervals based on a population incidence of 50% and a large population relative to sample size.)
Sample size     Confidence interval limits for percentages
10+             31%
20+             22%
30+             18%
40+             16%
50+             14%
60+             13%
70+             12%
80+             11%
90+             10.3%
100+            9.8%
150+            8.0%
200+            6.9%
250+            6.2%
300+            5.6%
400+            4.9%
500+            4.4%
1000+           3.1%
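The limits in this table are consistent with the standard 95 percent formula for a population incidence of 50 percent, 1.96 * sqrt(.5 * .5 / n). As a check, this short sketch reproduces the table's values:

```python
# Reproducing the confidence interval limits in Lauer and Asher's table
# from the standard 95 percent formula with p = 0.50.
import math

for n in [10, 20, 30, 50, 100, 200, 400, 1000]:
    limit = 1.96 * math.sqrt(0.25 / n)
    print(f"n = {n:>4}: +/- {limit:.1%}")
# n = 100 gives +/- 9.8% and n = 1000 gives +/- 3.1%, matching the table.
```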
When selecting a sample size, keep in mind that the more individuals surveyed from a target group, the tighter the measurement; the fewer surveyed, the looser the range of confidence limits. The confidence limits may need to be corrected if, according to Lauer and Asher (1988), "the sample size starts to approach the population size" or if "the variable under scrutiny is known to have a much [original emphasis] smaller or larger occurrence than 50% in the whole population" (p. 59). For smaller populations, Singleton (1988) says the standard error or confidence interval should be multiplied by a correction factor equal to sqrt(1 - f), where "f" is the sampling fraction, or the proportion of the population included in the sample.
Lauer and Asher (1988) give a table of correction factors for confidence limits where sample size is an important part of population size (p. 60) and also a table of correction factors for where the percentage incidence of the parameter in the population is not 50% (p. 61).
Tables for Calculating Confidence Limits vs. Sample Size
Correction Factors for Confidence Limits When Sample Size (n) Is an Important Part of Population Size (N >= 100)
Sample percentage of population     Correction factor
5%      .98
10%     .95
15%     .92
20%     .89
25%     .87
30%     .84
35%     .81
40%     .78
45%     .74
50%     .71
55%     .67
60%     .63
65%     .59
70%     .55
(For n over 70% of N, take all of N)
From Lauer and Asher (1988, p. 60)
Correction Factors for Rare and Common Percentage of Variables
Percentage incidence     Correction factor
50%             none
40% or 60%      .98
35% or 65%      .95
30% or 70%      .92
25% or 75%      .87
20% or 80%      .80
15% or 85%      .71
10% or 90%      .60
5% or 95%       .44
2.5% or 97.5%   .31
From Lauer and Asher (1988, p. 61)
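The factors in these two tables are consistent with sqrt(1 - f) for the finite-population correction and sqrt(p * (1 - p)) / .5 for the incidence correction. The sketch below applies both corrections to a hypothetical sample; the sample size, population size, and incidence are assumptions for illustration.

```python
# A sketch combining the two corrections above for a hypothetical survey.
import math

n, N = 200, 1000   # sample of 200 drawn from a population of 1,000
p = 0.20           # variable occurs in 20% of the population

base_limit = 1.96 * math.sqrt(0.25 / n)    # first table's value: about 6.9%
fpc = math.sqrt(1 - n / N)                 # about .89 for a 20% sample
incidence = math.sqrt(p * (1 - p)) / 0.5   # .80, matching the second table

corrected = base_limit * fpc * incidence
print(f"Corrected 95% limit: +/- {corrected:.1%}")  # about +/- 5.0%
```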
After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages. How you conduct these steps will depend on the scope of your study, your own capabilities, and the audience to whom you wish to direct the work.
It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do an unlimited amount of analysis (p. 243).
Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, "The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables [regarding those produced by social science computer software]" (p. 100). Of course, editing may not always be necessary--if, for example, you are doing a qualitative analysis of open-ended questions, or the survey is part of a larger project and gets distributed to other agencies for analysis. Even so, editing could be as simple as checking the information input into the computer.
All of this information should be used to test for statistical significance. See our guide on Statistics for more on this topic.
Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day and the cumulative number or percentage. Also, as each completed questionnaire is returned, it should be opened, scanned, and assigned an identification number.
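As an illustration of such a return log, here is a minimal sketch that tallies daily and cumulative returns; the mailing size and daily counts are hypothetical.

```python
# A minimal sketch of the return-rate log described above, assuming 500
# questionnaires were mailed out on day 0.
mailed_out = 500
daily_returns = [0, 3, 12, 25, 31, 22, 14, 9]  # returns per day since mail-out

cumulative = 0
for day, count in enumerate(daily_returns):
    cumulative += count
    print(f"Day {day}: {count:3d} returned, "
          f"cumulative {cumulative:3d} ({cumulative / mailed_out:.1%})")
```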
Before actually beginning the survey, the researcher should know how they want to analyze the data. As stated in the Processing the Results section, if you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because many different formulas are needed to properly analyze the survey research and obtain statistical significance. Since computer programs have made the process of analyzing data vastly easier than it once was, it is sensible to choose this route. Be sure to pick your program before you design your survey--some programs require the data to be laid out in different ways.
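A code book can be as simple as a mapping from each item's response categories to numeric codes, fixed before any data are collected. Here is a minimal sketch using items from the sample questionnaire above; the variable names are our own invention.

```python
# A minimal code book sketch: each variable gets a name and a mapping from
# response categories to numeric codes, defined before data collection.
codebook = {
    "q4_expected_grade": {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0},
    "q7_avoid_writing":  {str(i): i for i in range(1, 6)},  # Likert 1-5
    "q51_owns_computer": {"No": 0, "Yes": 1},
}

raw_response = {"q4_expected_grade": "B", "q7_avoid_writing": "2",
                "q51_owns_computer": "Yes"}
coded = {var: codebook[var][answer] for var, answer in raw_response.items()}
print(coded)  # {'q4_expected_grade': 3, 'q7_avoid_writing': 2, 'q51_owns_computer': 1}
```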
After the survey is conducted and the data collected, the results must be assembled in some useable format that allows comparison within the survey group, between groups, or both. The results could be analyzed in a number of ways. A t-test may be used to determine whether the scores of two groups differ on a single variable--whether writing ability differs between students in two classrooms, for instance. A matched (paired) t-test could be applied to determine whether scores of the same participants differ under different conditions or over time. An ANOVA could be applied if the study compares multiple groups on one or more variables. Correlation measurements could also be constructed to compare the results of two interacting variables within the data set.
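As a sketch of how these comparisons might look in practice, the following uses Python's scipy.stats package; the scores are hypothetical writing-attitude scale totals, not real data, and the dedicated statistics packages listed below would be used in a similar way.

```python
# A sketch of the comparisons named above, using scipy.stats.
from scipy import stats

class_a = [72, 65, 80, 74, 68, 77]
class_b = [61, 70, 59, 66, 64, 71]
class_c = [75, 82, 78, 73, 80, 76]

# Independent-samples t-test: do two groups differ on one variable?
t, p = stats.ttest_ind(class_a, class_b)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# Matched (paired) t-test: same participants measured at two times.
pre, post = [60, 55, 70, 62], [66, 59, 71, 70]
print("paired t-test:", stats.ttest_rel(pre, post))

# One-way ANOVA: comparing more than two groups at once.
print("ANOVA:", stats.f_oneway(class_a, class_b, class_c))

# Correlation between two variables within the data set.
hours_writing = [2, 1, 4, 3, 2, 4]
print("correlation:", stats.pearsonr(hours_writing, class_a))
```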
Secondary Analysis
Secondary analysis of survey data is an accepted methodology which applies previously collected survey data to new research questions. This methodology is particularly useful to researchers who do not have the time or money to conduct an extensive survey, but may be looking at questions for which some large survey has already collected relevant data. A number of books and chapters have been written about this methodology, some of which are listed in the annotated bibliography under "Secondary Analysis."
Advantages
Disadvantages
SNAP: Offers simple survey analysis and can help with the survey from start to finish, including the design of questions and questionnaires.
SPSS: The Statistical Package for the Social Sciences; can cope with most kinds of data.
SAS: A flexible general purpose statistical analysis system.
MINITAB: A very easy-to-use but fairly limited general purpose package for "beginners."
STATGRAPHICS: General interactive statistical package with good graphics but not very flexible.
The final stage of the survey is to report your results. There is not an established format for reporting a survey's results. The report may follow a pattern similar to formal experimental write-ups, or the analysis may show up in pitches to advertising agencies--as with Arbitron data--or the analysis may be presented in departmental meetings to aid curriculum arguments. A formal report might contain contextual information, a literature review, a presentation of the research question under investigation, information on survey participants, a section explaining how the survey was conducted, the survey instrument itself, a presentation of the quantified results, and a discussion of the results.
You can choose to graphically represent your data for easier interpretation by others outside your research project. You can use, for example, bar graphs, histograms, frequency polygons, pie charts, and contingency tables.
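For example, here is a minimal sketch of a bar graph and a pie chart using the matplotlib plotting library; the counts are hypothetical results for the computer-ownership items (questions 51 and 52) of the sample questionnaire.

```python
# A sketch of two of the displays mentioned above, with made-up counts.
import matplotlib.pyplot as plt

labels = ["Own computer", "Use parents'/friend's", "No regular access"]
counts = [58, 27, 15]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.bar(labels, counts)                            # bar graph of raw counts
ax1.set_ylabel("Respondents")
ax2.pie(counts, labels=labels, autopct="%1.0f%%")  # pie chart of shares
fig.suptitle("Computer access among respondents (hypothetical data)")
plt.tight_layout()
plt.show()
```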
In this section, we present several commentaries on survey research.
Strengths:
Weaknesses:
Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to grasp in terms of such dichotomies as "agree/disagree," "support/oppose," "like/dislike," etc., these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, content, etc. can significantly reduce the subject's own unreliability.
Because electronic mail is rapidly becoming such a large part of our communications system, this survey method deserves special attention. In particular, there are four basic ethical issues researchers should consider if they choose to use email surveys.
Sample Representativeness: Researchers who choose to do surveys have an ethical obligation to use population samples that are inclusive of race, gender, education, and income levels. If you choose to use e-mail to administer your survey, you therefore face a serious problem: individuals who have access to personal computers, modems, and the Internet are not necessarily representative of a population. It is therefore suggested that researchers not use an e-mail survey when a more inclusive research method is available. However, if you do choose an e-mail survey because of its other advantages, you might consider including in your survey write-up a reminder of the limitations of sample representativeness when using this method.
Data Analysis: Even though e-mail surveys tend to have greater response rates, researchers still do not necessarily know exactly who has responded. For example, some e-mail accounts are screened by an unintended viewer before they reach the intended viewer. This issue challenges the external validity of the study. According to Goree and Marszalek (1995), because of this challenge, "researchers should avoid using inferential analysis for electronic surveys" (p. 78).
Confidentiality versus Anonymity: An electronic response is never truly anonymous, since researchers know the respondents' e-mail addresses. According to Goree and Marszalek (1995), researchers are ethically required to guard the confidentiality of their respondents and to assure respondents that they will do so.
Responsible Quotation: It is considered acceptable for researchers to correct typographical or grammatical errors before quoting respondents since respondents do not have the ability to edit their responses. According to Goree and Marszalek (1995), researchers are also faced with the problem of "casual language" use common to electronic communication (p. 78). Casual language responses may be difficult to report within the formal language used in journal articles.
Nonresponse and response rates are becoming more and more important issues in survey research each year. According to Weisberg, Krosnick and Bowen (1989), in the 1950s it was not unusual for survey researchers to obtain response rates of 90 percent. Now, however, people are not as trusting of interviewers, and response rates are much lower--typically 70 percent or less. Today, even when survey researchers obtain high response rates, they still have to deal with many potential respondent problems.
Nonresponse Errors

Nonresponse is usually considered a source of bias in a survey, aptly called nonresponse bias. Nonresponse bias is a problem for almost every survey, as it arises from the fact that there are usually differences between the ideal sample pool of respondents and the sample that actually responds to a survey. According to Fox and Tracy (1986), "when these differences are related to criterion measures, the results may be misleading or even erroneous" (p. 9). For example, a response rate of only 40 or 50 percent creates problems of bias since the results may reflect an inordinate percentage of a particular demographic portion of the sample. Variance estimates and confidence intervals also grow as the sample size is reduced, and it becomes more difficult to construct confidence limits.
Nonresponse bias usually cannot be avoided, and so it inevitably affects most survey research by creating errors in a statistical measurement. Researchers must therefore account for nonresponse either during the planning of their survey or during the analysis of their survey results. If you draw a larger sample during the planning stage, confidence limits can then be based on the actual number of responses.
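Planning for nonresponse can be a one-line calculation: inflate the initial mailing by the response rate you expect. A sketch, assuming a needed sample of 400 and an anticipated 40 percent return (both figures are hypothetical):

```python
# A small planning calculation, following the suggestion above.
import math

needed_responses = 400   # sample size your confidence limits require
expected_rate = 0.40     # anticipated mail-survey response rate (assumed)

to_mail = math.ceil(needed_responses / expected_rate)
print(f"Mail {to_mail} questionnaires to expect about {needed_responses} back.")
# to_mail = 1000; confidence limits are then based on actual returns.
```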
Household-Level Determinants of Nonresponse
According to Couper and Groves (1996), reductions in nonresponse and its errors should be based on a theory of survey participation. This theory argues that a person's decision to participate in a survey generally occurs during the first moments of interaction with an interviewer or with the survey text. According to Couper and Groves, four types of influences affect a potential respondent's decision about whether to cooperate in a survey. First, potential respondents are influenced by two factors that the researcher cannot control: their social environments and their immediate households. Second, potential respondents are influenced by two factors the researcher can control: the survey design and the interviewer.
To minimize nonresponse, Couper and Groves suggest that researchers manipulate the two factors they can control--the survey design and the interviewer.
Not only do survey researchers have to be concerned about nonresponse rate errors, but they also have to be concerned about the following potential response rate errors:
These response errors can seriously distort a survey's results. Unfortunately, according to Fox and Tracy (1986), response bias is difficult to eliminate; even if the same respondent is questioned repeatedly, he or she may continue to falsify responses. Response order bias and response set errors, however, can be reduced through careful development of the survey questionnaire.
Related to the issue of response errors, especially response order bias and response bias, is the issue of satisficing. According to Krosnick, Narayan, and Smith (1996) satisficing is the notion that certain survey response patterns occur as respondents "shortcut the cognitive processes necessary for generating optimal answers" (p. 29). This theoretical perspective arises from the belief that most respondents are not highly motivated to answer a survey's questions, as reflected in the declining response rates in recent years. Since many people are reluctant to be interviewed, it is presumptuous to assume that respondents will devote a lot of effort to answering a survey.
The theoretical notion of satisficing can be further understood by considering what respondents must do to provide optimal answers. According to Krosnick et al. (1996), "respondents must carefully interpret the meaning of each question, search their memories extensively for all relevant information, integrate that information carefully into summary judgments, and respond in ways that convey those judgments' meanings as clearly and precisely as possible" (p. 31). Therefore, satisficing occurs when one or more of these cognitive steps is compromised.
Satisficing takes two forms: weak and strong. Weak satisficing occurs when respondents go through all of the cognitive steps necessary to provide optimal answers but are less thorough in their cognitive processing; for example, a respondent may settle on the first response that seems acceptable rather than generating an optimal answer. Strong satisficing, on the other hand, occurs when respondents omit the retrieval and judgment steps altogether.
Even though they believe that not enough is yet known to offer suggestions on how to increase optimal respondent answers, Krosnick et al. (1996) argue that satisficing can be reduced by maximizing "respondent motivation" and by "minimizing task difficulty" in the survey questionnaire (p. 43).
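Although Krosnick et al. stop short of prescribing remedies, researchers sometimes screen completed questionnaires for nondifferentiation, or "straight-lining," one observable symptom of strong satisficing. The following Python sketch is purely illustrative, not a method proposed by the cited authors; the respondent IDs and answers are hypothetical. It flags respondents who gave an identical answer to every item in a rating battery.

    def straight_liners(responses):
        # Return IDs of respondents who gave the same answer to every item.
        return [rid for rid, answers in responses.items()
                if len(set(answers)) == 1]

    # Hypothetical answers to a nine-item, 5-point attitude battery.
    data = {
        "r001": [3, 3, 3, 3, 3, 3, 3, 3, 3],   # no variation: possible satisficer
        "r002": [2, 4, 5, 1, 3, 4, 2, 5, 4],
        "r003": [5, 5, 5, 5, 5, 5, 5, 5, 5],   # no variation: possible satisficer
    }
    print(straight_liners(data))   # ['r001', 'r003']

A flagged case is not proof of satisficing (some respondents may genuinely hold uniform views), so such screens are best treated as prompts for closer inspection rather than grounds for automatic exclusion.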
General Survey Information:
Allan, Graham, & Skinner, Chris (Eds.) (1991). Handbook for Research Students in the Social Sciences. London: The Falmer Press.
This book is an excellent resource for anyone studying in the social sciences: well written, clear, and concise, with pertinent research information.
Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey (2nd). Burr Ridge, IL: Irwin.
Provides thorough, effective survey research guidelines and strategies for sponsors, information seekers, and researchers. In a very accessible, but comprehensive, format, this handbook includes checklists and guidelists within the text, bringing together all the different techniques and principles, skills and activities to do a "really effective survey."
Babbie, E.R. (1973). Survey research methods. Belmont, CA: Wadsworth.
A comprehensive overview of survey methods. Solid basic textbook on the subject.
Babbie, E.R. (1995). The practice of social research (7th). Belmont, CA: Wadsworth.
The reference of choice for many social science courses. An excellent overview of question construction, sampling, and survey methodology. Includes a fairly detailed critique of an example questionnaire. Also includes a good overview of statistics related to sampling.
Belson, W.A. (1986). Validity in survey research. Brookfield, VT: Gower.
Emphasis on construction of survey instrument to account for validity.
Bourque, Linda B., & Fiedler, Eve P. (1995). How to Conduct Self-Administered and Mail Surveys. Thousand Oaks, CA: Sage Publications.
Contains current information on both self-administered and mail surveys. It is a great resource if you want to design your own survey; there are step-by-step methods for conducting these two types of surveys.
Bradburn, N.M., & Sudman, S. (1979). Improving interview method and questionnaire design. San Francisco: Jossey-Bass Publishers.
A good overview of polling. Includes setting up questionnaires and survey techniques.
Bradburn, N. M., & Sudman, S. (1988). Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.
These veteran survey researchers answer questions about survey research that are commonly asked by the general public.
Campbell, Angus, & Katona, George. (1953). The Sample Survey: A Technique for Social Science Research. In Newcomb, Theodore M. (Ed.), Research Methods in the Behavioral Sciences (pp. 14-55). New York: The Dryden Press.
Includes information on all aspects of social science research. Some chapters in this book are outdated.
Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire. Newbury Park, CA: Sage.
A very helpful little publication that addresses the key issues in question construction.
Dillman, D.A. (1978). Mail and telephone surveys: The total design method. New York: John Wiley & Sons.
An overview of conducting mail and telephone surveys.
Frey, James H., & Oishi, Sabine Mertens. (1995). How To Conduct Interviews By Telephone and In Person. Sage Publications: Thousand Oaks.
This book provides a step-by-step breakdown of how to design and conduct telephone and in-person interview surveys.
Fowler, Floyd J., Jr. (1993). Survey Research Methods (2nd.). Newbury Park, CA: Sage.
An overview of survey research methods.
Fowler, F. J. Jr., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error. Newbury Park, CA: Sage.
Another aspect of validity/reliability--interviewer error.
Fox, J. & Tracy, P. (1986). Randomized Response: A Method for Sensitive Surveys. Beverly Hills, CA: Sage.
Authors provide a good discussion of response issues and methods of random response, especially for surveys with sensitive questions.
Frey, J. H. (1989). Survey research by telephone (2nd). Newbury Park, CA: Sage.
General overview to telephone polling.
Glock, Charles (ed.) (1967). Survey Research in the Social Sciences. New York: Russell Sage Foundation.
Although fairly outdated, this collection of essays is useful in illustrating the somewhat different ways in which different disciplines regard and use survey research.
Hoinville, G. & Jowell, R. (1978). Survey research practice. London: Heinemann.
Practical overview of the methods and procedures of survey research, particularly discussing problems which may arise.
Hyman, H. H. (1972). Secondary Analysis of Sample Surveys. New York: John Wiley & Sons.
This source is particularly useful for anyone attempting to do secondary analysis. It offers a comprehensive overview of this research method, and couches it within the broader context of social scientific research.
Hyman, H. H. (1955). Survey design and analysis: Principles, cases, and procedures. Glencoe, IL: Free Press.
According to Babbie, an oldie but goodie--a classic.
Jones, R. (1985). Research methods in the social and behavioral sciences. Sunderland, MA: Sinauer.
General introduction to methodology. Helpful section on survey research, especially the discussion on sampling.
Kalton, G. (1983). Compensating for missing survey data. Ann Arbor, MI: Survey Research Center, Institute for Social Research, the University of Michigan.
Addresses a problem often encountered in survey methodology.
Kish, L. (1965). Survey sampling. New York: John Wiley & Sons.
Classic text on sampling theories and procedures.
Lake, C.C., & Harper, P. C. (1987). Public opinion polling: A handbook for public interest and citizen advocacy groups. Washington, D.C.: Island Press.
A clearly written, easy-to-follow guide for planning, conducting, and analyzing public surveys. Presents material in a step-by-step fashion, including checklists, potential pitfalls, and real-world examples and samples.
Lauer, J.M., & Asher, J. W. (1988). Composition research: Empirical designs. New York: Oxford UP.
Excellent overview of a number of research methodologies applicable to composition studies. Includes a chapter on "Sampling and Surveys" and appendices on basic statistical methods and considerations.
Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Applied Social Research: Tool for the Human Services (2nd). Fort Worth, TX: Holt.
A good basic general research textbook that also includes sections on minority issues in research and on the analysis of "available" or secondary data.
Rea, L. M., & Parker, R. A. (1992). Designing and conducting survey research: A comprehensive guide. San Francisco: Jossey-Bass.
Written for the social and behavioral sciences, public administration, and management.
Rossi, P.H., Wright, J.D., & Anderson, A.B. (eds.) (1983). Handbook of survey research. New York: Academic Press.
Handbook of quantitative studies in social relations.
Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.
A how-to book written for the social sciences.
Sayer, Andrew. (1992). Methods In Social Science: A Realist Approach. London and New York: Routledge.
Gives a different perspective on social science research.
Schuldt, Barbara A., & Totten, Jeff W. (1994, Winter). Electronic Mail vs. Mail Survey Response Rates. Marketing Research, 6, 36-39.
An article with specific information for electronic and mail surveys. Mainly a technical resource.
Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys. New York: Academic Press.
Detailed analysis of research question wording and question order effects on respondents.
Schwarz, N., & Sudman, S. (1996). Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. San Francisco: Jossey-Bass.
Authors summarize the latest research methods for analyzing the cognitive and communicative processes involved in answering survey questions.
Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.
Explores the survey as a "social conversation" to investigate what answers mean in relation to how people understand the world and communicate.
Simon, J. (1969). Basic research methods in social science: The art of empirical investigation. New York: Random.
An excellent discussion of survey analysis. The definitions and descriptions begin from a fairly understandable (simple) starting point, then the discussion unfolds to cover some fairly complex interpretive strategies.
Singleton, R., Jr., et al. (1988). Approaches to social research. New York: Oxford UP.
Has a very accessible chapter on sampling as well as a chapter on survey research.
Smith, Robert B. (Ed.) (1982). A Handbook of Social Science Methods, Volume 3. New York: Praeger.
Part of a series of handbooks, each covering specific topics in social science research. A good technical resource, though slightly dated.
Lee, E. S., Forthofer, R.N., & Lorimor, R.J. (1989). Analyzing complex survey data. Newbury Park, CA: Sage Publications.
Details on the statistical analysis of survey data.
Singer, E., & Presser, S., eds. (1989). Survey research methods: A reader. Chicago: U of Chicago P.
The essays in this volume originally appeared in various issues of Public Opinion Quarterly.
Survey Research Center (1983). Interviewer's manual. Ann Arbor, MI: University of Michigan Press.
Very practical, step-by-step guide to conducting a survey and interview with lots of examples to illustrate the process.
Pearson, R.W., & Boruch, R.F. (Eds.) (1986). Survey Research Designs: Towards a Better Understanding of Their Costs and Benefits. Berlin: Springer-Verlag.
Explains, in a technical fashion, the financial aspects of research design. Somewhat of a cost-analysis book.
Weisberg, H.F., Krosnick, J.A., & Bowen, B.D. (1989). An introduction to survey research and data analysis. Glenview, IL: Scott Foresman.
A good discussion of basic analysis and statistics, particularly what statistical applications are appropriate for particular kinds of data.
Studies:
Anderson, B., Puur, A., Silver, B., Soova, H., & Voormann, R. (1994). Use of a lottery as an incentive for survey participation: a pilot survey in Estonia. International Journal of Public Opinion Research, 6, 64-71.
Looks at return results in a study that offers incentives, and recommends incentive use to increase response rates.
Bare, J. (1994). Truth about daily fluctuations in 1992 pre-election polls. Newspaper Research Journal, 15, 73-81.
Comparison of variations between daily poll results of the major polls used during the 1992 American Presidential race.
Chi, S. (1993). Computer knowledge, interests, attitudes, and uses among faculty in two teachers' universities in China. DAI-A, 54/12, 4412-4623.
Survey indicating a strong link between subject area and computer usage.
Cowans, J. (1994). Wielding the people: Opinion polls and the problem of legitimacy in France since 1944. DAI-A, 54/12, 4556-5027.
Study looks at how the advent of opinion polling has affected the legitimacy of French governments since World War II.
Crewe, I. (1993). A nation of liars? Opinion polls and the 1992 election. Journal of the Market Research Society, 35, 341-359.
Poses possible reasons the British polls were so wrong in predicting the outcomes of the 1992 national elections.
Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the Teaching of English, 9(3), 242-249.
Discussion of basics in question development and data analysis. Also includes some sample questions.
Daniell, S. (1993). Graduate teaching assistants' attitudes toward and responses to academic dishonesty. DAI-A, 54/06, 2065-2257.
Study explores the ethical and academic responses to cheating, using a large survey tool.
Mittal, B. (1994). Public assessment of TV advertising: Faint praise and harsh criticism. Journal of Advertising Research, 34, 35-53.
Results of a survey of Southern U.S. television viewers' perceptions of television advertisements.
Palmquist, M., & Young, R.E. (1992). Is writing a gift? The impact on students who believe it is. In J.R. Hayes et al. (Eds.), Reading empirical research studies: The rhetoric of research. Hillsdale, NJ: Erlbaum.
This chapter presents results of a study of student beliefs about writing. Includes sample questions and data analysis.
Serow, R. C., & Bitting, P. F. (1995). National service as educational reform: A survey of student attitudes. Journal of Research and Development in Education, 28(2), 87-90.
This study assessed college students' attitude toward a national service program.
Stouffer, Samuel. (1955). Communism, Conformity, and Civil Liberties. New York: John Wiley & Sons.
This famous old survey is worth examining. It examined the impact of McCarthyism on the attitudes of both the general public and community leaders, asking whether the repression of the early 1950s affected support for civil liberties.
Wanta, W. & Hu, Y. (1993). The agenda-setting effects of international news coverage: An examination of differing news frames. International Journal of Public Opinion Research, 5, 250-264.
Discusses results of Gallup polls on important problems in relation to international news coverage.
Worcester, R. (1992). The performance of the political opinion polls in the 1992 British general election. Marketing and Research Today, 20, 256-263.
A critique of the use of polls in an attempt to predict voter actions.
Yamada, S, & Synodinos, N. (1994). Public opinion surveys in Japan. International Journal of Public Opinion Research, 6, 118-138.
Explores trends in opinion poll usage, response rates, and refusals in Japanese polls from 1975 to 1990.
Criticism/Critique/Evaluation:
Bangura, A. K. (1992). The limitations of survey research methods in assessing the problem of minority student retention in higher education. San Francisco: Mellen Research UP.
Case study done at a Maryland university addressing an aspect of validity involving intercultural factors.
Bateson, N. (1984). Data construction in social surveys. London: Allen & Unwin.
Tackles the theory of the method (but not the methods of the method) of data construction. Deals with the validity of the data by validating the process of data construction.
Braverman, M. (1996). Sources of Survey Error: Implications for Evaluation Studies. New Directions for Evaluation: Advances in Survey Research,70, 17-28.
Looks at how evaluations using surveys can benefit from using survey design methods that reduce various survey errors.
Brehm, J. (1994). Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research, 6, 45-63.
Considers whether incentives or the original contact letter lead to increased response rates.
Bulmer, M. (1977). Social-survey research. In M. Bulmer (ed.), Sociological research methods: An introduction. London: Macmillan.
The section discusses the pros and cons of survey research findings, as well as inferences and the interpretation of relationships found in social-survey analysis.
Couper, M., & Groves, R. (1996). Household-Level Determinants of Survey Nonresponse. New Directions for Evaluation: Advances in Survey Research, 70, 63-80.
Authors discuss their theory of survey participation. They argue that decisions to participate are shaped by two factors: interactions with the interviewer and the sociodemographic characteristics of respondents.
Couto, R. (1987). Participatory research: Methodology and critique. Clinical Sociology Review, 5, 83-90.
Criticism of survey research. Addresses knowledge/power/change issues through the critique.
Dillman, D., Sangster, R., Tarnai, J., & Rockwood, T. (1996). Understanding Differences in People's Answers to Telephone and Mail Surveys. New Directions for Evaluation: Advances in Survey Research, 70, 45-62.
Explores the issue of differences in respondents' answers in telephone and mail surveys, which can affect a survey's results.
Esaiasson, P. & Granberg, D. (1993). Hidden negativism: Evaluation of Swedish parties and their leaders under different survey methods. International Journal of Public Opinion Research, 5, 265-277.
Compares varying results of mailed questionnaires vs. telephone and personal interviews. Findings indicate methodology affected results.
Guastello, S. & Rieke, M. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.
Looks at the use of honesty, or integrity, testing to predict theft by employees, questioning further use of the tests due to extremely low validity. Social and legal implications are also considered.
Hamilton, R. (1991). Work and leisure: On the reporting of poll results. Public Opinion Quarterly, 55, 347-356.
Looks at methodology changes that affected reports of results in the Harris poll on American Leisure.
Juster, F. & Stanford, F. (1991). Comment on work and leisure: On reporting of poll results. Public Opinion Quarterly, 55, 357-359.
Rebuttal of the Hamilton essay, cited above. The rebuttal is based upon statistical interpretation methods used in the cited survey.
Krosnick, J., Narayan, S., & Smith, W. (1996). Satisficing in Surveys: Initial Evidence. New Directions for Evaluation: Advances in Survey Research, 70, 29-44.
Authors discuss "satisficing," a cognitive approach to survey response, which they believe helps researchers understand how survey respondents arrive at their answers.
Lindsey, J.K. (1973). Inferences from sociological survey data: A unified approach. San Francisco: Jossey-Bass.
Examines the statistical analysis of survey data.
Morgan, F. (1990). Judicial standards for survey research: An update and guidelines. Journal of Marketing, 54, 59-70.
Looks at legal use of survey information as defined and limited in recent cases. Excellent definitions.
Pottick, K. (1990). Testing the underclass concept by surveying attitudes and behavior. Journal of Sociology and Social Welfare, 17, 117-125.
Review of definitional tests constructed to define "underclass."
Rohme, N. (1992). The state of the art of public opinion polling worldwide. Marketing and Research Today, 20, 264-271.
A quick review of the use of polling in several countries, concluding that the use of polling is on the rise worldwide.
Sabatelli, R. (1988). Measurement issues in marital research: A review and critique of contemporary survey instruments. Journal of Marriage and the Family, 55, 891-915.
Examines issues of methodology.
Schriesheim, C. A., & Denisi, A. S. (1980). Item Presentation as an Influence on Questionnaire Validity: A Field Experiment. Educational and Psychological Measurement, 40(1), 175-82.
Two types of questionnaire formats measuring leadership variables were examined: one with items measuring the same dimensions grouped together and the second with items measuring the same dimensions distributed randomly. The random condition showed superior validity.
Smith, T. (1990). A critique of the Kinsey Institute/Roper organization national sex knowledge survey. Public Opinion Quarterly, 55, 449-457.
Questions validity of the survey based upon question selection and response interpretations. A rejoinder follows, defending the poll.
Smith, Tom W. (1990). The First Straw? A Study of the Origins of Election Polls. Public Opinion Quarterly, 54, 21-36.
This article offers a look at the early history of American political polling, with special attention to media reactions to the polls. This is an interesting source for anyone interested in the ethical issues surrounding polling and survey research.
Sniderman, P. (1986). Reflections on American racism. Journal of Social Issues, 42, 173-187.
Rebuttal of critique of racism research. Addresses issues of bias and motive attribution.
Stanfield, J. H. II, & Dennis, R. M., eds (1993). Race and Ethnicity in Research Methods. Newbury Park, CA: Sage.
The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted.
Stapel, J. (1993). Public opinion polling: Some perspectives in response to 'critical perspectives.' International Journal of Public Opinion Research, 5, 193-194.
Discussion of the moral power of polling results.
Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of their validity. San Diego: Academic Press.
Reviews and analyzes data from studies that have, through the use of external criteria, assessed the validity of individuals' responses to questions concerning personal characteristics and behavior in a wide variety of areas.
Williams, R. M., Jr. (1989). The American Soldier: An Assessment, Several Wars Later. Public Opinion Quarterly, 53, 155-174.
One of the classic studies in the history of survey research is reviewed by one of its authors.
Secondary Analysis:
Jolliffe, F.R. (1986). Survey Design and Analysis. Chichester: Ellis Horwood Limited.
Information about survey design as well as secondary analysis of surveys.
Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data. Beverly Hills, CA: Sage.
Discussion of how to use previously collected survey data to answer a new research question.
Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Analysis of available data. In Applied Social Research: Tool for the Human Services (2nd ed., pp. 202-230). Fort Worth, TX: Holt.
Gives some existing sources for statistical data and discusses ways to use them.
Rubin, A. (1988). Secondary analyses. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. (3rd ed., pp. 323-341). Itasca, IL: Peacock.
Chapter discusses inductive and deductive processes in relation to research designs using secondary data. It also discusses methodological issues and presents a case example.
Dale, A., Arber, S., & Procter, M. (1988). Doing Secondary Analysis. London: Unwin Hyman.
A whole book about how to do secondary analysis.
Electronic Surveys:
Carr, H. H. (1991). Is using computer-based questionnaires better than using paper? Journal of Systems Management, September, 19, 37.
Reference from Thach.
Dunnington, Richard A. (1993). New methods and technologies in the organizational survey process. American Behavioral Scientist, 36 (4), 512-30.
Asserts that three decades of technological advancements in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential.
Goree, C. & Marszalek, J. (1995). Electronic Surveys: Ethical Issues for Researchers. The College Student Affairs Journal, 15 (1), 75-79.
Explores how the use of electronic surveys challenges existing ethical standards of survey research and argues that researchers need to be aware of these new ethical issues.
Hsu, J. (1995). The Development of Electronic Surveys: A Computer Language-Based Method. The Electronic Library, 13 (3), 195-201.
Discusses the need for a markup language method to properly support the creation of survey questionnaires.
Kiesler, S. & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-13.
Reference from Thach.
Oppermann, M. (1995). E-Mail Surveys--Potentials and Pitfalls. Marketing Research, 7(3), 29-33.
A discussion of the advantages and disadvantages of using E-Mail surveys.
Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-69.
Reference from Thach.
Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology & Marketing, 5(2), 117-137.
Reference from Thach.
Thach, Liz. (1995). Using electronic mail to conduct survey research. Educational Technology, 35, 27-31.
A review of the literature on survey research via electronic mail, concentrating on the key issues in design, implementation, and response using this medium.
Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.
Reference from Thach.
Further Investigation:
Berg, David N., & Smith, Kenwyn K. (Eds.) (1988). The Self in Social Inquiry: Researching Methods. Newbury Park, CA: Sage Publications.
Addresses ethical issues concerning the role of the researcher in social science research.
Paul Barribeau, Bonnie Butler, Jeff Corney, Megan Doney, Jennifer Gault, Jane Gordon, Randy Fetzer, Allyson Klein, Cathy Ackerson Rogers, Irene F. Stein, Carroll Steiner, Heather Urschel, Theresa Waggoner, and Mike Palmquist. (1994-present). Survey Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides-old/.
Copyright © 1994-present Colorado State University and/or this site's authors, developers, and contributors. Some material displayed on this site is used with permission.