Using Surveys

This chapter provides suggestions for designing and using surveys for assessment purposes. Included are guidelines for constructing effective surveys as well as suggestions for how different types of surveys can meet the various assessment needs of a department.

Definitions (Survey, Response Rate, and Response Bias)

A survey is a list of predetermined questions used to gather specific information from a range of people. Information is usually gathered from one person at a time, but the format can vary (e.g., telephone, paper, or web). Surveys can be used as a sole source of data or in conjunction with institutional data or other information (Dillman, 2000; Suskie, 1992).

The response rate is the proportion of people invited to take the survey who actually completed it. A low response rate can undermine the reliability of the survey. Response bias occurs when those who responded to a survey differ systematically from those who did not. This can affect the validity of the survey.
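As a quick illustration of the definition above, the response rate is simply completed surveys divided by invitations, expressed as a percentage. The numbers below are hypothetical:

```python
def response_rate(num_respondents: int, num_invited: int) -> float:
    """Return the response rate as a percentage of those invited."""
    return 100.0 * num_respondents / num_invited

# e.g., 320 completed surveys out of 1,200 invitations
print(f"{response_rate(320, 1200):.1f}%")  # 26.7%
```

Note that a respectable-looking count of responses can still be a low rate if the invited group was large, which is why the rate, not the raw count, is the figure to report.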

Appropriate Use of Surveys

Surveys are used to:

  • explore attitudes, opinions, values, experiences, expectations, and needs
  • gather information from and about large populations
  • make comparisons among subgroups of the population
  • compare results from year to year
  • gather data for statistical projections
  • gather statistically representative data

Surveys should not be used:

  • for audiences that are uncomfortable with numbers and statistics
  • when the number of participants is small
  • without a clear understanding of the issues
  • when investigating issues of a sensitive or intrusive nature

Advantages and Disadvantages of Surveys

Advantages

  • Surveys can gather information from a large number of people.
  • The responses to a well-designed survey with a high response rate can be generalized to a larger population.
  • Survey data usually allow for statistical analysis that examines relationships among variables or groups of variables.

Disadvantages

  • Surveys can be expensive, especially paper surveys that require printing, postage, and processing.
  • Important issues can be overlooked on surveys when the questions and responses are predetermined.
  • The quality of survey data is strongly dependent on the survey design.
  • Response rates and response bias are difficult to control.

Types of Surveys and Survey Questions

Types of Surveys

The paper survey is being used less frequently due to the prevalence of web-based surveys. Respondents usually mark their responses directly onto a printed questionnaire or scannable form. The assessment planning team must consider survey printing costs, how the survey will be distributed and collected, and how the data will be processed and analyzed.

The web or online survey is commonly used. Planning a web survey requires computer expertise to ensure that the form and data collection work properly. An accurate list of email addresses is imperative, since participants are usually invited to complete the survey via email.

Types of Survey Questions

Open-ended questions contain a blank area where participants write their responses. There are no preset categories or limits on choices, although the length of the answer may be controlled. Closed questions give a set of response choices, often on a Likert-type scale, such as 1-Strongly Agree, 2-Agree, 3-Undecided, 4-Disagree, and 5-Strongly Disagree. Responses can also be alternative choices, such as when participants are asked to indicate their class level (freshman, sophomore, junior, senior, or graduate student).

General guidelines for writing survey questions are included below. However, assessment teams would be wise to consider securing help from experts, because writing good survey questions can be challenging.

Planning Surveys

The assessment team must decide if the survey will be anonymous. Anonymous surveys protect respondent privacy, thus encouraging more candid responses and higher response rates. However, anonymous surveys do not allow survey responses to be matched to institutional data, making it necessary to request information such as sex, major, and class level on the survey itself. Anonymous surveys also do not allow respondents to be tracked for second mailings or longitudinal projects.

Who will be asked to participate in the survey? Depending on the size of the group and the purpose of the assessment, the team must choose a sample, based on the goals and objectives being assessed. The survey would target graduating seniors, for example, if an objective includes determining opinions of graduating seniors.

A final consideration during planning stages is whether to use incentives. Incentives can be small items such as bookmarks, coupons, or extra credit given to all participants. They can be larger prizes (cash, free books, or gift certificates) given to a randomly selected few through a drawing.

Constructing Survey Items

  • Surveys should include wording that is simple, clear, unambiguous, direct, concrete, and uniformly understood.
  • Survey items should:
    • be stated in a neutral manner.
    • generate a variety of responses.
    • be simple sentences. (Compound sentences and multiple phrases can be ambiguous.)
    • consist of only one question. (Beware of double-barreled questions.)
  • Survey items should not:
    • include universals (e.g., all, always, none, and never), limiters (e.g., only, just, merely), double negatives, abbreviations, or unconventional phrases.
    • be too intrusive or personal.

Analyzing Survey Results

Closed questions are usually analyzed with statistical procedures. Simple frequencies of responses are adequate for many assessment projects. (Frequencies show the number or percentage of respondents who indicated each response choice.) Measures of central tendency and distribution are useful for variables that have a continuous scale, such as grade point average. (Examples are maximum, mean, median, minimum, mode, standard deviation, variance, and skewness measures.)
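The frequencies and central-tendency measures described above need nothing more than the Python standard library. The Likert responses below are hypothetical:

```python
from collections import Counter
from statistics import mean, median, mode, stdev

# Hypothetical Likert responses (1 = Strongly Agree ... 5 = Strongly Disagree)
responses = [1, 2, 2, 3, 2, 1, 4, 2, 5, 3, 2, 1]

# Simple frequencies: count and percentage for each response choice
freq = Counter(responses)
n = len(responses)
for choice in sorted(freq):
    print(f"{choice}: {freq[choice]} ({100 * freq[choice] / n:.0f}%)")

# Measures of central tendency and distribution
print("mean:", round(mean(responses), 2))  # 2.33
print("median:", median(responses))        # 2.0
print("mode:", mode(responses))            # 2
print("std dev:", round(stdev(responses), 2))
```

For most assessment reports, the frequency table alone answers the question; the summary statistics matter more when comparing subgroups or tracking results from year to year.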

Open-ended questions can be categorized, grouped, and summarized. Categories should be developed from the actual responses; once coded, they can be treated as variables, much like closed-ended items. Groups of responses can be made on any variable, such as major or class level. Summarizing comments is useful when the number of comments is large or when a particular audience is not likely to read the entire set of comments.

Frequently-Asked Questions

What affects the survey response rate?

Several factors affect the response rate:

  • Length and complexity. The longer or more complicated a survey, the less likely participants are to complete it.
  • Appearance. The survey should be easy to complete, with clear and concise instructions.
  • A cover letter. Completion can be encouraged with a cover letter that explains the purpose of the survey and assures confidentiality of respondent answers.
  • Timing. A survey administered during midterm exams will probably not have as high a response rate as one administered at a less hectic time in the semester.
  • Type and collection method. Surveys that are distributed and collected in classes tend to have higher response rates than those sent through the mail or via the web.
  • Situations outside the control of the assessment team, such as participant interest in the topic, other surveys being administered at the same time, and current events.

What kind of follow-up can or should be done?

Second mailings, reminder e-mails, and reminder telephone calls will generate additional survey responses and will also increase the cost and resources needed to conduct a survey.

References

Dillman, D. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley.

Suskie, L. A. (1992). Questionnaire survey research: What works. Tallahassee, FL: The Association for Institutional Research.