SPECIAL COMMUNICATION

Basics of Research (Part 14) Survey Research Methodology: Designing the Survey Instrument

Vicken Y. Totten, MD,1 Edward A. Panacek, MD,2 Daniel Price, MD3

1. Catholic Medical Center, New York
2. University of California-Davis Medical Center, Sacramento, Calif.
3. Oregon Health Sciences University, Portland, Ore.

Address for correspondence and reprints: Edward A. Panacek, MD, UC Davis Medical Center, 2315 Stockton Blvd., Ste. 2100, Sacramento, CA 95817

Key words: interviews, questionnaires, research scales, surveys, survey research

Copyright © 1999 by the Air Medical Journal Associates. 1067-991X/99/$5.00 + 0. Reprint no. 74/1/93553

Introduction

Using surveys is a remarkably common approach to performing research, particularly for novice researchers. Unfortunately, much survey-based research is not well done, and as a result, the field often suffers from a tarnished reputation. However, some research questions can be approached only by using surveys, and a large number of other research questions are best answered with this method. In addition, performing appropriate survey research involves a "science." The principles are relatively straightforward and fairly well established. This article will cover those principles and attempt to provide the reader with the tools necessary to perform effective and scientifically valid survey studies.

Surveys can be an excellent research tool because they are relatively inexpensive and allow quick data acquisition. One advantage is they generally sample people under real-world conditions rather than in the controlled laboratory or study protocol environment. A carefully drawn sample from a well-defined population can provide the data necessary to answer important research questions. Although surveys usually are considered to have a "nonexperimental" design, they have more in common with the more scientific "true experimental" or "quasiexperimental" type studies if done properly.5 However, survey research is not as simple as it may seem. Quality data require a well-designed study using a carefully crafted questionnaire.

The term survey describes a type of study that consists of asking people to respond to questions. Possible formats include personal interviews, telephone interviews, mailed questionnaires, etc. Paper questionnaires are the most common and generally are familiar to both potential subjects and scientific readers. These forms are like answering a written test. Most of the principles of performing survey research are equally applicable to all survey formats, but some are unique to a specific format.

Like all other forms of research, survey studies should start with an appropriate and important research question.6 Generally, that research question attempts to establish or predict a relationship between one or more independent variables (ie, exposures, characteristics, traits, or interventions) and a dependent variable (an outcome, result, or effect) of interest.5 An example would be to ask questions regarding different intubation training techniques (independent variable) and subsequent comfort level with performing intubations in the prehospital environment (dependent variable). However, in all designs, the researchers must consider the role of extraneous variables, also known as confounding variables. These outside factors can have an effect on the study results or findings and potentially invalidate the study conclusions. For the previous example, potential extraneous variables could include number of years of flight nurse experience or prior experience with intubation training before becoming a flight nurse. Either of

Air Medical Journal 18:1, January-March 1999

these factors could influence the answers to the study survey. In "true experimental" research designs, extraneous variables are controlled through the process of randomization, such as in prospective, randomized clinical trials. However, surveys are unable to perform randomization and therefore must control for these variables through other analytic methods, which will be discussed in greater detail in the sections dealing with survey design and result analysis. These topics, as well as that of validating a survey instrument, can be complex. This article does not afford enough space to go into them in appropriate detail, and readers interested in sophisticated research study design are referred to reference texts listed at the end.

Like other types of research, the process of performing a survey study requires the steps of identifying a testable hypothesis, determining the tools to be used, selecting a sample from a population, collecting data, and deciding how to analyze and interpret the results.5,8 Design issues unique to survey research include creating the survey instrument and administering it. Each design decision can affect the time and cost to perform the study. The first section discusses the methods of organizing and administering the survey.

Organizing a Survey Study

Organizing a research survey involves several separate but important steps. Proper attention to the early steps of study design will save much time and trouble later. As already mentioned, the process starts with clearly identifying the question researchers hope to answer. This question should be very clear to the investigators and be summarized in one sentence. Although a couple of secondary questions are acceptable, they should be minimized. The question(s) then should be examined to determine whether a survey is indeed the best way to get answers. Generally, surveys are ideal for collecting data about people's attitudes, behaviors, knowledge, and personal history.
Obtaining information in each of these areas through other forms of research is difficult. If a survey approach is deemed the best way to answer the question, the next step is to identify


information that is crucial to answering the question. The survey then is organized around the best efforts to provide those data and thereby answer the research question.

Unless the researchers already are well experienced with conducting surveys, it is best to discuss the research design with a statistician before implementing the study. The most important data should be identified in this discussion, as well as anticipated comparisons. How much of a difference in the outcome would the investigators like to detect? How many responses will be necessary to be able to detect that difference? How many questionnaires will have to be administered to get that number of responses? These questions should be addressed up front. Some of this information is needed to perform a "sample size calculation" with the assistance of a statistician. If you meet with a statistician, make sure you keep a copy of the actual calculations and take notes of the discussion for future reference. In your final report or manuscript, you likely will need to include some of these data in the discussion of study methods.

When consulting with a statistician, it also may be wise to choose a database format and then enter mock pilot data before administering the questionnaire. Issues to consider in creating the database include ease of entry, completeness, and the statistical package you plan to use.8 Not all spreadsheet or database software programs are compatible with all statistics programs. You don't want to encounter this problem at the end of your study.

Costs

Researchers tend to underestimate the costs of surveys. Although surveys generally are less expensive than other research designs, the costs can be substantial nonetheless. Mailed surveys cost up to $35 per case (1998 dollars); face-to-face interviews cost $50 to $75 per subject. These costs depend highly on who is doing the work and whether they are being paid.
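Both the budget and the statistician consultation hinge on the same numbers: how many responses are needed to detect the difference of interest, and how many questionnaires must go out to collect them. The arithmetic can be sketched before the formal consultation (not instead of it). In this minimal sketch, the 50% versus 70% comfort rates and the 60% expected response rate are illustrative assumptions, not figures from this article:

```python
import math

def responses_needed_per_group(p1, p2):
    """Approximate responses per comparison group to detect a difference
    between proportions p1 and p2 (two-sided alpha = 0.05, power = 0.80,
    normal approximation)."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

def questionnaires_to_send(responses_needed, expected_response_rate):
    """Inflate the required number of responses by the anticipated response rate."""
    return math.ceil(responses_needed / expected_response_rate)

# Hypothetical: detect 50% vs 70% comfort with prehospital intubation
per_group = responses_needed_per_group(0.50, 0.70)
total = questionnaires_to_send(2 * per_group, 0.60)  # assume 60% respond
print(per_group, total)
```

A statistician will refine these figures (continuity corrections, the planned analysis, clustering), but even this rough version shows how quickly the number of questionnaires to mail grows once a realistic response rate is assumed.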
However, the actual costs come from several sources, such as postage, paper, additional mailings, and follow-up phone calls, in addition to

time costs for just folding and addressing surveys, filing the responses, entering data, etc. In general, mailed surveys have a low first-pass response rate, and repeat mailings will be necessary because you want to achieve a high response rate for statistical validity. Additionally, costs can be incurred for gathering information about nonrespondents. Budget enough time and money for all these items, including statistician consultation at project beginning and end.

Maximizing Response Rates

Response rates to surveys can vary tremendously, depending on the way in which they are administered. However, poor response rates are a particular problem for self-administered mail surveys. In general, the response rate to a first mailing should be assumed to be low. Steps will need to be taken to increase the response rate, adjust for a low response rate, or explain the low rates and defend the study results as still valid. Table 1 offers tips on dealing with low response rates.

Very large mail surveys often do not achieve a response rate greater than 50%.3 However, response rates in that range can be very problematic. In general, a desirable target response rate should be at least 75%. To publish surveys in major journals, a response rate of 85% to 90% often is desired. However, not just a specific percentage response rate is important. The important question is whether the nonresponse rate could substantially affect the survey results and therefore the study conclusions. A useful question to consider: if all nonrespondents had responded to the survey with answers that were the opposite of the main survey results, would that substantially change the conclusions? If the answer is yes, the nonresponse rate is problematic regardless of the number.

Several reasons exist why nonresponders may have beliefs or answers that are substantially different from the survey responders. Responders and nonresponders commonly differ in many important characteristics. Some characteristics of nonresponders that have been established, regardless of the survey subject, are listed in Table 2. If lower responses from any of these subgroups could severely affect the survey results, efforts to absolutely minimize low response rates should be implemented in the study design. Nonresponders may include people with exceptionally strong beliefs, opinions, or attitudes different from responders. Alternatively, they may not care at all about the survey subject. Because it is impossible to know the direction of those beliefs, opinions, and attitudes, nonresponses can bias the survey in unpredictable directions. Even small numbers of nonrespondents may cause serious bias.

Table 1. Dealing with Low Survey Response Rates
- Maximize response rate through proper research survey design that will increase likelihood of respondent completion and survey return.
- For mail surveys, plan multiple repeat mailings to nonrespondents.
- Keep track of general demographic information on nonrespondents, then compare that demographic information with the study respondents, looking for significant differences.
- Call a random subset of nonresponders and ask them the same questions as on the written survey. If answers are relatively the same as those obtained from the study respondents, likely conclude that nonresponders do not bias the study results.
- Compare the responses obtained from the last 25% of respondents with those of the larger respondent group. Late responders generally have similar answers to nonrespondents.
- Conduct an intensive telephone survey of all nonresponders. This step may be necessary if one of the above approaches indicates a study bias may be introduced because nonresponders' answers are different from responders'.

Table 2. Characteristics of Nonresponders
1. Least interested in subject
2. Lowest incomes
3. Least educated
4. Least motivated to change
5. Least naive members of groups

Proper survey study design attempts to minimize nonresponses. Several steps can be taken. One approach is to select the smallest sample necessary to answer the study question, then vigorously pursue follow-up efforts until nearly all members of the sample have responded. This approach is ideal but not possible with all study populations. Some fairly simple factors can affect response rates. The timing of survey initiation can help or hinder response rates. Do not initiate a survey during the holidays (ie, from Thanksgiving to New Year's or even the end of January) unless the survey is specifically about holiday-related issues. Summer vacation may be problematic for some surveys, too.

The content of the questionnaire also can influence response rates. People may hesitate to answer controversial or sensitive topics. They may provide a socially correct answer rather than their true feelings depending on how the question is worded. "Social desirability" is the degree to which respondents feel compelled to answer according to what they perceive as the culturally acceptable answer. Social desirability is more an issue in face-to-face interviews than in mail surveys for reasons discussed later.

Minimizing respondent effort also improves response rates. How long does it take to complete the survey? Do the answers require additional tasks, such as looking at medications or reviewing flight records? Complex questions are best addressed through face-to-face interviews that motivate the respondent to please the interviewer by replying.

"Preparing" potential subjects may increase response rates. For mailed surveys, a good cover letter is crucial. A brief preliminary telephone call to make an appointment for a telephone survey increases response rates. A cover letter should not exceed one page, and a preliminary call should be very brief, but each must cover certain elements. They should convince potential respondents of the importance of the survey and that their individual responses are important. Include information on who else is involved in the survey and why. State that participation is voluntary; offer assurance of confidentiality or anonymity. Provide a contact location that respondents can call or write if they have questions.

Mailed surveys should include a self-addressed, stamped envelope (SASE). An actual, hand-applied stamp has been shown to increase response rates. Make sure the SASE fits into the main envelope and the survey itself fits into each!

In a telephone or face-to-face interview, establishing a trust relationship is crucial to obtain good response rates. Trust can be improved if investigators are identified with a familiar or respected organization, such as the American Heart Association or the Red Cross. Using quality white paper with an impressive letterhead offers some of the same benefits for mailed surveys.

The role of incentives is controversial. Although it is unclear whether they reliably improve response rates, they do appear to have some effect. No clear linear relationship exists between the value of the incentive and the response rate. However, most institutions consider it ethical and appropriate to offer a "small token of appreciation." Even a simple parking pass or token can be useful in persuading people to come to an in-person interview. Larger incentives actually may have a negative effect because some respondents may suspect manipulation. Even paying for respondents' time may insult them, particularly if you set the wrong hourly rate. The primary incentive for most respondents is the benefit of being in the study itself. Respondents want to believe their responses are valuable and will be used. For such people, the investigator may find it helpful to promise a copy of the results.

If incentives are used, many researchers believe bias is minimized and response rates improved if the incentive is offered to everyone up front rather than providing it later as a reward for returning the survey. Such techniques as attaching the incentive directly to the response (eg, having the questions on the back of a check that cannot be cashed unless filled out) can greatly improve response rates. However, be careful to consider whether this incentive will bias responses.

Plan follow-up efforts. For mail surveys, reminder letters can be sent to everyone or only to nonresponders if a tracking system is used. If a letter is sent to everyone, apologize to and thank those who already have responded. Alternatively, with the first questionnaire, you might want to include a separate postcard that reads "I am returning your questionnaire" or "I do not want to fill out your questionnaire, don't send me reminders." Although this method may make the process more efficient, it removes your ability to persuade nonresponders. In general, plan to send at least three reminders, usually at 2- to 4-week intervals. Each mailing may be a reminder postcard with a box for "please send me a new questionnaire" or include a whole new questionnaire. Repeat calls to schedule an interview serve the same purpose for telephone surveys.

One approach to maximizing response rates in mail surveys is to send them to an identified individual who then is responsible for distribution within a given site. This approach is not practical for all types of mail surveys; however, when attempting to survey multiple individuals per site or program, it can be more effective than mailing the surveys directly to each person. Examples include mailing the entire packet of forms to a residency training program director, the director of a helicopter transport service, or the supervisor for an EMS system. These individuals become responsible for distributing and recovering the surveys from the individuals at their given site. This method works particularly well if the investigator knows the contact individuals at each site or if they have an incentive to comply with the survey. This approach can be particularly effective if the survey is "endorsed" by the respective national organization or professional society.

Regardless of all the techniques used to maximize response, every survey can have problems with inadequate response rates. If the response rate still is deemed inadequate and cannot be improved by other practical methods, steps should be taken to assess its impact on the study results. The overall goal is to prove that the study results are valid despite an inadequate response rate. This validation can be accomplished by showing that the nonresponders are not substantially different from the responders and therefore did not bias the study results. Several approaches are possible at this point, but all of them require gathering data on the nonresponders, including their geographic or work location and any demographic information that may be available or practical.

Although nonresponse bias cannot be proven precisely, its potential effects often can be estimated. For example, if basic demographic information (eg, gender, hospital type, position) is available for the entire survey population, the demographic characteristics of responders and nonrespondents can be compared. If the two groups are similar, at least some potential sources of bias are eliminated. At a minimum, the investigator must document the percentage of nonrespondents and note the number that are a result of mechanical factors, such as outdated addresses or job changes.

Other steps also can be taken to address nonresponder bias. The characteristics of late responders are usually more similar to nonresponders than early responders. Comparing the responses of early and late responders may provide information about the direction and magnitude of the bias caused by nonresponders. If these groups are no different, nonresponders may not be a problem.

A more direct method of dealing with nonresponse bias is intensively following up a subset or all nonresponders by phone. The same questions written on the mail survey simply are asked by phone. If the answers in a random subset of nonresponders are not different from the responders, "nonresponder bias" may not exist. If the phone responses in the subset are substantially different, as many of the nonresponders as possible should be contacted and given the survey by phone. Otherwise nonresponder bias could invalidate the survey results.

Unfortunately, the most popular approach to the problem of low response rates seems to be surveying more subjects. However, this solution probably will result only in more respondents with


characteristics like the early responders and not truly represent the overall study population. This method is not a scientific approach to the problem.

Choosing the Administration Method

Surveys can be administered orally or in writing, in person or by telephone interview, by mail or by posting (paper or electronically). Each method has specific advantages and disadvantages. Several issues affect decisions as to which method is best for a given study. Question content and complexity may favor one method over another. Cost is always a consideration, and mail surveys are the least expensive. In general, if the same data can be obtained using multiple methods with the same expected accuracy, use the least costly method. However, the strengths and weaknesses of each method should be understood before making a final decision.

Mail interviews. Mailed questionnaires are the most popular but have multiple drawbacks. On the positive side, they can be answered at the respondent's convenience, allowing a more thoughtful response. Mail also offers the highest privacy level by insulating the respondent from the interviewer's expectations. Paper surveys allow visual input and visual scale answers (more on this later) but do not allow the respondent to directly question the interviewer. In a mailed survey, the burden of understanding is on the respondent. The researcher has little or no control over when or even if the respondent will answer. Response rates will depend on the interest level and time availability of the respondents, among other factors. Mail surveys have the lowest response rates, on average, of all the methods.

Self-administered questionnaires. Self-administered paper or computerized questionnaires, conducted in front of the researcher, have many advantages. They are particularly good during the "pilot phase" of survey development and can be done either in a group situation or individually. This method maximizes the balance between question clarity and answer confidentiality. The researcher also can monitor completion rates. The group administration format also ensures highly consistent instruction. This method is an

excellent way to pilot a new instrument (ie, the questionnaire) when evaluating its effectiveness rather than obtaining data is the goal. Unfortunately, these selected respondents may not be representative of the target population. Another disadvantage is the higher cost: having the interviewer constantly present while respondents answer the questionnaire is expensive (at least in terms of time).

As mentioned in the discussion on maximizing response rates, another approach to written questionnaires falls somewhere between mail interviews and self-administered questionnaires: identifying a responsible contact person who handles questionnaire dissemination and collection at a given site. This strategy almost always ensures a better response rate than using a direct mail questionnaire approach. The contact person at that site can directly administer the questionnaire in a group situation or distribute it and collect it later. At least this method does not require the investigator's own personal labor at each individual site. This approach yields many of the advantages of a self-administered questionnaire to a group with the cost savings of mail interviews. Although this option is not always practical or available, it should be considered when the goal is to get clusters of responses from multiple individuals at specific programs or sites.

Electronic interviews. The new wrinkle on performing survey questionnaires is to use the Internet and do them electronically. This method obviously has a number of attractive aspects, including the lowest cost, quickest dissemination, and perhaps quickest results. For subjects who do not require confidentiality or for whom even low response rates might be acceptable, electronic interviews can be ideal. This approach is particularly good when distributed over a focused list server system subscribed to by a high percentage of the relevant study population.
An example is the Council of Residency Directors (CORD) in Emergency Medicine list server, to which more than 90% of all EM residency training program directors subscribe. That group also has a system whereby surveys are "endorsed" by the CORD board of directors, and response is strongly encouraged.

However, most surveys do not yet have a convenient Internet-connected group by which they can be circulated in a focused manner. Nonetheless, the Internet has tremendous potential for certain types of survey-based research in the future. One chief advantage is that any lack of clarity can be addressed directly by the investigator distributing the survey. In addition, filters probably can be implemented that will ensure true confidentiality. One current approach to provide some degree of confidentiality for electronic interviews is to have all responses go back to a third party, not directly to the investigator. That third party then removes all identifying information and forwards the responses to the investigator while keeping a tally of both responders and nonresponders to allow for appropriate follow-up reminders. Electronic dissemination of surveys is expected to grow in frequency and improve in sophistication in the very near future. Currently, however, electronic survey administration is limited to selected questions and populations.

Personal interviews. Personal, open-ended interviews or focus groups are especially useful during survey development. Less popular opinions are more likely to be expressed in a small rather than a large group. In a small group, everyone can be encouraged to respond. Smaller samples are needed than for mail surveys because the response rate is obviously much higher. If the topic is particularly sensitive, one-on-one interviews offer the greatest confidentiality and may provide the most candid answers. In large groups the most honest answers generally occur if the group is homogeneous and each person thinks his or her opinions are relatively similar to the group's. As noted previously, incentives may help. Paying people is an option; feeding them is often just as effective.

Some disadvantages exist to group interviews. One problem is they are poorly generalizable because of sampling problems.
For most research questions, effectively sampling the population of interest using just group interviews is very hard. Also, personal interviews (either individual or group) are the most expensive method of conducting a survey.
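Whichever administration method is chosen, the worst-case test suggested under Maximizing Response Rates — would the conclusion change if every nonrespondent had answered the opposite way? — can be bounded with simple arithmetic. A sketch with hypothetical counts:

```python
def worst_case_bounds(yes_among_respondents, n_respondents, n_nonrespondents):
    """Bound the true 'yes' proportion over the whole sample.
    Lower bound: every nonrespondent would have said no;
    upper bound: every nonrespondent would have said yes."""
    total = n_respondents + n_nonrespondents
    lower = yes_among_respondents / total
    upper = (yes_among_respondents + n_nonrespondents) / total
    return lower, upper

# Hypothetical: 400 surveys sent, 300 returned (75%), 240 answered "yes."
lower, upper = worst_case_bounds(240, 300, 100)
print(lower, upper)  # observed 80% among respondents; truth lies between the bounds
```

Here the observed 80% could be anywhere from 60% to 85% once nonrespondents are accounted for: a conclusion that only needs a majority survives the worst case, while a claim of "more than three quarters" could be overturned by nonresponse alone.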

Telephone interviews. These interviews are a compromise between written questionnaires and face-to-face interviews and fall somewhere in between with respect to cost, necessary training, and allowable complexity of questions. Telephone surveys obviously are biased toward people with phones and those available when the interviewer calls. An advantage over personal interviews is that phone surveys do not require geographic proximity. They permit longer, more complex, or more open-ended questions than mail surveys. The relationship established with the interviewer increases the likelihood that tedious or complex questions, which may be ignored on paper questionnaires, are answered.

Designing the Survey Instrument

Now that you have your study question and have decided on an administration method, you need to develop the survey instrument (ie, the questionnaire itself). Writing your own survey items might seem simple and quick. It is not. Poorly written questions have compromised survey results and wasted the hard work of many researchers. Politicians and advertisers have deliberately used poorly written surveys to bias and manipulate the results of polls in nefarious ways.

The first step in developing the questionnaire is to search the literature and look for previously validated instruments you can use or adapt. Such questionnaires may be found by searching not only the medical literature but the psychology and business literature as well. Talk to coinvestigators and colleagues who are active in the field. They may know of relevant surveys already in existence. Review articles on similar topics. Even if a prior investigation used a "nonvalidated" instrument, it may be appropriate for your purposes and would allow you to directly compare results. Many validated instruments already exist, but often none is quite appropriate. Sometimes you have no choice but to develop an entirely new survey questionnaire.
Deciding Content

Questionnaire content is determined by the primary research question(s). As already mentioned, after finalizing the


Income is a particularly sensitive question. Make sure the question is relevant before asking it. Most people are more comfortable with checking a category than filling in a blank. If check boxes are used, be sure to include the full range of income for your proposed sample! Also make sure your categories are mutually exclusive.

Order of Questions

The order of questions can be important. People who normally would object to answering certain questions can be persuaded to do so if they already have "warmed up" and begun to answer other, less sensitive questions. Experts are divided as to whether sensitive or potentially objectionable items should be placed in the middle or at the end of a questionnaire. If these items are in the middle, they are surrounded by less threatening questions. If they are at the end, a context already has been established that shows why such information might be important. In addition, by the end respondents are into a pattern of answering and will be less likely to omit items or reject the survey entirely. In either case, do not place potentially sensitive or objectionable items at the beginning of the survey.

Where to place basic demographic questions (age, gender, education, etc.) is also controversial. Some investigators believe demographic questions should be placed at the beginning of questionnaires because they are easy to answer and acclimatize the respondents to their task. Others say at the end because they are boring. The predominant opinion appears to hold that demographic questions should be placed at the end because placing them at the beginning may give respondents the impression that the investigators are more interested in their personal characteristics than the purpose of the survey. Demographic questions sometimes request potentially sensitive information that may cause some prospective respondents to reject the entire questionnaire. On the other hand, if respondents complete the questionnaire, they may see the value of demographic information and be less reluctant to provide it.
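The earlier requirement that check-box categories be mutually exclusive and cover the full range can be verified mechanically before the form is printed. A small sketch; the dollar brackets are hypothetical:

```python
def check_income_boxes(brackets):
    """brackets: ordered (low, high) dollar pairs with inclusive ends, as
    printed on the form. Adjacent boxes must not share a value (overlap)
    and must not skip any value (gap)."""
    problems = []
    for (_, hi), (lo, _) in zip(brackets, brackets[1:]):
        if lo <= hi:
            problems.append(f"overlap: ${hi:,} falls in two boxes")
        elif lo > hi + 1:
            problems.append(f"gap: ${hi + 1:,}-${lo - 1:,} has no box")
    return problems

# The classic mistake: "$0-$25,000" and "$25,000-$50,000" both contain $25,000.
print(check_income_boxes([(0, 25_000), (25_000, 50_000)]))
print(check_income_boxes([(0, 24_999), (25_000, 49_999)]))  # no problems
```

The same check applies to any numeric category question (age, years of experience, flight hours), and an open-ended top bracket ("$75,000 or more") simply becomes the last pair's upper bound.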


Language Level

Most people do not wish to appear stupid and often would rather guess than ask. Be aware of the general reading level of your respondents and your questions. The average American reads at the fifth-grade level. Some word processing programs allow you to check your readability. (In WordPerfect 6.1, choose "Tools," click "Grammatik," then choose "View" and click on "Readability.") Keep it short and simple, be specific, and define your terms. "Birth control" might mean pills only to one woman and condoms to another.

Time periods should be specific because people tend to be event-anchored, not date-anchored. "Recently" or "lately" are interpreted differently from person to person, so it is better to ask about "the past 12 months" or "the 12 months preceding your diagnosis" rather than "last year" (which could mean the previous calendar year or up to 12 months ago). Asking open-ended questions encourages event-anchored responses. For example, the question, "When did you move to Michigan?" can be event-anchored for some people, who may answer "when I was pregnant" or "after I joined the Army." Supplying mutually exclusive categories and closed questions usually is preferable.

Multiple negatives and lengthy or double-barreled questions are difficult to understand. For example, "I am not in favor of the outpatient clinic not having evening hours" is better written as "I am in favor of evening outpatient clinic hours." Double-barreled questions have more than one embedded question. Of the clauses, one may apply, the other might not. For example, "Would you like to be rich and famous?" could be answered, "I already am famous, but I sure would like to be rich" or "I would love to be rich, but fame would embarrass me." Avoid the use of "and" or "or" in survey questions whenever possible.

Limit answers to categories when possible. Avoid abbreviations. Ambiguous pronouns will confuse some respondents. Don't use "they" unless it is very clear who "they" are.
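The readability check mentioned above can also be approximated in a few lines. This is a crude sketch — the syllable counter is a rough heuristic and the sample question is invented — but it illustrates the Flesch-Kincaid grade-level formula that word processors report:

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, discounting a silent final 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if n > 1 and word.endswith("e") and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

question = "Have you been a patient in a hospital in the past twelve months?"
print(round(fk_grade(question), 1))
```

Dictionary-based tools such as Grammatik count syllables more accurately, so their numbers will differ slightly; the point is only to flag questions that drift far above the fifth-grade target.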
Do not make gender assumptions. Value-laden or loaded questions result in higher nonresponse rates. Examples of loaded questions are, "Do lawyers make too much money?" and "Do you call the EMS ambulance for trivial problems?"

Respondents may become annoyed if they are required to read multiple items that do not apply to them. One method to avoid this problem is to use "skip patterns," by which respondents are directed to later items based on their responses to earlier ones. Although skip patterns may reduce annoyance and frustration, they also can be confusing. Be particularly careful with the instructions if respondents are required to skip to a different page. If you must use skip patterns, guide arrows and boxes minimize confusion. Some respondents may shorten their task by purposely providing responses that permit skipping, which means they are providing invalid information. In general, skip patterns should be used only when absolutely necessary. Large numbers of skip patterns in a questionnaire raise questions about survey structure or content (ie, much of the information being requested is irrelevant).

Translating a Survey

Surveys established and validated for one population may not be applicable to another. Language difference is but one reason a survey may not apply to a different group. If you wish to survey a population that uses more than one language, you need your questionnaire translated. The first rule is to have it professionally translated, not just by "someone who speaks the language." The translator should pay attention to the dialect and subculture of your target population and needs to know whether you are writing for a professional group, average people, or a group that may have lower literacy rates (eg, immigrant day laborers). After the survey has been translated into the target language, you may find it useful to have a second person, perhaps a fluently bilingual member of the target population, translate it back to English to compare it with the original. Discuss discrepancies with both of your translators. Culture and country matter, too.
For example, the social implications of handgun ownership are very different in Great Britain than in the United States, although English is the primary language of each. A Spanish version of a questionnaire suitable for Latinos living in California might not be appropriate for Latinos in Europe or South America. The questionnaire should be revalidated for each language and culture being studied.

After defining the research question(s), the next step is to identify the data items necessary to fully answer the question. The survey questionnaire should focus on accurately and efficiently obtaining that data. If the investigator is unsure as to which data are critically important for answering a specific research question, several resources can be used. Colleagues, both locally and nationally, are a good start. Sometimes local universities can be a good source of expertise, particularly if you are delving into fields that are not primarily clinical medicine. Some experts may not be knowledgeable in the specific area under investigation but can provide insight into general survey development. Contacting national experts within a given field and asking for their input is relatively easy with the Internet. Most researchers are willing to give their time to discuss the topic by e-mail or telephone.

Another approach to developing a list of the most important items to answer a research question is to conduct a focus group, which involves convening a relatively small group of individuals and leading them through a series of questions or discussion items that can be used to develop actual survey questions. A focus group should consist of no more than 15 people, and smaller groups encourage greater individual participation. The participants should be very similar to the intended target group. The facilitator should be a professional who is comfortable with group dynamics and able to probe and guide the group. This person generally should not be the principal investigator.

Creating a new questionnaire involves three stages: item development, pilot study, and field-testing (or validation). Item development begins with writing the appropriate questions (questions often are called "items" in survey research). Write at least twice as many questions to measure each parameter as you expect to use. Items may be selected from existing questionnaires, studies reported in the literature, or other sources, such as your focus groups or brainstorming.

Writing Questions

Questions can take many forms, each of which has advantages and disadvantages. For example, questions can be open-ended or closed. They can have discrete or scalar or open responses. Open-ended questions elicit a "free text" or essay-type response; closed questions must be answered by selected options. The advantages of each question type are summarized in Table 3. Open-ended questions encourage respondents to offer more information (which is why questions like "Why are you here today?" are encouraged early in the medical interview), whereas closed questions simplify data input and analysis. Responses to open questions generally must be categorized by the researcher before analysis. Open questions are most useful during the initial study question design and development process. After the first few design iterations, the ability to use appropriate closed questions usually becomes apparent. Whenever possible, closed questions are preferred because they simplify result analysis.

Table 3. Question Types

Respondents' characteristics
  Use open-ended questions if respondents are capable of or willing to provide answers in their own words.
  Use closed questions if you want respondents to answer using a specified set of response choices.

Analyzing the results
  Use open-ended questions if you have the skills to analyze respondents' comments even though answers may vary considerably, and you can handle responses that occur infrequently.
  Use closed questions if you prefer to count the number of choices.

Formatting Questions

Addressing "sensitive" topics is a recurrent problem in survey research. Properly formatted sensitive questions sometimes will determine whether the data you get are usable or not. You would rather have some information from a respondent than none, so in the instructions it can be helpful to write, "If you do not want to answer the question, please put a line through it. You do not have to tell us why you do not want to answer the question (although we are interested)...." This strategy can help investigators distinguish between a missed page and too-sensitive information when questions are returned unanswered. Usually the best idea is to include an "other" category and allow respondents to specify their answers.

Watch for hidden assumptions. A question that begins, "When you exercise, do you..." implies that the respondent actually does exercise. The proverbial example of problem questions is, "When did you stop beating your wife?" Avoid assumptions within questions whenever possible.

Marital status can be a surprisingly difficult question to formulate. Ask yourself what you really want to know. Do you want status now or status ever? Phrase accordingly. For example, "Have you ever been married, divorced, separated?" is different from "Are you now living with someone?" If you are not sure what you will need, it is better to get more rather than less information. Ethnicity and race also can be a sensitive area. What categories make most sense for your study? What is the logical sample? If a question is not relevant to your research, don't ask it.

Creating Response Categories

After you are comfortable with the general format of your questionnaire, decide how you want the responses formatted. Careful attention to response format will save hours of data entry. People tend not to read directions, so using the same format throughout is preferable. If you must change styles, highlight the change and make it very clear. Many types of response formats are available: fill-in-the-blank and other/specify options are open-ended and hard to encode for analysis purposes. In general, they should be used sparingly. However, open-ended questions can have two very good purposes. First, they can provide free-form explanations or stories that can be very useful in some settings. For example, when studying political issues, policies, or the effects of laws, an extremely effective method is to use direct quotes about problems that people have encountered. Similarly, when evaluating new pieces of equipment or other technology, the experiences of nurses, paramedics, pilots, physicians, or others familiar with that equipment often can be best understood through direct descriptions of problems or advantages. These types of evaluations do not lend themselves as well to closed questions. Open-ended questions also can be very useful during the initial pilot development phase in which the investigator is not sure which answers would be most likely or most relevant. After obtaining information through open-ended questions, the appropriate closed answer options can become much clearer and choices made for the final survey questionnaire. In general, open-ended questions should be used sparingly in the final questionnaire unless they clearly are serving a specific purpose.

"Circle the best answer" may be easy for respondents but difficult for a machine to read. A "mask," commonly used in schools to help grade multiple-choice tests that use circle answer systems or fill-in-the-box, is the answer page with holes over the correct answers. However, a mask works better with boxes or circles that have been blackened or marked rather than circled.

Be as specific as possible without losing information. For example, asking "Age: ___" is usually not as good as "Age in years at last birthday: ___ yrs. old." If you use the common "date of birth," specify what each blank means. The typical American answers in a different order (mo/da/yr) than a European (yr/mo/da).

Mutually exclusive response categories prevent confusion. For example, if you are asking about the timing of headaches, "in the last week" is also within the period "in the last month." Therefore, consider phrasing the second period as "in the last month but not in the last week."

Providing adequate numbers of response categories also is important. In general, more options are better than fewer. For example, information from the question, "How often are you depressed?" will be more useful with five potential responses (always, usually, sometimes, rarely, never) than three (always, sometimes, never).

Table 4. Types of Scales

1. Likert (summative)
   How comfortable is this stretcher? (circle best answer)
   1 Very bad   2   3 Neutral   4   5 Very comfortable

2. Forced Likert
   How comfortable is this stretcher? (circle best answer)
   1 Very uncomfortable   2   3   4   5   6 Very comfortable

3. Semantic Differential (place "x" in box)
   My illness is...
   Painful       [ ] [ ] [ ] [ ]  Painless
   Embarrassing  [ ] [ ] [ ] [ ]  Not embarrassing
   Serious       [ ] [ ] [ ] [ ]  Mild

4. Guttman (cumulative)
   Circle the letter of every statement with which you agree:
   A. Drinking can cause injury.
   B. Drinking is an important cause of injury.
   C. Drinking is an important cause of injury and death.
   D. Drinking is the most important cause of acute injury and death in the United States.

5. Visual Analog Scale (VAS)
   Please mark on this line where your pain falls.
   No pain at all |________________________________| The worst pain I ever experienced

6. Numerical Descriptor Scale (NDS)
   Place an X on a number to describe your level of pain.
   0   1   2   3   4   5   6   7   8   9   10
   No pain         Moderate pain         Worst possible pain

Using Scales

Scales can be very useful to quantify responses because they transform answers into numeric variables that are easily tabulated and analyzed statistically. They are particularly good for converting subjective answers into numeric data for analysis and comparison. In general, the numeric values assigned to each answer have an inherent meaning that reflects the underlying rank, order, or degree of that answer. A number of different scales exist. Common scale formats used in paper questionnaires include Likert (summative) and visual analog scales (VAS). Semantic differential scales, Guttman (cumulative) scales, and numeric descriptor scales also can be used, each of which has unique applications. Examples of each scale are listed in Table 4.

Likert scales. These tools collect discrete but ordinal data. Respondents generally are given a list of statements or questions and asked to select a response that most closely represents the degree, rank, or severity of their answer. For analysis purposes, each response is assigned a number of points. The investigator later can compute a score for the answer to that question and compare the scores between groups. Another possibility is computing an overall score from the result of answers to multiple questions simply by adding up the points for each and totaling them. This approach, however, assumes each individual question has a relatively high degree of "internal consistency," which means each item essentially is measuring the same or a similar characteristic. This assumption can be tested statistically using such measures as Cronbach's alpha. When the number of potential answers is odd, a neutral response is possible. Forced Likert scales have an even number of response points and force the respondent to come down on one side or the other of a choice (see example in Table 4). Likert scales make entering data easy, but they offer less precision than a VAS.

Semantic differential response. This tool is similar to a Likert scale but forces the respondent to classify something between two or more semantic opposites. Like the Likert scale, it provides ordinal responses to subjective answers. It also can look at overall patterns by combining questions during analysis. In the example from Table 4, the respondent may find his illness mild and very embarrassing but not painful. The response categories usually are discrete but can be continuous. The number of response categories can be odd or even.

VAS. This measure is simply a line on which the respondent makes a mark (see Table 4). A VAS is usually 10 cm long, and the response frequently is scored in millimeters by measuring with a simple ruler. However, for this method to work, all survey copies must be printed alike. Beware of photocopies or faxes that reproduce slightly larger or smaller and distort the size of the scale. One advantage of a VAS is that the responses generally are treated as "continuous" data, which have the potential to use parametric statistical tests if the other requirements (eg, normal distribution) are met.

Numeric descriptor scale (NDS). Although very similar to a VAS, this option forces respondents to choose a number rather than simply mark the line. The tool generally uses a 10-point scale and requires respondents to choose a number along the continuum (Table 4 features an example). In some ways, scales like this make it easier to lump numbers together when reporting the results, such as "at least 55% of respondents indicated they had moderate pain (markings of 4, 5, or 6), whereas 10% reported the worst possible pain (marking it 10)."

The differences between the VAS and an NDS are not entirely clear. Certainly a VAS allows answers to be marked and measured more precisely. Some investigators treat VAS answers as "continuous" data but treat NDS answers as ordinal data. However, investigators have shown that differences of less than 10% on a VAS probably are not significant. Therefore, if an NDS uses 10 categories, a difference in one category probably represents the absolute minimum amount of difference that would be analytically important.
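The summative Likert scoring and the internal-consistency check described above can be sketched in a few lines. The following is an illustration only, with invented response data: it totals item scores per respondent and computes Cronbach's alpha as k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list per question, aligned by respondent,
    # with all items scored in the same direction.
    k = len(items)
    # Summative (Likert) score for each respondent.
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 5-point Likert responses from six respondents to
# three related comfort items (purely illustrative data).
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))
```

An alpha near 1 suggests the items are measuring the same underlying characteristic and can reasonably be summed into one overall score; a low alpha argues against pooling them.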

Conclusion

This article has covered the basic steps involved in developing a research survey questionnaire. Although surveys often are considered a relatively easy and simple form of research, we have emphasized that properly conducted surveys are actually much more complex. We have covered the importance of carefully considering the ways in which survey questions are written and the use of scales. The next article in the series, which will conclude our discussion of survey research methodology, will cover the process of finalizing the survey questionnaire, validating the survey instrument, field testing, and then fully implementing the survey, analyzing the resultant data, and reporting the study results. An extensive body of literature about survey research methodology exists. Several excellent texts are available, including The Sage Series, a surprisingly sophisticated, easy-to-use, and relatively inexpensive resource. Readers with an interest in this field are strongly encouraged to obtain that or similar reference texts.

References
1. Myers KJ, Rodenberg H, Woodard D. Influence of the helicopter environment on patient care capabilities: flight crew perceptions. Air Med J 1995;14:21-5.
2. Erler CJ, Thompson CB. Determining the National Flight Nurses Association's research priorities. Air Med J 1995;14:16-20.
3. Wrobleski DS, Vukov LF. Training of flight nurses on fixed-wing air ambulance services. Air Med J 1996;15:158-62.
4. Peterson FV. Air medical research advice from the Mad Hatter. Air Med J 1995;14:53.
5. Panacek EA, Thompson CB. Basics of research (part 3): research study design. Air Med J 1995;14:139-46.
6. Panacek EA, Thompson CB. Basics of research (part 1): why conduct clinical research and how to get started? Air Med J 1995;14:33-6.
7. Thompson CB, Panacek EA, Davis E. Basics of research (part 4): research study design (part 2). Air Med J 1995;14:222-31.
8. Thompson CB, Schwartz R, Davis E, Panacek EA. Basics of research (part 6): quantitative data analysis. Air Med J 1996;15:73-84.
9. Moorhead JC, Gallery ME, Mannle T, et al. A study of the workforce in emergency medicine. Ann Emerg Med 1998;31:595-607.
10. O'Toole BI, Battistutta D, Long A, et al. A comparison of costs and data quality of three health survey methods: mail, telephone, and personal home interview. Am J Epidemiol 1986;124:317-28.
11. Siemiatycki J, Campbell S, Richardson L, et al. Quality of response in different population groups in mail and telephone surveys. Am J Epidemiol 1984;120:302-14.

Suggested Readings
The Sage Publications (Thousand Oaks, Calif.) Survey Research Kit contains:
- The Survey Handbook by Arlene Fink
- How to Ask Survey Questions by Arlene Fink
- How to Conduct Self-Administered and Mail Surveys by Linda B. Bourque and Eve P. Fielder
- How to Conduct Interviews by Telephone and in Person by James H. Frey and Sabine Mertens Oishi
- How to Design Surveys by Arlene Fink
- How to Sample in Surveys by Arlene Fink
- How to Measure Survey Reliability and Validity by Arlene Fink
- How to Analyze Survey Data by Arlene Fink
- How to Report on Surveys by Arlene Fink