Week 6 – Assignment: Evaluate a Questionnaire

Turnitin™ enabled: This assignment will be submitted to Turnitin™.

Instructions

Evaluate the survey provided to measure the attitude of hospital employees regarding patient safety. The survey can be found as part of Case 15.1 in the course textbook (Zikmund et al., 2013). 

More information about the survey, its constructs (see Survey Items and Composite Measures), and survey guidance is available on the AHRQ (Agency for Healthcare Research and Quality) website. This survey is the Hospital Survey on Patient Safety Culture.

For your evaluation, be sure to consider:

  • the information that is being sought,
  • the content and words of individual questions,
  • the response forms to the questions,
  • the level of measurement,
  • and question sequence.

Length: Your paper should be 5–7 pages, not including the title and reference pages.

References:  Include a minimum of five (5) scholarly sources.


References

Ten top tips for designing a questionnaire [Video file]. (2017). Retrieved from SAGE Research Methods.

Thwaites Bee, D., & Murdoch-Eaton, D. (2016). Questionnaire design: The good, the bad and the pitfalls. Archives of Disease in Childhood: Education and Practice Edition, 101(4), 210–212.

Zikmund, W., Babin, B. J., Carr, J., & Griffin, M. (2013). Business research methods (9th ed.). Mason, OH: Cengage Learning.


SAGE Research Methods Video

Ten Top Tips for Designing a Questionnaire

Pub. Date: 2016

Product: SAGE Research Methods Video

DOI: https://dx.doi.org/10.4135/9781473997592

Methods: Questionnaire design, Survey research

Keywords: practices, strategies, and tools

Disciplines: Anthropology, Business and Management, Criminology and Criminal Justice, Communication and Media Studies, Counseling and Psychotherapy, Economics, Education, Geography, Health, History, Marketing, Nursing, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social Work, Sociology, Science, Technology, Computer Science, Engineering, Medicine

Publishing Company: SAGE Publications Ltd

City: London

Online ISBN: 9781473997592

© 2016 SAGE Publications Ltd. All Rights Reserved.

[MUSIC PLAYING] [Ten Top Tips for Designing a Questionnaire] A questionnaire is a set of questions used in survey research to collect information from people about their opinions, attitudes, beliefs, and behavior. Questionnaires let you collect data in a standardized way, which can be quantified and analyzed statistically. But how do you ensure your survey is measuring what you want to measure?

[Tip #1: Stay focused on the aims of your research] When you're planning your survey, ask yourself, what are the aims of my research? List out the things that you're trying to find out, and then break each topic down and think about how to construct a survey question which will measure the underlying concept.

[Tip #2: See what's already out there] Start by doing some research to see if similar questions have been asked in other established surveys, like the General Social Survey. Using tried and tested questions can help ensure the reliability and validity of your measures. Don't feel you need to reinvent the wheel, but do exercise caution. The internet is full of poorly designed questionnaires, so make sure you're drawing from a reliable source.

[Tip #3: Think about your mode] More and more surveys are administered online. But it's also possible to do surveys over the phone, face-to-face, or with old-fashioned pencil and paper. Your questionnaire should be tailored to the mode you choose. For online or mail surveys, remember to explain the purpose of the survey to respondents in a covering email or an introduction to the survey. If you're using more than one interviewer to administer your survey face-to-face or by phone, think carefully about how you'll train your interviewers to ensure consistency.

[Tip #4: Keep it short] Respondents are more likely to complete a questionnaire if you keep it short, especially if it's online.

[Tip #5: Think carefully about your questions] Design the survey questions with your specific audience in mind, wording them carefully and clearly. It is important that the respondent knows exactly what you are asking them and that questions are not open to multiple interpretations. Don't ask leading questions, and avoid long and complex questions which could confuse respondents.

[Tip #6: Question order matters] Make initial questions easy for respondents to answer. If you're planning to ask personal or sensitive questions, think about asking these later. And remember to use branching or skip logic to avoid asking people unnecessary questions.

[Tip #7: Make recall easy] If you're asking your respondents how often or how recently they have done something, be sure to be specific about the time frame. But think carefully about how able the respondent will be to answer the question. Can you remember what you did a year ago?

[Tip #8: Don't forget demographic questions] For some research questions, it's important to find out things about who your respondents are – for example, where they live or how old they are, as well as contact information, in some cases. Don't forget to ask these questions if you hope to analyze your data in relation to demographics.

[Tip #9: Think about analysis] Try and think of the bigger picture and how you're going to analyze the data you collect. If you find you have lots of open-ended questions because you're finding it hard to think of the answers people might give, it's possible that a survey is not the best method to use. You might need to conduct some qualitative research first in order to develop a set of survey questions and answer options.

[Tip #10: Test, test, test] After testing the survey yourself, make sure you test it with your colleagues – or even better, a small sample of the respondents you hope to recruit. This will help highlight any issues or confusing questions, which you can then revise before sending it out more widely.

[Good luck with your research!] [MUSIC PLAYING]
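Tip #6 mentions branching or skip logic. The following is a minimal sketch of the idea, using invented question IDs and a single made-up routing rule; real survey tools configure this declaratively rather than in code.

```python
# A minimal sketch of branching/skip logic: route respondents past questions
# that do not apply to them. Question IDs and wording are hypothetical.

QUESTIONS = {
    "Q1": "Have you completed a patient safety survey before? (yes/no)",
    "Q2": "Roughly how many such surveys have you completed?",
    "Q3": "How long have you worked in your current unit?",
}

# If Q1 is answered "no", skip Q2 and go straight to Q3.
SKIP_RULES = {("Q1", "no"): "Q3"}
ORDER = ["Q1", "Q2", "Q3"]

def run_survey(answer_fn):
    """Walk the question order, applying skip rules to the respondent's answers."""
    answers = {}
    i = 0
    while i < len(ORDER):
        qid = ORDER[i]
        answers[qid] = answer_fn(QUESTIONS[qid])
        target = SKIP_RULES.get((qid, answers[qid]))
        i = ORDER.index(target) if target else i + 1
    return answers

if __name__ == "__main__":
    scripted = iter(["no", "5 years"])  # a respondent who skips Q2
    print(run_survey(lambda q: next(scripted)))  # {'Q1': 'no', 'Q3': '5 years'}
```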


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design  is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g.,  personal finances  or attending  religious services ) or that elicited  volunteered  answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted  several strategies  for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)

Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all  influence  how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
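The two practices described above, fully shuffling nominal response options and reversing ordinal scales for half of respondents, can be sketched in a few lines. The option lists and helper names below are hypothetical illustrations, not items or code from an actual Pew questionnaire.

```python
import random

# Hypothetical option lists for illustration only.
NOMINAL_OPTIONS = ["The economy", "Health care", "Terrorism", "Energy policy", "Immigration"]
ORDINAL_SCALE = ["Excellent", "Good", "Only fair", "Poor"]

def options_for_respondent(options, ordinal=False, rng=None):
    """Return the answer options in the order one respondent would see them.

    Nominal options are fully shuffled so order effects are spread randomly;
    ordinal scales keep their order but are reversed for a random half of
    respondents, as described above.
    """
    rng = rng or random.Random()
    if ordinal:
        return list(reversed(options)) if rng.random() < 0.5 else list(options)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

if __name__ == "__main__":
    rng = random.Random(7)  # fixed seed only so the demo is reproducible
    print(options_for_respondent(NOMINAL_OPTIONS, rng=rng))
    print(options_for_respondent(ORDINAL_SCALE, ordinal=True, rng=rng))
```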

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
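The two-form (split-ballot) approach described above amounts to random assignment plus a comparison of the answers from the two groups. A minimal sketch follows; the counts are invented (the 51% vs. 44% split loosely echoes the 2005 wording example, with an assumed 1,000 respondents per form), and the z-test is one common way to judge whether a wording difference is larger than chance.

```python
import math
import random

def assign_form(respondent_ids, seed=0):
    """Randomly split respondents into two questionnaire forms (A/B)."""
    rng = random.Random(seed)
    return {rid: rng.choice(["A", "B"]) for rid in respondent_ids}

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test comparing the share answering 'favor' on each form."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

if __name__ == "__main__":
    # Illustrative counts only, not real survey data.
    p_a, p_b, z = two_proportion_z(successes_a=510, n_a=1000, successes_b=440, n_b=1000)
    print(f"Form A: {p_a:.0%}, Form B: {p_b:.0%}, z = {z:.2f}")
    print("Wording likely affected responses" if abs(z) > 1.96 else "No significant difference")
```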

One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can  vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).

An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”; 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Questionnaire design: the good, the bad and the pitfalls

Denise Thwaites Bee, Deborah Murdoch-Eaton

Academic Unit of Medical Education, The Medical School, University of Sheffield, Sheffield, UK

Correspondence to Professor Deborah Murdoch- Eaton, Academic Unit of Medical Education, The Medical School, University of Sheffield, Beech Hill Road, Sheffield S10 2RX, UK; [email protected]

Received 21 September 2015 Revised 27 January 2016 Accepted 29 January 2016 Published Online First 24 February 2016

To cite: Thwaites Bee D, Murdoch-Eaton D. Arch Dis Child Educ Pract Ed 2016;101:210–212.

You have a question, or want to find out current perceptions about a subject, and a comprehensive literature search does not give the answer. A questionnaire or survey, if appropriately designed and administered, can be an easy and efficient way to collect data. However, a well-designed tool is essential to provide meaningful answers. Guidance on good questionnaire design is available.1–4 This can be framed around three simple steps: preparation—evaluation—delivery. Analysis and interpretation are the final stages of completing the research.

PREPARATION

Is a survey method the most appropriate research tool to answer the question? Questionnaires are useful to investigate opinions or attitudes of a population. If a questionnaire is chosen as the research tool, the next step is to identify whether a validated instrument already exists. If a tool needs to be designed, what format would be of greatest value in answering the enquiry; a structured interview or a self-completed written form? The latter can gather a large amount of rich data, while the former provides a deeper understanding through semistructured questioning.3

Self-completed questionnaires require careful construction with clear articulation of purpose. Their success depends strongly on format as well as the wording; use an attractive, easy to navigate presentation and ensure the length is kept as short as possible. Consider whether to include open or closed questions, or a combination of both. Questions should only include a single point, written unambiguously and contained within short sentences. Wording should be appropriate for your survey population and avoid jargon to reduce potential confusion. Closed questions can provide large amounts of easily handled (often numerical) data. Open questions, as in free text responses and larger interview surveys, will collect rich information, but will require considerable resource time for the analysis, including methods for sorting and coding of the data.

Sampling

How much data are needed to answer the question? The intention is that results from a ‘sample’ can be generalised back to the whole population. Sampling within the whole population, or subgroup, may be the most manageable way to answer the research question as the amount of data gathered from a census would be overwhelming. Recruitment can be by advertising, although selection should include random participation to reduce investigator bias; response bias cannot always be avoided.5 If a subgroup (eg, ethnic, geographic, socioeconomic) is the subject of study, then participant selection needs to be targeted, systematic and consistent. Despite this general ‘rule’ of random selection, there is a place for opportunistic or convenience sampling (eg, finding participants from among your colleagues or those attending a meeting or lecture), provided the potential limitations are articulated. Identifying appropriate sample size can be facilitated using freely available on-line sample-size calculators. The smaller the population being studied, the greater the proportion of this population the sample should be. And however the sampling is done, it is essential the limitations this places on data interpretation are understood. This is particularly true for convenience sampling when participants may hold very similar opinions and thus not truly reflect the study population.
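The point that smaller populations need a proportionally larger sample can be made concrete with the standard sample-size formula for estimating a proportion plus a finite population correction, which is essentially what the online calculators mentioned above compute. The margin of error, confidence level and population sizes below are illustrative choices, not recommendations for any particular study.

```python
import math

def sample_size_for_proportion(population, margin=0.05, confidence_z=1.96, p=0.5):
    """Sample size to estimate a proportion within +/- margin at ~95% confidence.

    p = 0.5 is the most conservative assumption; the finite population
    correction is what makes small populations need a larger *fraction*.
    """
    n_infinite = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    n_adjusted = n_infinite / (1 + (n_infinite - 1) / population)
    return math.ceil(n_adjusted)

if __name__ == "__main__":
    for pop in (200, 2_000, 20_000):
        n = sample_size_for_proportion(pop)
        print(f"Population {pop}: sample {n} ({n / pop:.0%} of the population)")
```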

Response formats and scales

Many formats may be used as illustrated in figure 1, the most common being the Likert or modified Likert. Additionally, simple binary decisions (yes/no) or items with multiple choice options can be very useful. The scale choice should be designed to contribute positively to data collection. Confusion within questionnaires arises when combining or changing formats; these should be well signposted to ensure participants continue to respond appropriately.6 Including some free text items is valuable in exploring context, broadening the scope of answers and providing rich data to enhance numerical result interpretation.

Figure 1 For the Likert-type scales, the question becomes a statement that can be agreed or disagreed with. Where subjects are to choose a position on a line or within a number range, the ends of the scale must be explained and a clear set of statements or anchors used. There is often a neutral point shown here as neither agree nor disagree, the number 5 and the vertical line on the Visual Analogue Scale (VAS). Face-Pain Scales are useful for young children to indicate severity, with the assisting adult describing pain levels (“this one hurts a lot and you want to cry”).
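Likert-type responses are usually coded numerically and then combined into composite scores (the AHRQ survey named in the assignment uses composite measures of this general kind). The sketch below is a minimal illustration of that scoring step; the item wordings, 5-point labels and reverse-scoring choice are assumptions for the example, not the actual AHRQ items or scoring rules.

```python
# Turn 5-point Likert responses into a composite score for one respondent.
# Item texts and the reverse-scored set are illustrative assumptions only.

LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neither": 3, "Agree": 4, "Strongly agree": 5}
REVERSE_SCORED = {"We have patient safety problems in this unit"}  # negatively worded item

def item_score(item, response):
    score = LIKERT[response]
    return 6 - score if item in REVERSE_SCORED else score  # flip 1..5 for negative items

def composite(responses):
    """Mean of item scores; higher values mean a more positive attitude."""
    scores = [item_score(item, resp) for item, resp in responses.items()]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    one_respondent = {
        "People support one another in this unit": "Agree",
        "We have patient safety problems in this unit": "Disagree",
    }
    print(round(composite(one_respondent), 2))  # -> 4.0
```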

EVALUATION

Field testing a new instrument is essential to evaluate whether it will provide the information needed for the study, and that final versions will elicit reliable and valid responses. Questionnaire evaluators should include researchers, stakeholders and respondents. A clear brief should cover purpose, clarity and layout, and whether items are likely to be well understood in a similar fashion by all respondents. Acceptability and feasibility are important considerations and should include both questionnaire size and length of time required for completion; these should not exceed the participants’ patience threshold. Negative influences of length on response rates and quality are frequently seen in the latter portions of lengthy questionnaires.7 Students are particularly prone to ‘survey fatigue’ from frequent in-course evaluations.

DELIVERY

Even though questionnaires can easily be kept anonymous, ethical approval is necessary, especially if publication is desired. Consent may be considered implicit through return of a completed form, and an opening statement or a covering letter can make this explicit. Questionnaire distribution should be tailored to the study population to enhance return rate. Postal paper questionnaires or relying on opportunist encounters can be useful. Electronic means can reach large numbers of potential respondents and thus create a large data set. When using email, make sure the ‘bcc’ and not the main or cc addressee lines are used to maintain confidentiality. Accompanying material, attached or accessed via a web link, provides valuable insight into the importance of the research and thus enhances participant interest and completion rate. Consideration of when would be the best time to undertake the survey will depend on the population sampled. For example, avoiding exam time for students or distributing questionnaires to a ‘captive’ audience during lectures is opportunistically useful. Small rewards are often used for surveys, such as entry into a ‘lottery’ or the gift of a chocolate bar, and enhance return rate. However, this can be considered ethically challenging, especially if the inducements are considered likely to significantly influence compliance and should be discussed within ethical approval processes.

ANALYSIS AND INTERPRETATION

The chosen questionnaire format should include consideration of how the data will be processed. The availability of scannable forms or web forms such as ‘Survey Monkey’ or ‘Typeform’ provides automatic collation, data management and often some statistical analyses. When presenting results, provide comprehensive information on design methodology to facilitate a critical review of outcomes. Condensing data to provide only the central tendencies, that is, means/medians, may hide extremes of opinion, thus an indication of range or SD should be included. Connections between items or groups of items from the questionnaire may be important, and factor analysis can demonstrate these and provide tool validity assessments. Graphical representations provide valuable visual overviews of complex data.

Analysis of qualitative (free text) data requires coding (or sorting) data into themes, which will form the basis of an interpretive discussion. There are helpful programmes that aid sorting (theming) of data (eg, NVivo) but any systematic means of sorting comments into themes can be used; cards with quotes written on them are particularly useful for finding themes. Qualitative data may also be presented graphically (eg, flowcharts) and demonstrate data through a density analysis. Remember that a single comment is as valuable in understanding an issue and can be as powerful as a commonly held belief.
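As a small illustration of reporting spread as well as central tendency, the sketch below prints the mean, median, SD and range for one Likert item; the 1–5 codes are made-up values, not data from any survey discussed here.

```python
import statistics

# Illustrative 1-5 Likert codes for a single item from ten respondents.
scores = [5, 4, 4, 2, 5, 1, 4, 3, 5, 2]

print("mean  :", round(statistics.mean(scores), 2))
print("median:", statistics.median(scores))
print("SD    :", round(statistics.stdev(scores), 2))  # spread around the mean
print("range :", min(scores), "-", max(scores))       # shows extremes of opinion
```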

SUMMARY

Preparation, evaluation and delivery of a survey instrument are crucial. This includes well-researched background material to confirm the question, ethics approval, a consideration of validity and whether findings might be generalisable. Surveys gather quantifiable data efficiently, but contextual richness and interpretation often come through the free text. Quotes can usefully illustrate your interpretations and conclusions when presenting results (table 1).

Contributors Both authors have contributed equally to this paper. DTB completed the first draft; both authors have worked together on the subsequent and final versions.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

REFERENCES

1 Woodward CA. Questionnaire construction and question writing for research in medical education. Med Educ 1988;22:345–63.

2 Artino AR Jr, La Rochelle JS, Dezee KJ, et al. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach 2014;36:463–74.

3 Tavakol M, Sandars J. Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Med Teach 2014;36:838–48.

4 Mathers N, Fox N, Hunn A. Surveys and Questionnaires. The NIHR Research Design Service for the East Midlands/Yorkshire & the Humber, 2009. http://www.rds-yh.nihr.ac.uk/wp-content/uploads/2013/05/12_Surveys_and_Questionnaires_Revision_2009.pdf (accessed 11 Jul 2015).

5 McFarlane E, Olmsted MG, Murphy J, et al. Nonresponse bias in a mail survey of physicians. Eval Health Prof 2007;30:170–85.

6 Jenkins CR, Dillman DA. Towards a theory of self-administered questionnaire design. In: Lyberg LE, et al., eds. Survey Measurement and Process Quality. John Wiley and Sons, 2012: Chapter 7, p165–96.

7 Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Q 2009;73:349–60.

Table 1 The do's (the good) and don'ts (the pitfalls) of questionnaire design

  • The good: Well-articulated research topic. The pitfall: Lack of consideration of the field of scholarship when starting.
  • The good: Comprehensive literature review. The pitfall: Not searching broadly enough; you may need to research journals or the grey literature outside your usual reading.
  • The good: Considered choice of survey format. The pitfall: Not taking advantage of validated questionnaires already available; not considering what is acceptable and feasible.
  • The good: Clear visual design with signposting. The pitfall: Small font, poorly organised, difficult to navigate, too long.
  • The good: Questions with a single point. The pitfall: Complex, ambiguous questions with more than one point or unclear wording.
  • The good: Field test or pilot your instrument appropriately. The pitfall: Missing out relevant stakeholder groups in the review; inappropriate opportunistic sampling—it is too easy to think your colleagues will do!
  • The good: For sampling, refer back to your research question; randomise or purposively sample. The pitfall: Insufficient consideration; sampling can disenfranchise certain persons or groups of people.
  • The good: Choose when, where and how to deliver the questionnaire for maximum uptake. The pitfall: Not answering the research question through poor sampling or poor returns.
  • The good: Enhance quantitative questions with free text boxes. The pitfall: Can take many hours of analysis; know your limits regarding expertise and time.

