By Jill Jones
K-12 leaders often turn to surveys as an effective and efficient means of gathering feedback from stakeholders, including staff, parents, students, and the community. However, surveys are only useful if they are designed, administered, and analyzed according to best practices.
Building upon a previous blog post titled “Strategies for Successful K-12 Survey Administration,” this post explores two other aspects of survey research: (1) survey design and (2) survey analysis. To design an effective survey, it is essential to understand the audience, the goals of the analysis, and the content the survey needs to cover.
This post provides recommendations for survey design and analysis and is grounded in survey research and Hanover’s extensive practical experience surveying students, parents, staff members, and community members in the K-12 sphere.
Less is More, Keep it Short
Have you ever opened a survey expecting a 5- to 10-minute time commitment and found yourself still clicking through questions 15 minutes later? Setting clear expectations about survey length at the outset, meeting those expectations, and keeping the survey short improve the respondents’ experience, the quality of your data, and the likelihood that respondents will complete the entire survey.
What is a “short” survey? To determine an appropriate survey length, it is important to evaluate the expected attention span of respondents and their commitment to the topic. For example, for a school climate survey, you may ask more questions of parents and staff members compared to community members or students. This is because parents and staff members often have a greater commitment to providing feedback and a greater willingness to take the time to do so. Here are some length considerations for various K-12 stakeholders:
- Students may have shorter attention spans and can become fatigued on long surveys, yielding low-quality data. For example, when analyzing data from long surveys, we often see student respondents uniformly select “strongly agree” for multiple Likert questions in a row late in the survey (one way to detect this “straight-lining” pattern is sketched after this list). For this reason, student surveys should ideally take no more than 10 minutes to complete.
- Staff members frequently have the longest attention spans and the strongest commitment to providing high-quality responses. Assuming staff members are not over-surveyed by their district, asking them to complete a longer survey, up to 15 minutes, is unlikely to harm response quality or response rates.
- Parents are often the second most committed audience in terms of providing quality responses on longer surveys; however, parents are often hard to reach, so a 10-minute survey is appropriate for this group.
- Community members, on the other hand, typically complete surveys at much lower rates, so a short survey of approximately 5 minutes or less is best. Although incentives are uncommon for our K-12 surveys and partners, offering a survey incentive to hard-to-reach populations like community members can effectively increase participation and completion.
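One way to detect the fatigue-driven “straight-lining” pattern mentioned above is to check, at the analysis stage, whether a respondent gave an identical answer to every Likert item. Here is a minimal sketch using pandas; the column names and data are hypothetical:

```python
import pandas as pd

# Hypothetical Likert responses coded 1-5 ("Strongly Disagree" to "Strongly Agree")
responses = pd.DataFrame({
    "q1": [5, 2, 4],
    "q2": [5, 3, 4],
    "q3": [5, 4, 2],
    "q4": [5, 1, 3],
})

# A respondent who gives the same answer to every item may be "straight-lining,"
# a common sign of survey fatigue
straight_liners = responses.nunique(axis=1) == 1
print(responses[straight_liners])  # flags the first respondent, who answered 5 to every item
```

Flagged respondents can then be reviewed before deciding whether to exclude their Likert responses from the analysis.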
Target Your Audience
Use Screening Questions to Eliminate Unwanted Stakeholder Groups
It’s important to include screening questions even when using a contact list. Survey respondents may forward their survey invitation on to unintended groups, or unintended respondents may inadvertently gain access through web sources. To combat these realities, an effective survey instrument should always include screening questions that appropriately target the intended audience. In some cases, it might also be necessary to duplicate specific screening and background questions for quality control purposes. For example, students are sometimes asked to identify their grade at both the beginning and the end of the survey, allowing for identification (and potential removal) of respondents who provide inconsistent grade selections.
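The duplicate-grade check described above is straightforward to automate during data cleaning. Below is a minimal sketch, assuming the two grade questions were exported as separate columns (the column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical export: the same grade question asked at the start and end of the survey
df = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "grade_start": ["Grade 6", "Grade 7", "Grade 8"],
    "grade_end":   ["Grade 6", "Grade 9", "Grade 8"],
})

# Flag respondents whose two grade answers disagree, for review or removal
inconsistent = df[df["grade_start"] != df["grade_end"]]
print(inconsistent["respondent_id"].tolist())  # [102]
```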
Only Show Relevant Questions to Specific Stakeholder Groups
To keep surveys brief and meaningful, ensure that participants answer only the questions relevant to them. For example, only staff members should see questions about staff professional development. Similarly, students and staff could report on teachers’ use of instructional technology during class, whereas parents and students could report on students’ use of technology at home; no group should have to answer both the in-class and at-home questions.
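Most survey platforms implement this through skip or display logic keyed to a screening question. The sketch below illustrates the underlying routing idea; the role names and section names are hypothetical, not features of any particular platform:

```python
# Map each screened stakeholder role to the question sections it should see
SECTIONS_BY_ROLE = {
    "staff":   ["school_climate", "professional_development", "in_class_technology"],
    "student": ["school_climate", "in_class_technology", "at_home_technology"],
    "parent":  ["school_climate", "at_home_technology"],
}

def sections_for(role: str) -> list[str]:
    """Return the question sections a respondent should see, given their screened role."""
    # Unrecognized roles see only the shared questions
    return SECTIONS_BY_ROLE.get(role, ["school_climate"])

print(sections_for("parent"))  # ['school_climate', 'at_home_technology']
```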
Intentionally Place Survey Topics and Questions
Ease Respondents into the Survey
In some K-12 surveys, we ask about sensitive topics such as school bullying or emotional health and well-being. These more sensitive topics should appear after survey respondents have warmed up on easier questions, such as school or grade affiliation or less sensitive topics like school cleanliness. Each topic area and section of questions should also be evaluated to understand the extent to which one section may influence responses to another.
Place Demographic Questions at the End
Common demographic questions such as race and gender may influence respondents’ answers to other survey questions, a phenomenon commonly known as an “order effect.” For example, if we first ask a respondent to specify their gender identity and then ask social-emotional learning questions, the respondent may unconsciously reflect on their gender identity and respond in gender-conforming or non-conforming ways. For this reason, we often place demographic questions at the end of a survey.
Randomize Questions and Options where Appropriate
The order of response options in Likert scale, “select all that apply,” and multiple-choice questions also matters. Unless the response options have a natural order (e.g., Kindergarten, Grade 1, Grade 2, etc.), they should be randomized within the survey to minimize order effects. For example, a respondent may “satisfice” and only pay attention to the first few options in a “select all that apply” question, overrepresenting those options because of their position rather than because of thoughtful reflection on the respondent’s part.
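Survey platforms typically offer per-respondent randomization as a built-in setting; conceptually, it amounts to something like the following sketch, in which only non-ordinal options are shuffled (the option text is hypothetical):

```python
import random

# Non-ordinal options: shuffle per respondent to spread order effects evenly
perception_options = [
    "Teacher communication",
    "School safety",
    "Extracurricular programs",
    "Facilities and cleanliness",
]
shuffled = random.sample(perception_options, k=len(perception_options))

# Ordinal options have a natural order, so they should NOT be shuffled
grade_options = ["Kindergarten", "Grade 1", "Grade 2", "Grade 3"]
```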
Each Question Matters
As indicated above, survey design involves many moving parts and competing demands. Beyond these broader design principles, there are more nuanced aspects of item-level question design, including the following:
- Provide reference frames for time and location. Make sure respondents are answering questions about the same time frame and place. For example, “In the past week […],” “In the past month […]” are time-oriented reference frames whereas “My school […]” or “The district […]” are location-oriented reference frames. When using time-oriented reference frames, ensure that the time is cognitively appropriate for the target population. For example, 8th graders probably can’t answer accurately about events that occurred over a year ago.
- Avoid double-barreled questions. Double-barreled questions try to measure two (or more) things at once. Consider the example item “The Central Office staff is helpful and friendly.” Ideally, the survey would ask about “helpfulness” and “friendliness” separately, since these are two distinct aspects of customer service.
- Use mutually exclusive answer options. This protects the validity of your data by ensuring respondents can qualify for only one answer category. For example, a question about income should offer mutually exclusive options such as $10,000-$24,999 and $25,000-$49,999 rather than the overlapping $10,000-$25,000 and $25,000-$50,000 (a short analysis-side sketch appears after this list).
- Provide a “Don’t Know,” “Not Applicable,” or “No Opinion” option if relevant. Include only one such response option, placed at the end of the scale. First consider whether it is reasonable for respondents not to have an answer to a given question; then note that “Don’t Know” is best suited to factual questions and “No Opinion” to perception questions.
- Three-, five-, or seven-point scales are best. Fewer options are less fatiguing than more options, but more options allow for more nuance (or variance) in the data. Determine the optimal number of points based on overall survey length, the population (e.g., children or adults), and the need for various scale points in interpretation.
- Provide a middle point and anchor. Scales with a middle point tend to produce better data. For example, you might ask a question that employs the scale “Not at All Familiar,” “Slightly Familiar,” “Moderately Familiar,” “Very Familiar,” “Extremely Familiar.” In this example, “moderately” is the middle point and “familiar” is the anchor.
- Limit the use of open-ended questions. We typically recommend no more than one or two open-ended questions per survey, each taking approximately one minute to complete. Open-ended responses are supplementary to a traditional close-ended survey and should be used sparingly, since they tend to attract divergent perspectives and are therefore often not representative of the overall respondent group.
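As a brief illustration of the mutually exclusive options discussed above, non-overlapping income brackets map directly onto non-overlapping bins at the analysis stage. A minimal pandas sketch with hypothetical values:

```python
import pandas as pd

incomes = pd.Series([24_999, 25_000, 49_999, 50_000])

# Right-closed bins guarantee each income falls into exactly one bracket
brackets = pd.cut(
    incomes,
    bins=[9_999, 24_999, 49_999, 99_999],
    labels=["$10,000-$24,999", "$25,000-$49,999", "$50,000-$99,999"],
)
print(brackets.tolist())
# ['$10,000-$24,999', '$25,000-$49,999', '$25,000-$49,999', '$50,000-$99,999']
```

With overlapping options such as $10,000-$25,000 and $25,000-$50,000, a respondent earning exactly $25,000 would qualify for two brackets, which is precisely the ambiguity mutually exclusive options avoid.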
Consider the Analysis
It is important to plan for data analysis during the survey design process. Each of the previously mentioned design principles supports the analysis phase, but it is also important to consider how you plan to use the survey results and whether it will be important to segment and compare responses across stakeholder groups. For example, if you want to compare survey responses across schools, then the survey instrument needs to include a question asking respondents for their school affiliation. These essential questions should be placed early in the survey instrument and should require responses.
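For example, if each respondent’s school affiliation is captured as a required question, a by-school comparison becomes a simple grouped summary during analysis. A minimal sketch, with hypothetical column names and data:

```python
import pandas as pd

# Hypothetical export: one row per respondent, with a required school-affiliation answer
df = pd.DataFrame({
    "school": ["North Elementary", "North Elementary", "South Middle", "South Middle"],
    "climate_score": [4.2, 3.8, 3.1, 3.5],
})

# Segmenting is only possible because school affiliation was collected in the survey
by_school = df.groupby("school")["climate_score"].agg(["mean", "count"])
print(by_school)
```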