
90 Survey Question Examples + Best Practices Checklist

In this guide, we cover:

  • What makes a good survey question
  • The importance of asking the right questions
  • 9 types of survey questions + examples
  • How to conduct surveys effectively
  • How to make surveys easier with FullSession
  • FullSession pricing plans
  • How to install your first website survey
  • FAQs about survey questions

An effective survey is the best way to collect customer feedback. The responses you gather can serve many purposes, such as improving your product, supplementing market research, and shaping new marketing strategies. But what makes a survey effective?

The answer is simple: you have to ask the right questions. Good survey questions gather concrete information from your audience and give you a solid idea of what you need to do next. However, creating a survey is not that easy: you want to make every question count.

In this article we’ll cover everything you need to know about survey questions, with 90 examples and use cases.

What Makes a Good Survey Question?

Understanding the anatomy of a good survey question can transform your approach to data collection, ensuring you gather information that’s both actionable and insightful. Let’s dive deeper into the elements that make a survey question effective:

  • Clarity is Key:  Questions should be straightforward and leave no room for interpretation, ensuring uniform understanding across all respondents.
  • Conciseness Matters:  Keep questions short and to the point. Avoid unnecessary wording that could confuse or disengage your audience.
  • Bias-Free Questions:  Ensure questions are neutral and do not lead respondents toward a particular answer. This maintains the integrity of your data.
  • Avoiding Ambiguity:  Specify the context clearly and ask questions in a way that allows for direct and clear answers, eliminating confusion.
  • Ensuring Relevance:  Each question should have a clear purpose and be directly related to your survey’s objectives, avoiding any irrelevant inquiries.
  • Easy to Answer:  Design questions in a format that is straightforward for respondents to understand and respond to, whether open-ended, multiple-choice, or using a rating scale.

Keep these points in mind as you prepare to write your survey questions. It also helps to refer back to these goals after drafting your survey so you can see if you hit each mark.

The Importance of Asking the Right Questions

The primary goal of a survey is to collect information that helps you meet a specific goal, whether that’s gauging customer satisfaction or getting to know your target audience better. Asking the right survey questions is the best way to achieve that goal. More specifically, a good survey can help you with:

Informed Decision-Making

A solid foundation of data is essential for any business decision, and the right survey questions point you in the direction of the most valuable information.

Survey responses serve as a basis for the strategic decisions that can propel a business forward or redirect its course to avoid potential pitfalls. By understanding what your audience truly wants or needs, you can tailor your products or services to meet those demands more effectively.

Uncovering Customer Preferences

Today’s consumers have more options than ever before, and their preferences can shift with the wind. Asking the right survey questions helps you tap into the current desires of your target market, uncovering trends and preferences that may not be immediately obvious.

This insight allows you to adapt your products, services, and marketing messages to resonate more deeply with the target audience, fostering loyalty and encouraging engagement.

Identifying Areas for Improvement

No product, service, or customer experience is perfect, but the path to improvement lies in understanding where the gaps are. The right survey questions can shine a light on these areas, offering a clear view of what’s working and what’s not.

This feedback is invaluable for continuous improvement, helping you refine your products and enhance the customer experience. In turn, this can lead to increased satisfaction, loyalty, and positive word-of-mouth.

Reducing Churn Rate

Churn rate is the percentage of customers who stop using your service or product over a given period. High churn rates can be a symptom of deeper issues, such as dissatisfaction with the product or service, poor customer experience, or unmet needs. Well-designed survey questions can help you identify the reasons behind customer departures and take proactive steps to address them.

For example, survey questions that explore customer satisfaction levels, reasons for discontinuation, or the likelihood of recommending the service to others can pinpoint specific factors contributing to churn.
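To make the metric itself concrete, here’s a minimal sketch of the usual churn-rate calculation (the function name and figures are illustrative only):

```python
def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Percentage of customers who stopped using the product over the period."""
    return customers_lost / customers_at_start * 100

# Example: 1,000 customers at the start of the quarter, 50 of them cancelled
print(churn_rate(1000, 50))  # 5.0, i.e. a 5% quarterly churn rate
```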

Minimizing Website Bounce Rate

Bounce rate is the percentage of visitors who leave a website after viewing just one page. High bounce rates may signal that a site’s content, layout, or user experience isn’t meeting visitor expectations.

Utilizing surveys to ask about visitors’ web experiences can provide valuable insights into website usability, content relevance, and ease of navigation. Ultimately, well-crafted survey questions aimed at understanding the user experience can inform strategic adjustments that improve overall website performance and foster a more engaged audience.
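As a rough illustration of the bounce-rate calculation itself, here’s a toy sketch that works from a per-session page count (the data and variable names are hypothetical, not pulled from any analytics tool):

```python
# Pages viewed in each session, e.g. from a hypothetical analytics export
pages_per_session = [1, 3, 1, 5, 2, 1, 1, 4]

bounces = sum(1 for pages in pages_per_session if pages == 1)
bounce_rate = bounces / len(pages_per_session) * 100

print(f"Bounce rate: {bounce_rate:.1f}%")  # 50.0% of these sessions ended after one page
```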


9 Types of Survey Questions + Examples

A good survey consists of two or more types of survey questions. However, all questions must serve a purpose. In this section, we divide survey questions into nine categories and include the best survey question examples for each type:

1. Open Ended Questions

Open-ended questions  allow respondents to answer in their own words instead of selecting from pre-selected answers.

“What features would you like to see added to our product?”

“How did you hear about our service?”

“What was your reason for choosing our product over competitors?”

“Can you describe your experience with our customer service?”

“What improvements can we make to enhance your user experience?”

“Why did you cancel your subscription?”

“What challenges are you facing with our software?”

“How can we better support your goals?”

“What do you like most about our website?”

“Can you provide feedback on our new product launch?”

When to use open-ended questions: These survey questions are a good idea when you don’t yet have a solid grasp of customer satisfaction. Respondents have the freedom to express all their thoughts and opinions, which in turn gives you an accurate feel for how customers perceive your brand.

2. Multiple Choice Questions

Multiple-choice questions offer a set of predefined answers, usually three to four. Businesses usually use multiple-choice survey questions to gather information on participants’ attitudes, behaviors, and preferences.

“Which of the following age groups do you fall into? (Under 18, 18-25, 26-35, 36-45, 46-55, 56+)”

“What is your primary use of our product? (Personal, Business, Educational)”

“How often do you use our service? (Daily, Weekly, Monthly, Rarely)”

“Which of our products do you use? (Product A, Product B, Product C, All of the above)”

“What type of content do you prefer? (Blogs, Videos, Podcasts, eBooks)”

“Where do you usually shop for our products? (Online, In-store, Both)”

“What is your preferred payment method? (Credit Card, PayPal, Bank Transfer, Cash)”

“Which social media platforms do you use regularly? (Facebook, Twitter, Instagram, LinkedIn)”

“What is your employment status? (Employed, Self-Employed, Unemployed, Student)”

“Which of the following best describes your fitness level? (Beginner, Intermediate, Advanced, Expert)”

When to use multiple-choice questions: Multiple-choice questions can help with market research and segmentation, because you can easily group respondents based on which predefined answer they choose. If that is the purpose of your survey, however, each question must be based on behavioral types or customer personas.

3. Yes or No Questions

Yes or no questions are straightforward, offering a binary choice.

“Have you used our product before?”

“Would you recommend our service to a friend?”

“Are you satisfied with your purchase?”

“Do you understand the terms and conditions?”

“Was our website easy to navigate?”

“Did you find what you were looking for?”

“Are you interested in receiving our newsletter?”

“Have you attended one of our events?”

“Do you agree with our privacy policy?”

“Have you experienced any issues with our service?”

When to use yes/no questions: These survey questions are very helpful in market screening and filtering out certain people for targeted surveys. For example, asking “Have you used our product before?” helps you separate the people who have tried out your product, a.k.a. the people who qualify for your survey.

4. Rating Scale Questions

Rating scale questions ask respondents to rate their experience or satisfaction on a numerical scale.

“On a scale of 1-10, how would you rate our customer service?”

“How satisfied are you with the product quality? (1-5)”

“Rate your overall experience with our website. (1-5)”

“How likely are you to purchase again? (1-10)”

“On a scale of 1-10, how easy was it to find what you needed?”

“Rate the value for money of your purchase. (1-5)”

“How would you rate the speed of our service? (1-10)”

“Rate your satisfaction with our return policy. (1-5)”

“How comfortable was the product? (1-10)”

“Rate the accuracy of our product description. (1-5)”

When to use rating scale questions: As you can see from the survey question examples above, rating scale questions give you excellent  quantitative data  on customer satisfaction.

5. Checkbox Questions

Checkbox questions allow respondents to select multiple answers from a list. You can also include an “Others” option, where the respondent can answer in their own words.

“Which of the following features do you value the most? (Select all that apply)”

“What topics are you interested in? (Select all that apply)”

“Which days are you available? (Select all that apply)”

“Select the services you have used. (Select all that apply)”

“What types of notifications would you like to receive? (Select all that apply)”

“Which of the following devices do you own? (Select all that apply)”

“Select any dietary restrictions you have. (Select all that apply)”

“Which of the following brands have you heard of? (Select all that apply)”

“What languages do you speak? (Select all that apply)”

“Select the social media platforms you use regularly. (Select all that apply)”

When to use checkbox questions: Checkbox questions are an excellent tool for collecting  psychographic data , including information about customers’ lifestyles, behaviors, attitudes, beliefs, etc. Moreover, survey responses will help you correlate certain characteristics to specific market segments.

6. Rank Order Questions

Rank order questions ask respondents to prioritize options according to their preference or importance.

“Rank the following features in order of importance to you. (Highest to Lowest)”

“Please rank these product options based on your preference. (1 being the most preferred)”

“Rank these factors by how much they influence your purchase decision. (Most to Least)”

“Order these services by how frequently you use them. (Most frequent to Least frequent)”

“Rank these issues by how urgently you think they need to be addressed. (Most urgent to Least urgent)”

“Please prioritize these company values according to what matters most to you. (Top to Bottom)”

“Rank these potential improvements by how beneficial they would be for you. (Most beneficial to Least beneficial)”

“Order these content types by your interest level. (Most interested to Least interested)”

“Rank these brands by your preference. (Favorite to Least favorite)”

“Prioritize these activities by how enjoyable you find them. (Most enjoyable to Least enjoyable)”

When to use rank order questions: Respondents must already be familiar with your brand or products to answer these questions, which is why we recommend using these for customers in the middle or bottom of your  conversion funnel .


7. Likert Scale Questions

Likert scale questions measure the intensity of feelings toward a statement on a scale of agreement or satisfaction. These questions usually use a 5- to 7-point scale, ranging from “Strongly Agree” to “Strongly Disagree” or something similar.

  • “I am satisfied with the quality of customer service. (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree)”
  • “The product meets my needs. (Strongly Agree to Strongly Disagree)”
  • “I find the website easy to navigate. (Strongly Agree to Strongly Disagree)”
  • “I feel that the pricing is fair for the value I receive. (Strongly Agree to Strongly Disagree)”
  • “I would recommend this product/service to others. (Strongly Agree to Strongly Disagree)”
  • “I am likely to purchase from this company again. (Strongly Agree to Strongly Disagree)”
  • “The company values customer feedback. (Strongly Agree to Strongly Disagree)”
  • “I am confident in the security of my personal information. (Strongly Agree to Strongly Disagree)”
  • “The product features meet my expectations. (Strongly Agree to Strongly Disagree)”
  • “Customer service resolved my issue promptly. (Strongly Agree to Strongly Disagree)”

When to use Likert scale questions: You can use these survey question examples in different types of surveys, such as customer satisfaction (CSAT) surveys. Likert scale questions give you precise measurements of how satisfied respondents are with a specific aspect of your product or service.

8. Matrix Survey Questions

Matrix survey questions allow respondents to evaluate multiple items using the same set of response options. Many companies combine matrix survey questions with Likert scales to make the survey easier to complete.

  • “Please rate the following aspects of our service. (Customer support, Product quality, Delivery speed)”
  • “Evaluate your level of satisfaction with these website features. (Search functionality, Content relevance, User interface)”
  • “Rate the importance of the following factors in your purchasing decision. (Price, Brand, Reviews)”
  • “Assess your agreement with these statements about our company. (Innovative, Ethical, Customer-focused)”
  • “Rate your satisfaction with these aspects of our product. (Ease of use, Durability, Design)”
  • “Evaluate these aspects of our mobile app. (Performance, Security, Features)”
  • “Rate how well each of the following describes our brand. (Trustworthy, Innovative, Responsive)”
  • “Assess your satisfaction with these elements of our service. (Responsiveness, Accuracy, Friendliness)”
  • “Rate the effectiveness of these marketing channels for you. (Email, Social Media, Print Ads)”
  • “Evaluate your agreement with these workplace policies. (Flexibility, Diversity, Wellness initiatives)”

When to use matrix survey questions: Ask matrix survey questions when you want to make your survey more convenient to answer, as they allow multiple questions on various topics without repeating options. This is particularly helpful when you want to cover many points of interest in one survey.

9. Demographic Questions

Lastly, demographic questions collect basic information about respondents, aiding in data segmentation and analysis.

  • “What is your age?”
  • “What is your gender? (Male, Female, Prefer not to say, Other)”
  • “What is your highest level of education completed?”
  • “What is your employment status? (Employed, Self-employed, Unemployed, Student)”
  • “What is your household income range?”
  • “What is your marital status? (Single, Married, Divorced, Widowed)”
  • “How many people live in your household?”
  • “What is your ethnicity?”
  • “In which city and country do you currently reside?”
  • “What is your occupation?”

When to use demographic questions: From the survey question examples, you can easily tell that these questions aim to collect information on your respondents’ backgrounds, which will be helpful in creating buyer personas and improving market segmentation.


How to Conduct Surveys Effectively

Surveys can help you accomplish many things for your business, but only if you do them right. Creating the perfect survey isn’t just about crafting the best survey questions; you also have to:

1. Define Your Objectives

Before crafting your survey, be clear about what you want to achieve. Whether it’s understanding customer satisfaction, gauging interest in a new product, or collecting feedback on services, having specific objectives will guide your survey design and ensure you ask the right questions.

2. Know Your Audience

Understanding who your respondents are will help tailor the survey to their interests and needs, increasing the likelihood of participation. Consider demographics, behaviors, and preferences to make your survey relevant and engaging to your target audience.

3. Choose the Right Type of Survey Questions

Utilize a mix of the nine types of survey questions to gather a wide range of data. Balance open-ended questions for qualitative insights with closed-ended questions for easy-to-analyze quantitative data. Ensure each question aligns with your objectives and is clear and concise.

4. Keep It Short and Simple (KISS)

Respondents are more likely to complete shorter surveys. Aim for a survey that takes 5-10 minutes to complete, focusing on essential questions only. A straightforward and intuitive survey design encourages higher response rates.

5. Use Simple Language

Avoid technical jargon, complex words, or ambiguous terms. The language should be accessible to all respondents, ensuring that questions are understood as intended.

6. Ensure Anonymity and Confidentiality

Assure respondents that their answers are anonymous and their data will be kept confidential. This assurance can increase the honesty and accuracy of the responses you receive.

7. Test Your Survey

Pilot your survey with a small group before full deployment. This testing phase can help identify confusing questions, technical issues, or any other aspects of the survey that might hinder response quality or quantity.

8. Choose the Right Distribution Channels

Select the most effective channels to reach your target audience. This could be via email, social media, your website, or in-app notifications, depending on where your audience is most active and engaged.

9. Offer Incentives

Consider offering incentives to increase participation rates. Incentives can range from discounts, entry into a prize draw, or access to exclusive content. Ensure the incentive is relevant and appealing to your target audience.

10. Analyze and Act on the Data

After collecting the responses, analyze the data to extract meaningful insights. Use these insights to make informed decisions, implement changes, or develop strategies that align with your objectives. Sharing key findings and subsequent actions with respondents can also demonstrate the value of their feedback and encourage future participation.

11. Follow Up

Consider following up with respondents after the survey, especially if you promised to share results or if you’re conducting longitudinal studies. A follow-up can reinforce their importance to your research and maintain engagement over time.

12. Iterate and Improve

Surveys are not a one-time activity. Regularly conducting surveys and iterating based on previous feedback and results can help you stay aligned with your audience’s changing needs and preferences.


Make Surveys Easier With FullSession

These survey question examples are a great place to start in creating efficient and effective surveys. Why not take it a step further by integrating a customer feedback tool on your website?

FullSession  lets you collect instant visual feedback with an intuitive in-app survey. With this tool, you can:

  • Build unique surveys
  • Target feedback based on users’ devices or specific pages
  • Measure survey responses

Aside from FullSession’s customer feedback tool, you also gain access to:

  • Interactive heat maps: A  website heat map  shows you which items are gaining the most attention and which ones are not, helping you optimize UI and UX.
  • Session recordings: Watch  replays  or live sessions to see how users are navigating your website and pinpoint areas for improvement.
  • Funnels and conversions: Analyze funnel data to figure out what’s causing  funnel drops  and what contributes to successful conversions.


FullSession Pricing Plans

The FullSession platform offers a 14-day free trial and three paid plans: Basic, Business, and Enterprise. Here are more details on each plan.

  • The Basic plan costs $39/month and allows you to monitor up to 5,000 monthly sessions.
  • The Business plan costs $149/month and helps you to track and analyze up to 25,000 monthly sessions.
  • The Enterprise plan starts from 100,000 monthly sessions and has custom pricing.

If you need more information, you can  get a demo.

Install Your First Website Survey Today

It takes less than 5 minutes to set up your first website or app survey form with FullSession, and it’s completely free!

FAQs About Survey Questions

How many questions should I include in my survey?

Aim for 10-15 questions to keep surveys short and engaging, ideally taking 5-10 minutes to complete. Focus on questions that directly support your objectives.

How can I ensure my survey questions are not biased?

Use neutral language, avoid assumptions, balance answer choices, and pre-test your survey with a diverse group to identify and correct biases.

How do I increase my survey response rate?

To boost response rates, ensure your survey is concise and relevant to the audience. Use engaging questions, offer incentives where appropriate, and communicate the value of respondents’ feedback. Choose the right distribution channels to reach your target audience effectively.


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or bring to light issues that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
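To make the mechanics concrete, here is a minimal Python sketch of both patterns described above: shuffling nominal response options per respondent, and reversing (rather than shuffling) an ordinal scale for half the sample. The structure and names are illustrative rather than any survey platform’s actual implementation; the ordinal scale reuses the abortion wording quoted above, while the nominal options are hypothetical stand-ins.

```python
import random

# Hypothetical nominal options (stand-ins, not the actual issue list from the 2008 example)
NOMINAL_OPTIONS = ["Issue A", "Issue B", "Issue C", "Issue D", "Issue E"]

# Ordinal scale quoted in the abortion example above
ORDINAL_SCALE = ["Legal in all cases", "Legal in most cases",
                 "Illegal in most cases", "Illegal in all cases"]

def options_for_respondent(rng: random.Random) -> tuple[list[str], list[str]]:
    """Build the answer choices a single respondent would see."""
    shuffled = NOMINAL_OPTIONS[:]          # copy so the master list stays intact
    rng.shuffle(shuffled)                  # randomize the order of nominal options
    reverse = rng.random() < 0.5           # reverse the ordinal scale for ~half the sample
    scale = list(reversed(ORDINAL_SCALE)) if reverse else ORDINAL_SCALE[:]
    return shuffled, scale

rng = random.Random(42)                    # fixed seed only so the demo is reproducible
for _ in range(3):
    nominal, ordinal = options_for_respondent(rng)
    print(nominal)
    print(ordinal)
```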

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
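As a minimal sketch of that split-form setup, the snippet below randomly assigns respondents to one of two wordings (the two versions are adapted from the 2005 example above); everything else about it is illustrative rather than how any polling operation actually implements form assignment.

```python
import random

FORM_A = ("Do you favor making it legal for doctors to give terminally ill "
          "patients the means to end their lives?")
FORM_B = ("Do you favor making it legal for doctors to assist terminally ill "
          "patients in committing suicide?")

def assign_form(rng: random.Random) -> str:
    """Randomly assign a respondent to form A or form B with equal probability."""
    return "A" if rng.random() < 0.5 else "B"

rng = random.Random(7)
assignments = [assign_form(rng) for _ in range(1000)]
print(assignments.count("A"), "respondents would see form A;",
      assignments.count("B"), "would see form B")
```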


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


How Many Survey Questions Should I Use?


If you’re wondering how many survey questions you need to include in your questionnaire, the short answer is: as few as possible.

Keeping your survey question count low is crucial, because survey fatigue is a real danger for survey makers hoping to collect the best, most accurate data.

A few well-worded, well-designed survey questions are usually no problem for respondents to complete. But once a survey starts to get bogged down with page after page of radio buttons, essay boxes, and convoluted question phrasing, respondents either lose interest or become too frustrated to complete the rest of the survey.

Deciding the exact number of survey questions you need to reach your goals is, of course, more complicated. It depends largely on your purpose and audience. But, that’s not all.

The quick and dirty guidelines for determining how many questions to use in a survey are:

  • Get to the point: Ask only as many survey questions as you need to achieve your goal.
  • Stay on track: Every question you ask must be directly related to your survey’s purpose.
  • Respect their time: Your respondents are busy people! Faster is better for response rates.

In this post, I cover each of these considerations and give you tips for determining the optimal length for your next survey project.

How Your Goals Should Influence the Number of Survey Questions You Use

The first step for you to take, long before you start writing survey questions, is to determine your survey’s purpose and goals.

Ask yourself:

  • Why am I making this survey?
  • What kind of data am I looking for?

The answers to these questions will help you determine the kind of survey you are running, the survey question types you will use, and how many survey questions you need to ask to get you to where you want to be.

What follows is an example of a survey maker going through the purpose-setting process.

What is The Purpose of This Survey?

A small business owner wants to expand his current web design business to include new services. He has a few ideas about what he could offer, like mobile app development, copywriting, or digital marketing consulting, but before he invests in new personnel, he wants to make sure his customers are interested.

So, he decides to make a survey.

The purpose of his survey is to determine which services existing customers would be most interested in seeing from his team.

The goal is to identify which service his business should develop next and, importantly, where he will be investing his time and money. He wants to make sure the survey data points him in the right direction!

What Kind of Data Am I Looking For in Response to My Survey Questions?

Now that he’s decided his survey’s purpose, he can dive right into picking survey question types, right?

Not exactly.

While it may seem like common sense, it’s important to take an extra moment to think about what kind of data you need to be able to act on your survey data after you have it.

In our imaginary business owner’s case, he is looking for concrete feedback from his existing customers.

In this case, he could put together a very simple survey centered around a check box question type asking customers to select any additional services they would be interested in. (Of course, he remembers to include an “Other – Write In” option so that customers can submit their own ideas.)

This is the simplest version of the survey that the business owner could make.

But, it’s likely that he would want more information, like how likely they would be to use a particular service if he provided it, and what kinds of projects, if any, they already have in mind.

This kind of information may give him greater insight into what his customers want versus what services they really need.

To collect this kind of data, it would be best to use more advanced question types like text boxes, Likert Scales, and even Drag & Drop Rankings to capture potential projects, gauge the likelihood of customers using the new services, and rank which new services they would like to see unveiled first, second, and third.

Do you see how thinking about what kind of data you need really determines how many questions (and what kind of questions) you need to ask in your survey?

The basic version of our business owner’s new service survey could have been made in a question or two.

But when you’re looking for more robust data, you need more questions, and more advanced ones. That said, always keep in mind that your respondents’ time is valuable. If your purpose is broad, then consider breaking the survey down into multiple micro-topics.

Fighting Survey Fatigue with Micro-Surveys

Micro-surveys are bite-sized surveys that require very little time and may only involve a question or two. Because they are the very definition of short and sweet, they may be exactly what busy respondents need to give you their honest feedback without bogging down their day.

In the example of our business owner, he could choose to break up his new service research into smaller steps. The first would be to survey his customers to see which new services they would be interested in.

Let’s say that, of the customers that responded, most are looking for an app development service.

The business owner could then follow up with those exact customers for more information on project ideas and timelines.

This way, he will be able to gauge their interest, determine timeline, and discover exactly which skills he will need to look for in a new hire.

The Ideal Number of Survey Questions for Most Surveys

To be clear, there is no magic number for every survey and every audience.

But, in general and for most survey types, it’s best to keep the survey completion time under 10 minutes. Five-minute surveys will see even higher completion rates, especially with customer satisfaction and feedback surveys.

This means you should aim for about 10 survey questions (or fewer if you are using multiple text and essay box question types).
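If you want a quick sanity check while drafting, a back-of-the-envelope estimate like the sketch below can help. The per-question timings are rough assumptions, not figures from Alchemer or anyone else.

```python
# Rough seconds-per-question assumptions (illustrative, not official Alchemer figures)
SECONDS_PER_QUESTION = {
    "yes_no": 10,
    "multiple_choice": 15,
    "rating_scale": 15,
    "checkbox": 20,
    "open_ended": 60,
}

def estimated_minutes(question_counts: dict[str, int]) -> float:
    """Estimate completion time from a count of questions by type."""
    total_seconds = sum(SECONDS_PER_QUESTION[qtype] * count
                        for qtype, count in question_counts.items())
    return total_seconds / 60

draft = {"yes_no": 2, "multiple_choice": 4, "rating_scale": 2, "open_ended": 2}
print(f"~{estimated_minutes(draft):.1f} minutes")  # ~3.8 minutes, comfortably under 10
```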

When you start moving into long surveys with lots of questions and completion times over 10 minutes, you may want to consider offering respondents an incentive to compensate them for their time. Online gift cards are popular, but you can also use custom prizes or coupons.

In the Alchemer application, we do our best to help you out with keeping surveys on track.

Under the “Test” tab, you’ll find the Survey Diagnostic panel, which estimates how long your survey will take, how fatiguing it is, and how accessible it is to sight- and hearing-impaired audiences.


While you should definitely have a real, live person run through the survey to catch any errors, this diagnostic panel is a great way to make sure you’re balancing your data needs with your respondents’ busy lives. It will also alert you to any potential problems within the survey itself.

Happy surveying!


Survey questions 101: 70+ survey question examples, types of surveys, and FAQs

How well do you understand your prospects and customers—who they are, what keeps them awake at night, and what brought them to your business in search of a solution? Asking the right survey questions at the right point in their customer journey is the most effective way to put yourself in your customers’ shoes.


This comprehensive intro to survey questions contains over 70 examples of effective questions, an overview of different types of survey questions, and advice on how to word them for maximum effect. Plus, we’ll toss in our pre-built survey templates, expert survey insights, and tips to make the most of AI for Surveys in Hotjar. ✨

Surveying your users is the simplest way to understand their pain points, needs, and motivations. But first, you need to know how to set up surveys that give you the answers you—and your business—truly need. Impactful surveys start here:

❓ The main types of survey questions : most survey questions are classified as open-ended, closed-ended, nominal, Likert scale, rating scale, and yes/no. The best surveys often use a combination of questions.

💡 70+ good survey question examples : our top 70+ survey questions, categorized across ecommerce, SaaS, and publishing, will help you find answers to your business’s most burning questions

✅ What makes a survey question ‘good’ : a good survey question is anything that helps you get clear insights and business-critical information about your customers 

❌ The dos and don’ts of writing good survey questions : remember to be concise and polite, use the foot-in-door principle, alternate questions, and test your surveys. But don’t ask leading or loaded questions, overwhelm respondents with too many questions, or neglect other tools that can get you the answers you need.

👍 How to run your surveys the right way : use a versatile survey tool like Hotjar Surveys that allows you to create on-site surveys at specific points in the customer journey or send surveys via a link

🛠️ 10 use cases for good survey questions : use your survey insights to create user personas, understand pain points, measure product-market fit, get valuable testimonials, measure customer satisfaction, and more

Use Hotjar to build your survey and get the customer insight you need to grow your business.

6 main types of survey questions

Let’s dive into our list of survey question examples, starting with a breakdown of the six main categories your questions will fall into:

Open-ended questions

Closed-ended questions

Nominal questions

Likert scale questions

Rating scale questions

'Yes' or 'no' questions

1. Open-ended survey questions

Open-ended questions  give your respondents the freedom to  answer in their own words , instead of limiting their response to a set of pre-selected choices (such as multiple-choice answers, yes/no answers, 0–10 ratings, etc.). 

Examples of open-ended questions:

What other products would you like to see us offer?

If you could change just one thing about our product, what would it be?

When to use open-ended questions in a survey

The majority of example questions included in this post are open-ended, and there are some good reasons for that:

Open-ended questions help you learn about customer needs you didn’t know existed , and they shine a light on areas for improvement that you may not have considered before. If you limit your respondents’ answers, you risk cutting yourself off from key insights.

Open-ended questions are very useful when you first begin surveying your customers and collecting their feedback. If you don't yet have a good amount of insight, answers to open-ended questions will go a long way toward educating you about who your customers are and what they're looking for.

There are, however, a few downsides to open-ended questions:

First, people tend to be less likely to respond to open-ended questions in general because they take comparatively more effort to answer than, say, a yes/no one

Second, but connected: if you ask consecutive open-ended questions during your survey, people will get tired of answering them, and their answers might become less helpful the more you ask

Finally, the data you receive from open-ended questions will take longer to analyze compared to easy 1-5 or yes/no answers—but don’t let that stop you. There are plenty of shortcuts that make it easier than it looks (we explain it all in our post about how to analyze open-ended questions , which includes a free analysis template.)

💡 Pro tip: if you’re using Hotjar Surveys, let our AI for Surveys feature analyze your open-ended survey responses for you. Hotjar AI reviews all your survey responses and provides an automated summary report of key findings, including supporting quotes and actionable recommendations for next steps.

2. Closed-ended survey questions

Closed-ended questions limit a user’s response options to a set of pre-selected choices. This broad category of questions includes nominal questions, Likert scale questions, rating scale questions, and ‘yes’ or ‘no’ questions.

When to use closed-ended questions

Closed-ended questions work brilliantly in two scenarios:

To open a survey, because they require little time and effort and are therefore easy for people to answer. This is called the foot-in-the-door principle: once someone commits to answering the first question, they may be more likely to answer the open-ended questions that follow.

When you need to create graphs and trends based on people’s answers. Responses to closed-ended questions are easy to measure and use as benchmarks. Rating scale questions, in particular (e.g. where people rate customer service on a scale of 1-10), allow you to gather customer sentiment and compare your progress over time.

3. Nominal questions

A nominal question is a type of survey question that presents people with multiple answer choices; the answers are  non-numerical in nature and don't overlap  (unless you include an ‘all of the above’ option).

Example of nominal question:

What are you using [product name] for?

Personal use

Business use

Both business and personal use

When to use nominal questions

Nominal questions work well when there is a limited number of categories for a given question (see the example above). They’re easy to create graphs and trends from, but the downside is that you may not offer enough categories for people to choose from.

For example, if you ask people what type of browser they’re using and only give them three options to choose from, you may inadvertently alienate everybody who uses a fourth type and now can’t tell you about it.

That said, you can add an open-ended component to a nominal question with an expandable ’other’ category, where respondents can write in an answer that isn’t on the list. This way, you essentially ask an open-ended question that doesn’t limit them to the options you’ve picked.

4. Likert scale questions

The Likert scale is typically a 5- or 7-point scale that evaluates a respondent’s level of agreement with a statement or the intensity of their reaction toward something.

The scale develops symmetrically: the median number (e.g. a 3 on a 5-point scale) indicates a point of neutrality, the lowest number (always 1) indicates an extreme view, and the highest number (e.g. a 5 on a 5-point scale) indicates the opposite extreme view.
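When it comes time to analyze Likert responses, you’ll typically map each label to its numeric code and summarize from there. A minimal sketch—labels and responses below are hypothetical:

```python
# Map a symmetric 5-point Likert scale to numeric codes and summarize responses.

LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,  # neutral midpoint
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [LIKERT_5[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(f"Mean agreement: {mean_score:.2f} on a 1-5 scale")  # 4.00
```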

Example of a Likert scale question:

[Image: The British Museum uses a Likert scale Hotjar survey to gauge visitors’ reactions to their website optimizations]

When to use Likert scale questions

Likert-type questions are also known as ordinal questions because the answers are presented in a specific order. Like other multiple-choice questions, Likert scale questions come in handy when you already have some sense of what your customers are thinking. For example, if your open-ended questions uncover a complaint about a recent change to your ordering process, you could use a Likert scale question to determine how the average user felt about the change.

A series of Likert scale questions can also be turned into a matrix question: since they share identical response options, they’re easy to combine into a single grid, which breaks up the repetitive pattern of answering one standalone question after another.

5. Rating scale questions

Rating scale questions are questions where the answers map onto a numeric scale (such as rating customer support on a scale of 1-5, or likelihood to recommend a product from 0-10).

Examples of rating questions:

How likely are you to recommend us to a friend or colleague on a scale of 0-10?

How would you rate our customer service on a scale of 1-5?

When to use rating questions

Whenever you want to assign a numerical value to your survey or visualize and compare trends , a rating question is the way to go.

A typical rating question is used to determine Net Promoter Score® (NPS®) : the question asks customers to rate their likelihood of recommending products or services to their friends or colleagues, and allows you to look at the results historically and see if you're improving or getting worse. Rating questions are also used for customer satisfaction (CSAT) surveys and product reviews.

When you use a rating question in a survey, be sure to explain what the scale means (e.g. 1 for ‘Poor’, 5 for ‘Amazing’). And consider adding a follow-up open-ended question to understand why the user left that score.

Example of a rating question (NPS):

[Image: Hotjar's Net Promoter Score® (NPS®) survey template lets you add open-ended follow-up questions so you can understand the reasons behind users' ratings]

6. ‘Yes’ or ‘no’ questions

These dichotomous questions are super straightforward, requiring a simple ‘yes’ or ‘no’ reply.

Examples of yes/no questions:

Was this article useful? (Yes/No)

Did you find what you were looking for today? (Yes/No)

When to use ‘yes’ or ‘no’ questions

‘Yes’ and ‘no’ questions are a good way to quickly segment your respondents . For example, say you’re trying to understand what obstacles or objections prevent people from trying your product. You can place a survey on your pricing page asking people if something is stopping them, and follow up with the segment who replied ‘yes’ by asking them to elaborate further.

These questions are also effective for getting your foot in the door: a ‘yes’ or ‘no’ question requires very little effort to answer. Once a user commits to answering the first question, they tend to become more willing to answer the questions that follow, or even leave you their contact information.

[Image: Web design agency NerdCow used Hotjar Surveys to add a yes/no survey on The Transport Library’s website, and followed it up with an open-ended question for more insights]

70+ more survey question examples

Below is a list of good survey questions, categorized across ecommerce, software as a service (SaaS), and publishing. You don't have to use them word-for-word, but hopefully, this list will spark some extra-good ideas for the surveys you’ll run immediately after reading this article. (Plus, you can create all of them with Hotjar Surveys—stick with us a little longer to find out how. 😉)

📊 9 basic demographic survey questions

Ask these questions when you want context about your respondents and target audience, so you can segment them later. Consider including demographic information questions in your survey when conducting user or market research as well. 

But don’t ask demographic questions just for the sake of it—if you're not going to use some of the data points from these sometimes sensitive questions (e.g. if gender is irrelevant to the result of your survey), move on to the ones that are truly useful for you, business-wise. 

Take a look at the selection of examples below, and keep in mind that you can convert most of them to multiple choice questions:

What is your name?

What is your age?

What is your gender?

What company do you work for?

What vertical/industry best describes your company?

What best describes your role?

In which department do you work?

What is the total number of employees in your company (including all locations where your employer operates)?

What is your company's annual revenue?

🚀 Get started: gather more info about your users with our product-market fit survey template .

👥 20+ effective customer questions

These questions are particularly recommended for ecommerce companies:

Before purchase

What information is missing or would make your decision to buy easier?

What is your biggest fear or concern about purchasing this item?

Were you able to complete the purpose of your visit today?

If you did not make a purchase today, what stopped you?

After purchase

Was there anything about this checkout process we could improve?

What was your biggest fear or concern about purchasing from us?

What persuaded you to complete the purchase of the item(s) in your cart today?

If you could no longer use [product name], what’s the one thing you would miss the most?

What’s the one thing that nearly stopped you from buying from us?

👉 Check out our 7-step guide to setting up an ecommerce post-purchase survey .

Other useful customer questions

Do you have any questions before you complete your purchase?

What other information would you like to see on this page?

What were the three main things that persuaded you to create an account today?

What nearly stopped you from creating an account today?

Which other options did you consider before choosing [product name]?

What would persuade you to use us more often?

What was your biggest challenge, frustration, or problem in finding the right [product type] online?

Please list the top three things that persuaded you to use us rather than a competitor.

Were you able to find the information you were looking for?

How satisfied are you with our support?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

How likely are you to recommend us to a friend or colleague? ( NPS question )

Is there anything preventing you from purchasing at this point?

🚀 Get started: learn how satisfied customers are with our expert-built customer satisfaction and NPS survey templates .

Set up a survey in seconds

Use Hotjar's free survey templates to build virtually any type of survey, and start gathering valuable insights in moments.

🛍 30+ product survey questions

These questions are particularly recommended for SaaS companies:

Questions for new or trial users

What nearly stopped you from signing up today?

How likely are you to recommend us to a friend or colleague on a scale of 0-10? (NPS question)

Is our pricing clear? If not, what would you change?

Questions for paying customers

What convinced you to pay for this service?

What’s the one thing we are missing in [product type]?

What's one feature we can add that would make our product indispensable for you?

If you could no longer use [name of product], what’s the one thing you would miss the most?

🚀 Get started: find out what your buyers really think with our pricing plan feedback survey template .

Questions for former/churned customers

What is the main reason you're canceling your account? Please be blunt and direct.

If you could have changed one thing in [product name], what would it have been?

If you had a magic wand and could change anything in [product name], what would it be?

🚀 Get started: find out why customers churn with our free-to-use churn analysis survey template .

Other useful product questions

What were the three main things that persuaded you to sign up today?

Do you have any questions before starting a free trial?

What persuaded you to start a trial?

Was this help section useful?

Was this article useful?

How would you rate our service/support on a scale of 0-10? (0 = terrible, 10 = stellar)

Is there anything preventing you from upgrading at this point?

Is there anything on this page that doesn't work the way you expected it to?

What could we change to make you want to continue using us?

If you did not upgrade today, what stopped you?

What's the next thing you think we should build?

How would you feel if we discontinued this feature?

What's the next feature or functionality we should build?

🚀 Get started: gather feedback on your product with our free-to-use product feedback survey template .

🖋 20+ effective questions for publishers and bloggers

Questions to help improve content

If you could change just one thing in [publication name], what would it be?

What other content would you like to see us offer?

How would you rate this article on a scale of 1–10?

If you could change anything on this page, what would you have us do?

If you did not subscribe to [publication name] today, what was it that stopped you?

🚀 Get started: find ways to improve your website copy and messaging with our content feedback survey template .

New subscriptions

What convinced you to subscribe to [publication] today?

What almost stopped you from subscribing?

What were the three main things that persuaded you to join our list today?

Cancellations

What is the main reason you're unsubscribing? Please be specific.

Other useful content-related questions

What’s the one thing we are missing in [publication name]?

What would persuade you to visit us more often?

How likely are you to recommend us to someone with similar interests? (NPS question)

What’s missing on this page?

What topics would you like to see us write about next?

How useful was this article?

What could we do to make this page more useful?

Is there anything on this site that doesn't work the way you expected it to?

What's one thing we can add that would make [publication name] indispensable for you?

If you could no longer read [publication name], what’s the one thing you would miss the most?

💡 Pro tip: do you have a general survey goal in mind, but are struggling to pin down the right questions to ask? Give Hotjar’s AI for Surveys a go and watch as it generates a survey for you in seconds with questions tailored to the exact purpose of the survey you want to run.

What makes a good survey question?

We’ve run through more than 70 of our favorite survey questions—but what is it that makes a good survey question, well, good? An effective question is anything that helps you get clear insights and business-critical information about your customers, including

Who your target market is

How you should price your products

What’s stopping people from buying from you

Why visitors leave your website

With this information, you can tailor your website, products, landing pages, and messaging to improve the user experience and, ultimately, maximize conversions .

How to write good survey questions: the DOs and DON’Ts

To help you understand the basics and avoid some rookie mistakes, we asked a few experts to give us their thoughts on what makes a good and effective survey question.

Survey question DOs

✅ DO focus your questions on the customer

It may be tempting to focus on your company or products, but it’s usually more effective to put the focus back on the customer. Get to know their needs, drivers, pain points, and barriers to purchase by asking about their experience. That’s what you’re after: you want to know what it’s like inside their heads and how they feel when they use your website and products.

Rather than asking, “Why did you buy our product?” ask, “What was happening in your life that led you to search for this solution?” Instead of asking, “What's the one feature you love about [product],” ask, “If our company were to close tomorrow, what would be the one thing you’d miss the most?” These types of surveys have helped me double and triple my clients.

✅ DO be polite and concise (without skimping on micro-copy)

Put time into your micro-copy—those tiny bits of written content that go into surveys. Explain why you’re asking the questions, and when people reach the end of the survey, remember to thank them for their time. After all, they’re giving you free labor!

✅ DO consider the foot-in-the-door principle

One way to increase your response rate is to ask an easy question upfront, such as a ‘yes’ or ‘no’ question, because once people commit to taking a survey—even just the first question—they’re more likely to finish it.

✅ DO consider asking your questions from the first-person perspective

Disclaimer: we don’t do this here at Hotjar. You’ll notice all our sample questions are listed in second-person (i.e. ‘you’ format), but it’s worth testing to determine which approach gives you better answers. Some experts prefer the first-person approach (i.e. ‘I’ format) because they believe it encourages users to talk about themselves—but only you can decide which approach works best for your business.

I strongly recommend that the questions be worded in the first person. This helps create a more visceral reaction from people and encourages them to tell stories from their actual experiences, rather than making up hypothetical scenarios. For example, here’s a similar question, asked two ways: “What do you think is the hardest thing about creating a UX portfolio?” versus “My biggest problem with creating my UX portfolio is…” 

The second version helps get people thinking about their experiences. The best survey responses come from respondents who provide personal accounts of past events that give us specific and real insight into their lives.

✅ DO alternate your questions often

Shake up the questions you ask on a regular basis. Asking a wide variety of questions will help you and your team get a complete view of what your customers are thinking.

✅ DO test your surveys before sending them out

A few years ago, Hotjar created a survey we sent to 2,000 CX professionals via email. Before officially sending it out, we wanted to make sure the questions really worked. 

We decided to test them out on internal staff and external people by sending out three rounds of test surveys to 100 respondents each time. Their feedback helped us perfect the questions and clear up any confusing language.

Survey question DON’Ts

❌ DON’T ask closed-ended questions if you’ve never done research before

If you’ve just begun asking questions, make them open-ended questions since you have no idea what your customers think about you at this stage. When you limit their answers, you just reinforce your own assumptions.

There are two exceptions to this rule:

Using a closed-ended question to get your foot in the door at the beginning of a survey

Using rating scale questions to gather customer sentiment (like an NPS survey)

❌ DON’T ask a lot of questions if you’re just getting started

Having to answer too many questions can overwhelm your users. Stick with the most important points and discard the rest.

Try starting off with a single question to see how your audience responds, then move on to two questions once you feel like you know what you’re doing.

How many questions should you ask? There’s really no perfect answer, but we recommend asking as few as you need to ask to get the information you want. In the beginning, focus on the big things:

Who are your users?

What do potential customers want?

How are they using your product?

What would win their loyalty?

❌ DON’T just ask a question when you can combine it with other tools

Don’t just use surveys to answer questions that other tools (such as analytics) can also answer. If you want to learn about whether people find a new website feature helpful, you can also observe how they’re using it through traditional analytics, session recordings , and other user testing tools for a more complete picture.

Don’t use surveys to ask people questions that other tools are better equipped to answer. I’m thinking of questions like “What do you think of the search feature?” with pre-set answer options like ‘Very easy to use,’ ‘Easy to use,’ etc. That’s not a good question to ask. 

Why should you care about what people ‘think’ about the search feature? You should find out whether it helps people find what they need and whether it helps drive conversions for you. Analytics, user session recordings, and user testing can tell you whether it does that or not.

❌ DON’T ask leading questions

A leading question is one that prompts a specific answer. Avoid asking leading questions because they’ll give you bad data. For example, asking, “What makes our product better than our competitors’ products?” might boost your self-esteem, but it won’t get you good information. Why? You’re effectively planting the idea that your own product is the best on the market.

❌ DON’T ask loaded questions

A loaded question is similar to a leading question, but it does more than just push a bias—it phrases the question such that it’s impossible to answer without confirming an underlying assumption.

A common (and subtle) form of loaded survey question would be, “What do you find useful about this article?” If we haven’t first asked you whether you found the article useful at all, then we’re asking a loaded question.

❌ DON’T ask about more than one topic at once

For example, “Do you believe our product can help you increase sales and improve cross-collaboration?”

This complex question, also known as a ‘double-barreled question’, requires a very complex answer as it begs the respondent to address two separate questions at once:

Do you believe our product can help you increase sales?

Do you believe our product can help you improve cross-collaboration?

Respondents may very well answer 'yes', but actually mean it for the first part of the question, and not the other. The result? Your survey data is inaccurate, and you’ve missed out on actionable insights.

Instead, ask two specific questions to gather customer feedback on each concept.

How to run your surveys

The format you pick for your survey depends on what you want to achieve and also on how much budget or resources you have. You can

Use an on-site survey tool , like Hotjar Surveys , to set up a website survey that pops up whenever people visit a specific page: this is useful when you want to investigate website- and product-specific topics quickly. This format is relatively inexpensive—with Hotjar’s free forever plan, you can even run up to 3 surveys with unlimited questions for free.


Use Hotjar Surveys to embed a survey as an element directly on a page: this is useful when you want to grab your audience’s attention and connect with customers at relevant moments, without interrupting their browsing. (Scroll to the bottom of this page to see an embedded survey in action!) This format is included on Hotjar’s Business and Scale plans—try it out for 15 days with a free Ask Business trial .

Use a survey builder and create a survey people can access in their own time: this is useful when you want to reach out to your mailing list or a wider audience with an email survey (you just need to share the URL the survey lives at). Sending in-depth questionnaires this way allows for more space for people to elaborate on their answers. This format is also relatively inexpensive, depending on the tool you use.

Place survey kiosks in a physical location where people can give their feedback by pressing a button: this is useful for quick feedback on specific aspects of a customer's experience (there’s usually plenty of these in airports and waiting rooms). This format is relatively expensive to maintain due to the material upkeep.

Run in-person surveys with your existing or prospective customers: in-person questionnaires help you dig deep into your interviewees’ answers. This format is relatively cheap if you do it online with a user interview tool or over the phone, but it’s more expensive and time-consuming if done in a physical location.

💡 Pro tip: looking for an easy, cost-efficient way to connect with your users? Run effortless, automated user interviews with Engage , Hotjar’s user interview tool. Get instant access to a pool of 200,000+ participants (or invite your own), and take notes while Engage records and transcribes your interview.

10 survey use cases: what you can do with good survey questions

Effective survey questions can help improve your business in many different ways. We’ve written in detail about most of these ideas in other blog posts, so we’ve rounded them up for you below.

1. Create user personas

A user persona is a character based on the people who currently use your website or product. A persona combines psychographics and demographics and reflects who they are, what they need, and what may stop them from getting it.

Examples of questions to ask:

Describe yourself in one sentence, e.g. “I am a 30-year-old marketer based in Dublin who enjoys writing articles about user personas.”

What is your main goal for using this website/product?

What, if anything, is preventing you from doing it?

👉 Our post about creating simple and effective user personas in four steps highlights some great survey questions to ask when creating a user persona.

🚀 Get started: use our user persona survey template or AI for Surveys to inform your user persona.

2. Understand why your product is not selling

Few things are more frightening than stagnant sales. When the pressure is mounting, you’ve got to get to the bottom of it, and good survey questions can help you do just that.

What made you buy the product? What challenges are you trying to solve?

What did you like most about the product? What did you dislike the most?

What nearly stopped you from buying?

👉 Here’s a detailed piece about the best survey questions to ask your customers when your product isn’t selling , and why they work so well.

🚀 Get started: our product feedback survey template helps you find out whether your product satisfies your users. Or build your surveys in the blink of an eye with Hotjar AI.

3. Understand why people leave your website

If you want to figure out why people are leaving your website , you’ll have to ask questions.

A good format for that is an exit-intent pop-up survey, which appears when a user clicks to leave the page, giving them the chance to leave website feedback before they go.

Another way is to focus on the people who did convert, but just barely—something Hotjar founder David Darmanin considers essential for taking conversions to the next level. By focusing on customers who bought your product (but almost didn’t), you can learn how to win over another set of users who are similar to them: those who almost bought your products, but backed out in the end.

Example of questions to ask:

Not for you? Tell us why. ( Exit-intent pop-up —ask this when a user leaves without buying.)

What almost stopped you from buying? (Ask this post-conversion .)

👉 Find out how HubSpot Academy increased its conversion rate by adding an exit-intent survey that asked one simple question when users left their website: “Not for you? Tell us why.”

🚀 Get started: place an exit-intent survey on your site. Let Hotjar AI draft the survey questions by telling it what you want to learn.

I spent the better half of my career focusing on the 95% who don’t convert, but it’s better to focus on the 5% who do. Get to know them really well, deliver value to them, and really wow them. That’s how you’re going to take that 5% to 10%.

4. Understand your customers’ fears and concerns

Buying a new product can be scary: nobody wants to make a bad purchase. Your job is to address your prospective customers’ concerns, counter their objections, and calm their fears, which should lead to more conversions.

👉 Take a look at our no-nonsense guide to increasing conversions for a comprehensive write-up about discovering the drivers, barriers, and hooks that lead people to converting on your website.

🚀 Get started: understand why your users are tempted to leave and discover potential barriers with a customer retention survey .

5. Drive your pricing strategy

Are your products overpriced and scaring away potential buyers? Or are you underpricing and leaving money on the table?

Asking the right questions will help you develop a pricing structure that maximizes profit, but you have to be delicate about how you ask. Don’t ask directly about price, or you’ll seem unsure of the value you offer. Instead, ask questions that uncover how your products serve your customers and what would inspire them to buy more.

How do you use our product/service?

What would persuade you to use our product more often?

What’s the one thing our product is missing?

👉 We wrote a series of blog posts about managing the early stage of a SaaS startup, which included a post about developing the right pricing strategy —something businesses in all sectors could benefit from.

🚀 Get started: find the sweet spot in how to price your product or service with a Van Westendorp price sensitivity survey or get feedback on your pricing plan .

6. Measure and understand product-market fit

Product-market fit (PMF) is about understanding demand and creating a product that your customers want, need, and will actually pay money for. A combination of online survey questions and one-on-one interviews can help you figure this out.

What's one thing we can add that would make [product name] indispensable for you?

If you could change just one thing in [product name], what would it be?

👉 In our series of blog posts about managing the early stage of a SaaS startup, we covered a section on product-market fit , which has relevant information for all industries.

🚀 Get started: discover if you’re delivering the best products to your market with our product-market fit survey .

7. Choose effective testimonials

Human beings are social creatures—we’re influenced by people who are similar to us. Testimonials that explain how your product solved a problem for someone are the ultimate form of social proof. The following survey questions can help you get some great testimonials.

What changed for you after you got our product?

How does our product help you get your job done?

How would you feel if you couldn’t use our product anymore?

👉 In our post about positioning and branding your products , we cover the type of questions that help you get effective testimonials.

🚀 Get started: add a question asking respondents whether you can use their answers as testimonials in your surveys, or conduct user interviews to gather quotes from your users.

8. Measure customer satisfaction

It’s important to continually track your overall customer satisfaction so you can address any issues before they start to impact your brand’s reputation. You can do this with rating scale questions.

For example, at Hotjar, we ask for feedback after each customer support interaction (which is one important measure of customer satisfaction). We begin with a simple, foot-in-the-door question to encourage a response, and use the information to improve our customer support, which is strongly tied to overall customer satisfaction.

How would you rate the support you received? (1-5 scale)

If 1-3: How could we improve?

If 4-5: What did you love about the experience?
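If you want to turn those 1-5 ratings into a single trackable number, one common convention is to report the share of respondents who chose the top two ratings (4 or 5). A minimal sketch, with made-up ratings:

```python
# One common CSAT calculation: percentage of respondents rating 4 or 5 on a 1-5 scale.

ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]

satisfied = sum(1 for r in ratings if r >= 4)
csat = 100 * satisfied / len(ratings)
print(f"CSAT: {csat:.0f}% ({satisfied}/{len(ratings)} rated 4 or 5)")  # CSAT: 70%
```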

👉 Our beginner’s guide to website feedback goes into great detail about how to measure customer service, NPS , and other important success metrics.

🚀 Get started: gauge short-term satisfaction level with a CSAT survey .

9. Measure word-of-mouth recommendations

Net Promoter Score is a measure of how likely your customers are to recommend your products or services to their friends or colleagues. NPS is a higher bar than customer satisfaction because customers have to be really impressed with your product to recommend you.

Example of NPS questions (to be asked in the same survey):

How likely are you to recommend this company to a friend or colleague? (0-10 scale)

What’s the main reason for your score?

What should we do to WOW you?

👉 We created an NPS guide with ecommerce companies in mind, but it has plenty of information that will help companies in other industries as well.

🚀 Get started: measure whether your users would refer you to a friend or colleague with an NPS survey . Then, use our free NPS calculator to crunch the numbers.
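For reference, the arithmetic behind an NPS score is simple enough to sketch yourself: promoters (scores of 9-10) minus detractors (scores of 0-6), expressed as a percentage of all respondents. The scores below are hypothetical:

```python
# Standard NPS calculation from 0-10 scores.

scores = [10, 9, 8, 7, 6, 10, 9, 3, 10, 8]

promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")  # promoters=5, detractors=2 -> NPS: 30
```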

10. Redefine your messaging

How effective is your messaging? Does it speak to your clients' needs, drives, and fears? Does it speak to your strongest selling points?

Asking the right survey questions can help you figure out what marketing messages work best, so you can double down on them.

What attracted you to [brand or product name]?

Did you have any concerns before buying [product name]?

Since you purchased [product name], what has been the biggest benefit to you?

If you could describe [brand or product name] in one sentence, what would you say?

What is your favorite thing about [brand or product name]?

How likely are you to recommend this product to a friend or colleague? (NPS question)

👉 We talk about positioning and branding your products in a post that’s part of a series written for SaaS startups, but even if you’re not in SaaS (or you’re not a startup), you’ll still find it helpful.

Have a question for your customers? Ask!

Feedback is at the heart of deeper empathy for your customers and a more holistic understanding of their behaviors and motivations. And luckily, people are more than ready to share their thoughts about your business— they're just waiting for you to ask them. Deeper customer insights start right here, with a simple tool like Hotjar Surveys.

Build surveys faster with AI🔥

Use AI in Hotjar Surveys to build your survey, place it on your website or send it via email, and get the customer insight you need to grow your business.

FAQs about survey questions

How many people should I survey? What should my sample size be?

A good rule of thumb is to aim for at least 100 replies that you can work with.

You can use our sample size calculator to get a more precise answer, but understand that collecting feedback is research, not experimentation. Unlike experimentation (such as A/B testing), all is not lost if you can’t get a statistically significant sample size. In fact, as few as ten replies can give you actionable information about what your users want.

How many questions should my survey have?

There’s no perfect answer to this question, but we recommend asking as few as you need to ask in order to get the information you want. Remember, you’re essentially asking someone to work for free, so be respectful of their time.

Why is it important to ask good survey questions?

A good survey question is asked in a precise way at the right stage in the customer journey to give you insight into your customers’ needs and drives. The qualitative data you get from survey responses can supplement the insight you can capture through other traditional analytics tools (think Google Analytics) and behavior analytics tools (think heatmaps and session recordings , which visualize user behavior on specific pages or across an entire website).

The format you choose for your survey—in-person, email, on-page, etc.—is important, but if the questions themselves are poorly worded you could waste hours trying to fix minimal problems while ignoring major ones a different question could have uncovered. 

How do I analyze open-ended survey questions?

A big pile of  qualitative data  can seem intimidating, but there are some shortcuts that make it much easier to analyze. We put together a guide for  analyzing open-ended questions in 5 simple steps , which should answer all your questions.

But the fastest way to analyze open questions is to use the automated summary report with Hotjar AI in Surveys . AI turns the complex survey data into:

Key findings

Actionable insights
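If you don’t have an AI summary available, even a crude keyword-to-theme mapping can give you a useful first pass over open-ended answers. A simple illustrative sketch—themes, keywords, and answers below are hypothetical:

```python
# Tag open-ended answers with themes based on keyword matches (a manual first pass).

THEMES = {
    "pricing": ["price", "expensive", "cost", "cheap"],
    "usability": ["confusing", "easy", "hard to use", "intuitive"],
    "support": ["support", "help", "response time"],
}

def tag_answer(answer):
    text = answer.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(k in text for k in keywords)] or ["other"]

answers = [
    "The price is too high for a small team",
    "Support was quick, but the dashboard is confusing",
]
for a in answers:
    print(tag_answer(a), "-", a)
# ['pricing'] - The price is too high for a small team
# ['usability', 'support'] - Support was quick, but the dashboard is confusing
```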

Will sending a survey annoy my customers?

Honestly, the real danger is  not  collecting feedback. Without knowing what users think about your page and  why  they do what they do, you’ll never create a user experience that maximizes conversions. The truth is, you’re probably already doing something that bugs them more than any survey or feedback button would.

If you’re worried that adding an on-page survey might hurt your conversion rate, start small and survey just 10% of your visitors. You can stop surveying once you have enough replies.


How many questions should be asked in a survey?

Of course, everyone wants to collect as much detailed data as possible. But while statisticians might love large amounts of data, when it comes to creating a survey, fewer questions are usually better – as long as they are the right questions.

Companies that already have some experience with surveys have likely noticed that asking more questions doesn’t necessarily lead to more insights. On the contrary, too many irrelevant questions may make it difficult to evaluate data. Check out the tips below to learn how to align the right questions with the right goals, and be efficient in your survey development process.


How many survey questions are too many?

Statistics show that most companies have room for improvement when it comes to the efficiency of their employee engagement surveys. In fact, only 22% of businesses are getting actionable results. Customer surveys pose yet another challenge: more than 70% of consumers perceive surveys as an unwelcome interruption in their user experience.

There’s no question the number of survey questions has an impact on the response rate. You can read more about how to improve that particular aspect of your survey in the article 5 keys to improving survey response rates. However, in this article, we want to focus on how the number of questions and length of a survey can help you keep your research goals focused and deliver significant results.

Determining one central survey question

What is the purpose of your survey? The core purpose of your survey should be based on a single question, examined from several different perspectives. Ultimately, you are trying to gather insight about one specific point, and then all the rest of the questions are about adding nuance and detail to that answer.

While it might seem challenging to narrow down your focus, having a single objective will actually help you build and structure your survey. It is important for respondents to understand the purpose of your survey so they don’t become distracted. Using features such as logic and branching or multiple-choice formats can help you group questions and increase the data volume while keeping the survey effectively brief.
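As a rough illustration of how logic and branching works, each question can carry a rule that decides whether it is shown, based on earlier answers. The field names and question texts below are hypothetical, not any particular tool’s schema:

```python
# Simple skip-logic sketch: questions carry an optional `show_if` rule
# checked against answers already collected.

questions = [
    {"id": "made_purchase", "text": "Did you make a purchase today? (Yes/No)"},
    {"id": "purchase_blocker",
     "text": "What stopped you from buying?",
     "show_if": lambda answers: answers.get("made_purchase") == "No"},
    {"id": "checkout_rating",
     "text": "How would you rate the checkout? (1-5)",
     "show_if": lambda answers: answers.get("made_purchase") == "Yes"},
]

def next_questions(answers):
    """Return only the unanswered questions whose branching rule passes."""
    return [q["text"] for q in questions
            if q["id"] not in answers
            and q.get("show_if", lambda a: True)(answers)]

print(next_questions({"made_purchase": "No"}))
# ['What stopped you from buying?']
```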

Track the average answering time

Although you might be focused on collecting the right data, you don’t want to miss one important yet often neglected metric: The average time respondents spend answering the questions.

According to customer case studies, 52% of survey participants tend to drop out after spending more than three minutes on a survey. Keeping track of how long it takes to answer your questions will help you to adjust the survey if necessary. Make sure to choose a survey tool that will provide you with these insights.
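If your tool exports per-response timestamps, computing this metric yourself is straightforward. A minimal sketch with hypothetical data:

```python
# Track average completion time and drop-off from per-response timestamps.

from datetime import datetime

responses = [
    {"started": "2024-03-01 10:00:00", "finished": "2024-03-01 10:02:10"},
    {"started": "2024-03-01 10:05:00", "finished": "2024-03-01 10:09:30"},
    {"started": "2024-03-01 10:07:00", "finished": None},  # dropped out
]

FMT = "%Y-%m-%d %H:%M:%S"
durations = [
    (datetime.strptime(r["finished"], FMT) - datetime.strptime(r["started"], FMT)).total_seconds()
    for r in responses if r["finished"]
]

avg_minutes = sum(durations) / len(durations) / 60
drop_off = 100 * sum(1 for r in responses if not r["finished"]) / len(responses)
print(f"Average completion time: {avg_minutes:.1f} min, drop-off: {drop_off:.0f}%")
# Average completion time: 3.3 min, drop-off: 33%
```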


Evaluate pulse surveys, NPS, and ESI

Some surveys cannot be short. For example, annual employee surveys or B2B partner surveys give respondents the opportunity to provide long-form answers. This kind of detail is hard to capture in other ways. Surveys like this give companies a bigger and more complete picture of overall satisfaction, which helps them take immediate action.

But if a longer survey is not necessary, consider some shorter options. Pulse surveys typically consist of 5-15 questions and are dispatched at a higher frequency than annual surveys, making them an effective way to collect critical data more often. Shorter surveys, such as Employee Satisfaction Index (ESI) or Net Promoter Score (NPS) surveys, are best for providing straightforward, real-time feedback.

Consider market research panels

Market research involves a substantial amount of data to support more informed decisions. However, market research surveys aimed at people who are not yet your customers also tend to have the lowest response rates. Therefore, consumer panels are highly recommended. Reaching out to your target group via a panel provider allows you to collect data from a group of people with chosen demographics, and you can easily track demographic data in your analytics tool.

Ask the right survey questions

For survey newbies, or for those looking for a starting point for their next survey, modifiable survey templates are your best friends. Remember, you aren’t the first to conduct a survey. There are a number of service providers who can give you valuable advice on survey design and the best course of action. Which question formats have similar industries, organisation types, or target groups used? Why not leverage professional expertise and read case studies and success stories? These resources can help you choose the right survey tool that just might make you the next success story.





Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
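If you’d rather sanity-check a calculator’s output yourself, the standard approach is Cochran’s formula with a finite-population correction. A minimal sketch, assuming a 95% confidence level (z ≈ 1.96), a 5% margin of error, and maximum variability (p = 0.5):

```python
# Sample-size calculation: Cochran's formula with a finite-population correction.

import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                 # finite-population correction
    return math.ceil(n)

print(sample_size(1_000))    # ~278 responses needed
print(sample_size(100_000))  # ~383 responses needed
```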

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
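For instance, if the responses were exported to a spreadsheet, a minimal cleaning step might look like the sketch below; the file name and column names are invented for illustration, and any spreadsheet or statistics package could do the same job.

```python
import pandas as pd

# Load the exported survey responses (illustrative file and column names)
responses = pd.read_csv("survey_responses.csv")

# Remove rows where required questions were left blank
required_columns = ["age_group", "satisfaction_rating"]
cleaned = responses.dropna(subset=required_columns)

# Remove incorrectly completed responses, e.g. ratings outside the 1-5 scale
cleaned = cleaned[cleaned["satisfaction_rating"].between(1, 5)]

print(f"Kept {len(cleaned)} of {len(responses)} responses after cleaning")
```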

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but the spacing between response options is not necessarily even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
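As a small illustration of how individual item responses are combined into an overall scale score (the items and values here are made up for the example):

```python
# One participant's answers to a five-item Likert scale,
# coded 1 = strongly disagree ... 5 = strongly agree (illustrative values)
item_scores = [4, 5, 3, 4, 4]

# Individual items are treated as ordinal; the combined score is often
# treated as interval data for analysis.
total_score = sum(item_scores)               # 20 out of a possible 25
mean_score = total_score / len(item_scores)  # 4.0

print(total_score, mean_score)
```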

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.



13.1 Writing effective survey questions and questionnaires

Learning objectives.

Learners will be able to…

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous chapter, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this chapter, consider how your method of delivery impacts the type of questionnaire you will design.


Start with operationalization

The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11. You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
  • Make sure that your research question (and this list) contains all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be “there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem”
  • Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.

TRACK 2 (IF YOU  AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

You are interested in researching the decision-making processes of parents of elementary-aged children during the beginning of the COVID-19 pandemic in 2020. Specifically, you want to know if and how parents’ socioeconomic class impacted their decisions about whether to send their children to school in person or instead opt for online classes or homeschooling.

  • Create a working research question for this topic.
  • What is the dependent variable in this research question? The independent variable? What other variables might you want to control?

For the independent variable, dependent variable, and at least one control variable from your list:

  • What measure (the specific question and answers) might you use for each one? Write out a first draft based on what you read in this section.

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.


Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns, and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or who took the bachelor’s-level exam. If you decide to include this question that speaks to a minority of participants’ experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 13.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.

[Figure 13.1: Example of a filter question, where a “yes” answer routes the respondent to additional follow-up questions.]
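Online survey software handles this skip logic for you; purely as a sketch of the underlying idea (with hypothetical question wording, not any particular platform’s feature):

```python
def housing_section(ask):
    """Pose a filter question and only ask the follow-up to relevant respondents.

    `ask` is any function that poses a question and returns the answer,
    e.g. the built-in input() in a console demo.
    """
    answers = {"housing": ask("Do you rent or own your home? (rent/own) ")}

    # Filter question: only renters are shown the renter's insurance item.
    if answers["housing"].strip().lower() == "rent":
        answers["renters_insurance"] = ask("Do you carry renter's insurance? (yes/no) ")

    return answers

# Console demo:
# print(housing_section(input))
```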

Researchers should eliminate questions that ask about things participants don’t know to minimize confusion. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 13.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it’s not clear how to answer the question accurately. It is a good idea to avoid negative phrasing, when possible. For example, “did you not drink alcohol during your first semester of college?” is less clear than “did you drink alcohol your first semester of college?”

Another issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience, they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 13.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

[Figure 13.2: A double-barreled question that asks more than one thing at a time.]

Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11. )

Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [1] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  • Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you ask someone how well they liked a certain book on a response scale ranging from “not at all” to “extremely well,” and that person selects “extremely well,” what do they mean? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?”
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is the number of children in the household enough? If unsure, however, it is better to err on the side of detail than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? Is the question imaginary? A popular question on many television game shows is “if you won a million dollars on this show, how will you plan to spend it?” Most participants have never been faced with this large amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.


Cultural considerations

When researchers write items for questionnaires, they must be conscientious to avoid culturally biased questions that may be inappropriate or difficult for certain populations.


You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused.

Testing questionnaires before using them

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Table 13.1 offers one model for writing effective questionnaire items.

Table 13.1 The BRUSO model of writing effective questionnaire items (brief, relevant, unambiguous, specific, objective), with examples from a perceptions of gun ownership questionnaire. Each row shows a poor item followed by a more effective rewrite.

  • Brief: “Are you now or have you ever been the possessor of a firearm?” → “Have you ever possessed a firearm?”
  • Relevant: “Who did you vote for in the last election?” → only include items that are relevant to your study.
  • Unambiguous: “Are you a gun person?” → “Do you currently own a gun?”
  • Specific: “How much have you read about the new gun control measure and sales tax?” → “How much have you read about the new sales tax on firearm purchases?”
  • Objective: “How much do you support the beneficial new gun control measure?” → “What is your view of the new gun control measure?”

Let’s complete a first draft of your questions.

  • In the first exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let’s revise any questions that do not meet your standards!

  • Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise; in real research, it will take a few drafts and revisions before your questions are ready to distribute to participants.


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions . Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended response options quantitatively using content analysis (i.e., counting how often a theme is represented in a transcript looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. Open-ended questions cannot be operationally defined because you don’t know what responses you will get. We will address qualitative data analysis in greater detail in Chapter 19.
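As a small sketch of the counting step in a quantitative content analysis (the responses, themes, and keyword lists below are invented for illustration):

```python
from collections import Counter

# Open-ended answers to "Why did you participate in extracurricular activities?"
responses = [
    "I wanted to make friends and feel part of a community",
    "It looked good on my resume and fit my career plans",
    "Mostly to meet new people",
]

# Hypothetical coding scheme: each theme is flagged by simple keywords
themes = {
    "social connection": ["friends", "people", "community"],
    "career": ["resume", "career", "job"],
}

theme_counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(word in lowered for word in keywords):
            theme_counts[theme] += 1

print(theme_counts)  # Counter({'social connection': 2, 'career': 1})
```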

To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 13.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 13.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.

Earlier in this section, we discussed double-barreled questions. Response options can also be double barreled, and this should be avoided. Figure 13.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including that participants should be knowledgeable enough to select or decline a response option as well as avoiding jargon and cultural idioms.

[Figure 13.3: Double-barreled response options that provide more than one answer in each option.]

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options, even if they have an opinion. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is not socially desirable. Floaters, on the other hand, are those who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like “don’t know enough to say” or “not applicable.” There is no always-correct solution to either problem, but in general, including a middle option provides a more exhaustive set of response options than excluding one.


The number of response options on a typical rating scale is usually five or seven, though it can range from three to 11. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into one side of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [2] Although you often see scales with numerical labels, it is best to present only verbal labels to respondents and convert them to numerical values in the analyses. Avoid partial, overly long, or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics, such as a visual-analog scale, on which participants make a mark somewhere along a horizontal line to indicate the magnitude of their response.

Finalizing Response Options

The most important check before you finalize your response options is to align them with your operational definitions. As we’ve discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable “social class,” you might ask one question about a participant’s annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research, though they include a few more questions in their definition.

To drill down a bit more, as Pew specifies in the section titled “how the income calculator works,” the interval/ratio data respondents enter are interpreted using a formula that combines a participant’s four responses to the questions posed by Pew and categorizes their household into one of three categories: upper, middle, or lower class. So, the operational definition includes the four questions comprising the measure and the formula or interpretation that converts responses into the three final categories we are familiar with: lower, middle, and upper class.

It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will place you in one of those three social class categories.
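As a toy version of that kind of operational definition (the size adjustment and income cutoffs below are invented for the example and are not Pew’s actual formula):

```python
def social_class(household_income, household_size):
    """Convert two survey responses into a social class category.

    The square-root size adjustment and the dollar cutoffs are hypothetical,
    included only to show how an operational definition turns raw responses
    into the categories used in analysis.
    """
    adjusted_income = household_income / (household_size ** 0.5)

    if adjusted_income < 30_000:
        return "lower class"
    elif adjusted_income < 90_000:
        return "middle class"
    else:
        return "upper class"

print(social_class(55_000, 3))    # middle class
print(social_class(250_000, 2))   # upper class
```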

Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not a part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan . We will discuss how to create your data analysis plan beginning in Chapter 14 . For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should match the level of measurement you need with the requirements of the specific statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to confirm the level of measurement matches with the statistical test you’ve chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents, including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches your operational definitions and the statistical tests in your data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.


From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you’ll also need to think about how to present your written questions and response options to survey respondents. It’s time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section will apply equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000) . [3] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent . Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6. Written consent forms are not always used in survey research (when the research is of minimal risk and completion of the survey instrument is often accepted by the IRB as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last. This can be because they are easy to answer in the event respondents have become tired or bored, because they are least interesting to participants, or because they can raise concerns for respondents from marginalized groups who may see questions about their identities as a potential red flag. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 13.4.

[Figure 13.4: A matrix question using a shared agree/disagree scale for a set of opinion statements about classes.]
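In an online tool, a matrix is essentially a set of items that share one response scale; as a rough sketch of how that might be represented (hypothetical items, not any particular tool’s format):

```python
# A matrix question: several statements rated on the same agreement scale
matrix_question = {
    "instructions": "Please indicate how much you agree with each statement.",
    "scale": ["Strongly disagree", "Disagree", "Neither", "Agree", "Strongly agree"],
    "items": [
        "My college classes are more demanding than my high school classes.",
        "I feel prepared for the workload in my classes.",
        "I know where to find academic support if I need it.",
    ],
}

# Simple rendering loop for a console or paper layout
for item in matrix_question["items"]:
    print(item)
    print("   " + " | ".join(matrix_question["scale"]))
```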

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [4] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their very limited time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes and there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times that your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants would reasonably have to complete a survey presented to them during this time. The more you know about your population (such as what weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [5] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [6] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Moving questions about sensitive topics to the end of the questionnaire, so as not to scare respondents off.
  • Placing questions that engage the respondent at the beginning, so as not to bore them.
  • Keeping the questionnaire to a length of time you can reasonably ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.
  • Take a look at the question drafts you have completed and decide on an order for your questions. Evaluate your draft questionnaire using the guidelines above, and revise as needed.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good idea about the estimate to provide respondents when you administer your survey and whether you have some wiggle room to add additional items or need to cut a few items.

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common and it takes less time to review a few changes to questions and answers than it does to review the entire study, so give them the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.

Key Takeaways

  • A questionnaire is comprised of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It’s a myth that researchers work alone! Get together with a few of your fellow students and swap draft questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) to give your peers feedback on the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis from the questionnaire alone.
  • Compare the strengths and limitations of your questionnaire with those of your peers.
  • Is there anything from your peers’ questionnaires that you would like to use in your own?

Key terms

  • Operational definition – according to the APA Dictionary of Psychology, "a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization."
  • Triangulation – the use of multiple types, measures, or sources of data in a research project to increase the confidence we have in our findings.
  • Pilot testing – testing out your research materials in advance on people who are not included as participants in your study.
  • Filter questions – items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample.
  • Double-barreled question – a question that asks more than one thing at a time, making it difficult to respond accurately.
  • Social desirability – when a participant answers in the way they believe is socially the most acceptable.
  • Response options – the answers researchers provide to participants to choose from when completing a questionnaire.
  • Closed-ended questions – questions in which the researcher provides all of the response options.
  • Open-ended questions – questions for which the researcher does not include response options, allowing respondents to answer in their own words.
  • Fence-sitters – respondents to a survey who choose neutral response options, even if they have an opinion.
  • Floaters – respondents to a survey who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion.
  • Data analysis plan – an ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step by step, that you plan to run to answer your research question.
  • Informed consent – a process through which the researcher explains the research process, procedures, risks, and benefits to a potential participant, usually through a written document, which the participant then signs as evidence of their agreement to participate.
  • Matrix question – a type of survey question that lists a set of questions for which the response options are all the same, in a grid layout.


How Many Questions Should Be Asked in a Survey?


“How many questions should I include in my survey?”

“With limited survey questions, will I be able to collect enough information from my target audience?”

“Or, will a large number of questions tire my customers and make them drop out of my survey?”

You might have faced the above dilemmas while creating a survey.

As a survey creator, you want to collect as much information as possible from your target audience. But remember, your survey takers will not always be willing to take a survey, and most of them will have time constraints. They may find it challenging to fill out lengthy, open-ended questions and abandon the survey midway. So, it is crucial to find a balance between your data collection needs and how easy the survey is for respondents to complete.

The ideal number of questions in a survey depends on many factors, ranging from your survey goals to your audience type. In this post, let’s explore tips and tricks for tackling the question every survey creator faces: “How many questions should be asked in a survey?”

What Are the Factors That Affect Your Survey Length?

How long should a survey be? There is no definite answer to this question! It depends on many aspects, ranging from your survey goals and objectives to the type of audience who will take your survey.

Let’s dig deeper into these factors:

1. Survey Type

There are many types of surveys, like market research surveys, customer satisfaction surveys, employee engagement surveys, Net Promoter Score (NPS) surveys, and more. An NPS survey aims to understand customer loyalty with only one question on a scale of 0-10: “How likely are you to recommend our brand to your friends and family?” Hence, an NPS survey is very short and can be completed in seconds.


On the other hand, customer satisfaction surveys help you understand customers’ happiness with your products and services. These could consist of five to ten open-ended and closed-ended questions and hence take more time to answer.


A powerful survey maker tool like ProProfs Survey Maker comes with customized survey templates to create any type of survey with ease.

2. Target Audience

Who is your ideal target audience? Are they your customers, potential customers, or your employees? Based on your survey base, you can include fewer or more questions in your survey.

For example, when you are surveying your employees, you may consider asking more standard survey questions. Since your employees are well aligned with your vision and goals, they may not mind answering in-depth questions.


On the other hand, when you want to know about your online visitors’ website experience, it is better to ask a limited number of questions (1-2) in the form of a pop-up survey. Website visitors quickly get tired and frustrated with a lengthy survey. It is better to wait until they convert into customers before collecting their feedback about your brand.


3. Survey Objectives

Largely, business survey questions are designed with a specific goal in mind. As a startup owner, you may want to understand the needs of your target audience to serve them better. A series of market research questions designed to understand their choices will serve the purpose here. The survey can include different question types, such as demographic questions, open-ended questions, and more.


Alternatively, the effort a customer makes while interacting with your brand can be measured with a customer effort score (CES) survey. This survey consists of no more than 2-3 questions. Usually, it is embedded at major touchpoints like a product purchase or the resolution of a customer complaint.


Asking too many questions can reduce the quality of your online survey and affect the genuineness of the feedback collected.

Why Should You Avoid Asking Too Many Questions in Your Survey?

Asking too many questions in your survey brings about a lot of issues ranging from survey dropout to lack of accuracy in data collection. Here are a few reasons to keep the surveys short and crisp:

1. Avoid Survey Fatigue

Survey fatigue is a situation in which your survey taker feels bored or tired of taking the survey. Too many questions tend to tire survey participants. For example, when there are many open-ended questions, your survey takers need to think harder and frame in-depth answers. Most of them cannot spare more than 2-3 minutes from their busy schedule for the survey. They might see the entire survey process as a waste of time.


2. Reduce Survey Dropout Rate

The chances of survey takers dropping out increase when the survey is long and exhausting. The higher the dropout rate, the less accurate the survey results. Research shows that a survey that takes more than 25 minutes to complete sees three times more dropout than a survey that can be completed in just 5 minutes.

3. Ensure High Data Quality

High dropout rates also hurt data quality. For example, if only 10 out of 60 invited respondents complete your survey, you may not get a clear view of your audience’s input. Perhaps those 10 respondents only speak in favor of your new product. You need input on every aspect of your product or service to understand its challenges and serve your customers better.

Based on different survey types, you can frame varying questions to suit your business needs. Let’s see how.

What Is the Ideal Number of Questions for Different Survey Types?

How many questions should you ask in an online survey form? How long should an employee survey be? What is the ideal and maximum length for a web survey? Each type of survey has its own dynamics when it comes to survey length, question types, templates, customization options, and so on.

Let’s understand this one by one.

1. Customer Surveys

Research suggests that ideally, customer surveys can have anywhere between 15-20 questions. This ensures that your customers do not get overburdened with too many questions and can quickly respond to them in 2-5 minutes. Especially, if you have a new customer, you don’t want to lose their valuable feedback by overwhelming them with a lot of questions. In fact, a new customer brings a new perspective to your products and services.

2. Employee Surveys

Unlike a customer survey, an employee survey gives you the privilege to ask more than 20 questions. Employees might have a number of concerns regarding daily responsibilities, pay, perks, work environment, and more. They will be more than happy to voice their concerns through a survey platform. 

3. Pulse Surveys

Pulse surveys are short surveys aimed at understanding the pulse of your employees and customers alike. These surveys are usually conducted on a more frequent basis like once a month or once in two months. Also, the questions are more specific and precise. 

With pulse surveys, you can collect quick and actionable responses from both your customers and employees. Usually, a pulse survey is a shorter version of an annual survey, with just 2-10 questions focused on the most pressing issues facing an organization.

4. Intercept Surveys

Intercept surveys are in-person surveys conducted at points of contact like malls, restaurants, and public places such as parks. Here, you intercept people and ask them to give feedback on a product or service.

While creating intercept surveys, keep in mind that people do not have much time at hand. You might be interrupting their daily routine to ask for feedback. If there are too many questions, they may not be willing to stop and give you feedback. Hence, keep your questions as relevant as possible and limit them to 3-5.

Now that you understand how many questions different kinds of surveys should ask, the next section covers methods for determining the right number of survey questions.

Methods to Determine How Many Questions Should be Asked In a Survey

Survey length depends on different factors like survey purpose, average response time, and the type of data you want to collect. There is no fixed limit on how many questions a survey should have.

1. Determine the Purpose of Your Survey

What are the aims and objectives of conducting your survey? Is it to understand customer opinion about a newly launched product? Or is it to collect feedback after a customer touchpoint like a product purchase or a customer service call?

Based on the above survey objectives, your survey will vary a lot. For example, a customer service survey should have no more than 1-2 questions.


On the other hand, a product survey can have up to 5 questions to collect more detailed feedback about a newly launched product.

2. Identify the Type of Data You Wish to Collect Through the Survey

After identifying the objective of your survey, you need to identify the type of data you want to collect. Different survey questions help you collect varied information from your target audience. 

For example, with checkbox-type questions, you can collect bulk information from your audience. 


With a comment type of question, you can collect additional information from your target audience.


3. Use MicroSurveys in Case of More Questions

Micro surveys contain just 1-2 questions. It is ideal to split a long survey into micro surveys to reduce the burden on your survey respondents.

For example, if you want to conduct a survey on a newly launched product feature, first conduct a micro survey with the question, “Did you like our new product feature?” If they say yes, you can conduct another follow up micro survey with in-depth questions like “How has the new product feature solved your problem?”, “How would you rate our new product feature?” and more.

4. Determine the Average Answering Time

Based on the survey type and the industry, you need to determine the average survey completion time. For example, pulse surveys require just 2-3 minutes to complete, while a detailed employee survey requires around 10-15 minutes. Also, make sure you inform survey takers how long the survey will take so they can be mentally prepared to complete it.
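As a rough illustration of how you might estimate that completion time from the mix of question types, here is a small Python sketch. The seconds-per-question figures are assumptions for illustration only, not research-backed constants.

```python
# Rough sketch: estimating survey completion time from the question mix.
# The per-question timings are illustrative assumptions.
SECONDS_PER_QUESTION = {
    "closed_ended": 10,  # multiple choice, rating scales, yes/no
    "open_ended": 45,    # free-text answers take considerably longer
}

def estimate_minutes(closed_ended: int, open_ended: int) -> float:
    total_seconds = (closed_ended * SECONDS_PER_QUESTION["closed_ended"]
                     + open_ended * SECONDS_PER_QUESTION["open_ended"])
    return round(total_seconds / 60, 1)

# Example: 10 closed-ended questions plus 2 open-ended ones
print(estimate_minutes(closed_ended=10, open_ended=2))  # -> 3.2 (minutes)
```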

5. Make Use of Market Research Panels

Market research panels consist of a predefined group of individuals who have agreed to take part in the survey process. With a panel, you are assured of a sufficient participation rate for your survey and can design the number of questions accordingly. You also get to decide certain parameters easily, like the demographics to be tracked, the choices to be captured, and more.

6. Ask the Right Survey Question

Every survey type is characterized by a definite set of questions. For example, the Net Promoter Score (NPS) survey is based on customer loyalty questions while the  employee satisfaction survey takes into account an employee’s overall satisfaction with their job. 

The easiest way to present the right questions to your target audience is to access survey templates from a good survey maker tool . Most of the tools provide customized templates to get started with different types of surveys ranging from a customer satisfaction survey to an employee engagement survey.

7. Test Your Survey

You won’t know how your survey actually performs until you test it. Ideally, get a third party to test and review your survey. When a fresh pair of eyes evaluates the survey, you get a fair perspective on the questions: Are they too long? Are they precise enough? Will your respondents be able to complete the survey in the stipulated time?

Be Market Ready by Asking the Right Questions at the Right Time!

Deciding the number of questions to be asked in a survey is necessary to ensure higher survey response rates. You need to take into account different factors like the type of survey, your target audience, and your survey objectives.

Asking the right number of questions involves careful research and planning. Firstly, you need to brainstorm your survey goals and the type of data you want to capture. Once you have decided this, it is time to determine the average time to answer based on your survey type. For example, a pulse survey is very short and needs just 2-3 minutes of your survey respondent’s time. Lastly, test your survey with a third party to understand its efficacy.

Are you looking to create a survey with the right number of questions? ProProfs Survey Maker gives you access to the right survey templates with the exact number of impactful survey questions.



16 Types of Survey Questions, with 100 Examples

Good survey questions will help your business acquire the right information to drive growth. Surveys can be made up of different types of questions. Each type has a unique approach to gathering data. The questions you choose and the way you use them in your survey will affect its results.

These are the types of survey questions we will cover:

  • Open-Ended Questions
  • Closed-Ended Questions
  • Multiple Choice Questions
  • Dichotomous Questions
  • Rating Scale Questions
  • Likert Scale Questions
  • Nominal Questions
  • Demographic Questions
  • Matrix Table Questions
  • Side-by-Side Matrix Questions
  • Data Reference Questions
  • Choice Model Questions
  • Net Promoter Score Questions
  • Picture Choice Questions
  • Image Rating Questions
  • Visual Analog Scale Questions

But before we go into the actual question types, let’s talk a little about how you should use them.


How to Use Survey Questions in Market Research

First, you need to make sure it’s a survey you’re after. In some cases, you may find that it’s actually a questionnaire that you need (read more here to learn the difference:  Survey Vs. Questionnaire ), or a research quiz. In any case, though, you will need to use the right type of questions.

To determine the right type of questions for your survey, consider these factors:

  • The kind of data you want to gather
  • The depth of the information you require
  • How long it takes to answer the survey

Regardless of the size of your business, you can use surveys to learn about potential customers, research your product market fit, collect customer feedback or employee feedback, get new registrations, and improve retention.

Surveys can help you gather valuable insights into critical aspects of your business. From brand awareness to customer satisfaction, effective surveys give you the data you need to stay ahead of the competition.

So, how should you use surveys for your market research?


Identify Customer Needs and Expectations

Perhaps the idea of using customer surveys in this advanced era of data analytics seems quaint. But one of the best ways to find out what consumers need and expect is to go directly to the source and ask. That’s why surveys still matter. All companies and online businesses can benefit from using market research surveys to determine the needs of their clients.

Determine Brand Attributes

A market research survey can also help your company identify the attributes that consumers associate with your brand. These could be tangible or intangible features that they think of when they see your brand. By determining your brand attributes, you can identify other brands in the same niche. Additionally, you can gain a clear understanding of what your audience values.

Understand Your Market’s Supply and Demand Chain

Surveying existing and potential customers enables you to understand the language of supply and demand. You can understand the measure of customer satisfaction and identify opportunities for the market to absorb new products. At the same time, you can use the data you collect to build customer-centric products or services. By understanding your target market, you can minimize the risks involved in important business ventures and develop an amazing customer experience.

Acquire Customer Demographic Information

Before any campaign or product launch, every company needs to determine its key demographic. Online surveys make it so much easier for marketers to get to know their audience and build effective user personas. With a market research survey, you can ask demographic survey questions to collect details such as family income, education, professional background, and ethnicity. It’s important to be careful and considerate in this area since questions that seem matter-of-fact to you may be experienced as loaded questions or sensitive questions by your audience.

Strategize for New Product Launches

Businesses of all sizes can use customer surveys to fine-tune products and improve services. Let’s say there’s a product you want to launch. But you’re hesitant to do so without ensuring that it will be well-received by your target audience. Why not send out a survey? With the data you gather from the survey responses, you can identify issues that may have been overlooked in the development process and make the necessary changes to improve your product’s success.

Develop a Strategic Marketing Plan

Surveys can be used in the initial phases of a campaign to help shape your marketing plan. Thanks to in-depth analytics, a quick and easy survey that respondents can finish within minutes can give you a clear idea of what potential consumers need and expect.


Types of Survey Questions

No matter the purpose of your survey, the questions you ask will be crucial to its success. For this reason, it’s best to set the goal of your survey and define the information you want to gather before writing the questions.

Ask yourself: What do I want to know? Why do I want to know this? Can direct questions help me get the information I need? How am I going to use the data I gather?

Once you have a clear goal in mind, you can choose the best questions to elicit the right kind of information. We’ve made a list of the most common types of survey questions to help you get started.

1. Open-Ended Questions

If you prefer to gather qualitative insights from your respondents, the best way to do so is through an open-ended question. That’s because this survey question type gives respondents more opportunity to say what’s on their minds. After all, an open question doesn’t come with pre-set answer choices that respondents can select. Instead, it uses a text box where respondents can leave more detailed responses.

Ideally, you should ask such questions when you’re doing expert interviews or preliminary research. You may also opt to end surveys with this type of question. This is to give respondents a chance to share additional concerns with you. By letting respondents give answers in their own words, even to a single question, you can identify opportunities you might have overlooked. At the same time, it shows that you appreciate their effort to answer all your questions.

Since quantifying written answers isn’t easy to do, opt to use these questions sparingly, especially if you’re dealing with a large population.

Examples of open-ended questions:

  • What can you tell us about yourself? (Your age, gender, hobbies, interests, and anything else you’re willing to share)
  • How satisfied or dissatisfied are you with our service?
  • What has kept you from signing up for our newsletter?

2. Closed-Ended Questions

Consumers want surveys they can answer in a jiffy. Closed-ended questions are ideal for market research for that reason. They come with a limited number of options, often one-word responses such as yes or no, multiple-choice, or a rating scale. Compared to open-ended questions, these drive limited insights because respondents only have to choose from pre-selected choices.

Ask closed-ended questions if you need to gather quantifiable data or to categorize your respondents. Furthermore, you can use such questions to drive higher response rates. Let’s say your audience isn’t particularly interested in the topic you intend to ask them about. You can use closed-ended questions to make it easier for them to complete the survey in minutes.

Closed-ended question examples:

  • Which of the following are you most likely to read? (a) a series of blog posts (b) a novel (c) the daily news (d) I don’t read on a regular basis
  • How would you rate our service on a 5-point scale, with 1 representing bad service, and 5 representing great service?
  • How likely are you to recommend us on a scale of 0 to 10?

3. Multiple Choice Questions

Multiple-choice questions are a basic type of closed-ended survey question that give respondents multiple answers to choose from. These questions can be broken down into two main categories:

  • Single-answer questions – respondents are directed to choose one, and only one answer from a list of answer options.
  • Multiple answer questions – where respondents can select a number of answers in a single question.

When designed correctly they can be very effective survey questions since they’re relatively simple questions to answer, and the data is easy to analyze.

Multiple-choice sample answer options:

  • A single-answer set: It’s exceptional / Could be better / It’s terrible
  • A multiple-answer set: Whole-grain rice / Gluten-free noodles / Sugar-free soft drinks / Lactose-free ice cream


4. Dichotomous Questions

Dichotomous questions are a type of closed-ended question with only two answer options that represent the opposite of each other. In other words, yes/no questions or true/false questions. They’re often used as screening questions to identify potential customers since they’re quick and easy to answer and require no extra effort.

They’re also good for splitting your audience into two groups, enabling you to direct each group to a different series of questions. This can be done quite easily using skip logic which sends people on different survey paths based on their answers to previous questions.
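As a hypothetical sketch of how skip logic might branch on a dichotomous answer, here are a few lines of Python. The question text and section names are invented for illustration; real survey tools configure this through their own interfaces.

```python
# Hypothetical skip-logic sketch for a dichotomous screening question.
def next_section(has_ga_experience: bool) -> str:
    # "Yes" routes respondents to in-depth usage questions;
    # "No" routes them to a shorter awareness section instead.
    return "ga_usage_questions" if has_ga_experience else "awareness_questions"

answer = input("Do you have experience working with Google Analytics? (yes/no) ")
print("Next section:", next_section(answer.strip().lower() == "yes"))
```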

Examples of questions:

  • Do you have experience working with Google Analytics? (Yes/No)
  • Google Analytics is used for tracking user behavior. (True/False)
  • Google Analytics has a steep learning curve for the average user. (Agree/Disagree)

5. Rating Scale Questions

Also called ordinal questions, these questions help researchers measure customer sentiment in a quantitative way. This type of question comes with a range of response options. It could be from 1 to 5 or 1 to 10.

In a survey, a respondent selects the number that accurately represents their response. Of course, you have to establish the value of the numbers on your scale for it to be effective.

Rating scales can be very effective survey questions; however, without proper scaling they can turn into bad survey questions that respondents don’t know how to answer. And even if respondents think they do, the results won’t be reliable because every respondent could interpret the scale differently. So, it’s important to be clear.

If you want to know how respondents experienced your customer service, you can establish a scale from 1 to 10 to measure customer sentiment. Then, assign the value of 1 and 10. The lowest number on the scale could, for instance, mean “very disappointed” while the highest value could represent “very satisfied”.

Examples of rating scale questions:

  • On a scale of 0 to 10, how would you rate your last customer support interaction with us? (0=terrible, 10=amazing)
  • How likely are you to recommend our company to a friend or colleague on a scale of 1 to 5? 1=very unlikely, 5=very likely
  • How would you rate your shopping experience at our online business on a scale of 1 to 7? 1=bad, 4=ok, 7=amazing

6. Likert Scale Questions

These questions can either be unipolar or bipolar. Unipolar scales center on the presence or absence of quality. Moreover, they don’t have a natural midpoint. For example, a unipolar satisfaction scale may have the following options: extremely satisfied, very satisfied, moderately satisfied, slightly satisfied, and not satisfied.

Bipolar scales, on the other hand, are based on either side of neutrality. That means they have a midpoint. A common bipolar scale, for instance, may have the following options: extremely unsatisfied, very unsatisfied, somewhat unsatisfied, neither satisfied nor dissatisfied, somewhat satisfied, very satisfied, or extremely satisfied.

Likert scale questions can be used for a wide variety of objectives. They are great for collecting initial feedback. They can also help you gauge customer sentiment, among other things.

Likert scale sample questions:

  • How important is it that you can access customer support 24/7? (Choices: Very Important, Important, Neutral, Low Importance, and Not Important At All)
  • How satisfied are you after using our products? (Choices: Very Satisfied, Moderately Satisfied, Neutral, Slightly Unsatisfied, and Very Unsatisfied)
  • How would you rate our customer care representative’s knowledge of our products? (Choices: Not at All Satisfactory, Low Satisfactory, Somewhat Satisfactory, Satisfactory, and Very Satisfactory)


7. Nominal Questions

Also a type of measurement scale, nominal questions come with tags or labels for identifying or classifying items. For these questions, you can use non-numeric variables. You can also assign numbers to each response option, but they won’t actually have value.

On a nominal scale, you assign each number to a unique label. Especially if the goal is identification, you have to stick to a one-to-one correlation between the numeric value and the label. Much like cars on a race track, numbers are assigned to identify the driver associated with the car. It doesn’t represent the characteristics of the vehicle.

However, when a nominal scale is used for classification, the numerical values assigned to each descriptor serve as a tag. This is for categorizing or arranging the objects in a class. For example, you want to know your respondents’ gender. You can assign the letter M for males and F for females in the survey question.

Examples of nominal questions:

  • What is your hair color? (Choices: 1 – Black, 2 – Blonde, 3 – Brown, 4 – Red, 5 – Other)
  • How old are you? (Choices: 1 – Under 25, 2 – 25-35, 3 – Over 35)
  • How do you commute to work? (Choices: 1- Car, 2 – Bus, 3 – Train, 4 – Walk, 5 – Other)

8. Demographic Questions

As its name suggests, this question type is used for gathering information about a consumer. From their background to income level, these simple questions can provide you with deeper insights into your target market. They’re also used as screening questions since they can help you to identify the population segments you’re targeting.

Demographic questions  help you understand your target market. By collecting customer data, you can identify similarities and differences between different demographics. Then, you can make buyer personas and classify them based on who they are or what they do.

Some demographic topics can lead to quite loaded survey questions. When writing your demographic survey, try to identify the loaded questions and ask yourself if someone could find the question, the answer choices, or the lack of a certain answer choice offensive. Do your best to phrase them sensitively and respectfully, and if you can’t, consider leaving them out.

With every single question that you write, it’s important to place yourself in the shoes of your respondents. If you want to ask students about their income, your response options should range below $20,000 per year, because most of them are probably not making more than that. But if your respondents are affluent, your choices should have a range higher than $100,000.

Examples of demographic questions:

  • How old are you?
  • What is your level of education?
  • What is your marital status?
  • What’s your current employment status?


9. Matrix Table Questions

If you need to ask a series of questions that require the same response options, you can group them together in a matrix table instead of breaking them into separate questions.

While these bundled questions are convenient, you have to use them carefully. Visually, large matrix tables can seem overwhelming. In addition, online survey questions of this sort aren’t always mobile-friendly. Having too many questions or choices may even trigger undesirable survey-taking behavior such as straight-lining. This is when respondents select the same options without carefully considering each one. Sometimes, they do that because the actual experience feels like a complicated matrix and they just want to finish it.

Example of a matrix table:

How satisfied or dissatisfied are you with the following?

  • Interaction with sales staff
  • Product selection
  • Marketing messages
  • Pricing structure

Then, make a brief list of response options to use as the columns. There should be no more than five options.

10. Side-by-Side Matrix Questions

A side-by-side matrix is similar to your regular matrix table in that it allows you to group together questions that require simple response options. However, a matrix table only lets you collect data from a single variable. A side-by-side matrix, on the other hand, enables you to gather data on two or more dimensions.

For example, let’s say you want to ask respondents about the importance of different services and their satisfaction with each. You can group them together in a side-by-side matrix. By organizing questions in tables, your respondents can easily fill out the survey in minutes.

Much like a regular matrix table, you shouldn’t overwhelm consumers. Avoid adding too many variables to your table. Moreover, you should keep the response options short.

Example of a side-by-side matrix:

How would you rate our shopping services?

Identify the variables. They can be customer support, packaging, and punctuality. Next, you should add different dimensions such as importance and satisfaction level. On each table, you should add a similar scale. You can start with 1, which could mean Not Important and Not Satisfied.

11. Data Reference Questions

Use data reference questions to gather validated data against standardized databases. For example, direct respondents to enter their postal code or zip code in a small text box. The value entered will then be cross-referenced with the database. If it is correct, their city or state will be displayed, and they can proceed with the survey. And if it is incorrect, they’ll be asked to enter a valid postal code or zip code.
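A minimal sketch of that cross-referencing step might look like the Python below; the tiny zip-to-city dictionary is a stand-in for the standardized database a real survey platform would query.

```python
# Illustrative data reference check: the dictionary below is a stand-in for a
# real standardized zip-code database.
ZIP_DATABASE = {
    "10001": "New York, NY",
    "60601": "Chicago, IL",
    "94105": "San Francisco, CA",
}

def validate_zip(zip_code: str) -> str:
    city = ZIP_DATABASE.get(zip_code.strip())
    if city is None:
        return "Please enter a valid five-digit zip code."
    return f"You are in {city}. Please continue with the survey."

print(validate_zip("60601"))  # -> You are in Chicago, IL. Please continue with the survey.
print(validate_zip("00000"))  # -> Please enter a valid five-digit zip code.
```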

Examples of data reference questions:

  • What is your five-digit zip code?
  • What is your postal code?

12. Choice Model Questions

Choice model questions enable you to understand the essential aspects of consumers’ decision-making process. This involves a quantitative method called Conjoint Analysis. It helps you grasp your users’ preferences, the features they like, and the right price range your target market can afford. More importantly, it enables you to understand if your new products will be accepted by your target market.

These questions also involve Maximum Difference Scaling, a method that allows the ranking of up to 30 elements. This can include product features, benefits, opportunities for potential investment, and possible marketing messages for an upcoming product.

Example of a choice model question:

  • If you were to buy a sandwich, which ingredient combination would you choose?

Let’s say you want to know about consumers’ bread, filling, and sauce preferences. In your survey, you can give them three sandwich options. You can, for instance, offer three kinds of bread: grain wheat, parmesan oregano, and Italian. As for the sauces, you can make them choose between ranch, blue cheese, and mustard. Finally, you need to suggest three types of filling, for example, chicken, veggies, and meatballs.

Respondents will see unique combinations of these ingredients in your survey. Then, they will have to choose the one that they like best.
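To make the combination idea concrete, here is a short Python sketch that generates the sandwich profiles a choice set could draw from. It is an illustration only; real conjoint studies use carefully constructed (often fractional) designs rather than random draws from every possible combination.

```python
# Sketch: generating candidate sandwich profiles for a choice-model question.
from itertools import product
from random import sample

breads = ["grain wheat", "parmesan oregano", "Italian"]
sauces = ["ranch", "blue cheese", "mustard"]
fillings = ["chicken", "veggies", "meatballs"]

all_profiles = list(product(breads, sauces, fillings))  # 3 x 3 x 3 = 27 combinations

# Show each respondent a small choice set instead of all 27 profiles.
choice_set = sample(all_profiles, 3)
for bread, sauce, filling in choice_set:
    print(f"{filling} on {bread} bread with {sauce}")
```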

13. Net Promoter Score Questions

A net promoter score (NPS) survey question measures brand shareability, as well as customer satisfaction levels. It helps you get reliable customer insights and gauge the likelihood of respondents recommending your company to friends or colleagues (i.e. prospective customers). The scoring model involves a scale of 0 to 10, which is divided into three sections. Respondents who give a 9 to 10 score are considered Promoters. Passives give a 7 to 8 score, while the rest are considered Detractors.

Once you’ve gathered all the data, the share of responses in each group is calculated, and the final score is the percentage of Promoters minus the percentage of Detractors. This type of survey question offers a useful form of initial feedback. It helps you understand why promoters are leaving high ratings so you can work on enhancing those strengths. At the same time, it enables you to determine weaknesses by illustrating why detractors are leaving such low ratings.
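For reference, the standard NPS arithmetic is simple enough to show in a few lines of Python; the scores below are made-up example responses.

```python
# Standard NPS calculation from 0-10 responses (example data).
scores = [10, 9, 8, 7, 6, 10, 9, 3, 8, 10]

promoters = sum(1 for s in scores if s >= 9)   # scores of 9 or 10
detractors = sum(1 for s in scores if s <= 6)  # scores of 0 through 6

nps = (promoters - detractors) / len(scores) * 100
print(f"NPS = {nps:.0f}")  # 5 promoters, 2 detractors out of 10 -> NPS = 30
```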

Examples of net promoter score questions:

  • On a scale of 0 to 10, how likely are you to recommend our brand to a friend or colleague? (0 = Not at all Likely and 10 = Very Likely)
  • Would you encourage friends to work at our company?
  • How likely are you to recommend (specific name of the product) to friends?


14. Picture Choice Questions

It’s no secret that people respond to visual content more than plain text. This applies to surveys as well – visual content can boost user experience.

Think of these as alternate questions to multiple-choice questions. Users can pick one or many from a visual list of options. You can use picture choice questions to make your survey more engaging.

Keep in mind that it’s very easy to unintentionally create a leading question by using images that provoke a specific reaction. For example, if you’re asking about food preferences and one of the images is more attractive than the others, people may pick it even if it doesn’t represent their favorite dish, simply because it looks best. So when you’re illustrating a variety of answers with images, make sure their quality and attractiveness are similar.

Picture choice examples:

  • What is your favorite pizza topping?
  • Which color should we choose for our logo?
  • What other products would you like to see in our online store?

Opinion Stage has an online  survey maker  tool that can help you design image-based survey questions in minutes. Choose from hundreds of professionally-designed templates, and tailor them to fit your needs, or design them from scratch.


15. Image Rating Questions

Another way to incorporate images in questions is through image ratings. Let’s say you want to know how satisfied consumers are with your products. You can display all of the items you want respondents to rate. Under each item, provide a short list of options (e.g., very unsatisfied, unsatisfied, neutral, satisfied, very satisfied).

You could also use a rank order question to let your respondents rank their favorite products. Simply give them multiple options, and then, ask them for their top three or five favorites. Or you could ask them to organize a series of answers by ranking.

For example, if it’s an employee engagement survey question you could ask your employees to rank a series of office activities from their least favorite to their most favorite. There are many ways to do this visually. Some tools use dropdown menus, and others let you move the answer options around, but the simplest way is to use numbers like in the example below.

Rank order questions should work well on mobile devices. After all, respondents only have to tap on their favorite items to participate.

Example of image rating questions:

  • What are your 5 favorite desserts?

16. Visual Analog Scale Questions

Another type of scale you can use in a survey is the visual analog scale, which displays your questions in a more engaging manner. For instance, you can use text sliders or numeric sliders to ask respondents to rate the service they’ve received from your company, letting them select the point on the scale that best illustrates their answer.

You can also use pictures to depict each option. Smiley ratings are commonly used in surveys nowadays because they’re simple questions, easy on the eyes, and quite fun. Star ratings are also effective survey questions that require no extra effort.

Examples of visual analog scale questions:

  • How would you rate the overall quality of our customer service?
  • What do you think of our website’s interface?
  • How satisfied are you with the way our service works in offline mode?


The Fundamentals of Good Survey Questions

There is an art to writing effective questions for your survey. Regardless of the kind of survey you plan to deploy, there are a few practices that you should adhere to.

Use Clear and Simple Language

Always choose clear and simple words when writing your online survey questions. In doing so, you can keep the questions short yet specific.

Complex phrasing, too many words, acronyms, and specialized jargon require extra effort and could cause confusion. Make it easy for your respondents to help you. Keep it simple.

Moreover, avoid double-barreled questions; they will frustrate your respondents and skew your customer insights. Here’s an example of a double-barreled question: “Did you find our new search feature helpful and easy to use? yes/no” Such a question might be simple to understand, but it isn’t easy to answer because it covers two issues. How could someone respond if they found the search feature helpful but difficult to use? It would make more sense to separate it into two questions, i.e., “Did you find the new search feature helpful?” and “Was the new search feature easy for you to use?”

Focus on the Consumer

Make the survey engaging. Use the second-person (i.e., ‘you’ format) to address your respondents directly, and use the first-person (i.e., ‘we’ format) to refer to your company. This makes the survey more personal and helps respondents recall prior experiences with your company. In turn, it leads to quicker and more accurate answers.

Ask for Feedback

Get initial feedback from external people who fit the profile of your average user before sending your survey out. It’s like user testing: you need someone who isn’t you to take a look and tell you whether your survey is clear and friendly.

Require Minimal Effort to Answer

There’s no reason to ask people questions that aren’t essential to you. Ask people questions that really matter to you, and try to keep it down to the minimum number, so as not to waste their time. The more succinct a survey is, the more likely a respondent is to complete it. So, let them know that you value their time by designing a survey they can finish within minutes.

Stay Free From Bias

Survey question mistake #1 is to ask leading or biased questions. Don’t plant opinions in your respondents’ heads before they can formulate their own. Don’t ask people questions like “How good was your in-store experience today?” Phrase it in a neutral way like “On a scale of 1 to 10, how would you rate your in-store experience?”

Keep the Purpose of the Survey Vague

Sometimes, respondents have a tendency to give you the answers they think you want to hear. One of the simplest ways to prevent that is by keeping the purpose of your survey vague. Instead of stating the exact goal, give a general description of your survey.


Sample Survey Questions

Below are sample questions for different market research needs. You can use many of them as close-ended questions as well as open questions, depending on your need and preference.

Brand Awareness Questions

  • When was the last time you used (a type of product)?
  • What brands come to mind as your top choice when you think of buying this product type?
  • What factors do you consider when selecting a vendor? (rank by importance)
  • Which of the following brands have you heard of? (please select all that apply)
  • Where have you seen or heard of our brand in the last three months? (please select all that apply)
  • How often have you heard people talking about our brand in the past three months?
  • How familiar are you with our company?
  • On a scale of 1 to 10, how likely are you to recommend our brand to a friend?

Customer Demographic Questions

  • What gender do you identify as?
  • Where were you born?
  • Are you married?
  • What is your annual household income?
  • Do you support children under the age of 18?
  • How many children under the age of 18 reside in your household?
  • What category best describes your employment status?
  • Which general geographic area of the state do you reside in?
  • What is your current employment status?
  • Which of the following languages can you speak fluently?

Brand & Marketing Feedback Questions

  • Have you purchased from our company before?
  • How long have you been a customer?
  • Which best describes your latest experience with our brand? (please select all that apply)
  • Which of the following attributes do you associate with our brand? (please select all that apply)
  • What kind of feelings do you associate with our brand?
  • Which of these marketing messages represents us best in your opinion?
  • How would you rate your level of emotional attachment to our brand?
  • What five words would you use to describe our brand to a friend or colleague?
  • On a scale of 1 to 10, how likely are you to recommend our brand to a friend or colleague? (1 being Not at All Likely and 10 being Extremely Likely)

Product & Package Testing Questions

  • What is your first impression of the product?
  • How important are the following features to you?
  • How would you rate the product’s quality?
  • If the product was already available, how likely are you to purchase it?
  • How likely are you to replace an old product with this one?
  • How likely would you recommend this product to a friend or colleague?
  • What did you like best about this product?
  • What are the features that you want to see improved?
  • Based on the value for money, how would you rate this product compared to the competition?
  • What is your first impression of the product packaging?
  • How satisfied or dissatisfied are you with the following features? (Visual appeal, Quality, and Price)
  • How similar or different is the packaging from the competition?
  • Does the packaging have too little or too much information?
  • How likely are you to purchase the product based on its packaging?
  • What did you like best about the packaging?
  • What did you dislike about the packaging?
  • How would you like the packaging to be improved?

Pricing Strategy Testing Questions

  • How often do you purchase this type of product?
  • What brands do you usually purchase? (Please select all that apply).
  • On a scale of 1 to 5, how satisfied are you with the pricing of this type of product? (1 being Not at All Satisfied and 5 being Extremely Satisfied)
  • What is the ideal price for this type of product?
  • What price range would make you consider the product too expensive?
  • At what price would the product seem so cheap that you would question its quality?
  • How does the price of our product compare to other products on the market?
  • If the product was available, how likely would you be to purchase it?

Customer Satisfaction Questions

  • How would you rate the following products/services at (name of company)?
  • Which of the following attributes would you use to describe our product/service? Please select all that apply.
  • How likely are you to recommend our company to a friend or colleague? (1 being Very Unlikely and 10 being Very Likely)
  • How responsive has our support team been to your questions and concerns?
  • How likely are you to purchase from our company again?
  • What other comments, concerns, or questions do you have for us?

Brand Performance Questions

  • When was the last time you used this type of product?
  • When you think of our brand, what words come to mind?
  • Which of the following are important to your decision-making process?
  • How well do our products perform based on the following categories? (Price, Quality, Design, etc.)
  • How well does our product meet your needs?
  • What was missing or disappointing about your experience with our brand?
  • What did you like most about your experience with our brand?
  • How can we improve your experience?

Customer Behavior Questions

  • In the household, are you the primary decision maker when it comes to purchasing this type of product?
  • When was the last time you purchased this product type?
  • How do you find out about brands offering this product type? Please select all that apply.
  • When you think of this product type, which of the following are the top three brands that come to mind?
  • How much of your purchasing decisions are influenced by social media?


How to Improve Survey Response Rates

Every market research survey needs to be designed carefully in order to drive higher response rates. As a result, you can acquire the right data to inform the decision-making process.

Here are a few survey ideas to boost response rates:

Make It Personal

Write a survey as if it’s a conversation between you and your respondents. For example, use first-person pronouns to make your surveys feel more personal and customer-centric. In addition, stick with simple and specific language to better connect with respondents. Simply put, write your questions as you’d use them in a conversation with consumers.

Make It Engaging

Gathering data from consumers is essential to any business, but market research surveys don’t have to be dull. You can engage and connect with respondents on a human level through an interactive survey. As a result, you can obtain thorough responses and maximize the number of respondents that complete the entire survey.

Don’t Waste Their Time

No one wants to answer a survey with 50 questions because it takes too long to complete. Hence, you should narrow down your list to the most important ones. Only ask questions that will lead to actionable insights. As for the rest, you can get rid of them.

Offer Incentives

There are two types of incentives you can offer: monetary or non-monetary. Either way, you need to make sure the incentive provides value to your target audience. You also have to choose between promised and prepaid incentives, and decide whether to offer an incentive to everyone or only to a small group of respondents.

Providing respondents with incentives to finish the survey can increase response rates, but not always. Customer satisfaction surveys, for example, won’t always need incentives, because an incentive might affect the quality of the results.

Make It Responsive

Perhaps the easiest way to gain respondents is to make your surveys responsive and mobile-optimized. That way, your survey will perform well and look great on all devices, and you can reach consumers during their daily commute or lunch break. So, make sure your survey is optimized for different kinds of devices, especially mobile.

Offer Surveys in Multiple Channels

If a survey is optimized for all device types, it should be easily accessed on social media. So, take advantage of your platforms and share your survey on different social media channels to increase participation rates.

Designing surveys doesn’t have to be challenging. On the contrary, you can easily create interactive surveys with Opinion Stage. Create a survey from scratch, or choose one of our many professionally-made templates to complete it within minutes. Through Opinion Stage, you can drive higher response rates and evaluate results from a powerful analytics dashboard.

It’s important to be familiar with the different types of survey questions and when to use them. Getting to know each survey question type will help you improve your research. Not to mention, you can gain high-quality data when you design a survey with the right types of questions .

In addition, you should leverage the right tool to create engaging surveys in minutes. With an online survey maker like Opinion Stage, you can customize your surveys to fit your brand image. Or, you can choose from professionally-made templates. Either way, it can help boost response rates.

Last but not least, check your survey design before deploying it. Make sure to see what your survey will look like to your respondents. See opportunities for improvement, then apply the necessary changes.


How Many Questions Should I Ask In My Survey?


Wondering how many questions you should ask in your survey? We can help with that! Drive Research is a market research company that helps organizations across the country with market research needs.

There are a variety of factors that determine the ideal survey length. This includes the research approach, type of survey, target audience, and survey objectives. For any type of market research survey, it is important to keep the survey respondent in mind. Remember, survey respondents are somewhat less invested in the survey than the sender of the survey. So, the goal is to get respondents' attention and keep them engaged throughout the survey to collect all of their feedback.

To keep respondents engaged and ensure a high survey response rate, be creative when developing your survey invite email or script. This means creating a short, yet enticing message to encourage respondents to start the survey. To do this, consider offering an incentive for participating in the survey (if appropriate). For most surveys that target general consumers a small reward, like the chance to win one of five $50 Amazon gift cards, will encourage participation in the survey.

Learn more about how many questions should be asked in market research surveys below.

Is there an ideal number of questions to ask in a survey? The short answer is yes!

Online Survey Questions

The length for a typical online survey is 15 to 20 questions. While that might sound like a small number of questions, an online survey with 15 to 20 questions will take respondents 3 to 5 minutes to complete.

There is some wiggle room, but take caution before increasing the number of questions in an online survey. The attention spans of general consumers are getting shorter, so surveys longer than 15 to 20 questions will likely have a high dropout rate. This means a survey respondent will close out of an online survey without actually finishing it.

Survey dropouts can be a big issue if the online survey software used for the survey does not collect partial responses. Some online survey software only collects data if the respondent reaches the end of the survey.

If the survey is being conducted on behalf of a well-known, reputable brand, you'll likely be able to ask roughly 20 questions. On the other hand, aim to ask 15 questions or fewer if the organization the online survey is being conducted for is unknown or not mentioned.

Learn more about how an online survey works!

Employee Survey Questions

The ideal length for an employee survey is 20 to 35 questions, which will take employees 5 to 10 minutes to complete. Since employees have a direct tie to the organization and a high interest in completing the survey, organizations are able to ask more questions than in a typical online survey of general consumers.

Employee surveys typically use benchmarking questions, which make up the bulk of the survey. These allow organizations to compare the data collected to other organizations in their industry, their region, or of a similar size. Benchmarking question topics range from employee engagement to pay and benefits to work environment. Learn more about how to use employee survey benchmarks.

Also, remember to use a third-party market research firm when conducting an employee survey to ensure employees know their responses will be kept confidential and anonymous.

Voice of Customer (VOC) Survey Questions

A VoC survey should include 15 to 20 questions at most, which will take customers up to 5 minutes to complete. Whether your organization is B2B or B2C, keeping a VoC survey to 15 to 20 questions is ideal.

This ensures respondents are not overwhelmed by the number of questions or the time needed to complete the survey. If you struggle to limit the number of survey questions, revisit the research objectives to ensure they are clear and concise. Oftentimes, organizations get ahead of themselves and end up with a 50-question customer survey that will be overwhelming and cause respondent fatigue.

Also, remember that any data that is already collected, like customer relationship management (CRM) data, can be tied to customers' responses and used for analysis and reporting. Examples of this type of data could be company name, company size, ZIP code, date of last purchase, amount spent to date, and more. So, remember not to ask questions you already know the answer to!
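As a minimal sketch of joining existing CRM records to survey responses rather than re-asking for them (the file and column names are hypothetical, and it assumes both datasets share a customer identifier):

```python
import pandas as pd

# Hypothetical exports, for illustration only.
responses = pd.read_csv("voc_responses.csv")   # survey answers keyed by customer_id
crm = pd.read_csv("crm_export.csv")            # company_name, company_size, zip_code, last_purchase_date, ...

# Join existing CRM attributes to each response instead of asking for them again in the survey.
enriched = responses.merge(crm, on="customer_id", how="left")

# Example reporting cut: average satisfaction by company size segment.
print(enriched.groupby("company_size")["satisfaction_score"].mean())
```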

Are there best VoC survey questions to ask? You bet!

Intercept Survey Questions

Intercept surveys can include up to 10 questions. The goal for intercept surveys is to have the survey take less than 2 to 3 minutes to complete. It's important to keep intercept surveys as short as possible to ensure a high response rate.

Here again, it's important to keep the respondent top of mind. It can be difficult to approach someone and encourage them to complete a survey, but it can be even more difficult if the survey is more than a few questions and takes more than a few minutes to complete. Typically, intercept surveys are done in high traffic areas. This could include malls, waiting areas, or at the organization the research is being done on behalf of.

Wondering how an intercept survey project is done? We've got you covered.



How short or long should be a questionnaire for any research? Researchers dilemma in deciding the appropriate questionnaire length

Hunny Sharma

Department of Community and Family Medicine, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India

A questionnaire plays a pivotal role in various surveys. Within the realm of biomedical research, questionnaires are used in epidemiological and mental health surveys and to obtain information about knowledge, attitude, and practice (KAP) on various topics of interest. In a broader perspective, questionnaires can be of different types: self-administered or professionally administered and, according to the mode of delivery, paper-based or electronic. Various studies have been conducted to assess the appropriateness of a questionnaire in a particular field and methods to translate and validate them, but very little is known regarding the appropriate length and number of questions in a questionnaire and what role these play in data quality, reliability, and response rates. Hence, this narrative review explores the critical issue of appropriate questionnaire length and number of questions during questionnaire design.

Introduction

A questionnaire is an essential tool in epidemiological surveys and mental health surveys and for assessing knowledge, attitude, and practice (KAP) on a particular topic of interest. In general, it is a set of predefined questions based on the aim of the research.[1]

Designing a questionnaire is an art that, unfortunately, is neglected by most researchers.[2] A well-designed questionnaire saves the researcher time and obtains relevant information efficiently, but designing one is complex and time-consuming.[3,4]

The quality of the data obtained from a specific questionnaire depends on the length and number of its questions, the language and ease of comprehension of the questions, the relevance of the population to which it is administered, and the mode of administration, i.e., self-administered, paper-based, or electronic [Figure 1].[5,6]

[Figure 1: Qualities of a well-designed questionnaire]

Response rate is defined as the number of people who responded divided by the total number of potential respondents. Response rate, a crucial factor in determining the quality and generalizability of a survey's outcome, depends indirectly on the length and number of questions in a questionnaire.[7,8]
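Written as a formula (the figures in the example are illustrative only):

\[
\text{Response rate} = \frac{\text{number of people who responded}}{\text{total number of potential respondents}} \times 100\%
\]

For example, if 300 of 1,200 invited people complete a questionnaire, the response rate is 300/1,200 = 25%.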

Several studies have been conducted to assess the appropriateness of questionnaires in particular fields and methods to translate and validate them. But very little is known regarding the appropriate length and number of questions in a questionnaire and what role these play in data quality and reliability. Hence, this narrative review explores the critical issue of appropriate questionnaire length and number of questions during questionnaire design.

What is a questionnaire?

Merriam-Webster defines a questionnaire as "a set of questions for obtaining statistically useful or personal information from individuals," whereas Collins defines it as "a written list of questions which are answered by a lot of people to provide information for a report or a survey." The Oxford Learner's Dictionaries give a somewhat similar definition, stating that a questionnaire is "a written list of questions that are answered by several people so that information can be collected from the answers."[9,10,11]

Put simply, then, a questionnaire is a collection of questions that can be used to collect information relevant to the research aims from various individuals.

Where are questionnaires generally applied?

A questionnaire, in general, can be applied to a wide variety of research, whether quantitative or qualitative, depending largely on how many open-ended questions are asked and how they are framed.[12]

Questionnaires are generally applied when a large population has to be assessed or surveyed with relative ease; they play a crucial role in gathering information on the perspectives of individuals within that population.

Questionnaires have a wide variety of applications in opinion polls, marketing surveys, and politics. In the context of biomedical research, they are generally used in epidemiological surveys, mental health surveys, surveys on attitudes to a health service or on health service utilization, and in knowledge, attitude, and practice (KAP) studies on a particular issue or topic of interest.[13,14]

What are the types of questionnaire?

Questionnaires are broadly of two types: paper-based and electronic. They can further be self-administered or professionally administered via interview. The paper format can easily be administered in either mode through direct administration when the population is relatively small, as physical questionnaires are cumbersome to manage and store; it can also reach a larger population via postal surveys. Electronic questionnaires can easily be administered to a larger population in self-administered mode via Internet-based services such as Google Forms, e-mail, SurveyMonkey, or Survey Junkie. For professionally administered questionnaires, telephone interview services can be used to survey a larger population in a shorter time.[15,16,17]

What is required to answer individual questions in a questionnaire, or the burden placed on respondents

As mentioned by Bowling, there are in general at least four intricate steps required to answer a particular question in a questionnaire: comprehension of the question, recall of the requested information from memory, judgment on the link between the question asked and the recalled information, and, finally, communication of the information to the questionnaire or evaluator [Figure 2].[18]

[Figure 2: Steps involved in answering a particular question in the questionnaire]

In the case of a self-administered questionnaire, there is also a need for critical reading skills, which are not required in a one-to-one or face-to-face interview; an interview only requires the listening and verbal skills needed to respond to questions in the language in which they are asked.[18]

Many other crucial factors play an important role in deciding the utility of a questionnaire in research. One such factor is the literacy of the participants, a major limiting factor for self-administered questionnaires. Other factors include the respondent's age, maturity, and level of understanding and cognition, all of which relate to comprehension of the questions.[19]

Does the length of the questionnaire matter?

The length and number of items in a questionnaire play a crucial role in questionnaire-based studies or surveys: they have a direct effect on the time taken by respondents to complete the questionnaire, the cost of the survey or study, the response rate, and the quality of the data obtained.[20]

As evident from the study conducted by Iglesias and Torgerson in 2000 on the response rate of a mailed questionnaire, increasing the length of the questionnaire from five pages to seven reduced the response rate among women aged 70 years and over but, on the contrary, did not seem to affect the quality of responses to questions.[21]

Similarly, a study conducted by Koitsalu et al.[22] in 2018 reported that overall participation and the information gathered through a long questionnaire could be increased with the help of prenotification and a reminder, without risking a lower response rate.

In contrast, Sahlqvist et al.[23] in 2011 reported that participants were more likely to respond to the short version of a questionnaire than to the long version.

Testing of ultrashort, short, and long surveys of 13, 25, and 75 questions, respectively, by Kost et al.[24] in 2018 revealed that a shorter survey was reliable and produced higher response and completion rates than a long survey.

Bolt, on the other hand, reported a surprising finding in 2014: reducing the length of a long questionnaire in a physician survey does not necessarily improve the response rate. Hence, to improve the response rate among nonresponders, researchers may consider using a drastically shortened version of the questionnaire to obtain some relevant information rather than none.[25]

But the most interesting finding comes from the web-based survey giant SurveyMonkey, which reports a nonlinear relationship between the number of questions in a survey and the time spent answering each question. In other words, the more questions a survey contains, the less time respondents spend answering each one, a behavior known as "speeding up" or "satisficing" through the questions. It is also observed that as the length and number of questions increase, the nonresponse rate increases as well, which in turn affects the quantity and reliability of the data gathered.[26]

What happens when respondents lose interest?

When respondents lose interest in a long questionnaire or an extensive interview, they provide unconsidered, unreliable answers or, in other scenarios, simply do not respond to many questions. On one side, a high nonresponse rate may make data analysis difficult or reduce the sample size unacceptably; on the other, unconsidered or unreliable answers may defeat the whole purpose of the research [Figure 3].[19]

[Figure 3: Consequences of loss of interest in research participants]

Considerations while using a long questionnaire

When using a long questionnaire, a high nonresponse rate should always be expected; hence, appropriate measures to address the missing data, such as data trimming or data imputation, should be considered depending on how much data is missing.[27,28]
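A minimal pandas sketch of the two approaches mentioned, trimming versus imputation (the file and column handling are hypothetical, and the right choice depends on how much data is missing and why):

```python
import pandas as pd

# Hypothetical export from a long questionnaire; one column per item.
df = pd.read_csv("long_questionnaire_responses.csv")

# Option 1: trimming - drop respondents who answered fewer than 80% of the items.
min_answered = int(0.8 * df.shape[1])
trimmed = df.dropna(thresh=min_answered)

# Option 2: simple imputation - fill missing numeric ratings with each item's median.
imputed = df.fillna(df.median(numeric_only=True))

print(len(df), len(trimmed))
```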

Loss of interest can be counteracted by dividing the questionnaire into sections and administering each section separately, to avoid respondent fatigue or boredom.[19]

It is always advised that a telephone interview–based questionnaire be kept short, in general about 30 minutes, to prevent fatigue or inattention, which may adversely affect data quality. In the case of a very long telephone interview, questions can be divided into sections, and each section can be administered on a separate day or in shifts lasting 30 minutes each. A long questionnaire should preferably be administered through face-to-face interviews.

Designing a questionnaire is an art that requires time and dedication, and doing it well is the easiest way to measure the relevant information on a desired topic of interest. Yet all too often this crucial step in biomedical research is ignored by researchers. With this narrative review, we have provided a glimpse of the importance of a good questionnaire. A good questionnaire can contain 25 to 30 questions and should be able to be administered within 30 minutes to keep the interest and attention of participants intact. It is observed that as the number of questions increases, participants tend to speed up or satisfice through the questions, which severely affects quality, reliability, and response rates. If a long questionnaire is essential, it should be divided into sections of 25 to 30 questions each, delivered at different times or on different days. With a long questionnaire, i.e., more than 30 questions, a larger amount of missing data or nonresponse must be anticipated and provisions made to address it. Finally, it is always advisable to shorten a relatively lengthy questionnaire, as this significantly increases the response rate.


Conflicts of interest

There are no conflicts of interest.


Your quick guide to open-ended questions in surveys.

In this guide, find out how you can use open-ended survey questions to glean more meaningful insights from your research, how to analyze them, and the best practices to follow.

When you want to get more comprehensive responses to a survey – answers beyond just yes or no – you’ll want to consider open-ended questions.

But what are open-ended questions? In this guide, we’ll go through what open-ended questions are, including how they can help gather information and provide greater context to your research findings.

What are open-ended questions?

Open-ended questions can offer incredibly helpful insights into your respondents' viewpoints. Here's what they are and what they can do:

Free-form and not limited to simple one-word answers (e.g., yes or no responses), an open-ended question allows respondents to answer in open-text format, giving them the freedom and space to think creatively and answer in as much (or as little) detail as they like.

Open-ended questions help you to see things from the respondent’s perspective, as you get feedback in their own words instead of stock answers. Also, as you’re getting more meaningful answers and accurate responses, you can better analyze sentiment amongst your audience.


Open-ended versus closed-ended questions

Open-ended questions provide more qualitative research data: contextual insights that accentuate quantitative information. With open-ended questions, you get more meaningful user research data.

Closed-ended questions, on the other hand, provide quantitative data: more limited insight, but easy to analyze and compile into reports. Market researchers often add commentary to this kind of data to provide readers with background and further food for thought.

Here are the main differences with examples of open-ended and closed-ended questions:

  • Open-ended questions: qualitative, contextual, personalized, exploratory
  • Closed-ended questions: quantitative, data-driven, manufactured, focused

For example, an open-ended question might be: “What do you think of statistical analysis software?”.

Whereas closed-ended questions would simply be: “Do you use statistical analysis software?” or “Have you used statistical analysis software in the past?”.

Open-ended questions afford much more freedom to respondents and can result in deeper and more meaningful insights. A closed question can be useful and fast, but doesn’t provide much context. Open-ended questions are helpful for understanding the “why”.

When and why should you use an open-ended question?

Open-ended questions are great for going more in-depth on a topic. Closed-ended questions may tell you the “what,” but open-ended questions will tell you the “why.”

Another benefit of open-ended questions is that they allow you to get answers from your respondents in their own words. For example, it can help to know the language that customers use to describe a product or feature, so that the company can match that language in its product description to increase discoverability.

Open-ended questions can also help you to learn things you didn’t expect, especially as they encourage creativity, and get answers to slightly more complex issues. For example, you could ask the question “What are the main reasons you canceled your subscription?” as a closed-ended question by providing a list of reasons (too expensive, don’t use it anymore). However, you are limited only to reasons that you can think of. But if you don’t know why people are canceling, then it might be better to ask as an open-ended question.

You might ask open-ended questions when you are doing a pilot or preliminary research to validate a product idea. You can then use that information to generate closed-ended questions for a larger follow-up study.

However, it can be wise to limit the overall number of open-ended questions in a survey because they are burdensome.

In terms of what provides more valuable information, only you can decide that based on the requirements of your research study. You also have to take into account variables such as the cost and scale of your research study, as well as when you need the information. Open-ended questions can provide you with more context, but they’re also more information to sift through, whereas closed-ended questions provide you with a tidy, finite response.

If you still prefer the invaluable responses and data from open-ended questions, software like Qualtrics Text IQ can automate this complicated process. Through AI technology, Text IQ can understand sentiment and distill thousands of open-ended responses into simplified dashboards.

Learn More: Qualtrics Text IQ

Open-ended question examples

While there are no set rules on the number of open-ended questions you can ask, you'll of course want to ask open-ended questions that align with your research objective.

Here are a few examples of open-ended survey questions related to your product:

  • What do you like most about this product?
  • What do you like least about this product?
  • How does our product compare to competitor products?
  • If someone asked you about our product, what would you say to them?
  • How can we improve our product?

You could even supplement a closed-ended question with an open-ended follow-up to get more detail. For example, "How often do you use our product?" might be a multiple-choice question with simple, single-word answers such as "Frequently", "Sometimes", or "Never". If a respondent answers "Never", you could follow with: "If you have never used our product, why not?". This is a really easy way to understand why potential customers don't use your product.
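As a rough sketch of that follow-up logic, assuming a survey platform that lets you script conditional questions (only the question wording comes from the example above; the rest is illustrative):

```python
from typing import Optional

def follow_up(usage_answer: str) -> Optional[str]:
    """Return an open-ended follow-up question for a given closed-ended answer."""
    if usage_answer == "Never":
        return "If you have never used our product, why not?"
    return None  # "Frequently" and "Sometimes" get no follow-up in this sketch

print(follow_up("Never"))
print(follow_up("Frequently"))
```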

Also, incorporating open-ended questions into your surveys can provide useful information for salespeople throughout the sales process. For example, you might uncover insights that help your salespeople to reposition your products or improve the way they sell to new customers based on what existing customers feel. Though you might get helpful answers from a closed-ended question, open-ended questions give you more than a surface-level insight into their sentiments, emotions and thoughts.

It doesn't need to be complicated; it can be as simple as a single open-text question. The survey doesn't need to speak for itself; let your survey respondents say everything.

Asking open-ended questions: Crafting questions that generate the best insights

Open responses can be difficult to quantify, so framing your questions correctly is key to getting useful data from the answers. Below are some examples of open-ended questions to avoid.

1. Avoid questions that are too broad or vague

Example :  “What changes has your company made in the last five years due to external events?”

Problem : There are too many potential responses to this query, which means you'll get too broad a range of answers. What kind of changes are being referred to: economic, strategic, personnel? What external events are useful to know about? Don't overwhelm your respondent with an overly broad question – ask the right questions and get precise answers.

Solution : Target your questions with a specific clarification of what you want. For example, "What policy changes has your company made about working from home in the last 6 months as a result of the COVID-19 pandemic?". Alternatively, use a closed-ended question, or offer examples to give respondents something to work from.

2. Make sure that the purpose of the question is clear

Example :  “Why did you buy our product?”

Problem : This type of unclear-purpose question can lead to short, unhelpful answers. “Because I needed it” or “I fancied it” don’t necessarily give you data to work with.

Solution : Make it clear what you actually want to know. “When you bought our product, how did you intend to use it?” or “What are the main reasons you purchased [Our Brand] instead of another brand?” might be two alternatives that provide more context.

3. Keep questions simple and quick to answer

  Example :  “Please explain the required process that your brand uses to manage its contact center (i.e. technical software stack, approval process, employee review, data security, management, compliance management etc.). Please be as detailed as possible.”

Problem : The higher the level of effort, the lower the chances of getting a good range of responses or high quality answers. It’s unlikely that a survey respondent will take the time to give a detailed answer on something that’s not their favorite subject. This results in either short, unhelpful answers, or even worse, the respondent quits the survey and decides not to participate after seeing the length of time and effort required. This can end up causing bias with the type of respondents that answer the survey.

Solution : If you really need the level of detail, there are a few options to try. You can break up the question into multiple questions or share some information on why you really need this insight. You could offer a different way of submitting an answer, such as a voice to text or video recording functionality, or make the question optional to help respondents to keep progressing through the survey. Possibly the best solution is to change from open-ended questions in a survey to a qualitative research method, such as focus groups or one-to-one interviews, where lengthier responses and more effort are expected.

4. Ask only one question at a time

Example :  “When was the last time you used our product? How was your experience?”

Problem : Too many queries at once can cause a feeling of mental burden in your respondents, which means you risk losing their interest. Some survey takers might read the first question but miss the second, or forget about it when writing their response.

Solution : Only ask one thing at a time!

5. Don’t ask for a minimum word count

Example :  “Please provide a summary of why you chose our brand over a competitor brand. [Minimum 50 characters].”

Problem : Even though making a minimum word count might seem like a way to get higher quality responses, this is often not the case. Respondents may well give up, or type gibberish to fill in the word count. Ideally, the responses you gather will be the natural response of the person you’re surveying – mandating a word count impedes this.

Solution : Leave off the word count. If you need to encourage longer responses, you can expand the text box size to fit more words in. Offer speech to text or video recording options to encourage lengthier responses, and explain why you need a detailed answer.

6. Don’t ask an open-ended question when a closed-ended question would be enough  

Example :  “Where are you from?”

Problem : It’s harder to control the data you’ll collect when you use an open question when a closed one would work. For example, someone could respond to the above question with “The US”, “The United States” or “America”.

Solution : To save time and effort on both your side and the participant’s side, use a drop-down with standardized responses.
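If you do end up with free-text answers like "The US", "The United States", or "America", a small normalization map can consolidate the variants before analysis (a toy sketch; the mapping below is illustrative, not exhaustive):

```python
# Map common free-text variants (illustrative) to one canonical label.
CANONICAL = {
    "the us": "United States",
    "the united states": "United States",
    "america": "United States",
    "usa": "United States",
}

def normalize_country(raw: str) -> str:
    key = raw.strip().lower()
    return CANONICAL.get(key, raw.strip())

print(normalize_country("The US"))    # United States
print(normalize_country("America"))   # United States
print(normalize_country("Canada"))    # Canada (unmapped values pass through)
```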

7. Limit the total number of open-ended questions you ask  

Example :  “How do you feel about product 1?” “How do you feel about product 2?” “How do you feel about product 3?”

Problem : An open question requires more thought and effort than a closed one. Respondents can usually answer 4-6 closed questions in the same time as only 1 open one, and prefer to be able to answer quickly.

Solution : To reduce survey fatigue, lower drop-off rates, and save costs, only ask as many questions as you think you can get an answer for. Limit open-ended questions to ones where you really need context. Unless your respondents are highly motivated, keep it to 5 open-ended questions or fewer. Space them out to keep drop-offs to a minimum.

8. Don’t force respondents to answer open-ended questions

Example :  “How could your experience today have been improved? Please provide a detailed response.”

Problem : A customer may not have any suggestions for improvements. By requiring an answer, though, the customer is now forced to think of something that can be improved even if it would not make them more likely to use the service again.  Making these respondents answer means you risk bias. It could lead to prioritizing unnecessary improvements.

Solution : Give respondents the option to say “No” or “Not applicable” or “I don’t know” to queries, or to skip the question entirely.

How to analyze the results from open-ended questions

Step 1: Collect and structure your responses

Online survey tools can simplify the process of creating and sending questionnaires, as well as gathering responses to open-ended questions. These tools often have simple, customizable templates to make the process much more efficient and tailored to your requirements.

Some solutions offer different targeting variables, from geolocation to customer segments and site behavior. This allows you to offer customized promotions to drive conversions and gather the right feedback at every stage in the online journey.

Upon receipt, your data should be in a clear, structured format and you can then export it to a CSV or Excel file before automatic analysis. At this point, you’ll want to check the data (spelling, duplication, symbols) so that it’s easier for a machine to process and analyze.
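For instance, a minimal cleaning pass before analysis might look like this (a sketch in pandas; the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical CSV export from your survey tool.
df = pd.read_csv("survey_export.csv")

# Light cleanup so the text is easier for a machine to process.
df["response"] = (
    df["response"]
    .astype(str)
    .str.strip()                                     # stray whitespace
    .str.replace(r"[^\w\s.,!?'-]", "", regex=True)   # odd symbols and encoding debris
)

# Drop duplicate submissions from the same respondent to the same question.
df = df.drop_duplicates(subset=["respondent_id", "question_id"])

df.to_csv("survey_clean.csv", index=False)
```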

Step 2: Use text analytics

One method that's increasingly applied to open-ended responses is automation. These new tools make it easy to extract data from open-text question responses with minimal human intervention. They make an open-ended question response as accessible and easy to analyze as that of a closed question, but with more detail provided.

For example, you could use automated coding via artificial intelligence to look into buckets of responses to your open-ended questions and assign them accordingly for review. This can save a great deal of time, but the accuracy depends on your choice of solution.

Alternatively, you could use sentiment analysis — a form of natural language processing — to systematically identify, extract and quantify information. With sentiment analysis, you can determine whether responses are positive or negative, which can be really useful for unstructured responses or for quick, large-scale reviews.

Some solutions also offer custom programming so you can apply your own code to analyze survey results, giving complete flexibility and accuracy.
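As a deliberately simple illustration of applying your own code, here is a toy lexicon-based sentiment pass over open-text answers; the word lists and responses are made up, and a dedicated text-analytics tool would use far more sophisticated models:

```python
# Toy word lists; a production tool would use a trained model or a much larger lexicon.
POSITIVE = {"love", "great", "easy", "helpful", "fast"}
NEGATIVE = {"hate", "slow", "confusing", "broken", "expensive"}

def crude_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

answers = [
    "The dashboard is great and easy to use",
    "Setup was slow and confusing",
]
for answer in answers:
    print(crude_sentiment(answer), "-", answer)
```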

Step 3: Visualize your results

With the right data analysis and visualization tools, you can see your survey results in the format most applicable to you and your stakeholders. For example, C-Suite may want to see information displayed using graphs rather than tables — whereas your research team might want a comprehensive breakdown of responses, including response percentages for each question.

This might be easier for a survey with closed-ended questions, but with the right analysis of open-ended responses, you can easily collate response data in a form that's easy to quantify.

With the survey tools that exist today, it’s incredibly easy to import and analyze data at scale to uncover trends and develop actionable insights. You can also apply your own programming code and data visualization techniques to get the information you need. No matter whether you’re using open-ended questions or getting one-word answers in emojis, you’re able to surface the most useful insights for action.
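For instance, a quick way to produce the response percentages mentioned above (a pandas sketch with hypothetical column names; the chart call assumes matplotlib is installed):

```python
import pandas as pd

# Hypothetical cleaned export with one row per answer.
df = pd.read_csv("survey_clean.csv")

# Response percentages for one closed-ended question.
shares = (
    df.loc[df["question_id"] == "q1_usage", "answer"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(shares)

# Simple bar chart for stakeholders (requires matplotlib).
shares.plot(kind="bar", title="How often do you use our product? (%)")
```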

Ask the right open-ended questions with Qualtrics

With Qualtrics' survey software, used by more than 13,000 brands and 99 of the top 100 business schools, you can get answers to the most important market, brand, customer, and product questions with ease. Choose from a huge range of question types (both open-ended and closed-ended) and tailor your survey to get the most in-depth responses to your queries.

You can build a positive relationship with your respondents and get a deeper understanding of what they think and feel with Qualtrics-powered surveys. The best part? It’s completely free to get started with.




Questionnaire Design | Methods, Question Types & Examples


A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.


A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives , placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias , including sampling bias , ascertainment bias , and undercoverage bias .


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

Example response options for a nominal question about race or ethnicity:

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio scales , you can apply strong statistical hypothesis tests to address your research aims.
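As a small illustration of forming such a composite score (the item names and 1–5 codings below are hypothetical):

```python
import pandas as pd

# Four Likert-type items coded 1 (strongly disagree) to 5 (strongly agree), made-up data.
df = pd.DataFrame({
    "item_1": [4, 2, 5],
    "item_2": [5, 1, 4],
    "item_3": [4, 2, 4],
    "item_4": [3, 2, 5],
})

# Average the items to form a composite score that can be treated as interval data.
df["composite"] = df[["item_1", "item_2", "item_3", "item_4"]].mean(axis=1)
print(df)
```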

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

  • Positive frame: Should protests of pandemic-related restrictions be allowed?
  • Negative frame: Should protests of pandemic-related restrictions be forbidden?

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counter argument within the question as well.

  • Unbalanced: Do you favor…? → Balanced: Do you favor or oppose…?
  • Unbalanced: Do you agree that…? → Balanced: Do you agree or disagree that…?

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents.

For example, a double-barreled question might ask whether respondents agree that the government should be responsible for providing clean drinking water and high-speed internet to everyone. This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead – but neither of these options captures their true view.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

A parallel item can then ask the same question about clean drinking water.

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
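One minimal way to implement per-respondent randomization in code (a sketch; the question list is hypothetical, and seeding with the respondent ID keeps each person's assigned order reproducible):

```python
import random

# Hypothetical question bank.
QUESTIONS = ["Q1: ...", "Q2: ...", "Q3: ...", "Q4: ..."]

def question_order(respondent_id: str) -> list:
    """Return a per-respondent question order; seeding makes it reproducible."""
    rng = random.Random(respondent_id)
    order = list(QUESTIONS)
    rng.shuffle(order)
    return order

print(question_order("respondent-001"))
print(question_order("respondent-002"))
```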

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .


A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.



28 tips for creating great qualitative surveys.




Qualitative surveys ask open-ended questions to find out more, sometimes in preparation for doing quantitative surveys. Test surveys to eliminate problems.

Sooner or later, most UX professionals will need to conduct a survey. Survey science from the quantitative side can be intimidating because it’s a specialized realm full of statistics, random selection, and scary stories of people going wrong with confidence. Don’t be afraid of doing qualitative surveys, though. Sure, it’s important to learn from survey experts, but you don’t have to be a survey specialist to get actionable data. You do have to find and fix the bugs in your questions first, however.


Quantitative surveys count results : how many people do this vs. do that (or rather, how many say that they do this or that). Use quant surveys when you need to ask questions that can be answered by checkbox or radio button, and when you want to be sure your data is broadly applicable to a large number of people. Quantitative surveys follow standard methods for randomly selecting a large number of participants (from a target group) and use statistical analysis to ensure that the results are statistically significant and representative for the whole population.

Qualitative surveys ask open-ended questions . Use them when you need to generate useful information via a conversation rather than a vote, such as when you’re not sure what the right set of answers might include. Qualitative surveys ask for comments, feedback, suggestions, and other kinds of responses that aren’t as easily classified and tallied as numbers can be. You can survey fewer people than in a quantitative survey and get rich data.

It’s possible to mix the two kinds of surveys, and it’s especially useful to do small, primarily qualitative surveys first to help you generate good answers to count later in a bigger survey. This one-two-punch strategy is much preferable to going straight to a closed-ended question with response categories you and your colleagues thought up in your conference room. (Yes, you could add an “other” option, but don’t count on valid statistics for options left to a catch-all bucket.)

Unordered lists of answer options can be more time-consuming to look through than lists with an obvious ordering principle, but they seem to yield better answers, especially if you can sort the list differently for different respondents.

  • Draft questions and get feedback from colleagues.
  • Draft survey and get colleagues to attempt to answer the questions. Ask for comments after each question to help you revise questions toward more clarity and usefulness.
  • Revise the survey and test it iteratively on paper. We typically do 4 rounds of testing, with 1 respondent per round. At this stage, don't rely on colleagues, but recruit participants from the target audience. Revise between each round. Run these tests as think-aloud studies; do not send out the survey and rely on written comments, because they will never be the same as a real-time stream of commentary.
  • Randomize some sections and questions of the survey to help ensure that (1) people quitting partway through don’t affect the overall balance of data being collected, and (2) the question or section ordering doesn’t bias people’s responses.
  • Test the survey-system format with a small set of testers from the target audience, again collecting comments on each page.
  • Examine the output from the test survey to ensure the data gathered is in an analyzable, useful format.
  • Revise the survey one more time.
  • Don’t make your own tool for surveys if you can avoid it . Many solid survey platforms exist, and they can save you lots of time and money.
  • Decide up front what the survey learning goals are . What do you want to report about? What kind of graphs and tables will you want to deliver?
  • Write neutral questions that don’t imply particular answers or give away your expectations .
  • Open vs. closed answers : Asking open-ended questions is the best approach, but it’s easy to get into the weeds in data analysis when every answer is a paragraph or two of prose. Plus, users quickly tire of answering many open-ended questions, which usually require a lot of typing and explanation. That being said, it’s best to ask open-ended questions during survey testing . The variability of the answers to these questions during the testing phase can help you decide whether the question should be open-ended in the final survey or could be replaced with a closed-ended question that would be easier to answer and analyze.
  • Carefully consider how you will analyze and act on the data . The type of questions you ask will have everything to do with the kind of analysis you can make: multiple answers, single answers, open or closed sets, optional and required questions, ratings, rankings, and free-form answer fields are some of the choices open to you when deciding what kinds of answers to accept. (If you won’t act on the data, don’t ask that question. See guideline #12.)
  • Multiple vs. single answers : Often multiple-answer questions are better than single-answer ones because people usually want to be accurate, and often several answers apply to them. Survey testing on paper can help you find multiple-answer questions, because people will mark several answers even when you ask them to mark only one (and they will complain about it). If you are counting answers, consider not only how many responses each answer got, but also how many choices people made.
  • Front-load the most important questions, because people will quit partway through . Ensure that partial responses will be recorded anyway.
  • Provide responses such as, “Not applicable” and “Don’t use” to prevent people skipping questions or giving fake answers. People get angry when asked questions they can’t answer honestly, and it skews your data if they try to do it anyway.
  • People have trouble understanding required and optional signals on survey question/forms . It’s common practice to use a red asterisk “ * ” to mark required fields, but that didn’t work well enough, even in a survey of UX professionals — many of whom likely design such forms. People complained that required fields were not marked. Pages that stated at the top that all were required or optional also didn’t help, because many people ignore instruction text. Use “(Optional)” and/or “(Required)” after each question, to be sure people understand.
  • When marking is not clear enough, many people feel obligated to answer optional questions . Practically speaking that means you don’t have to require every question, but you should be careful not to include so many questions that people quit the survey in the middle.
  • Keep it short . Every extra question reduces your response rate, decreases validity, and makes all your results suspect. Better to administer 2 short surveys to 2 different subsamples of your audience than to lump everything you want to know into a long survey that won’t be completed by the average customer. 20 questions are too many unless you have a highly motivated set of participants. People are much more likely to participate in 1-question surveys. Be sensitive to what your pilot testers tell you, and realistically estimate the time to complete the survey. The more open-ended questions and complex ranking you ask people to do, the more you’ll lose respondents.
  • People often overlook examples and instructions that are on the right , after questions. Move instructions and examples to the left margin instead (or the opposite side, for languages that read right to left), to put them in the scannability zone and place them closer to the person’s focus of attention, which is on the answer area.
  • Use one-line directions if you can. Less is more. Just as in our original writing for the web studies , people read more text when there is a lot less of it. People complain about not getting enough information, but when it’s there they don’t read it because it’s too long.
  • People tend not to read paragraphs or introductions . If you must use a paragraph, bold important ideas to help ensure that most people, who scan instead of reading , glean that information.
  • Think carefully about using subjective terms , such as “essential,” “useful,” or “frequent.” Terms that cause people to make a judgment call may get at how they feel, but such questions can be confusing to evaluate logically. Ratings scales are more flexible. If you do need to know how participants perceive a certain aspect, indicate that’s what you want them to base their answer on (for example, instead of asking “Is X essential for Y?” say “Do you feel that X is essential for Y?”).
  • Define terms as needed in order to start from a shared meaning. People might quibble about the definition, but it’s better than getting misleading answers because of a misinterpretation.
  • Don’t ask about things that your analytics can tell you . Ask why and how questions.
  • Include a survey professional in your test group . Your survey method may be criticized after the fact, so get expert advice before you conduct your survey.
  • Items at the top and bottom of lists may attract more attention than items in the middle of long lists.
  • Because people scan instead of read, the first words of items in lists can cause them to overlook the right choice, especially in alphabetical lists.
  • Test where best to place page breaks. Sometimes it’s important for people to be able to see all the topic’s questions before they answer one. Otherwise they volunteer answers for the questions they have not yet seen and write, “see previous answer” later, which adds extra interpretation steps in data analysis. To find questions with these kinds of problems, you can test the survey with each question on its own page first, and then collocate the questions that need to be shown together on one page in the next test version. In some cases, simply forcing one question to come before another one can fix these problems.
  • If possible, don’t annoy people by asking questions that don’t apply to them . When respondents choose a particular answer, show them one or two more questions about that topic that would be applicable in that case. Choose a survey platform that allows conditional questions, so you can avoid presenting nonapplicable questions and keep your list of questions as short as possible for each respondent. If most of your questions are conditional, you might be able to put a key conditional question early in the list, then branch to different versions of the survey for the rest of the questions.
  • Take your data with a grain of salt . Unlike for quantitative surveys, qualitative survey metrics are rarely representative for the whole target audience; instead, they represent the opinions of the respondents. You can still present descriptive statistics (such as how many people selected a specific response to a multiple-choice question) to summarize the results of the survey, but, unless you use sound statistics tools, you cannot say whether these results are the result of noise or sample selection, as opposed to truly reflecting the attitudes of your whole user population.
  • Count whatever you can count . Researchers often refer to coding and normalizing data during analysis. Coding data is the process of making text answers into something you can count, so you can extract the bigger trends and report them in a way that makes sense to your report audience. You can capture rich textual data for understanding and quoting, and code some types of responses as 0, 1, or 2 (no, partially, yes; or none, some, all) for example, or you may be able to define many different phrases as meaning the same thing (for example when people use synonyms or express the same ideas). This coding can be done after the data is collected, in a spreadsheet.
  • Show, don’t tell . Use lots of graphs, charts, and tables, with an executive summary of key takeaways.
  • Consider graphs before you decide on a spreadsheet layout . Unfortunately some spreadsheets won’t make reasonable graphs until you switch columns to rows or rows to columns. It’s easiest to plan for this necessity before you analyze your data. It’s also possible to take the chart data, put it on its own spreadsheet page, and then reorder it to make the charts. Just be careful not to make data transfer errors.
  • Beware of disappearing chart data . Some spreadsheets hide data in charts silently when font-size changes or chart-size changes are made.
  • Don’t embed data if you can screenshot it . Screenshots (PNG format is recommended) are lovely and robust over time, unlike embedded data, which tends to cause document corruption, become unlinked, or could be changed by mistake.

Qualitative surveys are tools for gathering rich feedback. They can also help you discover which questions you need to ask and the best way to ask them, for a later quantitative survey. Improve surveys through iterative testing with open-ended feedback. Test surveys on paper first to save time-consuming rework in the testing platform. Then test online to see the effects of page order and question randomization and to gauge how useful the automated results data may be.



Quantitative Survey Questions: Definition, Types and Examples


Quantitative Survey Questions: Definition


Quantitative survey questions are defined as objective questions used to gain detailed insights from respondents about a survey research topic. The answers received for these questions are analyzed, and a research report is generated on the basis of this data. These questions form the core of a survey and are used to gather numerical data to determine statistical results.

The primary stage before conducting an online survey is to decide the objective of the survey. Every research effort should answer this integral question: "What are the expected results of your survey?" Once that answer is figured out, the secondary stage is deciding the type of data required: quantitative or qualitative.


Deciding the data type indicates the type of information required from the research process. While qualitative data provides detailed, descriptive information about the subject, quantitative data provides precise, measurable information.

Quantitative survey questions are thus channels for collecting quantitative data. The feedback they gather relates to, is measured by, or measures a "quantity" or a statistic, not the "quality" of the parameter.


Types of Quantitative Survey Questions with Examples

Quantitative survey questions should give respondents a clear way to answer accurately. On the basis of this factor, quantitative survey questions are divided into three types:

1. Descriptive Survey Questions: Descriptive survey questions are used to gain information about a variable or multiple variables and to associate a quantity with each variable.

It is the simplest type of quantitative survey question and helps researchers quantify the variables by surveying a large sample of their target market.


The most widely implemented descriptive questions start with "What is...", "How much...", "What is the percentage of...", and similar phrasings. A popular example of a descriptive survey is an exit poll, which contains a question such as "What percentage of voters favors candidate X in this election?", or, in a demographic segmentation survey, "How many people between the ages of 18 and 25 exercise daily?"


Other examples of descriptive survey questions are:

  • Variable: Cuisine
  • Target Group: Mexicans
  • Variable: Facets that transform career decisions
  • Target Group: Indian students
  • Variable: Number of citizens looking for better opportunities
  • Target Group: Chinese citizens

In every example mentioned above, researchers should focus on quantifying the variable. The only factor that changes is the parameter of measurement. Every example mentions a different quantitative sample question which needs to be measured by different parameters.


The answers to descriptive survey questions are definitional for the research topic and quantify the topics of analysis. Usually, a descriptive study will require a long list of descriptive questions, but experimental or relationship-based research will be effective with a couple of descriptive survey questions.


2. Comparative Survey Questions: Comparative survey questions are used to establish a comparison between two or more groups on the basis of one or more dependent variables. These quantitative survey questions begin with "What is the difference in [dependent variable] between [two or more groups]?" This template is enough to show that the main objective of comparative questions is to establish a comparative relationship between the groups under consideration.


Comparative survey question examples:

  • Dependent Variable: Cuisine preferences
  • Comparison Groups: Mexican adults and children
  • Dependent Variable: Factors that transform career decisions
  • Comparison Groups: Indian and Australian students
  • Dependent Variable: Political notions
  • Comparison Groups: Asian and American citizens

The various groups mentioned in the options above are the independent variables (for example, Mexican adults and children, or the students' country). These independent variables could be based on gender, ethnicity, or education. It is the dependent variable that determines the complexity of comparative survey questions.


3. Relationship Survey Questions: Relationship survey questions are used to understand the association, trends, and causal relationships between two or more variables. The term relationship (or causal) survey question should be used carefully, since establishing cause and effect between two or more variables is the province of a specific, widely used research design: experimental research. These questions start with "What is the relationship between [or among]...", followed by the independent variables [such as gender or ethnicity] and the dependent variables [such as career or political beliefs]?

  • Dependent Variable: Food preferences
  • Independent Variable: Age
  • Relationship groups: Mexico
  • Dependent Variable: University admission
  • Independent Variable: Family income
  • Relationship groups: American students
  • Dependent Variable: Lifestyle
  • Independent Variable: Socio-economic class, ethnicity, education
  • Relationship groups: China


How to Design Quantitative Survey Questions

There are four critical steps to follow while designing quantitative survey questions:

1. Select the type of quantitative survey question: The objective of the research should be reflected in the chosen question type. Selecting the appropriate type (descriptive, comparative, or relationship-based) up front keeps the survey clear for respondents.

2. Identify the dependent and independent variables along with the target group(s): Irrespective of the selected question type (descriptive, comparative, or relationship-based), researchers should decide on the dependent and independent variables as well as the target audiences.


There are four levels of measurement, one of which is chosen when creating a quantitative survey question: nominal variables indicate names (categories) only; ordinal variables indicate names and order; interval variables indicate name, order, and an established interval between ordered values; and ratio variables indicate name, order, an established interval, and an absolute zero value.

A variable can not only be calculated but also can be manipulated and controlled. For descriptive survey questions, there can be multiple variables for which questions can be formed. In the other two types of quantitative survey questions (comparative and relationship-based), dependent and independent variables are to be decided. Independent variables are those which are manipulated in order to observe the change in the dependent variables.
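As a small illustration of these four measurement levels, the sketch below pairs each level with a hypothetical survey variable and the summary statistics it typically supports. The example variables are ours, not from the article.

```python
# Hypothetical mapping of measurement levels to example survey variables.
MEASUREMENT_LEVELS = {
    "nominal":  {"example": "preferred cuisine",          "summaries": ["counts", "mode"]},
    "ordinal":  {"example": "satisfaction (low to high)", "summaries": ["counts", "mode", "median"]},
    "interval": {"example": "agreement on a 1-7 scale",   "summaries": ["mean", "standard deviation"]},
    "ratio":    {"example": "minutes of daily exercise",  "summaries": ["mean", "ratios (true zero)"]},
}

for level, info in MEASUREMENT_LEVELS.items():
    print(f"{level:>8}: e.g. {info['example']} -> {', '.join(info['summaries'])}")
```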


3. Choose the right structure according to the selected type of quantitative survey question: As discussed in the previous section, each question type has an appropriate structure, and the intention behind the question should align with that structure.


This structure specifies (1) the variables, (2) the groups, and (3) the order in which the variables and groups should appear in the question.

4. Address any remaining roadblocks to create a thorough survey question: Once the right structure is in place, analyze how easy the questions are to read. Will respondents be able to understand them easily? Confirm this before finalizing the quantitative survey questions.
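To make the variables-groups-order structure concrete, here is a minimal sketch that assembles the three question types from the templates quoted earlier. The function names and example values are illustrative only and are not part of the original article.

```python
def descriptive_question(variable: str, group: str) -> str:
    # "What is / How much ..." pattern for a single variable and target group.
    return f"How much {variable} do {group} report?"

def comparative_question(dependent_variable: str, groups: list[str]) -> str:
    # "What is the difference in [dependent variable] between [two or more groups]?"
    return f"What is the difference in {dependent_variable} between {' and '.join(groups)}?"

def relationship_question(independent: str, dependent: str, group: str) -> str:
    # "What is the relationship between [independent] and [dependent] ...?"
    return f"What is the relationship between {independent} and {dependent} among {group}?"

print(comparative_question("cuisine preferences", ["Mexican adults", "Mexican children"]))
print(relationship_question("family income", "university admission", "American students"))
```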






Question Evaluation: Collaborating Center for Questionnaire Design and Evaluation Research (CCQDER), National Center for Health Statistics

  • Question evaluation is a crucial element in developing quality health surveys.
  • Evaluation findings help improve survey data collection by increasing question validity and reducing response and non-response error.
  • Quality question evaluation should be empirical, transparent, and systematic.


Question evaluation is an important part of developing a good survey. Question evaluation also supports researchers who analyze data collected from surveys.

Evaluating survey questions reduces the risk that respondents will answer a question incorrectly because they don't understand the question (response error). It also reduces the risk that respondents won't answer a question at all (non-response error).

Reducing these risks also helps ensure that data can be compared across groups. If people in different groups understand questions differently, their responses won't mean the same things and shouldn't be compared.

High-quality question evaluation should be empirical, transparent, and systematic.

Empirical

Empirical question evaluation is based on directly observed evidence, not on expert opinion.

Transparent

Methods and analyses are clearly documented, so all findings and conclusions can be traced back to the original data. This transparency increases the credibility of the findings from question evaluation studies.

Systematic

Applying procedures consistently and strictly when collecting and analyzing data helps to ensure findings aren't inaccurate or misleading because of bias. Bias occurs when study procedures and practices make it more likely the study will produce some possible results and not others.


Evaluation findings can tell us if our questions will produce accurate responses and get us the data we want. This is called question performance.

Does the question capture the intended construct?

Ideally, questions should accurately present the concepts and topics that the survey researchers and designers want to study. These are the intended constructs. Knowing if questions are not presenting the intended constructs is important so questions can be changed before they are used.

A thorough study of question performance allows researchers to understand the potential range of constructs the data may represent. This is how question evaluation can enhance question validity. Validity is the extent to which a question measures what you wanted it to measure.

Is the question interpreted consistently across respondent groups?

Response patterns can differ across respondent groups. Respondents may interpret a question differently based on their personal experiences and characteristics. For example, respondents with different education levels might understand or interpret the terms used in a question differently.

Looking at response patterns across respondent groups can help researchers better understand and reduce subgroup bias. Subgroup bias occurs when people in different groups are more likely to interpret questions differently, while people in the same group are more likely to interpret them similarly. When this happens, data can't be compared across groups because the data may not mean the same thing from group to group.

Is the question difficult for respondents to answer?

There are many possible reasons why respondents might have difficulty answering a question. The question might be confusing, or the respondent might want to give the answer they think will make them look good (social desirability bias). Findings from question evaluation studies can reduce item non-response when findings are used to revise questions so that respondents are willing and able to answer.


Policy requirements

In Section 1.4 of The Standards and Guidelines for Statistical Surveys, the Office of Management and Budget (OMB) states:

"Agencies must ensure that all components of a survey function as intended when implemented in the full-scale survey and that measurement error is controlled by conducting a pretest of the survey components or by having successfully fielded the survey components on a previous occasion."

OMB oversees and coordinates the federal statistical system. The National Center for Health Statistics (NCHS) is part of the federal statistical system. CCQDER is an NCHS program.

NCHS collects, analyzes, and disseminates timely, relevant, and accurate health data and statistics to inform the public and guide program and policy decisions to improve our nation’s health.


An ultimate guide to writing a good survey paper: key strategies and tips


A survey paper is often the first step in discovery and advancement in various academic disciplines. It is an efficient way to summarize existing findings and develop a better understanding of any subject. Yet most students don't feel comfortable taking on such a challenge on the first try and turn to college paper help instead. One way or another, you should still know what a survey paper is and how to recognize a job well done. We are ready to show you how to write a survey paper so it never becomes a problem. Follow this guide to learn everything you need to know.

What is a survey paper

Survey papers occupy a unique niche among research paper types. Unlike their more practical, experiment-driven counterparts, survey papers work in more theoretical realms. They aim to summarize and analyze the existing scientific research on a specific topic. Students with a survey paper assignment must review the vast amount of information found across numerous scholarly articles and books and turn that information into a single, well-organized resource.

Survey papers don't require you to conduct experiments, express personal opinions, or build convincing arguments. Here, you collect and rely on data already reported in the original sources. Students learn to explore existing sources of information, extract their value, and apply it to the field of their study. So, what is a survey paper? In essence, survey papers close the gap between theory and experiment, focusing on areas where more data is needed and providing the foundation for future research and discovery.

For students on academic research journeys, mastering the art of crafting a survey paper is essential. In research, a survey paper helps you explore a topic by mapping out the existing knowledge. It doesn't involve experiments but acts as a compass, pointing you toward the most relevant research and highlighting key findings. This process enhances your research, critical thinking, and analytical skills. It helps you go deeper into your chosen field and discover valuable knowledge hidden in past works.

Students proficient in survey paper writing will also improve their ability to skim large quantities of text, discover patterns and trends, and pinpoint areas with research and academic potential. Such powerful skills contribute to students' overall academic performance, research abilities, and progress within their field of study.

Survey paper preparation 

Here's your toolkit on how to write a survey paper.

Topic selection

The first step is picking a topic for a survey paper that sparks your intellectual curiosity and aligns with your broader academic goals. Ideally, the topic should have a lot of existing research, providing a solid foundation for your survey. Don't hesitate to seek guidance from professors, librarians, or fellow researchers to find a topic that excites you and has a strong research base.

Preliminary research

Before you begin your planning, you need to explore the area. Start with a broad yet focused search to gain a general understanding of the chosen topic for your paper survey. Study online academic databases, relevant academic journals in your field, and reliable textbooks. This initial exploration helps you refine your research question and identify key areas within the broader topic that deserve a deeper look.

Analyzing existing literature

Once you have a more specific direction, it's time to explore the "base camp" of existing scholarly work. Use online databases and search engines to find relevant peer-reviewed articles, books, and credible online resources. As you examine each source, take notes, focusing on these key aspects:

  • Central arguments : Identify the main points and arguments presented in each study. Focus on the specific questions each researcher was trying to answer and the conclusions they presented.
  • Methodological tools : Learn about the different research methods used by various scholars within the field. These might include surveys, interviews, case studies, or other methods relevant to the topic of your survey paper.
  • Key findings : Extract the significant results and insights from each study. Analyze how these findings contribute to the broader understanding of the topic and identify any recurring patterns or themes. 

Theoretical frameworks

Examine the theoretical lenses used by researchers to interpret their findings. Understanding the theoretical underpinnings of each study allows for a more nuanced comprehension of the presented research. These frameworks will guide you through the complexities of the existing literature scope.

Developing a research question

What is a survey paper without the research question? This writing element lies at the heart of your survey paper. It guides your analysis and shapes the purpose of your survey paper writing. You should be able to reshape your topic into a more specific, manageable question. It should be concise, clear, and relatable.


How to write a survey paper: perfect structure.

A compelling survey paper requires a well-organized structure. Learning how to write a survey paper will enhance your delivery, your paper's readability, and the overall appearance of your text. Proper structure will guide readers through the text, unfolding each argument with every paragraph, and ensures they can easily follow your analysis and engage with the presented research. This guide shows the essential steps on how to format a survey paper and build a powerful outline, effectively introducing and analyzing existing research on your chosen topic.

Abstract

The abstract serves as your reader's first impression, so it's crucial to craft a clear and informative one. Here are some insights and tips to help you write a compelling survey paper.

  • Focus : Briefly introduce the topic, research question, and key findings.
  • Start with context : Briefly set the stage for your research question.
  • Clear question : State your research question, making the center of attention. 
  • Highlight findings : Briefly mention the main insights from your analysis.
  • Target audience : Tailor the language to your reader.
  • Conciseness is key : Aim for clarity within the word limit. Use every word to convey your message.

Introduction

A strong introduction for your survey paper is both a captivating hook and a roadmap for your analysis. Start by grabbing the reader's attention with a thought-provoking question, a surprising statistic, or a historical anecdote related to your topic. Next, provide a concise overview of the field, highlighting its current facts and data and focusing on its broader significance within academia.  

Finally, establish the central research question of your survey paper. This question becomes the focal point of your analysis, ensuring your reader understands the specific area of knowledge you are exploring within the existing research.

Literature Review

The literature review acts as the core of your survey paper. Here, you begin your journey into existing research, analyzing and synthesizing findings from various scholarly sources. This section allows you to demonstrate your understanding of the current research landscape and lay the foundation for your own analysis.

  • Organize for clarity : Structure your review to optimize clarity and highlight the key themes and arguments within the literature. You can organize thematically, focusing on specific topics or concepts within your research area. Alternatively, you might choose a chronological or methodological approach, depending on what best facilitates a clear understanding of the research evolution or the methods employed by various scholars.
  • Analyze, don't summarize : Go beyond a mere summary of each study. Engage in critically analyzing your texts.
  • Explain research methods : Explain scholars' different research methodologies in your field. This might include surveys, interviews, content analysis, or other methods relevant to your topic. Understanding the diverse approaches gives you a more nuanced perspective on the research landscape.
  • Focus on key findings : Extract the significant results and insights from each study. Analyze how these findings contribute to the broader understanding of the topic of your survey paper and identify any frequent patterns or themes. 

Discussion

The discussion section bridges your literature review and your concluding remarks. Here, you move beyond summarizing findings and start a conversation with the existing research. You can't learn how to write a survey paper without synthesizing the key themes and insights extracted from your analysis. Highlight any areas of agreement or disagreement amongst scholars. Such work demonstrates your understanding of the complexities within the research landscape.

Additionally, identify any gaps in knowledge or areas of conflicting opinions. These areas identify potential ideas for future research. Remember, while supporting your argument, keep the language simple and concise with easy-to-follow logical transitions. 

Conclusion

The conclusion should close your survey paper, leaving a lasting impression on the reader. Here's how to craft a strong conclusion.

  • Restate and summarize : Briefly remind the reader of your research question and summarize the key findings you've presented throughout your analysis.
  • Reiterate the significance : Emphasize the importance of understanding the existing research base within your chosen topic. Explain how this knowledge contributes to a broader understanding of the field.
  • Future directions : Briefly suggest potential avenues for future research. Drawing on the gaps in knowledge you identified in your discussion, propose areas where further exploration could be fruitful.

Top 15 topic ideas for your survey paper

Sometimes, the hardest part of writing survey papers is finding a topic of great interest and importance. Here are 15 ideas to consider during your next attempt at writing a research paper.

  • The Rise of Citizen Science: Exploring Public Participation in Research
  • How Algorithms Shape Our Online Experiences
  • The Future of Work: Automation, Remote Work, and the Evolving Job Market
  • How Video Games Impact Learning and Development
  • Environmental Concerns and Sustainability Practices
  • The Evolving Landscape of News Media and Information Consumption
  • Exploring the Rise of Collaborative Consumption
  • The Power of Personal Branding
  • Exploring Adaptive Education Technologies During the Age of Personalized Learning
  • Exploring the Rise of Online Education
  • The Ethical Dilemma of Artificial Intelligence: Exploring Public Perceptions
  • How Technology Empowers Older Adults
  • Exploring Practices for Stress Reduction
  • Exploring Sustainable and Experiential Travel Trends
  • Exploring the Impact of Narrative in Social Movements

Following these steps and embracing a critical and analytical approach will prepare you to craft a compelling survey paper. Remember, your paper serves as a valuable resource for navigating the existing studies within your chosen field. It doesn't just show what's already known. It also helps pave the way for new discoveries.


Opinion: Too many Americans support political violence. It’s up to the rest of us to dissuade them


Donald Trump’s shooting wasn’t a complete surprise. While no one could have predicted the specifics, researchers and policy experts have been concerned for years that this election season would bring an outbreak of political violence. For weeks, I’d been counting us lucky as each day passed without it happening.

America saw political violence following its last presidential election, after all. At the time, one expert put it this way: "A lot of people want to see January 6 as the end of something. I think we have to consider the possibility that this was the beginning of something." And for more than a decade, political figures (including Trump) have engaged in rhetoric that seems to endorse and promote violence, reflecting on the need for "2nd Amendment remedies" or the possibility of a "bloodbath" if election results are not to their liking.


Fortunately, there is a growing body of research on what leads to political violence, who’s most at risk for committing it and how it might be prevented. Here at UC Davis, we have been conducting a large, annual, nationally representative survey of American adults on all those topics since 2022. We’re contacting the same people each year, which allows us to measure real change over time.

In 2022, to our great concern, 33% of American adults thought physical violence was usually or always justified to advance at least one political objective (we provided a list of objectives for them to consider), and 14% strongly expected civil war in the next few years. But both percentages fell in 2023, to 25% for justification of violence and 6% for an expectation of civil war. That good news comes with a caveat: 2023 was not a federal election year. A first look at our 2024 data suggests, though, that there has not been a rebound this year. There was more good news: In both 2022 and 2023, the great majority (about 70%) of people who thought violence was justified were unwilling to participate in it themselves.


Not all the news was good. Of those who strongly expected civil war in 2023, 39% also strongly agreed with the statement that “the United States needs a civil war to set things right.” In both years, between 1% and 2% of all respondents thought it very or extremely likely that at some future time they would shoot someone to advance a political objective. That’s a very small percentage (and survey estimates of small percentages can be unreliable), but with more than 250 million adults in the United States, 1% of survey respondents would extrapolate to 2.5 million people.

We learned that, not surprisingly, support for political violence was not evenly distributed across the population. Among the groups more likely to support such violence (and, in most cases, more willing to participate in it) were men, young people, Republicans (and MAGA Republicans in particular), those who endorsed many forms of fear and loathing (racism, sexism, homophobia, transphobia, xenophobia, Islamophobia and antisemitism) and firearm owners who had assault-type rifles, purchased firearms during the COVID-19 pandemic or frequently carried weapons in public.

This research arose from the assumption that violence, including political violence, is a health problem. (If it isn't, as a federal health official said 30 years ago, then why are so many people dying from it?) Participation in violence is, therefore, a health behavior.

That understanding helps to translate the research on political violence into recommendations for prevention, which are based on strong evidence that individuals’ health behaviors are influenced by the opinions and behaviors of the people around them. Prevention is a worthy goal; although we can’t eliminate political violence, we can minimize it.

The great majority of us who reject violence must become agents for change. Our declarations that political violence is unacceptable, if made often enough by enough of us, can create conditions in which political violence is less likely to occur. This won’t be as easy as it sounds. It means having sometimes-awkward conversations with family, friends and members of our social networks. It can mean becoming, for this specific purpose, an influencer on social media. Where necessary, it means telling our elected officials and others in positions of leadership that their pro-violence rhetoric is unacceptable.

By itself, this won’t be enough. There are people who are committed to violence and are beyond persuasion. Law enforcement has strategies for them — but we are part of those strategies too. Any one of us might see the social media post or hear the conversation that conveys a threat to commit political violence. When that happens, we must be willing to communicate that threat to those who can do something about it.

Those of us who reject political violence aren’t mere observers of a national train wreck. We are on the train. Will we do everything we can to apply the brakes? The proper answer to that question should be: Yes.

What will your answer be?

Garen Wintemute is a distinguished professor of emergency medicine at UC Davis and director of the California Firearm Violence Research Center.


What instruments do researchers use to evaluate LXD? A systematic review study


Andrew A. Tawfik (ORCID: orcid.org/0000-0002-9172-3321), Linda Payne, Heather Ketter, and Jedidiah James

In contrast to traditional views of instructional design that are often focused on content development, researchers are increasingly exploring learning experience design (LXD) perspectives as a way to espouse a broader and more holistic view of learning. In addition to cognitive and affective perspectives, LXD includes perspectives on human–computer interaction that consist of usability and other interactions (i.e., goal-directed user behavior). However, there is very little consensus about the quantitative instruments and surveys used by individuals to assess how learners interact with technology. This systematic review explored 627 usability studies in learning technology over the last decade in terms of the instruments (RQ1), domains (RQ2), and number of users (RQ3). Findings suggest that many usability studies rely on self-created instruments, which leads to questions about reliability and validity. Moreover, additional analysis suggests usability studies are largely focused within the medical and STEM domains, with very little focus on educators' perspectives (pre-service, in-service teachers). Implications for theory and practice are discussed.


1 Literature Review

1.1 Emergence of Learning Experience Design (LXD)

A primary goal of educators (teachers, learning designers, etc.) is to consider strategies and resources that support learning among students. Within the field of instructional and learning design, this has often focused on the development of learning environments that support students’ construction of knowledge. This emphasis has often outlined theories and models that foster cognitive and socio-emotional learning outcomes as learners engage with course materials (Campbell et al., 2020 ; Ge et al., 2016 ). Recently, theorists have increasingly advocated for a more holistic view of learning, also known as “learning experience design” (LXD) (Chang & Kuwata, 2020 ). To date, there are various emergent perspectives to define elements of LXD. For example, North ( 2023 ) suggests LXD “considers the intention of the content and how it will be used in the organization to design the best experience for that use”. Others such as Floor ( 2023 ) and Schmidt and Huang ( 2022 ) extend this view to underscore the human-centered and goal-oriented nature of LXD. As an extension of conceptual discourse, Tawfik and colleagues (in press) conducted a Delphi study from LXD practitioners where they define the phenomenon as follows: “LXD not only considers design approaches, but the broader human experience of interacting with a learning environment. In addition to learners’ knowledge construction, experiential aspects include socio-technical considerations, emotive aspects (e.g., empathy, understanding of learner), and a detailed view of learner characteristics within context. As such, LXD perspectives and methodologies draw from and are informed by fields beyond learning design & technology, educational psychology, learning sciences, and others such as human–computer interaction (HCI) and user-experience design.” Although various definitions have emerged in recent years, these collective views suggest that learning technologies should not solely focus on content design or performance goals, but also elevate the individual and their interaction with the learning environment. This includes the aforementioned learning outcomes (cognitive, socio-emotional), but also considers the human–computer interaction (e.g., user experience, usability) regarding the technical features and users’ ability to complete tasks within the interface (Gray, 2020 ; Lu et al., 2022 ; Tawfik et al., 2022 ).

There are various emerging frameworks that outline different approaches to understanding and defining constructs of learning experience design. For example, frameworks such as the socio-technical pedagogical (STP) framework (Jahnke et al., 2020 ) and activity theory (Engeström, 1999 ; Yamagata-Lynch, 2010 ) describe how learning environments must consider the broader learning context. That is, how individuals are goal directed and leverage technology tools to complete actions within a socially constructed context. Other frameworks, such as that by Tawfik et al. ( 2022 ), underscore the relationship between LXD and human-centered learning experiences; that is, the behaviors and interactions that are important as users employ technology for learning. Specifically, this conceptual framework describes LXD in two constructs: (a) interaction with the learning environment and (b) interaction with the learning space. The former includes learning interactions with the interface that are more technical in nature, such as customization, expectation of content placement, functionality of component parts, interface terms aligned with existing mental models, and navigation. Alternatively, the interaction with the learning space includes how the learners engage with specific affordances of technology designed to enhance learning: engagement with the modality of content, dynamic interaction, perceived value of technology features to support learning, and scaffolding. Collectively, these LXD frameworks highlight how the design perspectives extend beyond merely specific content; it details the experiential and HCI aspects of learning (e.g.—UX, usability).

1.2 LXD and Assessment of Usability Within Learning Environments

Usability is a key component of LXD because many learning technologies are often dependent on learners’ technology interaction and ability to navigate the learning environment. If learners experience challenges with usability as they interact with the learning interface, studies show that this can result in decreased learning outcomes (Novak et al., 2018 ). However, this aspect of LXD is often understudied relative to the design and development of learning environments (Lu et al., 2022 ). To date, there are various methods evaluators employ to assess technology, which can broadly be defined in terms of qualitative and quantitative approaches (Schmidt et al., 2020 ). The former largely consists of ethnography, focus groups, and think-aloud studies. Ethnography, a method employed for comprehensive contextual understanding, necessitates that researchers immerse themselves in natural settings to observe and interact with participants, thereby gaining insights into their behaviors and practices (Rizvi et al., 2017 ). Focus groups, characterized by small, structured discussions moderated by a skilled facilitator, assist in the exploration of participants’ diverse perspectives and group dynamics (Downey, 2007 ; Maramba et al., 2019 ). Lastly, a think-aloud qualitative approach explores participants’ cognitive processes and reactions while engaging in goal-directed user tasks (Gregg et al., 2020 ; Jonassen et al., 1998 ; Nielsen, 1994 ). Collectively, qualitative methods of usability generate rich user data that enables a deeper understanding of users’ experiences with the technology (Gray, 2016 ; Schmidt et al., 2020 ).

Whereas qualitative research is often less structured, usability evaluators might opt for a more quantitative approach that provides alternative measures of the learner experience. This may include a variety of approaches, such as analytics that capture user-logs or survey research. Alternatively, questionnaires allow researchers and practitioners to quantify specific constructs and gather data at scale during a usability evaluation. Indeed, recent large-scale reviews suggest that questionnaires are the most common form of usability evaluation (Estrada-Molina et al., 2022 ; Lu et al., 2022 ). Further analysis by Estrada-Molina et al. ( 2022 ) suggests a lack of learning emphasis within the survey items, while the results from Lu et al ( 2022 ) indicate a mix of self-developed and adapted questionnaires outside of education. For example, the System Usability Scale (SUS) (Brooke, 1996 ) is a popular usability instrument that explores questions related to complexity, inconsistencies, and overall functionality (Vlachogianni & Tselios, 2022 ). The SUS is therefore widely used as it considers users’ more general perceptions about technical features of the assessed technology. In terms of scoring, the instrument proffers ten questions and then translates the score to a percentile rank that can be categorized as above or below average. Although the instrument derives outside the field of learning design, research shows this has been extensively used as researchers and evaluators assess various technologies (Vlachogianni & Tselios, 2022 ). Other surveys rooted in the Technology Acceptance Model (TAM) often ask questions related to the ‘perceived ease of use’ construct (Davis, 1989 ), which is used to measure usability interactions such as flexibility, learnability, and terminology (Perera et al., 2021 ). Rather than seen as a diagnostic tool, items are often seen from a decision-making perspective to determine how perceived usability factors into technology adoption. Moreover, other instruments that measure nuanced interactions not often outlined in older surveys are often self-created (Victoria-Castro et al., 2022 ), especially as novel technologies (e.g., VR, wearable) and modalities emerge that (Lu et al., 2022 ).
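As a concrete illustration of the SUS scoring mentioned above, here is a minimal sketch. The item responses are invented; the scoring rule itself (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5) is the standard published SUS procedure, not something specific to this review.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score (0-100) from ten 1-5 item responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects exactly ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses from one participant in a usability evaluation.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A raw score in the high 60s is commonly cited as roughly average, which is how SUS totals get translated into the above- or below-average percentile framing described in the paragraph above.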

1.3 Research Questions

A recent systematic review by Lu et al. ( 2022 ) noted that questionnaires were the most widely used form of usability evaluations for learning technology. Although usability methodology is increasingly seen as an important aspect of LXD, there is very little understanding as to what instruments researchers and practitioners employ. This can lead to questions about the validity and reliability of usability in LXD, especially for novel technologies where there are few surveys and established practices evaluating a learners’ interaction using quantitative methods (Martin et al., 2020a , 2020b ). Finally, the lack of agreement about usability within the LXD research community can lead to issues of replication, which make it hard to establish trends and gaps around a specific research topic (Christensen et al., 2022 ). There is therefore a need to provide an overview of the published literature about how researchers and practitioners assess learning technologies. Given this gap in the literature, we propose the following research questions:

What are the surveys used to evaluate usability for learning environments?

What are the types of technology and domains studied for usability evaluations of learning environments?

What are the user characteristics for usability evaluations of learning environments?

2 Methodology

Given the lack of clarity around quantitative methods used by LXD practitioners and researchers, this study employed a systematic review to understand trends in usability evaluations of learning technologies. Specifically, the process employed for this systematic review mirrored the steps outlined in the What Works Clearinghouse Procedures and Standards Handbook (2017).

2.1 Data Sources and Search Parameters

The systematic review query was adapted from a similar usability systematic review published by Lu et al. (2022). Specifically, the terms were broadened to include the following additional terms to capture the evaluation aspect of LXD: assess*, measure*, survey*, and questionnaire*. Taking into consideration the project goals and the librarian's search recommendations, the research team searched the Scopus, ERIC, PubMed, and IEEE Xplore databases using the following parameters: usability AND (evaluate* OR assess* OR measure*) AND (learn* OR education*) AND ("technology" OR "online" OR "environment" OR "management system" OR "mobile" OR "virtual reality" OR "augmented reality") AND (survey* OR questionnaire*).
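As a small illustration of how the Boolean query above is composed, the sketch below assembles the same term groups into one search string. This is only a convenience for documenting or reproducing the query; it makes no assumption about any particular database's API.

```python
# Term groups mirroring the published search parameters; single-term groups
# are left unparenthesized, multi-term groups are OR-ed inside parentheses.
groups = [
    ["usability"],
    ["evaluate*", "assess*", "measure*"],
    ["learn*", "education*"],
    ['"technology"', '"online"', '"environment"', '"management system"',
     '"mobile"', '"virtual reality"', '"augmented reality"'],
    ["survey*", "questionnaire*"],
]

query = " AND ".join(
    group[0] if len(group) == 1 else "(" + " OR ".join(group) + ")"
    for group in groups
)
print(query)  # reproduces the search string reported above
```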

2.2 Review Protocol and Data Coding

Given the emergence of LXD in recent years, the goal was to better understand issues of usability of learning technology within the last decade. Hence, the search results were narrowed by date range (January 1, 2013 to December 31, 2023), language (English), and limited to peer-reviewed journal articles, including early access articles. Guided by the goals and research questions, the research team developed coding categories from criteria and common themes found during the initial article review (see Table 1). The code categories were as follows: Instrument, Setting, Type of Technology, Domain, Participant Type, and Number of Participants. An "Other" category was added as needed for items that appeared fewer than three times. Table 2 shows a breakdown of the codes identified and used for each category.

The final number of articles returned was 1,610. Similar to prior systematic reviews (Martin et al., 2017, 2020a, 2020b), two researchers individually reviewed the titles and abstracts to determine whether each article met the criteria outlined in Fig. 1. After removing articles that did not meet the inclusion criteria, 735 studies remained for full review. Prior to the second round of coding, the research team met to review and discuss any disagreements in coding or coding categories, and then the primary investigator completed the final review. The final inter-rater agreement was 100%, with 627 articles meeting the criteria.
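The double-coding and reconciliation step described above is commonly summarized with percent agreement and, for categorical codes, Cohen's kappa. The sketch below shows one way to compute both for two coders; the codes and values are hypothetical placeholders, not the study's data.

```python
from collections import Counter


def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement expected from each coder's marginal code frequencies
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)


# Hypothetical include/exclude decisions from a title-and-abstract screen
a = ["include", "exclude", "include", "include", "exclude", "include"]
b = ["include", "exclude", "include", "exclude", "exclude", "include"]
print(percent_agreement(a, b))  # 0.833...
print(cohens_kappa(a, b))       # ~0.67
```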

Figure 1. The systematic review process for the study

3 Results

The first research question sought to understand what surveys were employed as evaluators assess usability in learning environments. The results found that a large proportion of studies do not use standardized, validated instruments; rather, many surveys are self-created (39.58%). In terms of more popular and established survey instruments, it appears that the System Usability Scale (24.68%) and some variation of the Technology Acceptance Model (10.58%) were preferred when conducting usability studies for learning technologies. Another finding for this research question is that a significant portion was categorized as 'Others' (14.26%) or 'Not Reported' (12.02%). The 'Others' category implies little uniformity among usability instruments. Whereas the 'Self Created' category was used when researchers assessed usability with an instrument generated for that specific study, the 'Not Reported' category was used when the researchers failed to disclose what instrument they employed (see Table 3).
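For readers who want to see the shape of this tabulation, the sketch below counts instrument codes across a set of reviewed articles and reports each as a share of the corpus, which is the kind of summary behind Table 3. The codes listed are hypothetical placeholders rather than the study's data.

```python
from collections import Counter

# Hypothetical instrument codes assigned during article review (placeholders)
codes = ["Self Created", "SUS", "SUS", "TAM", "Self Created",
         "Other", "Not Reported", "SUS", "Self Created", "TAM"]

counts = Counter(codes)
total = sum(counts.values())

for instrument, n in counts.most_common():
    print(f"{instrument:>13}: {n:2d} ({n / total:.1%})")
```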

The second research question examines the different types of technologies and how frequently each was studied. The data are broken down into 11 categories, including an "Other" category for technologies studied only one to three times across the sample. Results show that studies are spread primarily across Online Module/Multimedia (22.74%), Virtual Reality/Augmented Reality (21.84%), Mobile Applications (15.81%), and Other (15.06%). It is noteworthy that although many institutions rely on learning management systems (LMS) for education and training, they were not widely studied (8.13%) (see Table 4).

To further answer the second research question, the research team sought to understand the different domains addressed by usability evaluations of learning technologies. The data show that a significant portion focused on the sciences, with the medical field (39.26%) and STEM learning (28.53%) as the top two categories. Another sizeable category was 'General', which consisted of usability evaluations that were not isolated to a specific domain; for example, a usability evaluation conducted across a university for a learning management system. Additionally, few studies considered UX for assistive services, despite the fact that this field relies heavily on technology and that accessibility is federally required for government-funded education initiatives under Section 508 of the Rehabilitation Act. These technologies include systems that support learners with hearing or vision impairments. Finally, the data also indicate that very few studies focused on education (pre-service and in-service teacher education), despite the fact that instructors, trainers, and teachers are often the ones who implement the technology in a learning environment (see Table 5).

The final research question considers the different types of users surveyed about their user experience across the studies. In terms of analysis, a usability study could conceivably include multiple stakeholders within the same study, such as an educational app intended to enhance medical outcomes; if the usability evaluation included both the child and a parent/caregiver, each was counted in its respective category. The results indicate that the majority of users assessed were the learners themselves (77.19%), with a smaller percentage being instructors (17.93%). A small portion fell into the remaining categories, including Parent/Caregiver, Expert, Other, and Not Reported. Since these percentages include instances where studies surveyed more than one type of user (i.e., learners and instructors together), the low percentage of included instructors is of interest. While a user experience study would logically emphasize the learner experience, there is opportunity to broaden the understanding of usability by including greater instructor perspective (see Table 6).

This research question also examined how many users took part in the usability evaluation of a specific learning technology. Because the number of users within a usability study is subject to longstanding debate within the field (Alroobaea & Mayhew, 2014; Hornbæk, 2006; Hwang & Salvendy, 2010), this systematic review distinguished between small (0–10), medium (11–30; 31–50), and large-scale studies (51–100; 100 or more). In contrast to the other tables, there was arguably more uniformity, as represented in Table 7. Interestingly, the data ranged from large-scale usability assessments (100+ at 31.25%) to studies more moderate in scale (25.80%). In contrast, very few usability studies fell within the 0–10 range. This is noteworthy in light of the discussion around usability test sample size, where established findings suggest that the ideal size is between 3 and 16 users, with five users typically sufficient to identify 80% of issues (Nielsen & Landauer, 1993).
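The five-user heuristic referenced above traces to Nielsen and Landauer's problem-discovery model, in which the expected share of usability problems found by n users is 1 - (1 - L)^n for a per-user discovery rate L. A minimal sketch, assuming the commonly cited rate of L ≈ 0.31:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Expected share of usability problems uncovered by n users under the
    Nielsen-Landauer problem-discovery model: 1 - (1 - L)^n."""
    return 1 - (1 - discovery_rate) ** n_users


for n in (1, 3, 5, 10, 15):
    print(n, f"{problems_found(n):.0%}")
# With L = 0.31, five users uncover roughly 84% of problems, which is the
# basis of the "about 80% of issues with five users" rule of thumb.
```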

4 Discussion

The field of learning design and technology has garnered considerable interest in the concept of learning experience design, which is broadly defined as a human-centric view of learning as individuals engage in knowledge construction (Chang & Kuwata, 2020; Gray, 2020; Jahnke et al., 2020; Tawfik et al., 2022). Beyond a content-driven approach, this includes additional experiential aspects as individuals employ technology, such as usability and other aspects of human–computer interaction. To date, prior systematic reviews of usability within learning technologies have each examined roughly 50 to 120 studies (Estrada-Molina et al., 2022; Lu et al., 2022), providing insight into the methods, measures, and other key aspects that are essential to LXD. This research builds on those prior studies through an analysis of over 600 usability studies in education published within the last decade.

The first research question sought to understand the specific instruments used to evaluate usability for learning environments. This is important to better understand LXD evaluation from a methodological perspective and to identify what tools are used within LXD. Moreover, this would potentially allow the field to identify preferred methods among existing LXD professionals and researchers, while also identifying consistent trends as an instrument is utilized across different contexts. In terms of the first research question, a significant finding is that many usability instruments were self-generated. In some respects, one might argue that this is a logical outcome when a new technology (e.g., wearable technology; robotics) is employed and no standard instrument can accurately assess novel user interactions. However, the overreliance on self-created instruments is problematic from a validity and reliability perspective, as these measures are often susceptible to biased results (Davies, 2020). Furthermore, self-created survey instruments can make it difficult to replicate findings, especially when the surveys are not published as part of the usability study (Spector et al., 2015). As researchers look to address this issue, it may highlight the need for more instrument development as technologies evolve, especially for diverse populations (Schmidt & Glaser, 2021). Rather than depend on a self-created instrument, it is important to have valid and reliable instruments that measure the complexities of LXD as learners interact with technology.

Additional data identify specific instruments that have been used to evaluate different learning technologies. This finding coincides with the observation that LXD often borrows from other fields, especially for evaluation (Schmidt et al., 2020). Specifically, the systematic review finds that instruments such as the SUS and variations of the TAM have been used extensively as researchers assess the usability of learning environments. While it is beneficial to employ methods rooted in HCI, other LXD researchers note that instruments must account for the unique interactions that are inherent to learning technologies (Novak et al., 2018; Tawfik et al., 2022), such as learning from failure and iterative knowledge construction. In addition, surveys like the SUS are not necessarily designed to diagnose the specific issues that might plague a learning environment; rather, they evaluate the system from a more technical and general perspective (Vlachogianni & Tselios, 2022). Hence, one might question whether these instruments fully capture the complex user experience needs of learning technologies, especially for specific features that one might want to assess (e.g., an embedded artificial intelligence component; perceptions of a novel collaborative tool). In conclusion, the finding underscores the need for more instrument development within LXD that is specifically designed to evaluate the usability of learning environments.

What are the domains and types of technology in which usability is evaluated for learning environments?

The second research question focused on the types of technologies and the domains in which the usability of learning environments is assessed. In terms of the former, the data are relatively evenly distributed, with four technology types each exceeding 15%. That said, several notable gaps were identified through this systematic review. First, collaborative technologies constituted only 3.77% of overall usability studies. In light of the first research question, this may highlight the need for instruments and protocols that measure collaborative technologies in which different types of learning interactions take place. For example, protocols not rooted in education may overlook complex collaborative actions, such as engaging in pair programming, sharing resources, and perceptions of privacy. As such, the lack of protocols that account for multiple users may be one reason for the lack of studies focused on collaborative technologies. Additional gaps relate to more emergent technologies that are of interest to many educators, such as robotics and the artificial intelligence used in intelligent tutoring systems. Although educator interest and implementation rates are increasing to support learning outcomes, the data suggest that survey-based usability evaluations of these technologies lack methodological consistency.

The second aspect of this research question centered on what domains were the focus of usability studies of learning technology. Results show that approximately two-thirds (67.79%) were focused on medical education and STEM. In many respects, one might argue that the top two categories are reflective of increased emphasis, continued innovation, and consistent funding allocated for these domains. Moreover, the high number of studies identified in this systematic review suggests that usability studies are indeed an emphasis for new tools in these domains, as opposed to merely implementing without any source of evaluation.

Although there seems to be considerable research pertaining to learning environment usability in the STEM and medical domains, the data suggest a notable gap between these disciplines and other domains. Approximately four percent or less of usability studies focused on the teaching (pre-service, in-service) domain. This is potentially problematic because teachers are often the ones who rely upon and integrate technology in the K-12 setting, so it follows that those in LXD should consider their perspective during a usability evaluation. The finding also has implications for the debate about the efficacy and roles of technology in K-12. Some argue that technology can address issues of inequity and access, while others argue that the resources needed for technology may not justify the learning gains (Nye, 2015; Petko et al., 2017; Tawfik et al., 2016; Vivona et al., 2023). The lack of usability evaluations is problematic for implementation and adoption, and the data presented in this systematic review suggest these educator perspectives are severely lacking, which may exacerbate the debate about effective use of technologies. Similarly, usability studies are important for assistive technologies (e.g., learning tools for neurodivergent students), given the opportunity for technology to address specific challenges encountered by these learners beyond mere memorization of information. In line with the lack of usability research focused on K-12 educators, it is problematic that there seems to be a dearth of literature on assistive technology. In conclusion, the findings from the review suggest that future studies should bridge the considerable gap between usability research in the medical and STEM domains and that in other domains.

The final research question focused on the characteristics of participants in usability evaluations of learning technologies. As noted in the results section, there is a debate regarding the appropriate number of participants per research study (Alroobaea & Mayhew, 2014; Hornbæk, 2006; Hwang & Salvendy, 2010), which was addressed by providing nuanced categories: small (0–10), medium (11–30; 31–50), and large (51–100; 100+). Regarding scope, a large majority of studies focused on the learner. One might argue this is a positive outcome for understanding the learner perspective, especially given the focus of LXD on a more human-centric view of learning (Schmidt & Huang, 2022). However, this systematic review also found that very few studies focused on other perspectives, such as the educator or parent/caregiver. To adopt a systems view of learning technologies (Šumak et al., 2011), it follows that multiple perspectives should be included. The present data from the systematic review suggest that critical perspectives are often excluded regarding the usability of learning environments.

The final research question also sought to understand the quantity of users assessed during usability evaluations. This finding is important because a prevailing assumption among practitioners is that one can find approximately 80% of usability issues with five users (Alroobaea & Mayhew, 2014). The study extends this through broad parameters of small (0–10), medium (11–30; 31–50), and large-scale studies (51–100; 100 or more). Based on this belief, one might expect the 0–10 participant category to be the largest. However, a surprising finding is that large-scale usability studies are most common with these instruments, with many studies conducting survey-driven usability research with over 100 participants (32.09%). The next category was more moderate in scale (11–30 users at 25.39%), which suggests there is relative uniformity in terms of how many users are assessed. Rather than making assumptions based on a small number of users that are subject to wide variability, the research shows that usability is assessed through both moderate and large-scale approaches.

5 Conclusion and Future Studies

The current systematic review builds on recent discourse about how to assess a critical design aspect of learning technology, namely usability. Recent systematic reviews have conducted similar assessments, with the number of studies employing surveys ranging between approximately 50 (Estrada-Molina et al., 2022) and 120 (Lu et al., 2022). The present review, focused on the last decade, suggests a lack of standardized instrumentation for usability evaluations, with an overreliance on self-created instruments (RQ1). The research is often focused on the medical and STEM domains, with very little usability assessment in domains such as teacher education, language arts, and others (RQ2). Finally, the number of participants is also highly variable (RQ3).

Although the systematic review provides clarity regarding characteristics of usability evaluations, there are multiple opportunities to build on this research. While the current study applied a systematic review, it may be helpful to conduct a meta-analysis that quantifies trends among the instruments, especially in terms of their quality, validity, and reliability. The data presented in this article describe what survey instruments were used, but they do not present the usability scores provided by learners. A meta-analysis that aggregates these scores, or that examines the number of users per evaluation study, could identify gaps to be addressed by LXD researchers and practitioners. Similarly, the current systematic review provides an overall count of instruments and types of learning technology, but does not consider broader trends in the adoption of these tools. For example, the small number of usability studies of robotics and MOOCs may simply reflect the implementation rates of these digital tools. Other technologies may be more restricted based on the country where the research was conducted, which would limit usability evaluations. Further studies might explore the percentage of usability evaluations relative to the published literature about a specific learning technology. This can be especially helpful for identifying issues in emergent technologies that might require additional testing during a usability evaluation.

Another follow-up study could revisit the assessment strategies. The current systematic review focused on surveys; however, usability studies employ multiple other strategies to understand the user perspective (Schmidt et al., 2020 ). A future study could look at approaches such as expert reviews, heuristics, think-alouds, and others that are often employed in usability studies. It may also be beneficial to the literature to look at studies that utilized multiple assessment formats (e.g., both heuristic evaluations and surveys).

References

Alroobaea, R., & Mayhew, P. J. (2014). How many participants are really enough for usability studies? Science and Information Conference, 2014, 48–56. https://doi.org/10.1109/SAI.2014.6918171


Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & A. L. McClelland (Eds.), Usability evaluation in industry (pp. 189–194). Taylor & Francis.


Campbell, A., Craig, T., & Collier-Reed, B. (2020). A framework for using learning theories to inform “growth mindset” activities. International Journal of Mathematical Education in Science and Technology, 51 (1), 26–43. https://doi.org/10.1080/0020739X.2018.1562118

Chang, Y. K., & Kuwata, J. (2020). Learning experience design: Challenges for novice designers. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology (pp. 145–163). EdTech Books.

Christensen, R., Hodges, C. B., & Spector, J. M. (2022). A framework for classifying replication studies in educational technologies research. Technology Knowledge and Learning, 27 (4), 1021–1038. https://doi.org/10.1007/s10758-021-09532-3

Davies, R. (2020). Assessing learning outcomes. In M. J. Bishop, E. Boling, J. Elen, & V. Svihla (Eds.), Handbook of research in educational communications and technology: learning design (pp. 521–546). Springer International Publishing. https://doi.org/10.1007/978-3-030-36119-8_25


Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

Downey, L. L. (2007). Group usability testing: Evolution in usability techniques. Journal of Usability Studies, 2 (3), 133–155.

Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R.-L. Punamäki (Eds.), Perspectives on activity theory. Cambridge University Press.

Estrada-Molina, O., Fuentes-Cancell, D. R., & Morales, A. A. (2022). The assessment of the usability of digital educational resources: An interdisciplinary analysis from two systematic reviews. Education and Information Technologies, 27 (3), 4037–4063. https://doi.org/10.1007/s10639-021-10727-5

Floor, N. (2023). This is learning experience design: What it is, how it works, and why it matters. (Voices That Matter) (1st ed.). New Riders.

Ge, X., Law, V., & Huang, K. (2016). Detangling the interrelationships between self-regulation and ill-structured problem solving in problem-based learning. Interdisciplinary Journal of Problem-Based Learning, 10 (2), 1–14. https://doi.org/10.7771/1541-5015.1622

Gray, C. (2016). "It's more of a mindset than a method": UX practitioners' conception of design methods. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 4044–4055). https://doi.org/10.1145/2858036.2858410

Gray, C. (2020). Paradigms of knowledge production in human-computer interaction: Towards a framing for learner experience (lx) design. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology (pp. 51–65). EdTech Books.

Gregg, A., Reid, R., Aldemir, T., Gray, J., Frederick, M., & Garbrick, A. (2020). Think-aloud observations to improve online course design: A case example and “how-to” guide. In M. Schmidt, A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research. EdTech Books.

Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64 (2), 79–102. https://doi.org/10.1016/j.ijhcs.2005.06.002

Hwang, W., & Salvendy, G. (2010). Number of people required for usability evaluation: The 10±2 rule. Communications of the ACM, 53 (5), 130–133. https://doi.org/10.1145/1735223.1735255

Jahnke, I., Schmidt, M., Pham, M., & Singh, K. (2020). Sociotechnical-pedagogical usability for designing and evaluating learner experience in technology-enhanced environments. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research (pp. 127–144). EdTechBooks.

Jonassen, D., Tessmer, M., & Hannum, W. H. (1998). Task analysis methods for instructional design . Routledge.


Lu, J., Schmidt, M., Lee, M., & Huang, R. (2022). Usability research in educational technology: A state-of-the-art systematic review. Educational Technology Research and Development, 70 , 1951–1992. https://doi.org/10.1007/s11423-022-10152-6

Maramba, I., Chatterjee, A., & Newman, C. (2019). Methods of usability testing in the development of eHealth applications: A scoping review. International Journal of Medical Informatics, 126 , 95–104. https://doi.org/10.1016/j.ijmedinf.2019.03.018

Martin, F., Ahlgrim-Delzell, L., & Budhrani, K. (2017). Systematic review of two decades (1995 to 2014) of research on synchronous online learning. The American Journal of Distance Education, 31 (1), 3–19. https://doi.org/10.1080/08923647.2017.1264807

Martin, F., Dennen, V. P., & Bonk, C. J. (2020a). A synthesis of systematic review research on emerging learning environments and technologies. Educational Technology Research and Development: ETR & D, 68 (4), 1613–1633. https://doi.org/10.1007/s11423-020-09812-2

Martin, F., Sun, T., & Westine, C. D. (2020b). A systematic review of research on online teaching and learning from 2009 to 2018. Computers & Education, 159 , 104009. https://doi.org/10.1016/j.compedu.2020.104009

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT ’93 and CHI '93 Conference on Human Factors in Computing Systems , 206–213. https://doi.org/10.1145/169059.169166

Nielsen, J. (1994). Usability engineering. Morgan Kaufmann.

North, C. (2023). Learning experience design essentials . Association for Talent Development.

Novak, E., Daday, J., & McDaniel, K. (2018). Assessing intrinsic and extraneous cognitive complexity of e-textbook learning. Interacting with Computers, 30 (2), 150–161. https://doi.org/10.1093/iwc/iwy001

Nye, B. D. (2015). Intelligent tutoring systems by and for the developing world: A review of trends and approaches for educational technology in a global context. International Journal of Artificial Intelligence in Education, 25 (2), 177–203. https://doi.org/10.1007/s40593-014-0028-6

Perera, P., Tennakoon, G., Ahangama, S., Panditharathna, R., & Chathuranga, B. (2021). A systematic mapping of introductory programming languages for novice learners. IEEE Access, 9 , 88121–88136. https://doi.org/10.1109/ACCESS.2021.3089560

Petko, D., Cantieni, A., & Prasse, D. (2017). Perceived quality of educational technology matters: A secondary analysis of students’ ICT use, ICT-related attitudes, and PISA 2012 test scores. Journal of Educational Computing Research, 54 (8), 1070–1091. https://doi.org/10.1177/0735633116649373

Rizvi, R. F., Marquard, J. L., Hultman, G. M., Adam, T. J., Harder, K. A., & Melton, G. B. (2017). Usability evaluation of electronic health record system around clinical notes usage–An ethnographic study. Applied Clinical Informatics, 08 (04), 1095–1105. https://doi.org/10.4338/ACI-2017-04-RA-0067

Schmidt, M., & Glaser, N. (2021). Investigating the usability and learner experience of a virtual reality adaptive skills intervention for adults with autism spectrum disorder. Educational Technology Research and Development, 69 (3), 1665–1699. https://doi.org/10.1007/s11423-021-10005-8

Schmidt, M., & Huang, R. (2022). Defining learning experience design: Voices from the field of learning design & technology. TechTrends, 66 (2), 141–158. https://doi.org/10.1007/s11528-021-00656-y

Schmidt, M., Tawfik, A. A., Jahnke, I., & Earnshaw, Y. (2020). Methods of user centered design and evaluation for learning designers. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research. EdTechBooks.

Spector, J. M., Johnson, T. E., & Young, P. A. (2015). An editorial on replication studies and scaling up efforts. Educational Technology Research and Development: ETR & D, 63 (1), 1–4. https://doi.org/10.1007/s11423-014-9364-3

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27 (6), 2067–2077. https://doi.org/10.1016/j.chb.2011.08.005

Tawfik, A., Gatewood, J., Gish-Lieberman, J., & Hampton, A. (2022). Toward a definition of learning experience design. Technology, Knowledge, & Learning, 27 (1), 309–334. https://doi.org/10.1007/s10758-020-09482-2

Tawfik, A., Reeves, T. D., & Stich, A. (2016). Intended and unintended consequences of educational technology on social inequality. TechTrends, 60 (6), 598–605. https://doi.org/10.1007/s11528-016-0109-5

Victoria-Castro, A. M., Martin, M., Yamamoto, Y., Ahmad, T., Arora, T., Calderon, F., Desai, N., Gerber, B., Lee, K. A., Jacoby, D., Melchinger, H., Nguyen, A., Shaw, M., Simonov, M., Williams, A., Weinstein, J., & Wilson, F. P. (2022). Pragmatic randomized trial assessing the impact of digital health technology on quality of life in patients with heart failure: Design, rationale and implementation. Clinical Cardiology, 45 (8), 839–849. https://doi.org/10.1002/clc.23848

Vivona, R., Demircioglu, M. A., & Audretsch, D. B. (2023). The costs of collaborative innovation. The Journal of Technology Transfer, 48 (3), 873–899. https://doi.org/10.1007/s10961-022-09933-1

Vlachogianni, P., & Tselios, N. (2022). Perceived usability evaluation of educational technology using the system usability scale (SUS): A systematic review. Journal of Research on Technology in Education, 54 (3), 392–409. https://doi.org/10.1080/15391523.2020.1867938

Yamagata-Lynch, L. C. (2010). Activity systems analysis methods: Understanding complex learning environments . Springer Science & Business Media.


This material is based upon work in part supported by the National Science Foundation under Grant (Removed for Anonymity). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the (Removed for Anonymity).

Author information

Authors and affiliations.

University of Memphis, Memphis, USA

Andrew A. Tawfik, Linda Payne, Heather Ketter & Jedidiah James


Corresponding author

Correspondence to Andrew A. Tawfik .

Ethics declarations

Conflict of interest.

The authors have no competing interests to declare that are relevant to the content of this article.

Consent to Participate

The research did not engage in data collection; therefore, informed consent is not relevant to this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Tawfik, A.A., Payne, L., Ketter, H. et al. What instruments do researchers use to evaluate LXD? A systematic review study. Tech Know Learn (2024). https://doi.org/10.1007/s10758-024-09763-0


Accepted: 26 June 2024

Published: 18 July 2024

DOI: https://doi.org/10.1007/s10758-024-09763-0


  • Learning experience design (LXD)
  • Human–computer interaction
  • Learning technology
  • User-experience

National Center for Science and Engineering Statistics


The FFRDC Research and Development Survey is the primary source of information on separately accounted for R&D expenditures at federally funded research and development centers in the United States.

Survey Info

  • Methodology
  • Data
  • Analysis

The FFRDC Research and Development Survey is the primary source of information on research and development (R&D) expenditures that are separately accounted for at federally funded research and development centers (FFRDCs) in the United States. Conducted annually for university-administered FFRDCs since FY 1953 and all FFRDCs since FY 2001, the survey collects information on R&D expenditures by source of funds and types of research and expenses. The survey is an annual census of the full population of eligible FFRDCs.

Areas of Interest

  • Research and Development

Survey Administration

The FY 2023 survey was conducted by ICF under contract to the National Center for Science and Engineering Statistics.

Survey Details

Status: Active
Frequency: Annual
Reference Period: FY 2023
Next Release Date: August 2025
  • Survey Description (PDF 250 KB)
  • Data Tables (PDF 576 KB)

Featured Survey Analysis

Federally Funded R&D Centers Report 13% Increase in R&D Spending in FY 2023


FFRDC R&D Survey Overview

Data Highlights

FY 2023 was the 10th consecutive year of nominal growth for R&D expenditures at federally funded research and development centers (FFRDCs)

Applied research accounted for the largest share of FFRDC R&D expenditures in FY 2023 (40.7% or $11.9 billion)

Methodology

Survey Description

Survey Overview (FY 2023 Survey Cycle)

The FFRDC Research and Development Survey is conducted by the National Center for Science and Engineering Statistics (NCSES) within the U.S. National Science Foundation (NSF). It is the primary source of information on research and development (R&D) expenditures that are separately accounted for at federally funded research and development centers (FFRDCs) in the United States.

Data collection authority

The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010. The Office of Management and Budget control number is 3145-0100, with an expiration date of 31 July 2025.

Major changes to recent survey cycle

Key Survey Information

Initial survey year: 2001

Reference period: FY 2023

Response unit: Establishment

Sample or census: Census

Population size: 42

Sample size: The survey is a census of all known eligible FFRDCs.

Key variables

Key variables of interest are listed below; an illustrative record layout follows the list.

  • R&D expenditures by source of funds (federal, state and local, business, nonprofit, or other)
  • R&D expenditures by federal agency source
  • R&D expenditures by type of R&D (basic research, applied research, or experimental development)
  • R&D expenditures by type of costs (salaries, software, equipment, subcontracts, other direct costs, and indirect costs)
  • Total operating budget
  • R&D personnel headcounts and full-time equivalents by R&D function
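To make the record layout concrete, the sketch below models one FFRDC's key variables as a small Python data structure. The field names and figures are illustrative only and are not the survey's actual variable names or data.

```python
from dataclasses import dataclass, field


@dataclass
class FFRDCRecord:
    """Illustrative layout of one FFRDC's key survey variables (hypothetical names)."""
    name: str
    fiscal_year: int
    rd_by_source: dict = field(default_factory=dict)        # federal, state/local, business, nonprofit, other
    rd_by_federal_agency: dict = field(default_factory=dict)
    rd_by_type: dict = field(default_factory=dict)           # basic, applied, experimental development
    rd_by_cost: dict = field(default_factory=dict)           # salaries, software, equipment, subcontracts, ...
    total_operating_budget: float = 0.0
    personnel_headcount: dict = field(default_factory=dict)  # researchers, technicians, support staff
    personnel_fte: dict = field(default_factory=dict)


# Hypothetical example record (dollar amounts are placeholders)
example = FFRDCRecord(
    name="Example National Laboratory",
    fiscal_year=2023,
    rd_by_source={"federal": 950_000_000, "business": 25_000_000},
    rd_by_type={"basic": 400_000_000, "applied": 500_000_000, "experimental": 75_000_000},
)
print(f"${sum(example.rd_by_source.values()):,} total R&D expenditures")
```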

Survey Design

Target population.

All FFRDCs.

Sampling frame

The total survey universe is identified through the Master Government List of FFRDCs. NSF is responsible for maintaining this list and queries all federal agencies annually to determine any changes to, additions to, or deletions from the list.

Sample design

The FFRDC R&D Survey is a census of all current FFRDCs identified on the Master Government List of FFRDCs.

Data Collection and Processing

Data collection.

The FY 2023 survey announcements were sent by e-mail to all FFRDCs in November 2023. The data collection period concluded in April 2024. Respondents could choose to complete a questionnaire downloaded from the Web or use a Web-based data collection system to respond to the survey. Every effort was made to maintain close contact with respondents to preserve the consistency and continuity of the resulting data. Survey data reports were available on the survey website for each FFRDC; these reports showed comparisons between the current year and the 2 prior years of data and noted any substantive disparities.

Data processing

Respondents using the Web-based data collection system were asked to explain significant differences between current and past year reporting while completing the questionnaire. Questionnaires were carefully examined by survey staff for completeness upon receipt. Reviews focused on unexplained missing data and explanations provided for changes in reporting patterns. If additional explanations or data revisions were needed, respondents were sent personalized e-mails asking them to provide any necessary revisions before the final processing and tabulation of data. Respondents were encouraged to correct prior year data, if necessary. When respondents updated or amended figures from past years, NCSES made corresponding changes to trend data in the FY 2023 data tables and to the underlying microdata. For accurate historical data, use only the most recently released data tables.

Estimation techniques

No data were imputed for FY 2023.

Survey Quality Measures

Sampling error.

Because the FY 2023 survey was a survey distributed to all organizations in the universe, there was no sampling error.

Coverage error

Given the availability of a comprehensive FFRDC list, there is no known coverage error for this survey. FFRDCs are identified through the Master Government List of FFRDCs . NSF is responsible for maintaining the master list and queries all federal agencies annually to determine changes to, additions to, or deletions from the list.

Nonresponse error

Most FFRDCs have incorporated the data needed to complete most of the survey questions into their record-keeping systems. Eleven FFRDCs chose not to complete any of Question 4 (R&D costs), which asks for expenditures by type of cost. Ten of those FFRDCs are managed by private companies for whom salary information is considered proprietary. One FFRDC, which is managed by a university, does not capture direct and indirect costs in its financial system in a way that is reportable on Question 4. Other FFRDCs did not answer all sections of Question 4: one FFRDC could not provide information on software, equipment, and other direct costs, another could not provide information on software and equipment expenditures, and one could not provide data on equipment expenditures. One FFRDC did not report its operating budget (Question 5). Eight FFRDCs chose not to complete Question 6, which asks for full-time equivalents and headcounts for R&D staff. Six of those FFRDCs are managed by private companies for whom staffing structure information is considered proprietary. Variables for Questions 4, 5, and 6 are not tabulated and are not included in the public use file.

Measurement error

NCSES discovered during the FY 2011 survey cycle that seven FFRDCs included capital project expenditures in the R&D totals reported on the survey. Corrections made for the FY 2011 survey cycle lowered total expenditures by $468 million. However, previous years still include an unknown amount of capital expenditures in the total. The amount is estimated to be less than $500 million per year.

Prior to the FY 2011 survey, the five FFRDCs administered by the MITRE Corporation had reported only internally funded R&D expenditures. After discussions with NCSES, these five FFRDCs agreed to report all FY 2011 operating expenditures for R&D and to revise their data for FYs 2008–10.

NCSES discovered during the FY 2013 survey cycle that Los Alamos National Laboratory (LANL) was reporting some expenditures that were not for R&D as defined by this survey. Corrections made for the FY 2013 survey cycle lowered the laboratory’s total expenditures by $349 million. LANL also was incorrectly reporting that all expenditures were for basic research. In corrections made for FY 2013, LANL reported that $1,554 million (91%) of its total research expenditures was for applied research. LANL data from previous years still include an unknown amount of expenditures that were not for R&D and categorize all expenditures as basic research.

Prior to FY 2014, the Aerospace FFRDC reported only expenditures on internal R&D projects. After discussions with NCSES, the Aerospace Corporation agreed to report all R&D expenditures for FY 2014 and provide revised data to include all R&D expenditures for FYs 2010–13. R&D expenditures increased by more than $800 million each year.

Prior to the FY 2021 collection, NCSES discovered that the Green Bank Observatory had split from the National Radio Astronomy Observatory on 1 October 2016; both retained FFRDC status. For FYs 2017–20, R&D expenditures reported for the National Radio Astronomy Observatory include the expenditures for the Green Bank Observatory. The Green Bank Observatory began reporting separately on the FY 2021 survey.

NCSES discovered during the FY 2022 survey cycle that two FFRDCs, Idaho National Laboratory and Savannah River National Laboratory, were incorrectly classified as industry-administered FFRDCs for select years. Idaho National Laboratory’s industrial firm administrator, Bechtel BWXT Idaho, LLC, was replaced by a nonprofit administrator, Battelle Energy Alliance, LLC, in February 2005. The classification was corrected for FY 2022, and Idaho National Laboratory’s FYs 2005–21 data were reclassified as coming from a nonprofit-administered FFRDC. Savannah River National Laboratory’s industrial firm administrator, Savannah River Nuclear Solutions, LLC, was replaced by a nonprofit administrator, Battelle Savannah River Alliance, LLC, in December 2020. The classification was corrected for FY 2022 and Savannah River National Laboratory’s FY 2021 data were reclassified as coming from a nonprofit-administered FFRDC.

NCSES discovered during the FY 2023 survey cycle that LANL reported non-R&D expenditures in the FY 2022 survey that should have been excluded. This included $356.5 million in federally funded expenditures and $160.5 million in business-funded expenditures. The FY 2022 LANL and national totals were revised in the FY 2023 publications, lowering the laboratory's total expenditures by $517 million.

Data Availability and Comparability

Data availability.

Annual data are available for all FFRDCs for FYs 2001–23.

Data comparability

When the review for consistency between each year’s data and submissions in prior years reveals discrepancies, it is sometimes necessary to modify prior years’ data. For accurate historical data, use only the most recently released data tables. Individuals wishing to analyze trends other than those in the most recent NCSES publication are encouraged to contact the Survey Manager for more information about the comparability of data over time.

Data Products

Publications.

The data from this survey are published annually and available at https://ncses.nsf.gov/surveys/ffrdc-research-development/ . Information from this survey is also included in Science and Engineering Indicators .

Electronic access

Public use files are available at https://ncses.nsf.gov/explore-data/microdata/ffrdc-research-development .

Technical Notes

Survey overview.

Purpose. The FFRDC Research and Development Survey is conducted by the National Center for Science and Engineering Statistics (NCSES) within the U.S. National Science Foundation (NSF). It is the primary source of information on separately accounted for research and development (R&D) expenditures at federally funded research and development centers (FFRDCs) in the United States.

Data collection authority. The information is solicited under the authority of the NSF Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010. The Office of Management and Budget control number for the FY 2023 FFRDC R&D Survey is 3145-0100, with an expiration date of 31 July 2025.

Survey contractor. ICF.

Survey sponsor. NCSES.

Frequency. Annual.

Initial survey year. 2001.

Reference period. FY 2023.

Response unit. Establishment.

Sample or census. Census.

Population size. 42.

Sample size. The survey is a census of all known eligible FFRDCs.

Target population. All FFRDCs.

Sampling frame. The total survey universe is identified through the Master Government List of FFRDCs (https://www.nsf.gov/statistics/ffrdclist/). NSF is responsible for maintaining this list and queries all federal agencies annually to determine any changes to, additions to, or deletions from the list.

Sample design. The FFRDC R&D Survey is a census of all current FFRDCs identified on the Master Government List of FFRDCs.

Data Collection and Processing Methods

Data collection. The FY 2023 survey announcements were sent by e-mail to all FFRDCs in November 2023. The data collection period concluded in April 2024. Respondents could choose to complete a questionnaire downloaded from the Web or use a Web-based data collection system to respond to the survey. Every effort was made to maintain close contact with respondents to preserve the consistency and continuity of the resulting data. Survey data reports were available on the survey website for each FFRDC; these reports showed comparisons between the current year and the 2 prior years of data and noted any substantive disparities.

Respondents using the Web-based data collection system were asked to explain significant differences between current year reporting and past year reporting while completing the questionnaire. Questionnaires were carefully examined by survey staff for completeness upon receipt. Reviews focused on unexplained missing data and explanations provided for changes in reporting patterns. If additional explanations or data revisions were needed, respondents were sent personalized e-mails asking them to provide any necessary revisions before the final processing and tabulation of data. These e-mails included a link to the Web-based collection system, to help respondents view and correct their data.

Respondents were encouraged to correct prior year data, if necessary. When respondents updated or amended figures from past years, NCSES made corresponding changes to trend data in the FY 2023 data tables and to the underlying microdata. For accurate historical data, use only the most recently released data tables.

Mode. Respondents could respond to the survey by completing a PDF questionnaire downloaded from the Web or by using a Web-based data collection system. All FFRDCs submitted data using the Web-based survey.

Response rates. All 42 FFRDCs included on the Master Government List of FFRDCs during the FY 2023 survey cycle completed the key survey questions.

Data editing. The FFRDC R&D Survey was subject to very little editing; respondents were contacted and asked to resolve possible self-reporting issues themselves. Questionnaires were carefully examined by survey staff upon receipt. These reviews focused on unexplained missing data and explanations provided for changes in reporting patterns. If additional explanations or data revisions were needed, respondents were sent personalized e-mail messages asking them to provide any necessary revisions before the final processing and tabulation of data.

Imputation. No data were imputed for FY 2023.

Weighting. FFRDC R&D Survey data were not weighted.

Variance estimation. No variance estimation techniques were used.

Sampling error. Because the FY 2023 survey was a survey distributed to all organizations in the universe, there was no sampling error.

Coverage error. Given the availability of a comprehensive FFRDC list, there is no known coverage error for this survey. FFRDCs are identified through the NSF Master Government List of FFRDCs. NSF is responsible for maintaining the master list and queries all federal agencies annually to determine changes to, additions to, or deletions from the list.

Nonresponse error. Most FFRDCs have incorporated the data needed to complete most of the survey questions into their record-keeping systems. Eleven FFRDCs chose not to complete any of Question 4 of the survey, which asks for expenditures by type of cost. Ten of those FFRDCs are managed by private companies for whom salary information is considered proprietary. One FFRDC, which is managed by a university, does not capture direct and indirect costs in its financial system in a way that is reportable on Question 4. Other FFRDCs did not answer all sections of Question 4: one FFRDC could not provide information on software, equipment, and other direct costs, another could not provide information on software and equipment expenditures, and one could not provide data on equipment expenditures. One FFRDC did not report its operating budget (Question 5). Eight FFRDCs chose not to complete Question 6 of the survey, which asks for full-time equivalents (FTEs) and headcounts for R&D staff. Six of those FFRDCs are managed by private companies for whom staffing structure information is considered proprietary. Variables for Questions 4, 5, and 6 are not tabulated and are not included in the public use file.

Measurement error. NCSES discovered during the FY 2011 survey cycle that seven FFRDCs were including capital project expenditures in the R&D totals reported on the survey. Corrections made for the FY 2011 survey cycle lowered total expenditures by $468 million. However, previous years still include an unknown amount of capital expenditures in the total. The amount is estimated to be less than $500 million per year.

NCSES discovered during the FY 2013 survey cycle that Los Alamos National Laboratory (LANL) was reporting some expenditures that were not for R&D, as defined by this survey. Corrections made for the FY 2013 survey cycle lowered the laboratory’s total expenditures by $349 million. LANL also was incorrectly reporting that all expenditures were for basic research. In corrections made for FY 2013, LANL reported that $1,554 million (91%) of its total research expenditures was for applied research. LANL data from previous years still include an unknown amount of expenditures that were not for R&D and categorize all expenditures as basic research.

Prior to the FY 2021 collection, NCSES discovered that the Green Bank Observatory had split from the National Radio Astronomy Observatory on 1 October 2016; both retained FFRDC status. For FYs 2017–20, R&D expenditures reported for the National Radio Astronomy Observatory include the expenditures for the Green Bank Observatory. The Green Bank Observatory began reporting separately on the FY 2021 survey.

NCSES discovered during the FY 2022 survey cycle that two FFRDCs, Idaho National Laboratory and Savannah River National Laboratory, were incorrectly classified as industry-administered FFRDCs for select years. Idaho National Laboratory’s industrial firm administrator, Bechtel BWXT Idaho, LLC, was replaced by a nonprofit administrator, Battelle Energy Alliance, LLC, in February 2005. The classification was corrected for FY 2022 and Idaho National Laboratory’s FYs 2005–21 data were reclassified as coming from a nonprofit-administered FFRDC. Savannah River National Laboratory’s industrial firm administrator, Savannah River Nuclear Solutions, LLC, was replaced by a nonprofit administrator, Battelle Savannah River Alliance, LLC, in December 2020. The classification was corrected for FY 2022 and Savannah River National Laboratory’s FY 2021 data were reclassified as coming from a nonprofit-administered FFRDC.

Data Comparability (Changes)

Annual data are available for FYs 2001–23. When the review for consistency between each year’s data and submissions in prior years reveals discrepancies, it is sometimes necessary to modify prior year data. For accurate historical data, use only the most recently released data tables. Individuals wishing to analyze trends other than those in the most recent NCSES publication are encouraged to contact the Survey Manager for more information about the comparability of data over time.

Changes in survey coverage and population. Most years, there are some changes to the FFRDC population that may affect trend analyses. FFRDCs have been created, decertified, renamed, or restructured, as described below:

  • On 20 December 2006, the National Biodefense Analysis and Countermeasure Center was created.
  • Prior to FY 2009, the Center for Enterprise Modernization was listed as the Internal Revenue Service FFRDC.
  • On 5 March 2009, the Homeland Security Studies and Analysis Institute and the Homeland Security Systems Engineering and Development Institute were created. These new FFRDCs replaced the Homeland Security Institute.
  • On 1 October 2009, the National Solar Observatory split from the National Optical Astronomy Observatory, with both retaining their FFRDC status.
  • On 2 September 2010, the Judiciary Engineering and Modernization Center was created.
  • Prior to FY 2011, the National Security Engineering Center was listed as C3I FFRDC.
  • On 1 October 2011, the National Astronomy and Ionosphere Center was decertified as an FFRDC.
  • Prior to FY 2012, the Frederick National Laboratory for Cancer Research was listed as the National Cancer Institute at Frederick.
  • On 27 September 2012, the Centers for Medicare and Medicaid Services FFRDC was created. On 15 August 2013, its name was changed to the CMS Alliance to Modernize Healthcare.
  • Prior to FY 2013, the Systems and Analyses Center was listed as the Studies and Analyses Center.
  • On 19 September 2014, the National Cybersecurity Center of Excellence was created.
  • On 15 September 2016, the Homeland Security Operational Analysis Center was created.
  • On 1 October 2016, Green Bank Observatory was split out from the National Radio Astronomy Observatory; both retained FFRDC status.
  • The Homeland Security Studies and Analysis Institute was phased out on 31 October 2016.
  • In June 2020, the National Optical Astronomy Observatory changed its name to NSF’s National Optical-Infrared Astronomy Research Laboratory or NSF’s NOIRLab.
  • The Judiciary Engineering and Modernization Center was decertified as an FFRDC in January 2021.

Changes in questionnaire. FFRDCs are asked to provide R&D expenditures by source of funding and type of R&D. In FY 2016, NCSES added a question asking for R&D expenditures funded by seven specific federal agencies. In FY 2019, NCSES added a question asking which federal agencies not specifically listed in the previous question funded the expenditures. In FY 2022, NCSES added a question (Question 6) asking for FTEs and headcounts for personnel by R&D function.

Changes in reporting procedures or classification. The FFRDC R&D Survey has been conducted annually for university-administered FFRDCs since FY 1953 and for all FFRDCs since FY 2001.

Definitions

  • Federal agency. Any agency of the United States government. Expenditures were reported by seven specific agencies (Department of Defense; Department of Energy; Department of Health and Human Services, including the National Institutes of Health; Department of Homeland Security; Department of Transportation; National Aeronautics and Space Administration; and National Science Foundation). FFRDCs listed up to ten other agencies with corresponding expenditure amounts.
  • Fiscal year. The FFRDC's financial year.
  • Full-time equivalent (FTE). Calculated as the total working effort spent on research during the fiscal year divided by the total effort representing a full-time schedule within the same period (a minimal worked example follows this list).
  • Headcount. The number of personnel paid from R&D accounts during the fiscal year.
  • Research and development (R&D). R&D activity is creative and systematic work undertaken to increase the stock of knowledge—including knowledge of humankind, culture, and society—and to devise new applications of available knowledge. R&D covers three activities: basic research, applied research, and experimental development. R&D does not include outreach, non-research training programs, capital projects (i.e., construction or renovation of research facilities), routine testing, or policy development.
  • R&D expenditures. Expenditures for R&D activities from the organization's current operating funds that were separately accounted for. For the purposes of the survey, R&D includes expenditures from funds designated for research. Expenditures come from internal or external funding and include recovered and unrecovered indirect costs. Funds passed through to subrecipient organizations are also included.
  • R&D functions.
  • Researchers. Professionals engaged in the conception or creation of new knowledge, products, processes, methods, and systems and in the management of the projects concerned. Researchers contribute more to the creative aspects of R&D, whereas technicians provide technical support.
  • R&D technicians. Persons whose main tasks require technical knowledge and experience in one or more fields of science or engineering, but who contribute to R&D by performing technical tasks such as computer programming, data analysis, ensuring accurate testing, operating lab equipment, and preparing and processing samples under the supervision of researchers.
  • R&D support staff. Employees not directly involved with the conduct of a research project but who support the researchers and technicians. These employees might include clerical staff, financial and personnel administrators, report writers, patent agents, safety trainers, equipment specialists, and other related employees.
  • Source of funds.
  • U.S. federal government. Any agency of the U.S. government. Federal funds that were passed through to the reporting organization to another organization were included.
  • State and local government. Any state, county, municipality, or other local government entity in the United States, including state health agencies.
  • Business. Domestic or foreign for-profit organizations. Funds from a company's nonprofit foundation were not reported here; they were reported under Nonprofit organizations.
  • Nonprofit organizations. Domestic or foreign nonprofit foundations and organizations. Funds from universities and colleges were not reported here; they were reported in All other sources.
  • All other sources. Sources not reported in other categories, such as funds from foreign governments and foreign or U.S. universities.
  • Total operating budget. Total executed operating budget excluding capital construction costs.
  • Type of cost. R&D expenditures were reported in the following categories:
  • Salaries, wages, and fringe benefits. Compensation for all R&D personnel, whether full time or part time, temporary or permanent, including salaries, wages, and fringe benefits paid from organization funds and from external support.
  • Software purchases. Payments for all software, both purchases of software packages and license fees for systems.
  • Equipment. Payments for movable equipment, including ancillary costs such as delivery and setup.
  • Subcontracts. Payments to subcontractors or subrecipients for services on R&D projects.
  • Other direct costs. Other costs that did not fit into one of the above categories, including (but not limited to) travel, computer usage fees, and supplies.
  • Indirect costs. All indirect costs (overhead) associated with R&D projects.
  • Type of R&D. R&D expenditures were reported in the following categories:
  • Basic research. Experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without any particular application or use in view.
  • Applied research. Original investigation undertaken to acquire new knowledge. It is directed primarily toward a specific, practical aim or objective.
  • Experimental development. Systematic work, drawing on knowledge gained from research and practical experience and producing additional knowledge, which is directed to producing new products or processes or to improving existing products or processes.
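
To make the FTE definition concrete, here is a minimal sketch in Python; the staffing figures are hypothetical and are not drawn from the survey.

```python
def fte(research_effort_months: float, full_time_schedule_months: float = 12.0) -> float:
    """Full-time equivalent: working effort spent on research during the fiscal
    year divided by the effort representing a full-time schedule in the same period."""
    return research_effort_months / full_time_schedule_months

# Hypothetical staffing: two people who spent 9 of 12 months on R&D (0.75 FTE each)
# plus one full-time researcher (1.0 FTE) give 2.5 FTEs and a headcount of 3.
research_months = [9, 9, 12]
print(sum(fte(m) for m in research_months))  # 2.5 FTEs
print(len(research_months))                  # headcount: 3
```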

Questionnaires

View archived questionnaires and data tables.

  • All Formats (ZIP 634 KB)
  • Excel (ZIP 87 KB)
  • PDF (ZIP 546 KB)

Federally funded research and development center (FFRDC) expenditures

FFRDC Expenditures, by FFRDC: General Notes

The FFRDC Research and Development Survey is the primary source of information on R&D expenditures that are separately accounted for at federally funded research and development centers (FFRDCs) in the United States. Conducted annually for university-administered FFRDCs since FY 1953 and all FFRDCs since FY 2001, the survey collects information on R&D expenditures by source of funds and types of research and expenses. The survey is an annual census of the full population of eligible FFRDCs. See https://www.nsf.gov/statistics/ffrdclist/ for the Master Government List of FFRDCs.

Acknowledgments and Suggested Citation

Acknowledgments.

Michael T. Gibbons of the National Center for Science and Engineering Statistics (NCSES) developed and coordinated this report under the guidance of Amber Levanon Seligson, NCSES Program Director, and under the leadership of Emilda B. Rivers, NCSES Director; Christina Freyman, NCSES Deputy Director; and John Finamore, NCSES Chief Statistician.

Under contract to NCSES, ICF conducted the survey and prepared the tables. ICF staff members who made significant contributions include Kathryn Harper, Project Director; Jennifer Greer, Data Management Lead; Sindhura Geda, Data Management Specialist; Alison Celigoi, Data Management Specialist; Cameron Shanton, Data Collection Specialist; Audrey Nankobogo, Data Collection Specialist; Henry Levee, Data Collection Specialist; and Vladimer Shioshvili, Survey Systems Lead.

NCSES thanks the FFRDCs that provided information for this report.

Suggested Citation

National Center for Science and Engineering Statistics (NCSES). 2024. FFRDC Research and Development Expenditures: Fiscal Year 2023. NSF 24-328. Alexandria, VA: U.S. National Science Foundation. Available at https://ncses.nsf.gov/surveys/ffrdc-research-development/2023.

Survey Contact

For additional information about this survey or the methodology, contact


Understanding retirement needs

  • How much do you need to retire comfortably?
  • How much to save based on age

Building your retirement savings

Adjusting for inflation, lifestyle, and healthcare costs.

  • General rules of thumb
  • Seeking professional advice

How Much Do I Need to Retire? A Complete Guide to Retirement Planning


  • Target savings will vary for each future retiree, depending on one's expenses and current salary.
  • Many advisors recommend saving 15% of your earnings annually or even more if you are getting a late start.
  • Multiple income streams and a conservative withdrawal rate ensure you don't run out of money.


Acquiring adequate retirement savings doesn't happen overnight. For most people, saving enough for retirement requires decades of dedication and strategic financial planning . But how much do you actually need to save to ensure a comfortable retirement? 

Here are the best retirement plans , calculators, investment strategies, and tips you can use to ensure your retirement savings plan is on track. 

Assessing your retirement needs

Unfortunately, there's no general number to aim for when saving toward retirement. Your target retirement savings goal will differ greatly from your siblings', neighbors', and even your coworkers' goals since the amount you'll need largely depends on personal factors.

However, one rule of thumb applies to everyone: The sooner you start saving, the less effort you'll need to put in to reach your goal, and the better-positioned you'll be later in life. 

According to the 2024 MassMutual Retirement Happiness Study, the average age for retirees in the U.S. is 62. If you were to live to 85, this means you'd need enough money to cover all your expenses (and retirement goals) for roughly 23 years. Economic factors like inflation will also certainly impact your savings over time.

Estimating your retirement expenses

Understanding what you expect retirement to look like will help determine how much you'll need to fund that lifestyle. If you plan to travel the world in luxury, your budget will differ from someone wanting to bird watch from the backyard each morning.

In retirement, your savings will cover many of the same expenses you had pre-retirement. This includes costs like food, housing, transportation, clothes, gifts, utilities, insurance (including a health plan), and travel.

In most cases, these expenses won't change much from pre- to post-retirement, which makes creating a budget easier. But if you have big plans for your retirement years (moving to a new state or country, buying a bigger home, increasing travel, etc.), you must calculate how much your new standard of living will cost. 

How much do you need to retire comfortably? 

The first step to adequately saving for retirement is to determine how much you'll need. This means analyzing current and future expenses and deciding how much you can afford to put away each month. You may also want to use a number of different savings and investment vehicles or passive income streams.

Financial advisors suggest saving around 10 times your current salary by the time you reach retirement age. Before you retire, you should aim to reduce your annual expenses as much as possible, including paying off existing debt. This can help stretch your retirement savings for even longer. 

As always, it's wise to consult with a trusted financial planner to help you determine your unique needs and retirement savings strategy.

How much to save for retirement based on your age

One way to see if you're on track for a comfortable retirement is to aim for a multiple of your current annual earnings. This serves as a rough estimate so you can get a better sense of your situation. Remember that the amount of savings required to ensure a comfortable retirement varies according to your projected retirement costs and even the specific investments you choose for your retirement portfolio.

According to Fidelity , here's how much you should have saved up each decade to meet your retirement goals:

  • By age 30: 1-2 times your starting salary
  • By age 40: 3-4 times your starting salary
  • By age 50: 6-7 times your starting salary
  • By age 60: 8 times your starting salary
  • By age 67: 10+ times your starting salary

Financial advisors recommend dedicating 15% of your annual income toward retirement. However, depending on your retirement goals, financial obligations, and current assets, you may need to save even more.
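
A rough way to apply these multiples is to compare current savings against the benchmark for the most recent milestone age you have passed. The sketch below is illustrative only: the benchmark table uses the lower end of each range above, and the age, salary, and savings figures are hypothetical.

```python
# Benchmark savings as a multiple of salary, using the lower end of each range above.
BENCHMARKS = {30: 1, 40: 3, 50: 6, 60: 8, 67: 10}

def target_savings(age: int, salary: float) -> float:
    """Benchmark savings for the most recent milestone age at or below `age`."""
    milestones = [a for a in sorted(BENCHMARKS) if a <= age]
    return BENCHMARKS[milestones[-1]] * salary if milestones else 0.0

age, salary, saved = 45, 80_000, 200_000
goal = target_savings(age, salary)
print(f"Benchmark at age {age}: ${goal:,.0f}; currently saved: ${saved:,.0f}")
# Benchmark at age 45: $240,000; currently saved: $200,000
```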

Types of retirement accounts (401(k), IRA, etc.)

There are multiple savings vehicles and income streams to consider for building your nest egg. These can affect how much you need to save today, depending on which sources of income are available to you.

Some of the most popular types of retirement accounts include: 

  • 401(k) plans: Employer-sponsored investment vehicles with compounding power and tax advantages to help you grow your nest egg steadily over time. Money in a 401(k) can be invested in various securities, and your contributions may even be matched by your employer, amplifying your efforts. Funds can be distributed without penalty beginning at age 59 ½, or earlier with certain exceptions.
  • IRAs: IRAs are retirement accounts individuals open through major banks, credit unions, and other financial institutions. The best IRA accounts include traditional, Roth, SEP, and SIMPLE IRAs. IRAs have the same tax advantages as 401(k)s but offer more flexibility over how your funds are allocated.
  • Traditional pension plans: Traditional pensions are another employer-sponsored investment vehicle certain businesses offer. With a pension, your employer is responsible for contributing and investing the funds in your account. The amount contributed is determined by employee earnings and years at the business. 

Outside of savings accounts, other ways to generate income during retirement include:

  • Social Security benefits: Social Security is a government program that provides individuals with monthly retirement and disability benefits. You can opt in to start receiving Social Security benefits as early as age 62, but you'll receive lower payments. Financial experts often recommend delaying benefits beyond full retirement age, up to age 70, when your monthly payment reaches its maximum.
  • Annuities: Annuities are another retirement income source to consider. They're offered by insurance companies and act as a long-term investment vehicle. After purchasing an annuity — either with a lump sum or periodic purchase payments — you will receive regular payments over the course of your retirement.

Planning for inflation in retirement

Remember to consider inflation and its impact on your savings. For instance, inflation has run at about 3% in 2024, following a 3.3% increase in 2023 and a high of 6.5% in 2022. Generally, you should account for inflation of approximately 2% per year.

However, certain economic, political, or natural disasters can cause unexpected spikes in inflation. In those cases, you may experience significant financial losses that require you to permanently or temporarily adjust your lifestyle and budget. One of the best ways to hedge against inflation and market downturns is by continuing to invest after retirement . 
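
To see why even modest inflation matters over a long retirement horizon, the short sketch below projects today's annual budget into future dollars; the $50,000 budget and 20-year horizon are assumptions chosen purely for illustration.

```python
# Project an annual budget into future dollars under a constant inflation rate.
def inflate(amount: float, rate: float, years: int) -> float:
    return amount * (1 + rate) ** years

budget_today = 50_000      # hypothetical annual spending
for rate in (0.02, 0.03):  # the ~2% planning assumption vs. a 3% year
    print(f"{rate:.0%}: ${inflate(budget_today, rate, 20):,.0f} per year after 20 years")
# 2%: $74,297 per year after 20 years
# 3%: $90,306 per year after 20 years
```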

Healthcare costs and long-term care planning

Try to account for potential unexpected expenses, such as medical care for you and your spouse or even financially helping a child or grandchild.

"The most common expense that a retiree can ignore (or forget to budget for) is end-of-life expectancy expenses," says Jim Ludwick, a CFP and member at Garrett Planning Network . "This includes caregivers coming to your house, going into assisted living, or skilled nursing. Those are very expensive parts of people's lives. And a lot of times that can eat up quite a bit of savings if it goes on for an extended period of time."

Downsizing and lifestyle adjustments

When planning your retirement lifestyle, consider where you want to live. You may want to downsize depending on your preferred lifestyle, savings amount, and priorities. That said, your priorities may be buying your dream retirement home or moving to a certain location. In this case, be sure that you factor those larger expenses into your budget.

Retirement planning general rules of thumb

While everyone's situation and needs will differ, there are a few primary rules of thumb that most financial advisors follow, which you should consider when determining how much to save for retirement.

Retirement income as a percentage of pre-retirement income

Many financial professionals recommend that you account for between 70% and 80% of your pre-retirement income each year in retirement. This means that if you currently earn $60,000 per year, you should plan to spend between $42,000 and $48,000 annually once you retire.

This isn't a set rule for everyone, and you may need to even account for more savings. "Many people need to have income streams (or savings and investments) cover 80%, 90%, or even 100% of their pre-retirement budget," Ludwick says. It all depends on your specific expenses now and in retirement.
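
As a back-of-the-envelope check, the calculation behind the $60,000 example above looks like this; the replacement percentages are the 70% and 80% figures quoted in the text.

```python
# Annual retirement spending range implied by the 70%-80% replacement rule.
def replacement_range(income: float, low: float = 0.70, high: float = 0.80) -> tuple[float, float]:
    return income * low, income * high

low, high = replacement_range(60_000)
print(f"Plan for roughly ${low:,.0f} to ${high:,.0f} per year")
# Plan for roughly $42,000 to $48,000 per year
```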

Saving 15% of your earnings every year

If you start saving for retirement early enough, an annual savings rate of 15% may be sufficient to meet your goals. If you're off to a late start, you may need to save a lot more each year to catch up. 

"As you get older, the amount needed for savings to reach the same end goal roughly doubles every 10 years," says Tolen Teigen, chief investment officer for FinDec . "So, if someone waits ten years to start saving, instead of 30, they are now 40. Instead of 8% to 10% annually, they are now looking at 16% to 20% saved to reach the same end number."

Saving 10 times your income by retirement age

As mentioned above, many financial advisors and firms like Fidelity recommend having approximately 10 times your annual salary saved by the time you reach retirement age. While this may not be exactly what you need, it's a good target to keep in mind as you go. You can always adjust it depending on your projected needs in retirement.

The 4% withdrawal rule 

Many retirees are concerned about running out of money once they reach retirement. The 4% rule may be a good guideline to avoid this. While many factors can affect the actual drawdown process, the 4% rule can be a good place to start if you want to avoid running out of money.

This rule states that retirees can withdraw up to 4% of their retirement savings in year one of retirement. So, if you have $2,000,000 in retirement savings, you would withdraw $80,000 that first year. In year two, you would adjust that $80,000 for inflation and withdraw that amount from your savings.

Keep in mind that while the 4% rule is standard, some financial advisors say your actual withdrawal percentage could be anywhere from 3% to 5%.
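
Putting the 4% rule and the annual inflation adjustment together, here is a minimal sketch of the withdrawal schedule described above. The $2,000,000 starting balance matches the example in the text; the 3% inflation rate and five-year horizon are assumptions for illustration.

```python
# First-year withdrawal is 4% of savings; each later year repeats that dollar
# amount adjusted upward for inflation.
def withdrawal_schedule(savings: float, rate: float = 0.04,
                        inflation: float = 0.03, years: int = 5) -> list[float]:
    first_year = savings * rate
    return [round(first_year * (1 + inflation) ** y, 2) for y in range(years)]

print(withdrawal_schedule(2_000_000))
# [80000.0, 82400.0, 84872.0, 87418.16, 90040.7]
```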

Seeking professional advice when retirement planning

There is no one-size-fits-all approach to saving for retirement. Everyone's needs will be different, and so will their approach to saving, including when they start and how much they can set aside each year. Consulting with a certified financial planner or other retirement expert is the best way to understand your unique needs.

"Planning ahead and checking in on your efforts" is key to saving enough for the retirement years, Ludwick says."It's dangerous when you're 75 and realize you're running out of money and you have to move in with a younger sibling or something." 

His advice? "If you want to stay independent, do your homework ahead of time. Think about all those things that could possibly happen. If they don't happen, you're lucky … and your kids and grandkids can have a nice gift that you leave behind."

You can calculate how much you need to retire by assessing your expected expenses, considering your desired lifestyle, current expenses, projected inflation, and healthcare costs. Business Insider's free retirement calculator can help you see if you're on track to secure a comfortable retirement. You can also use other rules of thumb, such as having an annual savings rate of 15%.

The 4% rule in retirement planning suggests withdrawing 4% of your retirement savings in your first year of retirement, then adjusting that dollar amount for inflation each year, so your money lasts for at least 30 years. It's a general guideline to help estimate how much you need to save. However, some advisors recommend a higher or lower withdrawal rate.

You can maximize your retirement savings by regularly contributing to tax-advantaged retirement accounts like 401(k)s and IRAs to take full advantage of employer matching contributions, investment opportunities, and compound growth. Generally, it's best practice to max out your retirement accounts first. Also, adopt a diversified investment strategy for greater portfolio growth and risk management.

The sooner you start planning for retirement, the easier it will be to compound your savings and reach your goals. Starting in your 20s and 30s allows more time for your investments to grow. It's still possible to catch up if you start in your 40s or 50s by saving more aggressively and adjusting your strategy, but it will be generally more stressful. 

Common mistakes to avoid in retirement planning include underestimating expenses, waiting to start saving, relying too heavily on Social Security, failing to diversify investments, spending too quickly, and not accounting for healthcare costs and inflation. The best way to avoid these common mistakes is by creating a thorough financial plan and consulting a financial advisor.


National Academies Press: OpenBook

Considering Greenhouse Gas Emissions and Climate Change in Environmental Reviews: Conduct of Research Report (2024)

Chapter: Appendix A: State DOT Survey Instrument


APPENDIX A: State DOT Survey Instrument

Many state departments of transportation (DOTs) are working to incorporate the treatment of greenhouse gas (GHG) emissions, climate change effects, or both in project planning and environmental reviews. There is a wide range of experience, with some states working hard to integrate all of their activities (including environmental review) into a unified, agency-wide treatment of climate change while others are just beginning their efforts.

NCHRP Web-Only Document 400: Considering Greenhouse Gas Emissions and Climate Change in Environmental Reviews: Conduct of Research Report supplements the resources and guidance for state DOTs on addressing climate change effects and GHG emissions provided by NCHRP WebResource 3: GHG Emissions and Climate Change in Environmental Reviews .

Supplemental to NCHRP Web-Only Document 400 is a fact sheet that summarizes the essential findings of the project and provides an overview of the WebResource, as well as an implementation memo that identifies implementation pathways for the project.



COMMENTS

  1. 90 Survey Question Examples + Best Practices Checklist

    However, all questions must serve a purpose. In this section, we divide survey questions into nine categories and include the best survey question examples for each type: 1. Open Ended Questions. Open-ended questions allow respondents to answer in their own words instead of selecting from pre-selected answers.

  2. How to write great survey questions (with examples)

    For example, "With the best at the top, rank these items from best to worst". Be as specific as you can about how the respondent should consider the options and how to rank them. For example, "thinking about the last 3 months' viewing, rank these TV streaming services in order of quality, starting with the best".

  3. Writing Survey Questions

    Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call "trending the data". ...

  4. Writing Good Survey Questions: 10 Best Practices

    4. Focus on Closed-Ended Questions. Surveys are, at their core, a quantitative research method. They rely upon closed-ended questions (e.g., multiple-choice or rating-scale questions) to generate quantitative data. Surveys can also leverage open-ended questions (e.g., short-answer or long-answer questions) to generate qualitative data.

  5. How Many Survey Questions Should I Use?

    But, in general and for most survey types, it's best to keep the survey completion time under 10 minutes. Five minute surveys will see even higher completion rates, especially with customer satisfaction and feedback surveys. This means, you should aim for 10 survey questions (or fewer, if you are using multiple text and essay box question types).

  6. Survey Questions: 70+ Survey Question Examples & Survey Types

    Impactful surveys start here: The main types of survey questions: most survey questions are classified as open-ended, closed-ended, nominal, Likert scale, rating scale, and yes/no. The best surveys often use a combination of questions. 💡 70+ good survey question examples: our top 70+ survey questions, categorized across ecommerce, SaaS, and ...

  7. How many questions should be asked in a survey?

    Pulse surveys typically consist of 5-15 questions and are dispatched at a higher frequency than annual surveys. Pulse surveys are also an effective way to collect critical data and more often. Shorter surveys, such as Employee Satisfaction Index or Net Promoter Score, are best for providing straightforward, real-time feedback.

  8. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  9. How to Analyse Survey Data: Best practices, Tips and Tools

    Research questions are the underlying questions your survey seeks to answer. Research questions are not the same as the questions in your questionnaire, although they may cover similar ground. It's important to review your research questions before you analyse your survey data to determine if it aligns with what you want to accomplish and ...

  10. 13.1 Writing effective survey questions and questionnaires

    Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly; Create mutually exclusive, exhaustive, and balanced response options; ... One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because ...

  11. How Many Questions Should Be Asked in a Survey?

    Usually, this is a shorter version of the annual surveys with just 2-10 questions. These 2-10 questions focus on the most crux issues concerning an organization. 4. Intercept Surveys. Intercept surveys are in-person surveys conducted at points of contact like malls, restaurants, public places like parks, and more.

  12. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  13. 16 Types of Survey Questions, with 100 Examples

    The questions you choose and the way you use them in your survey will affect its results. These are the types of survey questions we will cover: Open-Ended Questions. Closed-Ended Questions. Multiple Choice Questions. Dichotomous Questions. Rating Scale Questions. Likert Scale Questions. Nominal Questions.

  14. How Many Questions Should I Ask In My Survey?

    Learn more about how many questions should be asked in market research surveys below. Is there an ideal number of questions to ask in a survey? The short answer is yes! Online Survey Questions. The length for a typical online survey is 15 to 20 questions. While that might sound like a small number of questions, an online survey with 15 to 20 ...

  15. How short or long should be a questionnaire for any research

    Response rate is defined as the number of people who responded to a question asked divided by the number of total potential respondents. Response rate, which is a crucial factor in determining the quality and generalizability of the outcome of the survey, depends indirectly on the length and number of questions in a questionnaire [7,8]. Several studies have been conducted to assess the ...

  16. Your quick guide to open-ended questions in surveys

    Step 1: Collect and structure your responses. Online survey tools can simplify the process of creating and sending questionnaires, as well as gathering responses to open-ended questions. These tools often have simple, customisable templates to make the process much more efficient and tailored to your requirements.

  17. 10 Research Question Examples to Guide your Research Project

    The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

  18. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  19. 28 Tips for Creating Great Qualitative Surveys

    Here's the procedure that we recommend: Draft questions and get feedback from colleagues. Draft survey and get colleagues to attempt to answer the questions. Ask for comments after each question to help you revise questions toward more clarity and usefulness. Revise survey and test iteratively on paper.

  20. Quantitative Survey Questions: Definition, Types and Examples

    Quantitative survey questions are defined as objective questions used to gain detailed insights from respondents about a survey research topic. The answers received for these quantitative survey questions are analyzed and a research report is generated on the basis of this data. These questions form the core of a survey and are used to gather ...

  21. Psychographic Survey Questions: What They Are and How to Use Them in

    Psychographic survey questions are a powerful tool in market research, offering deep insights into the psychological drivers behind consumer behavior. Understanding your audience's lifestyles, interests, values, and motivations can create more effective marketing strategies, develop better products, and foster stronger customer connections.

  22. Question Evaluation

    Question evaluation is an important part of developing a good survey. Question evaluation also supports researchers who analyze data collected from surveys. ...

  23. An ultimate guide to writing a good survey paper: key strategies and tips

    Conclusion. The conclusion should close your survey paper, leaving a lasting impression on the reader. Here's how to craft a strong conclusion. Restate and summarize: Briefly remind the reader of your research question and summarize the key findings you've presented throughout your analysis. Reiterate the significance: Emphasize the importance of understanding the existing research base ...

  24. How many questions are suitable in the research questionnaire of a survey?

    In many PhD dissertations there's one research question that the whole research tries to answer. However, others have addressed multiple interrelated or compound research questions; e.g., they may ...

  25. Opinion: Too many Americans support political violence. It's up to the

    That's a very small percentage (and survey estimates of small percentages can be unreliable), but with more than 250 million adults in the United States, 1% of survey respondents would ...

  26. Secret Service faces serious questions about security footprint and

    Notably, the shooter's location was outside the security perimeter, raising questions about both the size of the perimeter and efforts to sweep and secure the American Glass Research building ...

  27. What instruments do researchers use to evaluate LXD? A ...

    The first research question sought to understand what surveys were employed as evaluators assess usability in learning environments. The results found that a large number do not use standard and reliable instruments; rather, many surveys are self-created (39.58%).

  28. FFRDC Research and Development Survey 2023

    The FFRDC Research and Development Survey is the primary source of information on research and development (R&D) expenditures that are separately accounted for at federally funded research and development centers (FFRDCs) in the United States. ... Most FFRDCs have incorporated the data needed to complete most of the survey questions into their ...

  29. How Much Do I Need to Retire?

    Many financial professionals recommend that you account for between 70% and 80% of your pre-retirement income each year in retirement. This means that if you currently earn $60,000 per year, you ...

  30. Appendix A: State DOT Survey Instrument

    5 Additional Research 16; Appendix A: State DOT Survey Instrument 17-20; Appendix B: State DOT Survey Responses 21-25; Appendix C: State DOT Interview Questions 26-31; Appendix D: NGO-CBO Survey Instrument 32-35