George Mason University
Statistical Process Improvement



This is a think-it-through exercise for people who want to evaluate satisfaction with their services.

Planning Satisfaction Surveys

This exercise will help you think through how to survey patients' satisfaction. To use this section, you must have a specific service in mind. Please do not proceed until you have thought of a specific service to evaluate.

The purpose of this section is to help you plan. We will ask you a series of questions about how you want to survey your clients, and as a result you may increase your awareness of the issues involved in designing satisfaction surveys. By the end of the exercise, we hope you will have a detailed plan for conducting satisfaction surveys.


Are you comfortable that evaluating your service is a good idea, and have you sorted out for yourself what the consequences of not evaluating the service might be?

Yes. Proceed.
No. I am not sure.
Please consult the "Do I need to evaluate?" exercise.

Think through the goal of the survey. Is it to convince purchasers that you have a good service? Sometimes purchasers are interested in repeated evaluation efforts that not only document problems but show a systematic effort to resolve them. You may also engage in evaluation to help you find problems and fix them. In this case, you are not so much interested in reporting the problems you find as in fixing them and moving on. There are many other reasons too.

Tell us why you want to know whether consumers are satisfied with your service.

One of the first issues you need to think through is who should do the work. Sometimes it is useful to engage a third party, which helps convince purchasers and other reviewers of the data that an independent evaluation was done. If you evaluate your own efforts, there is always a suspicion that you may not report the whole story. In addition, some third-party surveyors can benchmark your site against your competitors. For example, they can report that your service is among the top 5% of all services. Purchasers and consumers like benchmarked data. At the same time, asking others to help you evaluate a site is time consuming and expensive, and it may interfere with keeping your activities confidential until the results are publicly released.

Given these issues, who do you think should evaluate your service and why?

How often do you want to evaluate satisfaction with your service? The answer to this question depends in part on what question you want answered. If, for example, you want to know which target group (people of a certain age, sex, etc.) is most satisfied with your service, then an occasional cross-sectional analysis is sufficient. In a cross-sectional analysis you survey the patients after exposure to your service. Cross-sectional analysis can also be used to benchmark your service against other services.

If you plan to regularly survey your clients over time then you need a longitudinal study. These types of studies are best for answering questions such as the following three:

  1. If you want to know whether exposure to your service changed the level of satisfaction patients have with their health plan, then you survey patients before and after exposure.
  2. If you want to trace your improvement over time, you also need to stay with a longitudinal design.
  3. If you are evaluating a service as you are building it and you are concerned with whether you are improving the patients' experience with your service, you also need a longitudinal design.

Do you think that you may need to conduct a longitudinal or a cross-sectional study?


What is sufficient evidence? There are many ways that satisfaction surveys could mislead you. One possibility is that an improvement in satisfaction may be related to other events and not to your service. For example, patients' lifestyle adjustments may change their satisfaction with your service. To control for this type of error, it is important to contrast the improvement against a control group exposed to another service. Another source of error could be that over time respondents learn the system better and thus are more satisfied with the services they are using. Dissatisfied individuals are unlikely to use your service. Surveying only the users of your service may mislead you by painting a rosy picture of clients' satisfaction. To control for these types of errors, it is important to contrast your services with others and to explicitly look for customers who are not repeat users.

Many other sources of error are also possible. Campbell and Stanley, in their book "Experimental and Quasi-Experimental Designs for Research," highlight a list of common errors in survey research. Given the various sources of error, you need to choose for yourself how important it is to have an accurate picture of clients' satisfaction with your service. At the extreme, you can randomly assign people to two services: yours and an alternative placebo service. Random assignment controls for most types of errors. But random assignment is expensive, and on some occasions blind assignment may be unethical. Subjects have to volunteer to be assigned to the experimental or the placebo service, and subjects may refuse to participate. Repeated evaluation of satisfaction over time provides a time series of data that controls for some errors, but subjects have self-selected to be part of these studies, and therefore the result may be rosier than if all subjects were included. The least accurate analysis is to do studies without any comparison group. Actually, we should say that the least accurate approach is doing no evaluation at all and guessing consumer satisfaction.

Obviously you will survey the people who received your service; but will you also survey others who can serve as a comparison group? If so, which type of comparison group will you include? If you need a "preponderance of evidence," choose to track a control group over time. If you need data that is "beyond reasonable doubt," then choose a control group that is randomly assigned.

What do you want to ask? Some of the items in satisfaction surveys include the following:

(1) Overall satisfaction with quality of services.

(2) Ease of use.

(3) Satisfaction with integration of services with other health services.

(4) Readability, accuracy, comprehensiveness, usefulness and timeliness of information provided.

(5) Comfort received and skills gained from cognitive services and support groups.

You do not need to include all of the above items, nor do you need to limit your surveys to the above items. There are many data banks of surveys. At the end of this interview we will provide you with links to other sites and examples of satisfaction surveys. Keep in mind that standardized surveys allow you to benchmark your data against others. In contrast, writing your own survey helps you focus on patients' reactions to innovations in your effort. You can tailor your own surveys to your needs and therefore get more for the effort you are putting in.

Please draft the questions you are planning to ask.

It is neither necessary nor reasonable to survey all patients who use your service. You can sample. Sampling helps reduce the data collection burden on both the patients and the analyst. The size of the sample depends on what you are trying to conclude. If there is a lot of variability in patients' satisfaction with your service, you need larger samples. If you plan to compare your service with others and the two efforts are very similar, you also need larger samples. More important than the size of the sample is its representativeness. Getting a lot of patients to respond does not correct for a lack of representativeness. This is one case in which more is not always better. The point of sampling is to get a representative sample of the people who visit your site. Small and large samples can both be representative. The key is to examine whether there are systematic differences between people who respond and those who do not. Here are some examples of non-representative designs:

(1) Survey anyone who completes your service. Most dissatisfied patients will abandon the service before reaching the end.
(2) Survey patients in a particular month. Patients' preferences and types of illness may vary seasonally.
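To get a feel for how large a sample you may need, here is a rough sketch of the standard sample-size formula for estimating a proportion. The margin of error and confidence level below are illustrative assumptions, not requirements of this exercise:

```python
import math

def sample_size(margin_of_error, p=0.5, z=1.96):
    """Approximate sample size needed to estimate a satisfaction
    proportion p within +/- margin_of_error at 95% confidence
    (z = 1.96). p = 0.5 is the most conservative assumption."""
    return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

# Estimating the percent satisfied within +/- 5 percentage points:
print(sample_size(0.05))   # 385
```

Note that this formula says nothing about representativeness; a perfectly sized but biased sample will still mislead you.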

We recommend that you randomly select a percentage of visitors (not to be confused with randomly assigning visitors to a service, a much harder task). This gives every patient an equal chance of being included.
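A minimal sketch of this kind of random selection, assuming you have a list of visitor identifiers (the identifiers and the 10% rate are made up for illustration):

```python
import random

def sample_visitors(visitor_ids, fraction, seed=0):
    """Randomly select a fraction of visitors so that every visitor
    has the same chance of being surveyed (random selection, not
    random assignment to services)."""
    rng = random.Random(seed)              # fixed seed for reproducibility
    k = round(len(visitor_ids) * fraction)
    return rng.sample(visitor_ids, k)

visitors = list(range(1, 1001))            # hypothetical visitor IDs
chosen = sample_visitors(visitors, 0.10)
print(len(chosen))                         # 100
```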

In some circumstances you may wish to over-sample segments of the population. When segments of your population are small, you need to over-sample these segments so that you can obtain an accurate estimate of their satisfaction. Otherwise, too few of them will be in your sample to provide an accurate picture. Suppose that few teenagers visit your service. If you want to know about their satisfaction with your service, you will need to over-sample teenagers. Thus, you may sample every tenth adult but every fifth teenager. Over-sampling helps you get a more accurate picture of small sub-groups of patients using your service.
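The over-sampling idea above can be sketched as stratified sampling with a different rate per group. The group names and rates here are hypothetical:

```python
import random

def stratified_sample(patients, rates, seed=0):
    """Sample each group at its own rate, e.g. every tenth adult
    (rate 0.10) but every fifth teenager (rate 0.20)."""
    rng = random.Random(seed)
    chosen = []
    for group, ids in patients.items():
        k = round(len(ids) * rates[group])
        chosen.extend(rng.sample(ids, k))
    return chosen

patients = {"adult": list(range(900)), "teen": list(range(900, 1000))}
sample = stratified_sample(patients, {"adult": 0.10, "teen": 0.20})
print(len(sample))   # 90 adults + 20 teens = 110 respondents
```

If you later estimate overall satisfaction from such a sample, remember to weight each group's responses back to its share of the population; otherwise the over-sampled group will pull the average toward its own opinion.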

Think through the sampling strategy you wish to implement. Comment on whether satisfied and dissatisfied clients are equally likely to be reached in this fashion. Discuss how you plan to verify that your sample represents the population you want to generalize to.

How do you plan to collect the information? There are many choices available. You can have the survey done online and automatically. In these types of surveys, a computer calls or sends an email to your clients. You can also survey participants by mail, telephone, or in person. The mode of conducting the survey may affect the results. Online, computerized-telephone, and mailed surveys are self-administered. Patients are more likely to report deviant social behavior in self-administered surveys. Online surveys (if connected to an automatic reminder) have a higher response rate than offline surveys. Online surveys are also less expensive than offline surveys. Among offline surveys, face-to-face interviews are the most expensive but allow for longer interviews.

Given the tradeoffs of different modes of surveys, which is your preferred approach and why?

How you start your survey will have a lot to do with its success. We generally recommend that you alert respondents that you plan to survey them before you actually send them a survey. This is a preemptive reminder for people who forget to respond. In this fashion, the respondent will hear from you at least three to four times:

1. Invitation to participate
2. Alert to upcoming survey.
3. Survey
4. Alert to non-respondents to remind them to participate and an additional copy of the survey.

The invitation to respond to a survey should highlight:

  • Who is conducting the survey?
  • Why is the survey done?
  • The time needed to complete the survey.
  • The effect of failure to respond on the overall conclusions.

The alert to the upcoming survey includes an appreciation of the respondent's willingness to participate, the day the survey will be sent, and the importance of a timely response. A reminder to non-participants often includes a repeated copy of the survey. In online surveys, it is often necessary to make sure that respondents can answer questions quickly and without much download time. In fact, if you are reaching the person through their email, then it is best to have the survey pasted into the email itself. In mailed surveys you should include a self-addressed, stamped envelope.

No matter how you do the survey, you should provide a real benefit to the respondent for completing the survey. Altruism and voluntary requests get you far, but not far enough. Think through what questions you can add that will make the respondent feel happier and more cared for at the end of the survey. Many providers combine satisfaction surveys with surveys of patients' health status or lifestyle appraisals. Patients get the benefit of a free health appraisal and evaluation while they complete the satisfaction survey.

Take a few minutes now to write to potential respondents about why they should take time away from their busy schedules to answer your questionnaire. What will they gain from doing so? Note that the key is not what you will learn but what they will achieve by responding to the questionnaire. Be brief.

What language will you use for the survey? Keep in mind that your services are open to many people from different backgrounds and people are more likely to respond to a questionnaire prepared in their mother tongue.

How would you prepare the data for analysis? Before you can analyze the data, you need to code the data (i.e. assign numbers to responses). When coding the data, you should include different codes for:

"Not responding to any questions in the survey,"
"Skipping the question,"
"Unclear or unusable responses."
"Skipped question because the question was not appropriate."

Analyze the missing-data codes first. If a large percentage of your responses are missing, then it is doubtful you can use the survey to arrive at any conclusions. If certain patient groups tend to skip specific questions, then you might have a systematic bias in your data.

If you are conducting online data collection, then there is no need to spend time entering the data into the computer. If you have done mailed, phone, or in-person surveys, you must enter the data into the computer. This is often a tedious process. To make sure that the data are correct, enter the data twice and reconcile any differences between the two data sets.
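A simple sketch of double-entry verification, assuming each survey is entered as a dictionary of question-to-answer pairs (the field names are hypothetical):

```python
def verify_double_entry(entry1, entry2):
    """Compare two independent entries of the same surveys and
    report the (record, field) positions where they disagree."""
    mismatches = []
    for i, (row1, row2) in enumerate(zip(entry1, entry2)):
        for field in row1:
            if row1[field] != row2.get(field):
                mismatches.append((i, field))
    return mismatches

first  = [{"q1": 4, "q2": 5}, {"q1": 3, "q2": 2}]
second = [{"q1": 4, "q2": 5}, {"q1": 3, "q2": 4}]   # typo in q2
print(verify_double_entry(first, second))            # [(1, 'q2')]
```

Each mismatch is then resolved by going back to the paper form, which catches most keying errors.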

Another step in cleaning the data is to check for inconsistent or out-of-range responses. If responses 1 through 5 are expected but response 7 is given, then the response is out of range and is counted as erroneous. Similarly, if earlier in the survey the client indicated he is male and later that he is pregnant, then an inconsistent response has been detected. Spend time cleaning the data. It will help you when it comes to interpreting the results.
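These cleaning checks might be sketched as follows, assuming answers coded 1 through 5 and the male/pregnant example above (the field names are hypothetical):

```python
def clean_record(record):
    """Flag out-of-range and inconsistent answers in one record.
    Assumes answers are coded 1-5 and the record carries 'sex'
    and 'pregnant' fields, as in the example above."""
    problems = []
    for q in ("q1", "q2", "q3"):
        if q in record and not 1 <= record[q] <= 5:
            problems.append(f"{q} out of range")
    if record.get("sex") == "male" and record.get("pregnant") == "yes":
        problems.append("inconsistent: male and pregnant")
    return problems

print(clean_record({"q1": 7, "sex": "male", "pregnant": "yes"}))
```

In practice the consistency rules would come from your own questionnaire; the point is to run them systematically over every record rather than eyeballing the data.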

How would you analyze the data? To analyze the data, begin with descriptive statistics. Check each variable for skewness, range, mode, and mean. Do the responses seem reasonable? Next, plot the data. Usually, satisfaction surveys have a number of scales. Each scale is the average of the responses to a number of questions. The idea is that if there are different ways of asking the same question, then one may build a more reliable scale. If so, the scale scores should have a roughly Normal distribution. Do the scale responses look like an upside-down "U" shape? They should. Statistical theory suggests that averages of more than four numbers tend toward a Normal distribution.
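A small sketch of scale scoring with Python's standard library, using made-up responses to five questions that form one scale:

```python
import statistics

responses = [
    [4, 5, 4, 3, 4],   # one respondent's answers to the five
    [2, 3, 2, 2, 3],   # questions making up a single scale
    [5, 5, 4, 5, 5],
]

# Each respondent's scale score is the mean of their item responses.
scores = [statistics.mean(r) for r in responses]

print("mean:", round(statistics.mean(scores), 2))
print("range:", min(scores), "to", max(scores))
```

With real data you would also plot a histogram of the scores to see whether they take the expected bell shape.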

After you have completed your descriptive data analysis and everything makes sense to you, you should conduct further analyses. If you have a cross-sectional design, then you may use cross-tabulation to display the data and a chi-squared test to examine the significance of the differences you observe in the table.
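For a simple two-by-two table, the chi-squared statistic can even be computed by hand; the counts below are made up for illustration:

```python
# 2x2 cross-tabulation: satisfied vs. not satisfied, by group.
observed = [[60, 40],    # your service: satisfied / not satisfied
            [45, 55]]    # comparison service

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Chi-squared statistic: sum of (O - E)^2 / E over all four cells,
# where E is the expected count under independence.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2) for j in range(2)
)
print(round(chi2, 2))   # 4.51
```

A statistic above 3.84 (the 5% cutoff for one degree of freedom) suggests the difference in satisfaction between the two services is statistically significant; for larger tables, a statistics package will also give you the p-value directly.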

Sometimes you collect longitudinal data about satisfaction. One way to examine whether patients' satisfaction has changed over time is to use statistical process control tools. When you submit this interview we will send you the Internet address of a tutorial on how to use statistical process control tools.
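One basic statistical process control tool is the p-chart: plot each period's proportion of satisfied respondents against 3-sigma control limits around the overall proportion. The monthly counts below are hypothetical:

```python
import math

# Hypothetical monthly counts from repeated satisfaction surveys.
satisfied = [78, 82, 75, 80, 95]   # satisfied respondents per month
surveyed  = [100] * 5              # total respondents per month

p_bar = sum(satisfied) / sum(surveyed)            # overall proportion
sigma = math.sqrt(p_bar * (1 - p_bar) / surveyed[0])
ucl, lcl = p_bar + 3 * sigma, p_bar - 3 * sigma   # 3-sigma limits

flags = []
for s, n in zip(satisfied, surveyed):
    p = s / n
    flags.append("out of control" if not lcl <= p <= ucl else "in control")

print(flags)   # month 5 (0.95) falls above the upper control limit
```

A point outside the limits signals a real change in satisfaction rather than ordinary month-to-month variation; this sketch assumes equal sample sizes each month, and with unequal sizes the limits would be recomputed per month.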


Thank you for taking the time to answer the questions in our survey.

Please enter your email here before you submit the information so that we can contact you regarding your comments:

Copyrights protected.  For more information contact Farrokh Alemi.  Revised: May 03, 2015.