Probability Charts

Objectives

In this section, we show you how to analyze data from improvement efforts involving the frequency of events. We assume that you have collected data about a key indicator over several weeks and that you now need to analyze those data. We will teach you how to analyze the data and will even provide templates that you can use to complete your tasks quickly. The objectives of this session are:
Why Chart?

There are two reasons to construct a control chart.
Which chart is right?
When tracking data over time, you have a number of options. You could use a P-chart, designed specifically to track mortality or adverse health events over time. You could use an X-bar chart, designed for tracking health status and satisfaction surveys of a group of patients over time. You could use a moving average chart to construct a control chart for an individual patient's data over time. This section helps you decide which of these charts is appropriate for your application. If you do not have a specific application in mind, or if you wish to learn more about each of the charts, skip this section. In the following, we ask you 4-7 questions and, based on your answers, advise you which chart is right for the application you have in mind. Have you collected observations over different time periods?

Assumptions

When analyzing adverse health outcomes, such as mortality, a useful method of analysis is the P-chart. In the P-chart we assume the following:
These assumptions are important and should be verified before proceeding further with the use of risk-adjusted P-charts. When these assumptions are not met, alternative approaches, such as bootstrapping distributions, should be used.

P-chart

This section takes you through a step-by-step process of using a P-chart, a type of control chart for the analysis of mortality data. We introduce the concepts behind P-charts in several steps. It is important that you take each step and complete its assignment before proceeding to the next. To help you understand the concepts, the lecture focuses on data from one hospital's mortality over 8 consecutive months. Here are the data we need to analyze:
Calculate Mortality Rates

The first step is to create an x-y plot, where the x axis is time and the y axis is mortality rate. Calculate the mortality rate for each month by dividing the number of deaths by the number of cases in that month.
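The monthly counts from Table 1 are not reproduced on this page, so the sketch below uses hypothetical counts; the calculation itself is exactly the ratio described above.

```python
# Hypothetical monthly counts (the lecture's Table 1 is not shown on this page).
deaths = [5, 7, 3, 2, 8, 1, 6, 2]
cases = [120, 150, 110, 186, 140, 95, 160, 130]

# Mortality rate for each month: deaths divided by cases in that month.
rates = [d / n for d, n in zip(deaths, cases)]
```

Each entry of `rates` is one point on the y axis of the plot, with the month on the x axis.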
Numbers are deceiving. They hide much. To understand numbers you must see them by plotting them. Figure 1 shows the data plotted against time.
Figure 1: Observed Mortality in Eight Time Periods

What does the plot in Figure 1 tell you about unusual time periods? There are wide variations in the data. It is difficult to be sure whether the apparent improvements are due to chance. To understand whether these variations could be due to chance, we can plot on the same chart two limits, in such a manner that 95% or 99% of points would by mere chance fall between the lower and upper limits.

Setting Limits

Figure 3 shows the steps in calculating control limits.
Figure 3: Steps in Calculating Control Limits in P-charts

In step one, the grand average p is calculated by dividing the total number of adverse events by the total number of cases. Note that averaging the rates of the different time periods will not yield the same result. Calculate the total number of cases and the total number of deaths; the ratio of these two numbers is the grand average p, the average mortality rate. Next, calculate the standard deviation of the data. In a binomial distribution, the standard deviation for a time period is the square root of the grand average p, multiplied by one minus the grand average p, divided by the number of cases in that time period. For example, if the grand average p is .25 and the number of cases in the time period is 186, then the standard deviation is the square root of (.25)*(.75)/(186), or about .03. Table 3 shows the calculated standard deviations for each time period:
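These two steps can be sketched as follows, again with hypothetical counts standing in for the table that is not shown on this page:

```python
import math

# Hypothetical monthly counts (the lecture's Table 1 is not shown on this page).
deaths = [5, 7, 3, 2, 8, 1, 6, 2]
cases = [120, 150, 110, 186, 140, 95, 160, 130]

# Grand average p: total deaths over total cases
# (not the average of the monthly rates).
p_bar = sum(deaths) / sum(cases)

# Binomial standard deviation for each time period.
sigmas = [math.sqrt(p_bar * (1 - p_bar) / n) for n in cases]

# The worked example from the text: grand average p = .25 and n = 186 cases.
sd_example = math.sqrt(0.25 * 0.75 / 186)   # about 0.032
```

Note that `p_bar` weights every case equally, which is why it differs from the simple average of the monthly rates.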
Calculate the upper and lower control limits for each time period as the grand average p plus or minus 3 times the standard deviation. Setting the limits at 3 standard deviations means that nearly all (about 99.7%) of the data should fall within the limits by chance alone. If you want limits that contain 90% or 95% of the data, you can use constants other than 3. The constant you use depends on the number of observations in the time period and can be read from a table of t-values for different sample sizes. The table below shows the lower and upper control limits:
Please note that the negative control limits in time periods 4, 6 and 8 are set to zero, because it is not possible to have a negative mortality rate. Also note that the upper and lower control limits change in each time period. This reflects the fact that we have a different number of cases in each time period. When we have many observations, we have more precision in our estimates and the limits become tighter, closer to the average p. When we have few observations, the limits spread further apart. Remember that we are trying to answer the question of whether there have been improvements in the process. The control limits help answer this question. If during a time period we have more mortality than can be expected from chance, then the process has deteriorated during that period. Any point above the UCL indicates a potential change for the worse in the process. Any point below the LCL indicates that mortality is lower than can be expected from chance; it suggests that the process has improved. Figure 4 shows the resulting plot:
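The limit calculation, including resetting negative lower limits to zero, can be sketched as follows (the counts are hypothetical, since Table 1 is not shown on this page):

```python
import math

# Hypothetical monthly counts (the lecture's Table 1 is not shown on this page).
deaths = [5, 7, 3, 2, 8, 1, 6, 2]
cases = [120, 150, 110, 186, 140, 95, 160, 130]

p_bar = sum(deaths) / sum(cases)
sigmas = [math.sqrt(p_bar * (1 - p_bar) / n) for n in cases]

# Upper and lower control limits: grand average p plus or minus 3 standard
# deviations; negative lower limits are reset to zero, since a mortality
# rate cannot be negative.
ucl = [p_bar + 3 * s for s in sigmas]
lcl = [max(0.0, p_bar - 3 * s) for s in sigmas]
```

Because the number of cases differs from month to month, each month gets its own pair of limits, which is why the limit lines on the chart are not straight.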
Figure 4: P-chart for Data in Table 1

Notice the peculiar construction of the plot, designed to attract the viewer's attention to the observed rates. The observed rates are shown as markers connected by a line. Any marker that falls outside the limits is circled and highlighted. The control limits are shown as lines without markers. In the plot in Figure 4, all the data points are within the limits.

Risk Adjusted P-chart

P-charts were designed for monitoring the performance of manufacturing firms. These charts assume that the input to the system is the same at each time period. In manufacturing this makes sense: the metal needed for making a car changes little over time. But in health care this makes no sense. People are different. People differ in the severity of their illness, in their ability and will to recover from it, and in their attitudes toward heroic interventions to save their lives. These differences affect the outcomes of care. If these differences are not accounted for, we may mistakenly blame the process when poor outcomes were inevitable, and praise the process when good outcomes were due to the type of patients arriving at our unit. Some institutions receive many severely ill patients; these institutions would be unfairly judged if their outcomes were not adjusted for their case mix before comparing them to other institutions. Similarly, in some months of the year there are many more severely ill patients. For example, seasonal variations affect the severity of asthma, and holidays affect both the frequency and the severity of trauma cases. But an even more significant source of change in the severity of illness of our patients is our own actions. Many process changes lead to changes in the kinds of patients attracted to our unit.
Consider, for example, what happens if we aggressively educate patients about the need to avoid C-sections. We may get a reputation for normal birth delivery, and we may attract patients who have fewer pregnancy complications and wish for a normal delivery. In the end, we have not really reduced C-sections in our unit; all we have done is attract a new kind of patient who does not need a cesarean birth. Nothing fundamental has changed in our processes except the input to the process. Risk adjustment of control charts is one method of making sure that observed improvements in the process are not due to changes in the kind of patients we are attracting to our unit. To help you understand this method of analysis, suppose we have collected the data in Table 5 over 8 time periods. This table shows the patients' severity of illness (risk of mortality). If you have forgotten what severity is and how we measure it, please click here to return to a previous lecture on this issue.
The question we want to answer is whether the observed mortality rate should have been expected from the patients' severity of illness (the individual patients' risks of mortality). To answer this question, we need to calculate control limits. Risk adjusted control limits for probability charts are calculated using the steps in Figure 5:
Figure 5: Steps in Calculation of Risk Adjusted Control Limits for Probability Charts

The upper and lower control limits are calculated from the expected risk, E_i, the expected deviation, D_i, and the student-t distribution constant: UCL_i = E_i + t D_i and LCL_i = E_i - t D_i. Each of these terms is defined and explained below.

Expected Mortality

The expected mortality rate for each time period is calculated as the average of the risks of mortality of all the patients in that time period. These calculations are shown in Table 6:
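Since Table 5 is not reproduced on this page, the sketch below uses hypothetical patient risks; the calculation is simply the average described above.

```python
# Expected mortality for one time period: the average of the individual
# patients' risks of mortality (the risks below are hypothetical).
risks = [0.10, 0.25, 0.05, 0.40, 0.20]
expected_mortality = sum(risks) / len(risks)   # 0.20
```

In a real analysis this average is computed separately for each time period, using the risks of the patients seen in that period.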
Expected Deviation

Before we construct control limits for the expected mortality, we need to measure the variation in these values. The variation is measured by a statistic that we call the expected deviation. It is calculated in four steps:
Figure 6: Calculation of Expected Deviation

Figure 6 shows the calculation of the expected deviation for the first time period. The same calculation should be carried through for each time period, resulting in the data in Table 7:
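The four steps in Figure 6 are not reproduced on this page. The sketch below follows the usual construction for risk-adjusted p-charts, treating each patient as an independent Bernoulli trial whose probability equals that patient's risk of mortality; the exact steps in the figure may differ in presentation.

```python
import math

# Hypothetical patient risks of mortality for one time period.
risks = [0.10, 0.25, 0.05, 0.40, 0.20]
n = len(risks)

# Variance of the number of deaths: each patient contributes p*(1 - p),
# assuming independent Bernoulli outcomes.
var_deaths = sum(p * (1 - p) for p in risks)

# Expected deviation of the mortality rate for the period: the standard
# deviation of the count, divided by the number of patients.
expected_deviation = math.sqrt(var_deaths) / n
```

The same calculation is repeated for each time period with that period's patient risks.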
T-value

To calculate the control limits, we need the t-statistic that ensures that 95% or 99% of the data will fall within the control limits. T-values depend on the sample size. To see a table of t-values for different sample sizes, click here. Table 8 summarizes the estimated t-values for all time periods:
Plotting a Risk Adjusted P-Chart

We are now ready to calculate the control limits and plot the chart. The upper and lower control limits are calculated from the expected mortality and the expected deviation so that 95% of the data would fall within these limits (i.e., we use a t-value appropriate for 95% confidence intervals):
Since negative probabilities do not make sense, we reset the negative numbers to 0. In a risk adjusted P-chart, we plot the observed rates against the control limits derived from the expected values. Figure 7 shows the resulting chart:
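Putting the pieces together, here is a minimal sketch of the limit calculation; all numbers are hypothetical, including the t-values, which in practice are read from a t-table for each period's sample size:

```python
# Hypothetical per-period expected mortality E, expected deviation D,
# observed mortality rates, and 95% t-values.
E = [0.20, 0.18, 0.25, 0.22]
D = [0.17, 0.15, 0.05, 0.12]
observed = [0.15, 0.20, 0.05, 0.30]
t = [2.78, 2.57, 2.26, 2.45]

# Risk adjusted limits: expected mortality plus or minus t times the
# expected deviation; negative lower limits are reset to zero.
ucl = [e + tv * d for e, tv, d in zip(E, t, D)]
lcl = [max(0.0, e - tv * d) for e, tv, d in zip(E, t, D)]

# Flag periods whose observed rate falls outside the limits.
outside = [not (lo <= o <= hi) for o, lo, hi in zip(observed, lcl, ucl)]
```

Any flagged period is then circled on the chart, just as described for the ordinary P-chart.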
One of the data points in Figure 7 falls outside the control limits. We have drawn a circle around this data point to attract attention to it. Points above the upper control limit show time periods when outcomes have been worse than expected from the patients' risks. Points below the lower control limit show time periods when outcomes have been better than expected. In time period 3, mortality rates were lower than expected.

Presentations

There are three sets of presentations for this lecture:
Narrated slides and videos require Flash.

Examples
Analyze Data

Advanced learners like you often need different ways of understanding a topic. Reading is just one way of understanding; another is analyzing data. The enclosed questions are designed to get you to think more about the concepts taught in this session. 1. Assume that the following data were obtained about the number of falls in a nursing home facility.
Produce a control chart. Make sure that your control chart does not have any of the following typical errors:
Email your instructor and attach your Excel file. In the subject line, include the course number and your name. For example, the subject line could be: "Joe Smith from HAP 586 analysis of data in lecture on P-chart". Please submit one file. Please note that all cell values must be calculated using a formula from the data; do not enter values in any calculated cells. Calculate each cell using Excel formulas. Make sure that the legend, the X-axis and the Y-axis are appropriately labeled in the chart. Keep a copy of all assignments until the end of the semester.

More

This page is part of the course on quality, the lecture on probability charts. This presentation was based on Alemi F, Rom W, Eisenstein E. Risk adjusted control charts for health care assessment. Annals of Operations Research, 1996. Created on Tuesday, September 17, 1996. Most recent revision 01/15/17.