The Right Focus So You Do Not Have to Redo
Total Quality Management recommends that organizations choose which problems to focus on based on data. To accomplish this, many collect data routinely about their processes and benchmark the performance of their processes against other institutions. We find this practice wasteful. First, real problems are large and obvious. A search for them is a search for the obvious. A casual conversation with anyone in the organization will identify a number of real problems.
Second, benchmarking without a clue is like fishing for insights from data. It is unscientific because it allows people to read into the data whatever they like to see. The scientific method of benchmarking is to begin with several hypotheses about potential problems in the organization and then collect data. Furthermore, defining problems from data misleads organizations into focusing on problems for which they already have routine data collection efforts underway. Thus, it focuses attention on what is easy to measure and not on the real problems of the organization.
Some proponents of data collection point out that defining problems from data helps organization members share the need for change. Thus, these efforts prepare the organization for change. We agree. But there are more efficient ways of doing so. A consensus panel accomplishes the same task in a shorter time. In these panels, members of the organization and outsiders spend half a day listing the potential problems. They then provide a numerical estimate of the extent of each problem, discuss their estimates, and re-do their estimates. If any objective data are available, panel members may also examine these data before re-doing their estimates. This process of estimation, discussion, and re-estimation takes another half day. In the end, in two half-day meetings, the consensus panel arrives at a list of problems and a consensus regarding the extent of these problems. Contrast this with benchmarking, which takes several months and may or may not focus on the real underlying problems of the organization.
But the desire to have objective data may go beyond rational need. Some in the organization may never believe the results of a consensus panel because they are based on opinions and not objective data. If this is the case, we recommend rapid, focused data collection on the problems identified by the consensus panel. Because the problems have been identified a priori, it is possible to collect and report data within a week. Rapid data collection is explained further in a later section. When you know what data you need to collect, you waste little effort collecting data you do not plan to use. Thus, you get to document the extent of the problem much faster.
One way to reduce wasted time is to make sure that you are solving the right problem. The natural tendency of people defining problems is to have a solution in mind or to blame a specific group of employees. Here is an example. One group defined their problem as reducing the number of times that their information system is down. This statement blames the Information System personnel for the problem. Instead, we suggest that you define problems in terms of customers' experiences. Thus, you may define the above problem as "Customers cannot complete their orders." The latter statement of the problem allows many different solutions, one of which is to improve the information systems. By focusing on the customer's experience, one avoids two deadly sins of problem definition: blaming employees and embedding a solution inside the statement of the problem.
It is our experience that many organizations do not post their storyboards publicly and discuss the problem definition until a solution has been identified and implemented. This is a mistake. Post the definition of the problem as soon as the consensus panel arrives at it. This shares information across the organization and creates an avenue for others to contribute insights about the extent of the problem. Thus, before the problem is solved, the team verifies that others consider it a real and important problem. It also creates momentum for, and a public commitment to, addressing the problem.
Research on how frames influence human judgment suggests that it is important to perceive problems in different ways. When problems are defined in different ways, different solutions are suggested. Thus, for example, the possible solutions to the problem of "10% of customers cannot complete their orders" are entirely different from those for "90% of customers can complete their orders." Both statements mean the same thing, but they invoke different frames and thus make the team sensitive to different issues. We find it useful to re-state every problem as an opportunity -- to state both the positive and the negative aspects of a problem. Such re-statements help expand what the team examines.
Rapid Meetings Can Save Time
The standard advice to cross-functional teams conducting improvement projects is to follow the seven-step meeting rules. These steps include (1) set an agenda, (2) keep time, (3) record people's ideas, and (4) evaluate the process at the end of the meeting. The literature on group processes suggests a number of other steps that can radically improve a group's effectiveness and efficiency. We recommend the following:
A facilitator is a team member, or an outside person, whose job is to keep the group on task, record team members' ideas, keep time, and alert the group to process problems. The facilitator should not participate in the meeting's content so that he or she can do a credible and unbiased job of helping the meeting. When no outsiders are available to facilitate, it is often useful to assign, in rotating fashion, one team member to facilitate the meeting.
The facilitator should gather ideas from team members in round-robin fashion and without evaluation until all relevant ideas have been expressed. Data show that when evaluation of ideas is postponed, teams supply more ideas and more creative ideas. Here is an example. When asking for a volunteer to accomplish a task, there is often one volunteer in small teams. Many groups assign the task and proceed to other steps without realizing that someone else in the group may have been better at it. In contrast, postponing evaluation means that you wait and ask whether others would also volunteer. Then, after identifying a number of people who can do the task, a conscious choice is made. Postponing evaluation prevents premature closure of the group's information gathering. It reduces the chance that the group will fall victim to groupthink -- where everyone goes along with the group's norm.
For many teams, meetings start when team members assemble. This is a mistake. One can do much work before the face-to-face meeting to make sure that it runs smoothly. Typically, Total Quality Management teams set the agenda ahead of the meeting. This is useful but not enough. We prefer to start the meeting ahead of the face-to-face gathering. The facilitator should contact team members individually and ask for their input on all items on the agenda. Sometimes the facilitator can do this through Delphi questionnaires, other times through email or face-to-face conversations. Sometimes a simple phone call will accomplish the task. No matter how it is done, the facilitator needs to gather as many reactions from the team members as possible. In the actual meeting, these responses are summarized and presented back to the team.
There are several advantages to collecting team members' ideas before the meeting. First, people are separated from their ideas, and thus it is easier to judge an idea on its merit than on who first thought of it. Second, time is saved. Much of the activity that needs to be done in the meeting is done ahead of time, making the meeting shorter.
We find it fascinating that when we ask teams to think through an issue again, they often improve on their own decisions. Redoing is somewhat like bootstrapping: you improve your prior effort. When you come to the same task again, you have a sense of familiarity that lets you search in more depth for modifications and improvements. Furthermore, you have the benefit of all the discussion that has preceded the group's decision making. You come to see the idea from a new perspective. Thus, you improve your previous work. Redoing is very useful when the group has accomplished its task remotely. When the group meets face to face, they review their consensus to date, discuss it, and modify it. Re-doing is also very useful when the group has been asked to vote on a long list of alternatives so that a few options can be pursued. In these cases, discussion and re-doing the task improve the group's judgment.
At first it may seem strange that, in designing rapid improvement efforts, we ask you to do things twice, which clearly takes more time. But in reality, the time spent doing things over is well spent. It reduces costly mistakes that would consume even more time in the long run. Furthermore, steps like meeting before the meeting save so much time that redoing the meeting can easily be accommodated.
Rapid Plans Can Save Time
There are several distinct methods of planning. One, like Total Quality Management, focuses on what is and how to improve it. Others, like Third Wave organizational development, focus on what could be and how to reach it. If we focus on "what could be," we still need to flow-chart the current situation, but not in as much detail. Instead of flow-charting the details of the current situation, we need to identify the constraints that the current process creates for the implementation of our imagined future process. Since flow-charting takes a considerable amount of time, reducing the detail in these charts helps teams complete their plans faster.
Also, when too much attention is paid to the details of the current situation, there is a danger that the group will get lost in details and fail to see the larger picture. The team may arrive at small incremental changes as opposed to large steps forward. By focusing on a new vision of the process, more improvements can be suggested without anchoring the team's judgment in the current process.
Rapid Data Collection Can Save Time
There are at least three ways to reduce data collection time.
Too often people collect data that they do not need and will not analyze. This is absurd, but many do it -- some because they feel that certain information must always be collected, others because they do not have a clear idea of what they plan to do with the data. Every piece of data, no matter how small, adds to the length of a survey and to the difficulty of administering it.
We have an easy exercise that reduces the tendency to collect data you do not need. Imagine that you have the data you plan to collect. Even make up the data if that helps you imagine it. Analyze the data and write the report. Write the introduction, the methods section, and the results section of your report. Leave the conclusion and the summary for later. In the results section, create the tables and figures you will use to show your findings. Such an exercise in planning the report will make sure that no data are collected that are not subsequently needed. It also reduces the time it takes to go from data collection to report writing. Write the report before you collect data and you will know what data are absolutely necessary for your report.
An alternative to writing the report is to conduct a decision analysis. On those occasions when complex data models are used, we recommend conducting a decision analysis; sensitivity analysis can then help you decide whether the conclusions of your report depend on your data collection efforts. Collect only those data that will directly change the conclusion of your report.
Many teams collect data from every patient affected by the process. This creates a large burden in administering the surveys. The larger the number of people surveyed, the longer, typically, the time to complete the effort. Instead of focusing on large-scale surveys, it is better to focus on representative surveys. A sample of patients is chosen that seems to represent the population affected by the new process. These patients complete the survey, and based on their responses, inferences are made concerning the whole population.
One way to make sampling more efficient is to devise rules for expanding the sample. A small sample is drawn. If it leads to clear, unequivocal conclusions, then no more data are collected. If the results are ambiguous, then a larger sample is drawn. Thus, for example, one may agree to sample 20 representative patients about their satisfaction with the new process. If less than 1/2% or more than 5% are dissatisfied, then no more samples are drawn. If between 1/2% and 5% of the respondents are dissatisfied, then a larger sample of 50 patients is drawn. This method of two-stage sampling reduces the number of patients that need to be contacted and thus the time it takes to collect the information.
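As a rough sketch, the two-stage rule above can be written as a simple decision function. The function name and default thresholds are illustrative assumptions drawn from the example, not part of any standard sampling method:

```python
# Two-stage sampling rule sketched from the example above: stop after the
# first sample of 20 patients unless the dissatisfaction rate is ambiguous.
# The thresholds (1/2% and 5%) come from the text; the rest is illustrative.

def needs_second_stage(dissatisfied, sampled, low=0.005, high=0.05):
    """Return True when the first-stage result is ambiguous.

    A rate below `low` or above `high` counts as a clear conclusion;
    anything in between triggers the larger second-stage sample of 50.
    """
    rate = dissatisfied / sampled
    return low <= rate <= high

# 2 dissatisfied out of 20 is 10% -- clearly above 5%, so stop sampling.
print(needs_second_stage(2, 20))  # False
# 1 dissatisfied out of 20 is 5% -- ambiguous, so draw the larger sample.
print(needs_second_stage(1, 20))  # True
```

Agreeing on such a rule before data collection begins is what makes the second stage fast: no one has to debate, mid-study, whether more data are needed.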
There are at least two sources of data. The first relies on your own observation of the process and is called "objective." The second relies on the observations of others and is referred to as "subjective." Note that by subjective data we do not mean the likes and dislikes of a person, which are, after all, idiosyncratic and unreliable. By subjective data, we mean relying on the observations of others. Thus, a nurse saying that patients' satisfaction has improved is based on the nurse's observation of the frequency of the patients' complaints, not on his or her likes and dislikes.
When under time and resource pressures, subjective opinions may be a reliable source of data that can replace the seemingly more "objective" surveys of patients. In the above example, a nurse's report of the frequency of patient complaints may be as accurate as our own survey of patients' complaints, especially when, as novices, we do not know how to get patients to tell us what is on their minds. Many people think opinions are unreliable and therefore cannot be the basis of action. This is a peculiar position when the same people are willing to bet their own lives on the opinion of a surgeon and go ahead with life-threatening medical interventions. Consider for a moment what you would rely upon in the following example. If you were betting on a horse race, would you rely on a book of statistical data on the jockey and the horse's previous performance, or would you want a hot tip? When faced with these circumstances, most people would choose the hot tip as long as the source of the information is reliable. The same people may find a manager's reliance on opinions concerning the performance of a process unacceptable. Why? There is a suspicion that opinions are unreliable. One way to address this suspicion is to show that the opinions are shared across a cross-section of people familiar with the process. Instead of asking one person, ask a group of people familiar with the process and see to what extent they agree. When and if they agree, a reliable judgment has been made about the performance of the process.
The other component of our suggestion is that you should ask for numerical estimates from the people you interview and use analytical techniques to aggregate these judgments into more complex conclusions. Numerical data, even when subjective, can be analyzed and displayed using control charts or other analytical methods. Non-numerical data cannot be analyzed in this fashion. At first, this may seem strange, as many people are not comfortable asking for or giving numerical estimates. But an exercise may demonstrate our point. Suppose you want to know whether the number of people served by your emergency room has changed. If you ask the nurses to directly estimate how much the situation has changed, they may not remember. But if you ask them how many patients they served today, they will surely remember. If later you ask again, you can get more estimates. In time you will have enough estimates to track any changes in emergency room utilization. You can track these changes on a control chart only if the estimates are numerical.
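To illustrate, here is a minimal sketch of putting such daily estimates on an individuals control chart. The daily counts are invented for illustration; the 2.66 factor is the standard constant for individuals charts based on the average moving range:

```python
import statistics

# Hypothetical daily patient counts estimated by the nurses.
daily_estimates = [42, 38, 45, 40, 39, 44, 41, 43, 37, 46]

center = statistics.mean(daily_estimates)

# Moving ranges between consecutive days estimate short-term variation.
moving_ranges = [abs(b - a) for a, b in zip(daily_estimates, daily_estimates[1:])]
mr_bar = statistics.mean(moving_ranges)

# Individuals-chart control limits: center +/- 2.66 * average moving range.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

for day, count in enumerate(daily_estimates, start=1):
    flag = "  <-- investigate" if count > ucl or count < lcl else ""
    print(f"day {day:2d}: {count}{flag}")
```

A day falling outside the limits signals a real change in utilization rather than ordinary day-to-day variation -- exactly the judgment the nurses could not make from memory alone.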
You are probably not convinced. You think that surely people's guesses are likely to be inaccurate. Some believe that humans are inherently ill-equipped to arrive at numerical estimates because of cognitive limitations. According to Hogarth,
"Man is a selective, sequential information processing system with limited capacity, he is ill-suited for assessing probability distributions."
While cognitive limitations may lead to errors in numerical estimates, we can use different techniques to produce reliable and accurate estimates. First, you must rely on experts: people familiar with the underlying process, who know it better than anyone else. When you get your estimates from such people, the chances for error are obviously reduced. Even then, there are specific steps you can take to improve the estimates. Train the experts (i.e., walk them through a few assessments and demonstrate the implications of their estimates). Don't rely on one expert; use a group of experts with diverse backgrounds and insights. Groups of experts can compensate for each other's cognitive biases. Specifically, experts who individually estimate numbers, discuss their differences, and then re-estimate have been found to improve the accuracy of their estimates by as much as 33 percent. Finally, make any tool or data that experts typically use in their everyday judgments available to them. Take, for example, meteorologists. Every day they make numerical judgments about the weather. They are accurate because they rely upon objective data (even though it is only partially relevant) and because they rely upon numerous decision aids (e.g., computers, satellite pictures, etc.). Similarly, clinicians can give you accurate estimates if they have access to their books and patient charts. In this regard, Edwards writes:
"If substantive experts are indeed allowed the time and the necessary tools (e.g. paper and pencil), they can accurately assess probabilities. Granted that assessed probability is not precise to the third digit, it nevertheless is a systematic and coherent assessment of the individual's belief."
Our own experience in eight different application areas suggests that a group of experts with access to training and the proper tools can provide sufficiently accurate numerical assessments.
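The estimate-discuss-re-estimate cycle described above can be sketched in a few lines. The expert names, their estimates, and the halfway-revision model of discussion are purely illustrative assumptions:

```python
import statistics

# Hypothetical first-round estimates (e.g., percent of orders with problems).
first_round = {"expert_a": 30, "expert_b": 50, "expert_c": 80}

group_median = statistics.median(first_round.values())

# After discussing their differences, experts tend to move toward the group
# view; here each revises halfway toward the median, purely for illustration.
second_round = {
    name: (estimate + group_median) / 2
    for name, estimate in first_round.items()
}

consensus = statistics.median(second_round.values())
print(consensus)  # the panel's post-discussion consensus estimate
```

The point of the sketch is the structure, not the numbers: independent estimates first, a shared summary second, and a revised aggregate last, so that no single expert's bias dominates.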
A final point should also be made about combining objective and subjective data. Experts can specify the parameters of a scoring system or an index, and then objective data can be used to test the overall accuracy of the index. This approach has two advantages: first, it is based on objective data and is therefore acceptable to many; second, it is based on subjective data and can therefore be accomplished quickly.
If experts specify the parameters of an index, there is no need to set aside data for parameter estimation, so the need for objective data is reduced -- not by just a little, but radically. For example, in analyzing the outcomes of a process, it is often necessary to adjust the outcomes for the severity of the patients' illness on admission. Severity indices can be constructed from subjective opinions or from analysis of objective data. Severity indices constructed from subjective opinions can subsequently be tested against objective data. When doing so, much less data is needed, because the subjective index has one degree of freedom while the objective multivariate approach has many. Usually, the number of degrees of freedom in a multivariate analysis is one less than the total number of variables, and for each degree of freedom one generally needs about 10 cases. Thus, if one has a 200-variable model, there are 199 degrees of freedom and one needs a database of approximately 1990 cases. When experts specify the scoring of the 200-variable model, the scoring system maps all of the variables into a single score. As a consequence, the degrees of freedom drop and the need for data is reduced. Thus, the 200-variable model, which previously required 1990 cases, can now be analyzed with 50 cases.
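The arithmetic above can be checked with a short sketch; the ten-cases-per-degree-of-freedom rule of thumb is taken from the text, and the function name is our own:

```python
# Rule of thumb from the text: each degree of freedom needs about 10 cases,
# and a multivariate model has one fewer degree of freedom than variables.

def cases_needed(n_variables, cases_per_df=10):
    degrees_of_freedom = n_variables - 1
    return degrees_of_freedom * cases_per_df

# A 200-variable multivariate model needs roughly 1990 cases.
print(cases_needed(200))  # 1990
```

When experts collapse those 200 variables into a single score, only the score itself remains to be validated against outcomes, which is why a far smaller database suffices.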
One method of reducing data collection time is to put in place a number of plans for rapid response to specific questions that may be posed by the team. The facilitator approaches employees close to the process and alerts them that the team plans to ask them a few questions. The exact nature of the questions is not yet clear, but the procedure used to send the questions and collect the responses is explained and perhaps even practiced. The individuals are put on notice about the need to respond quickly.
When the team finally arrives at a key need for data, the facilitator broadcasts the question, usually through a telephone message, and within a few hours collects the responses. For example, the National Institute of Drug Abuse often needs to respond to questions posed by the White House and other policy makers about the effectiveness of the nation's drug policy. To respond to these policy questions based on data, they have put together a network of informants at various hospital emergency rooms. When a question comes up (e.g., are the rates for tuberculosis increasing?), the question is broadcast to all informants, who within 24 hours provide their responses to the Institute. As a consequence of planning, the agency is able to respond quickly to the data needs of policy makers. Similarly, improvement teams can plan for their data needs before they know what needs to be collected. They can obtain commitments from individuals about rapid response, and they can put in place mechanisms for broadcasting questions to employees who agree to help.
Computers can now automatically call patients, find them in the community, ask them your questions, analyze the responses, and fax the results to you. In one study we asked a secretary and a computer to compete to contact "hard to reach" persons and ask them a few questions. On average, the secretary was able to do the task in 41 hours, while the computer accomplished the same task in 9 hours. Technology can help overcome the difficulty of finding people. When you use technology to collect information from people, there is one added benefit: people are more likely to tell the truth to a machine than to a person. In surveys of drug use, homosexuality, and suicide risk, patients were more likely to report their activities to a machine than to a clinician, even though they were aware that the clinician would subsequently review the computer summary.
Get More from Your Effort
One way to improve faster is to get more from existing efforts. The idea is to roll out gains in one unit of the organization to the rest of the organization, thus avoiding the cost and time of having separate improvement projects in each unit. There are a number of steps you can take to encourage a successful roll-out.
The use of cross-functional teams is one hallmark of Total Quality Management. When cross-functional teams include a broad organizational membership, the subsequent adoption of the team's suggestions is helped. But unfortunately, the larger the group, the more difficult it is to arrive at improvements. One way to have the implementation advantage of large teams and the efficiency of small teams is to rotate membership. Different people can work on different aspects of the problem. Thus, a broad group from the organization may arrive at the problem definition while a narrow group close to the process arrives at the possible solutions. Changing and managing the membership of cross-functional teams may improve the chances for whole-system implementation.
Most improvement projects either do not create a storyboard or wait to display it until the improvement task is complete and needs to be communicated to the rest of the organization. Waiting to display a storyboard is a mistake. The best stories are those that unfold over time. The employees' imagination is piqued when the story is displayed as it unfolds.
In addition, early disclosure of the improvement effort helps others in the organization get involved well before a solution is reached. When the organization's employees are involved, they are more likely to implement the team's suggestions.
With the growth of media technology within organizations (email, phone messages, intranets, newsletters, etc.), it is now possible to routinely broadcast the team's progress reports to the whole organization. Such broadcasts help involve the organization in the team's deliberations. Unfortunately, some managers are still unfamiliar with the use of media in organizations. But their subordinates, who have grown up with constant exposure to media, are sometimes more receptive.
Enormous opportunities are lost when we write a memo instead of sending along a video. If teams want to change whole organizations more quickly, they should overcome their own fear of media and take advantage of technology to reach, in words and in pictures, every employee involved. If you want to capture the attention and the hearts of employees, an unfolding story is the best place to start. Tell them what the problem is, what is being planned, and why. And tell the story as it unfolds in your team meetings.
Appeal Beyond Rational Arguments
Obviously many adopt a change because it is in their self-interest. But self-interest is not enough. Literature on change and leadership suggests many other factors at play. First, employees need to feel (not to just know) that there is a need for change. They can feel the need for change by hearing and seeing anecdotal information that supports the data on the extent of the problem. They need to hear the voice of their customers. Circulating a complaint from a customer will do wonders in making employees feel the extent of dissatisfaction with the current situation, even when data about satisfaction rates is already communicated to them.
Second, employees need to see that the change will address their needs, not just the needs of their customers. They need to see that it may make their jobs easier or give them more pleasant interactions with customers. Third, employees need to feel they are capable of changing (meaning that they have the resources and the mandate to change). Fourth, they need to know where to turn in case of trouble. Employees are more likely to change if they have the emotional and social support of key people they respect in the organization. Fifth, employees may decide to change but continue to act as before. They need to see that work norms are changing, along with organizational procedures and the reward structure. Sixth, they need to have a method of trying things out before becoming fully committed to the change. Seventh, employees need to hear that they have made the right choice even after they adopt the proposed change. Post-change remorse is common, and providing information to reduce it helps institutionalize the change.
In the end, the steps listed above may help you solve your problems faster. In some cases they change where you put your emphasis, and in other cases they help you do what you are already doing faster. Some of our suggestions, in fact, lead you to spend more time right away in order to save time in the long run. These suggestions are based on the literature and on our experience with improvement. Of course, more data are needed to test our claim that these steps make improvement faster.
But the potential and the promise of rapid improvement are so large that any effort in this direction makes sense. If you save time, what are you going to do with all that time on your hands? Naturally, you could solve more problems. But is it possible that organizations can become such rapid problem solvers that fewer problems remain to solve? Is it possible to solve our problems so fast that we run out of significant problems? Given the crisis-ridden nature of our current organizations, the idea of rapid change, even if far-fetched, is a pleasing one to contemplate.