In probabilistic risk analysis, the analyst often faces situations where the event of interest is quite rare (less than 5% chance of occurrence); perhaps it has happened only once in
a decade. This review focuses on how to accurately assess probabilities of rare events.
In general, there are two ways of assessing probabilities, neither of which is well suited to rare events. The most common approach is to rely on the observed frequency of the event. This method cannot be applied to rare events: by definition, rare events do not occur often, and one must accumulate a large data set before reliable estimates can be made. For an event that occurs once a decade, one has to collect several decades of data before a reliable estimate can be obtained.
Alternatively, many rely on experts to assess the probability of events. But human beings are notoriously ill equipped to distinguish among very small probabilities. In estimating rare probabilities, entire orders of magnitude are sometimes missed; a probability of 1 in a million is estimated as 1 in a thousand. An alternative is needed that overcomes these difficulties in the estimation of rare probabilities.
Before proceeding, it is important to clarify how anyone would know whether the assessed probability of a rare event is accurate. In general, the accuracy of probabilistic forecasts is verified by calibration: over numerous occasions on which the same probability is forecast, the frequency of occurrence of the event is compared to the estimated probability. For example, suppose a weather forecaster predicts that there is an 80% chance of rain. If it rains tomorrow, is this a reasonable forecast? If it does not rain, is the forecast erroneous? Neither of these questions can be answered. In most situations, a single event cannot tell us much about the frequency of that event. The accuracy of the forecast can only be established if, over a large number of days, say 100 days, on which the weather forecaster has predicted an 80% chance of rain, it does indeed rain on 80 days. Only then can we claim that the weather forecaster is well calibrated and accurate.
Obviously, the requirement to observe a large number of similar forecasts makes it difficult to verify the calibration of forecasts of rare events: there are not enough such forecasts, or observations of the event, to compare the two. So how could one assess the reasonableness of probability estimates for rare events? In the case of rare events, a single observation to the contrary may be enough to question the estimate. If a rare event occurs more frequently than predicted, we may have to revise our assessment of it. If an event is expected to occur once every 100,000 occasions, then observing the occurrence of the event after 25 occasions signals a problem with the estimate. For example, Cooke (1991) reports that NASA administrators had predicted the probability of shuttle failure at one in every 100,000 flights. Colglazier and Weatherwax (1986) had predicted such failure at one in every 35 flights. When the Challenger Space Shuttle failed after only 25 flights, it became clear that NASA administrators were wrong in assuming shuttle failures would be very rare.
In recent years, there have been many occasions in which the risks of rare events have been assessed and subsequent events have helped confirm the accuracy of the risk analysis or improve aspects of it. Probabilistic risk analysis originated in the aerospace industry. One of the earliest comprehensive studies was started after the loss of life due to a fire in Apollo flight AS-204 in 1967. In 1969, the Space Shuttle Task Group in NASA's Office of Manned Space Flight suggested that the probability of loss of life should be less than 1 percent. Colglazier and Weatherwax (1986) conducted a probabilistic risk analysis of shuttle flights. But over time, NASA administrators abandoned numerical forecasts of risk, as the projected risks were so high as to undermine the viability of the entire operation. Cooke (1991) and Bell and Esch (1989) report that NASA administrators "felt that the numbers could do irreparable harm." Subsequent shuttle accidents, however, returned the emphasis to probabilistic risk analysis. Today almost all components of the space shuttle go through independent risk analysis (Safie 1991, 1992, 1994; Hoffman 1998; Planning Research Corporation 1989; Science Applications International Corporation 1993, 1995). A good example of such risk analysis can be found in the work of Pate-Cornell and Fischbeck (1993, 1994), who assessed the risk of tiles breaking away from the shuttle. In this award-winning study, the authors linked management practices to the risks associated with various tiles on the shuttle.
In nuclear safety, several studies have focused on reactor safety. The first such study was the Reactor Safety Study (1975). The study was followed by a series of critical reviews (Environmental Protection Agency 1976; Union of Concerned Scientists 1977; American Physical Society 1975), including, in 1977, a Congressional mandate for a review panel to examine the limitations of the study. The near failure of the reactor core at Three Mile Island, however, showed that the scenarios anticipated in the study were indeed correct, though the probabilities of human failure had been underestimated. Not surprisingly, reviews of Three Mile Island re-emphasized the need for conducting probabilistic risk analysis (Rogovin and Frampton 1980; Kemeny et al. 1979). Kaplan and Garrick (1981) conducted a study of the probability of reactor meltdown. In 1983, the U.S. Nuclear Regulatory Commission issued a manual on how to conduct probabilistic risk analysis. Probabilistic risk analysis has also been used by energy firms outside nuclear power to predict catastrophic events (Cooke, Jager 1998; Rasmussen 1981; Ortwin 1998).
Probabilistic risk analysis has been applied to a variety of natural disasters, including earthquake prediction (Chang, Shinozuka, Moore 2000), flood prediction and coastal design (Voortman, van Gelder, Vrijling 2002; Mai, Zimmermann 2003; Kaczmarek 2003), and environmental pollution (Slob, Pieters 1998; Moore, Sample, Suter, Parkhurst, Scott 1999). A large number of studies focus on waste disposal and environmental health (Ewing, Palenik, Konikow 2004; Sadiq, Husain, Veitch, Bose 2003; Cohen 2003; Garrick, Kaplan 1999). In health care, probabilistic risk analysis has focused on root cause analysis of sentinel adverse events, such as wrong-side surgery, and on failure mode and effect analysis of near-catastrophic events (Bonnabry et al. 2005). The pharmaceutical firm Amgen has also used the procedure in deciding on new product development (Keefer 2001). In failure mode analysis within health care, most often the rank order of rare probabilities is assessed and the magnitude of the probability is ignored (DeRosier, Stalhandske, Bagian, Nudell 2002).
The application to terrorism is new. Taylor, Krings and Alves-Foss (2002) have applied probabilistic risk analysis to the assessment of cyber terrorism risks. Others have suggested the use of these techniques in the assessment of terrorism (Apostolakis, Lemon 2005; Haimes, Longstaff 2002).
There are a number of methods available for assessing the probability of rare events. This review discusses four approaches: use of fault trees, similarity judgments, importance sampling, and time to the event. Each of these approaches is discussed below.
The concept of fault trees and reliability trees has a long history in the space and nuclear industries. Several books (Krouwer 2004) and papers (Marx and Slonim 2003) describe this tool. The first step in constructing a fault tree is to identify the sentinel adverse event to be analyzed. Then all possible ways in which the sentinel event may occur are listed. It is possible that several events must co-occur before the sentinel event can occur. For example, in assessing the probability of an employee providing information to outsiders, several events must co-occur. First, the employee must be disgruntled. Second, information must be available to the employee. Third, outsiders must have contact with the employee. Fourth, the employee must have a method of transferring the data. All of these events must co-occur before hospital data are sold to an outside party; none of them is sufficient by itself to cause the sentinel event. In a fault tree, when several events must co-occur, we show this with an "And" gate. Each of these events can, in turn, depend on other factors. For example, there may be several ways to transfer the data: on paper, electronically by email, or electronically on disk. Any one of these can lead to the transfer of data. When any one of a series of events may be sufficient by itself to cause the next event to occur, we show this with an "Or" gate. A fault tree is thus a collection of events connected by "And" and "Or" gates, with each event depending on a series of other related events, forming a complex web of relationships. A fault tree suggests a robust work process when several events must co-occur before the catastrophic failure occurs: the more "And" gates in the tree structure, the more robust the work process modeled. In contrast, it is also possible for several events by themselves to lead to catastrophic failure: the more "Or" gates in the path to failure, the less robust the work process.
The second step is to estimate probabilities for the fault tree. Since the catastrophic failure is rare, it is difficult to assess its probability directly. Instead, the probabilities of the various events leading to the failure are assessed. For example, the probability of finding a disgruntled employee can be assessed. The probability of an employee having access to large data sets can be assessed by counting employees who have such access in the course of their work. The probability of an employee being approached by someone wanting to buy data can be assessed by providing an expert with data on the frequency of reported crimes and asking him or her to estimate the additional unreported rate. In short, through objective data or the subjective opinions of experts, the various probabilities in the fault tree can be assessed. The fault tree can then be used to assess the probability of the catastrophic and rare event using the following formula:
P(catastrophic failure) = ∑i ∏j p(i,j)
In the above formula, "j" indexes events that are related to each other through an "And" gate, and "i" indexes events that are related to each other through an "Or" gate.
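To make this calculation concrete, the sketch below (in Python; the event names and probabilities are hypothetical, not estimates from any study) evaluates the data-theft tree described above. For the "Or" gate it computes 1 − ∏(1 − p), which is exact when the events are independent; the summation in the formula above is the usual rare-event approximation of that quantity.

```python
# A minimal sketch of fault-tree probability calculation, assuming
# independent events. All gate inputs and probabilities are hypothetical.

from functools import reduce

def and_gate(probs):
    """All events must co-occur: multiply the probabilities."""
    return reduce(lambda x, y: x * y, probs, 1.0)

def or_gate(probs):
    """Any event suffices: complement of none of them occurring."""
    return 1.0 - reduce(lambda x, y: x * (1.0 - y), probs, 1.0)

# Data-theft example from the text: four conditions must co-occur,
# and the last (a transfer method) can happen in any of three ways.
p_disgruntled = 0.05                       # assumed
p_access      = 0.20                       # assumed
p_contact     = 0.01                       # assumed
p_transfer    = or_gate([0.5, 0.3, 0.2])   # paper, email, or disk

p_failure = and_gate([p_disgruntled, p_access, p_contact, p_transfer])
print(f"Probability of data theft: {p_failure:.6f}")
```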
Sometimes we are trying to predict an event that has no precedent but is in some way similar to a previous rare event. For example, prior to the September 11th attack on skyscrapers in New York City, terrorists tried to attack the Eiffel Tower by crashing a hijacked plane into it. The two incidents are similar in the sense that both targets are tall structures with important symbolic value, and both were attacked with a passenger jet in the hope that the jet fuel would lead to additional destruction. They are, of course, also different incidents, occurring for different reasons at different times in different places. Should an analyst deduce from the attack on the Eiffel Tower that other similar attacks are likely?
Consider another situation. Recently, terrorists attacked a school, taking children hostage and surrounding them with bombs. Is it possible that a similar attack may occur in a hospital in the United States, and if so, what is the probability of such an attack? The answer depends on two factors. First, what is the probability of an attack on a school? Second, how similar is the hospital scenario to the school situation?
Similarity judgments can be used to extend the probability of known rare events to new situations. Psychologists have conducted numerous experiments showing that the similarity of two situations depends on the features they share and the features unique to each case (Mobus 1979; Siegel, McCord, Crawford 1982; Schwarz, Tversky 1980; Catrambone, Beike, Niedenthal 1996). In 1977, Tversky summarized the research on similarity and provided a mathematical model for judgments of similarity. According to the procedure suggested by Tversky, the similarity of two situations "i" and "j" can be assessed by listing the following three categories of features:
Features in the index case but not in the comparison case, f(i, not j)
Features in the comparison case but not in the index case, f(not i, j)
Features in both cases, f(i, j)
Then similarity can be measured from the counts of shared and unshared features using the following formula:

S(i, j) = f(i, j) / [f(i, j) + a × f(i, not j) + b × f(not i, j)]
In the above formula, the constants "a" and "b" add up to 1 and are set based on whether the index case is the defining prototype. When these constants differ from 0.5, they allow the comparison case to be more like the index case than vice versa; for example, they allow a child to be more like the father than the father is like the child. Consider the similarity between an attack on a hospital in America and the attack on the school in Russia. First, we list the features shared or unique across the two situations:
Features in the index case (attack on school) but not in the comparison case (attack on hospital):
No proximity defense, easy access
No communication system between rooms, allowing terrorists time to collect a large number of people
School-age children
Features in the hospital attack scenario but not in the school case:
Difficulty in gathering the population into one central location
Availability of security officers
Features shared by both cases:
Focused on vulnerable population
An ongoing war leading to occupation of the region
While this list is brief, it highlights the procedure. Once the list has been created, the similarity of the two situations can be measured using the formula. Assume that we let the constant "a" be 0.80 and the constant "b" be 0.20. Then the similarity of the hospital situation to the school is calculated as:

S(hospital, school) = 2 / (2 + 0.80 × 2 + 0.20 × 3) = 2 / 4.2 ≈ 0.5

Here the numerator counts the two shared features; the two features unique to the hospital scenario are weighted by "a", and the three features unique to the school case are weighted by "b".
Based on this calculation, if we think that the probability of an attack on a school is, let's say, 1 in 10,000, then the probability of an attack on the hospital is:

Probability of attack on hospital = (1/10,000) × 0.5 ≈ 5 in 100,000
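The similarity calculation is easy to script. The sketch below (Python; the feature labels are abbreviations of the lists above, and the base probability is the assumed 1 in 10,000) implements the ratio form of Tversky's model and scales the school-attack probability to the hospital scenario.

```python
# A minimal sketch of Tversky's ratio model of similarity, used to scale
# a known rare-event probability to a new scenario. Feature labels and
# the base probability are illustrative values from the text.

def tversky_similarity(subject, referent, a=0.8, b=0.2):
    """S = shared / (shared + a * subject-only + b * referent-only)."""
    shared = len(subject & referent)
    subject_only = len(subject - referent)
    referent_only = len(referent - subject)
    return shared / (shared + a * subject_only + b * referent_only)

school = {"no proximity defense", "no room-to-room communication",
          "school-age children", "vulnerable population", "ongoing war"}
hospital = {"hard to gather population", "security officers",
            "vulnerable population", "ongoing war"}

s = tversky_similarity(hospital, school)   # ≈ 0.48, roughly 0.5
p_school = 1 / 10_000                      # assumed base probability
print(f"Similarity: {s:.2f}, scaled probability: {p_school * s:.6f}")
```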
One method of improving the accuracy of estimates of rare events is to purposefully examine the event in artificially constructed samples where the event is not rare (Heidelberger 1995; Glynn, Iglehart 1989; Srinivasan 2002). The frequency of the event in each sample can then be extrapolated to the remaining situations in proportion to how narrowly the sample was drawn. The procedure is generally known as importance sampling and involves sampling data from situations where we expect to find the rare event. Assume that we have taken M narrowly defined samples and that sample "i" represents Wi cases in the population of interest. If Pi is the probability of the event in the narrowly defined sample, then the probability of the rare event, P, can be calculated as:
P = (∑i=1,…,M Wi Pi) / (∑i=1,…,M Wi)
An example may demonstrate this concept. Suppose we want to estimate the probability of a successful theft of data by overcoming password protection on a computer. For most organizations such an attack is rare, but it is more likely to be seen in computers that are infected by a virus. Suppose that in an organization 1 in 100 computers has a major virus. Also suppose that examination of data trails in these infected computers shows that 0.3% involve loss of data. What is the probability of loss of data anywhere in the organization? This probability is calculated by weighting the narrow sample of infected computers to reflect the proportion of these computers inside the organization:
P = (1/100) × 0.003 + (99/100) × 0 = 0.00003
Note that in this calculation we have assumed that loss of data does not occur in computers without virus infection. This may be wrong, but as a first approximation it is a reasonable step, since we anticipate that most data loss occurs among infected computers. The importance weighting procedure requires us to know a priori, with a high level of certainty, both the conditions under which the rare event is more likely to occur and the prevalence of those conditions.
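The weighted calculation can be written as a small routine. The sketch below (Python) reproduces the data-loss example, with the stratum weights and within-stratum probabilities taken from the text; a real application would substitute observed values.

```python
# A minimal sketch of the weighted-sample (importance sampling) estimate.
# Strata, weights, and probabilities are the illustrative values from the
# text: 1% of machines infected, 0.3% of those lose data.

def importance_estimate(strata):
    """strata: list of (weight, probability) pairs, one per narrow sample."""
    total_weight = sum(w for w, _ in strata)
    return sum(w * p for w, p in strata) / total_weight

strata = [
    (1, 0.003),   # infected computers: 1 per 100 machines, 0.3% lose data
    (99, 0.0),    # clean computers: assumed, as in the text, to lose none
]
print(f"P(data loss) = {importance_estimate(strata):.5f}")   # 0.00003
```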
A special case of importance sampling arises when we over-sample rare events so that we can better understand the relationship of causes within these events. Sometimes extant databases exist that collect information about the causes of various sentinel events over the years. Sometimes the data are no more than a list of what happened. To go from these limited data to a joint distribution of causes and the sentinel event, we need to remove the effect of over-sampling from the data. A revised count of the data is calculated by following these steps:
1. Count the frequency of sentinel event "s" in your databank; call this Cs.
2. Count the frequency of cause "i" among the sentinel events "s"; call this Cis.
3. Count the number of normal events, Cn; these are the events in which the sentinel event has not occurred. These data are readily available, as they are common, everyday occurrences.
4. Estimate the odds of occurrence of the sentinel event, shown as Os for sentinel event "s". Typically, the odds are estimated from the time-to-sentinel-event procedures described in the following section. Note that the odds are typically several orders of magnitude different from the ratio Cs/Cn.
5. Adjust the counts to reflect the expected proportion of sentinel events. We had over-sampled the sentinel events; this adjustment reduces the counts to reflect the proportion at which we actually observe the sentinel event. The revised count of cause "i" among sentinel events, Ris, is provided by the following formula:

Ris = (Cis / Cs) × Os × Cn
An example can describe this procedure. Suppose we want to examine the joint distribution of wrong blood transfusion and understaffing of the operating room. Wrong blood transfusion is quite rare; suppose that in the last 1,000 operations it has occurred only 3 times. On each occasion an investigation of various causes was done. Two of these occurred when the operating room was understaffed and one when it was not. Most patients receive the right blood transfusion. In a sample of 500 operations without wrong blood transfusion, 250 were operated on when there was a staff shortage. With these data we can now estimate the joint distribution of operating room understaffing and wrong blood transfusion. Table 1 shows operating room staffing within the groups that did or did not have a wrong blood transfusion. Note that these numbers make sense as columns but cannot be added across rows, as the population of patients with wrong blood transfusion has been over-sampled.
Understaffed Operating Room | No wrong blood transfusion (last 500 operations) | Wrong blood transfusion (in 1,000 operations) | Total
No | 250 | 1 | Not available
Yes | 250 | 2 | Not available
Total | 500 | 3 | Not available

Table 1: Counts within types of blood transfusion
To create a table that is not based on separate cohorts, we start with an arbitrarily large number of normal operations, say 10,000, that have not had any wrong blood transfusions, and distribute these operations based on the observed rates of understaffed operating rooms. The results of this step are shown in Table 2:
Understaffed Operating Room | Wrong blood transfusion: No | Wrong blood transfusion: Yes | Total
No | 5,000 | — | —
Yes | 5,000 | — | —
Total | 10,000 | — | —

Table 2: Distribute a large number of operations for normal patients
Note that the odds of wrong blood transfusion are 3/997. Using these data and the formulas provided above, we can estimate the number of wrong blood transfusion cases expected to accompany 10,000 normal operations: (3/997) × 10,000 ≈ 30. The results are shown in Table 3:
Understaffed Operating Room | Wrong blood transfusion: No | Wrong blood transfusion: Yes | Total
No | 5,000 | — | —
Yes | 5,000 | — | —
Total | 10,000 | 30 | —

Table 3: Estimate expected number of wrong blood transfusions
Next we distribute the estimated number of wrong blood transfusions based on the observed staffing in the cohort of patients with wrong blood transfusion: two thirds (20) to understaffed operating rooms and one third (10) to fully staffed rooms. The results are shown in Table 4.
Understaffed Operating Room | Wrong blood transfusion: No | Wrong blood transfusion: Yes | Total
No | 5,000 | 10 | 5,010
Yes | 5,000 | 20 | 5,020
Total | 10,000 | 30 | 10,030

Table 4: Distribute number of wrong blood transfusions across operating room status
Note that the row totals can now be calculated, as the effect of over-sampling wrong blood transfusions has been taken out. Table 5 shows the joint distribution of wrong blood transfusion and operating room staffing calculated from the original cohort data.
Understaffed Operating Room | Wrong blood transfusion: No | Wrong blood transfusion: Yes | Total
No | 0.499 | 0.001 | 0.500
Yes | 0.499 | 0.002 | 0.501
Total | 0.997 | 0.003 | 1.000

Table 5: Joint distribution of blood transfusion and operating room staffing
From these data we can now estimate the probability of various events, including the probability of wrong blood transfusion with and without an understaffed operating room:

p(Wrong blood transfusion | No understaffed operating room) = 0.001 / 0.500 = 0.002
p(Wrong blood transfusion | Understaffed operating room) = 0.002 / 0.501 ≈ 0.004
Note that there is a two-fold increase in the probability of wrong blood transfusion if the operating room is understaffed. Knowing that the operating room is understaffed increases the likelihood of wrong blood transfusion by odds of 1.33 to 1:

Likelihood ratio associated with understaffed operating room = p(Understaffed operating room | Wrong blood transfusion) / p(Understaffed operating room | No wrong blood transfusion) = (2/3) / (250/500) = 1.33
Note that we were able to do this analysis from very scant data on a rare event.
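The whole adjustment is mechanical enough to script. The sketch below (Python; all counts taken from the worked example above) reproduces the figures in Tables 2 through 5, including the two conditional probabilities.

```python
# A minimal sketch of the over-sampling adjustment worked through in
# Tables 1-5. Counts come from the example in the text: 3 wrong
# transfusions among the last 1,000 operations (2 understaffed), and
# 250 of 500 sampled normal operations understaffed.

c_s = 3                                       # sentinel events in databank
c_is = {"understaffed": 2, "staffed": 1}      # causes among sentinel events
normal_rate = {"understaffed": 250 / 500,     # staffing rates among the
               "staffed": 250 / 500}          # 500 sampled normal operations
odds_s = 3 / 997                              # odds of the sentinel event
n_normal = 10_000                             # arbitrary normal operations

expected = odds_s * n_normal                  # Table 3: about 30 expected
total = n_normal + expected
for cause, count in c_is.items():
    normal = normal_rate[cause] * n_normal    # Table 2: distribute normals
    revised = (count / c_s) * expected        # Table 4: R_is
    print(f"{cause}: joint p = {revised / total:.3f}, "
          f"p(wrong | {cause}) = {revised / (normal + revised):.3f}")
```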
A method that allows us to examine rare events directly is examination of time to the event. If we assume that the event has a Bernoulli distribution (i.e., the event either happens or does not happen, it has a constant probability of occurrence, and the probability of the event does not depend on prior occurrences), then the number of trials until the event occurs has a Geometric distribution. In a Geometric distribution, the probability of a rare event, p, can be estimated from the average time between occurrences, t, using the following formula:
p = 1 / (1+t)
Table 7 shows how this relationship can be used to calculate rare probabilities. The expert is asked to provide the dates of the last few times the event occurred in the last year or decade. The average time between recurrences is calculated, and the above formula is used to estimate the daily probability of the event.
ISO 17799 word | Frequency of event | Calculation | Rare probability
Negligible | Once in a decade | 1/(1+3,649) | 0.0003
Very low | 2-3 times every 5 years | 2.5/(5×365) | 0.0014
Low | ≤ once per year | 1/(364+1) | 0.0027
Medium | ≤ once per 6 months | 1/(6×30+1) | 0.0056
High | ≤ once per month | 1/(30+1) | 0.0323
Very high | ≥ once per week | 1/(6+1) | 0.1429
Extreme | ≥ once per day | 1/1 | 1

Table 7: Estimating probabilities from time between events
For example, suppose we want to know the probability of a terrorist attack in the city of Washington, DC. To calculate this probability, we need only record the dates of the last attacks in the city and average the time between them. This average time between recurrences of the event can then be used to estimate the probability of another attack.
For another example, suppose we do not know the frequency of medication errors in our hospital. Furthermore, suppose that last year there were two reports of medication errors, one at the start of the year and one in the middle of the year. This pattern suggests six months between errors. The average time between errors allows us to estimate the daily probability of medication error: with roughly 182 days between errors, p = 1/(1 + 182) ≈ 0.005.
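The sketch below (Python; the two error dates are the hypothetical start-of-year and mid-year reports from the example) turns a list of event dates into a daily probability using p = 1/(1 + t).

```python
# A minimal sketch of estimating a daily probability from the dates of
# past events, using the geometric-distribution formula p = 1/(1 + t).
# The dates, including the year, are hypothetical.

from datetime import date

def daily_probability(event_dates):
    """Average the gaps between consecutive events and apply p = 1/(1+t)."""
    event_dates = sorted(event_dates)
    gaps = [(b - a).days for a, b in zip(event_dates, event_dates[1:])]
    t = sum(gaps) / len(gaps)          # average days between events
    return 1.0 / (1.0 + t)

errors = [date(2004, 1, 1), date(2004, 7, 1)]   # start and middle of year
print(f"Daily probability of medication error: "
      f"{daily_probability(errors):.4f}")
```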
Since there are no practical ways of observing very low probability events, it is difficult to evaluate the accuracy of our estimates. Obviously, it is possible that a contrary event (for example, accidents occurring more frequently than expected) will point out inaccuracies in our estimation procedure. But in the absence of such contrary events, it is difficult to validate the findings of a probabilistic risk analysis. To improve confidence in the assessment, any or all of the following additional steps can be taken:
Check the assumptions of the model.
For example, in a fault tree a series of events is often linked serially. One could check that such a link is reasonable by examining conditional independence among the serially linked events. In a non-cyclical path, if A is shown to affect B and B is shown to affect C, then C should be independent of A given a specific value of B. Conditional independence can be checked by examining the partial correlation between A and C, by querying the expert, or by examining causal graphs drawn by experts.
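As an illustration of the partial-correlation check, the sketch below (Python with NumPy; the data are simulated so that A affects C only through B) shows that the raw correlation between A and C is sizeable while the partial correlation given B is near zero, as conditional independence requires.

```python
# A minimal sketch of a conditional-independence check via partial
# correlation. Data are simulated for illustration; in practice A, B,
# and C would be observed measurements of the serially linked events.

import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = 0.8 * a + rng.normal(scale=0.6, size=1000)   # B depends on A
c = 0.8 * b + rng.normal(scale=0.6, size=1000)   # C depends only on B

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residual of x given z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residual of y given z
    return np.corrcoef(rx, ry)[0, 1]

print(f"corr(A, C)            = {np.corrcoef(a, c)[0, 1]:.2f}")  # sizeable
print(f"partial corr(A, C|B)  = {partial_corr(a, c, b):.2f}")    # near zero
```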
Check the accuracy of the parameters of the model.
While it is not easy to measure the catastrophic event directly, it is possible to observe the probabilities of the various events used in the model. If these estimates are accurate, we have more confidence in the resulting predictions of the model.
American Physical Society, Study group on light water reactor safety: Report to the American Physical Society. Reviews of Modern Physics, Vol. 47, Supplement No. 1, 1975.
Apostolakis GE, Lemon DM. A Screening Methodology for the Identification and Ranking of Infrastructure Vulnerabilities Due to Terrorism. Risk Analysis 2005, 25:2,
361-376
Bell TE, Esch K. The space shuttle: A case study of subjective engineering. IEEE Spectrum, 1989,
42-46.
Bonnabry P, Cingria L, Sadeghipour F, Ing H, Fonzo-Christe C, Pfister RE. Use of a systematic risk analysis method to improve safety in the production of paediatric parenteral nutrition solutions. Qual Saf Health Care. 2005
Apr;14(2):93-8.
Catrambone R., Beike D., Niedenthal P. (1996) Is the self-concept a habitual referent in judgments of similarity? Psychological Science; 7 (3):
158-163.
Chang SE, Shinozuka M, Moore JE. Probabilistic Earthquake Scenarios: Extending Risk Analysis Methodologies to Spatially Distributed
Systems. Earthquake Spectra, 2000, 16: 3, pp. 557-572.
Cohen BL. Probabilistic risk analysis for a high-level radioactive waste repository. Risk Anal. 2003
Oct;23(5):909-15.
Colglazier EW, Weatherwax RK. Failure estimates for the space shuttle. Abstracts for Society for Risk Analysis Annual Meeting, Boston MA, p 80, Nov 9-12, 1986.
Cooke R, Jager E. A probabilistic model for the failure frequency of underground gas pipelines. Risk Anal. 1998
Aug;18(4):511-27.
Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, New York, 1991.
DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care Failure Mode and Effect Analysis: the VA National Center for Patient Safety's prospective risk analysis system. Jt Comm J Qual Improv. 2002 May;28(5):248-67,
209.
Environmental Protection Agency, Reactor Safety Study Oversight Hearings Before the Subcommittee on Energy and the Environment of the Committee on Interior and Insular Affairs, House of Representatives, 94th Congress, Second Session, Serial No. 84-61, Washington DC, June 11,
1976.
Ewing RC, Palenik CS, Konikow LF. Comment on "Probabilistic risk analysis
for a high-level radioactive waste repository" by B. L. Cohen in Risk Analysis, volume 23, 909-915. Risk Anal. 2004
Dec;24(6):1417-1419.
Fox EP. SSME Alternate Turbopump Development Program—Probabilistic Failure Methodology Interim Report. FR-20904-02,
1990.
Garrick BJ, Kaplan S. A decision theory perspective on the disposal of high-level radioactive waste. Risk Anal. 1999
Oct;19(5):903-13.
Haimes YY, Longstaff T. The Role of Risk Analysis in the Protection of Critical Infrastructures Against
Terrorism. Risk Analysis, 2002, 22:3, pp. 439-444.
Heidelberger P. Fast simulation of rare events in queuing and reliability
models. ACM Transactions on Modeling and Computer Simulation (TOMACS) archive 5:
1 43 - 85, 1995
Hoffman CR, Pugh R, Safie FM. Methods and Techniques for Risk Prediction of Space Shuttle Upgrades. AIAA,
1998
Kaczmarek Z. The impact of climate variability on flood risk in Poland. Risk Anal. 2003
Jun;23(3):559-66.
Kaplan S, Garrick B. On the quantitative definition of risk. Risk Analysis, 1981, 1: page
11-27.
Keefer DL. Practice abstract. Interfaces 2001, 31:5, pp. 62-64.
Kemeny J. Report of the President's Commission on the Accident at Three Mile Island, Washington DC,
1979.
Krouwer JS. Managing Risk In Hospitals Using Integrated Fault Trees And Failure Mode Effects And Criticality
Analysis. AACC Press, 2004.
Marx DA, Slonim AD. Assessing patient safety risk before the injury occurs: an introduction to sociotechnical probabilistic risk modelling in health care. Qual Saf Health Care. 2003 Dec;12(Suppl 2).
Mai S, Zimmermann C. Risk Analysis-Tool for Integrated Coastal Planning. Proc. of the 6th Int. Conf. on Coastal and Port Engineering,
2003.
Mobus C. (1979) The analysis of non-symmetric similarity judgments: Drift model, comparison hypothesis, Tversky's contrast model and his focus hypothesis. Archiv Fur Psychologie; 131 (2):
105-136.
Moore, DRJ, Sample BE, Suter GW, Parkhurst BR, Scott TR. A Probabilistic risk assessment of the effects of Methylmercury and PCBs on mink and Kingfishers along East Fork Poplar Creek, Oak Ridge, Tennessee,
USA. Environmental Toxicology and Chemistry, 18: 12, pp. 2941-2953, 1999.
Ortwin R. Three decades of risk research: accomplishments and new challenges. Journal of Risk Research, 1998, 1:1 pp 49 - 71.
Pate-Cornell ME, Fischbeck PS. Probabilistic risk analysis and risk-based priority scale for the tiles of the space shuttle. Reliability Engineering and System Safety, 1993, 40:3, pp. 221-238.
Pate-Cornell ME, Fischbeck PS. Risk management for tiles of the space shuttle. Interfaces, 1994, 24: 1, pp
64-86.
Planning Research Corporation, Independent Assessment of Shuttle Accident Scenario Probabilities for Galileo Mission and Comparison with NSTS Program Assessment, 1989.
Rogovin M, Frampton GT. Three Mile Island, a Report to the Commissioners and to the Public, Government Printing Office,
1980.
Rasmussen NC. The Application of Probabilistic Risk Assessment Techniques to Energy
Technologies. Annual Review of Energy, 6: 123-138, 1981.
Sadiq R, Husain T, Veitch B, Bose N. Distribution of arsenic and copper in sediment pore water: an ecological risk assessment case study for offshore drilling waste discharges. Risk Anal. 2003
Dec;23(6):1309-21.
Safie FM. A Statistical Approach for Risk Management of Space Shuttle Main Engine Components. Probabilistic Safety Assessment and Management,
1991
Safie FM. A Risk Assessment Methodology for the Space Shuttle External Tank Welds. Reliability and Maintainability Symposium,
1994.
Safie FM, Fox EP. A Probabilistic Design Analysis Approach for Launch Systems. AIAA/SAE/ASME 27th Joint Propulsion Conference,
1991.
Safie FM. Use of Probabilistic Design Methods for NASA Applications. ASME Symposium on Reliability Technology,
1992.
Siegel P.S., McCord D. M., Crawford A. R. (1982) An experimental note on Tversky's features of similarity. Bulletin of Psychonomic Society; 19 (3):
141-142.
Schwarz G, Tversky A. (1980) On the reciprocity of proximity relations. Journal of Mathematical Psychology; 22 (3):
157-175.
Science Applications International Corporation, Probabilistic Risk Assessment of the Space Shuttle Phase 1: Space Shuttle Catastrophic Failure Frequency Final Report,
1993.
Science Applications International Corporation, Probabilistic Risk Assessment of the Space Shuttle,
1995.
Slob W, Pieters MN. A probabilistic approach for deriving acceptable human intake limits and human health risks from toxicological studies: general framework. Risk Anal. 1998
Dec;18(6):787-98.
Srinivasan R. Importance Sampling. Springer, 2002.
Taylor C, Krings A, Alves-Foss J. Risk Analysis and Probabilistic Survivability Assessment (RAPSA): An Assessment Approach for Power Substation Hardening. Proc. ACM Workshop on Scientific Aspects of Cyber Terrorism, 2002.
Tversky A. (1977) Features of similarity. Psychological Review; 84 (4):
327-352.
Union of Concerned Scientists. The risk of nuclear power reactors: a review of the NRC reactor study, WASH-1400,
1977.
U.S. NRC, Reactor Safety Study. U.S. Nuclear Regulatory Commission, WASH-1400, NUREG-75/014, 1975.
U.S. NRC, PRA Procedures Guide, U.S. Nuclear Regulatory Commission, NUREG/CR-2300,
1983.
Voortman HG, van Gelder P, Vrijling JK Risk-based design of large-scale flood defense
systems. 28th International Conference on Coastal Engineering, 2002.