Risk Analysis: An International Journal

Table of Contents for Risk Analysis. List of articles from both the latest and EarlyView issues.

On the Limits of the Precautionary Principle

21 February 2019 - 5:23pm
Abstract

The precautionary principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this article I prove some impossibility results for two plausible formulations of the PP as a decision‐rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any tradeoffs between catastrophic risks and more ordinary goods. How one interprets these results will, however, depend on one's views and commitments. For instance, those who are convinced that the conditions in the impossibility results are requirements of rationality may see these results as undermining the rationality of the PP. But others may simply take these results to identify a set of purported rationality conditions that defenders of the PP should not accept, or to illustrate types of situations in which the principle should not be applied.
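The abstract contrasts formulations of the PP as a decision rule with the acceptance of tradeoffs between catastrophic risks and ordinary goods. The following is a minimal illustrative sketch only, not the article's formal results: it compares a lexical "avoid catastrophe first" rule with an expected-utility rule on hypothetical options and numbers.

```python
# Illustrative sketch only (not the article's formal results): a lexical
# "avoid catastrophe first" decision rule compared with an expected-utility rule.
# Option names and all numbers are hypothetical.

options = {
    # option: (probability of catastrophe, expected ordinary benefit)
    "status_quo": (0.000, 10.0),
    "new_policy": (0.001, 500.0),   # tiny catastrophic risk, large ordinary benefit
}

CATASTROPHE_LOSS = 1e5  # stylized utility loss if the catastrophe occurs


def expected_utility(p_cat, benefit):
    """Expected-utility score: ordinary benefits traded off against catastrophic risk."""
    return benefit - p_cat * CATASTROPHE_LOSS


def precautionary_choice(opts):
    """A lexical PP-style rule: minimize catastrophe probability first and use
    ordinary benefits only to break exact ties; no tradeoff is ever accepted."""
    return min(opts, key=lambda name: (opts[name][0], -opts[name][1]))


def eu_choice(opts):
    return max(opts, key=lambda name: expected_utility(*opts[name]))


print("PP-style choice:        ", precautionary_choice(options))   # -> status_quo
print("Expected-utility choice:", eu_choice(options))              # -> new_policy here
```

However large the ordinary benefit, the lexical rule above never accepts any increase in catastrophe probability, which is the kind of no-tradeoff behavior the impossibility results press on.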

Effect of Providing the Uncertainty Information About a Tornado Occurrence on the Weather Recipients’ Cognition and Protective Action: Probabilistic Hazard Information Versus Deterministic Warnings

21 February 2019 - 5:23pm
Abstract

Currently, a binary alarm system is used in the United States to issue deterministic warning polygons for tornado events. To enhance the effectiveness of the weather information, a likelihood alarm system, which uses a tool called probabilistic hazard information (PHI), is being developed at the National Severe Storms Laboratory to issue probabilistic information about the threat. This study investigates the effects of providing uncertainty information about a tornado occurrence through the PHI's graphical swath on laypeople's concern, fear, and protective action, as compared with providing warning information with the deterministic polygon. Displays of color‐coded swaths and deterministic polygons were shown to subjects. Some displays had a blue background denoting the probability of any tornado formation in the general area. Participants were asked to report their levels of concern, fear, and protective action at randomly chosen locations within each of seven designated levels on each display. Analysis of a three‐stage nested design showed that providing the uncertainty information via the PHI appropriately increased recipients' levels of concern, fear, and protective action in highly dangerous scenarios (those with a more than 60% chance of being affected by the threat), as compared with deterministic polygons. The blue background and the type of color coding did not have a significant effect on people's cognition of, or reaction to, the threat. This study shows that using a likelihood alarm system leads to more conscious decision making by weather information recipients and enhances system safety.
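As a rough illustration of the kind of nested analysis described (threat level nested in display type, location nested in level), the sketch below fits a nested fixed-effects model on synthetic data. Column names, effect sizes, and sample sizes are invented; this is not the study's data or its exact analysis.

```python
# Hypothetical sketch of a nested analysis of the kind described in the abstract.
# Synthetic data; variable names and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for display in ["PHI_swath", "deterministic_polygon"]:
    for level in range(1, 8):                  # seven designated threat levels
        for location in range(5):              # locations sampled within a level
            for participant in range(3):       # several ratings per location
                concern = (2.0 + 0.5 * level
                           + (0.4 if display == "PHI_swath" else 0.0)
                           + rng.normal(0, 0.5))
                rows.append({"display": display, "level": level,
                             "location": location, "concern": concern})
df = pd.DataFrame(rows)

# Nested fixed-effects approximation: level nested in display, location nested in level.
model = smf.ols("concern ~ C(display) / C(level) / C(location)", data=df).fit()
print(anova_lm(model, typ=1))
```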

Comment: The Precautionary Principle and Judgment Aggregation

21 February 2019 - 5:22pm
Risk Analysis, EarlyView.

Reply

21 February 2019 - 5:21pm
Risk Analysis, EarlyView.

An Optimization‐Based Framework for the Identification of Vulnerabilities in Electric Power Grids Exposed to Natural Hazards

19 February 2019 - 6:47pm
Abstract

This article proposes a novel mathematical optimization framework for identifying the vulnerabilities of electric power infrastructure systems (a paramount example of critical infrastructure) to natural hazards. In this framework, the potential impacts of a specific natural hazard on an infrastructure are first evaluated in terms of the failure and recovery probabilities of system components. These are then fed into a bi‐level attacker–defender interdiction model to determine the critical components whose failures lead to the largest loss of system functionality. The proposed framework bridges the gap between the difficulty of accurately predicting hazard information in classical probability‐based analyses and the overconservatism of pure attacker–defender interdiction models. Mathematically, the proposed model is a bi‐level max‐min mixed‐integer linear program (MILP), which is challenging to solve. For its solution, the problem is cast into an equivalent single‐level MILP that can be handled by efficient global solvers. The approach is applied to a case study concerning vulnerability identification for the georeferenced RTS24 test system under simulated wind storms. The numerical results demonstrate the effectiveness of the proposed framework for identifying critical locations under multiple hazard events and, thus, for providing a useful tool to help decisionmakers make more‐informed prehazard preparation decisions.
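To convey the max-min (attacker-defender) structure in miniature, the sketch below brute-forces the worst-case line failure on a tiny invented network, with an inner linear program that re-dispatches to minimize load shed. It is not the paper's single-level MILP reformulation, and all network data are hypothetical.

```python
# Toy illustration of the max-min attacker-defender structure: the attacker fails
# k lines to maximize load shed, the operator re-dispatches to minimize it.
# Brute-force version, NOT the paper's single-level MILP reformulation.
from itertools import combinations
import pulp

# line: (from_bus, to_bus, capacity in MW) -- all data invented
lines = {"L1": ("G", "A", 60), "L2": ("G", "B", 60), "L3": ("A", "B", 40)}
demand = {"A": 50, "B": 50}      # MW at the two load buses
GEN_CAPACITY = 120               # MW available at generator bus "G"
k = 1                            # number of simultaneous line failures considered


def min_load_shed(failed):
    """Operator's inner problem: minimize unserved demand after the failures
    (simple transportation model, no DC power-flow angles)."""
    prob = pulp.LpProblem("redispatch", pulp.LpMinimize)
    flow = {l: pulp.LpVariable(f"flow_{l}", -cap, cap) for l, (_, _, cap) in lines.items()}
    shed = {b: pulp.LpVariable(f"shed_{b}", 0, demand[b]) for b in demand}
    prob += pulp.lpSum(shed.values())                    # objective: total load shed
    for l in failed:                                     # failed lines carry no flow
        prob += flow[l] == 0
    for b in demand:                                     # power balance at each load bus
        net_inflow = (pulp.lpSum(flow[l] for l, (u, v, _) in lines.items() if v == b)
                      - pulp.lpSum(flow[l] for l, (u, v, _) in lines.items() if u == b))
        prob += net_inflow + shed[b] == demand[b]
    # generator bus cannot export more than its capacity
    prob += (pulp.lpSum(flow[l] for l, (u, _, _) in lines.items() if u == "G")
             - pulp.lpSum(flow[l] for l, (_, v, _) in lines.items() if v == "G")) <= GEN_CAPACITY
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective)


# Outer (attacker) problem, solved here by enumeration over all k-line failure sets.
worst = max(combinations(lines, k), key=min_load_shed)
print("most critical failure set:", worst, "-> load shed:", min_load_shed(worst), "MW")
```

Enumeration is tractable only for toy cases; the appeal of the single-level MILP reformulation described in the abstract is precisely that it avoids enumerating failure sets on realistic grids.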

A CGE Framework for Modeling the Economics of Flooding and Recovery in a Major Urban Area

14 February 2019 - 5:30pm
Abstract

Coastal cities around the world have experienced large costs from major flooding events in recent years. Climate change is predicted to bring an increased likelihood of flooding due to sea level rise and more frequent severe storms. To plan future development and adaptation, cities must know the magnitude of losses associated with these events and how they can be reduced. Losses are often calculated from insurance claims or from surveys of flood victims; however, this largely neglects the loss due to the disruption of economic activity. We use a forward‐looking dynamic computable general equilibrium (CGE) model to study how a local economy responds to a flood, focusing on the subsequent recovery/reconstruction. Initial damage is modeled as a shock to the capital stock, and recovery requires rebuilding that stock. We apply the model to Vancouver, British Columbia, considering a flood scenario that causes total capital damage of $14.6 billion spread across five municipalities. GDP loss relative to a no‐flood scenario is relatively long‐lasting: 2.0% ($2.2 billion) in the first year after the flood, 1.7% ($1.9 billion) in the second year, and 1.2% ($1.4 billion) in the fifth year.
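The mechanism, a capital-stock shock followed by gradual reconstruction, can be illustrated with a stylized single-sector growth accounting sketch. This is not the authors' multi-municipality CGE model; all parameter values are invented.

```python
# Stylized single-sector sketch (NOT the paper's CGE model) of how a capital-stock
# shock plus gradual reconstruction yields a persistent GDP loss path.
# All parameter values are invented for illustration.

ALPHA = 0.3          # capital share in a Cobb-Douglas production function
K0 = 1000.0          # pre-flood capital stock (arbitrary units)
SHOCK = 0.05         # share of capital destroyed by the flood
REBUILD_RATE = 0.3   # share of the remaining capital gap closed each year


def gdp(k, tfp=1.0, labor=1.0):
    """Cobb-Douglas output given the current capital stock."""
    return tfp * (k ** ALPHA) * (labor ** (1 - ALPHA))


baseline = gdp(K0)
k = K0 * (1 - SHOCK)                     # capital just after the flood
for year in range(1, 6):
    loss_pct = 100 * (1 - gdp(k) / baseline)
    print(f"year {year}: GDP loss relative to no-flood baseline = {loss_pct:.2f}%")
    k += REBUILD_RATE * (K0 - k)         # reconstruction closes part of the gap
```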

Design and Assessment Methodology for System Resilience Metrics

14 February 2019 - 5:29pm
Abstract

By providing objective measures, resilience metrics (RMs) help planners, designers, and decisionmakers grasp the resilience status of a system. Conceptual frameworks establish a sound basis for RM development. However, a significant challenge that has yet to be addressed is assessing the validity of RMs: whether they reflect all abilities of a resilient system, and whether they overrate or underrate these abilities. This article addresses this gap by introducing a methodology that can demonstrate the validity of an RM against its conceptual framework. The methodology combines experimental design methods and statistical analysis techniques that provide insight into the RM's quality. We also propose a new metric that can be used for general systems. Analysis of the proposed metric using the presented methodology shows that it is a better indicator of a system's abilities than existing metrics.
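For readers unfamiliar with RMs, the sketch below computes one common formulation from the literature, the normalized area under a system performance curve over an observation window. It is not the metric proposed in this article, and the performance trajectory is synthetic.

```python
# Simple illustration of an area-under-the-performance-curve resilience metric,
# a common formulation in the literature (NOT the metric proposed in the article).
# The performance trajectory below is synthetic.
import numpy as np

t = np.linspace(0, 100, 101)                 # hours, uniformly sampled
performance = np.ones_like(t)                # normalized system performance (1 = target)
performance[(t >= 20) & (t < 30)] = 0.4      # disruption: degraded performance
ramp = (t >= 30) & (t < 60)
performance[ramp] = 0.4 + 0.02 * (t[ramp] - 30)   # gradual recovery back toward target

# With uniform sampling, the normalized area under the curve is just the mean.
resilience = performance.mean()
print(f"resilience over the observation window: {resilience:.3f}")
```

Metrics of this kind compress absorption, degradation, and recovery into one number, which is exactly why the article's question of whether an RM reflects all abilities of a resilient system matters.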

Validation of a Stochastic Discrete Event Model Predicting Virus Concentration on Nurse Hands

13 February 2019 - 8:00pm
Abstract

Understanding healthcare viral disease transmission and the effect of infection control interventions will inform current and future infection control protocols. In this study, a model was developed to predict virus concentration on nurses’ hands using data from a bacteriophage tracer study conducted in Tucson, Arizona, in an urgent care facility. Surfaces were swabbed 2 hours, 3.5 hours, and 6 hours postseeding to measure virus spread over time. To estimate the full viral load that would have been present on hands without sampling, virus concentrations were summed across time points for 3.5‐ and 6‐hour measurements. A stochastic discrete event model was developed to predict virus concentrations on nurses’ hands, given a distribution of virus concentrations on surfaces and expected frequencies of hand‐to‐surface and orifice contacts and handwashing. Box plots and statistical hypothesis testing were used to compare the model‐predicted and experimentally measured virus concentrations on nurses’ hands. The model was validated with the experimental bacteriophage tracer data because the distribution for model‐predicted virus concentrations on hands captured all observed value ranges, and interquartile ranges for model and experimental values overlapped for all comparison time points. Wilcoxon rank sum tests showed no significant differences in distributions of model‐predicted and experimentally measured virus concentrations on hands. However, limitations in the tracer study indicate that more data are needed to instill more confidence in this validation. Next model development steps include addressing viral concentrations that would be found naturally in healthcare environments and measuring the risk reductions predicted for various infection control interventions.
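The following is a minimal sketch of a stochastic discrete-event style hand-contamination simulation of the general kind described: hands accumulate virus from surface touches and lose it to orifice contacts and handwashing. Event rates, transfer efficiencies, and surface concentrations are invented, not the study's calibrated values.

```python
# Minimal sketch of a stochastic discrete-event style hand-contamination model.
# All parameters (event rates, transfer efficiencies, surface loads) are invented;
# this is not the study's calibrated model.
import numpy as np

rng = np.random.default_rng(42)

SHIFT_HOURS = 6
EVENTS_PER_HOUR = 20                 # hand-to-surface, orifice, and wash events
P_EVENT = {"surface": 0.7, "orifice": 0.2, "wash": 0.1}
TRANSFER_EFFICIENCY = 0.15           # fraction of surface virus picked up per touch
ORIFICE_LOSS = 0.30                  # fraction of hand virus lost per orifice contact
WASH_LOG_REDUCTION = 2.0             # handwashing removes ~99% of virus on hands


def simulate_shift():
    hand = 0.0                                            # virus on hands (PFU)
    for _ in range(SHIFT_HOURS * EVENTS_PER_HOUR):
        event = rng.choice(list(P_EVENT), p=list(P_EVENT.values()))
        if event == "surface":
            surface_load = rng.lognormal(mean=2.0, sigma=1.0)   # PFU on touched surface
            hand += TRANSFER_EFFICIENCY * surface_load
        elif event == "orifice":
            hand *= (1 - ORIFICE_LOSS)
        else:  # wash
            hand *= 10 ** (-WASH_LOG_REDUCTION)
    return hand


results = np.array([simulate_shift() for _ in range(1000)])
print("median:", np.median(results), " IQR:", np.percentile(results, [25, 75]))
```

Comparing the simulated distribution (medians, interquartile ranges) against measured hand concentrations is the same style of check the abstract describes for validating the model against the tracer data.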

When Evolution Works Against the Future: Disgust's Contributions to the Acceptance of New Food Technologies

13 February 2019 - 6:35pm
Abstract

New food technologies have high potential to transform the current resource‐consuming food system into a more efficient and sustainable one, but public acceptance of new food technologies is rather low. Such avoidance might be maintained by a deeply preserved risk avoidance system called disgust. In an online survey, participants (N = 313) received information about a variety of new food technology applications (i.e., genetically modified meat/fish, an edible nanotechnology coating film, a nanotechnology food box, artificial meat/milk, and a synthetic food additive). Every new food technology application was rated according to the respondent's willingness to eat (WTE) it (i.e., acceptance) and according to risk, benefit, and disgust perceptions. Furthermore, food disgust sensitivity was measured using the Food Disgust Scale. Overall, the WTE for both gene‐technology applications and for meat coated with an edible nanotechnology film was low, and disgust responses toward all three applications were high. In full mediation models, food disgust sensitivity predicted the disgust response toward each new food technology application, which in turn influenced the WTE for it. Effects of disgust responses on the WTE were highest for the synthetic food additive and lowest for the edible nanotechnology coating film, compared with the other technologies. Results indicate that direct disgust responses influence acceptance as well as risk and benefit perceptions of new food technologies. Beyond the discussion of this study, implications for future research and strategies to increase acceptance of new food technologies are discussed.
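To make the mediation logic concrete (disgust sensitivity influencing willingness to eat through the disgust response), the sketch below runs a simple regression-based mediation analysis on synthetic data. It is not the study's data or its full mediation model, and the effect sizes are invented.

```python
# Sketch of a simple regression-based mediation analysis on synthetic data
# (disgust sensitivity -> disgust response -> willingness to eat).
# NOT the study's data or its full model; effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 313
sensitivity = rng.normal(0, 1, n)                          # food disgust sensitivity (z-scored)
disgust = 0.6 * sensitivity + rng.normal(0, 1, n)          # disgust response to the technology
wte = -0.7 * disgust + 0.0 * sensitivity + rng.normal(0, 1, n)   # willingness to eat
df = pd.DataFrame({"sensitivity": sensitivity, "disgust": disgust, "wte": wte})

total = smf.ols("wte ~ sensitivity", df).fit()             # total effect (path c)
a_path = smf.ols("disgust ~ sensitivity", df).fit()        # path a
b_c = smf.ols("wte ~ disgust + sensitivity", df).fit()     # paths b and c'

print("total effect c :", round(total.params["sensitivity"], 3))
print("indirect a*b   :", round(a_path.params["sensitivity"] * b_c.params["disgust"], 3))
print("direct c'      :", round(b_c.params["sensitivity"], 3))   # near zero => full mediation
```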

Risk Perceptions Toward Drinking Water Quality Among Private Well Owners in Ireland: The Illusion of Control

13 February 2019 - 6:34pm
Abstract

In rural areas where no public or group water schemes exist, groundwater is often the only source of drinking water and is extracted by drilling private wells. Typically, private well owners are responsible for the quality and testing of their own drinking water. Previous studies indicate that well owners tend to underestimate the risks of their well water being contaminated, yet little is known about why this is the case. We conducted a qualitative study, interviewing private well owners in Ireland to investigate their beliefs about their water quality, which in turn inform their risk perceptions and their willingness to test their water regularly. Based on our findings, we designed a theoretical model arguing that perceived control is central to the perceived contamination risks of well water. More specifically, we argue that well owners have the illusion of being in control of their water quality; that is, people often perceive themselves to be more in control of a situation than they actually are. As a result, they tend to underestimate contamination risks, which in turn negatively affects water testing behaviors. Theoretical and practical implications are highlighted.

Null Hypothesis Testing ≠ Scientific Inference: A Critique of the Shaky Premise at the Heart of the Science and Values Debate, and a Defense of Value‐Neutral Risk Assessment

11 February 2019 - 7:04pm
Abstract

Many philosophers and statisticians argue that risk assessors are morally obligated to evaluate the probabilities and consequences of methodological error, and to base their decisions of whether to adopt a given parameter value, model, or hypothesis on those considerations. This argument is couched within the rubric of null hypothesis testing, which I suggest is a poor descriptive and normative model for risk assessment. Risk regulation is not primarily concerned with evaluating the probability of data conditional upon the null hypothesis, but rather with measuring risks, estimating the consequences of available courses of action and inaction, formally characterizing uncertainty, and deciding what to do based upon explicit values and decision criteria. In turn, I defend an ideal of value‐neutrality, whereby the core inferential tasks of risk assessment, such as weighing evidence, estimating parameters, and model selection, should be guided by the aim of correspondence to reality. This is not to say that value judgments be damned, but rather that they should be accounted for within a structured approach to decision analysis instead of being embedded informally within risk assessment.
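The contrast the abstract draws, between evaluating the probability of data under a null hypothesis and deciding what to do from explicit values and decision criteria, can be shown with a tiny worked example. The numbers, loss values, and scenario below are invented for illustration and are not drawn from the article.

```python
# Tiny worked contrast (all numbers invented) between a p-value threshold rule and
# a decision-analytic rule with explicit losses, in the spirit of the abstract's
# distinction; this is not an example taken from the article itself.
from scipy import stats

# Evidence: a sample mean suggesting a contaminant may exceed a regulatory limit.
sample_mean, limit, se = 10.6, 10.0, 0.4

# NHST framing: probability of data at least this extreme given the null (level == limit).
z = (sample_mean - limit) / se
p_value = 1 - stats.norm.cdf(z)
verdict = "fail to reject" if p_value > 0.05 else "reject"
print(f"p-value = {p_value:.3f} -> {verdict} the null at alpha = 0.05")

# Decision-analytic framing: pick the action with lower expected loss, given an
# (assumed) posterior for the true level and explicit loss values.
p_exceeds = 1 - stats.norm.cdf((limit - sample_mean) / se)   # crude P(level > limit)
LOSS_REGULATE_UNNEEDED = 1.0      # cost of regulating when the limit is not exceeded
LOSS_FAIL_TO_REGULATE = 20.0      # cost of not regulating when it is exceeded
el_regulate = (1 - p_exceeds) * LOSS_REGULATE_UNNEEDED
el_do_nothing = p_exceeds * LOSS_FAIL_TO_REGULATE
print(f"expected loss: regulate = {el_regulate:.2f}, do nothing = {el_do_nothing:.2f}")
print("decision-analytic choice:", "regulate" if el_regulate < el_do_nothing else "do nothing")
```

With these invented numbers the p-value rule fails to reject while the expected-loss rule favors regulating, which is the kind of divergence that makes the choice of framework, and the placement of value judgments, consequential.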
