Risk Analysis: An International Journal
Evacuation planning and management involves estimating the travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set on tsunami evacuation behavior to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition, combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model, with its risk recognition class, can quantitatively evaluate the influence of disaster mitigation measures, risk education, and risk information. The results of the risk recognition model show that risk information has a greater impact on whether people recognize that they are at high risk. The results of the evacuation choice model show that people who are unaware of their risk take longer to evacuate.
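The ordered-logit component of such a model can be sketched as follows; the coefficients, covariates, and cutpoints below are hypothetical illustrations, not the study's fitted values:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities for an ordered outcome (e.g., low/medium/high
    risk recognition) given linear predictor xb and ascending cutpoints."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cums = [logistic(c - xb) for c in cutpoints] + [1.0]
    return [cums[0]] + [cums[k] - cums[k - 1] for k in range(1, len(cums))]

# hypothetical coefficients and covariates (not the study's estimates)
beta = {"risk_education": 0.8, "risk_information": 1.2, "age": 0.01}
person = {"risk_education": 1, "risk_information": 1, "age": 40}
xb = sum(beta[k] * person[k] for k in beta)
p_low, p_mid, p_high = ordered_logit_probs(xb, cutpoints=[1.0, 2.5])
```

Raising the `risk_information` covariate raises `xb` and shifts probability mass toward the high-recognition category, which is the qualitative pattern the abstract reports.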
Flooding in urban areas during heavy rainfall, often characterized by short duration and high-intensity events, is known as “surface water flooding.” Analyzing surface water flood risk is complex as it requires understanding of biophysical and human factors, such as the localized scale and nature of heavy precipitation events, characteristics of the urban area affected (including detailed topography and drainage networks), and the spatial distribution of economic and social vulnerability. Climate change is recognized as having the potential to enhance the intensity and frequency of heavy rainfall events. This study develops a methodology to link high spatial resolution probabilistic projections of hourly precipitation with detailed surface water flood depth maps and characterization of urban vulnerability to estimate surface water flood risk. It incorporates probabilistic information on the range of uncertainties in future precipitation in a changing climate. The method is applied to a case study of Greater London and highlights that both the frequency and spatial extent of surface water flood events are set to increase under future climate change. The expected annual damage from surface water flooding is estimated to be £171 million, £343 million, and £390 million/year under the baseline, 2030 high, and 2050 high climate change scenarios, respectively.
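Expected annual damage of this kind is conventionally obtained by integrating damage against annual exceedance probability; a minimal sketch with a hypothetical damage-frequency curve (the return periods and £-million losses are invented, not the London figures):

```python
def expected_annual_damage(return_periods, damages):
    """Trapezoidal integration of damage over annual exceedance
    probability (AEP = 1 / return period)."""
    pts = sorted(zip((1.0 / t for t in return_periods), damages), reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
        ead += 0.5 * (d1 + d2) * (p1 - p2)
    return ead

# hypothetical damage-frequency curve (losses in £ million per event)
baseline = expected_annual_damage([2, 10, 30, 100, 1000],
                                  [20, 150, 400, 900, 2500])
```

Rerunning the same integration with climate-shifted exceedance probabilities is what drives the scenario-to-scenario increase in expected annual damage.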
Recovery of interdependent infrastructure networks in the presence of catastrophic failure is crucial to the economy and welfare of society. Recently, centralized methods have been developed to address optimal resource allocation in postdisaster recovery scenarios of interdependent infrastructure systems that minimize total cost. In real-world systems, however, multiple independent, possibly noncooperative, utility network controllers are responsible for making recovery decisions, resulting in suboptimal decentralized processes. With the goal of minimizing recovery cost, a best-case decentralized model allows controllers to develop a full recovery plan and negotiate until all parties are satisfied (an equilibrium is reached). Such a model is computationally intensive for planning and negotiating, and time is a crucial resource in postdisaster recovery scenarios. Furthermore, in this work, we prove this best-case decentralized negotiation process could continue indefinitely under certain conditions. Accounting for network controllers' urgency in repairing their system, we propose an ad hoc sequential game-theoretic model of interdependent infrastructure network recovery represented as a discrete time noncooperative game between network controllers that is guaranteed to converge to an equilibrium. We further reduce the computation time needed to find a solution by applying a best-response heuristic and prove bounds on ε-Nash equilibrium, where ε depends on problem inputs. We compare best-case and ad hoc models on an empirical interdependent infrastructure network in the presence of simulated earthquakes to demonstrate the extent of the tradeoff between optimality and computational efficiency. Our method provides a foundation for modeling sociotechnical systems in a way that mirrors restoration processes in practice.
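The best-response idea can be illustrated on a toy two-controller game; the cost tables below are hypothetical, and a real application would iterate over full repair schedules rather than a binary choice:

```python
# hypothetical per-round repair costs for two utility controllers, each
# choosing which of two components to restore first (0 or 1); costs are
# interdependent: cost_a[(a_choice, b_choice)]
cost_a = {(0, 0): 5, (0, 1): 8, (1, 0): 6, (1, 1): 4}
cost_b = {(0, 0): 7, (0, 1): 3, (1, 0): 6, (1, 1): 5}

def best_response_dynamics(a=0, b=0, max_rounds=50):
    """Controllers alternately pick the cheapest reply to the other's
    current choice; a fixed point is a pure Nash equilibrium."""
    for _ in range(max_rounds):
        a_new = min((0, 1), key=lambda s: cost_a[(s, b)])
        b_new = min((0, 1), key=lambda s: cost_b[(a_new, s)])
        if (a_new, b_new) == (a, b):
            return a, b
        a, b = a_new, b_new
    return a, b  # round cap reached: may only be an approximate equilibrium

eq = best_response_dynamics()
```

The round cap plays the role of the ε-guarantee: when the dynamics are stopped early, no player can gain more than a bounded amount by deviating.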
This article presents a public value measure that can be used to aid executives in the public sector to better assess policy decisions and maximize value to the American people. Using Transportation Security Administration (TSA) programs as an example, we first identify the basic components of public value. We then propose a public value account to quantify the outcomes of various risk scenarios, and we determine the certain equivalent of several important TSA programs. We illustrate how this proposed measure can quantify the effects of two main challenges that government organizations face when conducting enterprise risk management: (1) short-term versus long-term incentives and (2) avoiding potential negative consequences even if they occur with low probability. Finally, we illustrate how this measure enables the use of various tools from decision analysis to be applied in government settings, such as stochastic dominance arguments and certain equivalent calculations. Regarding the TSA case study, our analysis demonstrates the value of continued expansion of the TSA trusted traveler initiative and increasing the background vetting for passengers who are afforded expedited security screening.
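A certain equivalent of the kind used in such analyses is commonly computed with an exponential utility; a minimal sketch with an invented program lottery and risk tolerance (not TSA figures):

```python
import math

def certain_equivalent(outcomes, probs, risk_tolerance):
    """Certain equivalent under exponential utility u(x) = -exp(-x/rho):
    CE = -rho * ln( sum_i p_i * exp(-x_i / rho) )."""
    eu = sum(p * math.exp(-x / risk_tolerance) for x, p in zip(outcomes, probs))
    return -risk_tolerance * math.log(eu)

# hypothetical program lottery: public value 100 with p = 0.9, -50 with p = 0.1
ce = certain_equivalent([100, -50], [0.9, 0.1], risk_tolerance=200)
```

The certain equivalent sits below the expected value of 85, quantifying exactly the aversion to low-probability negative consequences that the article discusses; as the risk tolerance grows, the certain equivalent approaches the expected value.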
Influence of Distribution of Animals between Dose Groups on Estimated Benchmark Dose and Animal Welfare for Continuous Effects
The benchmark dose (BMD) approach is increasingly used as a preferred approach for dose–effect analysis, but standard experimental designs are generally not optimized for BMD analysis. The aim of this study was to evaluate how the use of unequally sized dose groups affects the quality of BMD estimates in toxicity testing, with special consideration of the total burden of animal distress. We generated continuous dose–effect data by Monte Carlo simulation using two dose–effect curves based on endpoints with different shape parameters. Eighty-five designs, each with four dose groups of unequal size, were examined in scenarios ranging from low- to high-dose placements and with a total number of animals set to 40, 80, or 200. For each simulation, a BMD value was estimated and compared with the “true” BMD. In general, redistribution of animals from higher to lower dose groups resulted in an improved precision of the calculated BMD value as long as dose placements were high enough to detect a significant trend in the dose–effect data with sufficient power. The improved BMD precision and the associated reduction of the number of animals exposed to the highest dose, where chemically induced distress is most likely to occur, are favorable for the reduction and refinement principles. The results thereby strengthen BMD-aligned design of experiments as a means for more accurate hazard characterization along with animal welfare improvements.
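The simulation logic can be sketched as follows, assuming an exponential dose-effect curve and a simple log-linear least-squares fit as a stand-in for the study's full BMD software; all parameter values are invented:

```python
import math, random, statistics

random.seed(7)
A, B = 100.0, 0.05             # assumed exponential curve: mean = A * exp(B*d)
TRUE_BMD = math.log(1.10) / B  # dose giving a 10% increase over background
DOSES = [0, 10, 25, 50]

def simulate_bmd(group_sizes, sd=8.0):
    """One simulated study: draw responses per group, fit a log-linear
    model by least squares, back out the BMD from the fitted slope."""
    xs, ly = [], []
    for d, n in zip(DOSES, group_sizes):
        for _ in range(n):
            xs.append(d)
            ly.append(math.log(max(1e-6, random.gauss(A * math.exp(B * d), sd))))
    mx, my = statistics.fmean(xs), statistics.fmean(ly)
    slope = sum((x - mx) * (v - my) for x, v in zip(xs, ly)) \
        / sum((x - mx) ** 2 for x in xs)
    return math.log(1.10) / slope

def bmd_spread(group_sizes, reps=200):
    """Monte Carlo spread (SD) of the BMD estimate for one design."""
    return statistics.stdev([simulate_bmd(group_sizes) for _ in range(reps)])

equal = bmd_spread([10, 10, 10, 10])   # 40 animals, equal groups
front = bmd_spread([16, 12, 8, 4])     # same total, shifted to low doses
```

Comparing the spread of the BMD estimate across designs with the same total animal count is exactly the precision comparison the study performs across its 85 designs.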
Perceptions of Risk and Vulnerability Following Exposure to a Major Natural Disaster: The Calgary Flood of 2013
Many studies have examined the general public's flood risk perceptions in the aftermath of local and regional flooding. However, relatively few studies have focused on large-scale events that affect tens of thousands of people within an urban center. Similarly, in spite of previous research on flood risks, unresolved questions persist regarding the variables that might influence perceptions of risk and vulnerability, along with management preferences. In light of the opportunities presented by these knowledge gaps, the research reported here examined public perceptions of flood risk and vulnerability, and management preferences, within the city of Calgary in the aftermath of extensive flooding in 2013. Our findings, which come from an online survey of residents, reveal that direct experience with flooding is not a differentiating factor for risk perceptions when comparing evacuees with nonevacuees who might all experience future risks. However, we do find that judgments about vulnerability—as a function of how people perceive physical distance—do differ according to one's evacuation experience. Our results also indicate that concern about climate change is an important predictor of flood risk perceptions, as is trust in government risk managers. In terms of mitigation preferences, our results reveal differences in support for large infrastructure projects based on whether respondents feel they might actually benefit from them.
This article focuses on conceptual and methodological developments allowing the integration of physical and social dynamics leading to model forecasts of circumstance-specific human losses during a flash flood. To reach this objective, a random forest classifier is applied to assess the likelihood of fatality occurrence for a given circumstance as a function of representative indicators. Here, vehicle-related circumstance is chosen as the literature indicates that most fatalities from flash flooding fall in this category. A database of flash flood events, with and without human losses from 2001 to 2011 in the United States, is supplemented with other variables describing the storm event, the spatial distribution of the sensitive characteristics of the exposed population, and built environment at the county level. The catastrophic flash floods of May 2015 in the states of Texas and Oklahoma are used as a case study to map the dynamics of the estimated probabilistic human risk on a daily scale. The results indicate the importance of time- and space-dependent human vulnerability and risk assessment for short-fuse flood events. The need for more systematic human impact data collection is also highlighted to advance impact-based predictive models for flash flood casualties using machine-learning approaches in the future.
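As a rough illustration of the classification idea, here is a from-scratch stand-in using bootstrapped decision stumps with majority voting (a drastically simplified cousin of a random forest) on invented event data:

```python
import random

random.seed(3)

# tiny hypothetical event table: [rain intensity (mm/h), % population in
# vehicles], label = 1 if the event produced a vehicle-related fatality
X = [[10, 5], [15, 10], [60, 40], [80, 55], [20, 8], [70, 50], [12, 6], [90, 60]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

def fit_stump(Xs, ys):
    """Single-feature threshold rule minimizing misclassifications."""
    best_f, best_t, best_err = 0, Xs[0][0], len(ys) + 1
    for f in range(len(Xs[0])):
        for t in sorted({row[f] for row in Xs}):
            err = sum((row[f] > t) != bool(lab) for row, lab in zip(Xs, ys))
            if err < best_err:
                best_f, best_t, best_err = f, t, err
    return best_f, best_t

def fit_forest(X, y, n_trees=25):
    """Bootstrap-aggregated stumps: a minimal stand-in for a random forest."""
    stumps = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def predict(stumps, row):
    votes = sum(row[f] > t for f, t in stumps)
    return 1 if 2 * votes >= len(stumps) else 0

model = fit_forest(X, y)
p_mild    = predict(model, [14, 7])    # mild event
p_extreme = predict(model, [85, 58])   # extreme event
```

A real application would use a proper random forest library with the county-level storm, population, and built-environment indicators described above.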
In any crisis, there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest the idea of presenting multiple scenarios.
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose–response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data.
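Orthogonal (Deming) regression with equal error variances has a closed-form slope; a sketch on invented log10 BMD pairs, not the NTP data:

```python
import math, statistics

def deming(xs, ys):
    """Orthogonal (Deming) regression assuming equal error variance in
    both variables; returns (slope, intercept)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# invented log10 BMDs (mg/kg-day): 3-month study vs. 2-year study
short   = [1.8, 1.2, 2.5, 0.9, 2.0, 1.5]
chronic = [1.1, 0.6, 1.9, 0.2, 1.4, 0.8]
slope, intercept = deming(short, chronic)
pred = slope * 1.6 + intercept  # predicted chronic log10 BMD, new chemical
```

Unlike ordinary least squares, the orthogonal fit treats both the short-term and chronic BMDs as measured with error, which matches the study's use of orthogonal regression.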
Accurate estimates of the amount and type of fish people eat are necessary to determine the health benefits and risks of consuming fish, and to assess compliance with fish consumption guidelines issued for fish affected by chemical contaminants. We developed a web-based and mobile-phone-enabled diary methodology to collect detailed fish consumption information for two 16-week periods in the summers of 2014 and 2015. We recruited study participants from two populations living in the Great Lakes region—women of childbearing age (WCBA) and urban residents who had purchased fishing licenses. In this article, we describe the methodology in detail and provide evidence related to participation rates, the representativeness of our sample over time, and both convergent validity and reliability of the data collection methods. Overall, 56% of WCBA and 50% of urban anglers provided complete data across both data collection periods. Among those who provided information at the beginning of Year 2, 97% of both audiences provided information throughout the entire 16-week period. Those who participated throughout the two-year period were slightly older on average (1.9–2.5 years) than other members of our original samples. We conclude that using diaries with web and smartphone technology, combined with incentives and persistent communication, has strong potential for assessing fish consumption in other areas of the country or for situations where the potential risks associated with fish consumption are substantial and the cost can be justified.
Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit.
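In signal detection terms the simulation can be sketched as follows; the population parameters are invented, not the estimates from previous research that the study uses:

```python
import math, random

random.seed(11)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def fall_rate(d_prime, criterion):
    """P(treat a phishing email as legitimate): evidence ~ N(d', 1) for
    phish vs. N(0, 1) for legitimate mail; the user flags anything with
    evidence above the criterion."""
    return phi(criterion - d_prime)

# hypothetical user population: sensitivity d' and response criterion
users = [(random.gauss(2.0, 0.6), random.gauss(1.0, 0.5)) for _ in range(10000)]
rates = sorted(fall_rate(d, c) for d, c in users)

# share of expected phishing clicks attributable to the worst decile
worst_decile_share = sum(rates[-1000:]) / sum(rates)
```

Low sensitivity (small d') and a lax criterion both raise the fall rate, and the worst decile carries a disproportionate share of total clicks, which is the concentration of risk that motivates targeting interventions at the most vulnerable users.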
Risks of Allergic Contact Dermatitis Elicited by Nickel, Chromium, and Organic Sensitizers: Quantitative Models Based on Clinical Patch Test Data
Risks of allergic contact dermatitis (ACD) from consumer products intended for extended (nonpiercing) dermal contact are regulated by E.U. Directive EN 1811 that limits released Ni to a weekly equivalent dermal load of ≤0.5 μg/cm². Similar approaches for thousands of known organic sensitizers are hampered by inability to quantify respective ACD-elicitation risk levels. To help address this gap, normalized values of cumulative risk for eliciting a positive (“≥+”) clinical patch test response reported in 12 studies for a total of n = 625 Ni-sensitized patients were modeled in relation to observed ACD-eliciting Ni loads, yielding an approximate lognormal (LN) distribution with a geometric mean and standard deviation of GM_Ni = 15 μg/cm² and GSD_Ni = 8.0, respectively. Such data for five sensitizers (including formaldehyde and 2-hydroxyethyl methacrylate) were also ∼LN distributed, but with a common GSD value equal to GSD_Ni and with heterogeneous sensitizer-specific GM values each defining a respective ACD-eliciting potency GM_Ni/GM relative to Ni. Such potencies were also estimated for nine (meth)acrylates by applying this general LN ACD-elicitation risk model to respective sets of fewer data. ACD-elicitation risk patterns observed for Cr(VI) (n = 417) and Cr(III) (n = 78) were fit to mixed-LN models in which ∼30% and ∼40% of the most sensitive responders, respectively, were estimated to exhibit a LN response also governed by GSD_Ni. The observed common LN-response shape parameter GSD_Ni may reflect a common underlying ACD mechanism and suggests a common interim approach to quantitative ACD-elicitation risk assessment based on available clinical data.
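The lognormal elicitation model can be sketched as follows; the patch-test loads below are invented, so the fitted GM and GSD differ from the study's reported GM_Ni = 15 μg/cm² and GSD_Ni = 8.0:

```python
import math, statistics

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def fit_lognormal(loads):
    """Geometric mean and geometric SD assuming lognormal tolerances."""
    logs = [math.log(x) for x in loads]
    return math.exp(statistics.fmean(logs)), math.exp(statistics.stdev(logs))

def elicitation_risk(load, gm, gsd):
    """Fraction of sensitized patients expected to react at a dermal load."""
    return phi(math.log(load / gm) / math.log(gsd))

# invented patch-test loads (μg/cm²) eliciting a positive response
gm, gsd = fit_lognormal([2, 5, 9, 15, 16, 30, 60, 120])
risk_at_limit = elicitation_risk(0.5, gm, gsd)  # e.g., the EN 1811 load limit
```

With a common GSD across sensitizers, only each sensitizer's GM needs estimating, which is what makes the relative-potency approach (GM_Ni/GM) workable from sparse clinical data.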
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider comprehensively the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty.
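A minimal sketch of uncertainty propagation with a crude correlation-based sensitivity measure (a much simpler stand-in for the study's global sensitivity analysis; the two-factor collapse model and its distributions are invented):

```python
import math, random, statistics

random.seed(5)

def corr(a, b):
    """Pearson correlation, used here as a crude sensitivity index."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

N = 5000
# invented uncertain inputs: annual rate of damaging shaking (wide
# uncertainty) and conditional collapse fragility (narrower uncertainty)
rate = [math.exp(random.gauss(math.log(0.01), 0.5)) for _ in range(N)]
frag = [math.exp(random.gauss(math.log(0.05), 0.2)) for _ in range(N)]
p_collapse = [r * f for r, f in zip(rate, frag)]  # annual collapse probability

s_rate, s_frag = corr(rate, p_collapse), corr(frag, p_collapse)
```

The input with the wider uncertainty dominates the output spread, mirroring the study's finding that ground-motion-related uncertainty dominates the annual collapse probability.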
The use of electronic cigarettes has grown substantially over the last few years. Currently, about 4% of adults use electronic cigarettes, about 16% of high school students report use in the past 30 days, as do approximately 11–25% of college students. A hallmark of the reduction in tobacco use has been the shift in social norms concerning smoking in public. Such norms may also drive views on acceptability of public electronic cigarette use. While normative factors have been given attention, little substantive application of the literature on risk perception has been brought to bear. The overall aim of this study was to place a cognitive–affective measure of risk perception within a model that also includes social cues for e-cigarettes, addictiveness beliefs, and tobacco use to predict perceived social acceptability for public use of e-cigarettes. To do so, a cross-sectional study using an online survey was conducted among a sample of undergraduate students at a Western university (n = 395). A structural equation model showed that the acceptability of public e-cigarette use was influenced by social cues, beliefs about addiction, and cognitive risk perception, even after controlling for nicotine use. The model revealed that cognitive assessment of e-cigarette risk and perception of addictiveness had a suppressing effect on perceived acceptability of public vaping, while greater exposure to social cues exerted a countervailing effect. This is evidence of the role that risk perception and social norms may play in the increases in electronic cigarette use that have been observed.
Graphs show promise for improving communications about different types of risks, including health risks, financial risks, and climate risks. However, graph designs that are effective at meeting one important risk communication goal (promoting risk-avoidant behaviors) can at the same time compromise another key goal (improving risk understanding). We developed and tested simple bar graphs aimed at accomplishing these two goals simultaneously. We manipulated two design features in graphs, namely, whether graphs depicted the number of people affected by a risk and those at risk of harm (“foreground+background”) versus only those affected (“foreground-only”), and the presence versus absence of simple numerical labels above bars. Foreground-only displays were associated with larger risk perceptions and risk-avoidant behavior (i.e., willingness to take a drug for heart attack prevention) than foreground+background displays, regardless of the presence of labels. Foreground-only graphs also hindered risk understanding when labels were not present. However, the presence of labels significantly improved understanding, eliminating the detrimental effect of foreground-only displays. Labels also led to more positive user evaluations of the graphs, but did not affect risk-avoidant behavior. Using process modeling we identified mediators (risk perceptions, understanding, user evaluations) that explained the effect of display type on risk-avoidant behavior. Our findings contribute new evidence to the graph design literature: unlike what was previously feared, we demonstrate that it is possible to design foreground-only graphs that promote intentions for behavior change without a detrimental effect on risk understanding. Implications for the design of graphical risk communications and decision support are discussed.
Human exposure to bacteria resistant to antimicrobials and transfer of related genes is a complex issue and occurs, among other pathways, via meat consumption. In a context of limited resources, the prioritization of risk management activities is essential. Since the antimicrobial resistance (AMR) situation differs substantially between countries, prioritization should be country specific. The objective of this study was to develop a systematic and transparent framework to rank combinations of bacteria species resistant to selected antimicrobial classes found in meat, based on the risk they represent for public health in Switzerland. A risk assessment model from slaughter to consumption was developed following the Codex Alimentarius guidelines for risk analysis of foodborne AMR. Using data from the Swiss AMR monitoring program, 208 combinations of animal species/bacteria/antimicrobial classes were identified as relevant hazards. Exposure assessment and hazard characterization scores were developed and combined using multicriteria decision analysis. The effect of changing weights of scores was explored with sensitivity analysis. Attributing equal weights to each score, poultry-associated combinations represented the highest risk. In particular, contamination with extended-spectrum β-lactamase/plasmidic AmpC-producing Escherichia coli in poultry meat ranked high for both exposure and hazard characterization. Tetracycline- or macrolide-resistant Enterococcus spp., as well as fluoroquinolone- or macrolide-resistant Campylobacter jejuni, ranked among combinations with the highest risk. This study provides a basis for prioritizing future activities to mitigate the risk associated with foodborne AMR in Switzerland. A user-friendly version of the model was provided to risk managers; it can easily be adjusted to the constantly evolving knowledge on AMR.
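The multicriteria ranking step can be sketched as follows; the scores and weights are invented placeholders (the combination names echo the abstract, but their values are not from the Swiss monitoring data):

```python
# invented exposure (E) and hazard-characterization (H) scores on a 0-1 scale
combos = {
    "poultry / E. coli / ESBL-AmpC":         {"E": 0.9, "H": 0.8},
    "poultry / C. jejuni / fluoroquinolone": {"E": 0.7, "H": 0.7},
    "pork / Enterococcus / tetracycline":    {"E": 0.5, "H": 0.4},
    "beef / E. coli / aminoglycoside":       {"E": 0.2, "H": 0.3},
}

def rank(combos, w_exposure=0.5, w_hazard=0.5):
    """Additive multicriteria score, ranked high to low; varying the
    weights reproduces the sensitivity analysis described above."""
    score = lambda s: w_exposure * s["E"] + w_hazard * s["H"]
    return sorted(combos, key=lambda k: score(combos[k]), reverse=True)

equal_weights = rank(combos)
```

A risk manager can rerun `rank` with different weightings to see how robust the priority ordering is, which is the sensitivity exploration the study reports.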
Bounding Analysis of Drinking Water Health Risks from a Spill of Hydraulic Fracturing Flowback Water
A bounding risk assessment is presented that evaluates possible human health risk from a hypothetical scenario involving a 10,000-gallon release of flowback water from horizontal fracturing of Marcellus Shale. The water is assumed to be spilled on the ground, infiltrates into groundwater that is a source of drinking water, and an adult and child located downgradient drink the groundwater. Key uncertainties in estimating risk are given explicit quantitative treatment using Monte Carlo analysis. Chemicals that contribute significantly to estimated health risks are identified, as are key uncertainties and variables to which risk estimates are sensitive. The results show that hypothetical exposure via drinking water impacted by chemicals in Marcellus Shale flowback water, assumed to be spilled onto the ground surface, results in predicted bounds between 10⁻¹⁰ and 10⁻⁶ (for both adult and child receptors) for excess lifetime cancer risk. Cumulative hazard indices (HI_cumulative) resulting from these hypothetical exposures have predicted bounds (5th to 95th percentile) between 0.02 and 35 for assumed adult receptors and 0.1 and 146 for assumed child receptors. Predicted health risks are dominated by noncancer endpoints related to ingestion of barium and lithium in impacted groundwater. Hazard indices above unity are largely related to exposure to lithium. Salinity taste thresholds are likely to be exceeded before drinking water exposures result in adverse health effects. The findings provide focus for policy discussions concerning flowback water risk management. They also indicate ways to improve the ability to estimate health risks from drinking water impacted by a flowback water spill (i.e., reducing uncertainty).
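The Monte Carlo structure of such a bounding assessment can be sketched as follows; the distributions, reference dose, and slope factor are invented placeholders, not the study's inputs:

```python
import random, statistics

random.seed(13)

N = 10000
hq, cancer = [], []
for _ in range(N):
    # hypothetical input distributions (triangular: low, high, mode)
    conc = random.triangular(0.1, 20.0, 2.0)     # well-water conc. (mg/L)
    ir   = random.triangular(1.0, 3.0, 2.0)      # water intake (L/day)
    bw   = random.triangular(50.0, 100.0, 70.0)  # body weight (kg)
    dose = conc * ir / bw                        # daily dose (mg/kg-day)
    hq.append(dose / 0.002)                      # hazard quotient vs. an RfD
    cancer.append(dose * 1e-3 * 30 / 70)         # slope factor, 30 of 70 yrs

cuts = statistics.quantiles(hq, n=20)            # 5th..95th percentile cuts
hq_lo, hq_hi = cuts[0], cuts[-1]
```

Reporting the 5th-to-95th percentile interval of the sampled hazard quotient (and summing quotients across chemicals into a cumulative hazard index) is the bounding presentation used in the abstract.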