Risk Analysis: An International Journal
Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users are most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit.
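The simulation approach this abstract describes can be sketched in signal detection terms. Below is a minimal illustration assuming the standard equal-variance Gaussian model (sensitivity d′, decision criterion c); all parameter values are placeholders, not estimates from the study:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def detection_rates(d_prime, c):
    """Hit and false-alarm rates for a user with sensitivity d' and criterion c;
    phishing evidence ~ N(+d'/2, 1), legitimate evidence ~ N(-d'/2, 1).
    Under this convention, a higher c means fewer emails are flagged
    (a more trusting, less cautious user)."""
    hit_rate = 1.0 - phi(c - d_prime / 2.0)          # P(flag | phishing)
    false_alarm_rate = 1.0 - phi(c + d_prime / 2.0)  # P(flag | legitimate)
    return hit_rate, false_alarm_rate

def fraction_phished(d_prime, c, n_emails=10_000, p_phish=0.05, seed=0):
    """Monte Carlo estimate of the share of phishing emails a user misses."""
    rng = random.Random(seed)
    missed = phishing = 0
    for _ in range(n_emails):
        if rng.random() < p_phish:           # this email is a phish
            phishing += 1
            evidence = rng.gauss(d_prime / 2.0, 1.0)
            if evidence <= c:                # below criterion: not flagged
                missed += 1
    return missed / max(phishing, 1)
```

Aggregating `fraction_phished` over a simulated population of users with heterogeneous (d′, c) pairs is one way to attribute system-level risk to the most vulnerable users, as in step (2) above.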
Risks of Allergic Contact Dermatitis Elicited by Nickel, Chromium, and Organic Sensitizers: Quantitative Models Based on Clinical Patch Test Data
Risks of allergic contact dermatitis (ACD) from consumer products intended for extended (nonpiercing) dermal contact are regulated by E.U. Directive EN 1811 that limits released Ni to a weekly equivalent dermal load of ≤0.5 μg/cm². Similar approaches for thousands of known organic sensitizers are hampered by inability to quantify respective ACD-elicitation risk levels. To help address this gap, normalized values of cumulative risk for eliciting a positive (“≥+”) clinical patch test response reported in 12 studies for a total of n = 625 Ni-sensitized patients were modeled in relation to observed ACD-eliciting Ni loads, yielding an approximate lognormal (LN) distribution with a geometric mean and standard deviation of GM_Ni = 15 μg/cm² and GSD_Ni = 8.0, respectively. Such data for five sensitizers (including formaldehyde and 2-hydroxyethyl methacrylate) were also ∼LN distributed, but with a common GSD value equal to GSD_Ni and with heterogeneous sensitizer-specific GM values each defining a respective ACD-eliciting potency GM_Ni/GM relative to Ni. Such potencies were also estimated for nine (meth)acrylates by applying this general LN ACD-elicitation risk model to respective sets of fewer data. ACD-elicitation risk patterns observed for Cr(VI) (n = 417) and Cr(III) (n = 78) were fit to mixed-LN models in which ∼30% and ∼40% of the most sensitive responders, respectively, were estimated to exhibit a LN response also governed by GSD_Ni. The observed common LN-response shape parameter GSD_Ni may reflect a common underlying ACD mechanism and suggests a common interim approach to quantitative ACD-elicitation risk assessment based on available clinical data.
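The lognormal elicitation model reported here can be written down directly: the cumulative fraction of sensitized patients responding at dermal load x is Φ((ln x − ln GM)/ln GSD). A minimal sketch using the nickel parameters quoted above (GM = 15 μg/cm², GSD = 8.0); the risk computed at the EN 1811 limit illustrates the model's use and is not a value taken from the paper:

```python
import math

def acd_elicitation_risk(load, gm, gsd):
    """Cumulative fraction of sensitized patients with a positive (">=+")
    patch-test response at dermal load `load` (same units as gm, e.g.
    ug/cm2), under a lognormal dose-response with geometric mean gm and
    geometric standard deviation gsd."""
    z = (math.log(load) - math.log(gm)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

GM_NI, GSD_NI = 15.0, 8.0   # nickel parameters reported in the abstract

def relative_potency(gm_sensitizer):
    """ACD-eliciting potency of a sensitizer relative to nickel (GM_Ni / GM)."""
    return GM_NI / gm_sensitizer

# Model-implied risk at the EN 1811 weekly-equivalent limit of 0.5 ug/cm2:
risk_at_limit = acd_elicitation_risk(0.5, GM_NI, GSD_NI)   # about 5%
```

By construction, half of the sensitized population responds at the geometric mean load, and the common GSD means all sensitizers share one response shape, shifted by their relative potency.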
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models for estimating the probability of building collapse do not comprehensively consider the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty.
The use of electronic cigarettes has grown substantially over the last few years. Currently, about 4% of adults use electronic cigarettes, about 16% of high school students report use in the past 30 days, as do approximately 11–25% of college students. A hallmark of the reduction in tobacco use has been the shift in social norms concerning smoking in public. Such norms may also drive views on acceptability of public electronic cigarette use. While normative factors have been given attention, little substantive application of the literature on risk perception has been brought to bear. The overall aim of this study was to place a cognitive–affective measure of risk perception within a model that also includes social cues for e-cigarettes, addictiveness beliefs, and tobacco use to predict perceived social acceptability for public use of e-cigarettes. To do so, a cross-sectional study using an online survey was conducted among a sample of undergraduate students at a Western university (n = 395). A structural equation model showed that the acceptability of public e-cigarette use was influenced by social cues, beliefs about addiction, and cognitive risk perception, even after controlling for nicotine use. The model revealed that cognitive assessment of e-cigarette risk and perception of addictiveness had a suppressing effect on perceived acceptability of public vaping, while greater exposure to social cues exerted a countervailing effect. This is evidence of the role that risk perception and social norms may play in the observed increases in electronic cigarette use.
Graphs show promise for improving communications about different types of risks, including health risks, financial risks, and climate risks. However, graph designs that are effective at meeting one important risk communication goal (promoting risk-avoidant behaviors) can at the same time compromise another key goal (improving risk understanding). We developed and tested simple bar graphs aimed at accomplishing these two goals simultaneously. We manipulated two design features in graphs, namely, whether graphs depicted the number of people affected by a risk and those at risk of harm (“foreground+background”) versus only those affected (“foreground-only”), and the presence versus absence of simple numerical labels above bars. Foreground-only displays were associated with larger risk perceptions and risk-avoidant behavior (i.e., willingness to take a drug for heart attack prevention) than foreground+background displays, regardless of the presence of labels. Foreground-only graphs also hindered risk understanding when labels were not present. However, the presence of labels significantly improved understanding, eliminating the detrimental effect of foreground-only displays. Labels also led to more positive user evaluations of the graphs, but did not affect risk-avoidant behavior. Using process modeling, we identified mediators (risk perceptions, understanding, user evaluations) that explained the effect of display type on risk-avoidant behavior. Our findings contribute new evidence to the graph design literature: unlike what was previously feared, we demonstrate that it is possible to design foreground-only graphs that promote intentions for behavior change without a detrimental effect on risk understanding. Implications for the design of graphical risk communications and decision support are discussed.
Human exposure to bacteria resistant to antimicrobials and transfer of related genes is a complex issue and occurs, among other pathways, via meat consumption. In a context of limited resources, the prioritization of risk management activities is essential. Since the antimicrobial resistance (AMR) situation differs substantially between countries, prioritization should be country specific. The objective of this study was to develop a systematic and transparent framework to rank combinations of bacteria species resistant to selected antimicrobial classes found in meat, based on the risk they represent for public health in Switzerland. A risk assessment model from slaughter to consumption was developed following the Codex Alimentarius guidelines for risk analysis of foodborne AMR. Using data from the Swiss AMR monitoring program, 208 combinations of animal species/bacteria/antimicrobial classes were identified as relevant hazards. Exposure assessment and hazard characterization scores were developed and combined using multicriteria decision analysis. The effect of changing weights of scores was explored with sensitivity analysis. Attributing equal weights to each score, poultry-associated combinations represented the highest risk. In particular, contamination with extended-spectrum β-lactamase/plasmidic AmpC-producing Escherichia coli in poultry meat ranked high for both exposure and hazard characterization. Tetracycline- or macrolide-resistant Enterococcus spp., as well as fluoroquinolone- or macrolide-resistant Campylobacter jejuni, ranked among combinations with the highest risk. This study provides a basis for prioritizing future activities to mitigate the risk associated with foodborne AMR in Switzerland. A user-friendly version of the model was provided to risk managers; it can easily be adjusted to the constantly evolving knowledge on AMR.
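The ranking step described above can be illustrated with a toy weighted-sum multicriteria model; the combination names, scores, and weights below are invented for illustration and are not from the Swiss monitoring data:

```python
# Hypothetical normalized scores (0-1) for a few animal/bacteria/antimicrobial
# combinations; all values are illustrative placeholders.
combinations = {
    "poultry / E. coli (ESBL/AmpC) / beta-lactams": {"exposure": 0.9, "hazard": 0.8},
    "poultry / C. jejuni / fluoroquinolones":       {"exposure": 0.7, "hazard": 0.7},
    "pork / Enterococcus spp. / tetracyclines":     {"exposure": 0.5, "hazard": 0.4},
}

def rank(combos, w_exposure=0.5, w_hazard=0.5):
    """Rank combinations by a weighted sum of exposure and hazard
    characterization scores (equal weights as the base case)."""
    return sorted(combos,
                  key=lambda name: (w_exposure * combos[name]["exposure"] +
                                    w_hazard * combos[name]["hazard"]),
                  reverse=True)
```

Re-running `rank` while varying `w_exposure` and `w_hazard` gives a simple version of the sensitivity analysis on score weights that the study describes.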
Bounding Analysis of Drinking Water Health Risks from a Spill of Hydraulic Fracturing Flowback Water
A bounding risk assessment is presented that evaluates possible human health risk from a hypothetical scenario involving a 10,000-gallon release of flowback water from horizontal fracturing of Marcellus Shale. The water is assumed to be spilled on the ground, infiltrates into groundwater that is a source of drinking water, and an adult and child located downgradient drink the groundwater. Key uncertainties in estimating risk are given explicit quantitative treatment using Monte Carlo analysis. Chemicals that contribute significantly to estimated health risks are identified, as are key uncertainties and variables to which risk estimates are sensitive. The results show that hypothetical exposure via drinking water impacted by chemicals in Marcellus Shale flowback water, assumed to be spilled onto the ground surface, results in predicted bounds between 10⁻¹⁰ and 10⁻⁶ (for both adult and child receptors) for excess lifetime cancer risk. Cumulative hazard indices (HI_CUMULATIVE) resulting from these hypothetical exposures have predicted bounds (5th to 95th percentile) between 0.02 and 35 for assumed adult receptors and 0.1 and 146 for assumed child receptors. Predicted health risks are dominated by noncancer endpoints related to ingestion of barium and lithium in impacted groundwater. Hazard indices above unity are largely related to exposure to lithium. Salinity taste thresholds are likely to be exceeded before drinking water exposures result in adverse health effects. The findings provide focus for policy discussions concerning flowback water risk management. They also indicate ways to improve the ability to estimate health risks from drinking water impacted by a flowback water spill (i.e., reducing uncertainty).
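The cumulative hazard index driving the noncancer findings is a sum of per-chemical hazard quotients. A minimal sketch with placeholder intakes and reference doses; the numbers are illustrative, not the study's estimates:

```python
def hazard_index(intakes, rfds):
    """Cumulative hazard index: sum over chemicals of the hazard quotient
    (chronic daily intake / reference dose); HI > 1 flags potential
    noncancer concern."""
    return sum(intakes[chem] / rfds[chem] for chem in intakes)

# Illustrative chronic daily intakes and reference doses, both in mg/kg-day
# (placeholder values chosen so that, as in the abstract, lithium dominates).
intakes = {"barium": 0.05, "lithium": 0.004}
rfds    = {"barium": 0.2,  "lithium": 0.002}

hi = hazard_index(intakes, rfds)   # 0.25 + 2.0 = 2.25, dominated by lithium
```

In the Monte Carlo setting of the study, the intakes themselves are distributions, and the reported 5th-to-95th percentile bounds come from propagating those distributions through this kind of sum.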
A Probabilistic Approach to Assess External Doses to the Public Considering Spatial Variability of Radioactive Contamination and Interpopulation Differences in Behavior Pattern
Dose assessment is an important issue from the viewpoints of protecting people from radiation exposure and managing postaccident situations adequately. However, the radiation doses received by people cannot be determined with complete accuracy because of the uncertainties and the variability associated with any process of defining individual characteristics and in the dose assessment process itself. In this study, a dose assessment model was developed based on measurements and surveys of individual doses and relevant contributors (i.e., ambient dose rates and behavior patterns) in Fukushima City for four population groups: Fukushima City Office staff, Senior Citizens’ Club, Contractors’ Association, and Agricultural Cooperative. In addition, probabilistic assessments were performed for these population groups by considering the spatial variability of contamination and interpopulation differences resulting from behavior patterns. As a result of comparison with the actual measurements, the assessment results for participants from the Fukushima City Office agreed with the measured values, thereby validating the model and the approach. Although the assessment results obtained for the Senior Citizens’ Club and the Agricultural Cooperative partly differ from the measured values, these results can be improved by further accounting for dose reduction due to decontamination and for additional exposure sources in agricultural fields. By contrast, the measurements obtained for the participants from the Contractors’ Association were not reproduced well in the present study. To assess the doses to this group, further investigations of association members’ work activities and the related dose reduction effects are needed.
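The probabilistic assessment described above can be sketched as a Monte Carlo sum over behavior-pattern segments, with ambient dose rates sampled from lognormal distributions to represent spatial variability of contamination. All segment durations and dose-rate parameters below are invented for illustration:

```python
import math
import random

def daily_external_dose(segments, rng):
    """One Monte Carlo realization of a day's external dose (uSv): sum over
    behavior-pattern segments of hours spent times a sampled ambient dose
    rate (uSv/h), lognormal to reflect spatial variability."""
    return sum(hours * rng.lognormvariate(math.log(median_rate), math.log(gsd))
               for hours, median_rate, gsd in segments)

# (hours, median ambient dose rate in uSv/h, geometric SD) -- placeholders
office_worker = [(8, 0.10, 1.5),   # indoors at home
                 (8, 0.05, 1.3),   # indoors at the office (shielding)
                 (8, 0.20, 2.0)]   # outdoors / commuting

rng = random.Random(42)
doses = [daily_external_dose(office_worker, rng) for _ in range(5_000)]
mean_daily_dose = sum(doses) / len(doses)
```

Giving each population group its own segment list (and its own dose-rate distributions) is one way to represent the interpopulation differences in behavior patterns that the study models.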
When outcomes are defined over a geographic region, measures of spatial risk regarding these outcomes can be more complex than traditional measures of risk. One of the main challenges is the need for a cardinal preference function that incorporates the spatial nature of the outcomes. We explore preference conditions that will yield the existence of spatial measurable value and utility functions, and discuss their application to spatial risk analysis. We also present a simple example on household freshwater usage across regions to demonstrate how such functions can be assessed and applied.
Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaptation of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaptation to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaptation. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions.
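The behavioral core of such a model, a prospect theory value function applied per attribute, can be sketched as follows. The additive multiattribute form and the standard Tversky-Kahneman parameter estimates (α = β = 0.88, λ = 2.25) are common illustrative choices, not necessarily those of the article:

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function: concave over gains, convex and
    loss-averse (scaled by lam) over losses, relative to a reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def attack_value(outcomes, weights):
    """Simple additive multiattribute value: weighted sum of per-attribute
    prospect theory values (one common aggregation, not the only one)."""
    return sum(w * pt_value(x) for w, x in zip(weights, outcomes))

# A loss looms larger than an equal gain: |v(-1)| = 2.25 > v(+1) = 1.0
assert abs(pt_value(-1.0)) > pt_value(1.0)
```

In an attacker/defender setting, the defender's screening options change the attacker's reference-point-relative outcomes, so the attacker's best response is computed by maximizing `attack_value` over attack plans for each candidate defense.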
Assessment of Instructions on Protection Against Food Contaminated with Radiocesium in Japan in 2011
The Japan Ministry of Health, Labour and Welfare (MHLW) has published instructions for radiological protection against contaminated food after the Fukushima Daiichi nuclear power plant accident in 2011. Following the instructions, the export and consumption of food items identified as being contaminated were restricted for a certain period. We assessed the validity of the imposed restriction periods for two representative vegetables (spinach and cabbage) grown in Fukushima Prefecture from two perspectives: effectiveness for reducing dietary dose and economic efficiency. To assess effectiveness, we estimated the restriction period required to maintain consumers’ dose below the guidance dose levels. To assess economic efficiency, we estimated the restriction period that maximizes the net benefit to taxpayers. All estimated restriction periods were shorter than the actual restriction periods imposed on spinach and cabbage from Fukushima in 2011, which indicates that the food restriction effectively maintained consumers’ dietary dose below the guidance dose level, but in an economically inefficient manner. We also evaluated the response of the restriction period to the sample size for each weekly food safety test and the instructions for when to remove the restriction. Stringent MHLW instructions seemed to sufficiently reduce consumers’ health risk even when the sample size for the weekly food safety test was small, but tended to increase the economic cost to taxpayers.
To solve real-life problems—such as those related to technology, health, security, or climate change—and make suitable decisions, risk is nearly always a main issue. Different sciences often support this work, for example, statistics, the natural sciences, and the social sciences. Risk analysis approaches and methods are also commonly used, but risk analysis is not broadly accepted as a science in itself. A key problem is the lack of explanatory power and large uncertainties when assessing risk. This article presents an emerging new risk analysis science based on novel ideas and theories on risk analysis developed in recent years by the risk analysis community. It builds on a fundamental change in thinking, from the search for accurate predictions and risk estimates, to knowledge generation related to concepts, theories, frameworks, approaches, principles, methods, and models to understand, assess, characterize, communicate, and (in a broad sense) manage risk. Examples are used to illustrate the importance of this distinct risk analysis science for solving risk problems, supporting science in general and other disciplines in particular.
Deep Uncertainties in Sea-Level Rise and Storm Surge Projections: Implications for Coastal Flood Risk Management
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructures. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space, and highlights the need for compromise solutions to address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook, and that could have critical implications for flood adaptation strategies.
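The classic Van Dantzig (1956) formulation that this work builds on balances a linear dike-heightening cost against capitalized expected flood damages that decay exponentially with height. A minimal single-objective sketch; all parameter values are order-of-magnitude placeholders, not calibrated estimates:

```python
import math

def total_cost(heightening_m, cost_per_m=1.5e7, p0=0.0038, alpha=2.6,
               damage=2.0e10, discount=0.02):
    """Expected total cost of heightening a dike by `heightening_m` meters:
    linear investment cost plus expected flood damages, where the annual
    flood probability decays exponentially with heightening and damages
    are capitalized as a perpetuity at the discount rate."""
    investment = cost_per_m * heightening_m
    p_flood = p0 * math.exp(-alpha * heightening_m)   # annual exceedance prob.
    return investment + damage * p_flood / discount

# Grid search for the cost-minimizing heightening (meters).
best_h = min((h / 100.0 for h in range(0, 501)), key=total_cost)
```

Replacing this single expected-cost objective with separate investment, damage, and reliability objectives, and treating p0, alpha, and the surge model itself as uncertain, is what expands the decision space and exposes the tradeoffs and deep uncertainties the abstract describes.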
Risk assessors and managers face many difficult challenges related to novel cyber systems. Among these challenges are the constantly changing nature of cyber systems caused by technical advances, their distribution across the physical, information, and sociocognitive domains, and the complex network structures often including thousands of nodes. Here, we review probabilistic and risk-based decision-making techniques applied to cyber systems and conclude that existing approaches typically do not address all components of the risk assessment triplet (threat, vulnerability, consequence) and lack the ability to integrate across multiple domains of cyber systems to provide guidance for enhancing cybersecurity. We present a decision-analysis-based approach that quantifies threat, vulnerability, and consequences through a set of criteria designed to assess the overall utility of cybersecurity management alternatives. The proposed framework bridges the gap between risk assessment and risk management, allowing an analyst to ensure a structured and transparent process of selecting risk management alternatives. The use of this technique is illustrated for a hypothetical, but realistic, case study exemplifying the process of evaluating and ranking five cybersecurity enhancement strategies. The approach presented does not necessarily eliminate biases and subjectivity necessary for selecting countermeasures, but provides justifiable methods for selecting risk management actions consistent with stakeholder and decisionmaker values and technical data.
A major challenge in scenario analysis for the safety assessment of nuclear waste repositories pertains to the comprehensiveness of the set of scenarios selected for assessing the safety of the repository. Motivated by this challenge, we discuss the aspects of scenario analysis relevant to comprehensiveness. Specifically, we note that (1) it is necessary to make it clear why scenarios usually focus on a restricted set of features, events, and processes; (2) there is not yet consensus on the interpretation of comprehensiveness for guiding the generation of scenarios; and (3) there is a need for sound approaches to the treatment of epistemic uncertainties.