Risk Analysis: An International Journal
The widely used empirical Bayes (EB) and full Bayes (FB) methods for before–after safety assessment are sometimes limited by their extensive data requirements for additional reference sites. To address this issue, this study proposes a novel before–after safety evaluation methodology based on survival analysis and longitudinal data as an alternative to the EB/FB methods. A Bayesian survival analysis model with a random effect term (SARE) is developed to address the unobserved heterogeneity across sites. The proposed survival analysis method is validated through a simulation study before its application. Subsequently, the SARE model is developed in a case study to evaluate the safety effectiveness of a recent red‐light‐running photo enforcement program in New Jersey. As demonstrated in the simulation and the case study, survival analysis can provide valid estimates using only data from treated sites, so its results are not affected by the selection of defective or insufficient reference sites. In addition, the proposed approach can account for the censored data generated by the transition from the before period to the after period, which has not previously been explored in the literature. Using individual crashes as the units of analysis, survival analysis can incorporate longitudinal covariates such as traffic volume and weather variation, and thus can explicitly account for potential temporal heterogeneity.
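The censoring mechanism the abstract describes can be illustrated with a minimal likelihood sketch: under an exponential time-to-crash model, an observed crash contributes the event density, while an interval cut off at the before/after transition contributes only the survival function. This is a generic textbook construction, not the article's SARE model; the function name and parameters are assumptions.

```python
import math

def exp_loglik(rate, times, observed):
    """Log-likelihood of an exponential time-to-event model with right
    censoring. `observed[i]` is True for a crash observed at times[i] and
    False for an observation censored at times[i] (e.g., cut off by the
    transition from the before period to the after period)."""
    ll = 0.0
    for t, obs in zip(times, observed):
        if obs:
            # event: density contribution f(t) = rate * exp(-rate * t)
            ll += math.log(rate) - rate * t
        else:
            # censored: survival contribution S(t) = exp(-rate * t)
            ll += -rate * t
    return ll
```

A censored observation contributes less information than an observed event at the same time, which is why dropping censored intervals (rather than modeling them) biases the estimated crash rate.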
Governments typically face threats from multiple attackers. In the literature, however, researchers often model attackers as a single monolithic player that chooses whether to attack, how much to invest, and which target to strike, rather than treating multiple attackers as independent agents. This modeling strategy may cause suboptimal defense investment if the attackers have vastly different interests and preferences and cannot, in theory, be combined into one. In this article, we develop a sequential game with complete information in which one defender explicitly deals with multiple unmergeable attackers. Thorough numerical experiments are conducted using ratio and exponential contest success functions under different scenarios. The results are also contrasted with the corresponding single‐attacker model to study the effect of mishandling multiple attackers. The propositions and observations drawn from the numerical experiments provide insights for government decision making through a better understanding of the attackers' behavior.
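The two contest success functions named above have standard closed forms in the attacker–defender literature: the ratio (Tullock) form and the exponential (logit) form. The sketch below shows both; the function names, the decisiveness parameters `m` and `k`, and the tie-breaking convention are illustrative assumptions, not the article's calibration.

```python
import math

def ratio_csf(attack, defense, m=1.0):
    """Ratio-form (Tullock) contest success function: probability that the
    attack succeeds, given attacker and defender effort levels. `m` controls
    how decisive relative effort is."""
    if attack == 0 and defense == 0:
        return 0.5  # convention when neither side exerts effort
    return attack**m / (attack**m + defense**m)

def exponential_csf(attack, defense, k=1.0):
    """Exponential (logit) contest success function: success probability
    depends on the difference in efforts rather than their ratio."""
    return math.exp(k * attack) / (math.exp(k * attack) + math.exp(k * defense))
```

Equal efforts yield a 50/50 contest under either form; the forms differ in how sharply the probability responds as one side outspends the other, which is why the article compares results across both.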
In risk analysis and research, the concept of risk is often understood quantitatively. For example, risk is commonly defined as the probability of an unwanted event or as its probability multiplied by its consequences. This article addresses (1) to what extent and (2) how the noun risk is actually used quantitatively. Uses of the noun risk are analyzed in four linguistic corpora, both Swedish and English (mostly American English). In total, over 16,000 uses of the noun risk are studied in 14 random (n = 500) or complete samples (where n ranges from 173 to 5,144) of, for example, news and magazine articles, fiction, and websites of government agencies. In contrast to the widespread definition of risk as a quantity, a main finding is that the noun risk is mostly used nonquantitatively. Furthermore, when used quantitatively, the quantification is seldom numerical, instead relying on less precise expressions of quantification, such as high risk and increased risk. The relatively low frequency of quantification in a wide range of language material suggests a quantification bias in many areas of risk theory, that is, overestimation of the importance of quantification in defining the concept of risk. The findings are also discussed in relation to fuzzy‐trace theory. Findings of this study confirm, as suggested by fuzzy‐trace theory, that vague representations are prominent in quantification of risk. The application of the terminology of fuzzy‐trace theory to explaining these patterns of language use is discussed.
GIS‐Based Integration of Social Vulnerability and Level 3 Probabilistic Risk Assessment to Advance Emergency Preparedness, Planning, and Response for Severe Nuclear Power Plant Accidents
In the nuclear power industry, Level 3 probabilistic risk assessment (PRA) is used to estimate damage to public health and the environment if a severe accident leads to a large radiological release. Current Level 3 PRA does not explicitly include social factors and, therefore, it is not possible to perform importance ranking of social factors for risk‐informing emergency preparedness, planning, and response (EPPR). This article offers a methodology for adapting the concept of social vulnerability, commonly used in natural hazard research, to the context of a severe nuclear power plant accident. The methodology has four steps: (1) calculating a hazard‐independent social vulnerability index for the local population; (2) developing a location‐specific representation of the maximum radiological hazard estimated from current Level 3 PRA, in a geographic information system (GIS) environment; (3) developing a GIS‐based socio‐technical risk map by combining the social vulnerability index and the location‐specific radiological hazard; and (4) conducting a risk importance measure analysis to rank the criticality of social factors based on their contribution to the socio‐technical risk. The methodology is applied using results from the 2012 Surry Power Station state‐of‐the‐art reactor consequence analysis. A radiological hazard model is generated from the MELCOR Accident Consequence Code System, translated into a GIS environment, and combined with the Centers for Disease Control and Prevention social vulnerability index (SVI). This research creates an opportunity to explicitly consider and rank the criticality of location‐specific SVI themes based on their influence on risk, providing input for EPPR.
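Steps (3) and (4) of the methodology can be sketched as a grid computation: combine vulnerability and hazard cell by cell, then rank each SVI theme by how much total risk drops when that theme is removed. The additive treatment of themes and the multiplicative vulnerability-hazard combination are common conventions assumed here for illustration; the article's exact aggregation may differ.

```python
def total_risk(svi_themes, hazard):
    """Total socio-technical risk over a grid. `svi_themes` maps each SVI
    theme name to a grid of vulnerability scores; `hazard` is a grid of
    location-specific radiological hazard. Per cell: (sum of theme scores)
    times local hazard, summed over all cells."""
    n_rows, n_cols = len(hazard), len(hazard[0])
    total = 0.0
    for r in range(n_rows):
        for c in range(n_cols):
            svi = sum(theme[r][c] for theme in svi_themes.values())
            total += svi * hazard[r][c]
    return total

def theme_importance(svi_themes, hazard):
    """Risk importance measure for each theme: the drop in total risk
    when that theme is excluded from the vulnerability index."""
    base = total_risk(svi_themes, hazard)
    return {name: base - total_risk(
                {n: g for n, g in svi_themes.items() if n != name}, hazard)
            for name in svi_themes}
```

Ranking the resulting importance values identifies which social factors contribute most to socio-technical risk at each location, which is the input the article proposes for EPPR.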
There has been a growing interest in understanding whether and how people adapt to extreme weather events in a changing climate. This article presents one of the first empirical analyses of adaptation to flooding on a global scale. Using a sample of 97 countries between 1985 and 2010, we investigate the extent and pattern of flood adaptation by estimating the effects of a country's climatological risk, recent flood experiences, and socioeconomic characteristics on its flood‐related fatalities. Our results provide mixed evidence on adaptation: countries facing greater long‐term climatological flooding risks do not necessarily adapt better and suffer fewer fatalities; however, after controlling for the cross‐country heterogeneity, we find that more recent flooding shocks have a significant and negative effect on fatalities from subsequent floods. These findings may suggest the short‐term learning dynamics of adaptation and potential inefficacy of earlier flood control measures, particularly those that promote increased exposure in floodplains. Our findings provide important implications for climate adaptation policy making and climate modeling.
Predictive Modeling and Categorizing Likelihoods of Quarantine Pest Introduction of Imported Propagative Commodities from Different Countries
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for predictive model development and validation. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country–commodity combinations. The prediction ability of the generalized linear mixed effects models was greater than that of the generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels: “High,” “Medium,” “Low,” and “Poor/Unacceptable,” using K‐means clustering analysis. This study presents a risk‐based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
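The categorization step can be illustrated with a minimal one-dimensional K-means over estimated interception probabilities, grouping country–commodity combinations into four compliance levels. This is a generic sketch under assumed toy data: the function name, initialization, and convergence rule are illustrative, not the study's actual procedure.

```python
import random

def kmeans_1d(values, k=4, iters=100, seed=0):
    """Minimal 1-D K-means. Returns cluster centers and the values assigned
    to each cluster; sorting clusters by center then maps them to compliance
    levels (lowest interception probability -> "High" compliance)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))  # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest center
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break  # converged
        centers = new_centers
    return centers, clusters
```

In practice the study clusters probability estimates together with their uncertainty; a combination with a wide confidence interval can land in a worse compliance level than its point estimate alone would suggest.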
Machine Learning Methods as a Tool for Predicting Risk of Illness Applying Next‐Generation Sequencing Data
Next‐generation sequencing (NGS) data present an untapped potential to improve microbial risk assessment (MRA) through increased specificity and redefinition of the hazard. Most MRA models do not account for differences in survivability and virulence among strains. The potential of machine learning algorithms for predicting the risk/health burden at the population level while inputting large and complex NGS data was explored with Listeria monocytogenes as a case study. The Listeria data consisted of a percentage similarity matrix from genome assemblies of 38 and 207 strains of clinical and food origin, respectively. The Basic Local Alignment Search Tool (BLAST) was used to align the assemblies against a database of 136 virulence and stress resistance genes. The outcome variable was frequency of illness, which is the percentage of reported cases associated with each strain. These frequency data were discretized into seven ordinal outcome categories and used for supervised machine learning and model selection from five ensemble algorithms. There was no significant difference in accuracy between the models, and a support vector machine with linear kernel was chosen for further inference (accuracy of 89% [95% CI: 68%, 97%]). The virulence genes FAM002725, FAM002728, FAM002729, InlF, InlJ, Inlk, IisY, IisD, IisX, IisH, IisB, lmo2026, and FAM003296 were important predictors of higher frequency of illness. InlF was uniquely truncated in the sequence type 121 strains. Most important risk predictor genes occurred at highest prevalence among strains from ready‐to‐eat, dairy, and composite foods. We foresee that the findings and approaches described offer the potential for rethinking the current approaches in MRA.
In recent years calls have been made for a shift from risk to resilience. The basic idea is that we need to be prepared when threatening events occur, whether they are anticipated or unforeseen. This article questions the extent to which this call will have and should have implications for the risk field and science. Is the call based on a belief that this field and science should be replaced by resilience analysis and management, or is it more about priorities: Should more weight be placed on improving resilience? The article argues that the only meaningful interpretation of the call is the latter. Resilience analysis and management is today an integrated part of the risk field and science, and risk analysis in a broad sense is needed to increase relevant knowledge, develop adequate policies, and make the right decisions, balancing different concerns and using our limited resources in an effective way.
Risk analysis is an essential methodology for cybersecurity as it allows organizations to deal with cyber threats potentially affecting them, prioritize the defense of their assets, and decide what security controls should be implemented. Many risk analysis methods are present in cybersecurity models, compliance frameworks, and international standards. However, most of them employ risk matrices, which suffer shortcomings that may lead to suboptimal resource allocations. We propose a comprehensive framework for cybersecurity risk analysis, covering the presence of both intentional and nonintentional threats and the use of insurance as part of the security portfolio. A simplified case study illustrates the proposed framework, serving as template for more complex problems.
The persistent gap in flood risk awareness in Canada, and elsewhere in North America, is a continual source of worry for researchers and emergency managers; many people living in at‐risk places are simply unaware of risks and of their proximity to hazards. This study seeks to understand which residents were aware of flood risk, using unique representative survey data of Calgary residents living in the city's flood‐prone neighborhoods collected after the devastating and costly 2013 Southern Alberta Flood. The article uses logistic regression models to analyze which residents were aware of risk to their homes. Findings indicate that, in addition to various demographic predictors, many of the geographic predictors (including the elevation of one's home relative to the river) are significant predictors of awareness. Having a direct sight line to one of Calgary's two rivers is also a significant predictor in some of the models, suggesting that the visibility of hazards matters for flood risk perception, although this effect fades when many of the geographic predictors are added. Finally, the models indicate that several variables related to local, neighborhood‐based social networks are significant as well. These findings reveal that both physical surroundings and social context are important for understanding risk awareness. The article concludes by discussing the relevance for social science research on disasters and hazards, as well as for planners and emergency managers.
A Novel Approach to Chemical Mixture Risk Assessment—Linking Data from Population‐Based Epidemiology and Experimental Animal Tests
Humans are continuously exposed to chemicals that are suspected or proven endocrine disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimental testing of this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using a whole mixture approach, we found a higher proportion of pregnant women at risk (13%) compared with data from more traditional models of additivity (3%) or a compound‐by‐compound strategy (1.6%).
Integrating Operational and Organizational Aspects in Interdependent Infrastructure Network Recovery
Managing risk in infrastructure systems implies dealing with interdependent physical networks and their relationships with the natural and societal contexts. Computational tools are often used to support operational decisions aimed at improving resilience, whereas economics‐related tools tend to be used to address broader societal and policy issues in infrastructure management. We propose an optimization‐based framework for infrastructure resilience analysis that incorporates organizational and socioeconomic aspects into operational problems, making it possible to understand relationships between decisions at the policy level (e.g., regulation) and the technical level (e.g., optimal infrastructure restoration). We focus on three issues that arise when integrating such levels. First, optimal restoration strategies driven by financial and operational factors evolve differently compared to those driven by socioeconomic and humanitarian factors. Second, regulatory aspects have a significant impact on recovery dynamics (e.g., effective recovery is most challenging in societies with weak institutions and regulation, where individual interests may compromise societal well‐being). Third, the decision space (i.e., available actions) in postdisaster phases is strongly determined by predisaster decisions (e.g., resource allocation). The proposed optimization framework addresses these issues by using: (1) parametric analyses to test the influence of operational and socioeconomic factors on optimization outcomes, (2) regulatory constraints to model and assess the cost and benefit (for a variety of actors) of enforcing specific policy‐related conditions for the recovery process, and (3) sensitivity analyses to capture the effect of predisaster decisions on recovery. We illustrate our methodology with an example regarding the recovery of interdependent water, power, and gas networks in Shelby County, TN (USA), with exposure to natural hazards.
Emergency material allocation is an important part of postdisaster emergency logistics, with significant implications for rescue effectiveness and disaster losses. However, the traditional single‐period allocation model often causes local surpluses or shortages and high cost, and prevents the system from achieving an equitable or optimal multiperiod allocation. To achieve equitable allocation of emergency materials in the case of serious shortages relative to the demand by victims, this article introduces a multiperiod model for allocation of emergency materials to multiple affected locations (using an exponential utility function to reflect the disutility loss due to material shortfalls), and illustrates the relationship between the equity of allocations and the cost of emergency response. Finally, numerical examples are presented to demonstrate both the feasibility and the usefulness of the proposed model for achieving multiperiod equitable allocation of emergency material among multiple disaster locations. The results indicate that the introduction of a nonlinear utility function to reflect the disutility of large shortfalls can make the material allocation fairer and minimize large losses due to shortfalls. We found that achieving equity has a significant but not unreasonable impact on emergency costs. We also illustrate that using differing utility functions for different types of materials adds an important dimension of flexibility.
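The role of the exponential disutility function can be shown with a small sketch: because the penalty is convex in the shortfall, one large shortfall costs more than the same total shortfall spread across locations or periods, which is exactly what pushes the optimizer toward equitable allocations. The functional form and the parameter `a` below are illustrative assumptions, not the article's calibration.

```python
import math

def shortfall_disutility(demand, allocated, a=2.0):
    """Exponential disutility of a material shortfall at one location in one
    period. Small shortfalls cost little; large shortfalls are penalized
    disproportionately. `a` > 0 sets how sharply the penalty grows."""
    shortfall = max(demand - allocated, 0.0)
    rate = shortfall / demand if demand > 0 else 0.0  # fraction unmet
    return math.exp(a * rate) - 1.0
```

For example, leaving one location with a 50% shortfall is penalized more heavily than leaving two locations with 25% shortfalls each, so a cost-minimizing allocator spreads scarce supplies rather than concentrating the pain.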
To prevent catastrophic asteroid–Earth collisions, it has been proposed to use nuclear explosives to deflect away earthbound asteroids. However, this policy of nuclear deflection could inadvertently increase the risk of nuclear war and other violent conflict. This article conducts risk–risk tradeoff analysis to assess whether nuclear deflection results in a net increase or decrease in risk. Assuming nonnuclear deflection options are also used, nuclear deflection may only be needed for the largest and most imminent asteroid collisions. These are low‐frequency, high‐severity events. The effect of nuclear deflection on violent conflict risk is more ambiguous due to the complex and dynamic social factors at play. Indeed, it is not clear whether nuclear deflection would cause a net increase or decrease in violent conflict risk. Similarly, this article cannot reach a precise conclusion on the overall risk–risk tradeoff. The value of this article comes less from specific quantitative conclusions and more from providing an analytical framework and a better overall understanding of the policy decision. The article demonstrates the importance of integrated analysis of global risks and the policies to address them, as well as the challenge of quantitative evaluation of complex social processes such as violent conflict.
Modeling of Distributed Generators Resilience Considering Lifeline Dependencies During Extreme Events
This article derives resilience models for distributed generators, considering lifeline dependencies during extreme events. The effects on power resilience of storage capacity, fuel delays, and fuel order placements are analyzed. Results indicate that storage capacity plays an important role in improving the overall power supply resilience seen by loads. In addition, the presented models provide a quantitative approach to evaluating fuel delivery resilience. The models facilitate studying fuel scheduling policies and local fuel storage sizing for specified resilience requirements. It is observed that tank autonomy greatly affects the flexibility in employing scheduling policies for supplying fuel to generators. Resilience dependence on buffer autonomy is high during the first few days of extreme events, and this could have considerable effects on managing evacuations and rescue operations.
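The interaction between tank autonomy and fuel delivery delays can be illustrated with a toy day-by-day fuel balance for a backup generator during an outage. All names and parameter values below are illustrative assumptions, not the article's models.

```python
def days_of_autonomy(tank_capacity, burn_rate, delivery_day,
                     delivery_amount, horizon):
    """Simulate a generator's fuel tank over an outage of `horizon` days,
    starting full. One delivery of `delivery_amount` arrives on
    `delivery_day` (capped by tank capacity). Returns how many days the
    generator runs before the tank empties."""
    fuel = tank_capacity
    days_running = 0
    for day in range(horizon):
        if day == delivery_day:
            fuel = min(fuel + delivery_amount, tank_capacity)
        if fuel >= burn_rate:
            fuel -= burn_rate
            days_running += 1
        else:
            break  # tank empty: load loses power
    return days_running
```

Varying `delivery_day` in this sketch shows why resilience is most sensitive to buffer autonomy in the first few days: a delivery that slips past the tank's autonomy window cannot prevent an outage, no matter how large it is.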