Risk Analysis: An International Journal
The Practical Significance of Measurement Error in Pulmonary Function Testing Conducted in Research Settings
Conventional spirometry produces measurement error by using repeatability criteria (RC) to discard acceptable data and terminating tests early when RC are met. These practices also implicitly assume that there is no variation across maneuvers within each test. This has implications for air pollution regulations that rely on pulmonary function tests to determine adverse effects or set standards. We perform a Monte Carlo simulation of 20,902 tests of forced expiratory volume in 1 second (FEV1), each with eight maneuvers, for an individual with empirically obtained, plausibly normal pulmonary function. Default coefficients of variation for inter‐ and intratest variability (3% and 6%, respectively) are employed. Measurement error is defined as the difference between results from the conventional protocol and an unconstrained, eight‐maneuver alternative. In the default model, average measurement error is shown to be ∼5%. The minimum difference necessary for statistical significance at p < 0.05 for a before/after comparison is shown to be 16%. Meanwhile, the U.S. Environmental Protection Agency has deemed single‐digit percentage decrements in FEV1 sufficient to justify more stringent national ambient air quality standards. Sensitivity analysis reveals that results are insensitive to intertest variability but highly sensitive to intratest variability. Halving the latter to 3% reduces measurement error by 55%. Increasing it to 9% or 12% increases measurement error by 65% or 125%, respectively. Within‐day FEV1 differences ≤5% among normal subjects are believed to be clinically insignificant. Therefore, many differences reported as statistically significant are likely to be artifactual. Reliable data are needed to estimate intratest variability for the general population, subpopulations of interest, and research samples. 
Sensitive subpopulations (e.g., chronic obstructive pulmonary disease or COPD patients, asthmatics, children) are likely to have higher intratest variability, making it more difficult to derive valid statistical inferences about differences observed after treatment or exposure.
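The measurement-error mechanism the abstract describes can be illustrated with a minimal Monte Carlo sketch. This is not the article's model: the true FEV1 value, the 150 mL repeatability criterion, and the early-termination rule below are simplified assumptions for illustration; only the intratest coefficient of variation (6%) and the eight-maneuver design come from the abstract.

```python
import random

def run_test(rng, true_fev1=4.0, intra_cv=0.06, n_maneuvers=8, rc=0.150):
    """One simulated spirometry test.

    Conventional protocol (simplified): terminate as soon as the two best
    maneuvers so far agree within the repeatability criterion rc (liters),
    and report the best maneuver obtained up to that point.
    Unconstrained protocol: run all n_maneuvers and report the best.
    """
    maneuvers, conventional = [], None
    for _ in range(n_maneuvers):
        maneuvers.append(rng.gauss(true_fev1, intra_cv * true_fev1))
        if conventional is None and len(maneuvers) >= 2:
            ordered = sorted(maneuvers)
            if ordered[-1] - ordered[-2] <= rc:
                conventional = ordered[-1]  # RC met: test terminates early
    if conventional is None:                # RC never met: best of all maneuvers
        conventional = max(maneuvers)
    return conventional, max(maneuvers)

rng = random.Random(42)
errors = []
for _ in range(20_000):
    conv, unc = run_test(rng)
    errors.append((unc - conv) / unc)  # relative measurement error, always >= 0

mean_error = sum(errors) / len(errors)
```

Because the conventional result is the maximum over a prefix of the maneuvers, the error is nonnegative by construction; its average magnitude is driven almost entirely by the intratest coefficient of variation, matching the sensitivity pattern the abstract reports.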
Risk Perception and Human Health Risk in Rural Communities Consuming Unregulated Well Water in Saskatchewan, Canada
Rural communities dependent on unregulated drinking water are potentially at increased health risk from exposure to contaminants. Perception of drinking water safety influences water consumption, exposure, and health risk. A community‐based participatory approach and probabilistic Bayesian methods were applied to integrate risk perception in a holistic human health risk assessment. Tap water arsenic concentrations and risk perception data were collected from two Saskatchewan communities. Drinking water health standards were exceeded in 67% (51/76) of households in Rural Municipality #184 (RM184) and 56% (25/45) in Beardy's and Okemasis First Nation (BOFN). There was no association between the presence of a health exceedance and risk perception. Households in RM184 or with an annual income >$50,000 were most likely to have in‐house water treatment. The probability of consuming tap water perceived as safe (92%) or not safe (0%) suggested that households in RM184 were unlikely to drink water perceived as not safe. The probability of drinking tap water perceived as safe (77%) or as not safe (11%) suggested that households in BOFN contradicted their perception and consumed water perceived as unsafe. Integration of risk perception lowered the adult incremental lifetime cancer risk by 3% to 1.3 × 10−5 (95% CI 8.4 × 10−8 to 9.0 × 10−5) for RM184 and to 8.9 × 10−6 (95% CI 2.2 × 10−7 to 5.9 × 10−5) for BOFN. The probability of exposure to arsenic concentrations exceeding the negligible cancer risk level of 1:100,000 was 23% for RM184 and 22% for BOFN.
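The "integration of risk perception" step amounts to weighting chronic exposure by the probability that a household actually drinks its tap water. A deterministic sketch of the standard ingestion-pathway incremental lifetime cancer risk (ILCR) calculation follows; the article used probabilistic Bayesian versions of these inputs, and all parameter values below are generic defaults, not the study's.

```python
def ilcr_arsenic(c_as, ir=1.5, ef=365, ed=70, bw=70, at_days=70 * 365,
                 csf=1.5, p_consume=1.0):
    """Incremental lifetime cancer risk for arsenic ingested in drinking water.

    c_as      : arsenic concentration in tap water (mg/L)
    ir        : water ingestion rate (L/day)
    ef, ed    : exposure frequency (days/yr) and duration (yr)
    bw        : body weight (kg)
    at_days   : averaging time (days)
    csf       : cancer slope factor ((mg/kg-day)^-1)
    p_consume : probability the household drinks its tap water, taken
                here from the risk-perception survey (the integration step)
    """
    cdi = (c_as * ir * ef * ed * p_consume) / (bw * at_days)  # chronic daily intake
    return cdi * csf

risk_unweighted = ilcr_arsenic(0.01)                # everyone assumed to drink tap water
risk_weighted = ilcr_arsenic(0.01, p_consume=0.92)  # perception-weighted exposure
```

Weighting by a consumption probability below 1 can only lower the point estimate, which is the direction of the adjustments reported for both communities.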
Conceptualizing, assessing, and managing disaster risks involve collecting and synthesizing pluralistic information—from natural, built, and human systems—to characterize disaster impacts and guide policy on effective resilience investments. Disaster research and practice, therefore, are highly complex and inherently interdisciplinary endeavors. Characterizing the uncertainties involved in interdisciplinary disaster research is imperative, since misrepresenting uncertainty can lead to myopic decisions and suboptimal societal outcomes. Efficacious disaster mitigation should, therefore, explicitly address the uncertainties associated with all stages of hazard modeling, preparation, and response. However, uncertainty assessment and communication in the context of interdisciplinary disaster research remain understudied. In this “Perspective” article, we argue that in harnessing interdisciplinary methods and diverse data types in disaster research, careful deliberations on assessing Type III and Type IV errors are imperative. Additionally, we discuss the pathologies in frequentist approaches, calling for an increasing role for Bayesian methods in uncertainty estimations. Moreover, we discuss the potential tradeoffs associated with information and uncertainty, calling for deliberate consideration of the role of diversity of information prior to setting the scope in interdisciplinary modeling. Future research guided by further reflections on the ideas raised in this article could help push the frontiers of uncertainty estimation in interdisciplinary hazard research and practice.
Benefit–cost analysis is widely used to evaluate alternative courses of action that are designed to achieve policy objectives. Although many analyses take uncertainty into account, they typically only consider uncertainty about cost estimates and physical states of the world, whereas uncertainty about individual preferences, thus the benefit of policy intervention, is ignored. Here, we propose a strategy to integrate individual uncertainty about preferences into benefit–cost analysis using societal preference intervals, which are ranges of values over which it is unclear whether society as a whole should accept or reject an option. To illustrate the method, we use preferences for implementing a smart grid technology to sustain critical electricity demand during a 24‐hour regional power blackout on a hot summer weekend. Preferences were elicited from a convenience sample of residents in Allegheny County, Pennsylvania. This illustrative example shows that uncertainty in individual preferences, when aggregated to form societal preference intervals, can substantially change society's decision. We conclude with a discussion of where preference uncertainty comes from, how it might be reduced, and why incorporating unresolved preference uncertainty into benefit–cost analyses can be important.
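The aggregation logic can be sketched with a toy decision rule: sum individual willingness-to-pay intervals into a societal preference interval, then compare that interval to the policy's cost. This is an illustrative simplification of the article's method, and the interval and cost values are hypothetical.

```python
def societal_decision(individual_intervals, cost):
    """Aggregate individual (low, high) willingness-to-pay ranges into a
    societal preference interval and compare it to the option's cost.

    Returns "accept" if even the lower bound of aggregate WTP covers the
    cost, "reject" if even the upper bound falls short, and
    "indeterminate" when the cost lies inside the interval -- the case
    where unresolved preference uncertainty changes society's decision.
    """
    low = sum(lo for lo, hi in individual_intervals)
    high = sum(hi for lo, hi in individual_intervals)
    if low >= cost:
        return "accept"
    if high <= cost:
        return "reject"
    return "indeterminate"

# Hypothetical WTP intervals (e.g., for a smart grid option), in dollars
intervals = [(10, 20), (5, 15), (0, 30)]  # societal interval: [15, 65]
```

A point-estimate analysis would compare a single aggregate WTP to the cost and always return accept or reject; the interval version makes the indeterminate region, and hence the value of reducing preference uncertainty, explicit.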
Managing the Risk of Aggressive Dog Behavior: Investigating the Influence of Owner Threat and Efficacy Perceptions
Aggressive behavior in pet dogs is a serious problem for dog owners across the globe, with bite injuries representing a serious risk to both people and other dogs. The effective management of aggressive behavior in dogs represents a challenging and controversial issue. Although positive reinforcement training methods are now considered to be the most effective and humane technique to manage the risk of aggression, punishment‐based methods continue to be used. Unfortunately, there has been little scientific study into the various factors influencing whether dog owners choose to use positive reinforcement techniques to manage aggression in their dogs. As such, current understanding of how best to encourage and support dog owners to use these methods remains extremely limited. This article uses a survey methodology based on protection motivation theory (PMT) to investigate the factors that influence owner use of positive reinforcement methods to manage aggressive behavior, in an attempt to understand potential barriers and drivers of use. In addition, the article provides an initial exploration of the potential role of wider psychological factors, including owner emotional state, social influence, and cognitive bias. Findings show that the perceived efficacy of positive reinforcement methods and the perceived ability of owners to effectively implement the technique are both key factors predicting future intentions and current reported use. Future interventions should focus on enhancing owner confidence in the effective use of positive reinforcement techniques across multiple scenarios, as well as helping owners manage their own emotional responses when they encounter challenging situations and setbacks.
This article models flood occurrence probabilistically and assesses the associated risk. It incorporates atmospheric parameters to forecast rainfall in an area. This measure of precipitation, together with river and ground parameters, serves as input to the model to predict runoff and, subsequently, the inundation depth of an area. The inundation depth acts as a guide for predicting flood proneness and associated hazard. Vulnerability to flood has been analyzed as social vulnerability (VS), vulnerability to property (VP), and vulnerability to the location in terms of awareness (VA). The associated risk has been estimated for each area. The distribution of risk values can be used to classify every area into one of six risk zones—namely, very low risk, low risk, moderately low risk, medium risk, high risk, and very high risk. Prioritization of preparedness, evacuation planning, or distribution of relief items should be guided by the range on the risk scale within which the area under study falls. The flood risk assessment model framework has been tested on a real‐life case study, and the flood risk indices for each of the municipalities in the study area have been calculated. The risk indices, and hence the flood risk zone under which a municipality is expected to lie, would alter every day. The appropriate authorities can then plan ahead in terms of preparedness to combat the impending flood situation in the most critical and vulnerable areas.
This article estimates the value of a statistical life (VSL) for Chile under the hedonic wage method while accounting for individual risk preferences. Two alternative measures of risk aversion are used: first, risk aversion is measured directly using survey measures of preferences over hypothetical gambles; second, observed individual behaviors that may proxy for risk preferences, such as smoking status, are used. I reconcile the results with a theoretical model of economic behavior that predicts how the wage‐risk tradeoff changes as risk aversion differs across individuals. The VSL estimates range between 0.61 and 8.68 million dollars. The results using smoking behavior as a proxy for risk attitudes are consistent with previous findings. However, directly measuring risk aversion corrects the wage‐risk tradeoff estimation bias in the opposite direction. The results are robust to other observed measures of risk aversion, such as drinking behavior and stock investments. Consistent with the literature connecting smoking behavior with labor market outcomes, the results suggest that smoking status may capture a poor‐health productivity effect in addition to pure risk preferences.
The concept of “resilience analytics” has recently been proposed as a means to leverage the promise of big data to improve the resilience of interdependent critical infrastructure systems and the communities supported by them. Given recent advances in machine learning and other data‐driven analytic techniques, as well as the prevalence of high‐profile natural and man‐made disasters, the temptation to pursue resilience analytics without question is almost overwhelming. Indeed, we find big data analytics capable of supporting resilience to rare, situational surprises captured in analytic models. Nonetheless, this article examines the efficacy of resilience analytics by answering a single motivating question: Can big data analytics help cyber–physical–social (CPS) systems adapt to surprise? This article explains the limitations of resilience analytics when critical infrastructure systems are challenged by fundamental surprises never conceived during model development. In these cases, adoption of resilience analytics may prove either useless for decision support or harmful by increasing dangers during unprecedented events. We demonstrate that these dangers are not limited to a single CPS context by highlighting the limits of analytic models during hurricanes, dam failures, blackouts, and stock market crashes. We conclude that resilience analytics alone are not able to adapt to the very events that motivate their use and may, ironically, make CPS systems more vulnerable. We present avenues for future research to address this deficiency, with emphasis on improvisation to adapt CPS systems to fundamental surprise.
Do Interactions Between Environmental Chemicals and the Human Microbiome Need to Be Considered in Risk Assessments?
One of the most dynamic and fruitful areas of current health‐related research concerns the various roles of the human microbiome in disease. Evidence is accumulating that interactions between substances in the environment and the microbiome can affect risks of disease, in both beneficial and adverse ways. Although most of the research has concerned the roles of diet and certain pharmaceutical agents, there is increasing interest in the possible roles of environmental chemicals. Chemical risk assessment has, to date, not included consideration of the influence of the microbiome. We suggest that failure to consider the possible roles of the microbiome could lead to significant error in risk assessment results. Our purpose in this commentary is to summarize some of the evidence supporting our hypothesis and to urge the risk assessment community to begin considering and influencing how results from microbiome‐related research could be incorporated into chemical risk assessments. An additional emphasis in our commentary concerns the distinct possibility that research on chemical–microbiome interactions will also reduce some of the significant uncertainties that accompany current risk assessments. Of particular interest is evidence suggesting that the microbiome has an influence on variability in disease risk across populations and (of particular interest to chemical risk) in animal and human responses to chemical exposure. The possible explanatory power of the microbiome regarding sources of variability could reduce what might be the most significant source of uncertainty in chemical risk assessment.
The relatively high failure rates, with important consequences in many cases, suggest that the implicitly acceptable risk levels corresponding to temporary civil engineering structures and activities might exceed the bounds of normally acceptable levels associated with different societal activities. Among other reasons, this may be attributed to the lack of a rational approach for assessing the risks associated with the different technologies supporting these activities in general, and structures in particular. There is a need to establish appropriate target reliability levels for structures under temporary use, taking into account specific circumstances such as reduced risk exposure times. This article addresses that need. Acceptance criteria for building‐structure‐related risks to persons obtained in prior studies are adapted to the special circumstances of nonpermanent risk exposure. The general principle followed is to maintain the same risk levels per time unit as for permanently occupied buildings. The adaptation is based on the statistical annual fatality rate, a life safety risk metric that allows for a consistent comparison of risks across different societal activities and technologies. It is shown that the target reliability indices accounting for the temporary use of buildings might be significantly higher than the values suggested for permanently used structures.
In this review, recent methodological developments for the benchmark dose (BMD) methodology are summarized. Specifically, we introduce the advances for the main steps in BMD derivation: selecting the procedure for defining a BMD from a predefined benchmark response (BMR), setting a BMR, selecting a dose–response model, and estimating the corresponding BMD lower limit (BMDL). Although the last decade has shown major progress in the development of BMD methodology, there is still room for improvement. Remaining challenges are the implementation of new statistical methods in user‐friendly software and the lack of consensus about how to derive the BMDL.
Many studies in the field of risk perception and acceptance of hazards include trust as an explanatory variable. Despite this, the importance of trust has often been questioned. The relevant issue is not only whether trust is crucial but also the form of trust that people rely on in a given situation. In this review, I discuss various trust models and the relationship between trust and affect heuristics. I conclude that the importance of trust varies by hazard and respondent group. Most of the studies use surveys that provide limited information about causality. Future research should focus more on experiments that test whether trust is a consequence of people's attitudes or influences their attitudes toward a technology. Furthermore, there is a need for a better understanding about the factors that determine which heuristics people rely on when evaluating hazards.
Probabilistic risk assessment (PRA) is a useful tool to assess complex interconnected systems. This article leverages the capabilities of PRA tools developed for industrial and nuclear risk analysis in community resilience evaluations by modeling the food security of a community in terms of its built environment as an integrated system. To this end, we model the performance of Gilroy, CA, a moderate‐size town, with regard to disruptions in its food supply caused by a severe earthquake. The food retailers of Gilroy, along with the electrical power network, water network elements, and bridges are considered as components of a system. Fault and event trees are constructed to model the requirements for continuous food supply to community residents and are analyzed efficiently using binary decision diagrams (BDDs). The study also identifies shortcomings in approximate classical system analysis methods in assessing community resilience. Importance factors are utilized to rank the importance of various factors to the overall risk of food insecurity. Finally, the study considers the impact of various sources of uncertainties in the hazard modeling and performance of infrastructure on food security measures. The methodology can be applicable for any existing critical infrastructure system and has potential extensions to other hazards.
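For independent basic events, a fault-tree calculation of this kind reduces to composing AND/OR gate probabilities; BDDs make the same computation efficient without the independence shortcut used here. The tree structure and component failure probabilities below are hypothetical illustrations, not values from the Gilroy study.

```python
def gate_or(*ps):
    """Top-event probability of an OR gate over independent basic events."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def gate_and(*ps):
    """Top-event probability of an AND gate over independent inputs."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical tree: a household is food-insecure if every retailer it can
# reach is unavailable; a retailer is unavailable if its building, its power
# feed, or its access bridge fails.
retailer_a = gate_or(0.20, 0.30, 0.10)   # building, power, bridge
retailer_b = gate_or(0.15, 0.30, 0.05)
p_food_insecure = gate_and(retailer_a, retailer_b)
```

Importance factors of the kind mentioned in the abstract can then be obtained by perturbing one basic-event probability at a time and observing the change in the top-event probability.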
Interventions Targeting Deep Tissue Lymph Nodes May Not Effectively Reduce the Risk of Salmonellosis from Ground Pork Consumption: A Quantitative Microbial Risk Assessment
The inclusion of deep tissue lymph nodes (DTLNs) or nonvisceral lymph nodes contaminated with Salmonella in wholesale fresh ground pork (WFGP) production may pose risks to public health. To assess the relative contribution of DTLNs to human salmonellosis occurrence associated with ground pork consumption and to investigate potential critical control points in the slaughter‐to‐table continuum for the control of human salmonellosis in the United States, a quantitative microbial risk assessment (QMRA) model was established. The model predicted an average of 45 cases of salmonellosis (95% CI = [19, 71]) per 100,000 Americans annually due to WFGP consumption. Sensitivity analysis of all stochastic input variables showed that cooking temperature was the most influential parameter for reducing salmonellosis cases associated with WFGP meals, followed by storage temperature and Salmonella concentration on contaminated carcass surface before fabrication. The input variables were grouped to represent three main factors along the slaughter‐to‐table chain influencing Salmonella doses ingested via WFGP meals: DTLN‐related factors, factors at processing other than DTLNs, and consumer‐related factors. The evaluation of the impact of each group of factors by second‐order Monte Carlo simulation showed that DTLN‐related factors had the lowest impact on the risk estimate among the three groups of factors. These findings indicate that interventions to reduce Salmonella contamination in DTLNs or to remove DTLNs from WFGP products may be less critical for reducing human infections attributable to ground pork than improving consumers’ cooking habits or interventions of carcass decontamination at processing.
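The consumer-phase end of such a QMRA chains cooking inactivation into a dose-response model, which is why cooking temperature dominates the sensitivity analysis. A minimal sketch follows, using the beta-Poisson approximation commonly applied to *Salmonella*; the parameter values and doses are illustrative defaults, not the article's calibrated inputs.

```python
def surviving_dose(initial_cfu, log10_reduction):
    """Cells surviving cooking, given a log10 reduction
    (itself a function of cooking time and temperature)."""
    return initial_cfu * 10.0 ** (-log10_reduction)

def p_illness(dose, alpha=0.1324, beta=51.45):
    """Approximate beta-Poisson dose-response: P(ill) = 1 - (1 + dose/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Undercooked serving (2-log reduction) vs. thoroughly cooked (7-log reduction)
p_under = p_illness(surviving_dose(1e5, 2))     # 1,000 surviving cells
p_thorough = p_illness(surviving_dose(1e5, 7))  # ~0.01 surviving cells
```

Because the log reduction enters the exposure as a power of ten, even modest changes in cooking practice shift the ingested dose by orders of magnitude, swamping upstream interventions such as DTLN removal in this simplified chain.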
Evacuating residents out of affected areas is an important strategy for mitigating the impact of natural disasters. However, the resulting abrupt increase in travel demand during evacuation causes severe congestion across the transportation system, thereby interrupting other commuters' regular activities. In this article, a bilevel mathematical optimization model is formulated to address this issue, and our research objective is to maximize the transportation system resilience and restore its performance through two network reconfiguration schemes: contraflow (also referred to as lane reversal) and crossing elimination at intersections. Mathematical models are developed to represent the two reconfiguration schemes and characterize the interactions between traffic operators and passengers. Specifically, traffic operators act as leaders to determine the optimal system reconfiguration to minimize the total travel time for all the users (both evacuees and regular commuters), while passengers act as followers by freely choosing the path with the minimum travel time, which eventually converges to a user equilibrium state. For each given network reconfiguration, the lower‐level problem is formulated as a traffic assignment problem (TAP) where each user tries to minimize his/her own travel time. To tackle the lower‐level optimization problem, a gradient projection method is leveraged to shift the flow from other nonshortest paths to the shortest path between each origin–destination pair, eventually converging to the user equilibrium traffic assignment. The upper‐level problem is formulated as a constrained discrete optimization problem, and a probabilistic solution discovery algorithm is used to obtain the near‐optimal solution. Two numerical examples are used to demonstrate the effectiveness of the proposed method in restoring the traffic system performance.
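The lower-level user-equilibrium condition, that all used routes have equal travel time, can be illustrated on a two-route toy network. Bisection stands in for the gradient projection method here for brevity, and the linear travel-time functions and demand value are hypothetical.

```python
def t_route1(f):
    """Travel time on route 1 (minutes), increasing in its flow f."""
    return 1.0 + 0.1 * f

def t_route2(f):
    """Travel time on route 2 (minutes), increasing in its flow f."""
    return 1.5 + 0.05 * f

def equilibrium_split(demand=10.0, t1=t_route1, t2=t_route2, iters=100):
    """Find the user-equilibrium flow on route 1 by bisection: at equilibrium,
    both used routes have equal travel time, so no user can improve by switching."""
    lo, hi = 0.0, demand
    for _ in range(iters):
        f1 = (lo + hi) / 2
        if t1(f1) < t2(demand - f1):
            lo = f1   # route 1 still faster: shift more flow onto it
        else:
            hi = f1
    return (lo + hi) / 2
```

A network reconfiguration such as contraflow changes the travel-time functions themselves (e.g., adding capacity to one route), and the upper-level problem searches over such reconfigurations while this equilibrium computation is repeated inside each evaluation.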
We used an agent‐based modeling (ABM) framework and developed a mathematical model to explain the complex dynamics of microbial persistence and spread within a food facility and to aid risk managers in identifying effective mitigation options. The model explicitly considered personal hygiene practices by food handlers as well as their activities and simulated a spatially explicit dynamic system representing complex interaction patterns among food handlers, facility environment, and foods. To demonstrate the utility of the model in a decision‐making context, we created a hypothetical case study and used it to compare different risk mitigation strategies for reducing contamination and spread of Listeria monocytogenes in a food facility. Model results indicated that areas with no direct contact with foods (e.g., loading dock and restroom) can serve as contamination niches and recontaminate areas that have direct contact with food products. Furthermore, food handlers’ behaviors, including, for example, hygiene and sanitation practices, can impact the persistence of microbial contamination in the facility environment and the spread of contamination to prepared foods. Using this case study, we also demonstrated benefits of an ABM framework for addressing food safety in a complex system in which emergent system‐level responses are predicted using a bottom‐up approach that observes individual agents (e.g., food handlers) and their behaviors. Our model can be applied to a wide variety of pathogens, food commodities, and activity patterns to evaluate efficacy of food‐safety management practices and quantify contamination reductions associated with proposed mitigation strategies in food facilities.
The transition to semiautonomous driving is set to considerably reduce road accident rates as human error is progressively removed from the driving task. Concurrently, autonomous capabilities will transform the transportation risk landscape and significantly disrupt the insurance industry. Semiautonomous vehicle (SAV) risks will begin to alternate between human error and technological susceptibilities. The evolving risk landscape will force a departure from traditional risk assessment approaches that rely on historical data to quantify insurable risks. This article investigates the risk structure of SAVs and employs a telematics‐based anomaly detection model to assess split risk profiles. An unsupervised multivariate Gaussian (MVG) based anomaly detection method is used to identify abnormal driving patterns based on accelerometer and GPS sensors of manually driven vehicles. Parameters are inferred for vehicles equipped with semiautonomous capabilities and the resulting split risk profile is determined. The MVG approach allows for the quantification of vehicle risks by the relative frequency and severity of observed anomalies and a location‐based risk analysis is performed for a more comprehensive assessment. This approach contributes to the challenge of quantifying SAV risks and the methods employed here can be applied to evolving data sources pertinent to SAVs. Utilizing the vast amounts of sensor‐generated data will enable insurers to proactively reassess the collective performances of both the artificial driving agent and human driver.
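An MVG anomaly detector of the kind described can be sketched with a diagonal-covariance Gaussian density fitted to "normal" driving features (e.g., longitudinal and lateral acceleration). The toy data and threshold below are illustrative assumptions, not the article's telematics data or calibrated parameters.

```python
import math

def fit_mvg(X):
    """Fit an independent-feature multivariate Gaussian (diagonal covariance)."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    var = [sum((row[j] - mu[j]) ** 2 for row in X) / n for j in range(d)]
    return mu, var

def density(x, mu, var):
    """Gaussian density of x under the fitted diagonal-covariance model."""
    p = 1.0
    for xi, m, v in zip(x, mu, var):
        p *= math.exp(-(xi - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return p

def is_anomaly(x, mu, var, eps=1e-3):
    """Flag x as an abnormal driving event if its density falls below eps."""
    return density(x, mu, var) < eps

# Toy "normal driving" samples: (longitudinal, lateral) acceleration in g
X = [[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [0.0, 0.1], [0.0, -0.1]]
mu, var = fit_mvg(X)
```

Risk profiles then follow from the relative frequency of flagged anomalies per vehicle and their severity (how far below the threshold the density falls), which is how the split between human-driven and semiautonomous operation could be compared.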
Evaluating Potential Distribution of High‐Risk Aquatic Invasive Species in the Water Garden and Aquarium Trade at a Global Scale Based on Current Established Populations
Aquatic non‐native invasive species are commonly traded in the worldwide water garden and aquarium markets, and some of these species pose major threats to the economy, the environment, and human health. Understanding the potential suitable habitat for these species at a global scale and at regional scales can inform risk assessments and predict future potential establishment. Typically, global habitat suitability models are fit for freshwater species with only climate variables, which provides little information about suitable terrestrial conditions for aquatic species. Remotely sensed data including topography and land cover data have the potential to improve our understanding of suitable habitat for aquatic species. In this study, we fit species distribution models using five different model algorithms for three non‐native aquatic invasive species with bioclimatic, topographic, and remotely sensed covariates to evaluate potential suitable habitat beyond simple climate matches. The species examined included a frog (Xenopus laevis), toad (Bombina orientalis), and snail (Pomacea spp.). Using a unique modeling approach for each species including background point selection based on known established populations resulted in robust ensemble habitat suitability models. All models for all species had test area under the receiver operating characteristic curve values greater than 0.70 and percent correctly classified values greater than 0.65. Importantly, we employed multivariate environmental similarity surface maps to evaluate potential extrapolation beyond observed conditions when applying models globally. These global models provide necessary forecasts of where these aquatic invasive species have the potential for establishment outside their native range, a key component in risk analyses.
Risk Information Seeking and Processing About Particulate Air Pollution in South Korea: The Roles of Cultural Worldview
This study integrates cultural theory of risk into the risk information seeking and processing model in the context of particulate air pollution in South Korea. Specifically, it examines how cultural worldviews (hierarchy, individualism, egalitarianism, and fatalism) influence the way people interpret an environmental risk, which may in turn promote or deter their information seeking and processing about the risk. An online survey (N = 645) showed that egalitarianism was positively associated with perceptions of societal and personal risks, affective responses toward the risk, and informational subjective norms. Perceived societal risk, in particular, mediated the effect of egalitarianism on information insufficiency. Moreover, cultural worldview was a significant moderator of the relationships between information insufficiency and risk information seeking and processing. The positive relationship between information insufficiency and information seeking grew stronger with increasing egalitarianism. In contrast, the negative relationship between information insufficiency and heuristic processing was strengthened with increasing hierarchy. This study extends prior theories and models in risk communication by addressing the roles of cultural worldview, an important individual difference factor in interpreting environmental risks.
Living Well in Times of Threat: The Importance of Adjustment Processes to Explain Functional Adaptation to Uncertain Security in Expatriates Deployed in the Sudan
The present study investigated expatriate humanitarian aid workers’ perceptions of and responses to uncertain security while deployed in the Sudan. Interviews conducted in Khartoum (n = 7) and Darfur (n = 17) focused on risk perception, concern for personal security, and strategies used to function well in an insecure environment. Despite a high perceived general risk, as well as broad knowledge and experience with security incidents, participants often expressed low concern. General adjustment processes were drawn on to explain this finding, with different constellations of processes resulting in different patterns of adjustment. Functional adjustment, resulting in adequate risk perception, protective behavior, and low concern, was characterized by a constellation of complementary activation of accommodation and assimilation processes.