Risk Analysis: An International Journal
Societies worldwide are investing considerable resources into the safe development and use of nanomaterials. Although each of these protective efforts is crucial for governing the risks of nanomaterials, they are insufficient in isolation. What is missing is a more integrative governance approach that goes beyond legislation. Development of this approach must be evidence based and involve key stakeholders to ensure acceptance by end users. The challenge is to develop a framework that coordinates the variety of actors involved in nanotechnology and civil society to facilitate consideration of the complex issues that occur in this rapidly evolving research and development area. Here, we propose three sets of essential elements required to generate an effective risk governance framework for nanomaterials. (1) Advanced tools to facilitate risk-based decision making, including an assessment of the needs of users regarding risk assessment, mitigation, and transfer. (2) An integrated model of predicted human behavior and decision making concerning nanomaterial risks. (3) Legal and other (nano-specific and general) regulatory requirements to ensure compliance and to stimulate proactive approaches to safety. The implementation of such an approach should facilitate and motivate good practice for the various stakeholders to allow the safe and sustainable future development of nanotechnology.
Quantitative risk analysis is being extensively employed to support policymakers and provides a strong conceptual framework for evaluating decision alternatives under uncertainty. Many problems involving environmental risks are, however, of a spatial nature, i.e., containing spatial impacts, spatial vulnerabilities, and spatial risk-mitigation alternatives. Recent developments in multicriteria spatial analysis have enabled the assessment and aggregation of multiple impacts, supporting policymakers in spatial evaluation problems. However, recent attempts to conduct spatial multicriteria risk analysis have generally been weakly conceptualized, without adequate roots in quantitative risk analysis. Moreover, assessments of spatial risk often neglect the multidimensional nature of spatial impacts (e.g., social, economic, human) that typically occur in such decision problems. The aim of this article is therefore to suggest a conceptual quantitative framework for environmental multicriteria spatial risk analysis based on expected multi-attribute utility theory. The framework proposes: (i) the formal assessment of multiple spatial impacts; (ii) the aggregation of these multiple spatial impacts; (iii) the assessment of spatial vulnerabilities and probabilities of occurrence of adverse events; (iv) the computation of spatial risks; (v) the assessment of spatial risk mitigation alternatives; and (vi) the design and comparison of spatial risk mitigation alternatives (e.g., reductions of vulnerabilities and/or impacts). We illustrate the use of the framework in practice with a case study based on a flood-prone area in northern Italy.
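The core of the framework, steps (i)–(iv), amounts to a per-cell expected-disutility calculation. The sketch below is a minimal illustration: the additive aggregation rule, the attribute weights, and all numerical values are assumptions chosen for exposition, not the authors' calibrated model.

```python
# Hypothetical sketch of steps (i)-(iv); names, weights, and the additive
# aggregation rule are illustrative assumptions, not the article's exact model.

def aggregate_impacts(impacts, weights):
    """(ii) Aggregate multiple spatial impacts with an additive multi-attribute value."""
    return sum(w * v for w, v in zip(weights, impacts))

def spatial_risk(p_event, vulnerability, impacts, weights):
    """(iv) Expected disutility of one cell: P(event) x vulnerability x aggregated impact."""
    return p_event * vulnerability * aggregate_impacts(impacts, weights)

# One grid cell in a flood-prone area: social, economic, human impact scores in [0, 1].
risk = spatial_risk(p_event=0.02, vulnerability=0.6,
                    impacts=[0.4, 0.7, 0.2], weights=[0.3, 0.5, 0.2])
print(round(risk, 5))
```

Comparing mitigation alternatives, steps (v)–(vi), would then reduce to recomputing this quantity over the grid with lowered vulnerabilities or impacts.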
We developed a simulation model for quantifying the spatio-temporal distribution of contaminants (e.g., xenobiotics) and assessing the risk of exposed populations at the landscape level. The model is a spatio-temporal exposure-hazard model based on (i) tools of stochastic geometry (marked polygon and point processes) for structuring the landscape and describing the exposed individuals, (ii) a dispersal kernel describing the dissemination of contaminants from polygon sources, and (iii) an (eco)toxicological equation describing the toxicokinetics and dynamics of contaminants in affected individuals. The model was implemented in the briskaR package (biological risk assessment with R) of the R software. This article presents the model background, the use of the package in an illustrative example, namely, the effect of genetically modified maize pollen on nontarget Lepidoptera, and typical comparisons of landscape configurations that can be carried out with our model (different configurations lead to different mortality rates in the treated example). In real case studies, parameters and parametric functions encountered in the model will have to be precisely specified to obtain realistic measures of risk and impact and accurate comparisons of landscape configurations. Our modeling framework could be applied to study other risks related to agriculture, for instance, pathogen spread in crops or livestock, and could be adapted to cope with other hazards such as toxic emissions from industrial areas having health effects on surrounding populations. Moreover, the R package has the potential to help risk managers in running quantitative risk assessments and testing management strategies.
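The exposure step of such a model, in which emissions from source polygons are weighted by a dispersal kernel, can be illustrated with a language-agnostic toy (the briskaR API itself is not reproduced here). An exponential kernel and point sources stand in for the package's marked polygon processes; the kernel form and all parameter values are assumptions.

```python
import math

# Toy illustration of a kernel-based exposure step: contaminant load at an
# individual's location as emission from each source, weighted by a 2-D
# exponential dispersal kernel. Not the briskaR implementation.

def exp_kernel(distance, scale):
    """Exponential dispersal kernel (unnormalized): weight decays with distance."""
    return math.exp(-distance / scale)

def exposure_at(x, y, sources, scale):
    """Sum source emissions weighted by the kernel of Euclidean distance."""
    total = 0.0
    for sx, sy, emission in sources:
        d = math.hypot(x - sx, y - sy)
        total += emission * exp_kernel(d, scale)
    return total

# Two maize-field sources (x, y, pollen emission); a nontarget individual at the origin.
sources = [(10.0, 0.0, 100.0), (0.0, 50.0, 200.0)]
print(round(exposure_at(0.0, 0.0, sources, scale=25.0), 2))
```

In the full model this exposure would then feed the (eco)toxicological equation governing toxicokinetics in affected individuals.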
Aging and Cardiometabolic Risk in European HEMS Pilots: An Assessment of Occupational Old-Age Limits as a Regulatory Risk Management Strategy
Old-age limits are imposed in some occupations in an effort to ensure public safety. In aviation, the “Age 60 Rule” limits permissible flight operations conducted by pilots aged 60 and over. Using a retrospective cohort design, we assessed this rule's validity by comparing age-related change rates of cardiometabolic incapacitation risk markers in European helicopter emergency medical service (HEMS) pilots near age 60 with those in younger pilots. Specifically, individual clinical, laboratory, and electrocardiogram (ECG)-based risk markers and an overall cardiovascular event risk score were determined from aeromedical examination records of 66 German, Austrian, Polish, and Czech HEMS pilots (average follow-up 8.52 years). Risk marker change rates were assessed using linear mixed models and generalized additive models. Body mass index increases over time were slower in pilots near age 60 compared to younger pilots, and fasting glucose levels increased only in the latter. Whereas the lipid profile remained unchanged in the latter, it improved in the former. An ECG-based arrhythmia risk marker increased in younger pilots, which persisted in the older pilots. Six-month risk of a fatal cardiovascular event (in or out of cockpit) was estimated between 0% and 0.3%. Between 41% and 95% of risk marker variability was due to unexplained time-stable between-person differences. To conclude, the cardiometabolic risk marker profile of HEMS pilots appears to improve over time in pilots near age 60, compared to younger pilots. Given large stable interindividual differences, we recommend individualized risk assessment of HEMS pilots near age 60 instead of general grounding.
Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions.
Using the CAUSE Model to Understand Public Communication about Water Risks: Perspectives from Texas Groundwater District Officials on Drought and Availability
Public communication about drought and water availability risks poses challenges to a potentially uninterested public. Water management professionals, though, have a responsibility to work with the public to engage in communication about water and environmental risks. Because little research in water management examines organizational communication practices and perceptions, investigating current applications of these risk communication efforts can yield insights for both research and practice. Guided by the CAUSE model, which explains common goals in communicating risk information to the public (e.g., creating Confidence, generating Awareness, enhancing Understanding, gaining Satisfaction, and motivating Enactment), semistructured interviews of professionals (N = 25) employed by Texas groundwater conservation districts were conducted. The interviews examined how CAUSE model considerations factor into communication about drought and water availability risks. These data suggest that many work to build constituents’ confidence in their districts. Although audiences and constituents living in drought-prone areas were reported as being engaged with water availability risks and solutions, many district officials noted constituents’ lack of perceived risk and engagement. Some managers also indicated that public understanding was secondary to their primary responsibilities and that the public often seemed apathetic about technical details related to water conservation risks. Overall, results suggest complicated dynamics between officials and the public regarding information access and motivation. The article also outlines extensions of the CAUSE model and implications for improving public communication about drought and water availability risks.
Spatially Representing Vulnerability to Extreme Rain Events Using Midwestern Farmers’ Objective and Perceived Attributes of Adaptive Capacity
Potential climate-change-related impacts to agriculture in the upper Midwest pose serious economic and ecological risks to the U.S. and the global economy. On a local level, farmers are at the forefront of responding to the impacts of climate change. Hence, it is important to understand how farmers and their farm operations may be more or less vulnerable to changes in the climate. A vulnerability index is a tool commonly used by researchers and practitioners to represent the geographical distribution of vulnerability in response to global change. Most vulnerability assessments measure objective adaptive capacity using secondary data collected by governmental agencies. However, other scholarship on human behavior has noted that sociocultural and cognitive factors, such as risk perceptions and perceived capacity, are consequential for modulating people's actual vulnerability. Thus, traditional assessments can potentially overlook people's subjective perceptions of changes in climate and extreme weather events and the extent to which people feel prepared to take necessary steps to cope with and respond to the negative effects of climate change. This article addresses this knowledge gap by: (1) incorporating perceived adaptive capacity into a vulnerability assessment; (2) using spatial smoothing to aggregate individual-level vulnerabilities to the county level; and (3) evaluating the relationships among different dimensions of adaptive capacity to examine whether perceived capacity should be integrated into vulnerability assessments. The results suggest that vulnerability assessments that rely only on objective measures might miss important sociocognitive dimensions of capacity. Vulnerability indices and maps presented in this article can inform engagement strategies for improving environmental sustainability in the region.
The performance of fire protection measures plays a key role in the prevention and mitigation of fire escalation (fire domino effect) in process plants. In addition to passive and active safety measures, the intervention of firefighting teams can have a great impact on fire propagation. In the present study, we have demonstrated an application of a dynamic Bayesian network to the modeling and safety assessment of the fire domino effect in oil terminals while considering the effect of safety measures in place. The results of the developed dynamic Bayesian network—prior and posterior probabilities—have been combined with information theory, in the form of mutual information, to identify optimal firefighting strategies, especially when the number of fire trucks is not sufficient to handle all the vessels in danger.
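The ranking criterion named above, mutual information, can be illustrated with a minimal computation over a joint distribution. The joint table below (truck assignment vs. fire escalation) is invented for exposition and is not drawn from the article's Bayesian network.

```python
import math

# Illustrative mutual-information computation:
# I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
# The joint table is hypothetical, not from the developed network.

def mutual_information(joint):
    """joint[i][j] = P(X=i, Y=j); returns I(X;Y) in bits."""
    px = [sum(row) for row in joint]            # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]      # marginal of Y (columns)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# X: fire truck assigned to a vessel (yes/no); Y: fire escalates (yes/no).
joint = [[0.05, 0.45],   # assigned
         [0.25, 0.25]]   # not assigned
print(round(mutual_information(joint), 4))
```

A strategy whose assignment variable carries more mutual information with escalation is, on this criterion, the more informative place to commit a scarce truck.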
Public Response to a Near-Miss Nuclear Accident Scenario Varying in Causal Attributions and Outcome Uncertainty
Many studies have investigated public reactions to nuclear accidents. However, few have focused on the more common events in which a serious accident could have happened but did not. This study evaluated public response (emotional, cognitive, and behavioral) over three phases of a near-miss nuclear accident. Simulating a loss-of-coolant accident (LOCA) scenario, we manipulated (1) attribution for the initial cause of the incident (software failure vs. cyber terrorist attack vs. earthquake), (2) attribution for halting the incident (fail-safe system design vs. an intervention by an individual expert vs. a chance coincidence), and (3) level of uncertainty (certain vs. uncertain) about risk of a future radiation leak after the LOCA is halted. A total of 773 respondents were sampled using a 3 × 3 × 2 between-subjects design. Results from both MANCOVA and structural equation modeling (SEM) indicate that respondents experienced more negative affect, perceived more risk, and expressed more avoidance behavioral intention when the near-miss event was initiated by an externally attributed source (e.g., earthquake) compared to an internally attributed source (e.g., software failure). Similarly, respondents also indicated greater negative affect, perceived risk, and avoidance behavioral intentions when the future impact of the near-miss incident on people and the environment remained uncertain. Results from SEM analyses also suggested that negative affect predicted risk perception, and both predicted avoidance behavior. Affect, risk perception, and avoidance behavior demonstrated high stability (i.e., reliability) from one phase to the next.
In assessing environmental health risks, the risk characterization step synthesizes information gathered in evaluating exposures to stressors together with dose–response relationships, characteristics of the exposed population, and external environmental conditions. This article summarizes key steps of a cumulative risk assessment (CRA) followed by a discussion of considerations for characterizing cumulative risks. Cumulative risk characterizations differ considerably from single-chemical- or single-source-based risk characterizations. First, CRAs typically focus on a specific population instead of a pollutant or pollutant source and should include an evaluation of all relevant sources contributing to the exposures in the population and other factors that influence dose–response relationships. Second, CRAs may include influential environmental and population-specific conditions, involving multiple chemical and nonchemical stressors. Third, a CRA could examine multiple health effects, reflecting joint toxicity and the potential for toxicological interactions. Fourth, the complexities often necessitate simplifying methods, including judgment-based and semi-quantitative indices that collapse disparate data into numerical scores. Fifth, because of the higher dimensionality and potentially large number of interactions, information needed to quantify risk is typically incomplete, necessitating an uncertainty analysis. Three approaches that could be used for characterizing risks in a CRA are presented: the multiroute hazard index, stressor grouping by exposure and toxicity, and indices for screening multiple factors and conditions. Other key roles of the risk characterization in CRAs are also described, mainly the translational aspect of including a characterization summary for lay readers (in addition to the technical analysis), and placing the results in the context of the likely risk-based decisions.
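Of the three approaches, the multiroute hazard index has the simplest form: a sum of route-specific hazard quotients, each the estimated dose divided by a route-specific reference dose. A minimal sketch, with invented doses and reference values:

```python
# Sketch of a multiroute hazard index: HI = sum over exposure routes of
# (estimated dose / route-specific reference dose). All values are illustrative.

def multiroute_hazard_index(doses, reference_doses):
    """Hazard quotient per route, summed; HI > 1 flags potential concern."""
    return sum(d / rfd for d, rfd in zip(doses, reference_doses))

# Hypothetical oral, inhalation, and dermal doses vs. reference doses (mg/kg-day).
hi = multiroute_hazard_index(doses=[0.002, 0.001, 0.0005],
                             reference_doses=[0.01, 0.005, 0.01])
print(round(hi, 3))
```

The screening indices mentioned alongside it extend the same collapse-to-a-score logic to nonchemical stressors and population conditions.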
The basic assumptions of the Cox proportional hazards regression model are rarely questioned. This study addresses whether hazard ratio, i.e., relative risk (RR), estimates using the Cox model are biased when these assumptions are violated. We also investigated the dependence of RR estimates on temporal exposure characteristics, and how inadequate control for a strong, time-dependent confounder affects RRs for a modest, correlated risk factor. In a realistic cohort of 500,000 adults constructed using the National Cancer Institute Smoking History Generator, we used the Cox model with increasing control of smoking to examine the impact on RRs for smoking and a correlated covariate X. The smoking-associated RR was strongly modified by age. Pack-years of smoking did not sufficiently control for its effects; simultaneous control for effect modification by age and time-dependent cumulative exposure, exposure duration, and time since cessation improved model fit. Even then, residual confounding was evident in RR estimates for covariate X, for which spurious RRs ranged from 0.980 to 1.017 per unit increase. Use of the Cox model to control for a time-dependent strong risk factor yields unreliable RR estimates unless detailed, time-varying information is incorporated in analyses. Even so, residual confounding may bias estimated RRs for a modest risk factor.
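The hazard-ratio estimates under discussion are the maximizers of the Cox partial likelihood. A minimal sketch of that objective for one fixed covariate (no ties, no time-varying terms, invented data) shows what is being fit:

```python
import math

# Minimal Cox partial log-likelihood for a single fixed covariate, no ties,
# no time-varying terms; data are invented for illustration.

def cox_partial_loglik(beta, times, events, x):
    """Sum over event times of beta*x_i - log(sum over the risk set of exp(beta*x_j))."""
    ll = 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        if d:  # subject i had the event at time t
            risk_set = [math.exp(beta * xj)
                        for tj, xj in zip(times, x) if tj >= t]
            ll += beta * x[i] - math.log(sum(risk_set))
    return ll

times  = [2.0, 3.0, 5.0, 7.0]   # follow-up times
events = [1, 1, 0, 1]           # 1 = event, 0 = censored
x      = [1.0, 0.0, 1.0, 0.0]   # e.g., an exposure indicator
# The beta maximizing this function is the estimated log hazard ratio.
print(round(cox_partial_loglik(0.0, times, events, x), 4))
```

The article's point is that when a strong confounder enters this objective only as a crude summary (e.g., pack-years), the fitted hazard ratio for a correlated covariate absorbs the residual confounding.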
Should I Stay or Should I Go Now? Or Should I Wait and See? Influences on Wildfire Evacuation Decisions
As climate change has contributed to longer fire seasons and populations living in fire-prone ecosystems increase, wildfires have begun to affect a growing number of people. As a result, interest in understanding the wildfire evacuation decision process has increased. Of particular interest is understanding why some people leave early, some choose to stay and defend their homes, and others wait to assess conditions before making a final decision. Individuals who tend to wait and see are of particular concern given the dangers of late evacuation. To understand what factors might influence different decisions, we surveyed homeowners in three areas in the United States that recently experienced a wildfire. The Protective Action Decision Model was used to identify a suite of factors previously identified as potentially relevant to evacuation decisions. Our results indicate that different beliefs about the efficacy of a particular response or action (evacuating or staying to defend), differences in risk attitudes, and emphasis on different cues to act (e.g., official warnings, environmental cues) are key factors underlying different responses. Further, latent class analysis indicates there are two general classes of individuals: those inclined to evacuate and those inclined to stay, and that a substantial portion of each class falls into the wait and see category.
Although individual behavior plays a major role in community flood risk, traditional flood risk models generally do not capture information on how community policies and individual decisions impact the evolution of flood risk over time. The purpose of this study is to improve the understanding of the temporal aspects of flood risk through a combined analysis of the behavioral, engineering, and physical hazard aspects of flood risk. Additionally, the study aims to develop a new modeling approach for integrating behavior, policy, flood hazards, and engineering interventions. An agent-based model (ABM) is used to analyze the influence of flood protection measures, individual behavior, and the occurrence of floods and near-miss flood events on community flood risk. The ABM focuses on the following decisions and behaviors: dissemination of flood management information, installation of community flood protection, elevation of household mechanical equipment, and elevation of homes. The approach is place based, with a case study area in Fargo, North Dakota, but is focused on generalizable insights. Generally, community mitigation results in reduced future damage, and individual action, including mitigation and movement into and out of high-risk areas, can have a significant influence on community flood risk. The results of this study provide useful insights into the interplay between individual and community actions and how it affects the evolution of flood risk. This study lends insight into priorities for future work, including the development of more in-depth behavioral and decision rules at the individual and community level.
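A deliberately minimal version of such an ABM can illustrate the qualitative result that community losses fall as households adopt mitigation. The decision rule (half of the unmitigated households elevate after each flood), the damage values, and the deterministic flood schedule below are illustrative assumptions, not the study's calibrated Fargo model.

```python
# Toy flood-risk ABM: households elevate after experiencing a flood, so
# community damage declines over repeated floods. All numbers are illustrative.

def simulate(n_households=100, years=30, flood_every=5):
    mitigated = 0                      # count of elevated homes
    damage_by_year = []
    for year in range(1, years + 1):
        flood = (year % flood_every == 0)
        damage = 0.0
        if flood:
            # mitigated homes lose 0.2 units per flood, unmitigated homes 1.0
            damage = 0.2 * mitigated + 1.0 * (n_households - mitigated)
            # behavioral rule: half of the unmitigated households elevate afterward
            mitigated += (n_households - mitigated) // 2
        damage_by_year.append(damage)
    return damage_by_year

# Flood-year losses only, in chronological order.
print([round(d, 1) for d in simulate() if d > 0])
```

With this schedule, successive flood-year losses shrink monotonically as the mitigated share grows, mirroring the study's finding that individual mitigation reshapes community flood risk over time.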
Insights into Flood-Coping Appraisals of Protection Motivation Theory: Empirical Evidence from Germany and France
Protection motivation theory (PMT) has become a popular theory to explain the risk-reducing behavior of residents against natural hazards. PMT captures the two main cognitive processes that individuals undergo when faced with a threat, namely, threat appraisal and coping appraisal. The latter describes the evaluation of possible response measures that may reduce or avert the perceived threat. Although the coping appraisal component of PMT was found to be a better predictor of protective intentions and behavior, little is known about the factors that influence individuals’ coping appraisals of natural hazards. More insight into the flood-coping appraisals of PMT is therefore needed to better understand the decision-making process of individuals and to develop effective risk communication strategies. This study presents the results of two surveys among more than 1,600 flood-prone households in Germany and France. Five hypotheses were tested using multivariate statistics regarding factors related to flood-coping appraisals, which were derived from the PMT framework, related literature, and the literature on social vulnerability. We found that socioeconomic characteristics alone are not sufficient to explain flood-coping appraisals. Particularly, observational learning from the social environment, such as friends and neighbors, is positively related to flood-coping appraisals. This suggests that social norms and networks play an important role in flood-preparedness decisions. Providing risk and coping information can also have a positive effect. Given the strong positive influence of the social environment on flood-coping appraisals, future research should investigate how risk communication can be enhanced by making use of the observed social norms and network effects.
Potential Exposure and Cancer Risk from Formaldehyde Emissions from Installed Chinese Manufactured Laminate Flooring
Lumber Liquidators (LL) Chinese-manufactured laminate flooring (CLF) has been installed in >400,000 U.S. homes over the last decade. To characterize potential associated formaldehyde exposures and cancer risks, chamber emissions data were collected from 399 new LL CLF boards, and from LL CLF installed in 899 homes in which measured aggregate indoor formaldehyde concentrations exceeded 100 μg/m3 from a total of 17,867 homes screened. Data from both sources were combined to characterize LL CLF flooring-associated formaldehyde emissions from new boards and installed boards. New flooring had an average (±SD) emission rate of 61.3 ± 52.1 μg/m2-hour; boards installed for more than one year had approximately threefold lower emission rates. Estimated emission rates for the 899 homes and corresponding data from questionnaires were used as inputs to a single-compartment, steady-state mass-balance model to estimate corresponding residence-specific TWA formaldehyde concentrations and potential resident exposures. Only ∼0.7% of those homes had estimated acute formaldehyde concentrations >100 μg/m3 immediately after LL CLF installation. The TWA daily formaldehyde inhalation exposure within the 899 homes was estimated to be 17 μg/day using California Proposition 65 default methods to extrapolate cancer risk (below the regulation “no significant risk level” of 40 μg/day). Using a U.S. Environmental Protection Agency linear cancer risk model, 50th and 95th percentile values of expected lifetime cancer risk for residents of these homes were estimated to be 0.33 and 1.2 per 100,000 exposed, respectively. Based on more recent data and verified nonlinear cancer risk assessment models, LL CLF formaldehyde emissions pose virtually no cancer risk to affected consumers.
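The single-compartment, steady-state mass-balance step can be sketched directly: indoor concentration equals emission rate times flooring area divided by the ventilation rate (air changes per hour times house volume), and daily inhaled dose is concentration times breathing rate. Every input value below, including the default breathing rate, is an illustrative placeholder rather than one of the study's measured inputs.

```python
# Sketch of a single-compartment, steady-state mass-balance exposure estimate.
# All numerical inputs are hypothetical, not the study's data.

def steady_state_concentration(emission_rate, area, ach, volume):
    """ug/m3: emission (ug/m2-hour) * area (m2) / [ACH (1/hour) * volume (m3)]."""
    return emission_rate * area / (ach * volume)

def daily_inhaled_dose(concentration, breathing_rate=20.0):
    """ug/day, assuming a placeholder adult breathing rate of 20 m3/day."""
    return concentration * breathing_rate

# Hypothetical home: aged-board emission 20 ug/m2-hour, 100 m2 of flooring,
# 0.5 air changes per hour, 400 m3 interior volume.
c = steady_state_concentration(emission_rate=20.0, area=100.0, ach=0.5, volume=400.0)
print(round(c, 1), round(daily_inhaled_dose(c), 1))
```

In the study, home-specific versions of these inputs came from the questionnaires, which is what made the 899 residence-specific TWA estimates possible.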
The National Weather Service has adopted warning polygons that more specifically indicate the risk area than its previous county-wide warnings. However, these polygons are not defined in terms of numerical strike probabilities (ps). To better understand people's interpretations of warning polygons, 167 participants were shown 23 hypothetical scenarios in one of three information conditions—polygon-only (Condition A), polygon + tornadic storm cell (Condition B), and polygon + tornadic storm cell + flanking nontornadic storm cells (Condition C). Participants judged each polygon's ps and reported the likelihood of taking nine different response actions. The polygon-only condition replicated the results of previous studies; ps was highest at the polygon's centroid and declined in all directions from there. The two conditions displaying storm cells differed from the polygon-only condition only in having ps just as high at the polygon's edge nearest the storm cell as at its centroid. Overall, ps values were positively correlated with expectations of continuing normal activities, seeking information from social sources, seeking shelter, and evacuating by car. These results indicate that participants make more appropriate ps judgments when polygons are presented in their natural context of radar displays than when they are presented in isolation. However, the fact that ps judgments had moderately positive correlations with both sheltering (a generally appropriate response) and evacuation (a generally inappropriate response) suggests that experiment participants experience the same ambivalence about these two protective actions as people threatened by actual tornadoes.