Risk Analysis: An International Journal
The Petroleum Safety Authority Norway (PSA‐N) has recently adopted a new definition of risk: “the consequences of an activity with the associated uncertainty.” The PSA‐N has also been using “deficient risk assessment” for some time as a basis for assigning nonconformities in audit reports. This creates an opportunity to study the link between risk perspective and risk assessment quality in a regulatory context, and, in the present article, we take a hard look at the term “deficient risk assessment” both normatively and empirically. First, we perform a conceptual analysis of how a risk assessment can be deficient in light of a particular risk perspective consistent with the new PSA‐N risk definition. Then, we examine the usage of the term “deficient” in relation to risk assessments in PSA‐N audit reports and classify the occurrences into a set of categories obtained from the conceptual analysis. At an overall level, we were able to identify which aspects of risk assessment the PSA‐N focuses on and where deficiencies are being identified in regulatory practice. A key observation is that agency officials approach the risk assessments in audits in diverse ways. Hence, we argue that improving the conceptual clarity of what the authorities characterize as “deficient” in relation to the uncertainty‐based risk perspective may contribute to the development of supervisory practices and ultimately strengthen the learning outcome of the audit reports.
Combining Quantitative Risk Assessment of Human Health, Food Waste, and Energy Consumption: The Next Step in the Development of the Food Cold Chain?
The preservation of perishable food via refrigeration in the supply chain is essential to extend shelf life and provide consumers with safe food. However, the electricity consumed in refrigeration has both an economic and an environmental impact. This study focuses on the cold chain of cooked ham, including transport, the supermarket cold room, the display cabinet, transport by the consumer, and the domestic refrigerator, and aims to predict the risk to human health associated with Listeria monocytogenes, the amount of food wasted due to the growth of spoilage bacteria, and the electrical consumption required to maintain product temperature through the cold chain. A set of eight intervention actions was tested to evaluate their impact on the three criteria. Results show that modifying the thermostat of the domestic refrigerator has a high impact on food safety and food waste and a limited impact on electrical consumption. Conversely, modifying the airflow rate in the display cabinet has a high impact on electrical consumption and a limited impact on food safety and food waste. A cost–benefit analysis approach and two multicriteria decision analysis methods were used to rank the intervention actions. These three methodologies show that setting the thermostat of the domestic refrigerator to 4 °C presents the best compromise between the three criteria. The impact of decision‐maker preferences (criteria weights) and the limitations of these three approaches are discussed. The approaches proposed by this study may be useful in decision making to evaluate the global impact of intervention actions in issues involving conflicting outputs.
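The multicriteria ranking step can be sketched as a simple weighted sum over normalized criteria. The intervention names, criterion scores, and weights below are hypothetical placeholders (not the study's data), and the study's actual MCDA methods are richer than this sketch:

```python
# Illustrative weighted-sum multicriteria ranking of cold-chain interventions.
# Scores and weights are invented for illustration; lower raw values are better.
interventions = {
    # (listeriosis risk, food waste, energy use)
    "thermostat_4C":   (0.2, 0.3, 0.9),
    "display_airflow": (0.8, 0.8, 0.2),
    "insulated_bag":   (0.5, 0.6, 0.7),
}
weights = (0.4, 0.3, 0.3)  # decision-maker preference over the three criteria

def normalize(column):
    """Min-max normalize one criterion column to [0, 1]."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in column]

names = list(interventions)
cols = [normalize([interventions[n][j] for n in names]) for j in range(3)]
scores = {n: sum(w * cols[j][i] for j, w in enumerate(weights))
          for i, n in enumerate(names)}
ranking = sorted(names, key=scores.get)  # best (lowest aggregate score) first
```

Changing `weights` shows the sensitivity to decision-maker preferences that the abstract discusses.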
This article describes how risk has been conceptualized in the business and organizational literature through four distinct transformations: from the techno‐scientific perspective to the cognitive, the social‐cultural, and, finally, the constructionist perspective. Each domain conceptualizes risk in different ways, as organizations have found risk difficult to understand and mitigate using the available risk management tools. Conceptualizing risk as sensemaking becomes relevant due to the complexity of the information available to the risk manager; coupled with time constraints, this means that risk managers increasingly rely on making sense of possible threats rather than on the accuracy of the information received. This shift offers four contributions to the current literature. First, it suggests that the role of risk management is shifting from being technical in nature to being about risk sensemaking, where the manager engages with the social and physical environment with the aim of acquiring cues that could indicate how future events will unfold. Second, a sensemaking perspective implies a shift in the use of risk management systems from being “containers” of knowledge about past risk events to lending legitimacy to the plausibility of the success of future decisions. Third, the role of the risk manager changes from managing individual risks to managing everything, using the available social networks and systems as indicators of future risk events. Finally, the risk manager and the systems he or she relies upon are regarded as a source of risk in themselves, as both act as gatekeepers for organizational risk decision making.
Detailed spatial representation of socioeconomic exposure and the related vulnerability to natural hazards has the potential to improve the quality and reliability of risk assessment outputs. We apply a spatially weighted dasymetric approach based on multiple ancillary data sets to downscale important socioeconomic variables and produce a grid data set for Italy that contains multilayered information about physical exposure, population, gross domestic product, and social vulnerability. We test the performance of our dasymetric approach against other spatial interpolation methods. Finally, we combine the grid data set with flood hazard estimates to exemplify an application for risk assessment purposes.
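The core dasymetric idea can be sketched in its simplest form, assuming one zonal total disaggregated by a single ancillary weight layer; the article's spatially weighted, multi-ancillary approach is more elaborate:

```python
# Minimal dasymetric downscaling sketch: a zonal total (e.g., municipal
# population) is allocated to grid cells in proportion to an ancillary
# weight layer (e.g., built-up area share). Names and numbers are illustrative.

def dasymetric_downscale(zone_total, ancillary_weights):
    """Allocate zone_total across cells proportionally to ancillary_weights.

    Cells with zero weight (e.g., water bodies) receive nothing, and the
    allocation is pycnophylactic: cell values sum back to the zone total.
    """
    total_weight = sum(ancillary_weights)
    if total_weight == 0:
        return [0.0] * len(ancillary_weights)
    return [zone_total * w / total_weight for w in ancillary_weights]

# Example: 10,000 inhabitants spread over 4 cells by built-up share.
cells = dasymetric_downscale(10_000, [0.0, 0.5, 0.3, 0.2])
```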
Contemporary studies conducted by the U.S. Army Corps of Engineers estimate probability distributions of flooding on the interior of ring levee systems by estimating surge exceedances at points along levee system boundaries, calculating the overtopping volumes generated by the resulting surge surface, and then passing those volumes of water through a drainage model to calculate interior flood depths. This approach may not accurately represent the exceedance probability of flood depths within the system interior; a storm producing 100‐year surge at one point is unlikely to simultaneously produce 100‐year surge levels everywhere around the system exterior. A conceptually preferred approach estimates the surge and waves associated with a large set of synthetic storms. Each storm is run through the interior model separately, and the resulting flood depths are weighted by a parameterized likelihood of each synthetic storm. This yields an empirical distribution of flood depths that accounts for geospatial variation in any individual storm's characteristics. This method can also better account for the probability of levee breaches or other system failures. The two methods can produce different estimates of flood depth exceedances and damage when applied to storm surge flooding in coastal Louisiana; even differences in flood depth exceedances of less than 0.2 m can produce large differences in projected damage. This article identifies and discusses the differences in estimated flood depths and damage produced by each method within multiple Louisiana protection systems. The novel storm‐by‐storm approach represents a step toward enabling risk‐based design standards.
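The storm-set method's weighted empirical exceedance can be sketched as follows; the depths and storm weights are invented for illustration, not taken from the study:

```python
# Weighted empirical exceedance sketch: each synthetic storm yields an interior
# flood depth at a location and carries an annualized occurrence weight. The
# exceedance of depth d is the summed weight of storms whose depth exceeds d.

def depth_exceedance(depths, weights, d):
    """Sum the weights of the synthetic storms whose flood depth exceeds d."""
    return sum(w for x, w in zip(depths, weights) if x > d)

depths  = [0.5, 1.2, 2.1, 3.4]        # interior flood depth (m) per storm
weights = [0.05, 0.01, 0.004, 0.001]  # annualized storm likelihoods (assumed)

p = depth_exceedance(depths, weights, 1.0)  # annual rate of depths above 1.0 m
```

Because each storm is evaluated as a whole, spatial correlation of surge around the system exterior is preserved automatically, unlike the point-exceedance approach.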
Security of Separated Data in Cloud Systems with Competing Attack Detection and Data Theft Processes
Empowered by virtualization technology, service requests from cloud users can be honored by creating and running virtual machines. Virtual machines established for different users may be allocated to the same physical server, making the cloud vulnerable to co‐residence attacks, in which a malicious attacker can steal a user's data by co‐residing his or her virtual machine with the user's on the same server. To protect data against such theft, a data partition technique is applied to divide the user's data into multiple blocks, each handled by a separate virtual machine. Moreover, early warning agents (EWAs) are deployed to detect and, when possible, prevent co‐residence attacks at a nascent stage. This article models and analyzes the attack success probability (the complement of data security) in cloud systems subject to competing attack detection (by EWAs) and data theft (by co‐residence attackers) processes. Based on the suggested probabilistic model, the optimal data partition and protection policy is determined with the objective of minimizing the user's cost subject to providing a desired level of data security. Examples are presented to illustrate the effects of different model parameters (attack rate, number of cloud servers, number of data blocks, attack detection time, and data theft time distribution parameters) on the attack success probability and the optimization solutions.
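Under the simplifying assumptions of exponentially distributed detection and theft times and of an attack succeeding only if every block is stolen (assumptions of this sketch, not necessarily the paper's model), the competing-process idea can be simulated directly:

```python
# Monte Carlo sketch of competing detection vs. theft processes per data block:
# the attacker's theft of a block completes at exponential time T, while the
# EWA detects the attack at exponential time D; the block is stolen iff T < D.
# The attack succeeds only if all blocks are stolen (partitioning makes partial
# theft useless). Rates and the all-blocks criterion are illustrative.

import random

def attack_success_prob(n_blocks, theft_rate, detect_rate,
                        trials=100_000, seed=1):
    random.seed(seed)
    successes = 0
    for _ in range(trials):
        if all(random.expovariate(theft_rate) < random.expovariate(detect_rate)
               for _ in range(n_blocks)):
            successes += 1
    return successes / trials

# For independent exponential races, P(T < D) = theft_rate/(theft_rate+detect_rate)
# per block, so the success probability decays as that ratio to the n_blocks power.
p = attack_success_prob(n_blocks=3, theft_rate=1.0, detect_rate=1.0)
```

With equal rates, each block is stolen with probability 0.5, so three blocks give roughly 0.125; raising `n_blocks` illustrates the security benefit of finer partitioning that the optimization trades off against cost.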
The presence of hazards (e.g., contaminants, pathogens) in food/feed, water, plants, or animals can lead to major economic losses related to human and animal health or to the rejection of batches of food or feed. Monitoring these hazards is important but can entail high costs. This study aimed to find the most cost‐effective sampling and analysis (S&A) plan for the mycotoxins deoxynivalenol (DON) in a wheat batch and aflatoxin B1 (AFB1) in a maize batch. An optimization model was constructed, maximizing the number of correct decisions for accepting/rejecting a batch of cereals, with a budget as the major constraint. The decision variables were the choice of analytical method (an instrumental method, e.g., liquid chromatography combined with tandem mass spectrometry (LC‐MS/MS); enzyme‐linked immunosorbent assay (ELISA); or lateral flow devices (LFD)), the number of incremental samples collected from the batch, and the number of aliquots analyzed. S&A plans using ELISA proved slightly more cost‐effective than S&A plans using the other two analytical methods, although, for DON in wheat, the difference between the optimal S&A plans using the three analytical methods was minimal. For AFB1 in maize, the cost effectiveness of the S&A plans using instrumental methods or ELISA was comparable, whereas the S&A plan relying on onsite detection with LFDs was the least cost‐effective. In the case of nonofficial controls, which do not have to follow official regulations for sampling and analysis, onsite detection with ELISA for both AFB1 in maize and DON in wheat, or with LFDs for DON in wheat, could provide cost‐effective alternatives.
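The budget-constrained optimization can be sketched as a brute-force search; all costs, accuracy figures, and the stylized accuracy model below are hypothetical stand-ins for the study's calibrated inputs:

```python
# Toy S&A plan optimization: choose an analytical method and the numbers of
# incremental samples and aliquots to maximize the probability of a correct
# accept/reject decision under a total budget. All numbers are illustrative.

methods = {  # (cost per aliquot analyzed, baseline accuracy of the method)
    "LC-MS/MS": (60.0, 0.95),
    "ELISA":    (15.0, 0.90),
    "LFD":      (5.0,  0.80),
}
SAMPLE_COST = 2.0   # cost of collecting one incremental sample
BUDGET = 300.0

def p_correct(method_acc, n_samples, n_aliquots):
    # Stylized accuracy model: more samples/aliquots reduce sampling error
    # with diminishing returns; bounded above by the method's baseline accuracy.
    return method_acc * (1 - 0.5 / n_samples) * (1 - 0.3 / n_aliquots)

best = None  # (p_correct, method, n_samples, n_aliquots)
for name, (aliquot_cost, acc) in methods.items():
    for n_s in range(1, 101):
        for n_a in range(1, 51):
            cost = n_s * SAMPLE_COST + n_a * aliquot_cost
            if cost <= BUDGET:
                p = p_correct(acc, n_s, n_a)
                if best is None or p > best[0]:
                    best = (p, name, n_s, n_a)
```

In this toy setup the cheaper ELISA wins by affording many more samples and aliquots within the budget, mirroring the qualitative finding of the abstract.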
Decades of research identify risk perception as a largely intuitive and affective construct, in contrast to the more deliberative assessments of probability and consequences that form the foundation of risk assessment. However, a review of the literature reveals that many of the risk perception measures employed in survey research with human subjects are either generic in nature, not capturing any particular affective, probabilistic, or consequential dimension of risk, or focused solely on judgments of probability. The goal of this research was to assess a multidimensional measure of risk perception across multiple hazards to identify a measure that will be broadly useful for assessing perceived risk moving forward. Our results support the idea that risk perception is multidimensional, but largely a function of individual affective reactions to the hazard. We also find that our measure of risk perception holds across multiple types of hazards, ranging from those that are behavioral in nature (e.g., health and safety behaviors) to those that are technological (e.g., pollution) or natural (e.g., extreme weather). We suggest that a general, unidimensional measure of risk may accurately capture one's perception of the severity of the consequences and the discrete emotions felt in response to those potential consequences. However, such a measure is not likely to capture the perceived probability of experiencing the outcomes, nor will it be as useful for understanding one's motivation to take mitigation action.
This study presents an integrated, rigorous statistical approach to defining the likelihood of a threshold and a point of departure (POD) based on dose–response data, using a nested family of bent‐hyperbola models. The family includes four models: the full bent‐hyperbola model, which allows for a transition between two linear regimes with varying levels of smoothness; a bent‐hyperbola model reduced to a spline model, where the transition is fixed at a knot; a bent‐hyperbola model with the negative asymptote slope restricted to zero, named hockey‐stick with arc (HS‐Arc); and the spline model reduced further to a hockey‐stick type model (HS), where the first linear segment has a slope of zero. A likelihood‐ratio test is used to discriminate between the models and determine whether the more flexible versions provide a significantly better fit than a hockey‐stick type model. The full bent‐hyperbola model can accommodate both threshold and nonthreshold behavior, can take on concave‐up and concave‐down shapes with various levels of curvature, can approximate the biochemically relevant Michaelis–Menten model, and can even be reduced to a straight line. Therefore, with the use of this model, the presence or absence of a threshold may become irrelevant, and the best fit of the full bent‐hyperbola model can be used to characterize the dose–response behavior and risk levels with no need for mode‐of‐action (MOA) information. The POD, characterized by the exposure level at which some predetermined response is reached, can be defined using the full model or one of the better‐fitting reduced models.
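A stripped-down illustration of the threshold-testing idea: fit a hockey-stick model by least squares and compare it against a plain straight line with a likelihood-ratio-style statistic (Gaussian errors assumed). The data, knot grid, conditional fitting shortcut, and the chi-square critical value (3.84, df = 1, alpha = 0.05) are all illustrative; the study's full bent-hyperbola family is considerably richer:

```python
# Hockey-stick (flat plateau, then linear) vs. straight-line comparison on
# hypothetical dose-response data. For Gaussian errors, the LRT statistic can
# be written as n * ln(RSS_reduced / RSS_full).
import math

doses     = [0, 1, 2, 3, 4, 5, 6, 7, 8]
responses = [1.0, 1.1, 0.9, 1.0, 1.2, 2.1, 3.0, 4.2, 5.1]  # hypothetical

def rss_line(x, y):
    """RSS of the ordinary least-squares straight line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return sum((b - (my + slope * (a - mx))) ** 2 for a, b in zip(x, y))

def rss_hockey(x, y, knot):
    """RSS of a hockey-stick fit: plateau as pre-knot mean, then LS slope."""
    left = [b for a, b in zip(x, y) if a <= knot]
    b0 = sum(left) / len(left)  # conditional fit, simpler than joint LS
    num = sum((a - knot) * (b - b0) for a, b in zip(x, y) if a > knot)
    den = sum((a - knot) ** 2 for a in x if a > knot)
    slope = num / den if den else 0.0
    pred = lambda a: b0 if a <= knot else b0 + slope * (a - knot)
    return sum((b - pred(a)) ** 2 for a, b in zip(x, y))

best_knot, best_rss = min(
    ((k, rss_hockey(doses, responses, k)) for k in [1, 2, 3, 4, 5]),
    key=lambda kr: kr[1],
)
n = len(doses)
lrt = n * math.log(rss_line(doses, responses) / best_rss)
threshold_detected = lrt > 3.84  # chi2(1) critical value at alpha = 0.05
```

Here the flat-then-rising data favor a knot near dose 4, and the large statistic rejects the straight line, i.e., the threshold-type model fits significantly better.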
A growing body of research demonstrates that believing action to reduce the risks of climate change is both possible (self‐efficacy) and effective (response efficacy) is essential to motivate and sustain risk mitigation efforts. Despite this potentially critical role of efficacy beliefs, measures and their use vary widely in climate change risk perception and communication research, making it hard to compare and learn from efficacy studies. To address this problem and advance our understanding of efficacy beliefs, this article makes three contributions. First, we present a theoretically motivated approach to measuring climate change mitigation efficacy, in light of diverse proposed, perceived, and previously researched strategies. Second, we test this in two national survey samples (Amazon's Mechanical Turk N = 405, GfK Knowledge Panel N = 1,820), demonstrating largely coherent beliefs by level of action and discrimination between types of efficacy. Four additive efficacy scales emerge: personal self‐efficacy, personal response efficacy, government and collective self‐efficacy, and government and collective response efficacy. Third, we employ the resulting efficacy scales in mediation models to test how well efficacy beliefs predict climate change policy support, controlling for specific knowledge, risk perceptions, and ideology, and allowing for mediation by concern. Concern fully mediates the relatively strong effects of perceived risk on policy support, but only partly mediates efficacy beliefs. Stronger government and collective response efficacy beliefs and personal self‐efficacy beliefs are both directly and indirectly associated with greater support for reducing the risks of climate change, even after controlling for ideology and causal beliefs about climate change.