Risk Analysis: An International Journal

Table of Contents for Risk Analysis. List of articles from both the latest and EarlyView issues.

Building an Interdisciplinary Team for Disaster Response Research: A Data‐Driven Approach

6 February 2019 - 8:00pm
Abstract

Building an interdisciplinary team is critical to disaster response research as it often deals with acute onset events, short decision horizons, constrained resources, and uncertainties related to rapidly unfolding response environments. This article examines three teaming mechanisms for interdisciplinary disaster response research: ad hoc and/or grant‐proposal‐driven teams, research center or institute based teams, and teams oriented by matching expertise toward long‐term collaborations. Using hurricanes as the response context, it further examines several types of critical data that require interdisciplinary collaboration on collection, integration, and analysis. Finally, suggesting a data‐driven approach to engaging multiple disciplines, the article advocates building interdisciplinary teams for disaster response research with a long‐term goal and an integrated research protocol.

Whose Risk? Why Did the U.S. Public Ignore Information About the Ebola Outbreak?

6 February 2019 - 8:00pm
Abstract

To test a possible boundary condition for the risk information seeking and processing (RISP) model, this study experimentally manipulates risk perception related to the 2014 Ebola outbreak in a nationally representative sample. Multiple‐group structural equation modeling results indicate that psychological distance was negatively related to systematic processing in the high‐risk condition. In the low‐risk condition, psychological distance was positively related to heuristic processing; negative attitude toward media coverage dampened people's need for information, which subsequently influenced information processing. Risk perception elicited more fear, which led to greater information insufficiency and more heuristic processing in the low‐risk condition. In contrast, sadness was consistently related to information processing in both conditions. Model fit statistics also show that the RISP model provides a better fit to data when risk perception is elevated. Further, this study contributes to our understanding of the role of discrete emotions in motivating information processing.
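As a rough illustration of the multiple-group logic described above, the hedged sketch below fits a simplified path (psychological distance to information insufficiency to systematic processing) separately in a high-risk and a low-risk condition using ordinary least squares on simulated data. The variable names, data, and effect sizes are placeholders, not the study's SEM or measures.

```python
# Hedged sketch: a simplified two-group path analysis inspired by the
# multiple-group SEM described above. Variables and data are simulated
# placeholders, not the study's measures or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "condition": rng.choice(["high_risk", "low_risk"], size=n),
    "psych_distance": rng.normal(size=n),
})
# Simulated mediator and outcome (illustrative effect sizes only).
df["info_insufficiency"] = -0.3 * df["psych_distance"] + rng.normal(size=n)
df["systematic_processing"] = 0.4 * df["info_insufficiency"] + rng.normal(size=n)

# Fit the same two path regressions within each experimental condition
# and compare coefficients across groups.
for cond, group in df.groupby("condition"):
    path_a = smf.ols("info_insufficiency ~ psych_distance", data=group).fit()
    path_b = smf.ols("systematic_processing ~ info_insufficiency + psych_distance",
                     data=group).fit()
    print(cond,
          "a =", round(path_a.params["psych_distance"], 2),
          "b =", round(path_b.params["info_insufficiency"], 2))
```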

AbSRiM: An Agent‐Based Security Risk Management Approach for Airport Operations

5 February 2019 - 6:43pm
Abstract

Security risk management is essential for ensuring effective airport operations. This article introduces AbSRiM, a novel agent‐based modeling and simulation approach for performing security risk management for airport operations using formal sociotechnical models that include temporal and spatial aspects. The approach contains four main steps: scope selection, agent‐based model definition, risk assessment, and risk mitigation. It is based on traditional security risk management methodologies but uses agent‐based modeling and Monte Carlo simulation at its core: agent‐based modeling is used to model threat scenarios, and Monte Carlo simulations are then performed with this model to estimate security risks.

The use of the AbSRiM approach is demonstrated with an illustrative case study. This case study includes a threat scenario in which an adversary attacks an airport terminal with an improvised explosive device. The approach provides a promising way to include important elements, such as human aspects and spatiotemporal aspects, in the assessment of risk. More research is still needed to better identify the strengths and weaknesses of the AbSRiM approach in different case studies, but results demonstrate the feasibility of the approach and its potential.
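As a hedged, minimal stand-in for the core AbSRiM idea of running Monte Carlo simulations over an agent-based threat scenario, the sketch below simulates a toy adversary passing airport screening checkpoints and estimates risk as the expected consequence per attack attempt. All probabilities, checkpoint counts, and consequence values are invented for illustration and are not taken from the article.

```python
# Minimal Monte Carlo over a toy agent-based threat scenario, loosely in the
# spirit of AbSRiM. All probabilities and consequences are illustrative
# assumptions, not values from the article.
import random

def run_scenario(p_detect_per_check: float, n_checkpoints: int,
                 consequence: float) -> float:
    """One simulation run: an adversary agent passes a series of checkpoints;
    each screening agent independently detects it with p_detect_per_check.
    Returns the realized consequence (0 if the attack is stopped)."""
    for _ in range(n_checkpoints):
        if random.random() < p_detect_per_check:
            return 0.0          # attack interdicted
    return consequence          # attack succeeds

def estimate_risk(n_runs: int = 100_000) -> float:
    """Risk as expected consequence per attack attempt, estimated by simulation."""
    total = sum(run_scenario(p_detect_per_check=0.4,
                             n_checkpoints=2,
                             consequence=1.0) for _ in range(n_runs))
    return total / n_runs

if __name__ == "__main__":
    print(f"Estimated risk: {estimate_risk():.3f}")  # analytically (1 - 0.4)**2 = 0.36
```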

Agent‐Based Recovery Model for Seismic Resilience Evaluation of Electrified Communities

30 January 2019 - 9:32am
Abstract

In this article, an agent‐based framework to quantify the seismic resilience of an electric power supply system (EPSS) and the community it serves is presented. Within the framework, the loss and restoration of the EPSS power generation and delivery capacity, and of the power demand from the served community, are used to assess the electric power deficit during the damage absorption and recovery processes. Damage to the components of the EPSS and of the community built environment is evaluated using seismic fragility functions. The restoration of the community electric power demand is evaluated using seismic recovery functions. The postearthquake EPSS recovery process is modeled using an agent‐based model with two agents, the EPSS Operator and the Community Administrator. The resilience of the EPSS–community system is quantified using direct, EPSS‐related, societal, and community‐related indicators. Parametric studies are carried out to quantify the influence of different seismic hazard scenarios, agent characteristics, and power dispatch strategies on the EPSS–community seismic resilience. The agent‐based modeling framework enables a rational formulation of the postearthquake recovery phase and highlights an interaction between the EPSS and the community in the recovery process that is not quantified in resilience models developed to date. Furthermore, it shows that the resilience of different community sectors can be enhanced by different power dispatch strategies. The proposed agent‐based EPSS–community system resilience quantification framework can be used to develop better community and infrastructure system risk governance policies.
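Two ingredients of the framework above, component fragility functions and an agent-managed repair process, can be sketched in a few lines. The hedged example below samples component damage from lognormal fragility curves and then lets a single "operator" agent work through a repair queue, tracking restored capacity; the medians, dispersions, and repair times are invented and the logic is a simplification, not the article's model.

```python
# Hedged sketch: lognormal fragility sampling plus a toy repair process,
# loosely analogous to the EPSS recovery framework above. All medians,
# dispersions, and repair times are illustrative assumptions.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)

def damage_probability(im: float, median: float, beta: float) -> float:
    """Lognormal fragility: P(damage | intensity measure im)."""
    return lognorm.cdf(im, s=beta, scale=median)

# Toy inventory of EPSS components: (median capacity IM, dispersion, repair days)
components = [(0.4, 0.5, 3), (0.6, 0.5, 5), (0.8, 0.4, 10)]
im = 0.5  # intensity measure of the scenario event (illustrative)

damaged = [c for c in components
           if rng.random() < damage_probability(im, c[0], c[1])]

# "Operator" agent repairs one component at a time, shortest repair first.
schedule = sorted(c[2] for c in damaged)
capacity = 1.0 - len(damaged) / len(components)
t = 0
print(f"day {t}: capacity {capacity:.2f}")
for repair_days in schedule:
    t += repair_days
    capacity += 1 / len(components)
    print(f"day {t}: capacity {capacity:.2f}")
```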

Integration of Critical Infrastructure and Societal Consequence Models: Impact on Swedish Power System Mitigation Decisions

25 January 2019 - 6:44pm
Abstract

Critical infrastructures provide society with services essential to its functioning, and extensive disruptions give rise to large societal consequences. Risk and vulnerability analyses of critical infrastructures generally focus narrowly on the infrastructure of interest and describe the consequences as nonsupplied commodities or the cost of unsupplied commodities; they rarely holistically consider the larger impact with respect to higher‐order consequences for the society. From a societal perspective, this narrow focus may lead to severe underestimation of the negative effects of infrastructure disruptions. To explore this theory, an integrated modeling approach, combining models of critical infrastructures and economic input–output models, is proposed and applied in a case study. In the case study, a representative model of the Swedish power transmission system and a regionalized economic input–output model are utilized. This enables exploration of how a narrow infrastructure or a more holistic societal consequence perspective affects vulnerability‐related mitigation decisions regarding critical infrastructures. Two decision contexts related to prioritization of different vulnerability‐reducing measures are considered—identifying critical components and adding system components to increase robustness. It is concluded that higher‐order societal consequences due to power supply disruptions can be up to twice as large as first‐order consequences, which in turn has a significant effect on the identification of which critical components are to be protected or strengthened and a smaller effect on the ranking of improvement measures in terms of adding system components to increase system redundancy.
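The higher-order consequences discussed above are exactly what an economic input-output model adds to the direct, first-order loss. The hedged example below works through that mechanism with an invented two-sector Leontief coefficient matrix (not the regionalized Swedish model): total output losses follow from the Leontief inverse, and the ratio of total to direct loss shows how indirect effects can roughly double the first-order figure.

```python
# Minimal Leontief input-output illustration of higher-order consequences.
# The two-sector coefficient matrix and the demand shock are invented for
# illustration; they are not the regionalized Swedish model from the article.
import numpy as np

A = np.array([[0.2, 0.3],    # technical coefficients: inputs per unit of output
              [0.4, 0.1]])
leontief_inverse = np.linalg.inv(np.eye(2) - A)

# Direct (first-order) loss of final demand caused by a power disruption.
direct_loss = np.array([10.0, 0.0])

# Total (direct + higher-order) output loss propagated through the economy.
total_loss = leontief_inverse @ direct_loss

print("direct loss :", direct_loss.sum())
print("total loss  :", round(total_loss.sum(), 1))
print("ratio       :", round(total_loss.sum() / direct_loss.sum(), 2))
```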

Clinical Capital and the Risk of Maternal Labor and Delivery Complications: Hospital Scheduling, Timing, and Cohort Turnover Effects

24 January 2019 - 11:12am
Abstract

The establishment of interventions to maximize maternal health requires the identification of modifiable risk factors. Toward the identification of modifiable hospital‐based factors, we analyze over 2 million births from 2005 to 2010 in Texas, employing a series of quasi‐experimental tests involving hourly, daily, and monthly circumstances where medical service quality (or clinical capital) is known to vary exogenously. Motivated by a clinician's choice model, we investigate whether maternal delivery complications (1) vary by work shift, (2) increase by the hours worked within shifts, (3) increase on weekends and holidays when hospitals are typically understaffed, and (4) are higher in July when a new cohort of residents enter teaching hospitals. We find consistent evidence of a sizable statistical relationship between deliveries during nonstandard schedules and negative patient outcomes. Delivery complications are higher during night shifts (OR = 1.21, 95% CI: 1.18–1.25), and on weekends (OR = 1.09, 95% CI: 1.04–1.14) and holidays (OR = 1.29, 95% CI: 1.04–1.60), when hospitals are understaffed and less experienced doctors are more likely to work. Within shifts, we show deterioration of occupational performance per additional hour worked (OR = 1.02, 95% CI: 1.01–1.02). We observe substantial additional risk at teaching hospitals in July (OR = 1.28, 95% CI: 1.14–1.43), reflecting a cohort‐turnover effect. All results are robust to the exclusion of noninduced births and intuitively falsified with analyses of chromosomal disorders. Results from our multiple‐test strategy indicate that hospitals can meaningfully attenuate harm to maternal health through strategic scheduling of staff.
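The odds ratios quoted above come from models that relate scheduling circumstances to the probability of a delivery complication. The hedged sketch below shows the general form of such an estimation on simulated data: a logistic regression of a complication indicator on night-shift, weekend, holiday, and July teaching-hospital indicators, with odds ratios recovered as exponentiated coefficients. The data-generating coefficients are invented and are not the article's estimates.

```python
# Hedged sketch: logistic regression with odds ratios, in the spirit of the
# scheduling analysis above. Data and coefficients are simulated, not Texas
# birth records or the article's estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 50_000
df = pd.DataFrame({
    "night_shift": rng.integers(0, 2, n),
    "weekend": rng.integers(0, 2, n),
    "holiday": (rng.random(n) < 0.03).astype(int),
    "july_teaching": (rng.random(n) < 0.05).astype(int),
})
# Simulate complications with illustrative log-odds effects.
logit = (-3.0 + 0.19 * df.night_shift + 0.09 * df.weekend
         + 0.25 * df.holiday + 0.25 * df.july_teaching)
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "complication ~ night_shift + weekend + holiday + july_teaching", data=df
).fit(disp=False)
odds_ratios = np.exp(model.params)          # OR = exp(beta)
conf_int = np.exp(model.conf_int())         # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```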

A Robust Approach for Mitigating Risks in Cyber Supply Chains

18 January 2019 - 7:14pm
Abstract

In recent years, there have been growing concerns regarding risks in the federal information technology (IT) supply chains that protect cyber infrastructure in the United States. A critical need faced by decisionmakers is to prioritize investment in security mitigations to maximally reduce risks in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models, which identify solutions that are “good” on average but may be unacceptable in certain circumstances. We propose three alternative models that consider different robustness methods that hedge against worst‐case risks, including models that maximize the worst‐case coverage, minimize the worst‐case regret, and maximize the average coverage in the (1−α) worst cases (conditional value at risk). We illustrate the solutions to the robust methods with a case study and discuss the insights their solutions provide into mitigation selection compared to an expected‐value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes to manage cybersecurity risks under uncertainty.
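The three robustness criteria named above (worst-case coverage, worst-case regret, and conditional value at risk) can be illustrated on a tiny mitigation-portfolio problem. The hedged sketch below enumerates feasible portfolios by brute force and scores each against all three criteria; the coverage values, costs, and budget are invented and do not come from the case study.

```python
# Hedged toy example of the three robustness criteria: maximin coverage,
# minimax regret, and CVaR of coverage. Coverage values, costs, and the
# budget are invented; this is not the article's case study.
from itertools import combinations
import numpy as np

# coverage[m][s]: risk coverage achieved by mitigation m under scenario s
coverage = {"A": [0.6, 0.2, 0.5], "B": [0.3, 0.6, 0.4], "C": [0.4, 0.5, 0.1]}
cost = {"A": 2, "B": 2, "C": 1}
budget, alpha = 3, 0.67  # keep the worst ~1/3 of scenarios for the CVaR score

def portfolio_coverage(portfolio):
    """Scenario-wise coverage of a set of mitigations (capped at 1.0)."""
    return np.minimum(1.0, sum(np.array(coverage[m]) for m in portfolio))

portfolios = [p for r in range(1, 4) for p in combinations("ABC", r)
              if sum(cost[m] for m in p) <= budget]
best_per_scenario = np.max([portfolio_coverage(p) for p in portfolios], axis=0)

for p in portfolios:
    cov = portfolio_coverage(p)
    k = max(1, int(round((1 - alpha) * len(cov))))   # number of worst scenarios
    cvar = np.sort(cov)[:k].mean()
    print(p, "worst-case:", cov.min(),
          "max regret:", round((best_per_scenario - cov).max(), 2),
          "CVaR:", round(cvar, 2))
```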

Let's Call it Quits: Break‐Even Effects in the Decision to Stop Taking Risks

16 January 2019 - 8:56pm
Abstract

“Chasing” behavior, whereby individuals, driven by a desire to break even, continue a risky activity (RA) despite incurring large losses, is a commonly observed phenomenon. We examine whether the desire to break even plays a wider role in decisions to stop engaging in financially motivated RA in a naturalistic setting. We test hypotheses, motivated by this research question, using a large data set: 707,152 transactions of 5,379 individual financial market spread traders between September 2004 and April 2013. The results indicate strong effects of changes in wealth around the break‐even point on the decision to cease an RA. An important mediating factor was the individual's historical long‐term performance. Those with a more profitable trading history were less affected by a fall in cash balance below the break‐even point compared to those who had been less profitable. We observe that break‐even points play an important role in the decision of nonpathological risk takers to stop RAs. It is possible, therefore, that these nonpathological cognitive processes, when occurring in extrema, may result in pathological gambling behavior such as “chasing.” Our data set focuses on RAs in financial markets and, consequently, we discuss the implications for institutions and regulators in the effective management of risk taking in markets. We also suggest that there may be a need to consider carefully the nature and role of “break‐even points” associated with a broader range of nonfinancially‐focused risk‐taking activities, such as smoking and substance abuse.
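One simple way to formalize the break-even effect described above is to model the decision to stop as a function of whether the cash balance has fallen below the break-even point, interacted with long-term profitability. The hedged sketch below fits such a logistic stopping model on simulated data; it illustrates the general approach rather than the authors' specification.

```python
# Hedged sketch: a logistic model of the decision to stop a risky activity,
# with a below-break-even indicator and its interaction with long-term
# profitability. Data are simulated; this is not the spread-trading data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 20_000
df = pd.DataFrame({
    "below_break_even": rng.integers(0, 2, n),
    "profitable_history": rng.integers(0, 2, n),
})
# Illustrative data-generating process: falling below break-even raises the
# odds of quitting, less so for historically profitable traders.
logit = (-2.0 + 0.8 * df.below_break_even
         - 0.5 * df.below_break_even * df.profitable_history)
df["stopped"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("stopped ~ below_break_even * profitable_history",
                data=df).fit(disp=False)
print(np.exp(fit.params))   # odds ratios for main effects and the interaction
```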

Modeling the Cost Effectiveness of Fire Protection Resource Allocation in the United States: Models and a 1980–2014 Case Study

16 January 2019 - 8:55pm
Abstract

The estimated cost of fire in the United States is about $329 billion a year, yet there are gaps in the literature on measuring the effectiveness of investment and on allocating resources optimally in fire protection. This article fills these gaps by creating data‐driven empirical and theoretical models to study the effectiveness of nationwide fire protection investment in reducing economic and human losses. The regression between investment and loss vulnerability shows high R² values (≈0.93). This article also contributes to the literature by modeling strategic (national‐level or state‐level) resource allocation (RA) for fire protection with equity‐efficiency trade‐off considerations, whereas the existing literature focuses on operational‐level RA. This model and its numerical analyses provide techniques and insights to aid the strategic decision‐making process. The results from this model are used to calculate fire risk scores for various geographic regions, which can be used as an indicator of fire risk. A case study of federal fire grant allocation is used to validate and show the utility of the optimal RA model. The results also identify potential underinvestment and overinvestment in fire protection in certain regions. The article presents scenarios in which the proposed model outperforms the existing RA scheme when compared in terms of the correlation of allocated resources with the actual number of fire incidents. This article provides novel insights to policymakers and analysts in fire protection and safety that would help in mitigating economic costs and saving lives.
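The strategic allocation idea above, balancing efficiency against equity, can be illustrated with a toy model. The hedged sketch below splits a fixed budget across a few regions by maximizing a weighted combination of diminishing-returns risk reduction (efficiency) and a penalty on deviation from an equal split (equity); the effectiveness values, budget, and weight are invented and the formulation is a conceptual stand-in, not the article's optimization model.

```python
# Hedged toy allocation: split a fixed budget across regions to trade off
# efficiency (risk reduction) against equity (evenness of allocation).
# Effectiveness values, weights, and the budget are invented for illustration.
from itertools import product
import math

regions = {"R1": 3.0, "R2": 2.0, "R3": 1.0}   # marginal effectiveness (illustrative)
budget, step, equity_weight = 10, 1, 2.0

def score(alloc):
    # Efficiency: diminishing-returns risk reduction per region.
    efficiency = sum(eff * math.log1p(alloc[r]) for r, eff in regions.items())
    # Equity: penalize deviation from an equal split of the budget.
    equal_share = budget / len(regions)
    inequity = sum(abs(alloc[r] - equal_share) for r in regions) / budget
    return efficiency - equity_weight * inequity

candidates = [dict(zip(regions, a))
              for a in product(range(0, budget + 1, step), repeat=len(regions))
              if sum(a) == budget]
best = max(candidates, key=score)
print("best allocation:", best, "score:", round(score(best), 2))
```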

A Decision‐Centered Method to Evaluate Natural Hazards Decision Aids by Interdisciplinary Research Teams

14 January 2019 - 1:11pm
Abstract

There is a growing number of decision aids made available to the general public by those working on hazard and disaster management. When based on high‐quality scientific studies across disciplines and designed to provide a high level of usability and trust, decision aids become more likely to improve the quality of hazard risk management and response decisions. Interdisciplinary teams have a vital role to play in this process, ensuring the scientific validity and effectiveness of a decision aid across the physical science, social science, and engineering dimensions of hazard awareness, option identification, and the decisions made by individuals and communities. Often, however, due to a lack of dedicated resources and guidance on how to do so systematically, these aids are not evaluated before being widely distributed, even though such evaluation could improve their impact. In this Perspective, we present a decision‐centered method for evaluating the impact of hazard decision aids on decisionmaker preferences and choice during the design and development phase, drawing from the social and behavioral sciences and a value of information framework to inform the content, complexity, format, and overall evaluation of the decision aid. The first step involves quantifying the added value of the information contained in the decision aid. The second involves identifying the extent to which the decision aid is usable. Our method can be applied to a variety of hazards and disasters, and will allow interdisciplinary teams to more effectively evaluate the extent to which an aid can inform and improve decision making.
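The first evaluation step, quantifying the added value of the information in a decision aid, is a value-of-information calculation: compare the expected outcome of the best decision made with the aid's information to the best decision made without it. The hedged numerical example below uses invented probabilities and payoffs and a hypothetically perfect aid; it shows only the underlying arithmetic, not the authors' framework.

```python
# Hedged value-of-information illustration: expected value of a decision made
# with vs. without a (perfect) forecast of the hazard state. All numbers are
# invented for illustration.
p_hazard = 0.2
# payoff[action][state]: outcome (e.g., avoided loss minus action cost)
payoff = {
    "evacuate": {"hazard": -10, "no_hazard": -10},   # fixed evacuation cost
    "stay":     {"hazard": -100, "no_hazard": 0},
}

def expected_value(action):
    return (p_hazard * payoff[action]["hazard"]
            + (1 - p_hazard) * payoff[action]["no_hazard"])

# Without the aid: pick the single best action under the prior.
ev_without = max(expected_value(a) for a in payoff)

# With a (hypothetically perfect) aid: pick the best action in each state.
ev_with = (p_hazard * max(payoff[a]["hazard"] for a in payoff)
           + (1 - p_hazard) * max(payoff[a]["no_hazard"] for a in payoff))

print("EV without aid:", ev_without)          # stay: -20 vs evacuate: -10 -> -10
print("EV with aid   :", ev_with)             # -2.0
print("value of information:", ev_with - ev_without)
```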

Integrated Risk Assessment and Management Methods Are Necessary for Effective Implementation of Natural Hazards Policy

14 January 2019 - 11:07am
Abstract

A transdisciplinary, integrated risk assessment and risk management process is particularly beneficial to the development of policies addressing risk from natural hazards. Strategies based on isolated risk assessment and management processes, guided by traditional “predict, then act” methods for decision making, may induce major regret if future conditions diverge from predictions. Analytic methods designed to identify robust solutions—those that perform satisfactorily over a broader range of future conditions—are more suitable for management of natural hazards risks, for at least three major reasons discussed within. Such approaches benefit from co‐production of knowledge to collaboratively produce adaptive, robust policies through an iterative process of dialogue between analysts, decisionmakers, and other stakeholders: exploring tradeoffs, searching for futures in which current plans are likely to fail, and developing adaptive management strategies responsive to evolving future conditions. The process leads to more effective adoption of risk management policies by ensuring greater feasibility of solutions, exploring a wide range of plausible future conditions, generating buy‐in, and giving a voice to actors with a diversity of perspectives. The second half of the article presents Louisiana's coastal master planning process as an exemplary model of participatory planning and integrated risk assessment and management. Louisiana planners have adopted a decision framework that incorporates insights from modern methods for decision making under deep uncertainty to effectively address the deep uncertainties and complexities characteristic of a variety of natural hazards and long‐range planning problems.
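The notion of robust solutions above, policies that perform satisfactorily across a broad range of futures rather than optimally under a single prediction, can be made concrete with a small stress test: score each candidate policy across many sampled futures and flag the futures in which it fails a performance threshold. The hedged sketch below does this with invented policies, futures, and a placeholder performance function; it is a schematic of the method, not Louisiana's planning model.

```python
# Hedged sketch of a robustness stress test: score candidate policies across
# many sampled futures and count the futures in which each fails a threshold.
# Policies, futures, and the performance function are invented placeholders.
import numpy as np

rng = np.random.default_rng(3)
futures = rng.uniform(low=[0.5, 0.0], high=[2.0, 1.0], size=(1000, 2))
# future = (sea_level_factor, storm_intensity_factor), purely illustrative

policies = {"protect": 0.8, "restore": 0.6, "do_nothing": 0.0}

def performance(policy_strength, future):
    sea_level, storms = future
    # Toy damage model: stronger policies reduce exposure to both drivers.
    return 1.0 - (1 - policy_strength) * 0.5 * (sea_level + storms)

threshold = 0.5
for name, strength in policies.items():
    scores = np.array([performance(strength, f) for f in futures])
    failures = (scores < threshold).mean()
    print(f"{name:>10}: fails in {failures:.0%} of futures, "
          f"worst case {scores.min():.2f}")
```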
