Risk Analysis: An International Journal

Table of Contents for Risk Analysis. List of articles from both the latest and EarlyView issues.

Assessing Engineering Resilience for Systems with Multiple Performance Measures

5 September 2019 - 7:00pm
Abstract

Recently, efforts to model and assess a system's resilience to disruptions due to environmental and adversarial threats have increased substantially. Researchers have investigated resilience in many disciplines, including sociology, psychology, computer networks, and engineering systems, to name a few. Assessments of engineering system resilience typically consider a single performance measure, a disruption, a loss of performance, the time required to recover, or a combination of these elements. We adopt a definition of a resilient engineered system that separates system resilience into platform and mission resilience. Most complex systems have multiple performance measures; this research proposes using multiple objective decision analysis to assess system resilience for such systems using two distinct methods. The first method quantifies platform resilience and includes resilience and other “ilities” directly in the value hierarchy, while the second method quantifies mission resilience and uses the “ilities” in the calculation of the expected mission performance for every performance measure in the value hierarchy. We illustrate the mission resilience method using a transportation system‐of‐systems network whose resilience varies with the connectivity and autonomy of its vehicles, and the platform resilience method using a notional military example. Our analysis found that it is necessary to quantify performance in the context of specific missions and scenarios under specific threats, and then use modeling and simulation to help determine the resilience of a system for a given set of conditions. The example demonstrates how incorporating system mission resilience can improve some performance measures while negatively affecting others.
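
To make the value-hierarchy idea concrete, here is a minimal sketch of the additive aggregation commonly used in multiple objective decision analysis; the performance measures, bounds, and weights are illustrative placeholders, not values from the article.

```python
# Minimal sketch of an additive multiple objective decision analysis (MODA)
# value model over a value hierarchy; all names and numbers are hypothetical.

def linear_value(x, worst, best):
    """Single-dimensional value function mapping a raw score to [0, 1]."""
    return max(0.0, min(1.0, (x - worst) / (best - worst)))

# Hypothetical performance measures for one system alternative.
scores = {"throughput": 420.0, "availability": 0.93, "recovery_hours": 36.0}
ranges = {"throughput": (0.0, 500.0), "availability": (0.8, 1.0),
          "recovery_hours": (72.0, 0.0)}   # lower is better, so bounds reversed
weights = {"throughput": 0.5, "availability": 0.3, "recovery_hours": 0.2}

total_value = sum(weights[m] * linear_value(scores[m], *ranges[m])
                  for m in scores)
print(f"overall value: {total_value:.3f}")
```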

Rethinking Resilience Analytics

5 September 2019 - 2:36pm
Abstract

The concept of “resilience analytics” has recently been proposed as a means to leverage the promise of big data to improve the resilience of interdependent critical infrastructure systems and the communities supported by them. Given recent advances in machine learning and other data‐driven analytic techniques, as well as the prevalence of high‐profile natural and man‐made disasters, the temptation to pursue resilience analytics without question is almost overwhelming. Indeed, we find big data analytics capable of supporting resilience to rare, situational surprises captured in analytic models. Nonetheless, this article examines the efficacy of resilience analytics by answering a single motivating question: Can big data analytics help cyber–physical–social (CPS) systems adapt to surprise? This article explains the limitations of resilience analytics when critical infrastructure systems are challenged by fundamental surprises never conceived during model development. In these cases, adopting resilience analytics may prove either useless for decision support or harmful by increasing dangers during unprecedented events. We demonstrate that these dangers are not limited to a single CPS context by highlighting the limits of analytic models during hurricanes, dam failures, blackouts, and stock market crashes. We conclude that resilience analytics alone cannot adapt to the very events that motivate their use and may, ironically, make CPS systems more vulnerable. We present avenues for future research to address this deficiency, with emphasis on improvisation to adapt CPS systems to fundamental surprise.

Integration of Critical Infrastructure and Societal Consequence Models: Impact on Swedish Power System Mitigation Decisions

5 September 2019 - 2:36pm
Abstract

Critical infrastructures provide society with services essential to its functioning, and extensive disruptions give rise to large societal consequences. Risk and vulnerability analyses of critical infrastructures generally focus narrowly on the infrastructure of interest and describe the consequences as nonsupplied commodities or the cost of unsupplied commodities; they rarely consider the larger impact with respect to higher‐order consequences for society. From a societal perspective, this narrow focus may lead to severe underestimation of the negative effects of infrastructure disruptions. To explore this proposition, an integrated modeling approach, combining models of critical infrastructures and economic input–output models, is proposed and applied in a case study. In the case study, a representative model of the Swedish power transmission system and a regionalized economic input–output model are utilized. This enables exploration of how a narrow infrastructure perspective and a more holistic societal consequence perspective affect vulnerability‐related mitigation decisions regarding critical infrastructures. Two decision contexts related to the prioritization of vulnerability‐reducing measures are considered—identifying critical components and adding system components to increase robustness. It is concluded that higher‐order societal consequences of power supply disruptions can be up to twice as large as first‐order consequences. This, in turn, has a significant effect on which critical components are identified for protection or strengthening, and a smaller effect on the ranking of improvement measures that add system components to increase system redundancy.
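
The input–output mechanics behind "higher‐order consequences" can be sketched with a standard Leontief calculation; the 3‐sector coefficients and the direct‐loss vector below are illustrative toy values, not the Swedish case‐study data.

```python
import numpy as np

# Minimal sketch of a Leontief input-output estimate of total (first- plus
# higher-order) losses propagating from a direct loss in one sector.

A = np.array([[0.10, 0.20, 0.05],        # technical coefficients matrix
              [0.15, 0.10, 0.10],
              [0.05, 0.25, 0.05]])

direct_loss = np.array([10.0, 0.0, 0.0])   # first-order loss in sector 0

# Total losses via the Leontief inverse (I - A)^-1 applied to the shock.
total_loss = np.linalg.solve(np.eye(3) - A, direct_loss)
print("total sectoral losses:", total_loss.round(2))
print("higher-order share of direct loss:",
      round((total_loss.sum() - direct_loss.sum()) / direct_loss.sum(), 2))
```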

Design and Assessment Methodology for System Resilience Metrics

5 September 2019 - 2:36pm
Abstract

By providing objective measures, resilience metrics (RMs) give planners, designers, and decisionmakers a grasp of the resilience status of a system. Conceptual frameworks establish a sound basis for RM development. However, a significant challenge that has yet to be addressed is assessing the validity of RMs: whether they reflect all abilities of a resilient system, and whether they overrate or underrate these abilities. This article addresses this gap by introducing a methodology that can test the validity of an RM against its conceptual framework. The methodology combines experimental design methods and statistical analysis techniques that provide insight into the RM's quality. We also propose a new metric that can be used for general systems. Analysis of the proposed metric using the presented methodology shows that it is a better indicator of a system's abilities than existing metrics.
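
For context, one widely used style of RM is the ratio of the area under the normalized performance curve to the area under nominal performance over the assessment window; the sketch below illustrates that generic metric, not the specific metric proposed in the article.

```python
import numpy as np

# Minimal sketch of an area-under-the-performance-curve resilience metric:
# performance drops at t = 20 h and recovers linearly by t = 60 h.

t = np.linspace(0, 100, 1001)                       # hours
performance = np.ones_like(t)                       # normalized performance
mask = (t >= 20) & (t < 60)
performance[mask] = 0.4 + 0.6 * (t[mask] - 20) / 40  # drop, then linear recovery

resilience = np.trapz(performance, t) / np.trapz(np.ones_like(t), t)
print(f"resilience over window: {resilience:.3f}")   # 1.0 means no loss
```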

Integrating Stakeholder Mapping and Risk Scenarios to Improve Resilience of Cyber‐Physical‐Social Networks

5 September 2019 - 2:36pm
Abstract

The future of energy mobility involves networks of users, operators, organizations, vehicles, charging stations, communications, materials, transportation corridors, points of service, and so on. The integration of smart grids with plug‐in electric vehicle technologies has societal and commercial advantages that include improving grid stability, minimizing dependence on nonrenewable fuels, reducing vehicle emissions, and reducing the cost of electric vehicle ownership. However, ineffective or delayed participation of particular groups of stakeholders could disrupt industry plans and delay the desired outcomes. This article develops a framework to address enterprise resilience for two modes of disruption—the first being the influence of scenarios on priorities and the second being the influence of multiple groups of stakeholders on priorities. The innovation of this study is the integration of two recent approaches: scenario‐based preferences modeling and stakeholder mapping. Public agencies, grid operators, plug‐in electric vehicle owners, and vehicle manufacturers are the four groups of stakeholders considered in this framework, along with the influence of four scenarios on priorities.
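
The flavor of scenario‐based preference adjustment can be sketched as follows: baseline priorities are perturbed by scenario multipliers per stakeholder group and then re‐ranked. The groups, initiatives, and numbers are illustrative placeholders, not the article's elicited values.

```python
# Minimal sketch of re-ranking initiatives under scenario influence for
# different stakeholder groups; all values are hypothetical.

baseline = {"fast_charging": 0.40, "grid_upgrades": 0.35, "incentives": 0.25}

# Multiplicative scenario adjustments (e.g., a "delayed stakeholder
# participation" scenario), one row per stakeholder group.
scenario_factor = {
    "grid_operators": {"fast_charging": 0.8, "grid_upgrades": 1.3, "incentives": 1.0},
    "vehicle_owners": {"fast_charging": 1.2, "grid_upgrades": 0.9, "incentives": 1.1},
}

for group, factors in scenario_factor.items():
    adjusted = {k: baseline[k] * factors[k] for k in baseline}
    norm = sum(adjusted.values())
    ranking = sorted(adjusted, key=adjusted.get, reverse=True)
    print(group, {k: round(adjusted[k] / norm, 2) for k in ranking})
```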

An Optimization‐Based Framework for the Identification of Vulnerabilities in Electric Power Grids Exposed to Natural Hazards

5 September 2019 - 2:36pm
Abstract

This article proposes a novel mathematical optimization framework for identifying the vulnerabilities of electric power infrastructure systems (a paramount example of critical infrastructure) to natural hazards. In this framework, the potential impacts of a specific natural hazard on an infrastructure are first evaluated in terms of the failure and recovery probabilities of system components. These are then fed into a bi‐level attacker–defender interdiction model to determine the critical components whose failures lead to the largest loss of system functionality. The proposed framework bridges the gap between the difficulty of accurately predicting hazard information in classical probability‐based analyses and the overconservatism of pure attacker–defender interdiction models. Mathematically, the proposed model takes the form of a bi‐level max–min mixed‐integer linear program (MILP) that is challenging to solve. For its solution, the problem is cast into an equivalent one‐level MILP that can be solved by efficient global solvers. The approach is applied to a case study concerning the vulnerability identification of the georeferenced RTS24 test system under simulated wind storms. The numerical results demonstrate the effectiveness of the proposed framework for identifying critical locations under multiple hazard events and, thus, for providing a useful tool to help decisionmakers make more‐informed prehazard preparation decisions.
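
As a rough illustration of the reformulation step, the generic attacker–defender structure and its duality‐based collapse to a single level can be written as below. The notation is generic, not the article's exact model, and any bilinear products of binary attack variables and continuous duals would still need big‑M linearization to reach a true MILP.

```latex
% Generic bi-level attacker-defender interdiction model: z selects components
% to fail (informed by hazard failure probabilities), x is the operator's
% optimal re-dispatch. Strong duality of the inner linear program yields the
% equivalent one-level problem.
\begin{align*}
\max_{z \in Z} \; \min_{x \ge 0} \quad & c^{\top} x
  \quad \text{s.t.} \quad A x \ge b - B z
  && \text{(bi-level form)} \\
\max_{z \in Z,\; \lambda \ge 0} \quad & (b - B z)^{\top} \lambda
  \quad \text{s.t.} \quad A^{\top} \lambda \le c
  && \text{(one-level reformulation)}
\end{align*}
```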

Network Reconfiguration for Increasing Transportation System Resilience Under Extreme Events

5 September 2019 - 2:36pm
Abstract

Evacuating residents out of affected areas is an important strategy for mitigating the impact of natural disasters. However, the resulting abrupt increase in travel demand during evacuation causes severe congestion across the transportation system, thereby interrupting other commuters' regular activities. In this article, a bilevel mathematical optimization model is formulated to address this issue; our objective is to maximize transportation system resilience and restore system performance through two network reconfiguration schemes: contraflow (also referred to as lane reversal) and crossing elimination at intersections. Mathematical models are developed to represent the two reconfiguration schemes and characterize the interactions between traffic operators and passengers. Specifically, traffic operators act as leaders who determine the optimal system reconfiguration to minimize the total travel time for all users (both evacuees and regular commuters), while passengers act as followers who freely choose the path with the minimum travel time, eventually converging to a user equilibrium state. For each given network reconfiguration, the lower‐level problem is formulated as a traffic assignment problem (TAP) in which each user tries to minimize his or her own travel time. To tackle the lower‐level problem, a gradient projection method is leveraged to shift flow from nonshortest paths to the shortest path between each origin–destination pair, eventually converging to the user equilibrium traffic assignment. The upper‐level problem is formulated as a constrained discrete optimization problem, and a probabilistic solution discovery algorithm is used to obtain a near‐optimal solution. Two numerical examples demonstrate the effectiveness of the proposed method in restoring traffic system performance.
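
To make the lower‐level equilibrium concrete, here is a minimal sketch assuming a single origin–destination pair with two paths and standard BPR link travel times; flow is iteratively shifted toward the cheaper path, a scalar analogue of gradient projection. All parameter values are toy numbers, not the article's networks.

```python
# Minimal sketch of user-equilibrium assignment on a two-path OD pair.

def bpr(flow, t0, cap):
    """Bureau of Public Roads travel-time function."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

demand = 1000.0                        # vehicles/hour between the OD pair
f1 = demand                            # start with all flow on path 1
for _ in range(500):
    t1 = bpr(f1, t0=10.0, cap=600.0)
    t2 = bpr(demand - f1, t0=15.0, cap=800.0)
    # Shift flow toward the cheaper path, then project onto [0, demand].
    f1 = min(max(f1 - 0.5 * (t1 - t2), 0.0), demand)

t1, t2 = bpr(f1, 10.0, 600.0), bpr(demand - f1, 15.0, 800.0)
print(f"path flows: {f1:.1f} / {demand - f1:.1f}, times: {t1:.2f} / {t2:.2f}")
```

At convergence, the two used paths have (nearly) equal travel times, which is exactly the user-equilibrium condition the TAP lower level enforces.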

A Robust Approach for Mitigating Risks in Cyber Supply Chains

5 September 2019 - 2:36pm
Abstract

In recent years, there have been growing concerns regarding risks in the federal information technology (IT) supply chains in the United States that protect cyber infrastructure. A critical need faced by decisionmakers is to prioritize investment in security mitigations to maximally reduce risk in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models, which identify solutions that are “good” on average but may be unacceptable in certain circumstances. We propose three alternative models that use different robustness methods to hedge against worst‐case risks: models that maximize the worst‐case coverage, minimize the worst‐case regret, and maximize the average coverage in the (1−α) worst cases (conditional value at risk). We illustrate the solutions to the robust methods with a case study and discuss the insights their solutions provide into mitigation selection compared to an expected‐value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes to manage cybersecurity risks under uncertainty.
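
For the third variant, the standard Rockafellar–Uryasev construction gives a linear expression of the CVaR objective. The sketch below is a generic form under equal scenario weights, not necessarily the article's exact model.

```latex
% Maximize the average coverage over the (1 - alpha) worst scenarios:
% g_s(x) is the coverage achieved in scenario s, eta an auxiliary
% VaR-like variable, B the mitigation budget, [.]^+ = max(0, .).
\begin{align*}
\max_{x \in X,\; \eta \in \mathbb{R}} \quad
  & \eta - \frac{1}{(1-\alpha)\,|S|} \sum_{s \in S} \big[\eta - g_s(x)\big]^{+} \\
\text{s.t.} \quad
  & \textstyle\sum_{j} c_j x_j \le B .
\end{align*}
```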

Stochastic Counterfactual Risk Analysis for the Vulnerability Assessment of Cyber‐Physical Attacks on Electricity Distribution Infrastructure Networks

5 September 2019 - 2:36pm
Abstract

In December 2015, a cyber‐physical attack took place on the Ukrainian electricity distribution network. This is regarded as one of the first cyber‐physical attacks on electricity infrastructure to have led to a substantial power outage, and it illustrates the increasing vulnerability of Critical National Infrastructure to this type of malicious activity. A scarcity of data points, coupled with the rapid emergence of cyber phenomena, has held back the development of resilience analytics for cyber‐physical attacks relative to many other threats. We propose to overcome these data limitations by applying stochastic counterfactual risk analysis as part of a new vulnerability assessment framework. The method is developed in the context of the direct and indirect socioeconomic impacts of a Ukrainian‐style cyber‐physical attack on the electricity distribution network serving London and its surrounding regions. A key finding is that if decisionmakers wish to mitigate major population disruptions, they must invest resources more‐or‐less equally across all substations to prevent the scaling of a cyber‐physical attack. However, some substations are associated with higher economic value because they support other Critical National Infrastructure assets, which justifies allocating additional cyber security investment to them to reduce the chance of cascading failure. Further cyber‐physical vulnerability research must address the tradeoffs inherent in a system made up of multiple institutions with different strategic risk mitigation objectives and metrics of value, such as governments, infrastructure operators, and commercial consumers of infrastructure services.
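
The counterfactual idea can be illustrated with a minimal Monte Carlo sketch: sample hypothetical attack sizes and footprints over a set of substations, then build an impact distribution from the samples. All figures below are illustrative placeholders, not the London case‐study data.

```python
import random

# Minimal sketch of stochastic counterfactual sampling of cyber-physical
# attack impacts on a synthetic set of distribution substations.

random.seed(1)
substations = [{"customers": random.randint(20_000, 400_000)} for _ in range(30)]

losses = []
for _ in range(10_000):
    n_hit = random.randint(1, 10)                  # counterfactual attack size
    hit = random.sample(substations, n_hit)        # counterfactual footprint
    outage_hours = random.uniform(3, 24)
    losses.append(sum(s["customers"] for s in hit) * outage_hours)

losses.sort()
print(f"median customer-hours lost: {losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:            {losses[int(0.95 * len(losses))]:,.0f}")
```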

Introduction to Resilience Analytics for Cyber–Physical–Social Networks

5 September 2019 - 2:36pm
Risk Analysis, Volume 39, Issue 9, Page 1867-1869, September 2019.

Issue Information ‐ TOC

5 September 2019 - 2:36pm
Risk Analysis, Volume 39, Issue 9, September 2019.

The Psychophysics of Terror Attack Casualty Counts

4 September 2019 - 3:21pm
Abstract

In communicating the risk that terror attacks pose to the public, government agencies and other organizations must understand which characteristics of an attack contribute to the public's perception of its severity. An attack's casualty count is one of the most commonly used metrics of a terror attack's severity, yet it is unclear whether the public responds to information about casualty count when forming affective and cognitive reactions to terror attacks. This study sought to characterize the “psychophysical function” relating terror attack casualty counts to the severity of the affective and cognitive reactions they elicit. We recruited n = 684 Mechanical Turk participants, who read a realistic vignette depicting either a biological or radiological terror attack with a death toll ranging from 20 to 50,000 and rated their levels of fear and anger along with the attack's severity. Even when controlling for the perceived plausibility of the scenarios, participants’ severity ratings of each attack were logarithmic with respect to casualty count, while ratings of fear and anger did not significantly depend on casualty count. These results were consistent across attack weapon (biological vs. radiological) and time horizon of the casualties (same‐day or anticipated to occur over several years). These results complement past work on life loss valuation and highlight a potential bifurcation between the public's affective and cognitive evaluations of terror attacks.
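
A logarithmic psychophysical function of this kind can be fit in a few lines; the ratings below are synthetic stand‑ins just to show the shape of the model, not the study's data.

```python
import numpy as np

# Minimal sketch: severity ratings modeled as linear in log10(casualty count).

casualties = np.array([20, 200, 2_000, 20_000, 50_000])
severity = np.array([4.1, 5.0, 5.8, 6.9, 7.2])        # e.g., on a 1-9 scale

b, a = np.polyfit(np.log10(casualties), severity, deg=1)  # slope, intercept
print(f"severity ~ {a:.2f} + {b:.2f} * log10(casualties)")
```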

Modeling Pathology Workload and Complexity to Manage Risks and Improve Patient Quality and Safety

2 September 2019 - 12:39pm
Abstract

Anatomic pathology (AP) laboratories provide critical diagnostic information that helps determine patient treatments and outcomes, but the risks of AP operations and their impact on patient safety and quality of care remain poorly recognized and undermanaged. Hospital‐based laboratories face an operational and risk management challenge because clinical work of unknown quantity and complexity arrives with little advance notice, which results in fluctuations in workload that can push operations beyond planned capacity, leading to diagnostic delays and potential errors. Modeling the dynamics of workload and complexity in AP offers the opportunity to make better use of available information to manage risks. We developed a stock‐and‐flow model of a typical AP laboratory operation and identified key exogenous inputs that drive AP work. To test the model, we generated training and validation data sets by combining data from the electronic medical records and laboratory information systems over multiple years. We demonstrate the implementation of a 10‐day AP work forecast generated on a daily basis and show its performance in comparison with actual work. Although the model as currently implemented somewhat underpredicts work, it provides a framework for prospective management of resources to ensure quality during workload surges. Although full implementation requires additional model development, we show that AP workload largely depends on a few accessible clinical inputs. Recognizing that level loading of work in a hospital is not practical, predictive modeling of work can empower laboratories to triage, schedule, or mobilize resources more effectively and better manage risks that reduce the quality or timeliness of diagnostic information.
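
The stock‐and‐flow logic can be sketched as a one‐stock backlog model: specimens arrive (inflow), accumulate in a backlog (the stock), and are processed up to daily capacity (outflow). Arrival counts and capacity below are illustrative, not the hospital's data.

```python
# Minimal sketch of a one-stock laboratory workload model over a 10-day horizon.

arrivals = [120, 95, 180, 210, 90, 40, 30, 150, 160, 140]   # cases/day
capacity = 130                                              # cases/day

backlog = 0
for day, arriving in enumerate(arrivals, start=1):
    available = backlog + arriving          # inflow adds to the stock
    processed = min(available, capacity)    # outflow is capacity-limited
    backlog = available - processed         # the stock carries over
    print(f"day {day:2d}: processed {processed:3d}, backlog {backlog:3d}")
```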

Disaster Risk Management Policies and the Measurement of Resilience for Philippine Regions

30 August 2019 - 4:09pm
Abstract

How can a government prioritize disaster risk management policies across regions and types of interventions? Using an economic model to assess welfare risk and resilience to disasters, this article systematically tackles three questions: (1) How much asset and welfare risk does each region in the Philippines face from riverine flood disasters? (2) How resilient is each region to riverine flood disasters? (3) What interventions could strengthen each region's resilience to riverine flood disasters, and what would their measured benefits be? We study the regions of the Philippines to demonstrate the channels through which macroeconomic asset and output losses from disasters translate into consumption and welfare losses at the microeconomic level. Apart from the regional prioritizations, we identify a menu of policy options ranked according to their effectiveness in increasing resilience and reducing welfare risk from riverine floods. The ranking of priorities varies across regions with different levels of expected value at risk. This suggests that there are region‐specific conditions and drivers that need to be integrated into policy considerations and decisions so that they are effectively addressed.
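
One common way to connect these quantities, following the asset/welfare risk decomposition popularized by Hallegatte and colleagues, is shown below as a generic illustration, not necessarily the article's exact formulation.

```latex
% Socioeconomic resilience links asset-level risk to welfare-level risk:
% the more resilient a region, the smaller the welfare loss per unit of
% asset loss.
\[
  \text{socioeconomic resilience} \;=\;
  \frac{\text{expected asset losses}}{\text{expected welfare losses}},
  \qquad
  \text{welfare risk} \;=\; \frac{\text{asset risk}}{\text{resilience}} .
\]
```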

Evaluation of Multicriteria Decision Analysis Algorithms in Food Safety: A Case Study on Emerging Zoonoses Prioritization

30 August 2019 - 4:03pm
Abstract

Decision making in food safety is a complex process that involves several criteria of a different nature, such as the expected reduction in the number of illnesses, the potential economic or health‐related cost, or the environmental impact of a given policy or intervention. Several multicriteria decision analysis (MCDA) algorithms are currently used in food safety, mostly individually, to rank different options in a multifactorial environment. However, the selection of the MCDA algorithm is a decision problem in its own right, because different methods calculate different rankings. The aim of this study was to compare the impact of different sources of uncertainty on the rankings of MCDA problems in the context of food safety. For that purpose, a previously published data set on emerging zoonoses in the Netherlands was used to compare five MCDA algorithms: MMOORA, TOPSIS, VIKOR, WASPAS, and ELECTRE III. The rankings were calculated with and without considering uncertainty (using fuzzy sets) to assess the importance of this factor. The rankings differed between algorithms, emphasizing that the choice of MCDA method has a relevant impact on the results. Furthermore, considering uncertainty in the ranking had a strong influence on the results. Both factors were more relevant in this case study than the weights associated with each criterion. A hierarchical clustering method was suggested to aggregate the results obtained by the different algorithms. This complementary step seems to be a promising way to reduce extreme differences among algorithms and could provide strong added value in the decision‐making process.
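
As a concrete example of one of the compared algorithms, here is a minimal TOPSIS sketch; the decision matrix (three alternatives by three criteria) and weights are illustrative placeholders, not the Dutch emerging‐zoonoses data set.

```python
import numpy as np

# Minimal sketch of TOPSIS: rank alternatives by closeness to the ideal point.

X = np.array([[7.0, 120.0, 0.3],       # rows: alternatives, cols: criteria
              [5.0, 300.0, 0.6],
              [9.0,  80.0, 0.2]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # True = maximize, False = minimize

V = weights * X / np.linalg.norm(X, axis=0)          # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)            # distance to ideal
d_neg = np.linalg.norm(V - anti, axis=1)             # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness))
```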

Proximity (Mis)perception: Public Awareness of Nuclear, Refinery, and Fracking Sites

27 August 2019 - 2:13pm
Abstract

Whether on grounds of perceived safety, aesthetics, or overall quality of life, residents may wish to be aware of nearby energy sites such as nuclear reactors, refineries, and fracking wells. Yet people are not always accurate in their impressions of proximity. Indeed, our data show that only 54% of Americans living within 25 miles of a nuclear site say they do, and even fewer fracking‐proximal (30%) and refinery‐proximal (24%) residents respond accurately. In this article, we analyze factors that could either help people form more accurate perceptions or distort their impressions of proximity. We evaluate these hypotheses using a large national survey sample and corresponding geographic information system (GIS) data. Results show that among those living in close proximity to energy sites, those who perceive greater risk are less likely to report living nearby. Conversely, social contact with employees of these industries increases perceived proximity regardless of actual distance. These relationships are consistent across each site type we examine. Other potential factors—such as local news use—may play a role in proximity perception on a case‐by‐case basis. Our findings are an important step toward a more generalizable understanding of how the public forms perceptions of proximity to risk sites, showing multiple potential mechanisms of bias.

Evaluating and Visualizing the Economic Impact of Commercial Districts Due to an Electric Power Network Disruption

23 August 2019 - 7:00pm
Abstract

Critical infrastructure networks enable social behavior, economic productivity, and the way of life of communities. Disruptions to these cyber–physical–social networks highlight their importance. Recent disruptions caused by natural phenomena, including Hurricanes Harvey and Irma in 2017, have particularly demonstrated the importance of functioning electric power networks. Assessing the economic impact (EI) of electricity outages after a service disruption is a challenging task, particularly when interruption costs vary by the type of electric power use (e.g., residential, commercial, industrial). In contrast with most of the literature, this work proposes an approach to spatially evaluate the EIs of disruptions to particular components of the electric power network, thus enabling resilience‐based preparedness planning from economic and community perspectives. Our contribution is a mixed‐method approach that combines EI evaluation, component importance analysis, and GIS visualization for decision making.

We integrate geographic information systems and an economic evaluation of sporadic electric power outages to provide a tool to assist with prioritizing the restoration of power in the commercial areas where outages have the largest impact. Making use of public data describing commercial market value, gross domestic product, and electric area distribution, this article proposes a method to evaluate the EI experienced by commercial districts. A geospatial visualization is presented to observe and compare the areas that are most vulnerable in terms of EI, based on the areas covered by each distribution substation. Additionally, a heat map is developed to observe the behavior of disrupted substations and to identify the component exhibiting the highest EI. The proposed resilience analytics approach is applied to analyze outages of substations in the boroughs of New York City.
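
The kind of per‐substation EI calculation the approach describes can be sketched simply: multiply the GDP density of the commercial area a substation serves by the area affected and the outage duration, then rank. The names and values below are illustrative, not New York City data.

```python
# Minimal sketch of ranking substations by the economic impact (EI) of an
# outage; all substation parameters are hypothetical placeholders.

substations = {
    "sub_A": {"gdp_per_km2_hr": 2.4e6, "served_km2": 3.1, "outage_hr": 6.0},
    "sub_B": {"gdp_per_km2_hr": 0.9e6, "served_km2": 7.5, "outage_hr": 6.0},
    "sub_C": {"gdp_per_km2_hr": 3.8e6, "served_km2": 1.2, "outage_hr": 6.0},
}

ei = {name: s["gdp_per_km2_hr"] * s["served_km2"] * s["outage_hr"]
      for name, s in substations.items()}
for name in sorted(ei, key=ei.get, reverse=True):   # restoration priority order
    print(f"{name}: ${ei[name]:,.0f}")
```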

Impact of Water Level Rise on Urban Infrastructures: Washington, DC, and Shanghai as Case Studies

23 August 2019 - 12:32pm
Abstract

The observed global sea level rise owing to climate change, coupled with the potential increase in extreme storms, requires a reexamination of existing infrastructure planning, construction, and management practices. Storm surge makes the effects of rising sea levels immediately apparent. The recent superstorms that hit the United States (e.g., Hurricane Katrina in 2005, Sandy in 2012, Harvey and Maria in 2017) and China (e.g., Typhoon Haiyan in 2013) inflicted serious loss of life and property. Water level rise (WLR) in local coastal areas is a combination of sea level rise, storm surge, precipitation, and local land subsidence. Quantitative assessments of the impact of WLR include scenario identification, consequence assessment, vulnerability and flooding assessment, and risk management, using an inventory of assets in coastal areas, particularly population centers, to manage flooding risk and to enhance the infrastructure resilience of coastal cities. This article discusses the impact of WLR on urban infrastructures with case studies of Washington, DC, and Shanghai. Based on a flooding risk analysis under possible scenarios, the property loss for Washington, DC, was evaluated, and the impact on the Shanghai metro system was examined.
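
Written out, the abstract's decomposition of local water level rise is simply additive; the component symbols here are illustrative notation, assuming the contributions combine linearly.

```latex
% Local water level rise as the sum of its contributing components.
\[
  \mathrm{WLR} \;=\; \Delta h_{\text{sea level}}
        \;+\; \Delta h_{\text{storm surge}}
        \;+\; \Delta h_{\text{precipitation}}
        \;+\; \Delta h_{\text{land subsidence}} .
\]
```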

Recognizing Structural Nonidentifiability: When Experiments Do Not Provide Information About Important Parameters and Misleading Models Can Still Have Great Fit

23 August 2019 - 12:30pm
Abstract

In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
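
As a toy demonstration of how a profile likelihood exposes structural nonidentifiability, consider a model whose likelihood depends on two parameters only through their product: the profiled likelihood over either parameter is flat. This is an entirely synthetic example, not the article's QMRA dose–response models.

```python
import numpy as np

# Minimal sketch: a Poisson model with mean a*b, so only the product is
# identifiable. The profile negative log-likelihood over a is flat.

rng = np.random.default_rng(0)
y = rng.poisson(lam=6.0, size=50)          # data generated with a*b = 6

def neg_log_lik(a, b):
    lam = a * b
    return lam * len(y) - np.log(lam) * y.sum()   # Poisson NLL up to a constant

# Profile over a: for each fixed a, minimize the NLL over b on a grid.
b_grid = np.linspace(0.01, 20, 2000)
for a in [0.5, 1.0, 2.0, 4.0]:
    profile = min(neg_log_lik(a, b) for b in b_grid)
    print(f"a = {a:4.1f}: profiled NLL = {profile:.3f}")   # flat => nonidentifiable
```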
