On the Gap between Epidemiological Surveillance and Preparedness
Abstract
Contemporary Epidemiological Surveillance (ES) relies heavily on data analytics. These analytics are critical input for pandemic preparedness networks; however, this input is not integrated into a form suitable for decision makers or experts in preparedness. A decision support system (DSS) with Computational Intelligence (CI) tools is required to bridge the gap between epidemiological models of evidence and expert group decisions. We argue that such a DSS shall be a cognitive dynamic system enabling the CI and the human expert to work together. The core of such a DSS must be based on machine reasoning techniques such as probabilistic inference, and shall be capable of estimating risks, reliability and biases in decision making.
Keywords: Epidemiological surveillance, preparedness network, decision support, probabilistic reasoning, risk, AI reliability, bias.
I Introduction
The World Health Organization distinguishes five levels of strategic preparedness for a pandemic outbreak [1]:
1. Community Transmission,
2. Local Transmission,
3. Imported cases,
4. High risk of imported cases,
5. Preparedness.
The entity that monitors, reports and develops the policies and strategies to prepare for response is usually called a preparedness network. An example of an attempt to create such an infrastructure was the European project S-HELP (Securing-Health Emergency Learning Planning, 2014-2017), which, however, was never reported to have materialized into a functioning system.
Epidemiological Surveillance (ES) is a part of any local, national or global epidemic preparedness network such as the Global Influenza Surveillance and Response System (GISRS) [2], the Communicable Diseases Network Australia (CDNA) and the Commonwealth’s National Notifiable Diseases Surveillance System (NNDSS). The ES has means to assess a state of epidemic threat in order to introduce suitable countermeasures. In GISRS, for example, the public health decision-making is based on assessing pandemic severity and transmissibility score as guided by Pandemic Severity Assessment Framework [3]. Early in a pandemic, when limited data are available, scores are rated as low-moderate or moderate-high. As additional data become available, transmissibility is assigned a value within a range of 1–5, and severity is varied within the range 1–7. These characterize the potential pandemic impact in relation to previous pandemics or seasonal epidemics. The further assessment of impact on response and mitigation of an emerging pandemic is not generally supported by any Computational Intelligence (CI) tool.
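As a toy illustration of how such scores might be combined, the sketch below maps a PSAF-style (transmissibility, severity) pair onto an impact category. The quadrant thresholds are our own illustrative assumptions, not part of the framework, which positions a pandemic on a two-dimensional assessment grid rather than applying a formula.

```python
def psaf_impact(transmissibility: int, severity: int) -> str:
    """Toy classifier on a PSAF-style grid.

    Transmissibility is scored 1-5 and severity 1-7, as in the Pandemic
    Severity Assessment Framework; the cut-offs below are illustrative
    assumptions only.
    """
    if not (1 <= transmissibility <= 5 and 1 <= severity <= 7):
        raise ValueError("scores out of range")
    high_t = transmissibility >= 3   # assumed cut-off
    high_s = severity >= 4           # assumed cut-off
    if high_t and high_s:
        return "high impact"
    if high_t or high_s:
        return "moderate impact"
    return "low impact"
```

A DSS would replace such fixed thresholds with inference over evidence, as discussed in the rest of this paper.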

In biostatistics, various ES models have been developed and used by statisticians and epidemiologists. However, they generally have not been translated into the practice of pandemic preparedness. The ES models can predict pandemic mortality and morbidity, but they do not account for multiple variables and uncertainties, such as the reliability of surveyed data or the credibility of the survey or testing technology, and they are not presented in a form that enables their use in pandemic response or mitigation.
It is known that pandemic assessment “depends on rapid availability of treatment, clinical support, and vaccines” [3], and those variables are not taken into account in the known statistical ES models. Some ES models can be “stratified” by age, gender or other variables, but they do not provide any causal analysis. In other words, no decision support tool that would provide meta-analysis and assess risks in the preparedness context is readily available.
The COVID-19 outbreak unveiled critical disadvantages of the existing ES models in providing support for preparedness systems. Important lessons learnt from these failures are as follows:
1. Outcomes of the ES models need to be further translated to become usable.
2. Experts and preparedness network (PN) management teams require computational intelligence (CI) support in the decision-making process, integrated with the ES model.
At a system level, these disadvantages manifest themselves as a technology gap, as illustrated in Fig. 1.
In terms of a community, city or country preparedness as a whole, many more factors come into consideration when implementing epidemic preparedness. An Epidemic Preparedness Index (EPI) developed in [4] includes 23 indicators grouped into five sub-indexes:
1. Public Health Infrastructure,
2. Physical Infrastructure,
3. Institutional Capacity,
4. Economic Resources,
5. Public Health Communication.
This rating measures the national-level preparedness for major outbreaks of infectious diseases, based on various factors related to the healthcare system.
A DSS to be used in an Epidemic Preparedness system at the community, city or country level shall include a system of DSSs, each providing support to the experts who manage various community infrastructures and resources, such as those listed in [4].
This approach is based on the concept of group decision making [53, 54], illustrated in Fig. 2. Given evidence and a group of experts, each expert is supported by a DSS; the overall task is to support the experts in group decision-making.

We state that each expert needs a specific-purpose DSS related to their respective field of expertise. Given data from the epidemiological model, the output of each DSS is a result of dynamic evidential reasoning. The general principles of building such a DSS, as outlined in this paper, shall be applied to build the aforementioned hierarchy of DSSs for better managing future epidemics and pandemics.
II Contribution
Uncertainty in a developing epidemic or pandemic is unavoidable; hence, experts require a CI tool that accounts for uncertainty. The answer lies in the use of probabilistic reasoning, namely, causal (Bayesian) networks that operate on probabilities and enable knowledge inference based on priors and evidence [5]. This approach has been applied to risk assessment in multiple areas of engineering and economics [56], risk profiling [7], identity management [8], precision medicine diagnostics [28] and, very recently, to the analysis of COVID-19 risks such as fatality and prevalence rates [57].
This paper advocates for a concept of a Decision Support System (DSS) for ES, with a CI core based on causal Bayesian networks. We define a DSS as a crucial bridging component to be integrated into the existing ES systems in order to provide situational awareness and help handle the outbreaks better.
The DSS concept has been developed for multiple applications; such systems were once known as “expert systems”. For example, paper [9] is still a useful guide for training experts and for DSS design: automation of reasoning and interpretation strategies is a way to extend experts’ abilities to apply their own strategies efficiently.
Special-purpose DSSs are a well-identified trend in the use of CI in many fields, including the ES. Examples of contemporary DSSs are personal health monitoring systems [10], e-coaching for health [11], security checkpoints [12, 13], and multi-factor authentication systems [14].
The usability of CI tools that support a team of practitioners, with or without a technical background, should be estimated using various measures alongside the performance of the decision-making tools. Risk, reliability, trust and bias are emerging “precaution” measures for the cognitive DSS. The performance of the cognitive DSS is evaluated in various dimensions:
- Technical, e.g. prediction accuracy and throughput [14],
In our study, we model the DSS as a complex multi-state dynamic system [13]. The risk assessment is an essential part of such a system [8]. The crucial idea of our approach is that risk, reliability, trust and bias should be estimated using reasoning mechanisms.
Some of the risk, reliability and trust projections have been studied in technical systems [15, 23] and social systems [31].
The failure of the existing ES to provide proper COVID-19 outbreak modeling and prediction for the preparedness infrastructure revealed significant gaps in the existing concept, design, deployment, and collaboration of the national and international preparedness networks. In this study, we identify a technological gap in the ES in both technical and conceptual domains (Fig. 1). Conceptually, the ES users require significant cognitive support using distributed computational intelligence tools. This paper addresses the key research question: How to bridge this gap using the DSS concept?
To answer this question, we have chosen the model of a DSS called a dynamic cognitive platform. We follow a well-identified trend in the academic discussion on future-generation DSSs [27]. Unfortunately, this is an important but only partial solution.
This paper makes further steps and contributes to the practice of bridging this technology gap. The key contribution is twofold:
1. A concept of a DSS for the ES with a CI core based on causal Bayesian networks;
2. Development of a complete spectrum of the risk and bias measures, including ES taxonomy updating.
These results are coherent with the solutions to the following related problems:
- The technology gap “pillars” in Fig. 1 are the protocol of the ES model and the protocol of the DSS. These protocols are different, e.g. virus spread behavior versus conditions of small business operation. The task is to convert the ES protocol into a DSS specification. The criterion of conversion efficiency is acceptance by an expert in the given field. A reasoning mechanism based on a causal network intrinsically performs this protocol conversion; we demonstrate this phenomenon in our experiments.
- The DSS supports an expert making decisions under uncertainty in a specific field of expertise. Specifically, intelligent computations help an expert better interpret uncertainty under chosen precautions. Risk and bias are used in this paper as precaution measures for different kinds of uncertainty related to ES data [24], testing tools, human factors, ES model tuning parameters, and artificial intelligence.
- Standardization at all levels of the ES of a preparedness network, including international links, is essential in combating epidemics and pandemics. For example, differing formats (protocols, standards) were identified as a significant obstacle during the COVID-19 pandemic [55]. An example of a taxonomy of information uncertainty in military situational awareness is the Admiralty Code used in NATO standards. We propose to apply NATO’s experience in this area to pandemic preparedness.
A DSS concept suitable for the ES model is proposed in this paper.
III Background
In the face of future pandemics, challenging problems in preparedness include:
1) Improvement of the models in order to provide more reliable recommendations that are acceptable for real-world scenarios, and
2) Mitigation of the “incorrectness” of pandemic models in the decision-making response.
We advocate for a solution that utilizes machine reasoning (knowledge extraction and inference mechanisms), a foremost machine intelligence technique, to provide information support to decision makers.
III-A Risk taxonomy
A cognitive DSS is a semi-automated system, which deploys CI to process the data sources and to assess risks or biases; this assessment is submitted to a human operator for the final decision.
The risk, reliability, trust and bias measures are used in ES in simple forms such as ‘high-risk group’, ‘risk factor’, and ‘systematic difference in the enrollment of participants’ [24]. However, this corpus of risk terms and definitions is limited, and experts expect more detailed assessments of epidemic scenarios from the cognitive DSS. For example, syndromic surveillance consists of real-time indicators for disease that allow for early detection. The DSS must support an expert with answers to the following questions: How risky is a given state of the ES with respect to a collapse of health care resources? Can experts rely on the collected data? What kinds of biases can be expected in data collection, algorithmic processing, and decision making?
Definition 1
Risk is a measure of the extent to which an entity is threatened by a potential circumstance or event, and typically is a function of: (i) the adverse impact, also called cost or magnitude of harm, that would arise if the circumstance or event occurs, and (ii) the likelihood of event occurrence [32].
For example, in automated decision making, and in our study, risk is defined as a function of the cost (or consequences) of a circumstance or event and its occurrence probability:

Risk = Cost × Probability of occurrence
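Definition 1 reduces to a one-line computation; the sketch below illustrates it with a hypothetical ES scenario whose cost and probability values are invented for illustration.

```python
def risk(cost: float, probability: float) -> float:
    """Risk as expected adverse impact: cost x probability (Definition 1)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return cost * probability

# Hypothetical scenario: ICU overflow with an adverse impact of 100
# (arbitrary cost units) and an occurrence probability of 0.2.
print(risk(100.0, 0.2))  # 20.0
```

In the DSS, the occurrence probability would be produced by causal inference rather than supplied directly.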
Definition 2
Bias in the cognitive DSS refers to the tendency of an assessment process to systematically over- or under-estimate the value of a population parameter.
For example, in the context of detecting or testing for an infectious disease, bias is related to the sampling approach (e.g. tests are performed on a proportion of cases only), the sampling methodology (systematic or random), and the chosen testing procedures or devices [25].
While all these biases are different, they are probabilistic in nature because the evidence and information gathered to make a decision is always incomplete, often inconclusive, frequently ambiguous, commonly dissonant, conflicting, and has various degrees of believability. Identifying and mitigating bias is essential for assessing decision risks and AI biases [16, 26, 27].
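A minimal simulation of the sampling bias described above: testing only symptomatic individuals systematically overestimates prevalence relative to a random sample of the same size. The prevalence and symptom probabilities are illustrative assumptions.

```python
import random

random.seed(1)

# Toy population: true prevalence 10%; infected individuals are
# symptomatic with probability 0.6, uninfected with probability 0.05.
# All rates are assumptions for illustration.
population = []
for _ in range(100_000):
    infected = random.random() < 0.10
    symptomatic = random.random() < (0.6 if infected else 0.05)
    population.append((infected, symptomatic))

# Biased sampling: test only the symptomatic.
tested = [p for p in population if p[1]]
biased = sum(p[0] for p in tested) / len(tested)

# Unbiased: a random sample of the same size.
sample = random.sample(population, len(tested))
unbiased = sum(p[0] for p in sample) / len(sample)

print(f"biased estimate:   {biased:.2f}")    # well above the true 0.10
print(f"unbiased estimate: {unbiased:.2f}")  # close to the true 0.10
```

The biased estimate recovers P(infected | symptomatic), not the population prevalence, which is exactly the systematic over-estimation of Definition 2.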
For the cognitive DSS to operate effectively on the human’s behalf, the system may need to use confidential or sensitive information of the users such as personal contact information [37]. The users and the operators must be confident that the cognitive DSS will do what they ask, and only what they ask. Human acceptance of the cognitive DSS technology is determined by the combination of the bias factors and risk factors [22, 23]. The contributing factors include belief, confidence, experience, certainty, reliability, availability, competence, credibility, completeness, and cooperation [38, 39]. In our approach, the causal inference platform calculates various uncertainty measures [7] in risk and bias assessment scenarios.
III-B Risk and Decision Reliability interpretation
Risk in the decision-making process manifests itself in various ways, such as the reliability of information sources (e.g. infection survey data) and the credibility of the information (e.g. testing procedure accuracy). This relationship can be represented as follows:
- Source reliability is the quality of being reliable, or trustworthy. It is related to 1) risk as a function of potential adverse impact and the likelihood of occurrence, 2) the confidence in quality, and 3) bias as a systematic over- or under-assessment of the parameter of interest.
- Information credibility is the reputation impacting one’s ability to be believed. In ES, in particular, it can be associated with the credibility of the infection testing technology.

These metrics of uncertainty closely resemble those known as the Admiralty Code defined in the NATO Standardization Agreements such as STANAG 2022 and STANAG 2511 [35, 33, 34] (Fig. 3). NATO uses the Admiralty Code to resolve conflicting scenarios in human-human, human-machine, and machine-machine interactions. The reliability of the DSS can be increased by using more reliable sources and more credible information, or it can be diminished by lowered reliability of the source and/or credibility of the information. For example, the scenario in Fig. 3 combines source reliability C <Fairly reliable> and information credibility 4 <Doubtfully true>. In this context, risk can be expressed in terms of the fair reliability of the data sources combined with the doubtful credibility of the information.
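The two Admiralty Code scales can be sketched as a small lookup. The scale labels follow the STANAG conventions; the rule deciding which (reliability, credibility) pairs are usable for decision-making is an illustrative assumption of ours, not part of the standard.

```python
# Admiralty Code scales: source reliability A-F, information credibility 1-6.
RELIABILITY = {"A": "Completely reliable", "B": "Usually reliable",
               "C": "Fairly reliable", "D": "Not usually reliable",
               "E": "Unreliable", "F": "Reliability cannot be judged"}
CREDIBILITY = {1: "Confirmed", 2: "Probably true", 3: "Possibly true",
               4: "Doubtfully true", 5: "Improbable",
               6: "Truth cannot be judged"}

def rate(source: str, info: int) -> str:
    """Classify an evidence item such as C4 (Fairly reliable, Doubtfully true).

    The usability threshold below is an assumed decision rule for
    illustration only."""
    if source not in RELIABILITY or info not in CREDIBILITY:
        raise ValueError("unknown Admiralty rating")
    usable = source in "ABC" and info <= 3   # assumed threshold
    return "usable for decision-making" if usable else "high-risk"

print(rate("C", 4))  # high-risk
```

In an ES setting, such a rating could label each incoming data stream (survey data, test results) before it enters the causal network.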
In this study, we argue that similar standards shall be developed in ES practice. Defining measures of uncertainty in the ES will make it possible to formalize reasoning under uncertainty, and to define the variables to be included in causal networks along with the associated probabilities. In addition, the various scenarios developing in epidemic situations can be characterized by various levels of uncertainty.
Fig. 4 explains the DSS mechanism for assessing different scenarios, called the system states. For example, consider the states of the ES. The DSS analysis according to the Admiralty Code (Fig. 3) results in the following decision-making landscape:
States that combine sufficient source reliability and information credibility can be used for decision-making; the remaining states result in high-risk decisions.
IV Cognitive DSS for ES
Decisions made by human experts vary with their knowledge structure, self-interest and background; thus the emergence of preference conflicts is inevitable. To ensure the effectiveness of emergency decision-making support, it is imperative to construct a consensus process that reduces and removes preference conflicts prior to decision making. This process aligns well with the concept of the cognitive DSS.
IV-A Elements of a cognitive system
A cognitive DSS for ES support is a complex dynamic system with the following elements of a cognitive system [29]:
- Perception-action cycle that enables information gain [30];
- Memory distributed across the entire system (personal data are collected in the physical and virtual worlds);
- Attention, driven by memory to prioritize the allocation of available resources; and
- Intelligence, which builds upon perception, memory and attention to choose decisions and actions [29].
In the most generic sense, the perception-action cycle of a cognitive dynamic system consists of an actuator, an environment, and a perceptor that are embodied in a feedback loop [29]:
- The actuator is represented by the expert team and their DSS; they make decisions and take actions using the available perceived information.
- The environment is represented by the epidemiological evidence (real or modeled).
- The perceptor is represented by the ES, which observes the environment and delivers the perceived information back to the experts, closing the feedback loop.
IV-B DSS Design Flow
The DSS design process, including performance evaluation, is shown in Fig. 5. Important parts of this process include measuring the “precaution” characteristics of the models used to quantify uncertainty, as well as the DSS precaution measures, i.e. risks and biases with respect to the recommended decisions.
IV-C Fundamental DSS operations
In our work, given a certain causal network platform, the reasoning operations are defined as follows:
1) State assessment, such as the risk of the data source being unreliable. In modeling, risk is represented by a corresponding probability distribution function.
2) Causal analysis is based on the “cause-effect” paradigm. In particular, Granger causality analysis is an advanced tool for this purpose [47].
3) Risk, reliability, trust and bias learning. In [48], an approach to learning the trust model has been proposed.
4) Risk, trust and bias propagation. The risk propagation problem was studied, in particular, as a multi-echelon supply chain problem in [52]. In [49], the trustworthiness is propagated through a three-layer model consisting of a source layer, evidence layer, and claim layer. The quality of evidence was used to compute the trustworthiness of sources. Trust can be propagated through a subjective network [36].
5) Reasoning is the ability to form an intelligent conclusion or judgment using the evidence. Causal reasoning is a judgment under uncertainty based on a causal network [5].
6) Risk, reliability, trust and bias adjustment (or calibration) aims to improve the confidence of the risk assessment. For example, negative testing in a symptomatic patient may result in a high risk indicator, but can be later adjusted using additional testing.
7) Risk, reliability, trust and bias mitigation. In most scenarios, the result of intelligent risk processing is a reduction in risk. Risk is lowered via mitigation measures and periodical re-assessment in ongoing screening processes.
8) Risk prediction. In complex systems, meta-recognition, meta-learning, and meta-analysis can be used to predict the overall success (correct assessment of the risk) or failure (incorrect one). The most valuable information for such risk assessment is in the “tails” of the probabilistic distributions [50, 51].
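Operation 6 (adjustment) can be sketched as sequential Bayesian updating of an infection probability: a symptomatic patient with a high prior risk has that risk lowered, but not eliminated, by successive negative tests. The prior, sensitivity and specificity values below are assumptions for illustration.

```python
def update(prior: float, positive: bool,
           sens: float = 0.85, spec: float = 0.98) -> float:
    """One Bayesian update of infection probability given a test result.

    Sensitivity/specificity figures are illustrative assumptions."""
    like_inf = sens if positive else 1.0 - sens        # P(result | infected)
    like_not = 1.0 - spec if positive else spec        # P(result | not infected)
    evidence = like_inf * prior + like_not * (1.0 - prior)
    return like_inf * prior / evidence

p = 0.50                       # symptomatic patient: high prior risk
p = update(p, positive=False)  # one negative test lowers the risk
print(round(p, 3))             # 0.133
p = update(p, positive=False)  # a second negative test adjusts it further
print(round(p, 3))             # 0.023
```

This is the mechanism behind the example in operation 6: a single negative test leaves a non-negligible residual risk that additional testing calibrates.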
V Reasoning mechanism
V-A Causal network
A causal network is a directed acyclic graph in which each node denotes a unique random variable. A directed edge from node X to node Y indicates that the value attained by X has a direct causal influence on the value attained by Y. Uncertainty inference requires data structures that will be referred to as Conditional Uncertainty Tables (CUTs). A CUT is assigned to each node in the causal network. Given a node X, the CUT assigned to X is a table indexed by all possible value assignments to the parents of X. Each entry of the table is a conditional “uncertainty model” that varies according to the choice of the uncertainty metric.
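In the Bayesian special case, where each CUT entry is a conditional probability distribution, the structure can be sketched as a table indexed by parent assignments. The node names and probabilities below are invented for illustration.

```python
from itertools import product

def make_cut(parents, values, dist):
    """Build a CUT: a table indexed by every assignment to the parents.

    Each entry is a conditional "uncertainty model"; here, a probability
    distribution over the node's own values (the Bayesian special case)."""
    return {assignment: dist(dict(zip(parents, assignment)))
            for assignment in product(*(values[p] for p in parents))}

# Hypothetical node PositiveTests with parents Outbreak and Season.
values = {"Outbreak": (True, False), "Season": ("winter", "summer")}
cut = make_cut(
    ("Outbreak", "Season"), values,
    # Assumed numbers: positivity is driven mainly by Outbreak here.
    lambda a: {"high": 0.8, "low": 0.2} if a["Outbreak"]
              else {"high": 0.1, "low": 0.9},
)
print(cut[(True, "winter")])  # one conditional distribution per assignment
```

For other uncertainty metrics (interval, Dempster-Shafer, fuzzy), the `dist` callback would return the corresponding uncertainty model instead of a probability distribution.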
Analysis of a causal network is out of the scope of this paper. However, we introduce in this paper the systematic criteria for choosing the appropriate computational tools. In addition, some details are clarified in our experimental study.
V-B Types of causal networks
There are multiple approaches to performing cause-and-effect analysis under uncertainty using the causal network structure. Depending on the type of uncertainty measure, the causal networks can be divided into [45]:
Bayesian [5], Interval [41], Imprecise [40], Credal [42], Dempster-Shafer [43], Fuzzy [44], and Subjective [36].
The type of a causal network shall be chosen given the DSS model and a specific scenario. The choice depends on the CUT as the carrier of primary knowledge appropriate to the scenario. There have been several attempts to provide researchers with “guidelines” for choosing the best causal network platform based on the CUT. Comparing causal computational platforms for modeling various systems is a useful strategy, e.g. Dempster-Shafer vs. credal networks [46], and Bayesian vs. interval vs. Dempster-Shafer vs. fuzzy networks [7, 8].
V-C Bayesian causal network
The motivation for choosing Bayesian causal networks is as follows:
- A DSS can be described in causal form using cause-and-effect analysis.
- The Bayesian (probabilistic) interpretation of uncertainty provides acceptable reliability for decision-making.
Bayesian decision-making is based on evaluating a posterior probability given a prior probability and a likelihood (the probability of the evidence given some history of previous events).
Let the nodes of a graph represent random variables and the links between the nodes represent direct causal dependencies. A Bayesian causal network is based on a factored representation of joint probability distributions in the form

P(x_1, x_2, …, x_n) = ∏_{i=1}^{n} P(x_i | Pa(X_i)),

where Pa(X_i) denotes the set of parent nodes of the random variable X_i. The nodes outside Pa(X_i) are conditionally independent of X_i. Hence, the Bayesian network has a structural part reflecting causal relationships, and a probability part reflecting the strengths of these relationships. Factoring techniques have been applied to the construction of the Bayesian network.
The posterior probability P(x | e) of a variable value x given evidence e is called the belief for x, Bel(x). The probability P(e | x) is called the likelihood of x given e and is denoted λ(x).
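The factored representation can be sketched on a three-node chain. The network (Outbreak → Detected → Alert) and its probabilities are invented for illustration, and the belief is computed by brute-force enumeration rather than an efficient inference algorithm.

```python
# Factored joint P(o, d, a) = P(o) P(d | o) P(a | d) for the assumed
# chain Outbreak -> Detected -> Alert; all numbers are illustrative.
P_o = {True: 0.1, False: 0.9}                                   # P(o)
P_d = {True: {True: 0.7, False: 0.3},
       False: {True: 0.05, False: 0.95}}                        # P(d | o)
P_a = {True: {True: 0.9, False: 0.1},
       False: {True: 0.2, False: 0.8}}                          # P(a | d)

def joint(o: bool, d: bool, a: bool) -> float:
    return P_o[o] * P_d[o][d] * P_a[d][a]

bools = (True, False)
# The factored joint sums to 1 over all assignments:
total = sum(joint(o, d, a) for o in bools for d in bools for a in bools)
print(round(total, 10))  # 1.0

# Belief Bel(o) = P(o = True | a = True) by enumeration:
num = sum(joint(True, d, True) for d in bools)
den = sum(joint(o, d, True) for o in bools for d in bools)
print(round(num / den, 3))  # 0.246
```

Observing an alert raises the belief in an outbreak from the prior 0.1 to roughly 0.25; real inference engines obtain the same posterior without enumerating every assignment.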
V-D Reasoning for infection outbreak and impact prediction
The presence of epidemic/pandemic uncertainties is unavoidable; hence, experts require a CI approach that accounts for uncertainty. Probabilistic reasoning on causal (Bayesian) networks, which enables knowledge inference based on priors and evidence, has been applied to diagnostics for precision medicine [28].
Most recently, COVID-19 test-specific risk analysis was performed in [57]: the Bayesian inference was applied to learn the proportion of population with or without symptoms from observations of those tested along with observations about testing accuracy.
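A simple frequentist counterpart of that Bayesian estimation is the classical Rogan-Gladen correction of apparent prevalence for test accuracy. The sketch below uses assumed accuracy figures and is not the method of [57], which performs full Bayesian inference.

```python
def rogan_gladen(apparent: float, sens: float, spec: float) -> float:
    """Correct the apparent (test-positive) prevalence for an imperfect
    test with the given sensitivity and specificity (Rogan-Gladen)."""
    est = (apparent + spec - 1.0) / (sens + spec - 1.0)
    return min(max(est, 0.0), 1.0)  # clamp to a valid proportion

# Assumed figures: 8% of tests positive, sensitivity 0.85, specificity 0.98.
print(round(rogan_gladen(0.08, 0.85, 0.98), 3))  # 0.072
```

The correction shows why raw positivity rates are a biased prevalence estimate; a Bayesian treatment additionally yields credible intervals around the corrected value.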

An example of a simplified causal network that can serve as a framework for risk inference in the context of preparedness is shown in Fig. 6. The factors affecting the risk can be evaluated and inferred for each considered scenario. Derivatives such as a Preparedness Index can be obtained in a similar fashion.
Note that this is only a fragment of a causal network that may be used in the DSS to assess risks related to the outbreak itself. A DSS to be used in an Epidemic Preparedness system shall include a system of DSSs. Each expert needs a specific-purpose DSS related to their respective field of expertise. Given data from the epidemiological model, the output of each DSS is a result of dynamic evidential reasoning. The general principles of building such a DSS, as outlined in this paper, shall be applied to build the aforementioned hierarchy of DSSs for better managing future epidemics and pandemics.
VI Conclusions and Future Work
As depicted in Fig. 1, there is a technology gap between the imperfect ES model and the human expert’s limited ability to handle the uncertainty conveyed by the model while striving to make reliable decisions. Current efforts regarding epidemiological models focus on the following:
- Increase information gain about the state of an epidemiological threat using perception-action mechanisms.
- Improve the approximation of joint probability distributions of epidemiological factors; causal networks offer such possibilities, including proactive ones.
- Create a library of predictive virus behavior based on genome studies and deep learning methods [59].
We propose a general DSS model as a cognitive dynamic system with an embedded reasoning mechanism using a causal Bayesian network. An additional benefit of this approach is that there is no need for special tools to convert the model outcome into recommended decisions.
An open applied problem concerns a “rational” partitioning of the model outcome into an ensemble of causal networks. This is because, in a preparedness network, a group of experts from different fields aims to make a decision and reach a certain consensus. Each expert needs decision support in the respective area, such as transportation, hospital readiness, health care, educational institutions, police, countering cyber attacks, countering bioterrorism, etc. The DSS and human experts’ decisions are causally related, correlated, and possibly conflicting. This is the field of group decision making [53, 58, 54].
References
- [1] 2019 Novel Coronavirus: Strategic Preparedness and Response Plan, World Health Organization, March 2020.
- [2] World Health Organization, “Global Influenza Surveillance and Response System (GISRS),” [Online]. Available: https://www.who.int/influenza/gisrs_laboratory/en/. [Accessed: 29-Jun-2020].
- [3] C. Reed, M. Biggerstaff, L. Finelli, et al., Novel framework for assessing epidemiologic effects of influenza epidemics and pandemics, Emerg. Infect. Dis., vol. 19, no. 1, 2013, pp. 85–91.
- [4] B. Oppenheim, M. Gallivan, et al, Assessing global preparedness for the next pandemic: Development and application of an Epidemic Preparedness Index, BMJ Global Health, vol. 4, 2019, pp.1–9.
- [5] J. Pearl, The Seven Tools of Causal Inference, with Reflections on Machine Learning, Communication of the ACM, vol. 62, no. 3, 2019, pp. 54–60.
- [6] J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible inference, Morgan Kaufmann, 1988.
- [7] S. Yanushkevich, S. Eastwood, M. Drahansky, V. Shmerko, Understanding and taxonomy of uncertainty in modeling, simulation, and risk profiling for border control automation, J. Defense Modeling and Simulation: Applications, Methodology, Technology, Special Issue on Model-Driven Paradigms for Integrated Approaches to Cyber Defense - Part I, vol.15, no. 1, 2018, pp. 95–109.
- [8] S. Yanushkevich, W. Howells, K. Crockett, et al., Cognitive Identity Management: Risks, Trust and Decisions using Heterogeneous Sources, Proc. IEEE Int. Conf. Cognitive Mach. Intel., Los Angeles, 2019.
- [9] W. W. Zachary and J. M. Ryder, Decision Support Systems: Integrating Decision Aiding and Decision Training. In M. Helander, et al. (eds.), Handbook of Human-Computer Interaction, 1997, Elsevier, Amsterdam, pp. 1235–1258.
- [10] Y. Andreu, F. Chiarugi, S. Colantonio, et al., Wize Mirror – a smart, multisensory cardio-metabolic risk monitoring system, Computer Vision and Image Understanding, vol. 148, 2016, pp. 3–22.
- [11] S. F. Ochoa and F. J. Gutierrez, Architecting E-Coaching Systems: A First Step for Dealing with Their Intrinsic Design Complexity, Computer, March 2018, pp. 16–23.
- [12] R. D. Labati, A. Genovese, E. Munoz, V. Piuri, F. Scotti, and G. Sforza, Biometric Recognition in Automated Border Control: A Survey, ACM Comp. Surv., vol.49, no.2, 2016, pp. A1-A39.
- [13] S. Yanushkevich, K. Sundberg, N. Twyman, R. Guest, and V. Shmerko, Cognitive checkpoint: Emerging technologies for biometric-enabled watchlist screening, Comp. and Security, vol. 85, 2019, pp. 372–385.
- [14] A. Roy and D. Dasgupta, A fuzzy decision support system for multifactor authentication, Soft Comput., vol. 22, 2018, pp. 3959–3981.
- [15] A. Andreou, O. Goga, and P. Loiseau, Identity vs. Attribute Disclosure Risks for Users with Multiple Social Profiles, Proc. IEEE/ACM Int. Conf. Adv. Soc. Net. Anal. and Mining, 2017, pp. 163–170.
- [16] M. Whittaker, et al., AI Now Report, New York University, 2018, https://ainowinstitute.org/AI_Now_2018_Report.pdf
- [17] D. Danks and A. J. London, Algorithmic Bias in Autonomous Systems, Proc. 26th Int. Joint Conf. Artificial Intel., 2017, pp. 4691–4697.
- [18] M. Hou, H. Zhu, M. C. Zhou, and R. Arrabito, Optimizing Operator-Agent Interaction in Intelligent Adaptive Interface Design, IEEE Transaction Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 2, 2011, pp. 161–178.
- [19] G. Montibeller and D. von Winterfeldt, Cognitive and Motivational Biases in Decision and Risk Analysis, Risk Analysis, vol. 35, no. 7, 2015, pp. 1230–1251.
- [20] S. M. Bellovin, P. K. Dutta, and N. Reitinger, Privacy and Synthetic Datasets, Stanford Tech. Law Rev., vol. 22, no. 1, 2019, pp. 1–52.
- [21] European Union Agency for Fundamental Rights, Data quality and artificial intelligence - mitigating bias and error to protect, Publications Office of the EU, 2019.
- [22] D. Anand and K. K. Bharadwaj, Pruning trust-distrust network via reliability and risk estimates for quality recommendations, Social Network Analysis and Mining, vol. 3, 2013, pp. 65–84.
- [23] N. Feng, H. Wang, and M. Li, A security risk analysis model for information systems: Causal relationships of risk factors and vulnerability propagation analysis, Inf. Sci., vol. 256, 2014, pp. 57–73.
- [24] Glossary: Principles of Epidemiology in Public Health Practice, Third Edition. An Introduction to Applied Epidemiology and Biostatistics, Centers for Disease Control and Prevention, https://www.cdc.gov/csels/dsepd/ss1978/glossary
- [25] A manual for estimating disease burden associated with seasonal influenza, World Health Organization, 2015.
- [26] A. Gates, B. Vandermeer, L. Hartling, Technology-assisted risk of bias assessment in systematic reviews: a prospective cross-sectional evaluation of the RobotReviewer machine learning tool, J. Clinical Epidemiology, vol.96, 2018, pp. 54–62.
- [27] K. Lai, H. C. R. Oliveira, M. Hou, S. N. Yanushkevich, and V. Shmerko, Assessing Risks of Biases in Cognitive Decision Support Systems, Proc. 28th European Signal Processing Conf, Special Session “Bias in Biometrics”, Amsterdam, Netherlands, 2020.
- [28] R. A. Vinarti and L. M. Hederman, A personalized infectious disease risk prediction system, Expert Systems With Applications, vol. 131, 2019, pp. 266–274.
- [29] S. Haykin, Cognitive Dynamic Systems (Perception-Action Cycle, Radar, and Radio), New York: Cambridge University Press, 2012.
- [30] M. Hou, C. M. Burns, and S. Banbury, Intelligent adaptive systems: An interaction-centered design perspective. CRC Press, 2014.
- [31] L. C. Schaupp and L. Carter, The impact of trust, risk and optimism bias on E-file adoption, Inf. Syst. Front., vol. 12, 2010, pp. 299–309.
- [32] National Institute of Standards (NIST), Security and Privacy Controls for Information Systems and Organizations, NIST Special Publication 800-53, Revision 5, 2017.
- [33] Admiralty Code (2012) Joint Warfare Publication 2-00 Intelligence Support to Joint Operations, Joint Doctrine and Concepts Centre: Ministry of Defence (UK); http://webarchive.nationalarchives.gov.uk/ 20121026065214/ www.mod.uk/DefenceInternet/ MicroSite/ DCDC/ OurPublications/ JDWP/
- [34] U.S. Army Field Manual FM 2-22.3, Human Intelligence Collector Operations (Intelligence source and information reliability), Department of the Army, Washington, DC, 2006; http://en.wikipedia.org/wiki/Intelligence_source
- [35] E. Blasch, K. B. Laskey, A.-L. Jousselme, V. Dragos, P. C. G. Costa, and J. Dezert, URREF reliability versus credibility in information fusion (STANAG 2511), Proc. 16th Int. Conf. Information Fusion, 2013, pp. 1600–1607.
- [36] M. Ivanovska, A. Jøsang, L. Kaplan, and F. Sambo, Subjective Networks: Perspectives and Challenges, Proc. 4th Int. Workshop Graph Structures for Knowledge Representation and Reasoning, M. Croitoru et al. (Eds.), Springer, 2015, pp. 107–124.
- [37] G. G. Clavell, Protect rights at automated borders, Nature, vol. 543, issue 7643, March 2017.
- [38] R. Zhang and Y. Mao, Trust Prediction via Belief Propagation, ACM Trans. Inf. Sys., vol. 32, no. 3, Article 15, 2014.
- [39] J.-H. Cho, K. Chan, and S. Adali, A Survey on Trust Modeling, ACM Comp. Surv., vol. 48, no. 2, Article 28, 2015.
- [40] F. P. A. Coolen, M. C. M. Troffaes, and T. Augustin, Imprecise Probability, In: M. Lovric (ed.), International Encyclopedia of Statistical Science, Springer-Verlag, Berlin, Heidelberg, 2011.
- [41] L. M. De Campos, J. F. Huete, and S. Moral, Probability intervals: a tool for uncertain reasoning, Int. J. of Uncertainty, Fuzziness and Knowledge-Based Syst., vol. 2, no. 2, pp. 167–196, 1994.
- [42] F. G. Cozman, Credal networks, Artif. Intell., vol. 120, 2000, pp. 199–223.
- [43] C. Simon, P. Weber, and A. Evsukoff, Bayesian networks inference algorithm to implement Dempster-Shafer theory in reliability analysis, Reliab. Eng. and Syst. Safety vol. 93, 2008, pp. 950–963.
- [44] J. F. Baldwin and E. D. Tomaso, Inference and learning in fuzzy Bayesian networks, Proc. 12th IEEE Int. Conf. Fuzzy Syst., vol. 1, 2003, pp. 630–635.
- [45] J. Rohmer, Uncertainties in conditional probability tables of discrete Bayesian Belief Networks: A comprehensive review, Eng. Appl. Artif. Int. vol.88, 2020, 103384, pp. 1–14.
- [46] A. Misuri, N. Khakzad, G. Reniers, and V. Cozzani, Tackling uncertainty in security assessment of critical infrastructures: Dempster-Shafer Theory vs. Credal Sets Theory, Saf. Sci., vol. 107, 2018, pp. 62–76.
- [47] P. Spirtes and K. Zhang, Causal discovery and inference: concepts and recent methodological advances, Appl. Inform., vol. 3, issue 3, 2016.
- [48] A. Rettinger, M. Nickles, and V. Tresp, A Statistical Relational Model for Trust Learning, Proc. 7th Int. Conf. Autonomous Agents and Multiagent Systems, 2008, pp. 763–770.
- [49] V. G. V. Vydiswaran, C. X. Zhai, and D. Roth, Content-Driven Trust Propagation Framework, Proc. 17th ACM Conf. Knowledge Discovery and Data Mining, San Diego, CA, 2011, pp. 974–982.
- [50] A.C. Davison and R. Huser, Statistics of Extremes, Annu. Rev. Stat. Appl., vol. 2, 2015, pp. 203–235.
- [51] M. Stehlik et al., On the favorable estimation for fitting heavy tailed data, Comput. Stat., vol. 25, 2010, pp. 485–503.
- [52] R. Ojha, A. Ghadge, M. K. Tiwari, and U. S. Bititci, Bayesian network modelling for supply chain risk propagation, Int. J. Production Research, vol. 56, no. 17, 2018, pp. 5795–5819.
- [53] T. Bedford, Decision Making for Group Risk Reduction: Dealing with Epistemic Uncertainty, Risk Analysis, vol. 33, no. 10, 2013, pp. 1884–1898.
- [54] N. H. Kamis, F. Chiclana, and J. Levesley, Preference similarity network structural equivalence clustering based consensus group decision making model, Applied Soft Computing, vol. 67, 2018, pp. 706–720.
- [55] T. Alamo, D. G. Reina, M. Mammarella, and A. Abella, Covid-19: Open-Data Resources for Monitoring, Modeling, and Forecasting the Epidemic, Electronics, vol. 9, 2020, paper 827.
- [56] N. Fenton, M. Neil, Risk Assessment and Decision Analysis with Bayesian Networks, CRC Press, 2nd edition, 2019.
- [57] M. Neil, N. Fenton, M. Osman, and S. McLachlan, Bayesian Network Analysis of Covid-19 data reveals higher Infection Prevalence Rates and lower Fatality Rates than widely reported, Journal of Risk Research, 2020, https://doi.org/10.1080/13669877.2020.1778771
- [58] K. Neville, S. O’Riordan, A. Pope, et al., Developing a decision support tool and training system for multi-agency decision making during an emergency, European Security Research – The Next Wave, Dublin, www.ESRDublin2015.eu
- [59] GISAID (Freunde von GISAID e.V.), a global database for influenza gene sequences along with associated data. https://www.gisaid.org/epiflu-applications/next-hcov-19-app/