
IEEE Copyright Notice

Copyright (c) 2022 IEEE

Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Accepted to be published in: 9th Annual Conference on Computational Science & Computational Intelligence (CSCI’22: Dec 14-16, 2022, USA)


Cite as:
O. Izunna, H. Shane, and K. Jess. “Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis,” 2022 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 2022.




BibTeX:

@InProceedings{okpala2022methods,
          author = {Okpala, Izunna and Halse, Shane and Kropczynski, Jess},
          title = {Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis},
          booktitle = {2022 International Conference on Computational Science and Computational
          Intelligence (CSCI)},
          month = {December 14 – 16},
          year = {2022},
          publisher = {IEEE},
}

Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis

1st Izunna Okpala
School of Information Technology
University of Cincinnati
[email protected]

2nd Shane Halse
School of Information Technology
University of Cincinnati
[email protected]

3rd Jess Kropczynski
School of Information Technology
University of Cincinnati
[email protected]
Abstract

This study examines machine learning methods used in crisis management. Analyzing the patterns detected in a crisis involves collecting and evaluating historical or near-real-time datasets through automated means. This paper uses the meta-review method to analyze scientific literature that applied machine learning techniques to evaluate human actions during crises. Selected studies were condensed into themes and emerging trends using a systematic literature evaluation of published works accessed from three scholarly databases. The results show that social media data was prominent in the evaluated articles, with 27% usage, followed by themes such as disaster management, health (COVID), and crisis informatics. Supervised machine learning was the predominant method, applied in 69% of the studies, and the classification technique stood out among machine learning tasks with 41% usage. The algorithms that played major roles were the Support Vector Machine, Neural Networks, Naive Bayes, and Random Forest, with 23%, 16%, 15%, and 12% contributions, respectively.

Index Terms:
Crisis informatics, Disaster management, Machine Learning, Learning Algorithms, Meta Analysis

I Introduction

Over the last decade, scientific communities such as IEEE and the Information Systems for Crisis Response and Management (ISCRAM) have contributed many studies that utilize real-time information sources to support situational awareness during large-scale events [1, 2]. The machine learning field has advanced in how it automates processes to filter large volumes of data [3]. This study explores a variety of machine learning solutions utilized in scholarly articles to understand human actions during crises. It was also informed by studies that addressed disaster management, health, politics, and other forms of crisis using data beyond social media, with evidence from many scholarly articles in major academic databases that focused on analyzing human actions. While social media is the first interactive medium for individuals who have no control over the mainstream media, local sources for tracking crises also exist. People tend to report incidents, or debate an ongoing incident, via a social network that is familiar to them, or via verifiable local reporting agencies and news media [4, 5]. Scientific researchers have taken advantage of the mass surge of social media data [6] and of local reports to carry out machine learning procedures such as prediction. The objective of this paper is to examine prevalent machine learning methods utilized by academics for managing crises, how those methods are implemented, and the sources of the data.

As noted earlier, historical and real-time data play an important role in managing crises, or in our case, evaluating crises [7]. In order to plan for, mitigate, and avoid future crises, it is recommended to investigate historical or existing solutions. The concept of human action is predicated on causative factors, i.e., before someone acts, there is a cause [8]. The human environment is the main factor that drives crises, and how humans manage environmental resources plays a huge role in crisis occurrence. Some of the concepts that aid in determining human actions from a data source are sentiments, perceptions, and/or attitudes [9]. While perception uniquely identifies opinion-based thoughts and impressions [10, 11] and can readily be distinguished from sentiments and attitudes, sentiment emphasizes emotions [12, 13], and attitudes lead to actions [7]. This study seeks to demonstrate, with the help of peer-reviewed articles, the machine learning methods that are most prevalent today for evaluating or predicting crises. The focus is on public or human actions towards crises, with key concepts such as attitudes and/or perceptions.

I-A Research Questions

  1. What are the dominant machine learning methods for managing crisis?

  2. What are the keywords frequently used in scientific studies addressing crisis?

II Background

Statistical methods were among the first tools used to evaluate crises [14]. In 1932, when Patrick [15] completed a multivariate analysis of several organizations, financial stakeholders began developing models to assess the likelihood of a crisis in their organizations. Since then, academics have devised a number of quantitative ways to detect and evaluate crises. Quantitative analyses like the t-test have been successful in quantifying ratios [16]. Altman [17] devised a score used to categorize observations into good and bad. Multiple Discriminant Analysis (MDA) also played a role in some advanced analyses to compress variance between datasets [18]. Despite their widespread use in both academia and industry, these types of models have proven to be about numbers and quantities, necessitating improvements that span beyond numbers [19]. To address this constraint, pattern-matching approaches have been substantially researched in the field of machine learning [20]. Several of these studies have demonstrated machine learning models' ability to deal with unbalanced datasets [21] as well as pictorial and text data [22]. Even the difference between parametric and non-parametric methods for analyzing risks can be detected [23].

In addressing the research questions, the authors explored background literature related to auto-coding, pattern matching, and text analysis. Most articles tackled issues of crisis management using machine learning, data transformation/scaling, and natural language processing. Machine learning has helped IT practitioners perform tasks in a very short amount of time [24]. It appears to be a quick option for identifying disruption events, getting authentic feeds, or detecting periodic incidents in real time [25]. Learning such patterns was also a major game-changer, as the approach tries to map patterns of interest or similarities in a given dataset [26] while also showing the capacity to learn and produce accurate results [27]. The patterns that are key to understanding human actions are cues demonstrating preference. This preference can take the form of emotions, opinions, viewpoints, or specific annotations that help explain why people act the way they do [28]. When evaluating or predicting actions, some key concepts to note are perceptions, attitudes, and sentiments. The term "perception" can be misconstrued to mean the same thing as attitudes or even sentiments. In simpler terms, sentiments are concerned with people's feelings about an event, i.e., positive and negative events [13], whereas perceptions and attitudes are concerned with people's perspective towards an event [28, 11]. Emotions can aid in understanding perception, but the distinction is that perceptions or attitudes can be formed on the basis of facts and not only emotions [29]. While attitudes are reactionary (they can produce immediate action), perceptions are internal cues suggesting future actions or attitudes [30]. Focusing on perceptions, the formal definition is the process of organizing and interpreting sensory inputs to make sense of events [11].

To detect or predict a crisis, the dataset used for analysis must be actionable. Actionable data is characterized by information that can be acted upon or that provides sufficient insight about the future [31], i.e., the data gives insight into actions that inform valuable decisions. In other words, it is more than just data kept in data warehouses: it has undergone analytical processing and data manipulation and is presented in a clear, intelligible, and frequently visually appealing manner [32]. It enables researchers to spot mistakes or potential crises, capitalize on new opportunities, improve future actions, and make faster and more informed decisions [33].

II-A Machine Learning for Crisis Detection and Management

There are multiple reasons why a crisis incident should be analyzed with machine learning. One is to prevent such occurrences in the future; another is to study, in a timely fashion, the patterns by which people engage on such occasions. The COVID-19 pandemic was the major global crisis between 2020 and 2021 [34], and many academics have attempted to evaluate live or historical data on how the surge escalated, including the metrics that supposedly caused the escalation. Other researchers have looked at why some people would desire to get vaccinated [11] and why others would not. Beyond the COVID-19 pandemic, research on crisis informatics tackles all forms of crisis, such as disaster management, 911 or 311 incidents, political movements, natural disasters, and various forms of assault such as rape, to name a few. Given that our study focuses on crisis situations, machine learning has shown promise in the scientific community, as evident in the number of articles that apply automated means for detecting, predicting, or averting crisis events. Almost all the major supervised and unsupervised algorithms, including neural networks, support vector machines, Naïve Bayes, K-means clustering, K-nearest neighbors, decision trees, and gradient boosting, have been applied in various capacities. Several crisis events can be averted or addressed in a timely manner when trained models with high accuracy are used, especially in cases where human involvement in solving the problem is near impossible or time-consuming.
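Most of these algorithms follow the same supervised recipe: fit on labeled examples, then predict labels for unseen text. As a minimal, self-contained sketch (the labeled snippets below are hypothetical and do not come from any of the reviewed studies), a multinomial Naive Bayes classifier, one of the algorithms named above, can be implemented with the standard library alone:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)                      # class frequencies
        self.counts = {c: Counter() for c in self.classes}  # word counts per class
        for doc, label in zip(docs, labels):
            self.counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.counts.values() for w in c}
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}

    def predict(self, doc):
        def log_score(c):
            s = math.log(self.priors[c] / sum(self.priors.values()))
            for w in tokenize(doc):
                s += math.log((self.counts[c][w] + 1) /
                              (self.totals[c] + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)

# Hypothetical labeled snippets standing in for a crisis corpus.
train = [
    ("flood waters rising near the bridge, roads closed", "crisis"),
    ("earthquake damage reported, people trapped downtown", "crisis"),
    ("wildfire evacuation ordered for the north district", "crisis"),
    ("great concert last night, the band was amazing", "other"),
    ("new cafe opened on main street, coffee is great", "other"),
    ("enjoying a sunny afternoon at the park", "other"),
]
clf = NaiveBayes()
clf.fit([d for d, _ in train], [l for _, l in train])
print(clf.predict("roads closed after flood damage"))  # -> crisis
```

The same fit/predict shape carries over to the SVM, decision tree, and gradient boosting models the reviewed articles use, typically via pre-packaged libraries rather than hand-written code.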

Averting dangers has been made easier and more expressive with the help of several data sources and machine learning methods. The Twitter platform is one such medium from which actionable data can be derived; thousands of academics have explored the platform since it can be used to generate insights during crises [35]. Additionally, when dealing with data from varied sources, the issue of data structure limits some procedures, but with the proper application of machine learning tuning or data transformation, the issue can be mitigated. Some other data sources, however, are dedicated to archiving data on certain topics, such as health crises (World Health Organization (WHO) data [36]), financial crises (World Bank open data [37]), imminent disasters in government (data.gov [38]), or child mortality and maternal mortality (UNICEF [39]).

Given a reliable and actionable data source, machine learning thrives at conducting computational tasks that would ordinarily take human intelligence a significant amount of time to handle. Machine learning has evolved over time such that any computer device with memory can be taught to follow specific patterns [40]. The traditional approach to learning, in which humans are trained with specialized material and tested to determine their mastery of a topic, gave rise to the concept of machine learning. It describes a machine's ability to have some type of intelligence and a readiness to learn from experience. The game of checkers is one example of machine learning through experience [41]. Beyond checkers, machines have been trained to differentiate authentic news from fake news, and spam emails from authentic emails, using the BERT model [42]. The value of implementing such machine-ready systems makes the prediction of crisis events seamless.

III Methodology

The methodology employed in this study is the meta-review technique. It identifies the feasible variables in a cluster of articles with a common interest. This approach is often applied to literature published in a particular language; in our case, the review focused on academic articles written in English and published in the ISCRAM, ScienceDirect, and IEEEXplore databases. These databases were selected because they have subsections that address crisis management using machine learning. That is not to say that other databases with an emphasis on crisis do not exist; we mainly focused on three for this study. The search and selection criteria illustrated in Figure 1 show the various building blocks and the flow of data across them. The search terms are essentially the keywords used in a query for specific databases; more detail on the search terms is given in the search technique section. The AND and OR operators were used in the query structure because they help sieve out the articles that do not fall within our query parameters. Figure 2 expands the inclusion and exclusion criteria shown in Figure 1 to visualize the different components of the inclusion and exclusion mechanisms, and how data were truncated or reduced in the process to arrive at the final result.

III-A Search technique

The search technique was critical in this evaluative review to ensure integrity. The paper was found using an automated search in a variety of different electronic databases. Table I shows the three scientific databases explored.

TABLE I: Scope of the search.
Index NAME URL
DB1 IEEEXplore Digital Library https://ieeexplore.ieee.org/
DB2 ScienceDirect Library https://www.sciencedirect.com/
DB3 ISCRAM Digital Library http://idl.iscram.org/

With a structured search pattern, this study aimed at retrieving only relevant articles targeted at answering our research questions, i.e., at the search stage, we sieved literature from relevant sources by selecting appropriate keywords. The articles featured keywords such as machine learning, crisis, and disaster. The steps for keyword preparation are as follows:

  1. Determine the search terms in relation to the research questions.

  2. Ensure that alternative spellings, antonyms, and synonyms of the search terms are identified as well.

  3. Perform Boolean operations (AND, OR) on the search terms.

  4. Identify the dates for the search query.
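The keyword-preparation steps above can be sketched as a small helper that assembles a Boolean query string. The term lists are the ones named in this section; the exact operator syntax each database accepts may differ, so this is only an illustration of how the AND/OR composition works:

```python
def boolean_query(required, optional):
    """Join required terms with AND, then attach alternative terms with OR."""
    core = " AND ".join(f"({t})" for t in required)
    alts = " OR ".join(f"({t})" for t in optional)
    return f"{core} OR {alts}" if alts else core

# "Disaster" is treated as a synonym/alternative of "Crisis" (step 2),
# while both core terms must co-occur (step 3).
query = boolean_query(["Crisis", "Machine Learning"], ["Disaster"])
print(query)  # (Crisis) AND (Machine Learning) OR (Disaster)
```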

Figure 1: Data search and selection algorithm

The following keywords and operators, which are reflective of our research questions, appear in the paper: (Disaster) OR (Crisis) AND (Machine Learning).

The search was conducted according to each database's preferred query structure. ISCRAM was queried with the Contextual Query Language (CQL) specific to ISCRAM, using the code in Table II. IEEEXplore and ScienceDirect were less complicated thanks to their web-based portals for inputting the search parameters with the "AND" and "OR" operators, as shown in Table II.

TABLE II: How the Databases were queried
Index Query type Query
1 IEEExplore “Crisis AND Machine learning OR Disaster”
2 ScienceDirect “Crisis AND Machine learning OR Disaster”
3 ISCRAM (CQL) “all abstract machine learning crisis disaster”

Additionally, our search was restricted by publication year, i.e., between 2010 and 2021 (recent publications), as well as by category, including peer-reviewed journal and conference papers only.

III-B Data Selection

The 55 articles reviewed were carefully selected using the step-by-step approach shown in Figure 1. The years 2010 to 2021, as stated earlier, were used primarily to reflect current trends and progression in machine learning practice with regard to crises. The initial responses from the various databases using the query format shown in Table II were 2,274 articles from ScienceDirect, 16,236 articles from IEEExplore, and 76 articles from ISCRAM. Given the volume of articles, the number of publications was reduced to emphasize the significance of our research area (crisis). For the ScienceDirect database, we specifically chose only journal and conference articles from the computer science field, because this discipline was clearly prominent, with more contributions in the area of machine learning. This resulted in a total of 186 publications. We employed the same strategy for the IEEE search, but the advanced search parameters were different. The initial result of 16,236 from IEEExplore was reduced to 5,770 articles by limiting the topic categories to disasters and choosing only conference and journal articles. When the computer science discipline filter was applied, we got a total of 476 articles.

Figure 2: Inclusion and exclusion flow diagram

The ISCRAM result did not require further reduction because we received only 76 articles. Reading through all of the resulting articles would be nearly impossible and time-consuming. Therefore, rather than random sampling, we utilized the systematic relevance ranking already available from the various servers to select the first 30 relevant articles from each database. Relevance reflects the articles with more contributions to the body of knowledge and a higher frequency of citations. We thus ended up with 90 articles, balanced across the three databases. Further selection was carried out by manually reading the 90 articles, as demonstrated in Figure 2, to ensure that each article made use of a machine learning method, had a crisis response match, and posed appropriate research questions that addressed crises. To do this, we employed the criteria below:

  1. Include: based on an abstract that demonstrates a well-defined methodology.

  2. Include: based on a conclusion that identifies at least one of the search terms as well as a metric that shows evidence for the study.

  3. Exclude: based on a methodology that did not exemplify machine learning.

  4. Exclude: based on research that has no strong validation or premise for validation.
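The screening criteria above compose into a simple predicate over article records. The records and field names below are hypothetical, purely to illustrate how the include/exclude rules combine:

```python
# Hypothetical article records; the fields mirror the four screening criteria.
articles = [
    {"title": "Flood tweet triage", "has_ml_method": True,
     "defined_methodology": True, "validated": True,
     "conclusion_terms": {"crisis", "machine learning"}},
    {"title": "History of flood policy", "has_ml_method": False,
     "defined_methodology": True, "validated": True,
     "conclusion_terms": {"crisis"}},
    {"title": "Unvalidated disaster model", "has_ml_method": True,
     "defined_methodology": True, "validated": False,
     "conclusion_terms": {"disaster"}},
]

SEARCH_TERMS = {"machine learning", "crisis", "disaster"}

def passes_screening(a):
    return (a["defined_methodology"]                  # criterion 1
            and a["conclusion_terms"] & SEARCH_TERMS  # criterion 2
            and a["has_ml_method"]                    # criterion 3
            and a["validated"])                       # criterion 4

kept = [a["title"] for a in articles if passes_screening(a)]
print(kept)  # ['Flood tweet triage']
```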

TABLE III: Data Item collection form.
Index Fields Description
DI-1 Title The title of the article
DI-2 Year The publication year of the article
DI-3 Database The source of the publication
DI-4 Techniques The machine learning approach used in the article
DI-5 Research Fields The area of interest covered by the research (Computer Science)

Table III illustrates the components of each manuscript that were extracted. This not only demonstrates the connection to the research questions but also provides a mechanism to confine data extraction to only the fields that are relevant. The title, year, database, machine learning techniques, and research fields covered are among such fields.

III-C Data Extraction

The researchers manually coded the data in order to answer our two research questions. The labeling was done in four batches: methods, tasks, keywords, and algorithms, respectively. According to the selection criteria, all the publications considered for this study focused on crisis response and the application of machine learning technologies.
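The manual coding can be pictured as rows following the data-item form in Table III, from which percentage distributions of each labeled field fall out of a simple tally. The rows below are hypothetical examples, not actual coded articles, so the resulting percentages are illustrative only:

```python
from collections import Counter

# Hypothetical coded rows following the data-item form (DI-1..DI-5 plus labels).
coded = [
    {"title": "A", "year": 2020, "database": "IEEEXplore",
     "method": "supervised", "task": "classification"},
    {"title": "B", "year": 2021, "database": "ScienceDirect",
     "method": "supervised", "task": "regression"},
    {"title": "C", "year": 2019, "database": "ISCRAM",
     "method": "unsupervised", "task": "clustering"},
    {"title": "D", "year": 2021, "database": "IEEEXplore",
     "method": "supervised", "task": "classification"},
]

def share(field):
    """Percentage distribution of one coded field, rounded to whole percents."""
    tally = Counter(row[field] for row in coded)
    total = sum(tally.values())
    return {k: round(100 * v / total) for k, v in tally.items()}

print(share("method"))  # {'supervised': 75, 'unsupervised': 25}
```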

IV Result

From the analysis, the majority of the reviewed articles supported the classification machine learning task more than the regression or clustering tasks. Figure 3(b) shows that 41% of the reviewed papers made use of the classification task, while regression and clustering accounted for 18% and 16%, respectively. As shown in Figure 3(a), the supervised machine learning method garnered 69% when mapped against unsupervised and active learning, at 27% and 3%, respectively. This does not imply that one method is superior to the others; rather, it demonstrates a preference across scientific communities. The preference for supervised learning can be attributed to the availability of crisis-specific training data from CrisisNLP [43] and https://crisisLex.org (CrisisLexT26, SoSItalyT4, and BlackLivesMatterU/T1) [44]. Sometimes, the format in which crisis data is communicated may not be easily suited as a corpus for training a model; in this case, the unsupervised learning approach comes in handy [45]. Some of the studies examined had hybrid approaches, where supervised and unsupervised methods were used depending on the availability of training data for some subset of their analysis. Active learning seems not to be widely applied in crisis research. One factor that could have influenced this is that active learning is a form of semi-supervised learning that engages outside sources to label a dataset; because of that, the practice of semi-supervised learning was not observed at full scale. Following the distribution timeline of methods, it is clear that there has been an upsurge in the use of supervised machine learning over the years across the three databases, as shown in Figure 3.

Figure 3: Graph representation of the ML methods and tasks: (a) ML methods; (b) ML tasks

The methods in Figure 3(a) were further broken down into classification, clustering, regression, NLP, and sentiment analysis in Figure 3(b). NLP and sentiment analysis were identified separately in this figure because they can be a subclass of classification, clustering, or regression tasks. It should be noted that they sometimes exist on their own (as transformers), not following the algorithmic process of the three classes mentioned above.

Figure 4: Distribution of keywords across reviewed articles

The consistency in the usage of classification further strengthens the earlier statement that supervised learning is predominant amongst researchers. Next in line was clustering, to which k-nearest neighbors (kNN), K-Means, and some tasks using random forest and decision tree algorithms conform, as shown in Figure 5 and Figure 6. Regression tasks like linear and logistic regression show that they are still relevant in today's research, gaining 4% and 10%, respectively. The analysis also shows that the implementation of machine learning methods for crisis management or evaluation peaked in the years 2020 and 2021. Again, the increase in the usage of machine learning classification techniques can be linked to the availability of training datasets, the ease with which the algorithms can be implemented, and the understandability of training data labels. The use of complex technologies can also be a contributing factor, since such analysis may take human agents a considerable amount of time. The need for automated data processing, prediction, and analysis is something that has come to stay to aid humans.
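As an illustration of the neighborhood-based algorithms mentioned here, a k-nearest-neighbors vote can be sketched in a few lines. The 2-D feature vectors and labels are invented for the example and merely stand in for whatever features a reviewed study might extract from crisis reports:

```python
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """Classify `point` by majority vote of its k nearest neighbours."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g., urgency score, report frequency).
train = [
    ((0.90, 0.80), "crisis"), ((0.80, 0.90), "crisis"), ((0.70, 0.90), "crisis"),
    ((0.10, 0.20), "routine"), ((0.20, 0.10), "routine"), ((0.15, 0.25), "routine"),
]
print(knn_predict(train, (0.85, 0.85)))  # crisis
```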

The distribution of keywords in Figure 4 yielded some noteworthy results. The terms "COVID," "disaster management," and "social media" appeared most frequently in the reviewed publications, and they peaked in the year 2020. There was a clear pattern in the frequent mentions of social media and how it is used to aggregate crisis data. Our analysis also identified "social media" as the key term that received the most attention in the years 2019, 2020, and 2021, indicating that social media is a useful tool for gathering information about crises in the modern era. We can link the mentions of health crisis and COVID in 2021 to the recent COVID-19 pandemic and the way researchers are deploying intelligent systems to aggregate data across various media. Disaster management was also used as a keyword in several articles but was scarce in the year 2021. Mentions of text classification, domain adaptation, and topic modeling were noted as well; these relate to sub-methods in analyzing crises. Topic modeling, which was scarcely present but is important in the machine learning community, was notably used in some research. This can be connected to our earlier statement that an NLP task can act as a transformer in an unsupervised environment (e.g., topic modeling makes use of unsupervised methods).

Figure 5: Algorithms applied in reviewed articles (heatmap)

Another significant discovery from this study is that when we further break down the methods to their algorithmic standards, or the terminologies that indicate how they function, our earlier conclusions are strengthened. The SVM algorithm appears to dominate the reviewed articles, followed by neural networks. This also strengthens our earlier statement that the classification task and the supervised method were predominant. The strength of SVM lies in structural risk minimization, efficient memory management during training, and its handling of the high-dimensional spaces found in datasets. Memory management is an issue for deep learning algorithms, which can achieve higher accuracy than SVM depending on the volume of data, but it seems the benefits of SVM appeal more to researchers in the crisis domain. Neural networks were also prominent among the articles studied; amongst the variations present in crisis research, convolutional neural networks and long short-term memory (LSTM) networks had the most traction. Naive Bayes, random forest, decision trees, and logistic regression also show promise, as identified in Figure 6.

Figure 6: Machine learning algorithms applied in reviewed articles (time series)

V Discussion

We conducted this evaluative review on scientific articles published between 2010 and 2021, with an emphasis on crisis management and machine learning. Crises can emerge in different forms, e.g., health crises, natural disasters, economic crises, food crises, and political events, among others. These variations introduce complexity in developing automated methods to manage crises as well as track emerging events to allow for fast decision-making. The methodology was explicit enough in describing the meta-review processes and how we collected and evaluated different publications that addressed machine learning for crisis management. The concept of machine learning from the literature review and result sections is demonstrative of how many ML techniques and algorithms can be used to manage crises. It is evident from our results that all the algorithms had their fair representation in terms of the value they added. The machine learning field is continuously evolving as it tries to help in the analysis of large chunks of data, easing the tasks of data scientists in an automated process and changing the way data extraction and interpretation work.

All the reviewed articles produced results based on the structure of the problem they addressed, the type of crisis tackled, the source of the data, and the volume of the data. The percentage distribution shown in Figure 3(a) describes the preferred machine learning tasks. NLP tasks such as sentiment analysis, which may be classified as supervised or unsupervised learning, were highlighted as critical in automating crisis management. The classification, clustering, and regression tasks were highly preferred, with classification topping the list due to the availability of training datasets and the easy implementation of the algorithms through pre-packaged libraries in popular programming languages like Python, R, and Java. Furthermore, there is general recognition of a reproducibility crisis in science right now: an increasing number of study findings are not replicated when a different group of researchers performs the same experiment. This problem has ramifications in a variety of sectors where machine learning is utilized to make discoveries, and machine learning techniques are often simpler to perform analysis with.

V-A Limitations

This study did not review literature from all the databases of scientific studies; instead, we culled articles from three databases that met the inclusion and exclusion criteria. As a result, our reviewed studies reflect research on public actions, crises, and machine learning from three venues. The results of this study may have been influenced by the search strategy employed in the paper, the researcher’s biases, the unequal distribution of published journals or conference proceedings, and data extraction misrepresentation.

Both automated and manual search techniques were used in this study. Hundreds of data points were found in the first iteration, as seen in Figure 2. After the initial search, the content of the research papers was used to inform the manual search procedure. The candidate studies were chosen and analyzed by three researchers. It is possible that relevant studies were missed in the search results, so the scope of this review may be constrained. Consequently, the validity of this study is limited to the 55 key papers included in this evaluative review.

VI Conclusion

Our findings show that a significant proportion of articles (41%) used classification over regression or clustering, owing to the availability of training data/corpus and pre-packaged machine learning libraries. 69% of the articles made use of the supervised machine learning method (RQ1), showing preference across scientific communities in dealing with crises. Consequently, 27% of the studies made use of the unsupervised learning technique, while the remaining 4% used active learning methods. To address RQ2, our analysis revealed the machine learning methods and prevalent keywords used in the reviewed articles. It suggests that the SVM, Neural Networks, Naive Bayes, and Random Forest algorithms, amongst others, are popular among researchers in the crisis management domain (RQ2). Also on RQ2, the keyword crisis informatics garnered great interest in the scientific literature explored. Some interesting projections like health and disaster management rose by the year 2020 and social media received the most attention in 2019, 2020, and 2021, implying that social media is beneficial to gathering crisis-related data in modern times.
