
Detecting Malicious Accounts in Permissionless Blockchains using Temporal Graph Properties

Rachit Agarwal, Shikhar Barve, Sandeep K. Shukla
{rachitag, shikhar, sandeeps}@cse.iitk.ac.in
Abstract

Modeling accounts as nodes and transactions as directed, time-annotated edges of a temporal directed graph enables us to understand the behavior (malicious or benign) of accounts in a blockchain. Predictive classification of accounts as malicious or benign could help users of permissionless blockchain platforms operate in a secure manner. Motivated by this, we introduce temporal features such as burst and attractiveness on top of several already used graph properties such as node degree and clustering coefficient. Using the identified features, we train various Machine Learning (ML) algorithms and identify the algorithm that performs best at detecting which accounts are malicious. We then study the behavior of the accounts over different temporal granularities of the dataset before assigning them malicious tags. For the Ethereum blockchain, we identify that, on the entire dataset, the ExtraTreesClassifier performs best among supervised ML algorithms. On the other hand, by applying cosine similarity on top of the results provided by unsupervised ML algorithms such as K-Means on the entire dataset, we were able to detect 554 more suspicious accounts. Further, using behavior change analysis, we identify 814 unique suspicious accounts across different temporal granularities.

Index terms— Blockchain, Machine Learning, Temporal Graphs, Behavior Analysis, Ethereum, Suspect Identification

1 Introduction

A blockchain is an ever-growing, large, directed temporal network, with more and more industries starting to adopt it for their businesses. In permissionless blockchains, interactions (also called transactions) happen between different types of accounts. In the Ethereum mainnet public blockchain, these accounts can be either Externally Owned Accounts (EOAs) or Smart Contracts (SCs). Here, transactions from an EOA (called external transactions) are recorded on the blockchain ledger, whereas transactions initiated by an SC (called internal transactions) are not recorded on the ledger.

With actual money involved in most permissionless blockchains, an account must be able to perform secure transactions. Recently, many security threats to various blockchain platforms have been identified [1], and for some identified vulnerabilities counter-measures have been implemented. We do not delve into surveying all the security threats. In [2], the authors survey security flaws that exist in the Ethereum blockchain. In many of the security vulnerabilities identified in Ethereum, hackers target other accounts by either hacking SCs or deploying malicious SCs for cybercrimes such as ransomware, scams, phishing, and hacking of exchanges or wallets [3].

With the ever-increasing growth and adoption of blockchain technology by industry and the crypto-currency market, permissionless blockchains are at the epicenter of increased security vulnerabilities and attacks. Our motivation for this work is the fact that there is limited work on learning the behaviors of malicious accounts in permissionless blockchains, accounts that may victimize other accounts in the future. In short, we aim to identify malicious accounts so that potential victims and blockchains can deploy counter-measures. In this paper, henceforth, we use the term blockchain to mean permissionless blockchain. The techniques proposed in related studies classify accounts as malicious using either machine learning (ML) algorithms or motif-based (basic building subgraphs of a network) methods. Nonetheless, the features used by the available techniques are: (a) limited and not learned from previous attacks on blockchains, and (b) extracted from an aggregated snapshot of the time-dependent transaction graph that does not consider the temporal evolution of the graph.

The temporal aspects attached to the features are essential in understanding the actual behavior of an account before we can classify it as malicious. For example, inDegree and outDegree are time-variant and should be treated as time series. Nonetheless, it has been shown that the aggregated node degree distribution of accounts follows a power law in blockchains such as Ethereum [4]. Here, the questions that we ask are: does such behavior exist for all accounts? Is there a burst of degree for certain accounts at certain instances, and can the existence of such bursts be used to identify malicious activity? To answer these questions, we first establish the existence of bursts. Then, to study the effect of bursts, we introduce features such as temporal burst, degree burst, balance burst, and gasPrice burst.

The fat-tailed nature of the power-law degree distribution also gives rise to neighborhood-based fitness preferential attachment in blockchains [5]. In [5], the authors define fitness as "the ability of the node to attract new connections" and show that accounts with high fitness are sometimes short-lived and indulge mostly in malicious activities, while long-lived high-fitness accounts represent large organizations. There, the fitness factor considers only the interactions of one previous time instance. As it does not consider a temporal window, one drawback of the method lies in its ability to correctly classify malicious transactions that appear at an interval of 2 time units or more. Inspired by this, we define a neighborhood-based feature called attractiveness that takes into account a temporal window of size θ_a, where 0 < θ_a < T_DS and T_DS is the duration for which we collect the dataset (DS). Our attractiveness measure takes into account the stability of the directed transactions that happened between two accounts in the past. Intuitively, a malicious account will have high attractiveness, as it tends to transact with new accounts, while benign accounts will have high neighborhood stability and therefore low attractiveness.

As the behavior of an account can change from malicious to benign or from benign to malicious over time, there is a need for continuous monitoring and analysis of real-time transactions given the history of transactions performed by an account. We thus study the evolution of malicious behavior over different timescales by creating sub-datasets and then answer: would a certain account show malicious behavior in the future? Towards this, we first apply different ML algorithms and identify the unsupervised ML algorithm that clusters the accounts of the entire dataset most accurately. Then we apply the identified algorithm to different sub-datasets within a temporal scale to capture behavior changes.

In summary, following are our main contributions:

  • Feature Engineering: We identify a feature vector for detecting malicious accounts based on previous attacks on blockchains and perform time series analysis. As new features, we propose temporal burst, degree burst, balance burst, gasPrice burst, and attractiveness.

  • Comparative analysis: We perform a comparative study with techniques proposed in related studies and identify the best-performing supervised and unsupervised ML algorithms, along with their hyperparameters, on Ethereum transaction data.

  • Results: Our results demonstrate that the ExtraTreesClassifier performs best with respect to balanced accuracy in the supervised setting on the entire dataset, while using clustering techniques we are able to identify 554 more suspect accounts. Analysis of behavioral changes reveals 814 suspects across different temporal granularities.

The rest of the paper is organized as follows. In section 2, we present the background and the state-of-the-art techniques for identifying malicious accounts and compare them. In sections 3 and 4, we present a detailed description of our methodology and the feature vector, respectively. This is followed by an in-depth evaluation along with the results in section 5. We finally conclude in section 6, providing details on prospective future work. Further, in Table 1 we provide the list of acronyms used in the paper.

| Acronym | Meaning | Acronym | Meaning |
| --- | --- | --- | --- |
| EOA | Externally Owned Account | CC | Clustering Coefficient |
| SC | Smart Contract | Bal | Balance |
| ML | Machine Learning | TF | Transaction Fee |
| AS | Active State | B | Burst |
| iD | inDegree | IET | Inter-Event Time |
| oD | outDegree | A | Attractiveness |
| PoW | Proof of Work | LOF | Local Outlier Factor |
| EVM | Ethereum Virtual Machine | | |

Table 1: List of Acronyms.

2 Background and Related Work

There are two types of blockchain technologies: permissionless and permissioned. The major difference between the two is that in a permissioned blockchain prior access approval is needed to perform any action, while in a permissionless blockchain anyone can perform actions without any approval. Further, there is no way to censor anyone on a permissionless blockchain. Such aspects allow more fraud and malicious activity to prevail in permissionless blockchains. Ethereum and Bitcoin use permissionless technology.

Ethereum was developed by Vitalik Buterin in 2013 [6] and allows users to run programs in its trusted virtual environment known as the Ethereum Virtual Machine (EVM). These programs are called Smart Contracts (SCs) and are stored on the ledger, along with the transactions performed on a given fixed address. Ethereum uses "Ether" as its native crypto-currency for transfers and transaction fees. Smart Contracts can also send, store, and receive Ethers. Once deployed, an SC is a hard-coded program that can only be fed input to produce output. Smart Contracts are also used by some applications for their processing; such applications are called distributed applications, or dapps. Although Ethereum is known for its security and trust, a small bug in SC code can cause a huge loss [7] of crypto-currency. Unlike Bitcoin, Ethereum maintains a list of accounts. For a valid transaction, the amount is transferred from the sender to the receiver. If the receiver is an SC, its code is executed and the state of the SC is updated. Internally, an SC can send messages or perform internal transactions with other accounts. Ethereum currently uses a refined form of the Proof of Work (PoW) consensus algorithm. PoW is computationally expensive and energy inefficient.

There is a vast number of studies on fraud detection [8]. Targeting Ethereum specifically, Chen et al. [2] survey attacks and defences in Ethereum. We do not survey all the attacks and defense mechanisms in this work; instead, we provide an in-depth understanding of the different methods used to detect accounts involved in malicious activity. Several works have tried to identify or categorize malicious accounts and activities in different types of blockchains. As blockchains have a graph structure, most of these techniques study graph properties (such as node degree) to identify features before applying supervised or unsupervised learning.

In [9], the authors used the bitcoin transaction network to detect malicious activity. They were able to detect three malicious attacks using unsupervised ML algorithms on a limited amount of available transaction data. In their follow-up work, they used a more comprehensive bitcoin transaction dataset (from the genesis block until April 7th, 2013) [10]. They organized the data into two types of graphs, namely a User Graph and a Transaction Graph. In the user graph, nodes represent accounts and edges represent transactions, whereas in the transaction graph, nodes represent transactions and edges represent the flow of bitcoins. They first studied the flow of bitcoins to prove the existence of anomalies and then performed clustering to identify different attacks. They were able to detect the existence of one attack using the Local Outlier Factor (LOF). Inspired by [10], in [11], Monamo et al. also used bitcoin transaction data and proposed an update to counter the scaling issues that are inherent in LOF. They validated their approach using trimmed K-Means, argued its usefulness in detecting anomalies, and detected 5 out of 30 fraudsters.

In another bitcoin-related malicious activity detection work [12], the authors studied the detection of addresses involved in Ponzi schemes. They used supervised learning and validated their results after addressing the class imbalance that is inherent in datasets related to malicious activity. They identified that the Gini coefficient of outgoing values and the ratio between incoming and total transactions are the most important features for detecting Ponzi-scheme-related accounts. In another Ponzi-scheme-related study [13], the authors use Ethereum data to extract features from the operation codes (opcodes) of smart contract bytecode. Their motivation was that the opcodes reflect the logic implemented in an SC and therefore provide useful features for distinguishing Ponzi from non-Ponzi SCs. They also found that opcode features are more effective than account-based features for detecting Ponzi-scheme accounts. In [14], the authors use partial Ethereum transaction data to classify malicious accounts. They also performed a sensitivity analysis to study the effect of different classifiers on the feature set. In [15], to counter class imbalance, the authors assumed that accounts connected to malicious accounts via incoming transactions are also malicious. They then studied various supervised ML algorithms to identify malicious accounts on this over-sampled Ethereum dataset. In a follow-up of [15], in [16], the authors used only those benign accounts that have never transacted with malicious accounts. Due to this, their feature vector has only transaction-based properties but not graph-based properties.

N-motifs are frequently occurring subgraphs that serve as basic building blocks of a network. The authors in [17] define an N-motif as a path of length 2N between two entities, where transactions are also considered as vertices. Using the N-motifs present in the transaction graph, the authors of [17] studied transactions happening between entities (people or organizations with multiple accounts). They were able to correctly identify malicious accounts involved in gambling. In another study [18], the authors analysed the transfer of funds within a subnet and used temporal features such as how quickly funds are cashed out.

We present all the above-mentioned techniques in Table 2, listing the features each technique uses along with the studied ML algorithms, their hyperparameters, the accounts considered in the dataset, and the performance score. Note that all these techniques use features that are based on some graph properties, the transacted amount, and the active state to train the ML model. However, several other studies, such as [4, 19], use inferences drawn from an analysis of the transaction graph to mark malicious accounts. In [4], the authors try to identify accounts involved in a DDoS attack and argue that accounts that create multiple rarely used contracts are malicious. A similar approach is followed in [20], where the authors used only verified SC codes and introduced features like SC size, lifetime, and average time between transactions (i.e., inter-event time). In [19], the authors deploy a honeypot and analyze RPC requests to identify malicious accounts. They then analyze transactions to mark as suspicious those accounts that accept crypto-currency from malicious accounts. They perform behavior analysis to identify fisher accounts and attacks such as crypto-currency stealing.

Table 2: Features used in related studies
Used features based on
# B/C AS iD oD Bal TF B A CC IET ML Algo Used Dataset Hyperparameters Performance
[9] B - - - K-Means 100K a k ∈ [1,14] k_opt = 7, 8
Mahalanobis Distance × 0.0256 MDE
ν-SVM ν = 0.005 0.1441 MDE
[10] B - - - - Local Outlier Factor 6.3M a k = 8 0.55 MDE
[11] B - - - - - K-Means 1M a k ∈ [1,14] k_opt = 8
Trimmed K-Means k ∈ [1,15], α = 0.01 k_opt = 8
[12] B - - - - RIPPER† ‡6432 a cost ∈ [1,40] 0.996 ac
Bayes Network × 0.983 ac
Random Forest × 0.996 ac
[13] E - - - - - XGBoost ‡1382 sc × 0.94 p, 0.81 r
[14] E - - - - - Random Forest 350K a RFPARAM 0.85 r, 0.05 p
SVM cost = 1, γ = 0.077 0.87 r, 0.02 p
XGBoost XGBPARAM 0.8 r, 0.07 p
[15] E - - - - - Decision Tree 300 a × 0.93 ac
SVM × 0.83 ac
KNN k = 5 0.91 ac
MLP × 0.86 ac
NaiveBayes × 0.89 ac
Random Forest × 0.99 ac
[16] E - - - - - - Decision Tree 9375 a × 0.92 ac
KNN × 0.92 ac
XGBoost × 0.96 ac
Random Forest × 0.95 ac
[17] B - - - - - Adaboost 1000M a estimators = 50, rate = 1 >0.2 r
Random Forest estimators = 10 >0.85 r
Gradient boosting estimators = 100, rate = 0.1, depth = 3 >0.93 r
  • B/C: Blockchain; B: Bitcoin; E: Ethereum; a: accounts; sc: smart contracts; MDE: Dual Evaluation Metric; ac: accuracy; p: Precision; r: Recall; †: a propositional rule learner that relies on a sequential covering logic; ‡: Ponzi scheme data; RFPARAM: features = 3, leaf samples = 10, threshold probability = 0.99; XGBPARAM: depth = 3, child weight = 8, subsample = 1, probability = 0.99; ×: not provided.

All the above techniques either use a limited set of ML algorithms on highly scaled-down data, inducing over-fitting, or apply inferences on the graph structure to identify malicious activities and accounts. In most cases, the studies use features that do not capture temporal behavior and are approximated by mean behavior, thereby inducing a bias that inflates the reported accuracy. Techniques that use large datasets with high class imbalance, on the other hand, either have high recall and low precision or low recall and high precision [14]. Using our features, we identify an ML algorithm that provides both better precision and better recall.

3 Methodology

We use Ethereum mainnet blockchain transaction data and first validate our assumptions and approach. We segment the transaction data into sub-datasets (SDs) to capture behavioral changes. We create the SDs using different temporal granularities T_g ∈ T_G, where T_G = {Day, Week, Month, Quarter, HalfYearly, Year, All}. A granularity becomes coarser as we move from Day to Year. Here, an SD at the Day granularity consists of the transactions of 6000 blocks; the choice of 6000 blocks is based on the fact that in Ethereum approximately 6000 blocks are created every day. At coarser T_g, an SD at the Week granularity consists of 7 Days of data. Similarly, an SD at the Month granularity consists of 30 Days of data, a Quarter SD of 3 Months of data, a HalfYearly SD of 6 Months of data, and a Year SD of 12 Months of data.
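To make the segmentation concrete, the following is a minimal sketch, assuming the external transactions are available as a pandas DataFrame with a blockNumber column (the column name, file name, and helper function are illustrative; the 6000-blocks-per-day bin width and the day multiples follow the description above).

```python
import pandas as pd

BLOCKS_PER_DAY = 6000  # approximately 6000 Ethereum blocks are created per day

# Bin width (in blocks) for every temporal granularity T_g.
GRANULARITY_BLOCKS = {
    "Day": BLOCKS_PER_DAY,
    "Week": 7 * BLOCKS_PER_DAY,
    "Month": 30 * BLOCKS_PER_DAY,
    "Quarter": 90 * BLOCKS_PER_DAY,
    "HalfYearly": 180 * BLOCKS_PER_DAY,
    "Year": 360 * BLOCKS_PER_DAY,
}

def split_into_sub_datasets(tx: pd.DataFrame, granularity: str) -> list:
    """Split a transaction table into sub-datasets (SDs) of one granularity.

    The blockNumber of a transaction acts as its timestamp (see section 5.1).
    """
    width = GRANULARITY_BLOCKS[granularity]
    sd_index = tx["blockNumber"] // width        # SD index of every transaction
    return [sd for _, sd in tx.groupby(sd_index)]

# Example usage (hypothetical file name):
# tx = pd.read_csv("ethereum_external_transactions.csv")
# day_sds = split_into_sub_datasets(tx, "Day")
```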

For all the features that are time-series based (described in section 4), we perform a time series analysis over all the SDs at each T_g and quantify them using tsfresh, which "extracts characteristics from time series" [21, 22]. The analysis reveals that characteristics such as quantiles and the median best describe the time series for most of our features. We observe this behavior not only in the entire dataset but also in the different SDs at different T_g.
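As an illustration, a single temporal feature such as the per-block inDegree of every account can be quantified with tsfresh roughly as follows; the toy input layout is an assumption, and in practice only the selected characteristics (e.g., quantiles and median) would be kept.

```python
import pandas as pd
from tsfresh import extract_features

# Long-format time series: one row per (account, block) observation of a
# temporal feature, here the inDegree of the account in that block.
indegree_ts = pd.DataFrame({
    "account":  ["0xA", "0xA", "0xA", "0xB", "0xB"],
    "block":    [1, 2, 3, 1, 3],
    "inDegree": [2, 0, 5, 1, 1],
})

# tsfresh computes a large set of characteristics (quantiles, median, ...)
# describing each account's series; relevant ones are selected afterwards.
characteristics = extract_features(indegree_ts,
                                   column_id="account",
                                   column_sort="block")
print(characteristics.filter(like="quantile").head())
```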

We first apply the AutoML pipeline TPOT [23] to identify the best ML classifier on the entire dataset and to validate the state-of-the-art techniques. We configure TPOT with existing, tested ML algorithms and their hyperparameters. Note that TPOT internally performs imputation and feature scaling as well. Nonetheless, as our aim is to detect malicious accounts, we also apply clustering to identify accounts that show behavior similar to that of malicious accounts. For the entire dataset, we find that K-Means provides the best silhouette score for k = 9 when we consider both EOAs and SCs. For clusters identified as malicious, we use cosine similarity to quantify the similarity among the accounts within the cluster. We acknowledge that there are other methods to quantify similarity, but for this work we use cosine similarity. With this method we are able to identify 293 more suspect accounts that have behavior similar to that of malicious accounts. When considering only EOAs, we identify the best silhouette score at k = 10 and 554 more suspects.

Assuming that K-Means with hyperparameter k = 9, identified on the entire dataset, also performs best for all temporal sub-datasets at different temporal granularities, we determine a probability for an account to be malicious at each temporal granularity. Across all temporal granularities, we identify 814 unique accounts as suspects.

4 Feature Engineering

We do not describe the blockchain graph models as they are well understood. Instead, we directly present the features that we extract from the blockchain temporal graph structure. The set of features (F) defined in related work is limited and, in most cases, does not convey the correct temporal behavior. We extend the feature set and introduce new features to detect malicious accounts. We follow a two-fold methodology to identify the relevant features. First, we study different attacks that have happened in the past to understand which features malicious accounts have exploited for malicious activity. Second, as most of the account features (for example, inDegree) are time series, we perform a time series analysis to identify characteristics that best represent the salient properties of the relevant time series. Below we provide a list of all the features we use:

  • Non Time Series based (set F_n ⊂ F)

    • Active state (AS): malicious activities are usually short-lived [5] and last, for example, only until remediation is introduced. It is thus essential that we consider features such as when the account first transacted (transactedFirst), when it last transacted (transactedLast), how long it has been active (durationActive), and since when the account has been continuously transacting (activeSinceLast).

  • Time Series based (set F_t ⊂ F): We analyze each of the following time-series-based features using tsfresh [21, 22] and select the top 3 characteristics identified for each of the following attributes. Further, as the inter-event time (IET) is itself a time series, we use it as a feature as well.

    • inDegree (iD): the number of transactions in which the account under consideration is a receiver at a particular instant. Most malicious activities involve the transfer of money to a malicious account; it is thus one of the most important features for understanding the behavior of a malicious account. In [15], the author found uniqueInDegree (the number of unique accounts from which the account under consideration has ever received money) to be one of the most critical features for identifying malicious accounts. On top of this, we also use the aggregated inDegreeAgg as a feature.

    • outDegree (oD): the number of transactions in which the account under consideration has sent money at a particular instant. In some attacks, such as the Bitpoint Hack [24], after the attacker received the sum from the victims in an alias account, they transferred it to another account they hold or to an exchange. Such attacks increase the importance of outDegree as a potential feature. Similar to the case above, we also use the aggregated outDegreeAgg as a feature.

    • Balance (Bal): our motivation for using balance as a feature is that most malicious activities in permissionless blockchains are finance based. For example, in the famous Parity Multisig wallet attack [25], the malicious account drained more than 150k Ethers (the currency used in the Ethereum blockchain). Thus the currency held by an account, as well as its flow, is an important feature. We identify the balance time series for both the in and out cases. Besides the balance itself, for each instance we identify the maximum balance for both the in and out cases (maxInBalance and maxOutBalance), zeroBalanceTransactions (transactions where no money was transferred to or from an account), totalBalance (the final balance held by the account), and averagePerInBalance (the average received balance) as features.

    • Transaction Fees (TF): in crypto-currency-based blockchains, a transaction carries a transaction fee that the sender is willing to spend on that particular transaction. In the Ethereum blockchain, operations like transferring Ethers require a fixed sequence of instructions that consumes 21,000 Gas (TF = Gas × GasPrice). Several attackers set a higher gas price to bribe the miner so that a particular transaction of interest to them is included in the next block [19]. Conversely, in a DDoS attack [26], an attacker created multiple accounts at a very low gas price to increase synchronization and processing time. Thus, it is also an essential feature.

    • Attractiveness (A): malicious accounts mostly tend to interact with accounts that they have not interacted with before; the probability of them interacting with an account they have already interacted with is very low. Consider N_i^t to be the neighborhood (the accounts from which account i has received crypto-currency) of account i at time t, T = {t, t-1, ..., t-θ_a}, and θ_a the time window size. Based on this, we define the attractiveness A_i^t of account i at time t as shown in equation 1. (A minimal computation sketch is given after this feature list.)

      A_i^t = \begin{cases} 1-\frac{\left|N_i^t\cap\left(\bigcup_{j\in T-\{t\}}N_i^j\right)\right|}{\left|\bigcup_{j\in T}N_i^j\right|}, & \text{when } j\geq 0 \text{ and } N_i^t\neq\emptyset\\ 0, & \text{otherwise}.\end{cases} \qquad (1)
  • Burst (B) (set F_b ⊂ F): bursty behavior is a temporally non-homogeneous sequence of events [27] and is characterized by a fat-tailed inter-event time (Δt) distribution. In one of the bitcoin blockchain attacks (the Allinvain Theft [28]), a malicious account generated a large number of transactions to taint the bitcoin platform. Motivated by this incident, we define four types of bursts (temporal, degree, balance, and gasPrice) that occur in the network under consideration. As an account can be either a sender or a receiver, the following burst types are defined for the cases where (a) the account acts as a sender, (b) the account acts as a receiver, and (c) the account acts as both a sender and a receiver. A sketch of the temporal-burst extraction is given after this list.

    • Temporal Burst: for an account i, a non-homogeneous occurrence of events (in our case, transactions) leads to some transactions occurring with Δt less than a threshold θ_t^i, while for other transactions Δt is large. If a transaction happens when Δt < θ_t^i, we assume that it is part of a burst. Some bursts can be long-lived while others are short-lived, meaning that events can happen continuously over long time intervals before the account goes dormant. As features, we identify the number of such temporal bursts (numberOfTemporalBursts) and the duration of the longest burst (longestBurstDuration), for in and out transactions separately as well.

    • Degree Burst: it has been shown that the degree (also the inDegree and outDegree) distribution of the aggregated transactions in blockchains such as Ethereum follows a power-law (fat-tailed) distribution [4] with α ∈ [-2.8, -2.6]. This suggests that many accounts do not transact often, while there are very few accounts that act as hubs (for example, exchanges). Nonetheless, when considering the temporal aspects, we believe such behavior also exists where some accounts have a very high degree at some instants while at other instants they have a low degree. Thus, we say a degree burst occurs when, at a given instant of time, the degree of an account i is greater than θ_d^i. Similar to the temporal case, for degree bursts we also identify the number of degree bursts (numberOfDegreeBursts) that happened for an account over time, the number of instants at which a degree burst happened (numberOfDegreeBurstInstances), and the time at which the largest degree burst happened (largestBurstAt). Note that these features, except for numberOfDegreeBurstInstances, are defined for in and out transactions separately as well.

    • Balance Burst: in some cases, transactions happen from account i to account j in which the transferred sum of crypto-currency is very large (more than a threshold value θ_b^i). For example, some accounts associated with Silk Road [29] or involved in money laundering sometimes transfer large sums for illegal activities. Bursty behavior of the transaction amount can therefore be helpful in identifying potentially malicious activities and accounts. Similar to the above cases, for an account i, we identify the number of unique instants where the balance exceeds θ_b^i (numberOfBalanceBurstyInstances) and the number of transactions exceeding θ_b^i (numberOfBalanceBursts). Note that we define these features for the in and out cases separately.

    • GasPrice Burst: as described before, an attacker can set a high gas price (more than a threshold value θ_g^i) to bribe the miner so that the transaction is included in the block. This activity, although abnormal, is useful in understanding an account's behavior. Towards this, and similar to the previous cases, we define numberOfGasPriceBurstyInstances as the number of instants at which the gasPrice was set higher than θ_g^i. This is defined only for the in case, as the gasPrice is set only by the sender.
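The sketch below, referenced from the attractiveness and temporal-burst items above, shows one possible way to compute the attractiveness of equation 1 and the temporal-burst counts for a single account; the function names, the per-block neighbour representation, and the default θ values are illustrative assumptions rather than the exact implementation.

```python
from typing import Dict, List, Set

def attractiveness(in_neighbors: Dict[int, Set[str]], t: int, theta_a: int) -> float:
    """Attractiveness A_i^t of one account (equation 1).

    in_neighbors[t] is the set of accounts from which the account received
    crypto-currency at time t; theta_a is the size of the history window.
    """
    current = in_neighbors.get(t, set())
    if not current:
        return 0.0
    window = range(t - theta_a, t)                                   # T without {t}
    past = set().union(*(in_neighbors.get(j, set()) for j in window))
    union_all = past | current                                       # neighbours over all of T
    return 1.0 - len(current & past) / len(union_all)

def temporal_burst_features(tx_times: List[int], theta_t: int = 2) -> Dict[str, int]:
    """numberOfTemporalBursts and longestBurstDuration for one account.

    A transaction belongs to a burst when its inter-event time is below theta_t.
    """
    times = sorted(tx_times)
    n_bursts, longest, burst_len, in_burst = 0, 0, 0, False
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap < theta_t:            # consecutive transactions close in time
            if not in_burst:         # a new burst starts here
                n_bursts += 1
                in_burst, burst_len = True, 0
            burst_len += gap
            longest = max(longest, burst_len)
        else:
            in_burst = False         # the current burst (if any) has ended
    return {"numberOfTemporalBursts": n_bursts, "longestBurstDuration": longest}
```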

Note that features such as in/outDegree, bursts, and attractiveness are graph-based temporal features. Besides these, another graph-based property that we use as a feature is the clustering coefficient (CC) [30]. For an account i, let N_i^{t,in} be the neighborhood of account i at time t from which the account has received crypto-currency, and N_i^{t,out} be the neighborhood of account i at time t to which the account has paid crypto-currency. The total account degree is deg_i^{tot} = |N_i^{t,in}| + |N_i^{t,out}|. Let N_i^{t,↔} = N_i^{t,in} ∩ N_i^{t,out}, and let a_ir = 1 if there is a transaction between i and r, and 0 otherwise. We define a_is, a_ri, a_si, a_rs, and a_sr similarly. For a directed graph, the CC of account i at time t, CC_i^t, is defined as in equation 2 [31].

CC_i^t = \frac{\sum_{r}\sum_{s}(a_{ir}+a_{ri})(a_{is}+a_{si})(a_{sr}+a_{rs})}{2\left[deg_i^{tot}\left(deg_i^{tot}-1\right)-2\left|N_i^{t,\leftrightarrow}\right|\right]}. \qquad (2)
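Equation 2 has an equivalent matrix form (Fagiolo [31]); the following numpy sketch computes it for one temporal snapshot, assuming a binary adjacency matrix without self-loops.

```python
import numpy as np

def directed_clustering_coefficient(adj: np.ndarray) -> np.ndarray:
    """Per-node directed clustering coefficient of equation 2.

    adj[i, j] = 1 if account i sent at least one transaction to account j
    in the snapshot under consideration.
    """
    A = (adj > 0).astype(float)
    np.fill_diagonal(A, 0.0)                    # ignore self-loops
    S = A + A.T
    triangles = np.diagonal(S @ S @ S)          # numerator of equation 2
    deg_tot = A.sum(axis=0) + A.sum(axis=1)     # in-degree + out-degree
    bilateral = np.diagonal(A @ A)              # |N_i^{t,<->}|: reciprocated links
    denom = 2.0 * (deg_tot * (deg_tot - 1) - 2.0 * bilateral)
    return np.divide(triangles, denom, out=np.zeros_like(denom), where=denom > 0)
```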

5 Results and Evaluation

We evaluate the effectiveness of our method using Ethereum's external transaction data, which is publicly available for download via the Etherscan APIs [32]. Note that the APIs do not provide any information about the account itself (such as the name and the account type). Nonetheless, as the hashes of the accounts are available, one can check the associated information using the Ethereum Blockchain Explorer [33]. We perform all our evaluations using Python.

Figure 1: Degree distribution of accounts: (a) inDegree distribution; (b) outDegree distribution.

5.1 Dataset

Ethereum, as of 20th December 2019, had ≈79M accounts. Out of these, 3362 accounts were already tagged as involved in malicious activities. The tags mainly include Phishing (3168 accounts), Gambling (8 accounts), Cryptopia-Hack (6 accounts), Heist (16 accounts), Suspicious (4 accounts), Bitpoint-Hack (2 accounts), Compromised (21 accounts), Spam (10 accounts), Upbit-Hack (123 accounts), Unsafe (1 account), Scam (1 account), and Bugs (2 accounts). We also looked at other sources, such as CryptoScamDB [34], to establish the ground truth about accounts that might not yet be tagged as malicious. As a result, we found 329 more malicious accounts, for a total of 3691 unique malicious EOAs and SCs. Upon further investigation, we found that out of these 3691 EOAs and SCs, 746 never transacted and were mostly involved in token trade until 7th December 2019. We thus remove them from our malicious accounts dataset. Within the remaining set of malicious accounts, there are 158 SCs and 2 marked compromised exchanges. Note that for these accounts we collect only, but all, external transactions (transactions from EOAs to SCs and between different EOAs). Also note that at the time of this study Ethereum had removed most of the malicious tags, but recently new tags were provided and more accounts were marked as malicious. As of 27th May 2020, there were 4708 malicious accounts, out of which 2019 were newly tagged. Of these 2019 accounts, only 1252 ever transacted, and of these 1252 accounts, 1029 were created before 7th December 2019, of which only 3 are present in our dataset. As the number of malicious accounts is constantly evolving, we take this opportunity to cross-validate the accounts that our analysis found malicious.

There is a high class imbalance in the dataset, as the number of benign accounts is large. Thus, we perform random under-sampling to uniformly sample 697K benign accounts from the 79M Ethereum accounts. In the resulting total of ≈700K accounts, there are 7 exchanges and 23,141 SCs, while the rest are EOAs.

A unique transaction, Tx, contains information about the blockHash, blockNumber, source, destination, gas, gasPrice, transaction hash, balance, and the timestamp of the block. Note that the Tx data does not include the timestamp of when the transaction was performed by the account. The only time-related information we are able to extract is when a block is mined; however, we currently do not use this information. We assume a time bin of 1 block for our study and assign the respective blockNumber as the timestamp of all transactions in that block (block numbers are consecutive, thus giving a notion of timestamp). Based on this notion of timestamp, we also segment the data into several SDs of different T_g and study the behavior of the accounts; section 3 describes the different T_g we consider. For statistical purposes, we have 1,531 Day SDs, 219 Week SDs, 52 Month SDs, 18 Quarter SDs, 9 HalfYearly SDs, and 5 Year SDs, plus the entire dataset: a total of 1835 datasets.

For our study we set: (i) θ_t = 2, so that continuous bursts of the smallest size are also captured; (ii) for an account i, θ_d^i = 0.8 × max(d), where d is the in/out degree of the account in the considered SD; (iii) θ_b^i = 0.8 × max(b), where b is the transaction balance for either the in or the out case; (iv) θ_a^i equal to the duration of the SD, to keep the entire history of neighbors with which a particular account transacted in the given sub-dataset; and (v) θ_g^i = 0.8 × max(gasPrice), where gasPrice is the gas price of the transactions associated with account i. We then analyse the different time-series-based features to identify their characteristics as potential features.
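A compact sketch of how these per-account thresholds could be derived for one sub-dataset follows (the input lists and the helper name are illustrative; the 0.8 factor and θ_t = 2 follow the values stated above).

```python
def burst_thresholds(degrees, balances, gas_prices, theta_t=2):
    """Per-account burst thresholds for one sub-dataset.

    degrees, balances and gas_prices hold the per-instant in/out degree,
    transacted balance and gas price observed for the account in the SD.
    """
    return {
        "theta_t": theta_t,               # captures the smallest continuous bursts
        "theta_d": 0.8 * max(degrees),    # degree-burst threshold
        "theta_b": 0.8 * max(balances),   # balance-burst threshold
        "theta_g": 0.8 * max(gas_prices), # gasPrice-burst threshold
    }
```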

5.2 Results

For the entire dataset, we first study the inDegree and outDegree distributions for both malicious and benign accounts to validate the fat-tailed behavior of the degree distribution. From fig. 1, we identify that a power-law distribution [35] with x_min = 2.3 and α ∈ [2.37, 2.54] and α ∈ [2.23, 2.33] fits the inDegree and outDegree distributions, respectively, for both malicious and benign accounts. Here α and x_min are the power-law exponent and the minimum x from which the power-law behavior is observed, respectively.
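Such fits can be reproduced with the powerlaw package [35]; a minimal sketch follows (the variables holding the degree samples are assumptions).

```python
import powerlaw  # package of Alstott et al. [35]

def fit_power_law(degree_sample):
    """Fit a power law to a degree sample and return (alpha, x_min)."""
    fit = powerlaw.Fit(degree_sample)
    return fit.power_law.alpha, fit.power_law.xmin

# Example usage (hypothetical arrays of per-account aggregated degrees):
# alpha_in,  xmin_in  = fit_power_law(indegree_of_all_accounts)
# alpha_out, xmin_out = fit_power_law(outdegree_of_all_accounts)
```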

Figure 2: Temporal degree distribution of individual accounts: (a) temporal inDegree distribution for all accounts; (b) temporal outDegree distribution for all accounts.

The fat-tailed nature of the degree distribution arises because some accounts interact with a large number of accounts at certain instants, thereby inducing bursty behavior. We study the distribution of inDegree for all individual accounts to understand whether such behavior is shown by all accounts. Fig. 2a presents the distribution of inDegree for different accounts. We identify that the inDegree of very few accounts is high (>100), and only for very few time instances, while most of the time it is low, suggesting the existence of bursts. We observe similar behavior for outDegree as well (see fig. 2b).

Figure 3: Distribution of Δt: (a) distribution of Δt; (b) account-wise distribution of Δt.

Next, we validate the existence of temporal bursts. For this, we study the distribution of the inter-event time (Δt) for all accounts. We find that it follows a power law with x_min = 3 and α = 1.25 and α = 1.76 for the benign and malicious cases, respectively (see fig. 3a). Nonetheless, we also observe a truncation at 1.5 × 10^6 blocks. The truncation reflects that some accounts are inactive or did not perform any transactions for long periods of time. At the individual level, we observe that only a few accounts have a very large inter-event time (>1 × 10^6), and the probability of occurrence of such events is very low. Most of the activity happens where the inter-event time is very small (see fig. 3b).

Figure 4: Attractiveness.

The attractiveness of malicious and benign accounts differs significantly (see fig. 4). Most malicious accounts have a high attractiveness value, while most benign accounts have a low attractiveness value. This supports our assumption that most malicious accounts target accounts that they have not previously interacted with. Some attacks (e.g., the Upbit Hack, Fake_Phishing1431: '0xdf9191889649c442836ef55de5036a7b694115b6') use multiple accounts to evade detection while transferring money to exchanges; these accounts act as buffers between the account and the exchange. This explains the relatively high probability (p(A=0) > 0.2) of low attractiveness values (A = 0) among malicious accounts. Similarly, for some benign accounts p(A=1) = 0.1, because such accounts have only 1 incoming transaction in their whole lifetime, meaning the account interacted only with new accounts.

For the entire dataset, after applying tsfresh, for every temporal feature F_t^j ∈ F_t we obtain a set of characteristics F̂_t^j that describes F_t^j. From F̂_t^j, we choose the top three characteristics, using Gini importance as the scoring method. After this process, we obtain a total of 59 features. For the entire dataset, using the Pearson correlation, we remove highly correlated features and arrive at 36 important features. We also perform PCA and identify 28 components that cover >98.2% of the variance to further reduce the feature space on the entire dataset.
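The pruning and reduction steps can be sketched as follows, assuming the 59 quantified features sit in a pandas DataFrame; the 0.9 correlation cut-off is an assumption, as the text only states that highly correlated features were removed.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def drop_correlated_features(features: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one feature of every pair whose |Pearson correlation| exceeds threshold."""
    corr = features.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return features.drop(columns=to_drop)

def reduce_with_pca(features: pd.DataFrame, variance: float = 0.982):
    """Keep enough principal components to explain ~98.2% of the variance."""
    pca = PCA(n_components=variance, svd_solver="full")
    return pca.fit_transform(features), pca

# pruned = drop_correlated_features(all_59_features)     # ~36 features remain
# reduced, pca = reduce_with_pca(all_59_features)        # ~28 components
```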

For analysis purposes, besides performing PCA to obtain the 28 components, and before running the AutoML tool (TPOT) to identify the best supervised learning algorithm, we segment the entire dataset into six dataset configurations. Note that these six dataset configurations are different from the temporal SDs. Three of the six configurations use all account types (EOA and SC) and have 59, 36, and 28 features, respectively. For the remaining three, we separate EOAs from SCs and use only EOAs; these configurations again have 59, 36, and 28 features, respectively. We configure TPOT with all the supervised ML algorithms used in the state-of-the-art studies, along with other supervised ML algorithms, to identify the algorithm that gives the best balanced accuracy.
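A simplified sketch of pointing TPOT at one dataset configuration follows; the restricted config_dict, generation counts, and split shown here are assumptions standing in for the full set of algorithms and hyperparameter ranges actually searched.

```python
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

# A deliberately small search space; the real configuration covered all
# classifiers (and hyperparameter ranges) used in the related studies.
config = {
    "sklearn.ensemble.ExtraTreesClassifier": {
        "n_estimators": [200, 400, 600, 800],
        "criterion": ["gini", "entropy"],
        "max_features": [0.15, 0.25, 0.3, 0.45],
        "class_weight": ["balanced"],
    },
    "sklearn.ensemble.RandomForestClassifier": {
        "n_estimators": [100, 200],
        "class_weight": ["balanced"],
    },
}

tpot = TPOTClassifier(config_dict=config, scoring="balanced_accuracy",
                      generations=5, population_size=20, cv=5, n_jobs=-1)

# X: account-feature matrix of one configuration, y: malicious(1)/benign(0) labels
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
# tpot.fit(X_tr, y_tr)
# print(tpot.score(X_te, y_te))   # balanced accuracy of the best found pipeline
```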

Table 3 lists the different dataset configurations we have used along with the algorithm that provided the best balanced accuracy, together with the precision, recall, and F1-score for each class. For each dataset configuration and best algorithm, we only report those hyperparameters whose values differ from the defaults. We identify that the ExtraTreesClassifier provides the overall best balanced accuracy for all dataset configurations, and among them the dataset with 59 features and all account types has the best balanced accuracy. The difference in balanced accuracy between the configurations with 36 and 59 features is only 0.5%, both when we consider only EOAs and when we consider all accounts. Given such results, we show that correlated features do not provide much gain and can be removed without loss of accuracy.

Figure 5: Cosine similarity between newly identified malicious accounts and old malicious accounts.

To validate our results, we test the ExtraTreesClassifier with the identified hyperparameters on the newly identified set of 1252 malicious accounts. The classifier achieves 50% balanced accuracy. However, when we train the classifier with the identified hyperparameters on the total dataset (the previously used 700K accounts plus the new 1252 accounts), we achieve ≈92% balanced accuracy. This makes us wonder whether the new malicious accounts have different characteristics. We check the cosine similarity between the old 2946 malicious accounts and the new 1252 malicious accounts (cf. figure 5). We find that most of the newly added malicious accounts have a low similarity score; only one new malicious account has a similarity score >0.985 with only one old malicious account. In many cases the similarity score is even below -0.89, showing that the accounts are not similar and that the new malicious accounts exhibit some new behavioral aspects. Note that for the cosine similarity we do not use features such as transactedLast and transactedFirst, because many of the accounts were created after 7th December 2019.

| Features | Data Segment | TPOT identified Classifier | Balanced Accuracy | Precision (Mal) | Precision (Ben) | Recall (Mal) | Recall (Ben) | F1 (Mal) | F1 (Ben) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 28 (PCA) | Only EOA | ExtraTrees | 0.872 | 0.38 | 1.00 | 0.75 | 0.99 | 0.50 | 1.00 |
| 28 (PCA) | EOA and SC | ExtraTrees | 0.873 | 0.22 | 1.00 | 0.76 | 0.99 | 0.34 | 0.99 |
| 36 | Only EOA | ExtraTrees | 0.876 | 0.11 | 1.00 | 0.78 | 0.97 | 0.19 | 0.99 |
| 36 | EOA and SC | ExtraTrees | 0.882 | 0.24 | 1.00 | 0.78 | 0.99 | 0.37 | 0.99 |
| 59 | Only EOA | ExtraTrees | 0.881 | 0.26 | 1.00 | 0.77 | 0.99 | 0.38 | 0.99 |
| 59 | EOA and SC | ExtraTrees | 0.887 | 0.29 | 1.00 | 0.78 | 0.99 | 0.42 | 1.00 |
  • 28 (PCA) EOA

    ExtraTreesClassifier(class_weight = ‘balanced’, max_features = 0.4, max_samples = 0.3, min_samples_leaf = 11, min_samples_split = 19, n_estimators = 600)

  • 28 (PCA) EOA and SC

    ExtraTreesClassifier(class_weight = ‘balanced’, criterion = ’entropy’, max_features = 0.25, max_samples = 0.15, min_samples_leaf = 13, min_samples_split = 4, n_estimators = 800, n_jobs = 20, random_state = 100)

  • 36 EOA

    ExtraTreesClassifier(bootstrap = true, class_weight = ‘balanced’, max_features = 0.15, max_samples = 0.7, min_samples_leaf = 8, min_samples_split = 18, n_estimators = 200, n_jobs = 10, random_state = 100)

  • 36 EOA and SC

    ExtraTreesClassifier(class_weight = ‘balanced’, criterion = ’entropy’, max_features = 0.45, max_samples = 0.75, min_samples_leaf = 18, min_samples_split = 6, n_estimators = 200)

  • 59 EOA

    ExtraTreesClassifier(class_weight = ‘balanced’, max_features = 0.2, max_samples = 0.75, min_samples_leaf = 13, min_samples_split = 19)

  • 59 EOA and SC

    ExtraTreesClassifier(class_weight = ‘balanced’, criterion = ’entropy’, max_features = 0.3, max_samples = 0.3, min_samples_leaf = 14, min_samples_split = 20, n_estimators = 200)

Table 3: Balanced accuracy, Precision, Recall and F1 score for both malicious (Mal) and benign (Ben) accounts with best identified ML algorithm for supervised case when using different dataset configurations.
Figure 6: Silhouette scores for the clusters identified by K-Means for different dataset configurations and k ∈ [3, 24].

Figure 7: Clusters with the number of malicious accounts (a) when only EOAs are considered, and (b) when both EOAs and SCs are considered.

We next test unsupervised learning algorithms such as K-Means, DBSCAN, HDBSCAN, and one-class SVM to identify suspect accounts in the entire dataset. We find that over the six dataset configurations (mentioned above, not the SDs) and different values of k ∈ [3, 24], K-Means provides the best silhouette score (0.365) with k = 10 clusters when we use all the features but only EOAs ('59 - EOA') (see fig. 6). Among these 10 clusters, for one initial condition, one cluster contained the largest number of already known malicious EOAs (≈73.9%, 2062/2788) (see fig. 7). We then compute the similarity between all the accounts in this cluster and identify 554 benign accounts whose behavior (cosine similarity, see fig. 8) is within 1 - ε, where ε → 0, of that of malicious accounts. For our analysis we use ε = 10^-7. We cross-validate the transactions performed by these 554 benign accounts and find that (a) most of the EOAs have a small transactedLast value, meaning that they have not transacted in the recent past (494 EOAs never interacted in the past 6 months), (b) at least 38 EOAs have only incoming transactions and are not exchanges, and (c) totalBalance ∈ [0.0, 150.0] Ethers with a median of 0.001 Ethers.

Figure 8: Cosine similarity between malicious accounts and benign accounts in the cluster with the best silhouette score.
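The unsupervised step can be condensed as in the sketch below: choose k by silhouette score, locate the cluster that concentrates the known malicious accounts, and flag benign accounts whose cosine similarity to a malicious account is within 1 - ε (ε = 1e-7 as above). Function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics.pairwise import cosine_similarity

def best_kmeans(X, k_range=range(3, 25), random_state=100):
    """Return the K-Means model with the highest silhouette score."""
    models = [KMeans(n_clusters=k, random_state=random_state).fit(X) for k in k_range]
    return max(models, key=lambda m: silhouette_score(X, m.labels_))

def suspects_in_malicious_cluster(X, labels, is_malicious, eps=1e-7):
    """Indices of benign accounts that behave like the known malicious ones.

    The cluster holding the largest number of tagged malicious accounts is
    selected, and its benign members whose cosine similarity to any malicious
    member is within 1 - eps are flagged as suspects.
    """
    mal_cluster = np.bincount(labels[is_malicious]).argmax()
    in_cluster = labels == mal_cluster
    malicious_members = X[in_cluster & is_malicious]
    benign_idx = np.where(in_cluster & ~is_malicious)[0]
    sim = cosine_similarity(X[benign_idx], malicious_members)
    return benign_idx[sim.max(axis=1) >= 1.0 - eps]

# model = best_kmeans(X)                  # X: feature matrix, e.g. '59 - EOA'
# suspects = suspects_in_malicious_cluster(X, model.labels_, is_malicious)
```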

When considering both EOAs and SCs, we obtain the best silhouette score (0.356) for k = 9 clusters, in the case where we use all 59 features ('59 - EOA and SC') (see fig. 6). In this case, for one initial condition, there was one cluster with the largest number of already tagged malicious EOAs (≈64.3%, 1793/2788) and malicious SCs (≈62.6%, 99/158). Within this cluster, using our previous method, we identify 293 potential suspect EOAs and no suspect SCs. Out of these 293 accounts, 160 EOAs were also among the set of 554 accounts. We further tested whether the accounts we identified as suspects are present in the list of newly tagged malicious accounts. We found that none of the 3 newly tagged malicious accounts that transacted during our analysis period were in our list of suspects. This is possible, as those accounts may have changed their behavior and become malicious only after our collection period. We do not reveal the account hashes, for the sake of privacy and to avoid maligning benign accounts that interact with these 554 or 293 suspects, until they are officially tagged malicious. Other unsupervised ML algorithms did not perform better than K-Means: the silhouette scores for HDBSCAN were in [-0.06, -0.022], while one-class SVM did not converge.

To further understand the temporal behavior changes before classifying accounts as malicious, we use the temporal sub-datasets (SDs) created at the different temporal granularities T_g (see section 3). Consider a T_g ∈ T_G consisting of several SDs; let this set be SD(T_g) = {SD(T_g)_1, SD(T_g)_2, ..., SD(T_g)_j, ..., SD(T_g)_n}. Further, consider an account i. We first analyse all the time-series-based features in each SD(T_g)_j and characterise them. We employ a similar approach as before, where we identify F̂_t^i using tsfresh for each F_t^i ∈ F in a given SD(T_g)_j and use the three characteristics in F̂_t^i with the highest Gini score.

We then use K-Means with the previously identified hyperparameter (k = 9) and perform clustering. As before, we tag accounts in each SD(T_g)_j as malicious or benign after computing the cosine similarity. This results in a vector M for each account, of size n_i, where each element M_j of M is either 0 or 1 and n_i is the number of SDs in a T_g in which the account appears; 0 represents not being identified as malicious. Let this set of SDs be SD(T_g)^i = {SD(T_g)_1^i, SD(T_g)_2^i, ..., SD(T_g)_j^i, ..., SD(T_g)_n^i}. M depicts the behavior of account i, and a change in behavior is captured whenever M_j ≠ M_{j+1}. We note that, as per our analysis, one benign account changed its behaviour the largest number of times (591) at T_g = Day. Figure 9 shows the probability distribution of the number of behavior changes by accounts, considering only those accounts where at least one change happened. For the daily case, as the data was substantial, we identify that a lognormal-positive distribution with parameters x_min = 1, μ = 1.25, and σ = 2.36 best fits the data. Further, across all T_g, there were 9254 unique benign accounts that showed unstable behavior.

Figure 9: Probability distribution of the number of changes in behavior of accounts at different T_g.

From M, the probability of a particular account i being malicious in a given T_g is given by p_m^i = (Σ_{j ∈ SD(T_g)^i} M_j) / n_i. The number of accounts with a certain probability of being benign at different T_g is shown in figure 10. We identify 814 unique accounts across the different T_g as suspects that have p_m^i = 0. Further, as seen from the figure, most of the accounts were identified as benign.
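The per-account probability and the number of behavior changes then reduce to simple operations on the 0/1 vector M, as sketched below (M_j = 1 when the account was flagged in SD_j; the function names are illustrative).

```python
import numpy as np

def malicious_probability(M) -> float:
    """p_m^i: fraction of sub-datasets in which account i was flagged malicious."""
    M = np.asarray(M)
    return float(M.sum() / M.size) if M.size else 0.0

def behavior_changes(M) -> int:
    """Number of times the account's tag flips between consecutive SDs."""
    return int(np.count_nonzero(np.diff(np.asarray(M))))
```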

Figure 10: Number of accounts with a certain probability of being benign at different T_g, on a semi-log scale.

6 Conclusion

The growth of blockchain technology has found implementations not only in the financial sector (such as the crypto-currency market, hedge funds, and insurance) but also in sectors such as governance, education, healthcare, and law enforcement. Although blockchains are privacy-preserving, with the increase in adoption, security threats are inevitable, more diverse, and deployed using novel techniques. It is essential for accounts to be able to perform secure transactions. Motivated by the fact that there is limited work on identifying accounts involved in potentially malicious activities, and that the available work does not target the temporal aspects of blockchains, in this work we present a way to detect malicious accounts considering the temporal nature of blockchains.

In this work, we present graph-based temporal features (such as bursts and attractiveness) that are inspired by existing attacks on blockchains, on top of features already used to identify malicious accounts. To do so, we first conduct a systematic study of the temporal behavior of the blockchain graph on transaction data collected from one blockchain, Ethereum. Our results show that the ExtraTreesClassifier performs best in the supervised setting and achieves a balanced accuracy in the range [87.2%, 88.7%] across the different dataset configurations. Moreover, in the unsupervised setting, K-Means was able to cluster at most 73.9% of the known malicious accounts together and identify 554 more suspects with behavior similar to that of malicious accounts. When considering behavioral changes over time and studying them over different temporal granularities, we are able to estimate the probability of an account being malicious at a particular temporal granularity.

Given such results, we expect that benign accounts will be more careful while transacting with suspects and safeguard themselves from fraud and security threats. Nonetheless, the current technique is applicable to permissionless blockchains. We would like to investigate the applicability of our method to blockchains where features such as transaction fees and balance are missing. Regardless of whether a particular blockchain is permissionless or permissioned, there are many other centrality measures, such as closeness, betweenness, and PageRank, that are applicable to the blockchain graph. Another future research direction is to incorporate these measures as features and study the behavior of the accounts before tagging them as malicious or benign. Furthermore, in this work we detected suspects using supervised and unsupervised learning algorithms; reinforcement learning is another type of ML that can be applied and studied to detect malicious activity. As our validation failed on the newly tagged malicious accounts, another perspective is to study the new features and methods that new malicious accounts use to perform illegal activities.

Competing interests

The authors declare that they have no competing interests.

Author’s contributions

RA, SB and SKS designed the research, RA and SB conducted experiments. All authors read and approved the final manuscript.

Availability of data and materials

The list of 2946 malicious accounts used will be made available upon request. Nonetheless, list of all 4708 malicious accounts is publicly available on Etherscan and can be crawled using [36]. The Ethereum transaction Data is also public and can be downloaded using Etherscan APIs [32].

Acknowledgements

Not applicable.

Funding

This work is partially funded by the National Blockchain Project at IIT Kanpur sponsored by the National Cyber Security Coordinator’s office of the Government of India and partially by the C3i Center funding from the Science and Engineering Research Board of the Government of India.

References

  • [1] Bryk, A.: Blockchain Attack Vectors: Vulnerabilities of the Most Secure Technology. (Accessed 13/12/2019) (2018). https://www.apriorit.com/dev-blog/578-blockchain-attack-vectors
  • [2] Chen, H., Pendleton, M., Njilla, L., Xu, S.: A Survey on Ethereum Systems Security: Vulnerabilities, Attacks and Defenses. ACM Computing Surveys 53(3), 1–43 (2020). doi:10.1145/3391195
  • [3] Chainalysis: 2019 Crypto Crime Report: Decoding Hacks, Darknet Markets, and Scams. (Accessed 30/03/2020) (2019). https://go.chainalysis.com/2019-Crypto-Crime-Report.html
  • [4] Chen, T., Zhu, Y., Li, Z., Chen, J., Li, X., Luo, X., Lin, X., Zhang, X.: Understanding Ethereum via Graph Analysis. In: IEEE INFOCOM 2018, pp. 1484–1492. IEEE, Honolulu (2018). doi:10.1109/INFOCOM.2018.8486401
  • [5] Aspembitova, A., Feng, L., Melnikov, V., Chew, L.: Fitness preferential attachment as a driving mechanism in bitcoin transaction network. PLOS ONE 14(8), 1–20 (2019). doi:10.1371/journal.pone.0219346
  • [6] Buterin, V.: Ethereum: A Next-Generation SmartContract and Decentralized Application Platform (2013). https://ethereum.org/whitepaper/
  • [7] Atzei, N., Bartoletti, M., Cimoli, T.: A Survey of Attacks on Ethereum Smart Contracts SoK. In: Proceedings of the 6th International Conference on Principles of Security and Trust, pp. 164–186. Springer-Verlag, Berlin (2017). doi:10.1007/978-3-662-54455-6_8
  • [8] Abdallah, A., Maarof, M., Zainal, A.: Fraud detection system: A survey. Journal of Network and Computer Applications 68, 90–113 (2016). doi:10.1016/j.jnca.2016.04.007
  • [9] Pham, T., Lee, S.: Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods (2016). 1611.03941
  • [10] Pham, T., Lee, S.: Anomaly Detection in the Bitcoin System A Network Perspective (2017). 1611.03942
  • [11] Monamo, P., Marivate, V., Twala, B.: Unsupervised Learning for Robust Bitcoin Fraud Detection. In: Information Security for South Africa (ISSA), pp. 129–134. IEEE, Johannesburg (2016). doi:10.1109/ISSA.2016.7802939
  • [12] Bartoletti, M., Pes, B., Serusi, S.: Data Mining for Detecting Bitcoin Ponzi Schemes. In: Crypto Valley Conference on Blockchain Technology, Zug, pp. 75–84 (2018). doi:10.1109/CVCBT.2018.00014
  • [13] Chen, W., Zheng, Z., Cui, J., Ngai, E., Zheng, P., Zhou, Y.: Detecting Ponzi Schemes on Ethereum: Towards Healthier Blockchain Technology. In: World Wide Web Conference, Lyon, pp. 1409–1418 (2018). doi:10.1145/3178876.3186046
  • [14] Ostapowicz, M., Zbikowski, K.: Detecting Fraudulent Accounts on Blockchain: A Supervised Approach. In: Cheng, R., Mamoulis, N., Sun, Y., Huang, X. (eds.) Web Information Systems Engineering, pp. 18–31. Springer, Hong Kong (2019). doi:10.1007/978-3-030-34223-4_2
  • [15] Singh, A.: Anomaly detection in the Ethereum network. Technical report, Indian Institute of Technology, Kanpur (2019)
  • [16] Kumar, N., Singh, A., Handa, A., Shukla, S.: Detecting Malicious Accounts on the Ethereum Blockchain with Supervised Learning. In: 4th International Symposium on Cyber Security Cryptology and Machine Learning (CSCML 2020). Springer, Be’er Sheva, Israel (2020)
  • [17] Zola, F., Eguimendia, M., Bruse, J., Urrutia, R.: Cascading Machine Learning to Attack Bitcoin Anonymity. In: 2nd International Conference on Blockchain, pp. 10–17. IEEE, Atlanta (2019). doi:10.1109/Blockchain.2019.00011
  • [18] Goldsmith, D., Grauer, K., Shmalo, Y.: Analyzing hack subnetworks in the bitcoin transaction graph. Applied Network Science 5(1) (2020). doi:10.1007/s41109-020-00261-7
  • [19] Cheng, Z., Hou, X., Li, R., Zhou, Y., Luo, X., Li, J., Ren, K.: Towards a First Step to Understand the Cryptocurrency Stealing Attack on Ethereum. In: 22nd International Symposium on Research in Attacks, Intrusions and Defenses, pp. 47–60. USENIX, Beijing (2019). https://www.usenix.org/conference/raid2019/presentation/cheng
  • [20] Jung, E., Tilly, M., Gehani, A., Ge, Y.: Data Mining-Based Ethereum Fraud Detection. In: 2nd International Conference on Blockchain, pp. 266–273. IEEE, Atlanta (2019). doi:10.1109/Blockchain.2019.00042
  • [21] Christ, M., Kempa-Liehr, A., Feindt, M.: Distributed and parallel time series feature extraction for industrial big data applications (2016). 1610.07717
  • [22] Christ, M., Braun, N., Neuffer, J., Kempa-Liehr, A.: Time Series FeatuRe Extraction on basis of Scalable Hypothesis tests (tsfresh – A Python package). Neurocomputing 307, 72–77 (2018). doi:10.1016/j.neucom.2018.03.067
  • [23] Olson, R., Bartley, N., Urbanowicz, R., Moore, J.: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In: Genetic and Evolutionary Computation Conference, pp. 485–492. ACM, Denver (2016). doi:10.1145/2908812.2908918
  • [24] O’Neal, S.: Bitpoint Hack Shows That Regulators’ Scrutiny Does Not Equal Safety. (Accessed 01/06/2020) (2020). https://cointelegraph.com/news/bitpoint-hack-shows-that-regulators-scrutiny-does-not-equal-safety
  • [25] Palladino, S.: The Parity Wallet Hack Explained. (Accessed 01/06/2020) (2020). https://blog.openzeppelin.com/on-the-parity-wallet-multisig-hack-405a8c12e8f7/
  • [26] Buterin, V.: Transaction spam attack: Next Steps. (Accessed 01/06/2020) (2020). https://blog.ethereum.org/2016/09/22/transaction-spam-attack-next-steps/
  • [27] Karsai, M., Kaski, K., Barabási, A., Kertész, J.: Universal features of correlated bursty behaviour. Scientific Reports 2(397), 1–7 (2012). doi:10.1038/srep00397
  • [28] -: Allinvain Theft. (Accessed 01/06/2020) (2020). https://bitcointalk.org/index.php?topic=83794.0#post_toc_20
  • [29] Spagnuolo, M., Maggi, F., Zanero, S.: BitIodine: Extracting Intelligence from the Bitcoin Network. In: Christin, N., Safavi-Naini, R. (eds.) in Proc. of 18th Financial Cryptography and Data Security, pp. 457–468. Springer, Christ Church, Barbados (2014). doi:10.1007/978-3-662-45472-5_29
  • [30] Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998). doi:10.1038/30918
  • [31] Fagiolo, G.: Clustering in complex directed networks. Physical Review E 76, 026107 (2007). doi:10.1103/PhysRevE.76.026107
  • [32] Etherscan: Ethereum Developer APIs. (Accessed 01/06/2020) (2020). https://etherscan.io/apis
  • [33] Bitfly Gmbh: Etherchain - The Ethereum Blockchain Explorer. (Accessed 01/06/2020) (2020). https://www.etherchain.org/
  • [34] MyCrypto Inc.: CryptoScamDB. (Accessed 07/12/2019) (2019). https://cryptoscamdb.org/
  • [35] Alstott, J., Bullmore, E., Plenz, D.: Powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions. PLoS ONE 9(1), 85777 (2014). doi:10.1371/journal.pone.0085777
  • [36] Etherscan: Label Word Cloud. (Accessed 01/06/2020) (2020). https://etherscan.io/labelcloud/