
1 Chongqing Technology and Business University
2 Télécom SudParis, Institut Polytechnique de Paris
3 Chongqing University of Arts and Sciences
{zhuhongyu,jinxin4,liaohongcao}@ctbu.edu.cn

Relax DARTS: Relaxing the Constraints of Differentiable Architecture Search for Eye Movement Recognition

Hongyu Zhu1, Xin Jin1 (equal contribution; ✉ corresponding author), Hongchao Liao1, Yan Xiang3, Mounim A. El-Yacoubi2, Huafeng Qin1
Abstract

Eye movement biometrics is a secure and innovative identification method. Deep learning methods have shown good performance, but their network architectures rely on manual design combined with prior knowledge. To address these issues, we introduce neural architecture search (NAS) to the field of eye movement recognition and present Relax DARTS, an improvement of Differentiable Architecture Search (DARTS) that realizes more efficient network search and training. The key idea is to circumvent the issue of weight sharing by independently training the architecture parameters $\alpha$ to obtain a more precise target architecture. Moreover, the introduction of module input weights $\beta$ gives cells the flexibility to select their inputs, which alleviates overfitting and improves model performance. Results on four public databases demonstrate that Relax DARTS achieves state-of-the-art recognition performance. Notably, Relax DARTS also adapts well to other multi-feature temporal classification tasks.

Keywords:
Eye movement biometrics, Differentiable Architecture Search.

1 Introduction

Biometrics technology for authentication or identification has been the subject of increasing attention in recent times. In contrast to traditional static physiological characteristics[1, 2, 3], behavioral biometric modalities such as gait[4], eye movements[5], and handwriting[6] offer natural liveness identification and thus achieve higher security.

Among various behavioral biometric modalities, eye movement biometric systems show notable advantages, supporting liveness detection[7] and spoof-resistant continuous authentication[8]. They can also be easily and seamlessly integrated with iris[3], pupil[9], and other ocular physiological attributes for multimodal recognition. Eye movement biometrics has therefore received considerable attention over the past two decades[10].


Figure 1: The overall flow chart of eye movement (EM) recognition using the NAS algorithm. Compared to the existing DARTS-based algorithms[11, 12, 13], we directly utilize the searched architectures for training and evaluation without stacking of units.

Eye movements are defined as the coordinated contraction and relaxation of the six ocular muscles, which are regulated by brainstem nerves during visual processing. These movements can be broadly classified into fixations and saccades based on the angular velocity of eye rotation, and they permit the focusing of attention on objects of interest. Eye tracking devices employ near-infrared light reflections on the eye to detect the gaze point's position, capturing the trajectory of gaze movements and thereby obtaining raw eye movement data. The resulting data then undergoes feature encoding and matching to achieve identification. As eye movements are controlled by the brain's neuroelectric signals, the data provides a wealth of real-time information about the brain's cognitive processes, making eye movements highly distinctive and stable over time[14].
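To make the velocity-based distinction concrete, the following is a minimal sketch of a velocity-threshold classifier in the spirit of the classical I-VT algorithm; the 30 deg/s threshold, the sampling-rate argument, and the function name are illustrative assumptions rather than part of this paper's pipeline.

```python
import numpy as np

def classify_ivt(gaze_deg: np.ndarray, hz: float, threshold: float = 30.0) -> np.ndarray:
    """Label each sample as fixation (0) or saccade (1) by angular velocity.

    gaze_deg: (N, 2) array of gaze angles in degrees (horizontal, vertical).
    hz: sampling rate of the eye tracker in Hz.
    threshold: deg/s velocity boundary (illustrative value).
    """
    # Angular velocity via finite differences between consecutive samples.
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * hz
    labels = (velocity > threshold).astype(int)
    # Repeat the first label so the output matches the input length.
    return np.concatenate([labels[:1], labels])
```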

In this paper, to automatically search for models with excellent eye movement recognition performance, we propose Relax DARTS, a differentiable architecture search algorithm that searches each cell structure independently while retaining global search capability, thereby relaxing the constraints of DARTS for eye movement recognition network search and training. Figure 1 illustrates the general principle of Relax DARTS for eye movement recognition. The key idea is to give each cell an independent search space, increasing the degrees of freedom of the structure search by training each cell's architecture parameter $\alpha$ independently. Additionally, the input weight $\beta$ of each cell automatically selects the cell's inputs. The contributions of our work are summarized below:

  • We make the first attempt to introduce NAS to eye movement recognition and propose a differentiable architecture search approach to automatically search for powerful networks.

  • The network is relaxed to permit greater flexibility. The search strategy based on parameter sharing and cell stacking is discarded; instead, each cell is granted the autonomy to independently update its architectural parameters $\alpha$ and choose its operations.

  • The network is granted global search capability. Each cell can choose the proportion of its inputs from direct and skip connections, or even discard an input entirely. Learning the parameter $\beta$ enables global architectural adjustments while the network conducts local cell searches.

  • We evaluated the authentication performance of Relax DARTS on four public datasets. The experimental results demonstrate that our approach outperforms existing works in reducing the verification error.

2 Related Work

2.1 Traditional eye movement recognition methods

Traditional methods typically use manual techniques to filter useful features from eye movement data and then apply machine learning (ML) algorithms for feature extraction and recognition. Common features include eye movement velocity, gaze duration, and path length.

In 2004, Kasprowski[15] first showed that eye movement data contains information usable for identity recognition. In 2008, Komogortsev[16] fully considered the bioanatomical properties of the eyeball and established a linear horizontal oculomotor plant mechanical model; Komogortsev et al.[17] expanded on this foundation and proposed the concept of oculomotor plant characteristics (OPC). In 2017, Bayat[18] conducted recognition experiments on a self-constructed dataset, using eye movement data combined with pupil size. In 2018, Li and colleagues[19] extracted eye movement features using a multi-channel Gabor wavelet transform.

2.2 Deep learning based eye movement recognition methods

Deep learning (DL) models are extensively utilized not only in computer vision (CV)[20] and natural language processing (NLP)[21], but also in the field of eye movement recognition, owing to their capacity for end-to-end feature learning.

In 2019, Jäger et al.[22] developed a convolutional neural network (CNN)-based Siamese network that feeds eye movement data into two separate sub-networks for recognition. Makowski et al.[23] fine-tuned the model and expanded it into the DeepEyedentificationLive (DEL) model. In 2022, Lohr et al.[24] proposed exponentially-dilated CNNs for recognizing eye movement features. In 2023, Taha et al.[25] collected vehicle drivers' eye movement data via a remote low-frequency acquisition device and extracted features by combining Long Short-Term Memory (LSTM) and dense networks to achieve end-to-end driver identification. In the same year, Qin et al.[5] combined improved LSTM and Transformer algorithms to extract temporal and spatial features of eye movement data, achieving state-of-the-art performance.

2.3 Differentiable architecture search

DARTS[11] uses softmax to treat the selection of candidate operations as an optimization problem over architecture weight parameters in a continuous space, so that the whole architecture search process becomes differentiable and amenable to gradient-based optimization; it is currently the most popular NAS algorithm. DARTS+[26] introduces an early stopping mechanism in the search stage to address the skip-connection enrichment that arises during DARTS training and causes a significant performance loss in the final model. Fair DARTS[12] also addresses the skip-connection enrichment phenomenon by turning the candidate operations in the search phase from competitors into collaborators, scoring architectures with the sigmoid function instead of softmax. In addition, DARTS-[13] proposes auxiliary skip-connections to offset the advantage that skip-connections hold over other candidate operations during the search.


Figure 2: The framework of the proposed Relax DARTS algorithm

3 Approach

This section presents the Relax DARTS algorithm for eye movement recognition. As shown in Fig. 2, our approach consists of two stages, i.e., local cell search and global network search. Since cells do not share weights, the representation capacity of the network is increased. Moreover, our method reduces the gap between the proxy network and the target network through global search.

3.1 Review of DARTS and Relax DARTS

The objective of DARTS is to find a Normal Cell that maintains the output feature dimension and a Reduction Cell that halves it, by searching for the optimal combination of operations from the candidates in the search space. In the search phase, a cell can be viewed as a directed acyclic graph. The edge from node $i$ to node $j$ represents a candidate operation $o^{(i,j)}\in\mathcal{O}$, and node $j$ represents a latent feature map $x^{(j)}$. Each cell comprises two input nodes, four intermediate nodes $x^{(j)}=\sum_{i<j}o^{(i,j)}(x^{(i)})$, and one output node. DARTS assigns an architecture weight vector $\alpha^{(i,j)}$ to each $o^{(i,j)}$ and relaxes the categorical choice of a particular operation to a softmax over all candidates:

$Cell(x)=\bar{o}^{(i,j)}(x)=\sum_{o\in\mathcal{O}}\frac{\exp(\alpha_{o}^{(i,j)})}{\sum_{o'\in\mathcal{O}}\exp(\alpha_{o'}^{(i,j)})}\,o(x).$  (1)

The task of cell architecture search then reduces to learning a matrix $\alpha$ consisting of the set of continuous variables $\{\alpha^{(i,j)}\}$.
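As an illustration, Eq. (1) corresponds to a mixed operation that sums the candidate outputs under softmax-normalized logits. Below is a minimal PyTorch sketch, assuming the candidate operations are shape-preserving nn.Module instances supplied by the search space:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of one edge (i, j) as in Eq. (1)."""

    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)  # the search space O

    def forward(self, x, alpha_edge):
        # alpha_edge: learnable logits alpha^{(i,j)}, shape (|O|,)
        weights = F.softmax(alpha_edge, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```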

In contrast to the $\alpha$ weight-sharing strategy in DARTS, each cell in Relax DARTS selects operations based on its own distinct weight matrix $\alpha$. Furthermore, the ratio between the two input nodes of $Cell_i$, namely the inputs from $Cell_{i-1}$ and $Cell_{i-2}$, is adjusted by a learnable weight parameter $\beta_i=\{\beta_0^i,\beta_1^i\}$. The inputs $s_0^i$ and $s_1^i$ of $Cell_i$ are given by:

$s_0^i=\frac{\exp(\beta_0^i)}{\exp(\beta_0^i)+\exp(\beta_1^i)}\,Cell_{i-2}(x),$  (2)
$s_1^i=\frac{\exp(\beta_1^i)}{\exp(\beta_0^i)+\exp(\beta_1^i)}\,Cell_{i-1}(x).$  (3)
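A minimal PyTorch sketch of Eqs. (2)-(3), assuming the two preceding cell outputs have matching shapes; the helper name is ours:

```python
import torch
import torch.nn.functional as F

def cell_inputs(beta_i: torch.Tensor, out_i_minus_2: torch.Tensor,
                out_i_minus_1: torch.Tensor):
    """Softmax-weight the two inputs of Cell_i as in Eqs. (2)-(3)."""
    w = F.softmax(beta_i, dim=-1)      # (beta_0^i, beta_1^i) -> proportions
    s0 = w[0] * out_i_minus_2          # skip connection from Cell_{i-2}
    s1 = w[1] * out_i_minus_1          # direct connection from Cell_{i-1}
    return s0, s1
```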

The objective of the search phase is to optimize the weights $\alpha$, $\beta$, and $w$ in the supernet formed by all the mixed operations, using gradient descent. The training and validation losses, denoted $\mathcal{L}_{train}$ and $\mathcal{L}_{val}$ respectively, are determined jointly by $\alpha$, $\beta$, and $w$. Specifically, architecture search aims to find the $\alpha^{*}$ and $\beta^{*}$ that minimize $\mathcal{L}_{val}$ via Eqs. (4) and (5), and the corresponding network weights $w^{*}$ that minimize $\mathcal{L}_{train}$ via Eq. (6). This can be regarded as a joint triple optimization problem:

$\alpha^{*}=\arg\min_{\alpha}\,\mathcal{L}_{val}(w^{*}(\alpha),\alpha),$  (4)
$\beta^{*}=\arg\min_{\beta}\,\mathcal{L}_{val}(w^{*}(\alpha,\beta),\beta),$  (5)
$w^{*}(\alpha,\beta)=\arg\min_{w}\,\mathcal{L}_{train}(w,\alpha^{*},\beta^{*}).$  (6)

After the optimal cell structures are obtained by the search, the network is retrained from scratch on the target task.

3.2 Local Search Strategy

To minimize the performance loss caused by sharing the parameter $\alpha$ and stacking identical cells during the network structure search, the local search strategy endows each cell with an independent architectural parameter, ensuring that each cell in the network is unique to its current position. Specifically, we propose allowing each cell to update its structural parameter $\alpha$ independently. After initialization, each $\alpha$ serves as the weight matrix for its cell's node operations, enabling independent architecture selection. To reduce the risk of overfitting, we simplify the network structure by alternately stacking Normal and Reduction Cells three times, yielding a six-layer network. At the end of the search phase, we obtain six distinct cells instead of two.
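This strategy can be sketched as a supernet that holds one $\alpha$ per cell instead of two shared matrices. In the sketch below, the edge and operation counts are placeholders for whatever search space is chosen, and the cell modules themselves are omitted:

```python
import torch
import torch.nn as nn

class RelaxedSupernet(nn.Module):
    """Six-cell supernet; every cell owns independent architecture logits."""

    def __init__(self, n_cells: int = 6, n_edges: int = 14, n_ops: int = 8):
        super().__init__()
        # One alpha matrix per cell -- no weight sharing across cells.
        self.alphas = nn.ParameterList(
            nn.Parameter(1e-3 * torch.randn(n_edges, n_ops))
            for _ in range(n_cells)
        )
        # Alternate Normal (stride 1) and Reduction (stride 2) cells.
        self.is_reduction = [i % 2 == 1 for i in range(n_cells)]
```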

3.3 Global Search Strategy

The performance gap that arises when the searched network structure is used for training and evaluation has been a persistent issue, owing to differences in network architecture[27, 28]. To address this, we introduce the learnable parameter $\beta$ as the input weight of each cell to fine-tune the network architecture during training and optimize the global network structure. $Cell_i$ has two input feature vectors, $s_0^i$ and $s_1^i$. The input $s_0^i$ is the same as the input $s_1^{i-1}$ of the previous $Cell_{i-1}$, which is a skip-connect operation at the network-structure level; the output of the previous cell is the other input, $s_1^i$, which is a direct-connect operation. The learnable parameter $\beta$ measures the proportion in which a cell mixes the skip-connect and direct-connect inputs. We initialize $\beta$ in the same way as $\alpha$ and normalize it with softmax. The competition between the two inputs is preserved; note that the proportions are made to sum to 2 rather than 1, avoiding a simultaneous down-scaling of both inputs. When the weight of an input falls below the threshold $c=0.2$, that input is replaced with an all-zero tensor.
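The thresholded gating can be sketched as follows; the threshold $c=0.2$ comes from the text, while the helper name is our assumption:

```python
import torch

def gate_input(s: torch.Tensor, proportion: torch.Tensor, c: float = 0.2) -> torch.Tensor:
    """Replace a cell input with an all-zero tensor if its learned
    proportion falls below the threshold c."""
    return torch.zeros_like(s) if proportion.item() < c else s
```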

A network structure is determined by the set of parameters $\alpha$, $\beta$, and $w$. Evaluating the performance of the structure after optimizing the parameters to convergence at each step is costly. Therefore, we use the same approximation strategy as DARTS and perform an alternating triple optimization of $\alpha$, $\beta$, and $w$ without training each network to convergence:

$\nabla_{\alpha}\mathcal{L}_{val}(w,\alpha)\approx\nabla_{\alpha}\mathcal{L}_{val}(w-\xi^{\alpha}\nabla_{w}\mathcal{L}_{train}(w,\alpha),\alpha),$  (7)
$\nabla_{\beta}\mathcal{L}_{val}(w,\beta)\approx\nabla_{\beta}\mathcal{L}_{val}(w-\xi^{\beta}\nabla_{w}\mathcal{L}_{train}(w,\beta),\beta),$  (8)
$\nabla_{w}\mathcal{L}_{train}(w,(\alpha,\beta)).$  (9)

The parameter optimization learning rate is represented by $\xi$, and Algorithm 1 illustrates the overall search algorithm.

Input: The train and val data; architecture weights $\alpha$; network weights $w$; input weights $\beta$; search epochs $E$
Output: The best-performing network structure
Initialize $\alpha$ and $\beta$;
Create a mixed operation parametrized by $\alpha$ for each edge and a mixed input parametrized by $\beta$ for each cell;
Construct a supernet and initialize the supernet weights $w$;
for epoch $\in$ [1, E] do
       Sample a batch from val;
       Each $Cell_i$ independently updates its $\alpha_i$ by descending $\nabla_{\alpha}\mathcal{L}_{val}$ and its $\beta_i$ by descending $\nabla_{\beta}\mathcal{L}_{val}$;
       Sample a batch from train;
       Update the network weights $w$ by descending $\nabla_{w}\mathcal{L}_{train}$;
end for
Derive the final architecture based on the learned $\alpha$ and $\beta$;
Algorithm 1: Relax DARTS
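A first-order PyTorch sketch of one iteration of Algorithm 1, i.e., Eqs. (7)-(9) with $\xi=0$; `model`, `criterion`, and the two optimizers are assumptions standing in for the supernet, the classification loss, and separate optimizers over $\{\alpha_i,\beta_i\}$ and $w$:

```python
import torch

def search_step(model, arch_opt, w_opt, criterion, val_batch, train_batch):
    """One alternating update: (alpha, beta) on validation, then w on training."""
    # 1) Update architecture weights alpha and beta on a validation batch.
    x_val, y_val = val_batch
    arch_opt.zero_grad()                      # optimizer over all alpha_i, beta_i
    criterion(model(x_val), y_val).backward()
    arch_opt.step()

    # 2) Update network weights w on a training batch.
    x_train, y_train = train_batch
    w_opt.zero_grad()                         # optimizer over network weights w
    criterion(model(x_train), y_train).backward()
    w_opt.step()
```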

4 Experiments

To evaluate our approach, we performed extensive experiments on four public datasets: JuDo1000[7] and three sub-datasets (RAN, HSS, and TEX) in GazeBase[29]. We compare Relax DARTS with state-of-the-art work, including not only eye movement recognition algorithms, namely DEL[23], Expansion CNN[30], Dense LSTM[25], EKYT[24], and EmMixformer[5], but also the classical DARTS[11] algorithm and its improvements Fair DARTS[12] and DARTS-[13].

The network is searched and trained directly on the dataset to be evaluated, without a proxy dataset, because the computational overhead is affordable and this reduces the performance gap between search and evaluation.

All DARTS-based methods perform 50 search epochs and 300 training epochs, following the same network parameter settings as in [12] and the same eye movement data partitioning as in [5] for fair baseline testing and comparison. We set the batch size to 32 for training and 256 for testing; the learning rate decays from 0.025 to 0 with cosine annealing; SGD is the optimizer, with a momentum of 0.9 and a weight decay of $5\times10^{-4}$; and the drop-path rate is set to 0.3. All experiments were implemented in PyTorch on a high-performance computer with an NVIDIA A100 Tensor Core GPU.
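For reference, these settings translate into PyTorch roughly as follows; the model and architecture parameters below are placeholders, and the Adam hyperparameters for $\alpha$ and $\beta$ are the usual DARTS defaults rather than values stated in this paper:

```python
import torch
import torch.nn as nn

EPOCHS = 300
model = nn.Linear(8, 2)                                # placeholder for the supernet
arch_parameters = [nn.Parameter(torch.zeros(14, 8))]   # placeholder alpha/beta logits

# Network weights w: SGD with momentum 0.9 and weight decay 5e-4.
w_opt = torch.optim.SGD(model.parameters(), lr=0.025,
                        momentum=0.9, weight_decay=5e-4)
# Cosine annealing decays the learning rate from 0.025 to 0.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    w_opt, T_max=EPOCHS, eta_min=0.0)
# Architecture weights: DARTS-default Adam settings (an assumption here).
arch_opt = torch.optim.Adam(arch_parameters, lr=3e-4,
                            betas=(0.5, 0.999), weight_decay=1e-3)
```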

Table 1: Results of comparative experiments on the RAN database

Method               EER      FRR@FAR $10^{-1}$  FRR@FAR $10^{-2}$  FRR@FAR $10^{-3}$
Relax DARTS          0.0657   0.0452             0.2648             0.6681
DARTS [11]           0.0956   0.0910             0.5412             0.8987
Fair DARTS [12]      0.0720   0.0532             0.3048             0.7038
DARTS- [13]          0.0788   0.0617             0.3907             0.7915
EmMixformer [5]      0.0801   0.0680             0.2818             0.2818
DEL [23]             0.1436   0.2066             0.7383             0.9645
Expansion CNN [30]   0.1500   0.2340             0.7277             1.0000
Dense LSTM [25]      0.1161   0.1329             0.5529             0.8846
EKYT [24]            0.0885   0.0807             0.3513             0.7045
Table 2: Results of comparative experiments on the HSS database

Method               EER      FRR@FAR $10^{-1}$  FRR@FAR $10^{-2}$  FRR@FAR $10^{-3}$
Relax DARTS          0.0610   0.0398             0.1504             0.3718
DARTS [11]           0.0642   0.0423             0.2307             0.5448
Fair DARTS [12]      0.0736   0.0544             0.2939             0.6424
DARTS- [13]          0.0686   0.0482             0.2634             0.6119
EmMixformer [5]      0.0673   0.0502             0.2032             0.4659
DEL [23]             0.1309   0.1680             0.6217             0.9087
Expansion CNN [30]   0.1437   0.1894             0.6267             1.0000
Dense LSTM [25]      0.1191   0.1365             0.5082             0.8382
EKYT [24]            0.0839   0.0739             0.2691             0.5575

4.1 Identification Performance of Relax DARTS

In detail, for the traditional deep learning approaches, we select the first round of data for training and the second round, collected at a different time, for testing. For the DARTS-based approaches, the first round of data is first divided into a training set and a validation set for the automatic network search; the resulting target architecture is then trained on the first round of data and tested on the second round.

Tables 1-4 list the EER and FRR@FAR of each approach. It is evident that our Relax DARTS outperforms existing approaches, achieving the lowest verification errors, i.e., 0.0657, 0.0610, and 0.0529 on the three subsets of the GazeBase dataset, and 0.0437 on the JuDo1000 dataset. It is also clear that our approach achieves higher recognition accuracy than existing approaches at the different FAR levels.
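For clarity, EER and FRR@FAR can be computed from genuine and impostor similarity scores as in the following minimal NumPy sketch; the helper name and the score convention (higher means more similar) are our assumptions, not the authors' evaluation code:

```python
import numpy as np

def eer_and_frr(genuine: np.ndarray, impostor: np.ndarray,
                far_targets=(1e-1, 1e-2, 1e-3)):
    """EER and FRR at fixed FAR levels from similarity scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))          # crossing point of FAR and FRR
    eer = (far[i] + frr[i]) / 2
    frr_at_far = {f: frr[np.argmin(np.abs(far - f))] for f in far_targets}
    return eer, frr_at_far
```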

Table 3: Results of comparative experiments on the TEX database

Method               EER      FRR@FAR $10^{-1}$  FRR@FAR $10^{-2}$  FRR@FAR $10^{-3}$
Relax DARTS          0.0529   0.0324             0.1362             0.4596
DARTS [11]           0.0580   0.0338             0.2160             0.5606
Fair DARTS [12]      0.0583   0.0347             0.2183             0.5445
DARTS- [13]          0.0596   0.0343             0.2304             0.5717
EmMixformer [5]      0.0635   0.0407             0.2603             0.6193
DEL [23]             0.1060   0.1128             0.5750             0.9141
Expansion CNN [30]   0.1362   0.1950             0.6977             1.0000
Dense LSTM [25]      0.0971   0.0945             0.4824             0.8456
EKYT [24]            0.0736   0.0551             0.3293             0.7175
Table 4: Results of comparative experiments on the JuDo1000 database

Method               EER      FRR@FAR $10^{-1}$  FRR@FAR $10^{-2}$  FRR@FAR $10^{-3}$
Relax DARTS          0.0437   0.0204             0.0953             0.3296
DARTS [11]           0.0583   0.0339             0.2035             0.4918
Fair DARTS [12]      0.0588   0.0365             0.1953             0.4848
DARTS- [13]          0.0577   0.0357             0.1911             0.4772
EmMixformer [5]      0.0543   0.0059             0.1284             0.3359
DEL [23]             0.1238   0.0781             0.5508             0.8945
Expansion CNN [30]   0.0989   0.0586             0.3203             0.7594
Dense LSTM [25]      0.0669   0.0195             0.2305             0.6016
EKYT [24]            0.0773   0.0125             0.1953             0.4922

4.2 Ablation Experiment

To investigate the effect of each step on the model's recognition accuracy, we performed ablation experiments on the RAN sub-dataset of GazeBase. Specifically, we used DARTS as the baseline and added the $\alpha$-independent optimization strategy, yielding the model denoted '+$\alpha$'. We then introduced the cell input weights $\beta$ for global fine-tuning of the network structure, yielding the model denoted '+$\beta$ (Relax DARTS)'.

The verification error rates of the ablation schemes are presented in Table 5. The experimental results suggest that $\alpha$-independent optimization significantly enhances the baseline model's recognition performance, because the independent $\alpha$-optimization strategy increases the degrees of freedom of the network search, producing unique cells adapted to their positions in the network. In addition, including the input weights $\beta$ achieves the lowest EER and yields the best performance, since it enables fine-tuning of the global network architecture while the network performs local cell search.

Table 5: Results of the ablation experiments on the RAN database

Method                   EER      FRR@FAR $10^{-1}$  FRR@FAR $10^{-2}$  FRR@FAR $10^{-3}$
DARTS [11]               0.0956   0.0910             0.5412             0.8987
+$\alpha$                0.0777   0.0599             0.3648             0.7654
+$\beta$ (Relax DARTS)   0.0657   0.0452             0.2648             0.6681

5 Conclusion

In this paper, we relaxed the search strategy of DARTS and proposed Relax DARTS, which alternates local and global architecture search for end-to-end model search and training in eye movement biometric authentication. Our experimental results demonstrate that the approach outperforms existing methods and achieves a new state-of-the-art verification accuracy.

References

  • [1] Yu Liu, Fangyin Wei, Jing Shao, Lu Sheng, Junjie Yan, and Xiaogang Wang. Exploring disentangled feature representation beyond face identification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2080–2089, 2018.
  • [2] Yang Peng, Peng Liu, Yu Wang, Guan Gui, Bamidele Adebisi, and Haris Gacanin. Radio frequency fingerprint identification based on slice integration cooperation and heat constellation trace figure. IEEE Wireless Communications Letters, 11:543–547, 2022.
  • [3] J. Daugman. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21–30, 2004.
  • [4] Chi Xu, Yasushi Makihara, Xiang Li, and Yasushi Yagi. Occlusion-aware human mesh model-based gait recognition. IEEE Transactions on Information Forensics and Security, 18:1309–1321, 2023.
  • [5] Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, Mounîm A. El-Yacoubi, and Xinbo Gao. Emmixformer: Mix transformer for eye movement recognition. ArXiv, abs/2401.04956, 2024.
  • [6] Chun-Xia Yang, Dongzhi Zhang, Dongyue Wang, Huixin Luan, Xiaoya Chen, and Weiyu Yan. In situ polymerized mxene/polypyrrole/hydroxyethyl cellulose-based flexible strain sensor enabled by machine learning for handwriting recognition. ACS applied materials & interfaces, 2023.
  • [7] Silvia Makowski, Lena A. Jäger, Paul Prasse, and Tobias Scheffer. Biometric identification and presentation-attack detection using micro- and macro-movements of the eyes. 2020 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10, 2020.
  • [8] Simon Eberz, Kasper Rasmussen, Vincent Lenders, and Ivan Martinovic. Preventing lunchtime attacks: Fighting insider threats with eye movement biometrics. In Network and Distributed System Security (NDSS) Symposium. Internet Society, 2015.
  • [9] Pieter J. Blignaut. Mapping the pupil-glint vector to gaze coordinates in a simple video-based eye tracker. Journal of Eye Movement Research, 7, 2013.
  • [10] Christina Katsini, Yasmeen Abdrabou, George E Raptis, Mohamed Khamis, and Florian Alt. The role of eye gaze in security and privacy applications: Survey and future hci research directions. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–21, 2020.
  • [11] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. International Conference on Learning Representations, abs/1806.09055, 2019.
  • [12] Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li. Fair darts: Eliminating unfair advantages in differentiable architecture search. In European conference on computer vision, pages 465–480. Springer, 2020.
  • [13] Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan. Darts-: Robustly stepping out of performance collapse without indicators. International Conference on Learning Representations, abs/2009.01027, 2021.
  • [14] Deepak Akkil, Poika Isokoski, Jari Kangas, Jussi Rantala, and R. Raisamo. Traqume: a tool for measuring the gaze tracking quality. Proceedings of the Symposium on Eye Tracking Research and Applications, 2014.
  • [15] Paweł Kasprowski and Józef Ober. Eye movements in biometrics. In ECCV Workshop BioAW, 2004.
  • [16] Oleg V. Komogortsev and Javed I. Khan. Eye movement prediction by kalman filter with integrated linear horizontal oculomotor plant mechanical model. Proceedings of the 2008 symposium on Eye tracking research & applications, 2008.
  • [17] Oleg V. Komogortsev, Alexey Karpov, Larry R. Price, and Cecilia R. Aragon. Biometric authentication via oculomotor plant characteristics. 2012 5th IAPR International Conference on Biometrics (ICB), pages 413–420, 2012.
  • [18] Akram Bayat and Marc Pomplun. Biometric identification through eye-movement patterns. In International Conference on Applied Human Factors and Ergonomics, 2017.
  • [19] Chunyong Li, Jiguo Xue, Cheng Quan, Jingwei Yue, and Chenggang Zhang. Biometric recognition via texture features of eye movement trajectories in a visual searching task. PLoS ONE, 13, 2018.
  • [20] Xin Jin, Hongyu Zhu, Mounîm A. El-Yacoubi, Hongchao Liao, Huafeng Qin, and Yun Jiang. Starlknet: Star mixup with large kernel networks for palm vein identification. ArXiv, abs/2405.12721, 2024.
  • [21] Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, and Jianyuan Zhong. Attention is all you need in speech separation. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 21–25, 2021.
  • [22] Lena A. Jäger, Silvia Makowski, Paul Prasse, Sascha Liehr, Maximilian Seidler, and Tobias Scheffer. Deep eyedentification: Biometric identification using micro-movements of the eye. ArXiv, abs/1906.11889, 2019.
  • [23] Silvia Makowski, Paul Prasse, David Robert Reich, Daniel G. Krakowczyk, Lena A. Jäger, and Tobias Scheffer. Deepeyedentificationlive: Oculomotoric biometric identification and presentation-attack detection using deep neural networks. IEEE Transactions on Biometrics, Behavior, and Identity Science, 3:506–518, 2021.
  • [24] Dillon James Lohr and Oleg V. Komogortsev. Eye know you too: A densenet architecture for end-to-end biometric authentication via eye movements. ArXiv, abs/2201.02110, 2022.
  • [25] Bilal Taha, Sherif Nagib Abbas Seha, Dae Yon Hwang, and Dimitrios Hatzinakos. Eyedrive: A deep learning model for continuous driver authentication. IEEE Journal of Selected Topics in Signal Processing, 17:637–647, 2023.
  • [26] Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang, and Zhenguo Li. Darts+: Improved differentiable architecture search with early stopping. ArXiv, abs/1909.06035, 2019.
  • [27] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. ArXiv, abs/1812.00332, 2018.
  • [28] Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1294–1303, 2019.
  • [29] Henry K. Griffith, Dillon James Lohr, Evgeny Abdulin, and Oleg V. Komogortsev. Gazebase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Scientific Data, 8, 2021.
  • [30] Dillon James Lohr, Henry K. Griffith, and Oleg V. Komogortsev. Eye know you: Metric learning for end-to-end biometric authentication using eye movements from a longitudinal dataset. IEEE Transactions on Biometrics, Behavior, and Identity Science, 4:276–288, 2021.