
A Contactless Fingerprint Recognition System

Aman Attrish, Nagasai Bharat, Vijay Anand, and Vivek Kanhangad. A. Attrish, N. Bharat, V. Anand, and V. Kanhangad are with the Discipline of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India. E-mail: [email protected], ee160002016.iiti.ac.in, [email protected], [email protected]
Abstract

Fingerprints are one of the most widely explored biometric traits. Specifically, contact-based fingerprint recognition systems reign supreme due to their robustness, portability, and the extensive research work done in the field. However, these systems suffer from issues such as hygiene, sensor degradation due to constant physical contact, and latent fingerprint threats. In this paper, we propose an approach for developing a contactless fingerprint recognition system that captures a finger photo from a distance using an image sensor in a suitable environment. The captured finger photos are then processed further to obtain global and local (minutiae-based) features. Specifically, a siamese convolutional neural network (CNN) is designed to extract global features from a given finger photo. The proposed system computes matching scores from CNN-based features and minutiae-based features. Finally, the two scores are fused to obtain the final matching score between the probe and reference fingerprint templates. Most importantly, the proposed system is developed using the Nvidia Jetson Nano development kit, which allows us to perform contactless fingerprint recognition in real time with minimum latency and acceptable matching accuracy. The performance of the proposed system is evaluated on an in-house IITI contactless fingerprint dataset (IITI-CFD) containing 105 training and 100 test fingers. The proposed system achieves an equal-error-rate of 2.19% on IITI-CFD.

Index Terms:
Contactless fingerprint, fingerprint recognition, siamese CNN

I Introduction

Faced with the challenges of identity theft that plague password- and PIN-based authentication systems, organizations are gradually adopting new technological solutions. One of these technologies, biometrics, has quickly proved itself to be the most suitable means of authenticating and identifying individuals in a fast, safe, and reliable way, using unique biometric traits.

Fingerprints are among the most widely explored biometric traits. A fingerprint image consists of patterns of ridges and valleys (furrows) found on the fingertip [1]. Automated fingerprint recognition systems (AFRS) recognize a person by matching fingerprint patterns. Fingerprint recognition can be carried out in two ways, namely, fingerprint verification and fingerprint identification [2]. In the verification process, the biometric system performs a one-to-one matching of the user’s fingerprint with the fingerprint template stored in the database to verify whether the claimed identity is true or false [3]. In the identification process, on the other hand, the user’s fingerprint is compared to all the fingerprint templates stored in the database to obtain the user’s identity. Hence, the identification process is computationally expensive compared to the verification process, especially for large databases [1].

Generally, in contact-based AFRS, the fingerprint image is captured using a complementary metal-oxide-semiconductor (CMOS) image sensor [2]. Non-linear spatial distortions and low-contrast regions caused by improper pressure of the finger on the sensor platen are some of the challenges common to contact-based biometric systems [2]. Because each individual’s finger makes physical contact with the sensor, the cleanliness of the sensor becomes a major concern: a contaminated surface can spread contagious diseases to users, and accumulated dust and dirt can prevent the system from working as expected. Further, contact-based sensors have a high maintenance cost, as they can easily be damaged through improper physical contact. Furthermore, these technologies face a significant security threat, since every acquisition leaves a latent print of the finger on the sensor surface, which could easily be lifted off.

The aforementioned issues motivate the development of biometric systems in the contactless domain using a camera sensor, which captures the fingerprint image in a suitable capturing environment. Piuri and Scotti [4] investigated techniques to suitably process camera images of fingertips such that the processed images are similar to fingerprint images captured using a dedicated sensor. The primary focus of the work presented in [4] is to leverage existing contact-based fingerprint recognition techniques to develop a contactless fingerprint recognition system using fingerprint images from mobile cameras and webcams. Labati et al. [5] proposed an approach, based on a Sony CCD camera sensor, to recover from perspective deformations and improper fingertip alignments in single-camera contactless finger biometric systems, thereby eliminating the non-idealities of contactless acquisition of fingerprint samples. To improve the visibility of the ridge patterns, the illumination techniques presented in [6, 7, 8] are used. Lin and Kumar [9] presented a CNN-based framework to match contactless and contact-based fingerprint images. Michael et al. [10] developed a contactless biometric system using visible and infrared imagery of five hand features, namely hand geometry, palm print, palmar knuckle print, palm vein, and finger vein. Kumar [11] investigated the possibility of recognizing completely contactless finger knuckle images acquired under varying poses; the experimental results presented in [11] validate the usefulness of normalization and matching algorithms for recognizing finger knuckles with different poses.

A review of the literature indicates that contactless fingerprint biometrics has not been explored much, and most of the existing research has focused only on the analysis and simulation aspects of contactless fingerprint biometrics. Despite advances in sensor technology and edge computing power, very little research work has implemented the algorithms on hardware and prototyped a biometric system. This motivated us to develop a contactless fingerprint biometric system that draws on the significant research in biometrics and deep learning, and to implement it as a real-time system.

The objective of our work is to develop a contactless fingerprint recognition system (CFRS) incorporating both deep learning and standard fingerprint matching algorithms. The primary focus is on a real-time implementation of the CFRS on a hardware setup with minimum latency and high matching accuracy. The major contributions of this work are as follows: a customized siamese CNN architecture has been designed for the finger images captured by the camera sensor in the system, and the siamese CNN, along with a minutiae-based matching algorithm, has been deployed on the Nvidia Jetson Nano kit to realize a real-time CFRS with minimum latency and acceptable matching accuracy.

The rest of the paper is organised as follows: the proposed approach is described in Section II, which gives a brief overview of the challenges faced in the contactless domain, the proposed algorithm, and the incorporated NIST software. This is followed by a detailed description of the methods employed to address these challenges using deep learning and state-of-the-art techniques. The hardware incorporated in the system is described next. Experimental results and discussion are presented in Section III, which also gives a brief description of the collected database and the performance measures used for evaluating the system. Finally, conclusions and future work are presented in Section IV.

II Proposed Approach

We have developed a CFRS consisting of three major components, namely, a contactless finger image capturing module, a CNN-based global feature matching module, and a minutiae feature matching module. The proposed CFRS captures the finger image from a distance using a Raspberry Pi NoIR camera V2, which has a Sony IMX219 8-megapixel sensor.

Figure 1: Schematic diagram of the proposed parallel approach

A schematic diagram of the proposed approach is presented in Fig. 1. As can be observed, the proposed approach employs a customized siamese CNN architecture to process images captured from the camera sensor in the system. Specifically, the siamese network generates a fixed-length embedding of the fingerprint image, which is then utilized to calculate a similarity score between the probe and reference images. In parallel, we apply an image enhancement technique to the captured finger image and then perform minutiae-based matching using the standard NIST Biometric Image Software (NBIS). Finally, the scores obtained from the two modules are fused to obtain the final score.

We have divided our approach into two stages, i.e., developing algorithms (and modifying existing ones) to increase the matching accuracy between two fingerprint templates, and implementing them on hardware with minimum latency. Generally, contactless fingerprint images are challenging to deal with due to problems such as perspective distortion and deformation, as discussed in [5]. Lighting conditions affect the quality of the image and the amount of fingerprint information captured by the image sensor. To circumvent these issues, we consider both the global features (orientation map, core and delta point locations; Fig. 2(a) and 2(b)) and the local features (minutiae information; Fig. 2(c)) to extract the maximum information from a fingerprint image captured by the image sensor. Thus, we propose a parallel approach (Fig. 1) that uses deep learning to handle global features [12] and a state-of-the-art minutiae matching approach to handle local features. Next, we provide a detailed description of each of the modules involved in the proposed CFRS.

Figure 2: Global and local features extracted from a fingerprint image: (a) orientation map of a fingerprint; (b) core and delta point locations; (c) types of minutiae of a fingerprint

II-A Minutiae-based approach

II-A1 Ridge-valley map extraction

For the proposed CFRS, the image enhancement step plays a vital role, since the information captured by the image sensor is low compared to images from contact-based sensors. First, the ridge-valley map must be extracted from the given fingerprint image. We employ adaptive mean thresholding (AMT) on the gray-scale image to separate the foreground pattern of interest from the background based on the difference in pixel intensities of each area [13]. AMT works on the principle that small image regions have roughly consistent illumination and are hence more suitable for thresholding than the whole image under global thresholding. It can handle varying lighting conditions in the fingerprint image, e.g., those arising from a strong glow, shadows, or gradients. Fig. 3 summarizes the result of the thresholding technique: the raw image (Fig. 3(a)) is converted to a gray-scale image (Fig. 3(b)), and then the ridge-valley map is extracted using AMT (Fig. 3(d)). The AMT technique clearly works better than global thresholding (Fig. 3(c)).
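The following is a minimal sketch of this ridge-valley map extraction using OpenCV's adaptive mean thresholding; the neighbourhood size and offset constant are illustrative assumptions rather than the tuned values used in our system.

```python
# Minimal sketch of ridge-valley map extraction via adaptive mean
# thresholding in OpenCV; blockSize and C are illustrative values.
import cv2

def ridge_valley_map(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Each pixel is thresholded against the mean of its local
    # neighbourhood, which tolerates uneven illumination.
    return cv2.adaptiveThreshold(gray, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY,
                                 blockSize=11,  # odd neighbourhood size
                                 C=2)           # offset from local mean
```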

II-A2 Minutiae matching

As presented in Fig. 2(c), local features include ridge terminations and ridge bifurcations, collectively called minutiae. For minutiae extraction, we use the standard NBIS minutiae detector MINDTCT [14]. It automatically detects minutiae and records their information in the form of triplets $[x, y, \theta]$, where $(x, y)$ is the position of the minutia point and $\theta$ represents its orientation (Fig. 4). The minutiae information obtained by MINDTCT is then used to perform matching. Specifically, the NBIS fingerprint matching algorithm BOZORTH3 is incorporated for minutiae matching [14]. It is a minutiae-based fingerprint matching algorithm that calculates a similarity score $S_{m}$ from the matched minutiae.
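As a rough illustration, the two NBIS tools can be driven from Python as below; this is a sketch that assumes the NBIS binaries are installed on the PATH and that the images are in a format MINDTCT accepts, with placeholder file names.

```python
# Hedged sketch of the MINDTCT/BOZORTH3 pipeline; binary availability,
# input format, and file names are assumptions.
import subprocess

def minutiae_similarity(probe_img, ref_img):
    # MINDTCT writes several output files per image, including a .xyt
    # file holding one [x, y, theta] triplet per detected minutia.
    subprocess.run(["mindtct", probe_img, "probe"], check=True)
    subprocess.run(["mindtct", ref_img, "ref"], check=True)
    # BOZORTH3 compares two .xyt files and prints a similarity score.
    result = subprocess.run(["bozorth3", "probe.xyt", "ref.xyt"],
                            capture_output=True, text=True, check=True)
    return int(result.stdout.strip())  # the minutiae score S_m
```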

Figure 3: Results from different thresholding techniques: (a) raw image; (b) gray-scale image; (c) global threshold; (d) adaptive mean threshold
Figure 4: Minutia information

II-B Deep learning-based approach

Recently, CNNs have proved highly effective for various computer vision tasks, especially image classification [15, 16, 17, 18]. In the proposed approach, the raw fingerprint image captured by the image sensor is fed directly into a customized siamese CNN. The main motivation for using a CNN is that a fingerprint image contains many global features that a CNN can readily capture. One factor that requires care is the size of the image: the image size should not shrink as the network gets deeper, because a fingerprint image has regular patterns that would be degraded if the image size shrinks. Therefore, appropriate padding is applied so that the image size remains intact after each convolutional layer. During the construction of the siamese architecture, the convolutional and dense layers were chosen such that a minimum number of parameters is used; this reduces the latency of the system when deployed on the hardware. The number of convolutional layers was limited to three, having 4, 8, and 8 filters respectively, each followed by a batch normalization layer. After the convolutional layers, average pooling is employed so that the fewest parameters are required in the subsequent dense layers. Table I presents the architecture of the siamese CNN used in the proposed CFRS. To match a given pair of fingerprint templates and generate a matching score, a siamese network is employed [19]. It consists of two identical CNNs with shared weights. During training, the network minimizes the distance between the embeddings of two similar templates and maximizes the distance between the embeddings of two dissimilar templates using the distance-aware contrastive loss function, defined as [20]:

$L=(1-Y)\frac{1}{2}(D_{w})^{2}+(Y)\frac{1}{2}\left(\max(0,m-D_{w})\right)^{2}$   (1)

where

$Y=\begin{cases}0,&\text{same class}\\1,&\text{different class}\end{cases}$   (2)

$D_{w}$: Euclidean distance between the output embeddings (of size $16\times 1$) of the two siamese branches
$m$: margin value (dissimilar pairs whose distance exceeds the margin do not contribute to the loss).
A schematic diagram of the employed siamese network is presented in Fig. 5. The $16\times 1$ embedding vectors from the siamese network are used to compute the Euclidean distance between any two templates. This Euclidean distance acts as a dissimilarity score, since the CNN is trained to maximize this distance between two dissimilar templates and minimize it between two similar ones. Finally, the similarity score $S_{d}$ is calculated by taking the inverse of the dissimilarity score.
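For concreteness, below is a minimal sketch of this contrastive loss. It is written in PyTorch purely for illustration, as the paper does not prescribe a framework, and the margin value is an assumed default.

```python
# Illustrative PyTorch sketch of Eq. (1); framework and margin value
# are assumptions, not the authors' exact implementation.
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, y, margin=1.0):
    # emb1, emb2: batches of 16-dimensional embeddings from the two
    # weight-sharing CNN branches; y = 0 for a genuine (same-finger)
    # pair and 1 for an impostor pair, as in Eq. (2).
    d_w = F.pairwise_distance(emb1, emb2)  # Euclidean distance D_w
    loss = (1 - y) * 0.5 * d_w.pow(2) \
         + y * 0.5 * torch.clamp(margin - d_w, min=0).pow(2)
    return loss.mean()
```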

TABLE I: CNN architecture
Layer name | Input size | Filter size | No. of filters | Padding | Stride | Output size
Conv1 | $310\times 240\times 3$ | $3\times 3$ | 4 | 1 | 1 | $310\times 240\times 4$
BatchNorm | $310\times 240\times 4$ | - | - | - | - | $310\times 240\times 4$
Conv2 | $310\times 240\times 4$ | $3\times 3$ | 8 | 1 | 1 | $310\times 240\times 8$
BatchNorm | $310\times 240\times 8$ | - | - | - | - | $310\times 240\times 8$
Conv3 | $310\times 240\times 8$ | $3\times 3$ | 8 | 1 | 1 | $310\times 240\times 8$
BatchNorm | $310\times 240\times 8$ | - | - | - | - | $310\times 240\times 8$
AveragePool | $310\times 240\times 8$ | $2\times 2$ | - | - | 2 | $155\times 120\times 8$
Flatten | $155\times 120\times 8$ | - | - | - | - | $148800\times 1$
Dense1 | $148800\times 1$ | - | - | - | - | $256\times 1$
Dense2 | $256\times 1$ | - | - | - | - | $128\times 1$
Dense3 | $128\times 1$ | - | - | - | - | $16\times 1$
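A sketch of one branch of the siamese network, reconstructed from Table I, is given below. The framework (PyTorch) and the use of ReLU after the dense layers are assumptions on our part; ReLU after each convolutional layer follows Section III-C.

```python
# Reconstruction of the branch CNN in Table I (a sketch; the framework
# and activation placement in the dense layers are assumptions).
import torch.nn as nn

class EmbeddingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Padding 1 with 3x3 kernels keeps the 310x240 size intact.
            nn.Conv2d(3, 4, kernel_size=3, padding=1), nn.BatchNorm2d(4), nn.ReLU(),
            nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),  # 310x240x8 -> 155x120x8
        )
        self.embed = nn.Sequential(
            nn.Flatten(),                # 155 * 120 * 8 = 148800
            nn.Linear(148800, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 16),          # final 16x1 embedding
        )

    def forward(self, x):  # x: (N, 3, 310, 240)
        return self.embed(self.features(x))
```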
Figure 5: Architecture of the siamese network

II-C Score fusion

As presented before, both the minutiae matching module and the deep learning-based module generate their respective scores, represented as $S_{m}$ and $S_{d}$. Before fusion, both scores are normalized to the range $(0,1)$ using min-max normalization [21]. Finally, a weighted sum is computed as:

$S_{f}=w_{d}S_{d}+w_{m}S_{m}$   (3)

where $w_{d}$ and $w_{m}$ represent the weights associated with the scores obtained from the deep learning and minutiae matching-based approaches, respectively. These weights have been empirically set to 0.4 and 0.6.
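As a small illustration, the normalization and fusion steps amount to the following sketch; the score ranges used for min-max normalization are assumed to be estimated from training data.

```python
# Sketch of min-max normalization [21] followed by the weighted sum of
# Eq. (3); s_min/s_max are assumed to come from the observed score
# range of each module on training data.
def min_max(score, s_min, s_max):
    return (score - s_min) / (s_max - s_min)  # maps score into (0, 1)

def fuse(s_d, s_m, w_d=0.4, w_m=0.6):
    # s_d, s_m: normalized deep-learning and minutiae scores.
    return w_d * s_d + w_m * s_m
```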

II-D Hardware implementation

II-D1 Electrical components and image capturing environment

For our CFRS, we needed a sensor that works in bright as well as dim light. Hence, we used the Raspberry Pi NoIR Camera V2 [22], which has a Sony IMX219 8-megapixel sensor and omits the infrared filter (NoIR = No InfraRed). The NVIDIA Jetson Nano Developer Kit [23] is used as the computing system; it is a compact yet powerful embedded computer that requires at most 5 W of DC power. To capture quality finger images, we designed a finger image capturing environment (Fig. 6(a)) out of cardboard. As presented in Fig. 6(a), an LED bulb illuminates the capturing environment. The camera sensor is fixed at the top of the enclosure to capture the image of the finger placed beneath it, and a small square opening at the front allows the finger to be inserted. Our system is designed to capture only the fingerprint on the distal phalanx, to which we apply AMT in real time to minimize latency (Fig. 6(b)).

Figure 6: Pink distortion and its solution: (a) image capturing environment; (b) real-time AMT

II-D2 System implementation methodology

The CFRS operates in verification mode. In this mode, the system has two phases, namely enrollment and verification. In the enrollment phase, the system captures three photos of the user’s finger with different placements and orientations. The average of the three $16\times 1$ embeddings corresponding to the three finger photos from the deep learning-based approach, together with the minutiae data of one of the finger photos, is stored as the template in the local database (refer to Fig. 7(a)), and a unique id is assigned to the user. In the verification phase, the system first captures a finger photo of the user and asks for the unique id provided during enrollment. From the captured finger photo, the system then generates the $16\times 1$ embedding and the minutiae data, and computes a similarity score (as discussed in Section II-C) with the template of the claimed id stored in the database. If the similarity score is above a threshold, the system displays a message on the connected monitor indicating that a correct match has been established (refer to Fig. 7(b)).
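The logic of the two phases can be summarized by the following sketch; `get_embedding`, `get_minutiae`, `minutiae_score`, and the decision threshold are hypothetical placeholders for the modules described earlier, not the authors' exact API.

```python
# Hedged sketch of the enrollment/verification flow; helper callables
# and the threshold value are illustrative placeholders.
import numpy as np

database = {}  # unique id -> stored template

def enroll(user_id, photos, get_embedding, get_minutiae):
    # Average the three 16x1 embeddings into a single reference
    # template; keep the minutiae data of one photo for BOZORTH3.
    embeddings = [get_embedding(p) for p in photos]
    database[user_id] = {"embedding": np.mean(embeddings, axis=0),
                         "minutiae": get_minutiae(photos[0])}

def verify(user_id, photo, get_embedding, minutiae_score,
           w_d=0.4, w_m=0.6, threshold=0.5):
    ref = database[user_id]
    # Similarity is the inverse of the embedding distance; the score
    # normalization of Section II-C is omitted here for brevity.
    s_d = 1.0 / np.linalg.norm(get_embedding(photo) - ref["embedding"])
    s_m = minutiae_score(photo, ref["minutiae"])
    return w_d * s_d + w_m * s_m >= threshold
```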

Figure 7: Enrollment and verification: (a) implemented enrollment process; (b) implemented verification process

II-D3 Hardware connections

A detailed schematic of all the components, along with their connections, is presented in Fig. 8. As can be observed, the image sensor is connected to the computing system’s camera connector (J13). A monitor is connected via an HDMI cable, along with a keyboard and mouse via USB. The image sensor is attached to the top of the image capturing environment, which provides the flexibility to place the finger facing up inside it. All the components are powered separately.

Figure 8: A schematic diagram representing the connection of various components of the CFRS

III Experimental Results and Discussion

III-A Database preparation

As presented in Fig. 6(a), we have developed a unique contactless image capturing environment. This capturing environment is used to collect an in-house contactless fingerprint dataset referred to as IITI-CFD. We have captured a total of 1640 finger images from 205 fingers, each contributing eight impressions. A sample finger image from IITI-CFD is presented in Fig. 3(a). A detailed description of IITI-CFD is presented in Table II. For training the CNN-based approach, IITI-CFD is divided into train and test sets. Specifically, the training set consists of 840 finger images from 105 fingers. The remaining images from 100 fingers form the test set.

TABLE II: Details of IITI-CFD
Dataset | Image size (pixels) | Fingers | Images per finger | Total images
Training set | $310\times 240$ | 105 | 8 | 840
Test set | $310\times 240$ | 100 | 8 | 800

III-B Performance measures

For evaluating the performance of the proposed system, we employ the following performance measures: equal-error-rate (EER), FMR100, and FMR1000 [24]. Let the stored template and the verification feature set be represented as $T$ and $I$. To evaluate the accuracy of a biometric verification system, the genuine score distribution (obtained by comparing $T$ and $I$ from the same finger) and the impostor score distribution (obtained by comparing $T$ and $I$ from different fingers) are computed. Based on a threshold $th$, there are two verification error rates, namely, the false match rate (FMR) and the false non-match rate (FNMR). FMR is defined as the percentage of impostor pairs whose comparison score is greater than $th$, and FNMR is the percentage of genuine pairs whose comparison score is lower than $th$. EER denotes the error rate at the threshold $th$ for which FMR and FNMR are identical [2]; the lower the EER, the higher the accuracy of the biometric system. Generally, the performance of the system is reported at all operating points ($th$) by plotting the receiver operating characteristic (ROC) curve or the detection error trade-off (DET) curve. The ROC curve visualizes the trade-off between FMR and 1-FNMR, whereas the DET curve plots the trade-off between FMR and FNMR. Specifically, the DET curve is utilized to calculate the FMR100 and FMR1000 of the proposed system.
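For reference, the EER can be computed from arrays of genuine and impostor similarity scores as in the sketch below; this is a generic illustration, not the exact evaluation code used in our experiments.

```python
# Generic sketch: sweep the decision threshold over all observed
# scores and return the point where FMR and FNMR (nearly) coincide.
import numpy as np

def compute_eer(genuine, impostor):
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, eer = float("inf"), 1.0
    for th in np.sort(np.concatenate([genuine, impostor])):
        fmr = np.mean(impostor >= th)   # impostor pairs accepted
        fnmr = np.mean(genuine < th)    # genuine pairs rejected
        if abs(fmr - fnmr) < best_gap:
            best_gap, eer = abs(fmr - fnmr), (fmr + fnmr) / 2
    return eer
```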

III-C Experimental results

The siamese network has been trained using the finger images of the training set. Specifically, the model has been trained for 70 epochs using the adaptive moment estimation (ADAM) optimizer [25] to minimize the loss function. We employ the ReLU activation function [26] and batch normalization [27] after each convolutional layer in the proposed model. The model has been trained on the Google Colaboratory (Colab) platform [28], which provides a Tesla K80 GPU with 2496 CUDA cores, a single-core hyper-threaded Xeon processor running at 2.3 GHz, and 12 GB of GDDR5 VRAM.

To calculate the performance metrics, genuine scores are obtained by comparing different impressions of the same finger, and impostor scores are obtained by comparing images of different fingers. The feature embeddings of all the test images are obtained from the output of the trained siamese network, and the Euclidean distances of the genuine and impostor pairs are computed and stored; these are then used to find the EER and plot the ROC curve via FNMR and FMR over a varying decision threshold. For the test set of 800 images (with 8 images per finger), a total of 2800 genuine pairs and 14850 impostor pairs are obtained, and the Euclidean distances of their output embeddings are calculated. The genuine pairs comprise all possible pairs of different impressions of the same finger, repeated for all fingers in the test set. The impostor pairs are generated by taking a single impression of every finger and forming all possible pairs; this process is repeated for another two sets of impressions of all the fingers in the test set. The same set of images is also used as the test set for NBIS-based minutiae matching.
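The quoted pair counts follow directly from this protocol, as the short check below illustrates.

```python
# Check of the pair counts: 100 test fingers with 8 impressions each.
from math import comb

genuine_pairs = 100 * comb(8, 2)   # all impression pairs per finger
impostor_pairs = 3 * comb(100, 2)  # one impression per finger, 3 sets
print(genuine_pairs, impostor_pairs)  # -> 2800 14850
```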

The ROC curves of the proposed approach are presented in Fig. 9. As can be observed, score-level fusion of the deep learning-based and minutiae-based approaches provides superior results to either individual approach.

Figure 9: ROC curves for comparison between the deep-learning, minutiae-matching, and score-fusion techniques
Figure 10: FMR and FNMR plots: (a) deep learning based; (b) minutiae matching based; (c) score fusion based

The FNMR and FMR are plotted against the threshold values that determine the system’s verdict. The FNMR and FMR curves intersect at a point, which in turn gives the EER of the system. The EERs for the individual branches, i.e., the deep learning approach, the minutiae-based approach, and the score fusion approach, are calculated from the FMR vs. FNMR plots (refer to Fig. 10) and are summarized in Table III.

Figure 11: DET curve

TABLE III: EER comparison
Approach | EER
Deep Learning | 11.39%
Minutiae Matching | 4.09%
Score Fusion | 2.19%

Further, the DET curve, which depicts the error-rate trade-off at all possible operating points, has been used to obtain FMR100 and FMR1000. Fig. 11 presents the DET curves of the three approaches. The FMR100 and FMR1000 values obtained from the DET curves are summarized in Table IV.

TABLE IV: FMR100 and FMR1000
Approach | FMR100 | FMR1000
Deep Learning | 0.573 | 0.872
Minutiae Matching | 0.081 | 0.150
Score Fusion | 0.037 | 0.123

As can be observed from Tables III and IV, the EER, FMR100, and FMR1000 calculated on the test set for all three approaches indicate that fusing the deep learning score with that of the state-of-the-art NBIS software gives the best results compared with either method taken individually. The fused score is then compared with a threshold, which gives the final verdict of the developed biometric system.

IV Conclusion

In this work, we have identified the main issues with contact-based biometric systems and explored the scope of contactless biometric systems. Beyond standard image processing and state-of-the-art feature extraction algorithms, deep learning models can be used to further improve matching accuracy. With the advancement in sensing technology and computation power, the contactless domain has an enormous market scope. Our results show that contactless biometric systems can achieve accuracy comparable to that claimed by commercial systems.

As part of future work, we would like to implement our developed model on a microcontroller and GPU for faster computation and embed it on a printed circuit board (PCB) along with the image sensor and image capturing environment. In this way, our developed model would have the potential to operate as a standalone embedded device.

References

  • [1] A. A. Moenssens, Fingerprint Techniques, 1st ed.   Chilton Book Company, Philadelphia, 1971.
  • [2] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of fingerprint recognition, 2nd ed.   Springer Science & Business Media, 2009.
  • [3] A. Jain, L. Hong, and R. Bolle, “On-line fingerprint verification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 4, pp. 302–314, Apr. 1997.
  • [4] V. Piuri and F. Scotti, “Fingerprint biometrics via low-cost sensors and webcams,” in 2008 IEEE Second International Conference on Biometrics: Theory, Applications and Systems, Sep. 2008, pp. 1–6.
  • [5] R. D. Labati, A. Genovese, V. Piuri, and F. Scotti, “Contactless fingerprint recognition: A neural approach for perspective and rotation effects reduction,” in 2013 IEEE Symposium on Computational Intelligence in Biometrics and Identity Management (CIBIM), April 2013, pp. 22–30.
  • [6] B. Y. Hiew, A. B. J. Teoh, and D. C. L. Ngo, “Automatic digital camera based fingerprint image preprocessing,” in International Conference on Computer Graphics, Imaging and Visualisation (CGIV’06), July 2006, pp. 182–189.
  • [7] B. Y. Hiew, A. B. J. Teoh, and Y. H. Pang, “Digital camera based fingerprint recognition,” in 2007 IEEE International Conference on Telecommunications and Malaysia International Conference on Communications, May 2007, pp. 676–681.
  • [8] B. Hiew, A. B. Teoh, and D. C. Ngo, “Preprocessing of fingerprint images captured with a digital camera,” in 2006 9th International Conference on Control, Automation, Robotics and Vision, Dec 2006, pp. 1–6.
  • [9] C. Lin and A. Kumar, “A cnn-based framework for comparison of contactless to contact-based fingerprints,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 3, pp. 662–676, March 2019.
  • [10] G. K. O. Michael, T. Connie, and A. B. J. Teoh, “A contactless biometric system using multiple hand features,” Journal of Visual Communication and Image Representation, vol. 23, no. 7, pp. 1068–1084, Oct. 2012.
  • [11] A. Kumar, “Toward pose invariant and completely contactless finger knuckle recognition,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 3, pp. 201–209, July 2019.
  • [12] F. Zhang, S. Xin, and J. Feng, “Combining global and minutia deep features for partial high-resolution fingerprint matching,” Pattern Recognition Letters, vol. 119, pp. 139–147, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167865517303227
  • [13] E. Davies, Computer and Machine Vision: Theory, Algorithms and Practicalities, 4th ed.   Academic Press, 2012.
  • [14] C. I. Watson, M. D. Garris, E. Tabassi, C. L. Wilson, R. M. Mccabe, S. Janet, and K. Ko, “User’s guide to NIST biometric image software (NBIS),” 2007.
  • [15] T. Guo, J. Dong, H. Li, and Y. Gao, “Simple convolutional neural network on image classification,” in 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA), March 2017, pp. 721–724.
  • [16] N. Jmour, S. Zayen, and A. Abdelkrim, “Convolutional neural networks for image classification,” in 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET), 2018, pp. 397–402.
  • [17] N. Sharma, V. Jain, and A. Mishra, “An analysis of convolutional neural networks for image classification,” Procedia Computer Science, vol. 132, pp. 377–384, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1877050918309335
  • [18] F. Sultana, A. Sufian, and P. Dutta, “Advancements in image classification using convolutional neural network,” in 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), 2018, pp. 122–129.
  • [19] I. Melekhov, J. Kannala, and E. Rahtu, “Siamese network features for image matching,” in 2016 23rd International Conference on Pattern Recognition (ICPR), Dec 2016, pp. 378–383.
  • [20] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2, ser. CVPR ’06.   USA: IEEE Computer Society, 2006, p. 1735–1742. [Online]. Available: https://doi.org/10.1109/CVPR.2006.100
  • [21] A. Jain, K. Nandakumar, and A. Ross, “Score normalization in multimodal biometric systems,” Pattern Recogn., vol. 38, no. 12, p. 2270–2285, Dec. 2005. [Online]. Available: https://doi.org/10.1016/j.patcog.2005.01.012
  • [22] “Raspberry Pi NoIR V2 Cam,” https://www.raspberrypi.org/products/pi-noir-camera-v2, [Online; accessed 23-Jan-2020].
  • [23] “Nvidia Jetson Nano Development Kit,” https://developer.nvidia.com/embedded/jetson-nano-developer-kit, [Online; accessed 23-Jan-2020].
  • [24] R. Cappelli, M. Ferrara, D. Maltoni, and F. Turroni, “Fingerprint verification competition at ijcb2011,” in 2011 International Joint Conference on Biometrics (IJCB), 2011, pp. 1–6.
  • [25] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [26] A. F. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv preprint arXiv:1803.08375, 2018.
  • [27] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” CoRR, vol. abs/1502.03167, 2015. [Online]. Available: http://arxiv.org/abs/1502.03167
  • [28] T. Carneiro, R. V. Medeiros Da NóBrega, T. Nepomuceno, G. Bian, V. H. C. De Albuquerque, and P. P. R. Filho, “Performance analysis of google colaboratory as a tool for accelerating deep learning applications,” IEEE Access, vol. 6, pp. 61 677–61 685, 2018.