Hyperspectral Image Dataset for Individual Penguin Identification
Abstract
Remote individual animal identification is important for food safety, sport, and animal conservation. Most existing studies on remote individual animal identification have focused on RGB images. In this paper, we tackle individual penguin identification using hyperspectral (HS) images. To the best of our knowledge, this is the first work to analyze spectral differences between penguin individuals using an HS camera. We have constructed a novel penguin HS image dataset containing 990 hyperspectral images of penguins. We experimentally demonstrate that the spectral information of single HS image pixels can be used for individual penguin identification, showing the effectiveness of HS images for this task. The dataset and source code are available here: https://033labcodes.github.io/igrass24_penguin/

Index Terms— individual identification, hyperspectral image, pixel-wise classification, African penguin
1 Introduction
Remote individual animal identification is an important task that allows researchers to understand animal behavior and to study population parameters such as population size and movement patterns. There are invasive and non-invasive methods for individual animal identification. Invasive methods require capturing an animal and attaching a physical tag, which is costly and likely to cause stress. Non-invasive methods use biometric traits, such as DNA collected from hair or feces, or visual assessment based on images captured in the animals' habitat. DNA collection and analysis are costly and sometimes infeasible because acquiring samples may require entering dangerous areas. Therefore, visual assessment and image-based animal biometrics are in high demand.
In this paper, we focus on individual penguin identification based on hyperspectral (HS) images. Existing image-based animal identification methods [1] [2] [3] rely on the spatial information of specific body parts, such as the belly pattern of a penguin or the face of a panda. These methods become infeasible when the target occupies only a small number of pixels, which is common in remote sensing. We therefore use the HS data of a single pixel. We assume an application with HS images as shown in Fig. 1: first, a single pixel on the target penguin is selected from the given HS image; then, a machine learning model classifies the penguin individual.
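The single-pixel input described above can be sketched as follows; the cube dimensions here are placeholders, not the actual camera specifications from the paper:

```python
import numpy as np

# Toy HS cube: height x width x bands (sizes are illustrative only).
cube = np.random.rand(64, 64, 151).astype(np.float32)

def pixel_spectrum(cube: np.ndarray, row: int, col: int) -> np.ndarray:
    """Return the spectral vector of one pixel: one value per band."""
    return cube[row, col, :]

# The spectrum of one pixel on the target penguin is the model input.
spec = pixel_spectrum(cube, 10, 20)
```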
To the best of our knowledge, this is the first work to analyze spectral differences between penguin individuals using an HS camera. An existing study identified deep-sea megafauna using HS images [4]; however, that work focused on the seafloor and did not capture individuals. HS data has also been used in a limited number of studies to differentiate amphibian and reptile species [5] [6] [7] and insect species [8], but these studies only analyzed differences between species, not individuals. Kolmann et al., distinguishing serrasalmid species from HS data, also documented interspecific variation in pacus that corresponds to cryptic lineages [9].
In this paper, we tackle individual penguin identification using the HS data of a single pixel. For this task, we constructed a novel dataset comprising HS images annotated with individual penguin IDs. Machine learning is used for identification to evaluate the effectiveness of HS data. In the experiment, as shown in Fig. 1, pixels of the penguins are selected from the HS images, and the selected spectral data are classified into individuals by a machine learning model.
Our contributions are twofold. First, we examined individual penguin identification using HS data and demonstrated its effectiveness. Second, we constructed a novel dataset for individual penguin identification composed of HS images (our dataset is available here: https://huggingface.co/datasets/dekkaiinu/hyper_penguin).
2 Penguin HS image dataset



We collected HS images of African penguins at Ueno Zoological Gardens [10]. We used an HS camera that captures a wavelength range of [nm] with bands and a spectral resolution of [nm] [11]. The image size is pixels. The distance between the HS camera and the penguins was approximately 3 to 6 meters, with the camera positioned at a height of 1.2 meters, as illustrated in Fig. 2. We collected the HS penguin images outdoors, as shown in Fig. 3. The camera angle was adjusted for each image so that the target penguins were captured in the frame, and each image depicts a group of 1 to 6 penguins in a single frame. Figure 4 shows an example image, where an RGB image was converted from an HS image.
This dataset is annotated in two ways. The first annotation provides pixel-level individual IDs of penguins for identifying individuals from the spectral data of all HS images. The second consists of bounding box annotations labeled with individual penguin IDs, which are used for detecting penguins within the images. These two types of annotation allow the dataset to support a variety of tasks.
3 Penguin Identification using HS Images
We built a learning-based network model for pixel-wise individual penguin identification from the HS data of a single pixel. The network is a simple -layer multi-layer perceptron (MLP) with batch normalization, rectified linear units (ReLU), dropout, and softmax. For denoising, we applied a spatial box filter as preprocessing. We then feed the complete HS data of the single pixel directly into the network, although dimensionality reduction by principal component analysis (PCA) is recommended in some remote sensing research on HS data [12, 13].
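A minimal PyTorch sketch of such an MLP is shown below. The depth, hidden width, dropout rate, and band count are illustrative assumptions, since the paper's exact values are not given in the text; softmax is applied outside the network, as is conventional when training with cross-entropy loss:

```python
import torch
import torch.nn as nn

NUM_BANDS = 151   # placeholder band count (assumption, not the camera spec)
NUM_IDS = 10      # ten penguin individuals, as in the experiment

class PixelMLP(nn.Module):
    """Illustrative 3-layer MLP with batch norm, ReLU, and dropout."""
    def __init__(self, in_dim=NUM_BANDS, hidden=256, n_classes=NUM_IDS, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),  # logits; softmax applied at inference
        )

    def forward(self, x):
        return self.net(x)

model = PixelMLP().eval()
x = torch.randn(4, NUM_BANDS)            # mini-batch of single-pixel spectra
probs = torch.softmax(model(x), dim=1)   # per-individual probabilities
```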

4 Experiments
We conducted pixel-wise individual penguin identification based on the HS data. We compare three types of input data: RGB data, HS data compressed by PCA, and the proposed complete HS data. The RGB data was synthesized from the HS data assuming the standard RGB color space. For the compressed HS data, we applied PCA with the number of components set to five, following existing papers [12, 13].
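The PCA baseline can be sketched with scikit-learn; the spectra below are random stand-ins for real pixel data, and the band count is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy spectra standing in for single-pixel HS data (band count is a placeholder).
rng = np.random.default_rng(0)
spectra = rng.random((200, 151))

# Compress each pixel's spectrum to 5 principal components, the component
# count used for the compressed-HS baseline in the text.
pca = PCA(n_components=5).fit(spectra)
compressed = pca.transform(spectra)
```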
4.1 Experimental Setup
1) Dataset: We selected ten penguins from our penguin dataset to simplify the problem. The annotated HS data were then split into training, validation, and test sets. For a fair assessment, we selected pixels from different HS images for each set. For training we used data points, selected from each penguin ID; data points were used for validation and for testing, respectively, where each penguin ID has data points.
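A sketch of this image-disjoint split is given below: pixels are partitioned by their source HS image so that no image contributes to more than one set. The image and pixel counts are placeholders, not the paper's actual (omitted) numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 hypothetical HS images, partitioned into train/val/test image groups.
image_ids = rng.permutation(20)
train_imgs, val_imgs, test_imgs = image_ids[:12], image_ids[12:16], image_ids[16:]

# Each annotated pixel records its source image; masks select pixels per split.
pixel_img = rng.integers(0, 20, size=1000)
train_mask = np.isin(pixel_img, train_imgs)
val_mask = np.isin(pixel_img, val_imgs)
test_mask = np.isin(pixel_img, test_imgs)
```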
2) Evaluation Metrics: We quantitatively evaluated the identification performance for each input type using Overall Accuracy (OA).
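Overall Accuracy is simply the fraction of test pixels assigned the correct individual ID, as in this small sketch:

```python
import numpy as np

def overall_accuracy(y_true, y_pred) -> float:
    """Overall Accuracy (OA): fraction of pixels with the correct ID."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

print(overall_accuracy([1, 2, 3, 3], [1, 2, 3, 1]))  # 0.75
```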
3) Implementation Details: The experiments were implemented on the PyTorch platform using a workstation with an i9-X CPU, -GB RAM, and an NVIDIA GeForce RTX -GB GPU. We set the number of training epochs to . We adopt the Adam optimizer with a minibatch size of . The learning rate is initialized to and decayed by a multiplicative factor after each one-tenth of the total epochs.
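The optimizer and learning-rate schedule can be sketched as follows; the epoch count, initial rate, and decay factor are illustrative placeholders, since the paper's exact values are omitted in the text:

```python
import torch

# Placeholder hyperparameters (assumptions, not the paper's actual values).
EPOCHS, LR, DECAY = 100, 1e-3, 0.5

params = [torch.nn.Parameter(torch.zeros(3))]
optimizer = torch.optim.Adam(params, lr=LR)
# Decay the learning rate by a fixed factor after each one-tenth of the epochs.
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=EPOCHS // 10, gamma=DECAY)

for epoch in range(EPOCHS):
    # ...training step would go here...
    optimizer.step()
    scheduler.step()
```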
4.2 Experimental Results
Table 1: Identification accuracy (%) for each data type and penguin ID.

| data type | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Ave. (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RGB | 22.5 | 8.5 | 49.7 | 32.1 | 44.74 | 16.1 | 28.8 | 24.4 | 13.7 | 31.0 | 27.16 |
| PCA | 36.7 | 30.6 | 67.6 | 48.9 | 64.9 | 43.6 | 56.0 | 46.4 | 60.4 | 55.2 | 51.03 |
| HS (Pro.) | 70.8 | 68.7 | 83.7 | 73.3 | 94.6 | 82.3 | 86.3 | 79.3 | 93.4 | 88.2 | 82.06 |
Table 1 summarizes the identification accuracy for each data type. From these results, we can see that the proposed full HS analysis achieves 82.06% average accuracy, while the RGB data and the PCA-compressed HS data reach only 27.16% and 51.03%, respectively. The average accuracy of the proposed method is not perfect, but we believe it can assist humans in individual penguin identification tasks. Further improvement of the accuracy is part of our future work.
We also visualize the pixel-wise identification results. Figure 5 shows the identification results for penguin ID 05, where color represents the inferred penguin ID. Many pixels are correctly identified, shown in magenta (ID 05). Furthermore, because identification is performed pixel-wise, individuals can be identified even when only part of the target penguin is visible, for example when other penguins appear in the foreground. The images in Fig. 6 are band images of the HS image corresponding to the upper-right image in Fig. 5. The band images differ by wavelength, and the HS images contain richer information than RGB images, which are composed of only three band images.

5 Discussion



To discuss our experimental results, we visualize the differences in spectra among penguin individuals. Figure 7 shows the average spectra of penguins captured in HS images taken over specified one-hour intervals, with each spectrum corresponding to a different individual. Figure 7(a) covers images from am on June rd for one hour, Figure 7(b) from pm on the same day, and Figure 7(c) from pm on June th for one hour. The plots depict the average values of the HS data obtained from the white parts of each penguin's body, representing that individual's spectrum.
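Computing an individual's average spectrum from the pixel-level annotations can be sketched as below; the cube size, band count, and ID values are illustrative assumptions:

```python
import numpy as np

# Toy HS cube and pixel-level ID mask (shapes and IDs are placeholders).
rng = np.random.default_rng(0)
cube = rng.random((32, 32, 151))          # height x width x bands
ids = rng.integers(0, 3, size=(32, 32))   # 0 = background, 1..2 = penguin IDs

def average_spectrum(cube: np.ndarray, ids: np.ndarray, penguin_id: int) -> np.ndarray:
    """Mean spectrum over all pixels annotated with the given individual ID."""
    mask = ids == penguin_id
    return cube[mask].mean(axis=0)

mean_spec = average_spectrum(cube, ids, 1)  # one averaged value per band
```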
To minimize the influence of sunlight conditions, we divided the HS data into hourly intervals when visualizing them in Figure 7. We captured our dataset over one day to obtain data under varied conditions, since HS data is highly sensitive to sunlight. The strongest spectral intensity, evident in Figure 7, corresponds to the time of day when sunlight is most intense.
The spectral shape of each individual differs in all intervals in Figure 7, which explains the high average accuracy shown in Table 1. Furthermore, the distinctive spectrum of one penguin ID in Figure 7 seems to contribute to its classification accuracy of 94.6%, the highest in Table 1. Thus, the spectral plots demonstrate the effectiveness of HS data for individual identification.
6 Conclusion
This paper has presented individual penguin identification based on the HS data of a single pixel. For that purpose, we first built a penguin HS image dataset with annotations. We then experimentally demonstrated that individual penguins can be identified from single-pixel HS data using a simple MLP network. In future work, we will propose a novel data augmentation method for penguin identification and will attempt to recognize individual animals other than penguins.
Acknowledgement
We thank and honor the Ueno Zoological Gardens [10] for allowing us to capture HS images of African penguins for the future of image processing research.
References
- [1] RB Sherley, T Burghardt, PJ Barham, IC Cuthill, and NW Campbell, “Spotting the difference: towards fully-automated population monitoring of african penguins spheniscus demersus,” Endangered Species Research, vol. 11, no. 2, pp. 101 – 111, 2010.
- [2] Wojciech Michal Matkowski, Adams Wai Kin Kong, Han Su, Peng Chen, Rong Hou, and Zhihe Zhang, “Giant panda face recognition using small dataset,” in 2019 IEEE International Conference on Image Processing (ICIP). Sept. 2019, IEEE.
- [3] Daniel Schofield, Arsha Nagrani, Andrew Zisserman, Misato Hayashi, Tetsuro Matsuzawa, Dora Biro, and Susana Carvalho, “Chimpanzee face recognition from videos in the wild using deep learning,” Science Advances, vol. 5, no. 9, pp. eaaw0736, 2019.
- [4] Ines Dumke, Autun Purser, Yann Marcon, Stein M Nornes, Geir Johnsen, Martin Ludvigsen, and Fredrik Søreide, “Underwater hyperspectral imaging as an in situ taxonomic tool for deep-sea megafauna,” Scientific reports, vol. 8, no. 1, pp. 12860, 2018.
- [5] C Kenneth Dodd Jr, “Infrared reflectance in chameleons (chamaeleonidae) from kenya,” Biotropica, pp. 161–164, 1981.
- [6] Francisco Pinto, Michael Mielewczik, Frank Liebisch, Achim Walter, Hartmut Greven, and Uwe Rascher, “Non-invasive measurement of frog skin reflectivity in high spatial resolution using a dual hyperspectral approach,” PLoS One, vol. 8, no. 9, pp. e73234, 2013.
- [7] Patricia A Schwalm, Priscilla H Starrett, and Roy W McDiarmid, “Infrared reflectance in leaf-sitting neotropical frogs,” Science, vol. 196, no. 4295, pp. 1225–1226, 1977.
- [8] Michael Mielewczik, Frank Liebisch, Achim Walter, and Hartmut Greven, “Near-infrared (NIR)-reflectance in insects – phenetic studies of 181 species,” Entomologie heute, vol. 24, pp. 183–215, 2012.
- [9] MA Kolmann, M Kalacska, O Lucanus, L Sousa, D Wainwright, JP Arroyo-Mora, and MC Andrade, “Hyperspectral data as a biodiversity screening tool can differentiate among diverse neotropical fishes,” Scientific reports, vol. 11, no. 1, pp. 16157, 2021.
- [10] “Ueno Zoological Gardens,” https://www.tokyo-zoo.net/english/ueno/index.html.
- [11] “Hyperspectral camera NH9, Eba Japan Co., Ltd.,” https://ebajapan.jp/products/hyper-spectral-camera/.
- [12] Bilal Alhayani and Haci Ilhan, “Hyper spectral image classification using dimensionality reduction techniques,” International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, vol. 5, no. 4, pp. 71–74, 2017.
- [13] Md Palash Uddin, Md Al Mamun, Masud Ibn Afjal, and Md Ali Hossain, “Information-theoretic feature selection with segmentation-based folded principal component analysis (pca) for hyperspectral image classification,” International Journal of Remote Sensing, vol. 42, no. 1, pp. 286–321, 2021.