A Benchmark for Temporal Color Constancy
Abstract
Temporal Color Constancy (CC) is a recently proposed approach that challenges conventional single-frame color constancy. The conventional approach uses a single frame, the shot frame, to estimate the scene illumination color. In temporal CC, multiple frames from the view finder sequence are used to estimate the color. However, there are no realistic large-scale temporal color constancy datasets for method evaluation. In this work, a new temporal CC benchmark is introduced. The benchmark comprises (1) 600 real-world sequences recorded with a high-resolution mobile phone camera, (2) a fixed train-test split which ensures consistent evaluation, and (3) a baseline method which achieves high accuracy on the new benchmark and on the dataset used in previous works. Results for more than 20 well-known color constancy methods, including the recent state of the art, are reported in our experiments.
1 Introduction
The human visual system perceives the colors of objects independently of the incident illumination. This ability to perceive colors under varying conditions as if the scene were viewed under a white light is known as color constancy (CC) [1]. To mimic this property, computational color constancy algorithms are used in the Image Signal Processor (ISP) pipelines of digital cameras to provide an estimate of the color of the illumination of the captured scene.
The existing color constancy algorithms can be broadly classified into two categories: 1) static methods and 2) learning-based methods. Gijsenij et al. [2] defined a third class, gamut-based methods, in their survey. Since gamut methods often require training examples to define a target gamut [3], we include them in the learning-based category. Static methods do not rely on training data, but are based on assumed statistical or physical properties of the image formation. For instance, Gray-world [4] relates the averaged pixel values to the global illumination, while Gray Pixel [5] and its extension [6] identify achromatic pixels using properties of the Lambertian and dichromatic reflection models, respectively, to reveal the illumination. Learning-based methods learn to map input image features to the illumination estimate. They can operate in the chroma space (Corrected moments [1] and Convolutional CC [7]) or in the spatial domain with its rich semantic information (FC4 [8]). Static methods are easier to implement on commodity ISP hardware, but recent advances in mobile CPUs and GPUs have made it intriguing to investigate whether the better performing learning-based methods can replace static methods.
The above computational color constancy methods estimate the illumination color from a single frame, referred to as the "shot frame" in our work. However, Qian et al. [9] recently proposed an approach where multiple frames preceding the shot frame are also used in the estimation, an approach that can be termed temporal color constancy or multi-frame color constancy. They proposed a recurrent architecture based on AlexNet semantic features and a recurrent network module for sequential processing. Temporal color constancy is a realistic model of the process in a camera, where focus, gain, exposure time and white balance are constantly adjusted given a video stream that is displayed to the user until the "shoot" button is pressed. The experiments were conducted on the SFU Gray Ball dataset [10], which was captured with a video camera and in which a calibration target is visible in every frame. Qian et al. demonstrated superior accuracy for the temporal multi-frame setting vs. the conventional single-frame setting, but it is unclear to what extent the SFU Gray Ball video clips are related to real use cases of consumer photography. SFU Gray Ball consists of only 15 sequences, the sequences are captured over a long time period and at physically distant locations, and the frame resolution is low. Moreover, the ground truth target visible in every frame can convey unintentional cues to deep net methods even if masked.
Our work makes the following contributions:
• We release a temporal color constancy (TCC) benchmark. The dataset consists of 600 sequences of varying length (from 3 to 17 frames). The dataset covers indoor and outdoor scenes with varying weather and daylight conditions, and is to date the largest realistic temporal dataset.
• We provide a benchmark analysis of over 20 statistical and learning-based single-frame methods and the existing temporal methods, using a fixed train-test setting.
• We propose a strong temporal color constancy baseline, termed TCC-Net, that achieves state-of-the-art results on the new dataset and on the previously used SFU Gray Ball, with fast inference speed and a light memory footprint.
The TCC-benchmark and TCC-Net will be made publicly available as an open-source project to facilitate fair comparison and the development of novel temporal color constancy ideas. We also provide wrapper functionality for experimenting with other datasets such as the NUS dataset [11] and include implementations of recent methods such as FC4 [8] and C4 [14].
2 Related work
Computational color constancy (CC)
refers to algorithms that estimate the illuminant color from an image. Gijsenij et al. [2] provide a comprehensive survey of the contemporary methods and divide them into three categories: i) static, ii) gamut-based and iii) learning-based methods. The static methods do not require training data. Well-known static methods and commonly used baselines are Gray-world [4] and General Gray-world (incl. multiple variants) [12]. More recent static methods are Gray Pixel [5] and the Grayness Index [6]. The static methods are inferior in the single-dataset setting where training and test images are drawn from the same dataset, but outperform learning-based methods in cross-dataset evaluations [6]. In Gijsenij's taxonomy the gamut-based methods operate in the color spaces and thus omit the spatial domain information. A strong baseline is Gamut Mapping [3]. In our work, we assign the gamut-based methods to the learning-based category if they use training data, as [3] does. More recent methods operating in the color spaces are Corrected moments [1], Convolutional CC [7] and its Fast Fourier implementation (FFCC) [13]. The most recent learning-based methods are based on deep architectures that use pre-trained backbone networks to extract rich semantic features: FC4 [8] and C4 [14]. We include the mentioned methods in our experiments since they report state-of-the-art results on various single-frame datasets.
Temporal color constancy
has received less attention than single-frame CC. Attention has been paid to several special cases. For example, Yang et al. [15] extract the illuminant color from two distinct frames of a scene that contains specular surfaces (highlights). Prinet et al. [16] propose a probabilistic and more robust version of the Yang et al. method. Wang et al. [17] compute color constancy for video frames; in their approach existing CC methods can be used and the illuminant is estimated from multiple frames of the same scene, where scene boundaries are detected automatically. Yoo et al. [18] propose a color constancy algorithm for AC-bulb-illuminated (indoor) scenes using a high-speed camera, and Qian et al. [19] for a pair of images with and without flash. However, the seminal work on temporal color constancy is Qian et al. [9], who coined the term and proposed a temporal CC algorithm using semantic AlexNet features and a Long Short-Term Memory (LSTM) recurrent neural network to process sequential input frames. The Qian et al. method and the dataset used in their experiments are included in our experiments.
Public datasets
are available for the evaluation of single-frame color constancy methods, for example, Gehler-Shi Color Checker [20, 21], SFU Gray Ball [10] and NUS [11] (see http://colorconstancy.com for download links of datasets and methods). SFU Gray Ball is collected with a video camera and is therefore suitable for multi-frame color constancy experiments [9]. However, SFU Gray Ball has very low resolution, contains only 15 sequences, and its capture procedure does not correspond to consumer still photography. Yoo et al. [18] have published a dataset of 80 sequences used in their experiments, but their sequences were specifically designed for AC bulb illumination experiments and high-speed capture. Prinet et al. [16] released a small dataset of 11 sequences used in their video color constancy experiments. In summary, the existing multi-frame color constancy datasets are small and ill-suited for generic consumer still photography color constancy studies. Therefore we introduce a new dataset of 600 sequences captured with a rooted mobile phone, which makes the multi-frame capture invisible to the user and therefore better resembles consumer still photography.
3 Dataset
The multi-frame temporal color constancy (TCC) dataset was collected by university students who captured the shots during their free time. They were not given instructions but were asked to take photographs whenever they wished. The students were given a Huawei Mate 20 Pro, a high-end mobile phone, which was rooted and re-programmed to automatically start storing raw sensor images when the camera application was launched. The sensor images were linked to the shot frames using the date and time tags of the files.
3.1 Image Capture


The rooted phone saves the frames as unprocessed 16-bit 3648×2736 Bayer pattern images. High-resolution frame transfer from the ISP memory to the mobile phone storage memory requires special functionality that limits the practical capture frame rate to 1-3 frames per second (FPS).
To resolve the illuminant color ground truth, the shot frame scenes need to be captured with a color calibration target placed in the scene. For example, in the Gehler-Shi dataset a Macbeth color checker calibration target is visible in the images. In the SFU Gray Ball dataset a gray ball calibration target is attached to the video camera and is therefore visible in all captured frames. In our dataset we wanted to avoid visible targets since they may unintentionally convey information to the learning-based methods even if they are masked in the training and test sets.
Similar to SFU Gray Ball, we used a gray surface calibration target, a SpyderCube (Figure 1), which was placed into the shot scene immediately after the shot (the target is always at the image center so that color shading has minimal effect on the ground truth). The students were instructed to take one shot of the calibration target at the location which was the main target or location of their photograph. The captured sequences contain 3-17 frames depending on the view finder duration (Figure 1). The SpyderCube object contains two neutral 18% gray surfaces, of which the one that better reflects the casting illumination was annotated and used to compute the ground truth illumination color. The ground truth was verified by manually checking all sequences with the ground truth correction applied. In total, 600 sequences were recorded and verified at different times of day, in various indoor and outdoor locations and in various weather conditions between October 2019 and January 2020.
On the dataset project page we provide linear demosaiced images in the PNG format with normalized pixel values, a black level of zero and no saturated pixels. The format corresponds to that of the Gehler-Shi dataset, which is a popular evaluation set in the color constancy literature. The black level of the specific camera sensor and device is and the saturation level is at . The final RGB images are of resolution 1824×1368.
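As an illustration of how a frame can be loaded and white balanced with its ground-truth illuminant (as done when visually verifying the sequences), a minimal sketch is given below; the file path, the 16-bit PNG assumption and the diagonal (von Kries) correction are our assumptions rather than the released tooling:

```python
import numpy as np
import imageio.v3 as iio

def load_and_correct(png_path, illuminant_rgb):
    """Load a linear TCC frame and apply the ground-truth white balance.

    Assumes the PNG stores 16-bit linear values; `illuminant_rgb` is the
    annotated RGB illuminant of the SpyderCube gray face.
    """
    img = iio.imread(png_path).astype(np.float32) / 65535.0  # assumed 16-bit PNG
    illum = np.asarray(illuminant_rgb, dtype=np.float32)
    gains = illum[1] / illum                 # diagonal correction, green kept fixed
    return np.clip(img * gains, 0.0, 1.0)
```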
3.2 Dataset Statistics and Performance Metrics




We profile the distributions of the ground truth chromaticity values of several mainstream color constancy benchmarks (the Gehler-Shi, NUS 8-camera, Cube+ and SFU Gray Ball datasets) in the top-left inset of Figure 2, and show that of the new Temporal Color Constancy dataset (TCC benchmark) in the top-right position. Our chromaticity distribution is similar to those of the popular Gehler-Shi, NUS and Cube+ datasets (note that the SFU Gray Ball distribution is wider than the others since the data was captured with a high-end Sony VX-2000 video camera that has separate sensors for each color channel and therefore less spectral cross-talk and better channel separation). The small shifts between the datasets are mainly due to the different cameras used.
In the bottom left of Figure 2 we draw the histogram of sequence lengths in the TCC benchmark. The mean length is , median and mode is . The bottom-right inset of Figure 2 shows the correlation coefficient between the sequence lengths and the ground truth vectors, which indicates that there is no clear correlation between the sequence length and the global illumination.
The main performance measure
in our work is the angular error which is used in the prior works [13, 9]. The angular error is computed from the estimated tri-stimulus (RGB) illumination vector and the ground truth vector as
(1) |
where denotes the inner product between the two vectors and is the Euclidean norm. As overall performance measures we report mean, median and trimean. Tukey’s trimean is a measure of a probability distribution’s location defined as a weighted average of the distribution’s median and its two quartiles. In addition, we report the top quartile (25%), the worst quartile (worst 25%) and the worst 5% numbers.
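For concreteness, a minimal NumPy sketch of the error metric and the summary statistics follows; the function and variable names are ours:

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angular error (degrees) between an estimated and a ground-truth RGB illuminant."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def summarize(errors):
    """Mean, median, Tukey trimean, best-25%, worst-25% and worst-5% statistics."""
    e = np.sort(np.asarray(errors, float))
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    n = len(e)
    return {
        "mean": e.mean(),
        "median": med,
        "trimean": (q1 + 2 * med + q3) / 4.0,          # weighted average of quartiles
        "best25": e[: max(1, n // 4)].mean(),
        "worst25": e[-max(1, n // 4):].mean(),
        "worst5": e[-max(1, round(0.05 * n)):].mean(),
    }
```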
4 Methods
4.1 Extensions of Single-frame Methods
The conventional single-frame methods are designed to estimate the illuminant color from a single image, the shot frame. However, it is straightforward to extend the single-frame methods to the multi-frame setting: a single-frame method is executed on every frame and the per-frame estimates are combined using a suitable statistical tool such as a moving average (a sketch of this scheme is given below). In the following we introduce temporal extensions of the state-of-the-art statistical and learning-based methods.
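The following sketch wraps an arbitrary single-frame estimator into a multi-frame one with an exponential moving average; the estimator interface and the smoothing weight are illustrative assumptions, not a specification from the paper:

```python
import numpy as np

def temporal_extension(frames, single_frame_estimator, alpha=0.7):
    """Combine per-frame illuminant estimates with an exponential moving average.

    `single_frame_estimator` maps one image to an RGB illuminant; `alpha` is a
    hypothetical smoothing weight.
    """
    estimate = None
    for frame in frames:
        rho = np.asarray(single_frame_estimator(frame), dtype=float)
        rho /= np.linalg.norm(rho)
        estimate = rho if estimate is None else alpha * estimate + (1 - alpha) * rho
        estimate /= np.linalg.norm(estimate)
    return estimate
```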
Temporal Grayness Index (T.GI):
Qian et al. [6] proposed a substantial extension of the Gray Pixel method of Yang et al. [5]. Their Grayness Index (GI) provides a spatial grayness map of the input image, and the pixels with the highest grayness are selected for the illumination estimation. In the temporal extension of GI, T.GI, all frames over time are combined to form a multi-frame GI map from which the best pixels are selected.
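A possible pooling scheme is sketched below, assuming precomputed per-frame grayness maps; the scoring direction and the selection fraction are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def temporal_grayness_index(frames, grayness_maps, top_percent=0.1):
    """T.GI sketch: pool per-frame grayness maps over the whole sequence and
    estimate the illuminant from the most gray pixels.

    `grayness_maps` are assumed precomputed single-frame GI maps scored so that
    a higher value means a more achromatic pixel.
    """
    pixels = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0)
    scores = np.concatenate([g.reshape(-1) for g in grayness_maps], axis=0)
    k = max(1, int(top_percent / 100.0 * len(scores)))
    best = np.argsort(scores)[-k:]            # most gray pixels across all frames
    illum = pixels[best].mean(axis=0)
    return illum / np.linalg.norm(illum)
```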
Temporal Fast Fourier Color Constancy (T.FFCC):
We use the official temporal smoothing implementation released by the author of FFCC [13]. It is based on a simplified Kalman filter with a simplified transition model, no control model and varying observation noise. The current estimate (modeled as an isotropic Gaussian) is smoothed by multiplying it with the last observed estimate. For more details, we refer to the temporal smoothing section of [13].
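The sketch below illustrates the kind of Gaussian (Kalman-style) smoothing described above; the state representation and the process-noise value are our assumptions, not the exact FFCC implementation:

```python
import numpy as np

def smooth_estimate(prev_mu, prev_var, obs_mu, obs_var, transition_var=1e-4):
    """Kalman-style smoothing of per-frame chroma estimates (a sketch).

    Each estimate is an isotropic Gaussian (mean, variance); smoothing multiplies
    the inflated previous posterior with the new observation.
    `transition_var` is an assumed process-noise value.
    """
    if prev_mu is None:                       # first frame: nothing to smooth
        return np.asarray(obs_mu, float), obs_var
    pred_var = prev_var + transition_var      # simplified transition model
    w = obs_var / (pred_var + obs_var)        # precision weighting
    mu = w * np.asarray(prev_mu, float) + (1 - w) * np.asarray(obs_mu, float)
    var = (pred_var * obs_var) / (pred_var + obs_var)
    return mu, var
```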
4.2 Temporal Color Constancy Network
[Figure 3: TCC-Net architecture. Two branches, each: Input → Backbone (3→512) → 2D-LSTM (512→128); the branch outputs are concatenated (256 channels) → MaxPool2d → Conv (256→64) → Sigmoid → Conv (64→3) → Sigmoid → Output.]

In the following, we propose a strong baseline for temporal color constancy. The baseline is a deep network architecture (TCC-Net) inspired by the RCC-Net in [9], but with the following significant improvements: 1) a more powerful backbone network for semantic feature extraction, 2) a 2D LSTM that provides more effective spatial recurrent information and 3) support for variable-length sequences. The overall architecture is described in Fig. 3. TCC-Net adopts the two-branch CNN+LSTM structure of RCC-Net. The first branch, the temporal branch, processes the image sequence, and the second branch, the shot frame branch, processes a pseudo zoom-out sequence generated from the shot frame. In TCC-Net both branches are based on a novel 2D LSTM that produces spatio-temporal information which is merged into a single RGB vector at the end of the processing pipeline.
The backbone feature extraction network of RCC-Net (VGG-Net or AlexNet) is replaced with SqueezeNet [22] in TCC-Net. In a recent architecture for computational color constancy, FC4 [8], SqueezeNet was found superior, and this was verified by our experiments (see Section 5.3). Following [8], we keep all layers up to the last convolution layer of SqueezeNet, which outputs a 512-channel 2D feature map.
The second improvement is to adopt a 2D LSTM to temporally process the sequences and learn a 2D spatio-temporal illumination feature map. We refer to the ordinary LSTM used by RCC-Net as "1D-LSTM" since its memory cells and hidden states are encoded as 1D vectors. Although several 1D-LSTMs can be stacked to learn a more complex sequence-to-sequence mapping, the nature of the 1D-LSTM limits its representational power for spatial information. The 2D-LSTM, introduced in [23], extends the 1D-LSTM to 2D space by using convolutional structures in both the input-to-state and state-to-state transitions. Combining these changes, we obtain an end-to-end deep network which predicts spatial illumination. To get the global estimate vector, averaging (or a more advanced aggregation, e.g., confidence-weighted averaging as in [8]) is applied.
TCC-Net provides native support for varying-length input. This is implemented using the dynamic computational graph feature of PyTorch. In contrast, RCC-Net supports only pre-defined, fixed-length sequences (3 or 5 frames in the original paper). With an Nvidia GTX 1080 Ti, the processing speed of TCC-Net is 6 ms per frame (network operations only).
For a better understanding of the network parameters, we present the key equations implemented in the TCC-Net architecture. For simplicity, the equations are given only for one branch, but both branches share similar stages. Given an input sequence and the SqueezeNet backbone, TCC-Net proceeds as
Following the convolutional LSTM formulation of [23],

$$
\begin{aligned}
H_0 &= \mathbf{0}, \quad C_0 = \mathbf{0} \quad \text{(initialize the hidden state and the memory cell of the 2D-LSTM)}\\
i_t &= \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i)\\
f_t &= \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f)\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c)\\
o_t &= \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o)\\
H_t &= o_t \circ \tanh(C_t)
\end{aligned}
\tag{2}
$$

where $X_t$ is the backbone feature map of frame $t$, and $i_t$, $f_t$, $o_t$ are 3D tensors that refer to the input, forget, and output gates of the 2D-LSTM; "$*$" denotes convolution and "$\circ$" the Hadamard product. The 2D-LSTM has two parameters: the convolution kernel size $K$ (a larger value corresponds to faster illumination variations) and the output channel size $H$ of the convolution filter (corresponding to the hidden channels of a 1D-LSTM). An ablation study of both parameters is provided in Section 5. Figure 4 visualizes the workings of the TCC-Net pipeline.
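A minimal PyTorch sketch of such a 2D-LSTM (ConvLSTM) cell is given below, with `hidden_ch` and `kernel` corresponding to the parameters H and K studied in the ablation; this is our illustration, not the released TCC-Net code:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal 2D-LSTM (ConvLSTM) cell; a sketch under stated assumptions."""

    def __init__(self, in_ch=512, hidden_ch=128, kernel=5):
        super().__init__()
        # One convolution produces the input, forget, cell and output gate maps.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, kernel, padding=kernel // 2)
        self.hidden_ch = hidden_ch

    def forward(self, x, state=None):
        if state is None:                                   # zero-initialized H_0, C_0
            b, _, h, w = x.shape
            state = (x.new_zeros(b, self.hidden_ch, h, w),
                     x.new_zeros(b, self.hidden_ch, h, w))
        h_prev, c_prev = state
        i, f, g, o = self.gates(torch.cat([x, h_prev], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```

In TCC-Net the per-frame 512-channel SqueezeNet feature maps would be fed through such a cell frame by frame, and the final hidden map is passed to the convolutional head that outputs the spatial illumination estimate.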
Training:
In all experiments we use the following settings. The optimizer is RMSprop [24] with the learning rate and the batch size . The network was trained for 2,000 epochs. For data augmentation, images were randomly rotated from to and randomly cropped to the size of the shorter side. Each patch was horizontally flipped with the probability . The SqueezeNet backbone was initialized with weights pretrained on ImageNet.
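Assuming the angular error is also used as the training objective, the loss can be written compactly in PyTorch; the optimizer hyperparameters below are placeholders, not the values used in the paper:

```python
import torch
import torch.nn.functional as F

def angular_loss(pred, gt, eps=1e-7):
    """Mean angular error (radians) between predicted and ground-truth illuminants."""
    pred = F.normalize(pred, dim=-1)
    gt = F.normalize(gt, dim=-1)
    cos = (pred * gt).sum(dim=-1).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos).mean()

# Hypothetical optimizer setup (learning rate is a placeholder):
# optimizer = torch.optim.RMSprop(model.parameters(), lr=3e-4)
```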
5 Experiments
We run a large number of well-known methods on the new TCC benchmark and report their accuracy in Section 5.1. In Section 5.2 we verify the good performance of the new baseline method (TCC-Net) on the previously used SFU Gray Ball dataset. In Section 5.3 we provide an ablation study of the main components and parameters of TCC-Net.
5.1 Method Comparison on TCC-benchmark
Method | Mean | Med. | Tri. | B25% | W25% | W5% |
---|---|---|---|---|---|---|
Single-frame Static | ||||||
White-Patch [25] | 11.20 | 10.42 | 10.87 | 1.87 | 21.48 | 26.20 |
Gray-World [4] | 6.45 | 4.74 | 5.19 | 1.19 | 14.74 | 22.78 |
Shades-of-Grey (p=4) [26] | 5.50 | 3.20 | 3.70 | 0.85 | 13.92 | 21.86 |
General Grey-World (p=1, σ=9) [12] | 6.44 | 4.76 | 5.24 | 1.18 | 14.75 | 22.83 |
1st-order Grey-Edge (p=1, σ=9) [12] | 5.46 | 4.09 | 4.25 | 1.01 | 12.84 | 21.06 |
2nd-order Grey-Edge (p=1, σ=9) [12] | 5.10 | 3.62 | 3.85 | 1.00 | 12.00 | 20.48 |
PCA (Dark+Bright) [11] | 5.45 | 3.00 | 3.68 | 0.96 | 13.78 | 22.93 |
Grayness Index (GI) [6] | 4.99 | 2.68 | 3.10 | 0.71 | 13.22 | 24.12 |
Temporal extensions | ||||||
T.GI | 4.73 | 2.96 | 3.39 | 0.82 | 11.38 | 17.42 |
Single-frame Learning-based | ||||||
Pixel-based Gamut (σ=4) [3] | 6.90 | 5.53 | 6.20 | 1.18 | 14.72 | 19.19 |
Edge-based Gamut (σ=3) [3] | 8.69 | 7.58 | 8.12 | 2.00 | 17.16 | 20.54 |
Intersection-based Gamut (σ=4) [3] | 8.46 | 7.94 | 7.85 | 2.03 | 16.60 | 20.80 |
Natural Images Statistics [27] | 5.63 | 6.89 | 5.88 | 1.41 | 14.61 | 22.20 |
LSRS [28] | 6.61 | 4.92 | 5.52 | 1.67 | 13.90 | 21.37 |
Exemplar-based Colour Constancy [29] | 5.24 | 3.88 | 4.21 | 1.38 | 11.58 | 19.82 |
Chakrabarti et al. 2015 [30] Empirical | 4.26 | 2.60 | 2.82 | 0.51 | 11.07 | 16.43 |
Regression (SVR) [31] | 4.00 | 3.09 | 3.45 | 1.36 | 7.81 | 11.07 |
Bayesian [20] | 4.25 | 2.86 | 3.16 | 0.93 | 9.97 | 16.27 |
Random Forest [32] | 3.76 | 2.66 | 2.94 | 0.74 | 8.54 | 13.14 |
AlexNet-FC4 [8] | 3.10 | 2.12 | 2.35 | 0.85 | 6.78 | 8.21 |
SqueezeNet-FC4 [8] | 2.84 | 2.10 | 2.23 | 0.74 | 6.39 | 7.83 |
C4 (3 stage) [14] | 2.37 | 1.60 | 1.76 | 0.57 | 5.58 | 6.85 |
FFCC(model Q) [13] | 2.33 | 1.37 | 1.60 | 0.49 | 5.84 | 10.97 |
Temporal extensions | ||||||
T.FFCC | 3.35 | 1.70 | 1.99 | 0.51 | 9.06 | 17.41 |
Temporal | ||||||
RCC-Net [9] | 2.74 | 2.23 | 2.39 | 0.75 | 5.51 | 8.21 |
Our (TCC-Net) | 1.99 | 1.21 | 1.46 | 0.30 | 4.84 | 6.34 |
The results for various single-frame static and learning-based methods (see the related work section), their temporal extensions (Section 4.1), the current temporal state-of-the-art (RCC-Net) [9] and our temporal baseline (Section 4.2) are shown in Table 1. The results demonstrate that the recent deep learning based methods (FC4 and C4) and the convolutional CC (FFCC) are clearly superior to the conventional static and learning-based methods. These methods improve the performance over the whole error distribution, i.e. both the easy and difficult test samples. On our dataset the previous temporal state-of-the-art, RCC-Net [9], is slightly inferior to the best single-frame methods C4 and FFCC.
The temporal extension of GI [6], T.GI, improves over its single-frame version. On the contrary, T.FFCC, referred to as "temporal smoothing" in [13], is inferior to its single-frame version. The Kalman filter extension of FFCC provides a smoother change of the illuminant estimates over the frames, but the accuracy is worse than that of the non-smoothed estimates. We also tested Prinet et al. [16], which achieves a mean error of 7.51 due to its assumption that the illumination remains constant over time.

The proposed TCC-Net (Model G in Table 3) obtains the best performance on all error measures and improves performance on both the easy and the difficult cases. Compared to the previous state-of-the-art, RCC-Net, the improvement is over 35% in the mean error and 43% in the median error. Considering the fact that end-users are more sensitive to large estimation errors [11] and that is generally considered sufficient accuracy, the W25% error of TCC-Net (4.84) is the closest to practical use among all tested methods.
Figure 5 shows examples of color-corrected images for various methods. The first two examples demonstrate easy outdoor cases where all methods perform comparably well. The third and fourth examples represent typical view finder sequences toward a target which itself does not provide visually rich cues for inferring the illumination color. In these sequences the two temporal methods, RCC-Net and TCC-Net, provide the best results since they effectively exploit cues from the view finder frames. The last example is a difficult case where the shot frame is a closeup of a tinted fabric material which could be of any plausible color. For this fifth sequence only the proposed TCC-Net provides an accurate estimate.
5.2 Method Comparison on SFU Gray Ball
To validate the findings of the previous experiment on the new TCC benchmark, we replicated the experiments of Qian et al. [9] using their metrics (the mean, median, worst 5% and maximum errors) and the SFU Gray Ball dataset. The results are collected in Table 2 (cf. Table 1 in [9]).
On the temporal version of the SFU Gray Ball dataset, the proposed TCC-Net again outperforms RCC-Net [9] by a clear margin. The difference between the two methods is particularly evident on the hardest cases, as TCC-Net obtains more than 40% lower error on both the worst-5% and the maximum error metrics.
Method | Mean | Med. | W5% | Max |
---|---|---|---|---|
Single-frame Static | ||||
Gray-World [4] | 7.9 | 7.0 | – | 48.1 |
General Grey-World (p=1, σ=9) [12] | 6.1 | 5.3 | – | 41.2 |
1st-order Grey-Edge (p=1, σ=9) [12] | 5.9 | 4.7 | – | 41.2 |
Gray Pixel [5] | 6.2 | 4.6 | 20.8 | 33.3 |
Shades-of-Gray [26] | 6.1 | 5.2 | – | 41.2 |
Single-frame Learning-based | ||||
Pixel-based Gamut (σ=5) [3] | 7.1 | 5.8 | – | 41.9 |
Edge-based Gamut (σ=3) [3] | 6.8 | 5.8 | – | 40.3 |
Intersection-based Gamut (σ=9) [3] | 6.9 | 5.8 | – | 41.9 |
Inverse-Intensity Chromaticity Space [33] | 6.6 | 5.6 | – | 76.2 |
Random Forest [32] | 6.1 | 4.8 | 13.1 | 30.6 |
LSRS [28] | 6.0 | 5.1 | – | – |
Natural Images Statistics [27] | 5.2 | 3.9 | – | 44.5 |
Exemplar-based Colour Constancy [29] | 4.4 | 3.4 | – | 45.6 |
ColorCat [34] | 4.2 | 3.2 | – | 43.7 |
Temporal | ||||
Prinet et al. [16] | 5.4 | 4.6 | – | – |
Wang et al. [17] | 5.4 | 4.1 | – | 26.8 |
RCC-Net [9] | 4.0 | 2.9 | 12.2 | 25.2 |
Our (TCC-Net) | 2.8 | 2.3 | 7.1 | 13.9 |
5.3 TCC-Net Ablation Study
Results with different components and parameter settings of TCC-Net are given in Table 3 and briefly discussed below.
Does LSTM help?
The 1-branch TCC-Net (Model B in Table 3) without the LSTM module becomes equivalent to SqueezeNet-FC4 in Table 1. With the LSTM module, however, the mean error is 11% lower than that of SqueezeNet-FC4, which can only be explained by the temporal information carried in the LSTM memory cell. Additionally, Figure 6 shows a t-SNE visualization [35] of how the LSTM representation is more discriminative than that of the SqueezeNet backbone in our TCC-Net. t-SNE is used to visualize high-dimensional feature data. For each of the four selected samples shown on the right-hand side of Figure 6, the SqueezeNet backbone and the 2D-LSTM output deep representations of dimensions (h, w, 512) and (h, w, 128), respectively, where h is the height, w the width and 512 (or 128) the number of feature channels. In contrast to the SqueezeNet backbone, the LSTM exploits spatio-temporal information over multiple frames and provides features that better represent the different illuminations.
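A short sketch of how such a t-SNE projection can be produced from the deep feature maps (scikit-learn based; the parameter values are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_features(feature_maps, perplexity=30):
    """Project deep feature maps of shape (h, w, C) to 2D points with t-SNE.

    Each spatial location becomes one point; all maps passed in one call must
    share the channel count C (e.g. 512 for SqueezeNet, 128 for the 2D-LSTM).
    """
    points = np.concatenate([f.reshape(-1, f.shape[-1]) for f in feature_maps], axis=0)
    return TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(points)
```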
Model | TCC Configuration | Mean | Med. | Tri. | B25% | W25% | W5% | Mem. (MB)
---|---|---|---|---|---|---|---|---|
A | 2branch,AlexNet,1D-LSTM | 2.74 | 2.23 | 2.39 | 0.75 | 5.51 | 8.21 | 20.4 |
B | 1branch,SqueezeNet,1D-LSTM | 2.52 | 1.77 | 2.04 | 0.52 | 5.65 | 6.58 | 3.3 |
C | 2branch,SqueezeNet,1D-LSTM | 2.20 | 1.55 | 1.65 | 0.43 | 5.05 | 6.18 | 6.6 |
D | 2branch,SqueezeNet,2D-LSTM,len1 | 3.27 | 3.46 | 3.32 | 2.07 | 4.44 | 4.80 | 68.8 |
E | 2branch,SqueezeNet,2D-LSTM,len5 | 2.50 | 1.78 | 1.99 | 0.53 | 5.65 | 6.94 | 68.8 |
F | 2branch,SqueezeNet,2D-LSTM(H=64) | 2.17 | 1.59 | 1.68 | 0.40 | 5.00 | 6.72 | 33.3 |
G | 2branch,SqueezeNet,2D-LSTM,(H=128) | 1.99 | 1.21 | 1.46 | 0.30 | 4.84 | 6.34 | 68.8 |
H | 2branch,SqueezeNet,2D-LSTM,(H=512) | 2.06 | 1.09 | 1.40 | 0.30 | 5.19 | 7.65 | 476.1 |
I | 2branch,SqueezeNet,2D-LSTM(K=1) | 2.01 | 1.42 | 1.58 | 0.34 | 4.65 | 5.48 | 11.0 |
J | 2branch,SqueezeNet,2D-LSTM(K=7) | 2.08 | 1.43 | 1.60 | 0.35 | 4.83 | 5.83 | 131.0 |

Backbone network:
Model A in Table 3 is the baseline, as this configuration corresponds to RCC-Net [9]. The effect of using the SqueezeNet instead of the AlexNet backbone is evident when comparing models A and C: the results with SqueezeNet are superior and its memory footprint is substantially smaller, making it more practical for mobile devices. Intriguingly, a single-branch TCC-Net without the pseudo-sequence branch (Model B) also performs better than the RCC-Net baseline (Model A), which further verifies the superior performance of SqueezeNet for color constancy. Comparing Model B and Model C shows that the two-branch design outperforms a single branch by a clear margin (the mean error is reduced by 12.7%).
1D vs. 2D LSTM:
Model G is the main model reported in Table 1; the same configuration but with 1D LSTMs is Model C. Comparing the performance of C and G shows that 2D LSTMs provide better performance and achieve the state of the art on the TCC and SFU Gray Ball benchmarks. The TCC-Net baseline (Model G) is a fully 2D convolutional architecture and the best architecture we found for illuminant estimation in temporal color constancy.
Dimensionality of LSTM hidden channels:
Three different sizes of the LSTM hidden channels, H ∈ {64, 128, 512}, were tested (Models F, G and H, respectively). For H=64 (Model F) the LSTM underfits and for H=512 the network starts to overfit, making H=128 a good trade-off between training error and model generalization.
Kernel Size of 2D LSTM:
The kernel size defines the amount of spatial correlation retained by the 2D LSTM. A kernel size of K=1 means that neighboring pixels do not affect the LSTM inference. Different kernel sizes were tested (Models G, I and J) and the best results were achieved with K=5.
Varying-length Input:
One significant difference to the previous state-of-the-art (RCC-Net) [9] is that TCC-Net allows an arbitrary number of input frames before the shot frame. We experimented with two fixed lengths, 1 (only the shot frame) and 5, and with arbitrary length (Models D, E and G, respectively). The single-frame results are the worst, five frames is the second best, and arbitrary length achieves the best performance and is the most convenient for end-user cases where the length of a view finder sequence is unknown.
Memory Footprint:
From the perspective of deploying the deep network on a GPU/NPU-supported consumer mobile platform, we profiled the memory footprints of all TCC-Net variants in Table 3. Model C, combining SqueezeNet and a 1D-LSTM, obtains a good balance between accuracy and memory footprint (6.6 MB). The best-performing variant G occupies 68.8 MB, due to the larger dimensionality of the hidden LSTM channels and the 2D LSTM structure.
6 Conclusions
Our work introduces the TCC-benchmark, by far the largest temporal color constancy dataset of high-resolution images. More than 20 popular methods were evaluated on the dataset, including the recent state of the art. As a new baseline method, we proposed TCC-Net, an end-to-end learnable deep recurrent neural network architecture. TCC-Net achieves state-of-the-art results on our TCC-benchmark and on SFU Gray Ball, which was used in previous works on temporal CC. In terms of mean angular error, TCC-Net outperforms the best single-image and temporal color constancy methods by 33% and 30%, respectively, on the SFU Gray Ball set and by 40% and 27%, respectively, on the TCC-benchmark. TCC-Net combines SqueezeNet and a 2D-LSTM to capture spatio-temporal variations in a video. We present multiple variants of TCC-Net, including ones with a small memory footprint that are therefore suitable for mobile devices.
References
- [1] Finlayson, G.D.: Corrected-moment illuminant estimation. In: ICCV. (2013) 1904–1911
- [2] Gijsenij, A., Gevers, T., Van De Weijer, J.: Computational color constancy: Survey and experiments. TIP 20(9) (2011) 2475–2489
- [3] Gijsenij, A., Gevers, T., Van De Weijer, J.: Generalized gamut mapping using image derivative structures for color constancy. IJCV 86(2-3) (2010) 127–139
- [4] Buchsbaum, G.: A spatial processor model for object colour perception. Journal of the Franklin Institute 310(1) (1980) 1–26
- [5] Yang, K.F., Gao, S.B., Li, Y.J.: Efficient illuminant estimation for color constancy using grey pixels. In: CVPR. (2015)
- [6] Qian, Y., Kämäräinen, J.K., Nikkanen, J., Matas, J.: On finding gray pixels. In: CVPR. (2019)
- [7] Barron, J.T.: Convolutional color constancy. In: ICCV. (2015)
- [8] Hu, Y., Wang, B., Lin, S.: FC4: Fully convolutional color constancy with confidence-weighted pooling. In: CVPR. (2017)
- [9] Qian, Y., Chen, K., Kämäräinen, J., Nikkanen, J., Matas, J.: Recurrent color constancy. In: ICCV. (2017)
- [10] Ciurea, F., Funt, B.: A large image database for color constancy research. In: Color Imaging Conference (CIC). (2003)
- [11] Cheng, D., Prasad, D.K., Brown, M.S.: Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution. JOSA A 31(5) (May 2014) 1049–1058
- [12] Van De Weijer, J., Schmid, C., Verbeek, J.: Using high-level visual information for color constancy. In: ICCV. (2007)
- [13] Barron, J.T., Tsai, Y.T.: Fast fourier color constancy. In: CVPR. (2017)
- [14] Yu, H., Chen, K., Wang, K., Qian, Y., Zhang, Z., Jia, K.: Cascading convolutional color constancy. In: AAAI Conference on Artificial Intelligence. (2020)
- [15] Yang, Q., Wang, S., Ahuja, N., Yang, R.: A uniform framework for estimating illumination chromaticity, correspondence, and specular reflection. TIP 20(1) (2011) 53–63
- [16] Prinet, V., Lischinski, D., Werman, M.: Illuminant chromaticity from image sequences. In: ICCV. (2013)
- [17] Wang, N., Funt, B., Lang, C., Xu, D.: Video-based illumination estimation. In: Color Imaging Conference (CIC) Workshop. (2011)
- [18] Yoo, J.S., Kim, J.O.: Dichromatic model based temporal color constancy for AC light sources. In: CVPR. (2019) 12329–12338
- [19] Qian, Y., Yan, S., Kamarainen, J.K., Matas, J.: Flash lightens gray pixels. In: ICIP. (2019)
- [20] Gehler, P.V., Rother, C., Blake, A., Minka, T., Sharp, T.: Bayesian color constancy revisited. In: CVPR. (2008)
- [21] Shi, L., Funt, B.: Re-processed version of the gehler color constancy dataset of 568 images. accessed from http://www.cs.sfu.ca/~colour/data/ (2010)
- [22] Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)
- [23] Xingjian, S., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.K., Woo, W.c.: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In: Advances in Neural Information Processing Systems. (2015) 802–810
- [24] Tieleman, T., Hinton, G.E.: Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. In: Coursera Lecture slides: https://www.coursera.org/learn/neural-networks. (2012)
- [25] Land, E.H., McCann, J.J.: Lightness and retinex theory. JOSA 61(1) (1971) 1–11
- [26] Finlayson, G.D., Trezzi, E.: Shades of gray and colour constancy. In: Color Imaging Conference (CIC). (2004)
- [27] Gijsenij, A., Gevers, T.: Color constancy using natural image statistics and scene semantics. TPAMI 33(4) (2011) 687–698
- [28] Gao, S., Han, W., Yang, K., Li, C., Li, Y.: Efficient color constancy with local surface reflectance statistics. In: ECCV. (2014)
- [29] Joze, H.R.V., Drew, M.S.: Exemplar-based color constancy and multiple illumination. TPAMI 36(5) (2014) 860–873
- [30] Chakrabarti, A.: Color constancy by learning to predict chromaticity from luminance. In: NIPS. (2015)
- [31] Funt, B., Xiong, W.: Estimating illumination chromaticity via support vector regression. In: Color Imaging Conference (CIC). (2004)
- [32] Cheng, D., Price, B., Cohen, S., Brown, M.S.: Effective learning-based illuminant estimation using simple features. In: CVPR. (2015)
- [33] Tan, R.T., Ikeuchi, K., Nishino, K.: Color constancy through inverse-intensity chromaticity space. In: Digitally Archiving Cultural Objects. Springer (2008) 323–351
- [34] Banić, N., Lončarić, S.: Color cat: Remembering colors for illumination estimation. IEEE Signal Processing Letters 22(6) (2014) 651–655
- [35] Maaten, L.v.d., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov) (2008) 2579–2605