
Exploring Kolmogorov-Arnold networks for realistic image sharpness assessment

Shaode Yu, Ze Chen, Zhimu Yang, Jiacheng Gu, Bizu Feng School of Information and Communication Engineering
Communication University of China
Beijing, China
{yushaodecuc, chenze, 2021211123030}@cuc.edu.cn
   Qiurui Sun Center of Information & Network Technology
Beijing Normal University
Beijing, China
[email protected]
Abstract

Score prediction is crucial in evaluating realistic image sharpness based on collected informative features. Recently, Kolmogorov-Arnold networks (KANs) have been developed and have shown remarkable success in data fitting. This study introduces the Taylor series-based KAN (TaylorKAN). Then, different KANs are explored on four realistic image databases (BID2011, CID2013, CLIVE, and KonIQ-10k) to predict the scores by using 15 mid-level features and 2048 high-level features. Compared to support vector regression, results show that KANs are generally competitive or superior, and TaylorKAN is the best one when mid-level features are used. This is the first study to investigate KANs for image quality assessment, and it sheds some light on how to select and further improve KANs in related tasks.

Index Terms:
Kolmogorov-Arnold network, TaylorKAN, image sharpness assessment, image quality, machine learning.

I Introduction

Blind image sharpness/blurriness assessment (BISA) is crucial in media quality assurance. It enables real-time processing without reference images and supports dynamic adjustments to image sharpness and visual fidelity in video streaming, thereby enhancing the quality of the user experience [1]. Specifically, BISA can be used to guide the restoration process by offering a metric to optimize algorithms for image sharpening [2].

Score prediction is an indispensable step when informative features have been prepared for image sharpness representation. Support vector regression (SVR) and multi-layer perceptron (MLP) are preferred. Li et al. craft multi-scale sharpness-aware features, and SVR performs score rating [3]. Yu et al. design a convolutional neural network (CNN) for blurriness estimation [4], and SVR and MLP are used to improve the prediction performance [5]. Liu et al. design orientation-aware features for SVR-based score prediction [6]. Chen et al. weight the local binary pattern features in spatial domain and entropy and gradient features in spectral domain, and SVR predicts the quality scores from perception features [7]. Yu et al. construct mid-level features, and MLP and SVR are evaluated [8].

For score prediction, a CNN model can be generally treated as an image-based feature extractor followed by a MLP in end-to-end optimization. Zhu et al. retrieve prior knowledge shared among distortions, and a deep net is fine-tuned for quality scoring [9]. Huang et al. investigate the inherent relationship between the attributes and the categories via graph convolution network for attribute reasoning and quality estimation [10]. Zhang et al. optimize a CNN on multiple databases by a hinge constraint on learning uncertainty [11]. Li and Huo consider multi-scale visual features and introduce the feedback mechanism [12]. Chen et al. develop multi-scale spatial pooling and combine both block attention and image understanding for improved generalization [13]. Sun et al. extract low-level features and high-level semantics, and a staircase structure is designed for hierarchical feature integration and quality-aware embedding [14]. Zhao et al. enable representation learning via a pre-text self-supervised task and use a quality-aware contrastive loss to learn distortions [15]. Zhang et al. design multi-task learning for quality assessment, scene classification and distortion identification [16]. Wu et al. fuse multi-stage semantic features for no-reference image quality prediction, and before score rating, the features are rectified using multi-level channel attention [17].

Inspired by the Kolmogorov-Arnold theorem (KAT), a novel module called the Kolmogorov-Arnold network (KAN) has been proposed [18]. Unlike an MLP, which applies fixed activation functions on nodes, a KAN places learnable activation functions on the edges between the nodes of successive layers. Improved capacity has been shown in data fitting and knowledge representation [19]. Later, KAN variants were designed using different mathematical functions as the activation functions on edges [20].

Despite remarkable success in knowledge representation and data fitting, little is known about KANs for score prediction in BISA. This study attempts to bridge this gap. Firstly, the Taylor series-based KAN (TaylorKAN) is introduced. Then, mid-level features and high-level features of four realistic databases are prepared. After that, MLP, SVR and six KANs are evaluated. Experimental results indicate that KANs are generally better than SVR and MLP, and TaylorKAN is the best when using mid-level features as the image quality representation.

II KAT-inspired networks

KAT asserts that a continuous multivariate function can be represented as a finite sum of continuous univariate functions [21]. Let $f:[0,1]^{n}\to\mathbb{R}$ be a continuous function; then there exist continuous univariate functions $\phi_{q,p}$ and $\Phi_{q}$ such that

f(x_{1},x_{2},\ldots,x_{n})=\sum_{q=1}^{2n+1}\Phi_{q}\left(\sum_{p=1}^{n}\phi_{q,p}(x_{p})\right). (1)

II-A The first KAN model

KAT-inspired KAN treats a multivariate function as learnable univariate spline functions on edges [18]. A KAN layer can be defined by a matrix of univariate functions $\Phi=\{\phi_{q,p}\}$, in which $p=1,2,\ldots,n_{\text{in}}$ and $q=1,2,\ldots,n_{\text{out}}$. Here, $n_{\text{in}}$ is the input dimension, $n_{\text{out}}$ is the output dimension, and $\phi_{q,p}$ is a learnable function. The activation of each node in layer $l+1$ can be computed as

x_{l+1,j}=\sum_{i=1}^{n_{l}}\phi_{l,j,i}(x_{l,i}). (2)

Thus, the $L$-layer KAN can be generally described as

\text{KAN}(x)=(\Phi_{L-1}\circ\Phi_{L-2}\circ\ldots\circ\Phi_{0})(x), (3)

and implemented in a layer-to-layer connection form [19].
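
For concreteness, Eqs. 2-3 can be sketched in PyTorch as a stack of layers whose edges carry learnable univariate functions. The following is only an illustrative sketch: a Gaussian RBF grid plus a SiLU residual path stands in for the B-spline basis of the original implementation [18], and the layer widths are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayer(nn.Module):
    """One KAN layer (Eq. 2): every edge (i -> j) applies a learnable
    univariate function phi_{l,j,i}; node j sums the edge outputs."""
    def __init__(self, n_in, n_out, n_basis=8):
        super().__init__()
        # fixed grid of RBF centers, a stand-in for the spline grid of [18]
        self.register_buffer('centers', torch.linspace(-1.0, 1.0, n_basis))
        self.coeffs = nn.Parameter(torch.randn(n_out, n_in, n_basis) * 0.1)
        self.w_res = nn.Parameter(torch.randn(n_out, n_in) * 0.1)  # SiLU residual path

    def forward(self, x):                                          # x: (batch, n_in)
        rbf = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)    # (batch, n_in, n_basis)
        out = torch.einsum('bik,oik->bo', rbf, self.coeffs)        # spline-like part
        out = out + torch.einsum('bi,oi->bo', F.silu(x), self.w_res)
        return out

class KAN(nn.Module):
    """L-layer KAN as the composition of KAN layers (Eq. 3)."""
    def __init__(self, widths):                                    # e.g. [15, 26, 18, 12, 1]
        super().__init__()
        self.layers = nn.ModuleList(KANLayer(a, b) for a, b in zip(widths, widths[1:]))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
```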

II-B KAN variants

The flexibility of the KAN structure allows for diverse implementations using different activation functions on the edges. TaylorKAN and the other KANs involved in this study are described below.

II-B1 TaylorKAN

The Taylor series (Eq. 4) represents $f(x)$ as an infinite sum of terms computed from the values of its derivatives $f^{(n)}$ at a point $a$. In TaylorKAN, the expansion is truncated and the coefficients are learned during model training.

f(x)\approx\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^{n} (4)
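
The released code (Section III-E) contains the actual model; the snippet below is only a minimal sketch of the idea, assuming a truncated expansion around a fixed point $a$ (here $a=0$) whose coefficients $f^{(n)}(a)/n!$ are replaced by freely learnable parameters, in line with the quadratic approximation used in the experiments.

```python
import torch
import torch.nn as nn

class TaylorKANLayer(nn.Module):
    """Sketch of a TaylorKAN layer: each edge (i -> j) applies a truncated
    Taylor-style expansion sum_k c_{j,i,k} (x_i - a)^k with learnable c (Eq. 4)."""
    def __init__(self, n_in, n_out, order=2, a=0.0):
        super().__init__()
        self.order, self.a = order, a
        self.coeffs = nn.Parameter(torch.randn(n_out, n_in, order + 1) * 0.1)

    def forward(self, x):                                          # x: (batch, n_in)
        z = x - self.a
        basis = torch.stack([z ** k for k in range(self.order + 1)], dim=-1)
        return torch.einsum('bik,oik->bo', basis, self.coeffs)

# Hypothetical usage with the mid-level widths listed in Section III-E:
widths = [15, 26, 18, 12, 1]
model = nn.Sequential(*[TaylorKANLayer(a, b) for a, b in zip(widths, widths[1:])])
scores = model(torch.randn(4, 15))                                 # (4, 1) predicted scores
```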

II-B2 BSRBF-KAN

BSRBF-KAN combines B-splines (BSs) and radial basis functions (RBFs) to enhance approximation capabilities [22]. BSs ensure continuity, and RBFs provide smooth interpolation. The BSRBF edge function can be represented as

\phi(x)=w_{b}b(x)+w_{s}(\phi_{BS}(x)+\phi_{RBF}(x)), (5)

where $b(x)$ is a base function, $w_{b}$ and $w_{s}$ are the weights of the base term and the combined spline-RBF term respectively, $\phi_{BS}(x)$ stands for the BS function, and $\phi_{RBF}(x)$ denotes the RBF function. $\phi_{RBF}(r)=e^{-\epsilon r^{2}}$ is typically chosen, in which $r$ is the Euclidean distance between the input and the center vector, and $\epsilon$ controls the width of the Gaussian function.

II-B3 ChebyKAN

ChebyKAN employs the Chebyshev polynomials for function approximation [23]. It attempts to minimize the maximum error in polynomial approximation for high accuracy and stability. Eq. 6 shows Chebyshev polynomials defined by the recurrence relation.

\left\{\begin{aligned}&T_{0}(x)=1\\&T_{1}(x)=x\\&T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x)\end{aligned}\right. (6)
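
As a small illustration (a sketch, not the code of [23]), the recurrence of Eq. 6 can be evaluated for a whole batch at once; a ChebyKAN edge then takes a learned linear combination of these basis values. Inputs are assumed to be squashed into [-1, 1], e.g. with tanh.

```python
import torch

def chebyshev_basis(x: torch.Tensor, degree: int) -> torch.Tensor:
    """Stack T_0(x), ..., T_degree(x) along a new last dimension using the
    recurrence of Eq. 6; x is assumed to lie in [-1, 1]."""
    terms = [torch.ones_like(x), x]
    for _ in range(2, degree + 1):
        terms.append(2 * x * terms[-1] - terms[-2])        # T_{n+1} = 2x T_n - T_{n-1}
    return torch.stack(terms[: degree + 1], dim=-1)

# Example: a degree-4 basis for a batch of 15 mid-level features
basis = chebyshev_basis(torch.tanh(torch.randn(8, 15)), degree=4)  # (8, 15, 5)
```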

II-B4 HermiteKAN

Hermite polynomials are well suited to approximating Gaussian-like functions due to their recurrence relations and orthogonality. HermiteKAN is based on the Hermite polynomials defined in Eq. 7.

H_{n}(x)=(-1)^{n}e^{x^{2}}\frac{d^{n}}{dx^{n}}e^{-x^{2}} (7)

II-B5 JacobiKAN

JacobiKAN uses Jacobi polynomials (Eq. 8) [24]. The polynomials are orthogonal with respect to the weight function $(1-x)^{\alpha}(1+x)^{\beta}$ and flexible in handling diverse boundary conditions.

P_{n}^{(\alpha,\beta)}(x)=\frac{1}{2^{n}}\sum_{k=0}^{n}\binom{n+\alpha}{k}\binom{n+\beta}{n-k}(x-1)^{n-k}(x+1)^{k} (8)

II-B6 WavKAN

Wavelets are adept at capturing local variations in functions, and they provide flexibility and adaptability in complex pattern modeling. The general form of a scaled and translated wavelet is shown below,

\psi_{a,b}(x)=\frac{1}{\sqrt{a}}\psi\left(\frac{x-b}{a}\right), (9)

where $a$ and $b$ correspond to the scaling and translation parameters. Since different wavelet bases are available, the Mexican Hat wavelet is used in WavKAN [25] in this study,

\psi(x)=\frac{2}{\sqrt{3}\,\pi^{1/4}}\left(1-x^{2}\right)e^{-\frac{x^{2}}{2}}. (10)
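
A brief sketch of Eqs. 9-10 follows; the normalization constant is the standard Mexican Hat form written above, which is an assumption about the exact constant rather than a detail taken from [25].

```python
import math
import torch

def mexican_hat(x: torch.Tensor) -> torch.Tensor:
    """Mother wavelet of Eq. 10."""
    c = 2.0 / (math.sqrt(3.0) * math.pi ** 0.25)
    return c * (1.0 - x ** 2) * torch.exp(-x ** 2 / 2.0)

def scaled_wavelet(x: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """psi_{a,b}(x) of Eq. 9; a and b would be learnable per edge in WavKAN.
    abs(a) guards against negative scales during learning (a design choice here)."""
    return mexican_hat((x - b) / a) / torch.sqrt(torch.abs(a))
```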

II-B7 Other variants

Other KANs are available [20]. However, limited by computing resources, only the above-mentioned lightweight KANs are evaluated in the current study.

III Materials and Methods

III-A Databases

Four databases (BID2011 [26], CID2013 [27], CLIVE [28] and KonIQ-10k [29]) with realistic distortions are analyzed. Table I shows the number (#) of distorted images, the year of data availability, and the score ranges of the databases.

TABLE I: General information of the databases
Database # images year score range
BID2011 [26] 586 2011 [0, 5]
CID2013 [27] 474 2013 [0, 100]
CLIVE [28] 1,169 2015 [0, 100]
KonIQ-10k [29] 10,073 2018 [0, 5]

III-B Feature preparation

Two sets of features are prepared. One contains 15 mid-level features per image, which are the outputs of BISA indicators [8]. The other set includes 2048 deeply learned features derived from the last fully connected layer of the pre-trained ResNet50 [30]. The procedure is similar to that in [17].

III-B1 Mid-level features

Assuming a database $\{(I_{i},y_{i})\}_{i=1}^{n}$ with $n$ pairs of images and scores, an indicator $\eta_{j}$ yields an objective score $x_{i,j}$ for an image $I_{i}$, which can be formulated as $x_{i,j}=\eta_{j}(I_{i})$. In the same way, $m$ indicators ($\{\eta_{j}\}_{j=1}^{m}$) generate a feature matrix $M$ as shown in Eq. 11.

M=\left[\begin{array}{ccc|c}x_{1,1}&\dots&x_{1,m}&y_{1}\\ \vdots&\ddots&\vdots&\vdots\\ x_{n,1}&\dots&x_{n,m}&y_{n}\end{array}\right]_{n\times(m+1)} (11)
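
A minimal sketch of assembling the matrix in Eq. 11 is given below; the indicator functions are hypothetical placeholders for the 15 BISA indicators of [8].

```python
import numpy as np

def build_feature_matrix(images, scores, indicators):
    """Assemble M of Eq. 11: one row per image, one column per indicator,
    with the subjective score y_i appended as the last column."""
    rows = [[eta(img) for eta in indicators] + [score]
            for img, score in zip(images, scores)]
    return np.asarray(rows, dtype=np.float64)              # shape (n, m + 1)

# Hypothetical usage: indicators = [eta_1, ..., eta_15] taken from [8]
# M = build_feature_matrix(images, mos_scores, indicators)
# X, y = M[:, :-1], M[:, -1]
```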

III-B2 High-level features

Using the pre-trained ResNet50 [30] as an extractor $f$, a $d$-dimensional feature vector ($d=2048$) is generated for each input image $I_{i}$. Thereby, the deeply learned high-level feature matrix $N$ is derived as shown in Eq. 12.

N=\left[\begin{array}{c|c}f(I_{1})&y_{1}\\ \vdots&\vdots\\ f(I_{i})&y_{i}\\ \vdots&\vdots\\ f(I_{n})&y_{n}\end{array}\right]_{n\times(d+1)} (12)
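
One common way to obtain the 2048-dimensional vector with torchvision is sketched below, assuming the feature is taken from the pooled activations feeding the final classification layer; the exact extraction pipeline used in the study may differ.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained ResNet50 with the classification head replaced by identity,
# so the forward pass returns the 2048-d pooled feature vector.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(path: str) -> torch.Tensor:
    """Return f(I_i) of Eq. 12 for one image file."""
    img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    return resnet(img).squeeze(0)                           # shape (2048,)
```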

III-C Score prediction

Besides the KANs, SVR and MLP are tested. Assume features $X=\{\vec{x}_{i}\}_{i=1}^{l}$ and scores $Y=\{y_{i}\}_{i=1}^{l}$ of $l$ samples in the training set, a weighting vector $\boldsymbol{w}$, and a new input vector $\vec{x}$. SVR aims to find a function $R_{svr}(X)$ that deviates from the subjective scores $y_{i}$ of the training samples by at most $\epsilon$. In Eq. 13, $\zeta(X)$ is a nonlinear mapping, and $\gamma$ is a bias.

g(X)=R_{svr}(X)=\boldsymbol{w}^{T}\zeta(X)+\gamma (13)

The MLP is designed with different numbers of hidden layers depending on the feature inputs. Its parameters are optimized by minimizing the difference between the output of the model $R_{mlp}$ and the ground truth $Y$ (Eq. 14).

\boldsymbol{w}^{*}=\arg\min_{\boldsymbol{w}}\;\|Y-R_{mlp}(\boldsymbol{w};X)\|^{2} (14)
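
A sketch of the two baselines with scikit-learn on synthetic stand-in data follows; the RBF kernel matches Section III-E, while the remaining hyper-parameters shown here are assumptions rather than the exact settings of the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the 15 mid-level features and MOS values
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 15)), rng.uniform(0.0, 5.0, size=400)
X_test = rng.normal(size=(100, 15))

# SVR baseline of Eq. 13 with the RBF kernel
svr = make_pipeline(StandardScaler(), SVR(kernel='rbf'))

# MLP baseline of Eq. 14; the hidden widths mirror the mid-level configuration
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(26, 18, 12), max_iter=500))

for name, model in [('SVR', svr), ('MLP', mlp)]:
    model.fit(X_train, y_train)
    print(name, model.predict(X_test)[:3])                  # predicted quality scores
```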

III-D Performance criterion

The performance is evaluated by using Pearson linear correlation coefficient (PLCC) and Spearman rank order correlation coefficient (SRCC). Higher values indicate better performance.

Specifically, PLCC is computed after a five-parameter nonlinear mapping between objective and subjective scores (Eq. 15), in which $s$ denotes the predicted score, $f(s)$ is the mapped score, and $\{q_{i}\}_{i=1}^{5}$ are the fitting parameters. PLCC values are calculated between subjective scores and mapped scores.

f(s)=q_{1}\left(\frac{1}{2}-\frac{1}{1+e^{q_{2}(s-q_{3})}}\right)+q_{4}s+q_{5} (15)
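
The mapping in Eq. 15 is typically fitted by non-linear least squares before computing PLCC; a sketch with SciPy is given below (the initial values are an assumption, not a detail from the paper).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(s, q1, q2, q3, q4, q5):
    """Five-parameter mapping of Eq. 15."""
    return q1 * (0.5 - 1.0 / (1.0 + np.exp(q2 * (s - q3)))) + q4 * s + q5

def plcc_srcc(pred, mos):
    """PLCC after the non-linear mapping; SRCC on the raw predictions."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    p0 = [np.max(mos) - np.min(mos), 1.0, np.mean(pred), 0.0, np.mean(mos)]
    params, _ = curve_fit(logistic5, pred, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic5(pred, *params), mos)[0]
    srcc = spearmanr(pred, mos)[0]
    return plcc, srcc
```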

Training time is used to evaluate the efficiency of the involved score prediction models. The training time of SVR is recorded in seconds (s), and the other models are measured in iterations per second (i/s), where a lower value indicates poorer computational efficiency.

III-E Implementation details

In each experiment, a database is randomly split into three subsets for training (70% of samples), validation (15% of samples) and testing (the remaining samples) of the score prediction models.

For fair comparison, the KANs and MLP are configured with 3 hidden layers ([15, 26, 18, 12, 1]) for mid-level feature inputs and 4 hidden layers ([2048, 1536, 1024, 256, 128, 1]) for high-level feature inputs. TaylorKAN is implemented with quadratic approximation, and the other KAN models use default settings. For SVR, the radial basis function (RBF) kernel is used, and the other parameters are set to their default values.

During training, an early stopping mechanism is used even though 500 iterations are pre-defined. The patience parameter is set to 20, and thus, the training is terminated early if the validation loss fails to improve for 20 consecutive epochs. Throughout the process, the metrics corresponding to the learning rate that yields the highest sum of PLCC and SRCC are reported.
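
A minimal sketch of this early-stopping rule (at most 500 epochs, patience 20) is shown below; `fit_one_epoch` and `eval_loss` are hypothetical callbacks, and the trainer in the released code may differ in detail.

```python
def train_with_early_stopping(model, fit_one_epoch, eval_loss,
                              max_epochs=500, patience=20):
    """Stop when the validation loss has not improved for `patience` epochs."""
    best_loss, best_state, wait = float('inf'), None, 0
    for epoch in range(max_epochs):
        fit_one_epoch(model)                     # one pass over the training split
        val_loss = eval_loss(model)              # loss on the validation split
        if val_loss < best_loss:
            best_loss, wait = val_loss, 0
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
        else:
            wait += 1
            if wait >= patience:                 # no improvement for `patience` epochs
                break
    if best_state is not None:
        model.load_state_dict(best_state)        # restore the best validation checkpoint
    return model
```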

The codes are implemented on an Ubuntu operating system (version 22.04) using Python (version 3.12), PyTorch (version 2.3.0), and CUDA (version 12.1). The algorithms are executed on a GPU (A100-PCIE-40GB) with 72 GB RAM. The project is available at https://github.com/CUC-Chen/KAN4IQA.

IV Results and Discussion

Experimental results using mid-level features and high-level features are shown in Table II and Table III respectively, where the training time of SVR is reported in seconds (s) and that of the other models in iterations per second (i/s). Table IV compares the current study with several state-of-the-art achievements. In the tables, the best metric values are boldfaced.

IV-A BISA using mid-level features

Table II shows that compared to SVR, TaylorKAN achieves better prediction on BID2011, CID2013 and KonIQ-10k, and its PLCC value is higher on CLIVE. The other KAN models obtain better performance on BID2011, competitive results on CID2013 and KonIQ-10k, but worse prediction on CLIVE. Among the KANs, ChebyKAN is the fastest, achieving over 30 i/s on BID2011 and CID2013, and TaylorKAN demonstrates slightly lower but comparable computational efficiency.

TABLE II: BISA by using 15 mid-level features
BID2011 [26] CID2013 [27] CLIVE [28] KonIQ-10k [29]
PLCC SRCC time (i/s) PLCC SRCC time (i/s) PLCC SRCC time (i/s) PLCC SRCC time (i/s)
SVR 0.619 0.617 0.049 0.834 0.810 0.024 0.630 0.592 0.117 0.746 0.691 12.39
MLP 0.744 0.729 33.66 0.808 0.791 39.62 0.649 0.552 18.06 0.753 0.682 2.082
BSRBF_KAN [22] 0.675 0.680 33.23 0.845 0.795 12.99 0.562 0.479 6.347 0.725 0.650 1.463
ChebyKAN [23] 0.700 0.703 34.86 0.808 0.826 37.73 0.570 0.447 18.36 0.749 0.680 2.224
HermiteKAN 0.651 0.740 19.29 0.825 0.845 20.99 0.566 0.502 9.655 0.754 0.671 1.118
JacobiKAN [24] 0.709 0.789 16.29 0.808 0.775 20.12 0.545 0.519 9.546 0.753 0.689 1.074
WavKAN [25] 0.715 0.730 22.74 0.827 0.827 26.60 0.559 0.482 13.23 0.759 0.685 1.448
TaylorKAN (ours) 0.756 0.782 28.44 0.871 0.851 34.45 0.668 0.582 15.32 0.766 0.699 1.927

When using the 15 mid-level features for quality representation, TaylorKAN achieves the best performance, which can be attributed to several factors. Firstly, it surpasses SVR by leveraging the Taylor series for hierarchical approximation, and meanwhile, a multi-layer network is implemented for deeper feature learning. Secondly, it outperforms the other involved KANs, possibly because quadratic approximation enables more precise feature fitting. Thirdly, it achieves higher metric values than MLP, since the former uses additional activation functions on edges. Lastly, the simplicity of the quadratic approximation enables TaylorKAN to perform data fitting at a relatively faster pace.

IV-B BISA using high-level features

Setting SVR as the baseline, Table III suggests that KANs achieve higher values on BID2011, worse results on CID2013 and CLIVE, and close performance on KonIQ-10k in general. Notably, BSRBF_KAN and WavKAN are comparable to SVR on CID2013 and CLIVE, while TaylorKAN, BSRBF_KAN, JacobiKAN and HermiteKAN are superior on KonIQ-10k. It is found that BSRBF_KAN, WavKAN and TaylorKAN run at less than 10 i/s on BID2011 and CID2013.

TABLE III: BISA by using 2048 high-level features
BID2011 [26] CID2013 [27] CLIVE [28] KonIQ-10k [29]
PLCC SRCC time (i/s) PLCC SRCC time (i/s) PLCC SRCC time (i/s) PLCC SRCC time (i/s)
SVR 0.786 0.782 0.355 0.860 0.882 0.447 0.751 0.712 1.492 0.839 0.800 119.22
MLP 0.750 0.780 23.46 0.796 0.825 18.32 0.637 0.554 12.77 0.808 0.763 1.038
BSRBF_KAN [22] 0.811 0.795 6.296 0.828 0.820 6.247 0.733 0.649 3.768 0.841 0.809 0.435
ChebyKAN [23] 0.821 0.812 20.79 0.630 0.665 18.61 0.662 0.587 11.41 0.824 0.790 1.368
HermiteKAN 0.822 0.814 12.53 0.604 0.687 10.45 0.670 0.647 6.804 0.839 0.804 0.803
JacobiKAN [24] 0.820 0.806 11.38 0.596 0.605 10.65 0.733 0.651 6.426 0.842 0.803 0.699
WavKAN [25] 0.767 0.735 1.178 0.844 0.856 0.994 0.752 0.676 0.587 0.810 0.777 0.067
TaylorKAN (ours) 0.797 0.813 3.285 0.788 0.780 2.796 0.696 0.598 1.644 0.850 0.811 0.188

When applying the 2048 high-level features for image quality representation, TaylorKAN obtains slightly higher metric values than SVR on BID2011 and KonIQ-10k, but worse results on CID2013 and CLIVE. The sub-optimal performance indicates that it remains challenging for the KANs to handle high-dimensional features. On the one hand, the KAT (Eq. 1) implies that the representation of a high-dimensional function requires a large number of univariate functions. Consequently, the accumulation of errors and over-fitting may cause unstable data-driven approximation. On the other hand, whether these high-level features primarily learned for object recognition can be directly utilized to represent image quality has not yet been determined. Therefore, proper post-processing strategies, such as fine-tuning [9] and rectification [17], become important for smoothly transferring these deeply learned features into the representation space of image quality.

Comparison of Table II and Table III reveals that the 2048 high-level features provide more effective quality representation than the 15 mid-level features for score prediction, except on the CID2013 database. Most KANs, such as BSRBF_KAN, ChebyKAN and JacobiKAN, achieve better performance when high-level features are used, while the utilization of the 2048 features dramatically increases the computational time. On the one hand, it should be acknowledged that the mid-level features are less effective than the high-level features due to their limited quantity and capacity for quality representation. Specifically, the mid-level features are primarily designed for scoring image sharpness [8], while the high-level features are hierarchically and deeply learned for general recognition tasks [30]. On the other hand, the KANs perform generally better on CID2013 when using the mid-level features. For example, in the case of JacobiKAN, using mid-level features leads to PLCC 0.808 and SRCC 0.775, whereas using high-level features yields PLCC 0.596 and SRCC 0.605. This finding indicates that the mid-level features are more suitable for the CID2013 database.

IV-C Current achievement on the databases

The results of TaylorKAN using mid-level features (a) and high-level features (b), together with several state-of-the-art (SOTA) works, are shown in Table IV. These works [11, 10, 12, 13, 14, 15, 16] develop novel CNNs that take images rather than features as input.

TABLE IV: State-of-the-art performance on realistic distorted image datasets
BID2011 [26] CID2013 [27] CLIVE [28] KonIQ-10k [29]
PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC
TaylorKAN a 0.756 0.782 0.871 0.851 0.668 0.582 0.766 0.699
TaylorKAN b 0.797 0.813 0.788 0.780 0.696 0.598 0.850 0.811
SARQUE [10] 0.861 0.846 0.934 0.930 0.873 0.855 0.923 0.901
UNIQUE [11] 0.873 0.858 0.890 0.854 0.901 0.896
REQA [12] 0.886 0.874 0.880 0.865 0.916 0.904
CSPP-IQA [13] 0.891 0.875 0.898 0.882 0.921 0.912
StairIQA [14] 0.9284 0.9128 0.9175 0.8992 0.9362 0.9209
QPT-ResNet50 [15] 0.9109 0.8875 0.9141 0.8947 0.9413 0.9271
LIQE [16] 0.900 0.875 0.910 0.904 0.908 0.919

A significant performance gap is found between the SOTA works and TaylorKAN in score prediction. The CNNs achieve PLCC \geq 0.86 and SRCC \geq 0.84, which are much higher than those of the evaluated models (Table II and III). These SOTA works benefit not only from advanced architectures of image convolution and feature pooling but also from sophisticated module designs, such as uncertainty learning [11], spatial pooling [13], self-supervised learning [15], and multitask learning [16]. Thus, one way to improve representation power and score prediction performance is to integrate prior knowledge and advanced modules into KANs [20]. When using TaylorKAN for score prediction, high-level features can increase the metric values on three databases, but not substantially. On the one hand, perceptual similarity is an emergent property shared across deep visual embeddings [31], and thus it is not surprising that deep features are effective for image quality representation [15]. On the other hand, how to figure out the most informative subsets of deep features is important for KANs to enhance effectiveness and efficiency in score prediction, while avoiding the challenges of high-dimensional feature processing.

This study has several limitations. Firstly, a broader range of informative features, potentially numbering in the hundreds or thousands, could be collected and refined by selecting the most relevant ones. These selected features can then be applied in KANs for further data fitting and feature embedding tailored to specific tasks. Secondly, KANs could be integrated into deep learning architectures by replacing the MLP components. Such integration may enhance KANs’ representation capacity in end-to-end optimization. Thirdly, the performance of KANs could be further assessed on other tasks, such as object recognition, to gain deeper insights into their strengths, limitations, and potential applications in feature representation.

V Conclusions

In addition to SVR and MLP, six KANs are evaluated on four realistic databases for score prediction respectively using mid-level and high-level features. The results demonstrate that KANs achieve superior or competitive performance compared to SVR and MLP, highlighting their potential for enhancing performance in score prediction and related tasks.

References

  • [1] G. Zhai, and X. Min, “Perceptual image quality assessment: A survey,” Science China Information Sciences, vol. 63(11), pp. 1–52, 2020.
  • [2] A. Li, J. Li, Q. Lin, C. Ma, and B. Yan, “Deep image quality assessment driven single image deblurring,” IEEE International Conference on Multimedia and Expo, pp. 1–6, 2020.
  • [3] L. Li, W. Xia, W. Lin, Y. Fang, and S. Wang, “No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features,” IEEE Transactions on Multimedia, vol. 19(5), pp. 1030–40, 2016.
  • [4] S. Yu, F. Jiang, L. Li, and Y. Xie, “CNN-GRNN for image sharpness assessment,” Asian Conference on Computer Vision, pp. 50–61, 2016.
  • [5] S. Yu, S. Wu, L. Wang, F. Jiang, Y. Xie, and L. Li, “A shallow convolutional neural network for blind image sharpness assessment,” PLoS ONE, vol. 12(5), pp. e0176632, 2017.
  • [6] L. Liu, J. Gong, H. Huang, and Q. Sang, “Blind image blur metric based on orientation-aware local patterns,” Signal Processing: Image Communication, vol. 80, pp. 115654, 2020.
  • [7] J. Chen, S. Li, L. Lin, and Z. Li, “No-reference blurred image quality assessment method based on structure of structure features,” Signal Processing: Image Communication, vol. 118, pp. 117008, 2023.
  • [8] S. Yu, J. Wang, J. Gu, L. Yang, and J. Li, “A hybrid indicator for realistic blurred image quality assessment,” Journal of Visual Communication and Image Representation, vol. 94, pp. 103848, 2023.
  • [9] H. Zhu, L. Li, J. Wu, W. Dong, and G. Shi, “MetaIQA: Deep meta-learning for no-reference image quality assessment,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14143–52, 2020.
  • [10] Y. Huang, L. Li, Y. Yang, Y. Li, and Y. Guo, “Explainable and generalizable blind image quality assessment via semantic attribute reasoning,” IEEE Transactions on Multimedia, vol. 26, pp. 7672–85, 2022.
  • [11] W. Zhang, K. Ma, G. Zhai, and X. Yang, “Uncertainty-aware blind image quality assessment in the laboratory and wild,” IEEE Transactions on Image Processing, vol. 30, pp. 3474–86, 2021.
  • [12] B. Li and F. Huo, “REQA: Coarse-to-fine assessment of image quality to alleviate the range effect,” Journal of Visual Communication and Image Representation, vol. 98, pp. 104043, 2024.
  • [13] J. Chen, F. Qin, F. Lu, L. Guo, C. Li, K. Yan, and X. Zhou, “CSPP-IQA: A multi-scale spatial pyramid pooling-based approach for blind image quality assessment,” Neural Computing and Applications, pp. 1–12, 2022.
  • [14] W. Sun, X. Min, G. Zhai, and S. Ma, “Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training,” IEEE Journal of Selected Topics in Signal Processing, vol. 17(6), pp. 1178–92, 2023.
  • [15] K. Zhao, K. Yuan, M. Sun, M. Li, and X. Wen, “Quality-aware pre-trained models for blind image quality assessment,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22302–13, 2023.
  • [16] W. Zhang, G. Zhai, Y. Wei, X. Yang, and K. Ma, “Blind image quality assessment via vision-language correspondence: A multitask learning perspective,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14071–81, 2023.
  • [17] W. Wu, D. Huang, Y. Yao, Z. Shen, H. Zhang, C. Yan, and B. Zheng, “Feature rectification and enhancement for no-reference image quality assessment,” Journal of Visual Communication and Image Representation, vol. 98, pp. 104030, 2024.
  • [18] Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljačić, T. Y. Hou, and M. Tegmark, “KAN: Kolmogorov-arnold networks,” 2024, arXiv preprint arXiv:2404.19756.
  • [19] Z. Liu, P. Ma, Y. Wang, W. Matusik, and M. Tegmark, “KAN 2.0: Kolmogorov-Arnold Networks Meet Science,” 2024, arXiv preprint arXiv:2408.10205.
  • [20] Y. Hou and D. Zhang, “A comprehensive survey on Kolmogorov Arnold networks (KAN),” 2024, arXiv preprint arXiv:2407.11075.
  • [21] J. Schmidt-Hieber, “The Kolmogorov–Arnold representation theorem revisited,” Neural networks, vol. 137, pp. 119–126, 2021.
  • [22] H. T. Ta, “BSRBF-KAN: A combination of B-splines and radial basis functions in Kolmogorov-Arnold networks,” 2024, arXiv preprint arXiv:2406.11173.
  • [23] S. S. Sidharth, “Chebyshev polynomial-based kolmogorov-arnold networks: An efficient architecture for nonlinear function approximation,” 2024, arXiv preprint arXiv:2405.07200.
  • [24] A. A. Aghaei, “fKAN: Fractional Kolmogorov-Arnold Networks with trainable Jacobi basis functions,” 2024, arXiv preprint arXiv:2406.07456.
  • [25] Z. Bozorgasl and H. Chen, “Wav-kan: Wavelet kolmogorov-arnold networks,” 2024, arXiv preprint arXiv:2405.12832.
  • [26] A. Ciancio, E. A. B. da Silva, A. Said, R. Samadani, and P. Obrador, “No-reference blur assessment of digital pictures based on multifeature classifiers,” IEEE Transactions on Image Processing, vol. 20(1), pp. 64–75, 2010.
  • [27] T. Virtanen, M. Nuutinen, M. Vaahteranoksa, P. Oittinen, and J. Häkkinen, “CID2013: A database for evaluating no-reference image quality assessment algorithms,” IEEE Transactions on Image Processing, vol. 24(1), pp. 390–402, 2014.
  • [28] D. Ghadiyaram and A. C. Bovik, “Massive online crowdsourced study of subjective and objective picture quality,” IEEE Transactions on Image Processing, vol. 25(1), pp. 372–87, 2015.
  • [29] V. Hosu, H. Lin, T. Sziranyi, and D. Saupe, “KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment,” IEEE Transactions on Image Processing, vol. 29(1), pp. 4041–56, 2020.
  • [30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 770–8, 2016.
  • [31] R. Zhang, P. Isola, A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 586–95, 2018.