Efficient Privacy-Preserving KAN Inference Using Homomorphic Encryption
Abstract
The recently proposed Kolmogorov-Arnold Networks (KANs) offer enhanced interpretability and greater model expressiveness. However, KANs also present challenges related to privacy leakage during inference. Homomorphic encryption (HE) facilitates privacy-preserving inference for deep learning models, enabling resource-limited users to benefit from deep learning services while ensuring data security. Yet, the complex structure of KANs, incorporating nonlinear elements such as the SiLU activation function and B-spline functions, renders existing privacy-preserving inference techniques inadequate. To address this issue, we propose an accurate and efficient privacy-preserving inference scheme tailored for KANs. Our approach introduces a task-specific polynomial approximation for the SiLU activation function, dynamically adjusting the approximation range to ensure high accuracy on real-world datasets. Additionally, we develop an efficient method for computing B-spline functions within the HE domain, leveraging techniques such as repeat packing, lazy combination, and comparison functions. We evaluate the effectiveness of our privacy-preserving KAN inference scheme on both symbolic formula evaluation and image classification. The experimental results show that our model achieves accuracy comparable to plaintext KANs across various datasets and outperforms plaintext MLPs. Additionally, on the CIFAR-10 dataset, our inference latency achieves an over 7× speedup compared to the naive method.
Introduction
In recent years, deep learning has made notable advancements. However, training a model requires substantial data and computational power, which is often infeasible for resource-limited companies and individuals. Additionally, model owners are frequently hesitant to share their deep models due to potential breaches of intellectual property or privacy concerns (Jegorova et al. 2022). Machine Learning as a Service (MLaaS) offers a potential solution, allowing such users to leverage cloud-based deep learning services. However, user data, such as images or bills, often contain personal information, which raises significant concerns about privacy exposure in MLaaS.
To address these security issues, deep private inference has garnered considerable attention in recent years. By utilizing Secure Multiparty Computation (MPC) (Patra et al. 2021) and Homomorphic Encryption (HE) (Marcolla et al. 2022), data owners and model owners can perform inference tasks without compromising each other’s privacy (Knott et al. 2021; Lee et al. 2022). MPC-based privacy-preserving inference schemes often require substantial communication overhead (Hao et al. 2022; Pang et al. 2024). In contrast, in HE-based privacy-preserving inference frameworks (Lee et al. 2022; Ran et al. 2023; Zimerman et al. 2024), there is no need for interaction between the user and the server. Specifically, the user encrypts the input and sends it to the server, which then executes the inference algorithm and returns the encrypted result to the user. In this paper, we focus on HE-based solutions.

Although many deep private inference schemes have been proposed recently, the interpretability of deep models remains an open problem. This year, Kolmogorov-Arnold Networks (KANs) were introduced with the aim of offering greater interpretability. KANs incorporate additional non-linear elements, which increase computational demands during training. As shown in Figure 1, due to structural differences between KANs and traditional neural network architectures, existing HE-based privacy-preserving inference methods for deep learning cannot be directly applied to KANs. First, KAN inference requires the computation of various activation functions, such as SiLU. Additionally, KAN inference necessitates the approximation of B-splines, for which no HE-based computation currently exists (Knott 1999). HE schemes capable of performing Boolean operations, such as TFHE (Chillotti et al. 2020), can compute arbitrary functions but are highly inefficient. As a result, most privacy-preserving inference methods (Lee et al. 2022; Ran et al. 2023) rely on arithmetic fully homomorphic encryption (FHE) schemes such as RNS-CKKS (Cheon et al. 2019), which we adopt to implement privacy-preserving KAN inference. These schemes support only homomorphic addition and multiplication, necessitating polynomial approximations of activation functions and B-splines.
While ReLU activation functions have been well-approximated in current privacy-preserving inference work (Lee et al. 2022; Ran et al. 2023) through high-precision sign function approximations (Lee et al. 2021), approximating the SiLU function is more challenging due to its lack of a direct relationship with the sign function. Although better polynomial approximations can be achieved through training (Ao and Boddeti 2023), the training cost for KAN networks is higher than for other neural network models due to the additional activation functions. Furthermore, in many cases, the server only has access to the model, not the original training data.
To address these challenges, we propose a novel task-specific activation function approximation method that dynamically adjusts the approximation range to suit different KAN networks. This method leverages Chebyshev's inequality, focusing on regions with dense data distribution while avoiding emphasis on sparse edge regions. Comparisons with existing approximation methods, such as Remez (Zimerman et al. 2024), show that it yields superior model performance. Additionally, we develop an approximation method for B-spline basis functions using comparison functions based on minimax composite polynomials (Lee et al. 2021). We introduce a repeat packing technique to parallelize the computation of B-spline basis functions, enhancing computational efficiency. Moreover, we employ lazy combination techniques to eliminate the extra time costs associated with rearranging encrypted data, achieving over a 5× speedup compared to the naive method under larger parameter settings.
Our main contributions are summarized as follows:
• We propose a novel method for approximating activation functions by dynamically determining the approximation range and using weighted least squares. This approach yields a polynomial approximation of the SiLU activation function that performs effectively on real datasets.
• We introduce an efficient approach for computing B-spline functions in the HE domain. By utilizing techniques such as repeat packing, lazy combination, and comparison functions, we achieve highly accurate B-spline function computation.
• Our experiments validate that our HE KAN maintains competitive performance in both symbolic formula evaluation and image classification. Additionally, on the CIFAR-10 dataset, our inference latency achieves an over 7× speedup compared to the naive method.
Related Work
KANs in the Plaintext Domain
KAN, as a promising alternative to MLP, has been integrated with popular models by various researchers. In (Zhang and Zhang 2024), Zhang et al. combined KAN with Graph Neural Networks, utilizing KAN for feature extraction and experimentally demonstrating the effectiveness of their proposed approach. Xu et al. (Xu et al. 2024) introduced KAN to enhance the message-passing process in Graph Collaborative Filtering, proposing a more efficient recommendation model. In (Genet and Inzirillo 2024), Genet et al. incorporated Temporal KAN into the Temporal Fusion Transformer to further improve the model’s ability to handle time series data. Li et al. (Li et al. 2024) combined KAN with U-Net for the segmentation and generation of medical images. However, these works primarily focus on the performance of KAN within neural networks and do not address the issues of security and privacy leakage in practical applications.
Privacy-Preserving Deep Inference
In 2016, Gilad-Bachrach et al. introduced CryptoNets (Gilad-Bachrach et al. 2016), achieving privacy-preserving inference based on HE, though with very low efficiency. Since then, researchers have continued to improve inference performance on convolutional networks (Chou et al. 2018; Brutzkus, Gilad-Bachrach, and Elisha 2019; Benaissa et al. 2021), though the network structures they implemented are relatively simple. Both Lee et al. (Lee et al. 2022) and Ran et al. (Ran et al. 2023) employ more efficient ciphertext packing methods to achieve privacy-preserving inference for complex convolutional networks. Zimerman et al. (Zimerman et al. 2024) achieved privacy-preserving inference for Transformers. Additionally, researchers have combined Graph Convolutional Networks (GCNs) with HE to enhance privacy protection for cloud-based graph data inference (Ran et al. 2022; Peng et al. 2024; Ran et al. 2024). There are also many privacy-preserving inference schemes based on MPC (Juvekar, Vaikuntanathan, and Chandrakasan 2018; Srinivasan, Akshayaram, and Ada 2019; Hao et al. 2022; Pang et al. 2024), but their communication overhead is enormous, making them unsuitable for resource-constrained users. KANs differ significantly from current deep models. The aforementioned methods rely heavily on specific model structures, making them unsuitable for direct application to KAN privacy-preserving inference. Therefore, it is necessary to design a new method to achieve privacy-preserving inference for KANs.
Preliminaries
Kolmogorov-Arnold Networks
Unlike traditional neural networks that use fixed activation functions, KANs employ learnable activation functions on the network's edges, enabling each weight parameter to be replaced by a univariate function. However, identifying suitable basis functions can be challenging, as many functions cannot be represented effectively. To address this, KANs utilize spline functions for parametric approximation, providing significant flexibility. This method allows for modeling complex functions with fewer parameters, thereby enhancing the model's interpretability.
The flexibility of spline functions enables them to adaptively model complex relationships in data by adjusting their shape, minimizing approximation errors, and improving the network’s ability to learn subtle patterns from high-dimensional datasets.
In most cases, each activation function $\mathrm{spline}(x)$ is parametrized as a linear combination of B-splines as follows (Liu et al. 2024):
$$\mathrm{spline}(x) = \sum_{i} c_i B_i(x), \tag{1}$$
where $c_i$ are trainable parameters and $B_i(x)$ are the $k$-order B-spline basis functions defined on the grid knots. The $k$-order B-spline basis functions are defined recursively as follows:
$$B_{i,0}(x) = \begin{cases} 1, & t_i \le x < t_{i+1} \\ 0, & \text{otherwise}, \end{cases} \tag{2}$$
$$B_{i,k}(x) = \frac{x - t_i}{t_{i+k} - t_i} B_{i,k-1}(x) + \frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}} B_{i+1,k-1}(x), \tag{3}$$
where the knots $t_0 \le t_1 \le \cdots$ form a non-decreasing sequence, which determines the domain and influence of the basis functions.
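For reference, a small plaintext implementation of the recursion in Equations 2 and 3 (the standard Cox-de Boor form; the knot array name and the example values are our own illustrative choices) might look as follows:

```python
import numpy as np

def bspline_basis(x, t, i, k):
    """B_{i,k}(x): the i-th B-spline basis of order k over the non-decreasing
    knot sequence t, following the Cox-de Boor recursion (Equations 2 and 3)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(x, t, i, k - 1)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(x, t, i + 1, k - 1)
    return left + right

t = np.linspace(-1.0, 1.0, 8)                     # example knot sequence
spline = sum(c * bspline_basis(0.3, t, i, 2)      # spline(x) = sum_i c_i B_{i,k}(x), Eq. 1
             for i, c in enumerate(np.ones(5)))
```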
The overall network in KAN can be expressed by the following equations:
$$\mathbf{x}_{l+1} = \Phi_l(\mathbf{x}_l), \quad l = 0, 1, \ldots, L-1, \tag{4}$$
$$\mathrm{KAN}(\mathbf{x}) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_0)(\mathbf{x}), \tag{5}$$
where $\Phi_l$ denotes the matrix of learnable univariate activation functions of the $l$-th KANLayer and $L$ is the number of layers.
Homomorphic Encryption
Following current work in privacy-preserving inference (Lee et al. 2022; Ran et al. 2023), we use RNS-CKKS (Cheon et al. 2019) as the underlying cryptographic scheme. Below, we briefly introduce the main operations in HE.
RNS-CKKS encrypts an entire vector at a time using Single Instruction, Multiple Data (SIMD) batching. To clearly describe the operations of HE, we use bold lowercase letters to represent vectors and double brackets to denote encrypted ciphertexts. For example, we use $\mathbf{x}$ to represent a vector and $[\![\mathbf{x}]\!]$ to denote its ciphertext. We use $\oplus$, $\ominus$, $\otimes$, and $\mathrm{Rot}(\cdot)$ to represent homomorphic ciphertext addition, subtraction, multiplication, and rotation, respectively.
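To make the later descriptions concrete, the following plaintext stand-ins mimic the slot-wise semantics of these operations (the function names are ours and do not correspond to any particular HE library API; real ciphertext operations also track scales, levels, and noise):

```python
import numpy as np

def he_add(ct_a, ct_b):   return ct_a + ct_b        # slot-wise addition
def he_sub(ct_a, ct_b):   return ct_a - ct_b        # slot-wise subtraction
def he_mult(ct_a, ct_b):  return ct_a * ct_b        # slot-wise multiplication
def he_rot(ct, r):        return np.roll(ct, -r)    # cyclic left rotation of the slots by r

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(he_rot(x, 1), [2.0, 3.0, 4.0, 1.0])
```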
Threat Model
The threat model in this paper is similar to that of previous HE-based privacy-preserving inference schemes (Lee et al. 2022; Ran et al. 2023). We assume a cloud-based machine learning service scenario where the cloud server hosts a pre-trained KAN model with plaintext weights. Clients send their sensitive data to the cloud server to obtain inference results from the network model. We assume the cloud server is honest-but-curious, meaning it follows the protocol but may attempt to extract users’ private information through additional computations. To protect the privacy of the client’s sensitive data, the client can encrypt the data using HE before sending it to the cloud server for privacy-preserving inference. The cloud server performs computations on the ciphertext without decrypting it, and the client decrypts the inference results using their private key. This process ensures that no information about the input data is revealed to the cloud.
Method
Similar to MLPs, a KAN is composed of multiple KANLayers stacked linearly, with each KANLayer having an input dimension of $d_{\mathrm{in}}$ and an output dimension of $d_{\mathrm{out}}$. Therefore, by approximating a single KANLayer, we can approximate the entire network. An overview of our HE-friendly KAN is shown in Figure 2.
In our HE KAN, the input is a three-dimensional real-valued tensor $\mathbf{X} \in \mathbb{R}^{h \times w \times c}$, where $h$ and $w$ represent the height and width of the input, respectively, and $c$ represents the number of channels. We use a raster scan order (Lee et al. 2022) to convert the tensor into a vector of length $hwc$, which is encrypted using RNS-CKKS as $[\![\mathbf{x}]\!]$. Each KANLayer includes a SiLU branch, a B-spline branch, and a linear layer. Below, we describe the approximation method for each component in detail.

SiLU Approximation
Previous approximations of the ReLU function utilized the sign function, which allows the input to be scaled to the range $[-1, 1]$, resulting in a good approximation. However, SiLU cannot be approximated using sign or comparison functions. The main challenge in approximating SiLU is determining an appropriate approximation range.
The input range of the activation function varies across different datasets (Lee et al. 2023). We estimate the mean and variance of the activation function's inputs by feeding the dataset into a pre-trained network. Then, we use Chebyshev's inequality to estimate the approximate distribution of the activation function's input values.
According to Chebyshev's inequality,
$$\Pr\left(|X - \mu| \ge k\sigma\right) \le \frac{1}{k^2}, \tag{11}$$
when we set $k$ to a suitably large value, at least a $1 - 1/k^2$ fraction of the data falls within the interval $[\mu - k\sigma, \mu + k\sigma]$. Therefore, we use $[\mu - k\sigma, \mu + k\sigma]$ as the approximation range to ensure that most data lies within this interval.
Next, we uniformly sample points within the approximation range and use weighted least squares to fit the activation function. We focus on fitting the regions with dense data distribution and de-emphasize the sparse edge regions. This approach yields a polynomial that better approximates the original activation function under the same polynomial degree constraint. Let the sampled inputs be $x_i$, the outputs of the activation function be $y_i = \mathrm{SiLU}(x_i)$, and the outputs of the polynomial be $p(x_i)$. The optimization objective of weighted least squares is:
$$\min_{p} \sum_{i=1}^{n} w_i \left(y_i - p(x_i)\right)^2, \tag{12}$$
where $n$ is the number of sampled points and $w_i$ is the weight of the $i$-th point. The closer a data point is to the mean, the higher the weight we assign. For example, the squared residuals of data points in the dense central interval are given a weight of 10, while those outside it are given a weight of 1. We then solve for the polynomial coefficients to obtain the approximate polynomial for the activation function.
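As an illustration, a minimal plaintext sketch of this fitting procedure is given below (the range multiplier, the one-sigma weighting rule, and the function and parameter names are our own illustrative choices, not the paper's exact settings):

```python
import numpy as np

def fit_silu_poly(mu, sigma, k=3.0, degree=10, n_pts=2000, w_in=10.0, w_out=1.0):
    """Fit a polynomial to SiLU over [mu - k*sigma, mu + k*sigma] with weighted
    least squares, weighting the dense central region more heavily."""
    xs = np.linspace(mu - k * sigma, mu + k * sigma, n_pts)
    ys = xs / (1.0 + np.exp(-xs))                     # SiLU(x) = x * sigmoid(x)
    # heavier weight near the mean (here: within one sigma), lighter weight in the tails
    w = np.where(np.abs(xs - mu) <= sigma, w_in, w_out)
    # np.polyfit minimizes sum_i (w_i * (y_i - p(x_i)))^2, so pass sqrt of the weights
    coeffs = np.polyfit(xs, ys, deg=degree, w=np.sqrt(w))
    return np.poly1d(coeffs)

p = fit_silu_poly(mu=0.0, sigma=1.0, degree=10)       # polynomial usable in the HE domain
print(p(0.5), 0.5 / (1.0 + np.exp(-0.5)))             # approximation vs. true SiLU
```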
B-spline Basis Approximation
We observed that the calculation of B-spline basis functions involves many parallelizable components. Therefore, we use Repeat Packing to fully leverage the SIMD capabilities of HE to accelerate B-spline calculations.
Let the input be a vector $\mathbf{x}$ of length $n$. After repeat packing, it becomes $(\mathbf{x}, \mathbf{x}, \ldots, \mathbf{x})$, denoted as $[\![\mathbf{x}_{\mathrm{rep}}]\!]$, where $\mathbf{x}$ is repeated $G + k$ times, with $G$ being the number of B-spline grids and $k$ being the B-spline degree, satisfying $n(G + k) \le N/2$, the number of slots in a single ciphertext.
Repeat packing data in ciphertexts is not easy, as ciphertext rotations are computationally expensive. We designed a fast packing method that reduces the number of rotations needed for repeat packing from $G + k - 1$ to $\lceil \log_2(G + k) \rceil$. We first rotate $[\![\mathbf{x}]\!]$ to the right by $n$ steps, then add it to the original ciphertext to obtain a temporary ciphertext. The above step is repeated on the temporary ciphertext, doubling the rotation step to $2n$, $4n$, and so on. The specific algorithm is shown in Algorithm 1.
Input: ciphertext $[\![\mathbf{x}]\!]$, grid parameters $G$ and $k$, input size $n$
Output: packed ciphertext $[\![\mathbf{x}_{\mathrm{rep}}]\!]$
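A plaintext sketch of the doubling idea behind Algorithm 1 is shown below (our simplified version assumes the number of copies is a power of two and that the ciphertext has enough free slots; the general case follows the same idea):

```python
import numpy as np

def rot_right(v, s):
    """Plaintext stand-in for a homomorphic right rotation by s slots."""
    return np.roll(v, s)

def repeat_pack(ct, n, copies):
    """Replicate the first n slots of ct `copies` times using about log2(copies)
    rotations instead of copies - 1 (doubling sketch)."""
    made = 1
    while made < copies:
        ct = ct + rot_right(ct, made * n)   # each step doubles the number of copies
        made *= 2
    return ct

ct = np.zeros(32); ct[:3] = [1.0, 2.0, 3.0]        # input x of length n = 3, rest zero
packed = repeat_pack(ct, n=3, copies=4)            # -> [x, x, x, x, 0, ...]
```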

To simplify the description, we define the following operations:
• $\mathrm{Concat}(\mathbf{M}, a, b)$: concatenates columns $a$ to $b$ of matrix $\mathbf{M}$ into a vector.
• $\mathrm{Comp}(a, b)$: returns $1$ if $a > b$, $0$ if $a < b$, and $1/2$ if $a = b$. We implement this using the minimax composite polynomial method from (Lee et al. 2021).
Recall Equation 2 and Equation 3. We first need to determine whether the input falls within the interval $[t_i, t_{i+1})$, and then recursively compute the basis functions.
We collect all the interval endpoints into a weight matrix $\mathbf{T}$. Then, using $\mathrm{Concat}$, we obtain a vector $\mathbf{p}_L$ containing the left endpoints of all intervals and a vector $\mathbf{p}_R$ containing the right endpoints. Next, we use $\mathrm{Comp}$ to compare the packed input with $\mathbf{p}_L$ and $\mathbf{p}_R$, and then recursively calculate the basis functions according to Equation 3.
It should be noted that $\mathrm{Comp}$ operates only within a bounded input range; if the input exceeds this range, the approximation may become inaccurate. Therefore, we scale the input into this range. We estimate the approximate range of the input using the training data and represent the input within a slightly larger range to avoid overflow. When using $\mathrm{Comp}$ to evaluate intervals, the input may be exactly equal to an interval endpoint, e.g., $x = t_i$. In this case, $\mathrm{Comp}$ returns $1/2$, which does not match Equation 2. The correct result could be obtained by comparing against slightly shifted endpoints; however, the probability of such an exact equality is very small, so its impact on the overall approximation error is negligible. Therefore, in practice, we ignore the equality case, similar to how existing papers approximate ReLU (Lee et al. 2022). The specific algorithm is shown in Algorithm 2.
Input: packed ciphertext $[\![\mathbf{x}_{\mathrm{rep}}]\!]$, grid parameters $G$ and $k$, input size $n$, weight matrix $\mathbf{T}$, bound $B$
Output: B-spline basis result $[\![\mathbf{b}]\!]$
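To illustrate how the degree-0 bases of Equation 2 can be obtained with slot-wise comparisons on the repeat-packed input, here is a plaintext sketch (the copy-per-interval layout and the names cmp_poly, p_left, p_right are our own illustrative choices, not the paper's notation):

```python
import numpy as np

def cmp_poly(a, b):
    """Plaintext stand-in for the homomorphic comparison of (Lee et al. 2021):
    1 where a > b, 0 where a < b, and 1/2 where a == b."""
    return (a > b).astype(float) + 0.5 * (a == b)

def degree0_basis_simd(packed, knots, n):
    """Degree-0 B-spline bases for all n inputs and all intervals in parallel:
    copy j of the packed input is compared against plaintext vectors holding
    the endpoints of the interval [t_j, t_{j+1})."""
    copies = len(knots) - 1
    p_left = np.repeat(knots[:-1], n)     # t_j broadcast over the n slots of copy j
    p_right = np.repeat(knots[1:], n)     # t_{j+1} broadcast over the same slots
    packed = packed[:copies * n]
    return cmp_poly(packed, p_left) * cmp_poly(p_right, packed)

knots = np.linspace(-1.0, 1.0, 6)          # 5 degree-0 intervals
x = np.array([-0.9, 0.1, 0.7])             # n = 3 inputs
b0 = degree0_basis_simd(np.tile(x, 5), knots, n=3)
```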
Lazy Combination and Fusion Linear Layer
In the previous section, we described how to approximate the B-spline basis functions. However, according to Equation 1, we need to multiply each basis function by a coefficient and sum them to obtain the final B-spline function value. For a given input $x$, this is an inner product between the coefficient vector $\mathbf{c}$ and the basis function vector $\mathbf{B}(x)$. Since we are using SIMD, computing this inner product requires rotations, which are expensive in the HE domain. Therefore, we adopt a lazy combination approach, postponing the inner product until later to reduce the number of rotations. Specifically, we combine $\mathbf{c}$ with the weight coefficients of the subsequent linear layer to obtain a new set of weights. This reduces the number of multiplications and rotations, improving computational efficiency.
The linear layer weight matrix $\mathbf{W}$ has dimensions $d_{\mathrm{out}} \times d_{\mathrm{in}}$. When the input size is $d_{\mathrm{in}}$, the coefficients of the basis functions form a matrix $\mathbf{C}$ of size $d_{\mathrm{in}} \times (G + k)$. After combining $\mathbf{W}$ and $\mathbf{C}$, we obtain a large matrix $\mathbf{W}'$ of size $d_{\mathrm{out}} \times d_{\mathrm{in}}(G + k)$. Therefore, following the method of (Blealtan 2024), we directly use $\mathbf{W}'$ during inference instead of the linear layer weights $\mathbf{W}$ and the coefficients $\mathbf{C}$.
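A plaintext sketch of this lazy combination is shown below (the dimensions and the names W, C, W_comb are illustrative stand-ins for the matrices described above):

```python
import numpy as np

d_in, d_out, nb = 4, 3, 8                      # nb = G + k basis functions per input
W = np.random.randn(d_out, d_in)               # linear layer weights
C = np.random.randn(d_in, nb)                  # B-spline coefficients per input node
# fold the coefficients into the linear layer: W_comb[o, i*nb + m] = W[o, i] * C[i, m]
W_comb = np.einsum('oi,im->oim', W, C).reshape(d_out, d_in * nb)

bases = np.random.randn(d_in, nb)              # B_m(x_i) from the basis approximation
y_naive = W @ (C * bases).sum(axis=1)          # per-node spline(x_i), then the linear layer
y_lazy = W_comb @ bases.reshape(-1)            # one fused matrix-vector product
assert np.allclose(y_naive, y_lazy)
```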
However, due to the repeat packing introduced during the B-spline basis function computation, the result is stored in a copy-major (column-major) layout, with an output vector of length $n(G + k)$. We need to repack the result of the basis function approximation into row-major order to take advantage of SIMD for fast matrix operations. This is essentially a ciphertext reordering problem, requiring multiplication by a permutation matrix $\mathbf{P}$ of size $n(G + k) \times n(G + k)$. The method for generating the permutation matrix is provided in Algorithm 3.
To reduce the multiplicative depth and the number of rotations, lazy combination is again used, as mentioned above. We multiply the combined weight matrix $\mathbf{W}'$ with the permutation matrix $\mathbf{P}$. The final fusion linear layer weight is then given by:
$$\mathbf{W}_{\mathrm{fusion}} = \mathbf{W}' \mathbf{P}. \tag{13}$$
The computation of the fusion linear layer, as well as the linear layer following SiLU, involves matrix multiplication. We optimize this using the baby-step giant-step (BSGS) strategy (Halevi and Shoup 2021), reducing the number of ciphertext rotations required for matrix multiplication and lowering the time cost of the linear layer.
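The following plaintext simulation illustrates the diagonal (Halevi-Shoup) matrix-vector product with the BSGS rotation schedule; NumPy rolls stand in for ciphertext rotations, and the baby-step size g is an illustrative choice rather than the paper's setting:

```python
import numpy as np

def rot(v, i):
    """Left rotation by i slots (stand-in for a homomorphic rotation)."""
    return np.roll(v, -i)

def diag(M, i):
    """i-th generalized diagonal: diag_i[j] = M[j, (j + i) % d]."""
    d = M.shape[0]
    return np.array([M[j, (j + i) % d] for j in range(d)])

def bsgs_matvec(M, v, g):
    """Diagonal matrix-vector product using g baby rotations of v and d/g giant
    rotations of partial sums, instead of d rotations for the naive diagonal method."""
    d = M.shape[0]
    assert d % g == 0
    baby = [rot(v, j2) for j2 in range(g)]                   # baby steps
    out = np.zeros(d)
    for j1 in range(d // g):
        inner = np.zeros(d)
        for j2 in range(g):
            # diagonals are plaintext, so pre-rotating them is essentially free
            inner += rot(diag(M, g * j1 + j2), -g * j1) * baby[j2]
        out += rot(inner, g * j1)                            # giant steps
    return out

M = np.random.randn(8, 8); v = np.random.randn(8)
assert np.allclose(bsgs_matvec(M, v, g=4), M @ v)
```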
Input: number of rows $n$, number of columns $G + k$
Output: permutation matrix $\mathbf{P}$
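Below is a plaintext sketch of one plausible reading of Algorithm 3, together with the lazy combination of Equation 13; the copy-major-to-row-major mapping is our interpretation of the layout described above, not necessarily the paper's exact construction:

```python
import numpy as np

def make_permutation(n, nb):
    """Permutation matrix mapping the copy-major layout produced under repeat
    packing (index m*n + i) to row-major order (index i*nb + m)."""
    P = np.zeros((n * nb, n * nb))
    for i in range(n):            # input index
        for m in range(nb):       # basis-function index
            P[i * nb + m, m * n + i] = 1.0
    return P

n, nb = 3, 4
P = make_permutation(n, nb)
v_copy_major = np.random.randn(n * nb)     # basis results as produced by Algorithm 2
W_comb = np.random.randn(2, n * nb)        # combined weights expecting row-major input
W_fusion = W_comb @ P                      # lazy combination in the spirit of Eq. 13
assert np.allclose(W_fusion @ v_copy_major, W_comb @ (P @ v_copy_major))
```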
Experimental Evaluation
Experiment Setup
Environment.
Encryption Parameters.
We set the degree of the polynomial modulus to and used parameters recommended for a 128-bit security level according to (Albrecht et al. 2021). The bit-lengths of the base modulus and special modulus were set to , while the bit-length of the scale factor was set to . The multiplicative depth was set to . For the Bootstrapping parameters, we used the same settings as in (Lee et al. 2022).
Datasets and Metric.
We represent a KAN network structure as an array denoting the number of nodes per layer. The datasets used in the experiments were MNIST (LeCun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky, Hinton et al. 2009), each with its own KAN architecture. Note that when evaluating inference performance, we used models trained in the plaintext domain, so we report inference accuracy to evaluate model performance on the classification tasks. For evaluating the approximation quality and the symbolic formula tasks, we used RMSE, as in (Liu et al. 2024). To minimize errors, our experimental metrics were obtained by averaging the results of 10 trials.
SiLU Approximation
Approximation Interval and Polynomial Degree.
Dataset | Degree | Acc. (%) | RMSE
MNIST | 8 | 96.82 | 0.307
MNIST | 9 | 95.87 | 0.183
MNIST | 10 | 97.17 | 0.166
MNIST | 11 | 96.40 | 0.153
FMNIST | 8 | 89.26 | 0.158
FMNIST | 9 | 89.32 | 0.110
FMNIST | 10 | 89.35 | 0.093
FMNIST | 11 | 89.29 | 0.085
CIFAR-10 | 13 | 57.08 | 0.053
CIFAR-10 | 14 | 57.21 | 0.046
CIFAR-10 | 15 | 57.38 | 0.042
CIFAR-10 | 16 | 57.03 | 0.025
Since the input range of the activation function varies across different datasets (Lee et al. 2023), different tasks require different approximation ranges. According to Equation 11, we estimated the optimal approximation interval from the estimated mean and standard deviation of the activation inputs, yielding a dataset-specific approximation range for each of MNIST, Fashion-MNIST, and CIFAR-10. Choosing an appropriate polynomial degree is also crucial for different tasks. Table 1 shows the impact of different polynomial degrees on accuracy across datasets. For better model performance, we select a polynomial degree of 10 for the MNIST and Fashion-MNIST datasets, and a degree of 15 for the CIFAR-10 dataset.
Comparison of Different Approximation Methods.
Method | Metric | MNIST | FMNIST | CIFAR-10
Remez | RMSE | 0.087 | 0.046 | 0.016
Remez | Acc. (%) | 96.76 | 89.16 | 57.17
OLS | RMSE | 0.071 | 0.035 | 0.013
OLS | Acc. (%) | 97.01 | 89.27 | 57.25
Ours | RMSE | 0.166 | 0.093 | 0.041
Ours | Acc. (%) | 97.17 | 89.35 | 57.38
To evaluate the effectiveness of our approach, we compare it with the Remez approximation method used by Zimerman et al. (2024) and the ordinary least squares (OLS) method used by Zheng et al. (2022) in current privacy-preserving inference works. For a fair comparison, all methods approximate SiLU with the same polynomial degree and over the same range for each dataset. Table 2 presents the results, showing that our method achieves the highest model accuracy across all datasets. For example, on the MNIST dataset, our method achieves an accuracy of 97.17%, which is 0.41% and 0.16% higher than the Remez and least squares methods, respectively. Although our method yields a higher global RMSE over the full approximation range, it prioritizes fitting densely distributed data regions while downplaying sparse edge regions, and therefore achieves a better approximation where the data is concentrated. As shown in Figure 4, our method results in a polynomial that more closely approximates SiLU in the region where the input values are concentrated.

B-spline Basis Approximation
(n, G, k) | Naive (s) | Ours (s) | Speedup (×)
(64, 3, 2) | 54.009 | 32.691 | 1.65
(128, 5, 3) | 71.760 | 33.527 | 2.14
(256, 5, 3) | 109.712 | 33.788 | 3.25
(256, 10, 3) | 187.691 | 33.852 | 5.54
(256, 10, 5) | 212.572 | 42.241 | 5.03
Task | Dataset | Baseline (plaintext KAN) | MLPs | Ours
Image (Acc., %) | MNIST | 97.53 | 97.20 | 97.27
Image (Acc., %) | FMNIST | 89.56 | 87.10 | 89.35
Image (Acc., %) | CIFAR-10 | 57.62 | 54.67 | 57.37
Formula (RMSE) | Toy | 0.00158 | 0.14072 | 0.00157
Formula (RMSE) | lpmv0 | 0.00460 | 0.01404 | 0.00461
Formula (RMSE) | lpmv1 | 0.03478 | 0.07378 | 0.03489
Compared to the naive method without lazy combination optimization, our proposed B-spline basis approximation method offers a significant efficiency improvement. Table 3 demonstrates that our method improves computational efficiency several-fold compared to the naive method. For example, with $n = 256$ and $G = 10$, our method takes at most 42.241 seconds, more than 5× faster than the naive method. This improvement arises because the naive method requires a permutation operation after the B-spline basis approximation to rearrange the data. This permutation in the HE domain introduces additional costs, including HE ciphertext multiplications and HE rotations, leading to significant latency overhead. As the input dimension $n$, B-spline grid size $G$, and B-spline degree $k$ increase, the speedup of our method improves.
Additionally, the results in Table 3 show that, as the parameters $n$ and $G$ increase, the HE inference latency remains relatively stable with our method. This is because our method is designed to take full advantage of the SIMD capability of CKKS for parallel processing. Therefore, as long as the single-ciphertext packing condition is satisfied, increases in $n$ and $G$ do not significantly affect the HE inference latency. However, as the degree $k$ increases, we need to iteratively compute higher-order B-spline basis functions, increasing the HE inference latency due to the added multiplicative depth. In practical applications, the performance of the B-spline basis approximation can be optimized by carefully selecting appropriate values for $n$, $G$, and $k$.
Comparison of Model Performance
To evaluate the practical performance of our proposed privacy-preserving KAN inference scheme, we conducted experiments on various datasets. We compared against plaintext KAN as a baseline, as well as against MLPs (Rumelhart, Hinton, and Williams 1986), which are a fundamental component of current HE-based privacy-preserving inference methods (Benaissa et al. 2021; Lee et al. 2022; Ran et al. 2023). In addition to the image classification tasks, we also conducted experiments on symbolic formulas selected from (Liu et al. 2024): the Toy Formula dataset represents a high-dimensional symbolic function, while the lpmv0 and lpmv1 datasets consist of special functions.
The experimental results show that the accuracy of our privacy-preserving KAN inference is very close to that of the plaintext domain and exceeds that of MLPs, as shown in Table 4. On the image benchmark datasets, the accuracy decrease of our scheme is no more than 0.26%, and it outperforms MLPs. On the symbolic formula datasets, the RMSE deviation from plaintext is no more than $1.1 \times 10^{-4}$, with a lower RMSE than MLPs, confirming the effectiveness of our HE-based KAN inference scheme.

Inference Latency
We evaluated the inference latency for a single input across different datasets in a single-threaded environment, comparing our method with the naive implementation, as shown in the figure. For the small Toy-formula dataset, our method achieves an inference latency of 324 seconds, while the naive method has a latency of 468 seconds, resulting in an approximately 1.4× speedup. On the larger CIFAR-10 dataset, our inference latency is 380 seconds compared to 2789 seconds for the naive method, yielding an over 7× speedup. Additionally, since servers often handle data from multiple users, the amortized processing time for multiple inputs is also important. Due to page limitations, we provide details on the amortized inference latency in a multi-threaded environment and the time distribution of the various computational components in the Appendix.
Conclusion
We have proposed an efficient scheme for privacy-preserving KAN inference using HE. We designed a novel task-specific activation function approximation method to closely approximate the SiLU activation function. This approximation enhances the performance of HE networks on real-world tasks without requiring retraining of the network model. Additionally, we introduced an efficient method for computing B-spline functions in the HE domain. By leveraging techniques such as repeat packing, lazy combination, and comparison functions, we achieved high-precision B-spline function computations. Building on these two approximation methods, we developed an HE-based KAN inference scheme. Our approach delivers about a 7× speedup compared to the naive implementation in the HE domain, while maintaining a minimal accuracy loss of only 0.25% compared to plaintext KANs on the CIFAR-10 dataset. Given the efficiency constraints of HE, we plan in future work to explore more effective slot utilization strategies and more compact cryptographic parameters to further enhance efficiency.
References
- Albrecht et al. (2021) Albrecht, M.; Chase, M.; Chen, H.; Ding, J.; Goldwasser, S.; Gorbunov, S.; Halevi, S.; Hoffstein, J.; Laine, K.; Lauter, K.; et al. 2021. Homomorphic encryption standard. Protecting privacy through homomorphic encryption, 31–62.
- Ao and Boddeti (2023) Ao, W.; and Boddeti, V. N. 2023. Autofhe: Automated adaption of cnns for efficient evaluation over fhe. arXiv preprint arXiv:2310.08012.
- Benaissa et al. (2021) Benaissa, A.; Retiat, B.; Cebere, B.; and Belfedhal, A. E. 2021. Tenseal: A library for encrypted tensor operations using homomorphic encryption. arXiv preprint arXiv:2104.03152.
- Blealtan (2024) Blealtan. 2024. An Efficient Implementation of Kolmogorov-Arnold Network. https://github.com/Blealtan/efficient-kan.
- Brutzkus, Gilad-Bachrach, and Elisha (2019) Brutzkus, A.; Gilad-Bachrach, R.; and Elisha, O. 2019. Low latency privacy preserving inference. In International Conference on Machine Learning, 812–821. PMLR.
- Cheon et al. (2019) Cheon, J. H.; Han, K.; Kim, A.; Kim, M.; and Song, Y. 2019. A full RNS variant of approximate homomorphic encryption. In Selected Areas in Cryptography–SAC 2018: 25th International Conference, Calgary, AB, Canada, August 15–17, 2018, Revised Selected Papers 25, 347–368. Springer.
- Cheon et al. (2017) Cheon, J. H.; Kim, A.; Kim, M.; and Song, Y. 2017. Homomorphic encryption for arithmetic of approximate numbers. In Advances in Cryptology–ASIACRYPT 2017: 23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, December 3-7, 2017, Proceedings, Part I 23, 409–437. Springer.
- Chillotti et al. (2020) Chillotti, I.; Gama, N.; Georgieva, M.; and Izabachène, M. 2020. TFHE: fast fully homomorphic encryption over the torus. Journal of Cryptology, 33(1): 34–91.
- Chou et al. (2018) Chou, E.; Beal, J.; Levy, D.; Yeung, S.; Haque, A.; and Fei-Fei, L. 2018. Faster cryptonets: Leveraging sparsity for real-world encrypted inference. arXiv preprint arXiv:1811.09953.
- Genet and Inzirillo (2024) Genet, R.; and Inzirillo, H. 2024. A Temporal Kolmogorov-Arnold Transformer for Time Series Forecasting. arXiv:2406.02486.
- Gilad-Bachrach et al. (2016) Gilad-Bachrach, R.; Dowlin, N.; Laine, K.; Lauter, K.; Naehrig, M.; and Wernsing, J. 2016. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In International conference on machine learning, 201–210. PMLR.
- Halevi and Shoup (2021) Halevi, S.; and Shoup, V. 2021. Bootstrapping for helib. Journal of Cryptology, 34(1): 7.
- Hao et al. (2022) Hao, M.; Li, H.; Chen, H.; Xing, P.; Xu, G.; and Zhang, T. 2022. Iron: Private inference on transformers. Advances in neural information processing systems, 35: 15718–15731.
- Jegorova et al. (2022) Jegorova, M.; Kaul, C.; Mayor, C.; O’Neil, A. Q.; Weir, A.; Murray-Smith, R.; and Tsaftaris, S. A. 2022. Survey: Leakage and privacy at inference time. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7): 9090–9108.
- Juvekar, Vaikuntanathan, and Chandrakasan (2018) Juvekar, C.; Vaikuntanathan, V.; and Chandrakasan, A. 2018. GAZELLE: A low latency framework for secure neural network inference. In 27th USENIX security symposium (USENIX security 18), 1651–1669.
- Knott et al. (2021) Knott, B.; Venkataraman, S.; Hannun, A.; Sengupta, S.; Ibrahim, M.; and van der Maaten, L. 2021. Crypten: Secure multi-party computation meets machine learning. Advances in Neural Information Processing Systems, 34: 4961–4973.
- Knott (1999) Knott, G. D. 1999. Interpolating cubic splines, volume 18. Springer Science & Business Media.
- Krizhevsky, Hinton et al. (2009) Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
- LeCun et al. (1998) LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278–2324.
- Lee et al. (2022) Lee, E.; Lee, J.-W.; Lee, J.; Kim, Y.-S.; Kim, Y.; No, J.-S.; and Choi, W. 2022. Low-complexity deep convolutional neural networks on fully homomorphic encryption using multiplexed parallel convolutions. In International Conference on Machine Learning, 12403–12422. PMLR.
- Lee et al. (2021) Lee, E.; Lee, J.-W.; No, J.-S.; and Kim, Y.-S. 2021. Minimax approximation of sign function by composite polynomial for homomorphic comparison. IEEE Transactions on Dependable and Secure Computing, 19(6): 3711–3727.
- Lee et al. (2023) Lee, S.; Lee, G.; Kim, J. W.; Shin, J.; and Lee, M.-K. 2023. HETAL: efficient privacy-preserving transfer learning with homomorphic encryption. In International Conference on Machine Learning, 19010–19035. PMLR.
- Li et al. (2024) Li, C.; Liu, X.; Li, W.; Wang, C.; Liu, H.; and Yuan, Y. 2024. U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation. arXiv:2406.02918.
- Liu et al. (2024) Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Hou, T. Y.; and Tegmark, M. 2024. Kan: Kolmogorov-arnold networks. arXiv preprint arXiv:2404.19756.
- Marcolla et al. (2022) Marcolla, C.; Sucasas, V.; Manzano, M.; Bassoli, R.; Fitzek, F. H.; and Aaraj, N. 2022. Survey on fully homomorphic encryption, theory, and applications. Proceedings of the IEEE, 110(10): 1572–1609.
- Pang et al. (2024) Pang, Q.; Zhu, J.; Möllering, H.; Zheng, W.; and Schneider, T. 2024. BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers. In 2024 IEEE Symposium on Security and Privacy (SP), 130–130. IEEE Computer Society.
- Patra et al. (2021) Patra, A.; Schneider, T.; Suresh, A.; and Yalame, H. 2021. ABY2. 0: Improved Mixed-Protocol secure Two-Party computation. In 30th USENIX Security Symposium (USENIX Security 21), 2165–2182.
- Peng et al. (2024) Peng, H.; Ran, R.; Luo, Y.; Zhao, J.; Huang, S.; Thorat, K.; Geng, T.; Wang, C.; Xu, X.; Wen, W.; et al. 2024. Lingcn: Structural linearized graph convolutional network for homomorphically encrypted inference. Advances in Neural Information Processing Systems, 36.
- Ran et al. (2023) Ran, R.; Luo, X.; Wang, W.; Liu, T.; Quan, G.; Xu, X.; Ding, C.; and Wen, W. 2023. SpENCNN: orchestrating encoding and sparsity for fast homomorphically encrypted neural network inference. In International Conference on Machine Learning, 28718–28728. PMLR.
- Ran et al. (2022) Ran, R.; Wang, W.; Gang, Q.; Yin, J.; Xu, N.; and Wen, W. 2022. CryptoGCN: Fast and scalable homomorphically encrypted graph convolutional network inference. Advances in Neural information processing systems, 35: 37676–37689.
- Ran et al. (2024) Ran, R.; Xu, N.; Liu, T.; Wang, W.; Quan, G.; and Wen, W. 2024. Penguin: parallel-packed homomorphic encryption for fast graph convolutional network inference. Advances in Neural Information Processing Systems, 36.
- Rumelhart, Hinton, and Williams (1986) Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning representations by back-propagating errors. nature, 323(6088): 533–536.
- SEAL (2020) SEAL. 2020. Microsoft SEAL (release 3.6). https://github.com/Microsoft/SEAL. Microsoft Research, Redmond, WA.
- Srinivasan, Akshayaram, and Ada (2019) Srinivasan, W. Z.; Akshayaram, P.; and Ada, P. R. 2019. DELPHI: A cryptographic inference service for neural networks. In Proc. 29th USENIX secur. symp, volume 3.
- Xiao, Rasul, and Vollgraf (2017) Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
- Xu et al. (2024) Xu, J.; Chen, Z.; Li, J.; Yang, S.; Wang, W.; Hu, X.; and Ngai, E. C. H. 2024. FourierKAN-GCF: Fourier Kolmogorov-Arnold Network – An Effective and Efficient Feature Transformation for Graph Collaborative Filtering. arXiv:2406.01034.
- Zhang and Zhang (2024) Zhang, F.; and Zhang, X. 2024. GraphKAN: Enhancing Feature Extraction with Graph Kolmogorov Arnold Networks. arXiv:2406.13597.
- Zheng et al. (2022) Zheng, P.; Cai, Z.; Zeng, H.; and Huang, J. 2022. Keyword spotting in the homomorphic encrypted domain using deep complex-valued CNN. In Proceedings of the 30th ACM international conference on multimedia, 1474–1483.
- Zimerman et al. (2024) Zimerman, I.; Baruch, M.; Drucker, N.; Ezov, G.; Soceanu, O.; and Wolf, L. 2024. Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption. In Forty-first International Conference on Machine Learning.