Exploring the use of deep learning in task-flexible ILC*
Abstract
Growing demands in today’s industry result in increasingly stringent performance and throughput specifications. For accurate positioning of high-precision motion systems, feedforward control plays a crucial role. Nonetheless, conventional model-based feedforward approaches are no longer sufficient to satisfy the challenging performance requirements. An attractive method for systems with repetitive motion tasks is iterative learning control (ILC) due to its superior performance. However, for systems with non-repetitive motion tasks, ILC is generally not applicable, despite some recent promising advances. In this paper, we aim to explore the use of deep learning to address the task flexibility constraint of ILC. For this purpose, a novel Task Analogy based Imitation Learning (TAIL)-ILC approach is developed. To benchmark the performance of the proposed approach, a simulation study is presented which compares the TAIL-ILC to classical model-based feedforward strategies and existing learning-based approaches, such as neural network based feedforward learning.
I Introduction
High-precision positioning systems are essential components in modern manufacturing machines and scientific equipment, see [1, 2, 3, 4]. To ensure high-throughput and high-accuracy position tracking, a two-degree-of-freedom controller structure, consisting of a feedback controller and a feedforward controller, is commonly utilized, see [5, 6, 7]. The feedback controller maintains closed-loop stability and disturbance rejection, while the feedforward controller is primarily responsible for achieving optimal position tracking performance, see [8]. Nonetheless, with the increasingly stringent demands in contemporary industry, conventional model-based feedforward techniques, e.g. [9], are no longer adequate to meet the desired performance specifications, thus necessitating alternative feedforward approaches.
Iterative Learning Control (ILC), see [10], has emerged as a viable choice for feedforward control in motion systems that execute recurring tasks, enabling accurate position tracking. Despite its advantages, ILC exhibits significant limitations. Primarily, ILC is dependent on the assumption that the tracking error recurs from one iteration to the next, limiting its general applicability. Additionally, conventional ILC performance is constrained to a single task, see [11].
Several studies have attempted to address the task flexibility limitations of ILC by drawing on concepts from machine learning and system identification, as reported in the literature [12, 13, 14]. However, the findings from the related literature suggest that there exists a trade-off between the achievable position tracking performance and the degree of deviation from the core principle of ILC, i.e., direct iterative manipulation of signals. Instead of compromising local ILC performance to enhance task flexibility, the aim is to develop a learning-based feedforward strategy that can deliver superior position tracking performance regardless of the severity of the variation of the compensatory signal across tasks. Such an ILC variant can be imagined to make use of imitation learning in order to mimic the behaviour of conventional ILC policies generalized over multiple trajectories.
This paper introduces a novel approach to ILC, termed Task Analogy based Imitation Learning (TAIL)-ILC, from a data science perspective. By acquiring spatial feature analogies of the trajectories and their corresponding control signals, performance of conventional ILC policies can be replicated. To facilitate efficient network training, abstract lower-dimensional representations of signals are utilized. This approach offers numerous benefits in terms of training and prediction time efficiency, utilization of large datasets, and high sampling rate handling. The resulting feedforward controller comprises an encoding policy, a learning policy, and a decoding policy arranged in a cascade interconnection. Dual principal component analysis (DPCA), a standard linear dimensionality reduction technique, is utilized for the integration of the encoding and decoding policies, while a deep neural network is employed for the learning policy.
The main contributions of this paper are:
- (C1) A novel TAIL-ILC approach that tackles the task extension problem of ILC via learning spatial feature analogies of trajectories and their compensation signals, enabling direct imitation of ILC policies.
- (C2) An efficient implementation strategy for the learning-based feedforward controller, constructed through the cascade interconnection of an encoder, a deep neural network, and a decoder.
This paper is organized as follows. First, the problem formulation is presented in Section II. Next, Section III presents the proposed novel TAIL-ILC approach which aims at generalizing ILC performance across various tasks through imitation learning strategies. Section IV provides a simulation study of the proposed approach with respect to existing feedforward strategies using a high-fidelity model of a moving-magnet planar actuator. In Section V, a detailed comparison between the proposed TAIL-ILC approach and neural-network-based feedforward strategies is presented. Finally, conclusions on the proposed approach are presented in Section VI.
II Problem statement
II-A Background
Consider the conventional frequency domain ILC configuration illustrated by Figure 1, where $P(z)$ corresponds to the proper transfer matrix representation of a discrete time (DT) linear-time-invariant (LTI) multiple-input multiple-output (MIMO) plant, with $\mathbb{R}(z)$ denoting the set of real rational functions in the complex variable $z$. Furthermore, the proper $C(z)$ represents an LTI stabilizing DT feedback controller, which is typically constructed using rigid-body decoupling strategies, see [15]. The aim of the conventional frequency domain ILC framework is to construct an optimal feedforward policy $f_j$, with $j$ the trial index, which minimizes the position tracking error $e_j$ in the presence of the motion trajectory $r$. Under the assumption that the reference trajectory is trial invariant, the error propagation per trial is given by:
$e_j = S r - S P f_j$  (1)
where $S = (I + PC)^{-1}$ denotes the sensitivity function and $SP$ the process sensitivity. Generally, the update law for the feedforward policy is in accordance with the procedure outlined in [16]:
$f_{j+1} = Q (f_j + L e_j)$  (2)
where $L$ is a learning filter and $Q$ denotes a robustness filter, both belonging to the set of real rational functions in $z$ that have bounded singular value on the unit circle $|z| = 1$, i.e., finite $\mathcal{H}_\infty$ norm. Both $L$ and $Q$ are required to be designed for the ILC task at hand. Furthermore, by combining (1) and (2), the progression of the error and feedforward update is reformulated as:
$e_{j+1} = S r - S P Q (f_j + L e_j)$  (3a)
$f_{j+1} = Q f_j + Q L e_j$  (3b)
which can be reduced to:
$e_{j+1} = (I - Q) S r + Q (I - S P L) e_j$  (4a)
$f_{j+1} = Q (I - L S P) f_j + Q L S r$  (4b)
under the assumption that $Q$ is diagonal and $SP$ is approximately diagonal, which holds in case of rigid-body decoupled systems.
From (4), several observations can be made. First, it can be observed that the contribution of the reference $r$ to the position tracking error is dependent on the robustness filter $Q$, which is optimally chosen as identity to negate the contribution of the reference trajectory towards the tracking error. Secondly, the learning filter $L$ aims to minimize the criterion $\| I - S P L \|_\infty$, where $\| \cdot \|_\infty$ stands for the $\mathcal{H}_\infty$ norm, such that the tracking error is steered to zero, which is optimally achieved when $L = (SP)^{-1}$. Note that these assumptions on $Q$ and $L$ yield the optimal feedforward update $f_{j+1} = f_j + (SP)^{-1} e_j$, which results in perfect position tracking. Moreover, when the convergence criterion $\| Q (I - L S P) \|_\infty < 1$ is satisfied, the limit policies, i.e. $e_\infty = \lim_{j \to \infty} e_j$ and $f_\infty = \lim_{j \to \infty} f_j$, correspond to:
$e_\infty = (I - Q (I - S P L))^{-1} (I - Q) S r$  (5a)
$f_\infty = (I - Q (I - L S P))^{-1} Q L S r$  (5b)
In spite of its simplicity and efficacy, the conventional ILC is hindered by significant limitations, the most notable of which is its confinement to a single task. Consequently, its practical utility is restricted to particular types of machinery.
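To make the trial-domain recursion concrete, the following sketch iterates the update law $f_{j+1} = Q(f_j + L e_j)$ with scalar gains standing in for the filters; the values of `J`, `S`, `L` and `Q` below are illustrative assumptions, not the actual filter designs of the paper.

```python
import numpy as np

# Toy scalar stand-in for the frequency-domain ILC iteration:
# all transfer functions are replaced by static gains (an assumption
# made purely for illustration).
J = 0.8      # process sensitivity gain SP (assumed)
S = 1.0      # sensitivity gain (assumed)
L = 1.0 / J  # learning filter as the model inverse, so L * J = 1
Q = 1.0      # robustness filter chosen as identity

r = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))  # trial-invariant reference
f = np.zeros_like(r)                            # feedforward at trial 0

errors = []
for trial in range(5):
    e = S * r - J * f       # error propagation, cf. (1)
    f = Q * (f + L * e)     # ILC update law, cf. (2)
    errors.append(np.linalg.norm(e))

# With Q identity and L the exact model inverse, the error vanishes
# after a single trial.
```

With model mismatch ($L \neq (SP)^{-1}$) and $|Q(1 - LJ)| < 1$, the same loop instead converges geometrically, mirroring the convergence criterion discussed above.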
[Figure 1: Conventional frequency domain ILC configuration.]
II-B Problem formulation
The aim of this paper is to address the challenge of augmenting the task-flexibility of the conventional ILC by utilizing an imitation learning based controller. This approach facilitates the generalization of the optimal feedforward policy, created by the conventional ILC, for a wider range of motion profiles. The primary objective of this paper is to devise a feedforward controller that employs a learning-based mechanism, which satisfies the following requirements:
- (R1) The learning-based feedforward approach enables the generalization of the performance of the conventional ILC across multiple trajectories.
- (R2) The learning-based feedforward approach is scalable, allowing its implementation in systems with a high sampling rate.
III TAIL-ILC
III-A Approach
For a given dynamic system with a proper discrete transfer function $P(z)$ under a sampling time $T_s$, a reference trajectory $r$ of duration $T$ seconds can be defined as
$r = [\, r(0) \;\; r(T_s) \;\; \cdots \;\; r((N-1) T_s) \,]^\top$  (6)
where $N = T / T_s$ corresponds to the length of the signal in DT. This reference trajectory can, for example, correspond to a fourth-order motion profile. A trajectory class $\mathcal{R}$ is defined as a collection of reference trajectories such that each trajectory shares certain prominent spatial features (motion profile order, constant velocity interval length, etc.) with the others, where $K$ is the number of trajectories:
$\mathcal{R} = \{ r_1, r_2, \ldots, r_K \}$  (7)
Given a specific combination of the $L$ and $Q$ filters, consider that an ILC policy $\pi^*$ exists which maps a given reference trajectory $r$ to the optimal feedforward compensation signal $f_\infty$, see (5). This can be formally expressed as:
$f_\infty = \pi^*(r)$  (8)
Henceforth, $\pi^*$ shall be denoted as the expert policy, which is equipped with learning and robustness filters established through a process model. Our objective is to formulate an optimal student policy $\pi_S^*$ that approximates the performance of the expert policy $\pi^*$ over a set of trajectories from the pertinent trajectory class. To this end, we endeavor to determine $\pi_S^*$ as a solution to the optimization problem:
$\pi_S^* = \arg \min_{\pi_S \in \Pi} \sum_{r \in \mathcal{R}} d(\pi^*(r), \pi_S(r))$  (9)
[Figure 2: Structure of the TAIL-ILC student policy.]
where $d(\cdot, \cdot)$ is a performance quantification measure and $\Pi$ is the set of parameterized student policy candidates $\pi_S$. The expert policy is a conventionally designed frequency domain ILC as described in Section II-A. In TAIL-ILC, the idea is to structure $\pi_S$ as:
$\pi_S = \pi_D \circ \pi_N \circ \pi_E$  (10)
which is visualised in Figure 2. The TAIL-ILC controller is capable of generating a feedforward control signal based on a given reference trajectory. This process is carried out through a series of three sub-policies outlined in equation (10). The first sub-policy, $\pi_E$, projects the reference trajectory into a lower-dimensional space referred to as the latent space. Next, the second sub-policy, $\pi_N$, predicts a latent space representation of the feedforward signal, which is then fed into the third sub-policy, $\pi_D$, to project the latent space feedforward signal back into the higher-dimensional output space, resulting in the prediction $\hat{f}$. Notably, the successful application of TAIL-ILC requires that all reference trajectories share certain spatial features with each other. The prediction sub-policy, $\pi_N$, is trained on a set of reference trajectories and their corresponding feedforward control signals obtained using $\pi^*$, which are projected into the latent space. The use of abstract representations preserves the most significant information of the signals while reducing the amount of data used for making predictions, resulting in several advantages, such as increased training and prediction time efficiency. The subsequent sub-section will delve into the development of each sub-policy in further detail.
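The cascade structure in (10) can be sketched as follows, with placeholder sub-policies: the orthonormal basis `U` and the latent map inside `predict` are illustrative stand-ins for the DPCA projections and the trained deep network.

```python
import numpy as np

# Sketch of the TAIL-ILC student policy: encode -> predict -> decode.
rng = np.random.default_rng(0)

n, m = 1000, 8  # signal length and latent dimensionality, m << n
# Orthonormal basis for the latent subspace (assumed here; DPCA supplies
# this in the actual approach).
U = np.linalg.qr(rng.standard_normal((n, m)))[0]

def encode(r):        # pi_E: project trajectory into the latent space
    return U.T @ r

def predict(z_r):     # pi_N: latent feedforward prediction
    return 0.5 * z_r  # placeholder for the trained deep network

def decode(z_f):      # pi_D: lift the latent prediction back to signal space
    return U @ z_f

def student_policy(r):  # pi_S = pi_D o pi_N o pi_E, cf. (10)
    return decode(predict(encode(r)))

r = U @ rng.standard_normal(m)   # a trajectory lying in the latent subspace
f_hat = student_policy(r)        # predicted feedforward signal
```

Only the small latent vector passes through the learned map, which is the source of the training and prediction time advantages claimed for this structure.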
III-B Student policy
The student policy can be decomposed into three distinct components:
$\pi_E: \mathbb{R}^N \to \mathbb{R}^m$  (11a)
$\pi_N: \mathbb{R}^m \to \mathbb{R}^m$  (11b)
$\pi_D: \mathbb{R}^m \to \mathbb{R}^N$  (11c)
where $m$ is the latent space dimensionality such that $m \ll N$. As mentioned in Section III-A, the training data for the sub-policy $\pi_N$, namely the pairs $(\pi_E(r), \pi_E(f_\infty))$, are in the latent space. This shows that the ideal outputs of $\pi_N$ are of the form:

$z_f = \pi_E(f_\infty)$  (12)

where an approximation error may exist between $\pi_D(z_f)$ and $f_\infty$. Additionally, we aim at:

$\pi_N(\pi_E(r)) \approx z_f$  (13)

where, in case of using a deep neural network, $\hat{z}_f = \pi_N(\pi_E(r))$ is the output of the network and the prediction error $\epsilon$ is defined as:

$\epsilon = \| \hat{z}_f - z_f \|_2$  (14)

where $\| \cdot \|_2$ denotes the Euclidean norm. Moreover, this implies that (12) becomes:

$\hat{z}_f = z_f + \epsilon_v, \quad \| \epsilon_v \|_2 = \epsilon$  (15)
In order to quantify the gap between the performance of $\pi_S$ and that of $\pi^*$, a distance measure $d$ is used as the performance quantification measure in (9). This is expressed as:
$d(\pi^*(r), \pi_S(r)) = \| \pi^*(r) - \pi_S(r) \|_2$  (16)
Assuming that $\theta$ represents the set of weights and biases of the deep neural network, improving the performance of $\pi_S$ can be posed as the following optimization problem:
$\theta^* = \arg \min_{\theta} \sum_{r \in \mathcal{R}} \| \pi^*(r) - \pi_S(r; \theta) \|_2$  (17)
The proposed approach involves propagating the parameter $\theta$ through the three sub-policies, with the aim of iteratively optimizing both $\pi_N$ and the encoding and decoding sub-policies via (17). However, given the significant computational burden associated with this approach, there is a need for a more straightforward alternative or a reformulation of the problem. With this goal in mind, we introduce the concepts of the Expert space and Student space to provide alternative perspectives for addressing the optimization problem at hand.
Definition 1

The expert space is defined as the space of all real policies, denoted by the superscript $e$, having the form $\pi^e: \mathbb{R}^N \to \mathbb{R}^N$.

Example:

1. Expert policy in expert space:

$\pi^{*e}(r) = f_\infty$  (18)

where $\pi^{*e} = \pi^*$.

2. Student policy in expert space:

$\pi_S^e(r) = \hat{f}$  (19)

where $\pi_S^e = \pi_D \circ \pi_N \circ \pi_E$.
Definition 2

The student space is defined as the space of all real policies, denoted by the superscript $s$, having the form $\pi^s: \mathbb{R}^N \to \mathbb{R}^m$.

Example:

1. Expert policy in student space:

$\pi^{*s}(r) = z_f = \pi_E(f_\infty)$  (20)

where $\pi^{*s} = \pi_E \circ \pi^*$.

2. Student policy in student space:

$\pi_S^s(r) = \hat{z}_f$  (21)

where $\pi_S^s = \pi_N \circ \pi_E$.
Table I summarizes these definitions.
Stated differently, the expert space is comprised of all the policies which, like the decoding policy $\pi_D$, project signals into $N$ dimensions, while the student space is composed of all the policies which, like the encoding policy $\pi_E$, project signals into $m$ dimensions.
| | Expert space | Student space |
|---|---|---|
| Expert policy | $\pi^{*e} = \pi^*$ | $\pi^{*s} = \pi_E \circ \pi^*$ |
| Student policy | $\pi_S^e = \pi_D \circ \pi_N \circ \pi_E$ | $\pi_S^s = \pi_N \circ \pi_E$ |
Based on the preceding definitions, it is worth noting that our primary objective is to determine the student policy in the expert space, $\pi_S^e$. In light of these definitions, the distance metric specified in (16) can be reformulated as:

$d(\pi^*(r), \pi_S(r)) \leq \| \pi^*(r) - \pi_D(\pi^{*s}(r)) \|_2 + \| \pi_D(\pi^{*s}(r)) - \pi_D(\pi_S^s(r)) \|_2$  (22)

where the first term corresponds to the optimization of the encoding and decoding sub-policies $(\pi_E, \pi_D)$ and the second term corresponds to the optimization of $\pi_N$. This separation of the distance measure (16) allows the optimization problem in (17) to be segmented accordingly: the reconstruction term is fixed once $(\pi_E, \pi_D)$ are chosen, and only the latent prediction term depends on $\theta$. This segregation allows us to optimize $\pi_N$ independently of $(\pi_E, \pi_D)$, thus simplifying the optimization problem defined by (17).
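This segregated optimization can be illustrated as follows: with the encoder and decoder fixed, the prediction sub-policy is fit on latent pairs only. A linear least-squares map is used here as a stand-in for the deep network, and the "expert" signals are synthetic ($f = 2r$), purely for illustration.

```python
import numpy as np

# Fit the prediction sub-policy on latent pairs, with the encoder/decoder
# held fixed (linear least squares stands in for the deep network).
rng = np.random.default_rng(1)

n, m, K = 500, 6, 40   # signal length, latent dim, number of trajectories
U = np.linalg.qr(rng.standard_normal((n, m)))[0]  # assumed shared basis

R = U @ rng.standard_normal((m, K))  # reference trajectories (columns)
F = 2.0 * R                          # synthetic "expert" feedforward signals

Z_r = U.T @ R                        # encoded trajectories, m x K
Z_f = U.T @ F                        # encoded feedforward signals, m x K

# Least-squares fit of the latent map: Z_f ~ W @ Z_r
W = Z_f @ np.linalg.pinv(Z_r)

F_hat = U @ (W @ (U.T @ R))          # full student policy on the train set
train_err = np.linalg.norm(F_hat - F) / np.linalg.norm(F)
```

Because the latent dataset has only $K$ columns of dimension $m$, this fit is far cheaper than regressing on all $N$ samples of every trajectory, which is the efficiency argument behind (R2).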
III-C Choice of encoding and decoding sub-policies
The encoding and decoding sub-policies in this work employ DPCA, a well-established linear dimensionality reduction technique, due to its computational simplicity. Other commonly-used linear and non-linear dimensionality reduction methods are also available and have been reviewed in [17]. DPCA involves the identification of a linear subspace with $m$ dimensions in an $N$-dimensional space, where $m$ is significantly smaller than $N$. This subspace is defined by a set of orthonormal bases that maximize the variance of the original data when projected onto this subspace. The orthonormal bases computed through this process are commonly referred to as principal components.
Definition 3
A data point in an arbitrary dataset $X = [x_1 \;\; x_2 \;\; \cdots \;\; x_K] \in \mathbb{R}^{N \times K}$ is defined as a vector $x_i \in \mathbb{R}^N$.
The selection of the principal components for an $m$-dimensional latent space for the data points in $X$ involves choosing the right singular vectors $V_m$ that correspond to the first $m$ singular values of $X$. It should be emphasized that the projection $z \in \mathbb{R}^m$ of a data point $x$ onto the latent space can be computed through the following method:
$z = P_E x$  (23)

where:

$P_E = \Sigma_m^{-1} V_m^\top X^\top$  (24)
In this context, $V_m$ denotes the matrix of the first $m$ right singular vectors of $X$ and $\Sigma_m$ contains the first $m$ singular values of $X$ along its diagonal elements. It is worth noting that the value of $m$ is constrained by the number of data points $K$ in $X$. This feature of DPCA is particularly advantageous in situations where $K \ll N$. Given the latent space representation $z$, a reconstructed data point $\hat{x}$ can be obtained as:
$\hat{x} = P_D z$  (25)

where:

$P_D = X V_m \Sigma_m^{-1}$  (26)
Remark 1
In the light of Remark 1, for a given dataset $X$, the encoding ($\pi_E$) and decoding ($\pi_D$) sub-policies for use in the student policy can be defined as follows:
$\pi_E(x) = P_E x$  (27a)
$\pi_D(z) = P_D z$  (27b)
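A numerical sketch of the DPCA-style encoding and decoding on synthetic low-rank data follows; projecting with the leading left singular vectors $U_m$ is algebraically equivalent to the dual form built from the right singular vectors, and the dataset here is an assumption for illustration.

```python
import numpy as np

# DPCA-style encode/decode via the economy SVD, in the regime where the
# number of data points K is much smaller than the ambient dimension d.
rng = np.random.default_rng(2)

d, K, m = 2000, 30, 5
# Synthetic dataset whose columns lie in an m-dimensional subspace.
X = rng.standard_normal((d, m)) @ rng.standard_normal((m, K))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(s) V^T
Um = U[:, :m]   # leading principal directions (equal to X V_m S_m^{-1})

def encode(x):             # latent projection, cf. (23)-(24)
    return Um.T @ x

def decode(z):             # reconstruction, cf. (25)-(26)
    return Um @ z

x = X[:, 0]
x_hat = decode(encode(x))  # exact here, since x lies in the rank-m subspace
```

For real trajectory data the columns of $X$ are only approximately low-rank, so `decode(encode(x))` incurs the reconstruction error discussed around (22).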
IV Simulation study
This section presents a simulation study comparing the TAIL-ILC approach with classical ILC, an artificial neural network (ANN) based ILC, referred to as NN-ILC, see [14], and conventional rigid body feedforward, see [7, 18], which is obtained by multiplying the acceleration profile $\ddot{r}$ with the inverted rigid body dynamics of the system:
$f_{\mathrm{rb}} = M \ddot{r}$  (28)

where $M$ denotes the rigid-body mass matrix of the system.
To facilitate simulation, a high-fidelity model of a moving-magnet planar actuator (MMPA), depicted in Figure 3, is considered. A detailed description of an MMPA system is given in [19].
[Figure 3: The moving-magnet planar actuator (MMPA).]
Table II provides a concise overview of the network architecture and training specifics for the sub-policy $\pi_N$ in TAIL-ILC and the learning policy in NN-ILC, respectively. For the sake of comparability, the training parameters are kept consistent between the two networks. The networks are designed and trained using the Deep Learning Toolbox in MATLAB 2019b, employing the default random parameter initialization.
The training set consists of 618 trajectories, while the test set includes 42 trajectories, each of which is 2.5 seconds long with a total of 20833 time samples. Each trajectory corresponds to a fourth-order motion profile, designed based on the approach presented in [20], and is parameterized with five parameters in the spatial domain. Individual trajectories are then generated by sweeping over a grid of values for each of these parameters. The objective of this study is to evaluate and compare the performance of the previously mentioned feedforward approaches against the expert ILC policy $\pi^*$, which is the traditional ILC optimized for multiple trajectories of the same class. The primary aim of ILC in this context is to mitigate any unaccounted-for residual dynamics in the system and enhance classical model-based feedforward. Consequently, we also compare the combined performance of the student policies with classical feedforward controllers. We demonstrate the tracking ability of TAIL-ILC and NN-ILC on two reference trajectories, namely $r_1$ and $r_2$, which belong to the same class and are shown in Figure 4. $r_1$ is a randomly chosen trajectory from the training set, while $r_2$ is a previously unseen trajectory.
| Parameter | TAIL-ILC | NN-ILC |
|---|---|---|
| No. of neurons in the input layer | | |
| No. of hidden layers | | |
| No. of neurons in hidden layers | | |
| Activation | ReLU | ReLU |
| No. of neurons in the output layer | | |
| Learning rate | | |
| Epochs | | |
| Optimizer | Adam | Adam |
| Minibatch size | | |
| Train set | 618 trajectories | 618 trajectories |
| Test set | 42 trajectories | 42 trajectories |
[Figure 4: The reference trajectories $r_1$ and $r_2$.]
IV-A Time domain performance of TAIL-ILC and NN-ILC
A silicon wafer scanning application is considered where the scanning takes place during the constant velocity interval of the motion profile, see [1]. In this context, Figure 5 illustrates the position tracking error during the constant velocity interval of the reference trajectories $r_1$ and $r_2$, respectively. In addition to the performance of mass feedforward, TAIL-ILC and NN-ILC, the figure also indicates the performance of the expert ILC policy, to facilitate the comparison of the two deep learning based ILC variants with the baseline. As demonstrated in the left panel, i.e. the performance of the feedforward controllers on $r_1$, the expert ILC policy exhibits the highest overall performance. Nonetheless, it is noteworthy that the TAIL-ILC policy achieves a lower peak tracking error than the alternative feedforward approaches, whereas the NN-ILC policy demonstrates superior performance in terms of the convergence time of the error. However, when analyzing the right panel, i.e. the performance of the feedforward approaches for the previously unseen trajectory $r_2$, the expert ILC policy needs to re-learn the relevant feedforward signal. Conversely, the TAIL-ILC and NN-ILC policies are capable of achieving similar performance to the re-learned expert ILC policy without any further training. Additionally, when combined with a classical mass feedforward controller, both the TAIL-ILC and NN-ILC policies yield superior performance in terms of peak error and settling time compared to the classical mass feedforward controller alone.
[Figure 5: Position tracking error during the constant velocity interval for $r_1$ (left) and $r_2$ (right).]
IV-B TAIL-ILC vs NN-ILC
Table III provides a comparison of the training and prediction properties of the TAIL-ILC and NN-ILC student policies. Here, we compare the following parameters:
1. Time to train the neural network.
2. Time to make predictions for 10 randomly selected test set trajectories.
3. Control signal prediction error averaged over 10 randomly selected train set trajectories.
4. Control signal prediction error averaged over 10 randomly selected test set trajectories.
5. Peak tracking error achieved with the predicted control signals averaged over 10 randomly selected train set trajectories.
| Criterion | NN-ILC | TAIL-ILC |
|---|---|---|
| Training time | (hr) | (min) |
| Prediction time, train set | (sec) | (sec) |
| Prediction time, test set | (sec) | (sec) |
| Avg. prediction error, train set | (N) | (N) |
| Avg. prediction error, test set | (N) | (N) |
| Peak tracking error | (m) | (m) |
Here, the average control signal prediction errors of the train and the test set trajectories are calculated as the values of the performance measure in (22). As can be seen, though the original signals and trajectories are extremely high dimensional, the projection of these signals into the latent space using the proposed TAIL-ILC approach results in a significant improvement in training and prediction time compared to that of the NN-ILC approach.
Moreover, as observed in Figure 5, the average signal prediction error is lower for TAIL-ILC in the case of previously seen trajectories, whereas NN-ILC shows better performance for previously unseen trajectories.
V TAIL-ILC vs NN-ILC PERSPECTIVES
In the previous section, we compared the performance of the TAIL-ILC and NN-ILC controllers for a specific use case. However, it is more natural to view these controllers as individual instances of two fundamentally different perspectives on the problem. Hence, it is important to reflect upon the perspectives that these controllers convey and their consequences for various aspects of the resulting controllers. This provides a more generalised explanation of some of the differences observed in the performance of these two controllers.
V-A Time duration of trajectories
The NN-ILC and TAIL-ILC are two learning-based variants of ILC that differ in their treatment of reference trajectories and feedforward signals. NN-ILC is capable of handling trajectories of different lengths, as it deals with them sample-wise. In contrast, TAIL-ILC processes trajectories and signals in their entirety, making it challenging to manage trajectories of varying durations due to the fixed input-output dimensionality of neural network learning models. Additionally, NN-ILC is better equipped to handle instantaneous changes in reference trajectories compared to TAIL-ILC. A possible way to reconcile these perspectives is to use a different class of learning models, such as a recurrent neural network.
V-B Training and prediction time efficiencies
In NN-ILC, the training dataset used for the learning policy encompasses all the samples from all the trajectories in the training set, along with their associated feedforward signals. Conversely, TAIL-ILC employs a training dataset for $\pi_N$ that solely includes the latent space representations of the trajectories and feedforward signals, resulting in a significantly smaller dataset in comparison to the total number of samples. This characteristic leads to TAIL-ILC presenting shorter training and prediction times when compared to NN-ILC, as demonstrated by the results presented in Table III.
V-C Generalizability to previously unseen trajectories
Figure 5 demonstrates that NN-ILC outperforms TAIL-ILC in terms of generalizing performance to previously unobserved trajectories. The improved performance can be attributed to NN-ILC’s sample-wise treatment of reference trajectories, which allows it to learn a mapping from individual reference samples to the corresponding feedforward signal time samples. As a result, the trained network can more accurately extrapolate its performance to previously unobserved points in the space of possible reference trajectories. In contrast, TAIL-ILC relies primarily on analogies between individual tasks on a higher level, which may result in suboptimal performance when confronted with previously unobserved trajectories at the sample level.
VI CONCLUSION
In this work, we have primarily explored two different perspectives on the use of deep learning to address the task-flexibility constraint of conventional ILC. While each of the considered approaches has its own advantages and disadvantages, it has been observed that the use of deep learning techniques could be a useful direction for future research in designing task-flexible ILC variants.
References
- [1] H. Butler, “Position control in lithographic equipment [applications of control],” IEEE Control Systems Magazine, vol. 31, no. 5, pp. 28–47, 2011.
- [2] N. Tamer and M. Dahleh, “Feedback control of piezoelectric tube scanners,” in Proc. of the 33rd IEEE Conference on Decision and Control, vol. 2, pp. 1826–1831, 1994.
- [3] M. Heertjes, “Data-based motion control of wafer scanners,” IFAC-PapersOnLine, vol. 49, no. 13, pp. 1–12, 2016. 12th IFAC Workshop on ALCOSP 2016.
- [4] X. Ye, Y. Zhang, and Y. Sun, “Robotic pick-place of nanowires for electromechanical characterization,” in Proc. of the 2012 IEEE International Conference on Robotics and Automation, pp. 2755–2760, 2012.
- [5] M. Boerlage, M. Steinbuch, P. Lambrechts, and M. van de Wal, “Model-based feedforward for motion systems,” in Proc. of the 2003 IEEE Conference on Control Applications (CCA), vol. 2, pp. 1158–1163, 2003.
- [6] T. Oomen, “Advanced motion control for precision mechatronics: control, identification, and learning of complex systems,” IEEJ Journal of Industry Applications, vol. 7, pp. 127–140, Jan. 2018.
- [7] M. Steinbuch, R. Merry, M. Boerlage, M. Ronde, and M. van de Molengraft, Advanced Motion Control Design, pp. 27–1/25. CRC Press, 2010.
- [8] M. Heertjes, D. Hennekens, and M. Steinbuch, “Mimo feed-forward design in wafer scanners using a gradient approximation-based algorithm,” Control Engineering Practice, vol. 18, pp. 495–506, 05 2010.
- [9] T. Oomen and M. Steinbuch, “Model-based control for high-tech mechatronic systems,” in Proc. of the Mechatronics and Robotics, pp. 51–80, CRC Press, 2020.
- [10] H.-S. Ahn, Y. Chen, and K. L. Moore, “Iterative learning control: Brief survey and categorization,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 6, pp. 1099–1121, 2007.
- [11] L. Blanken, J. Willems, S. Koekebakker, and T. Oomen, “Design techniques for multivariable ilc: Application to an industrial flatbed printer,” IFAC-PapersOnLine, vol. 49, no. 21, pp. 213–221, 2016. 7th IFAC Symposium on Mechatronic Systems MECHATRONICS 2016.
- [12] J. S. van Hulst, “Rational basis functions to attenuate vibrating flexible modes with compensation of input nonlinearity: Applied to semiconductor wire bonder,” MSc thesis, 2022.
- [13] D. J. Hoelzle, A. G. Alleyne, and A. J. Wagoner Johnson, “Basis task approach to iterative learning control with applications to micro-robotic deposition,” IEEE Transactions on Control Systems Technology, vol. 19, no. 5, pp. 1138–1148, 2011.
- [14] S. Bosma, “The generalization of feedforward control for a periodic motion system,” 2019.
- [15] M. Steinbuch, “Design and control of high tech systems,” in Proc. of the 2013 IEEE ICM, pp. 13–17, 2013.
- [16] L. Blanken, J. van Zundert, R. de Rozario, N. Strijbosch, T. Oomen, C. Novara, and S. Formentin, “Multivariable iterative learning control: analysis and designs for engineering applications,” IET Chapter, pp. 109–143, 2019.
- [17] I. K. Fodor, “A survey of dimension reduction techniques,” 5 2002.
- [18] I. Proimadis, Nanometer-accurate motion control of moving-magnet planar motors. PhD thesis, Department of Electrical Engineering, 2020.
- [19] I. Proimadis, C. H. H. M. Custers, R. Tóth, J. W. Jansen, H. Butler, E. Lomonova, and P. M. J. V. d. Hof, “Active deformation control for a magnetically levitated planar motor mover,” IEEE Transactions on Industry Applications, vol. 58, no. 1, pp. 242–249, 2022.
- [20] P. Lambrechts, M. Boerlage, and M. Steinbuch, “Trajectory planning and feedforward design for electromechanical motion systems,” Control Engineering Practice, vol. 13, no. 2, pp. 145–157, 2005.