An inexact golden ratio primal-dual algorithm with linesearch step for a saddle point problem
Abstract In this paper, we propose an inexact golden ratio primal-dual algorithm with linesearch step (IP-GRPDAL) for solving saddle point problems, in which the two subproblems can be solved approximately by applying the notions of inexact extended proximal operators with matrix norm. The proposed IP-GRPDAL method allows for larger stepsizes by replacing the extrapolation step with a convex combination step. Each iteration of the linesearch requires updating only the dual variable, and hence is quite cheap. In addition, we prove convergence of the proposed algorithm and establish an ergodic convergence rate, where represents the number of iterations. When one of the component functions is strongly convex, accelerated convergence rates are established by choosing some algorithmic parameters adaptively. Furthermore, when both component functions are strongly convex, linear convergence rates are achieved. Numerical results on sparse recovery and image deblurring problems illustrate the feasibility and efficiency of our inexact algorithms.
Keywords Convex optimization · Inexact extended proximal operators · Golden ratio primal-dual algorithm · Linesearch · Image deblurring
1 Introduction
Let and be two finite-dimensional Euclidean spaces equipped with a standard inner product and norm . Let and be proper lower semicontinuous (l.s.c.) convex functions, and let be a bounded linear mapping. Denote the Legendre-Fenchel conjugate of and the adjoint of by and , respectively. We consider the primal problem
(1.1) |
together with its dual problem
(1.2) |
If a primal-dual solution pair of (1.1) and (1.2) exists, i.e.,
then the problem (1.1) is equivalent to the following saddle-point formulation:
(1.3) |
It is well known that many application problems can be formulated as the saddle point problem (1.3), such as image restoration, magnetic resonance imaging and computer vision; see, for example, [28, 30, 35]. Among the most popular approaches are the primal-dual algorithm (PDA) [9, 20], the alternating direction method of multipliers (ADMM) [5, 19], and their accelerated and generalized variants [24, 25, 26]. To solve model (1.3), the following first-order primal-dual algorithm (PDA) [9] has attracted much attention:
(1.4) |
where and are stepsize parameters and is an extrapolation parameter; for , the convergence of PDA was proved under the stepsize requirement . Generally, with and fixed, a flexible extrapolation parameter is beneficial for a potential acceleration of (1.4), which has motivated researchers to enlarge the range of ; see, e.g., [18, 32]. Indeed, the scheme (1.4) reduces to the classic Arrow-Hurwicz method [4] when . However, the convergence of the Arrow-Hurwicz method can only be guaranteed under the restrictive assumption that and are sufficiently small. To overcome this difficulty, Chang et al. [11] proposed a golden ratio primal-dual algorithm (GRPDA) for solving (1.3), based on the seminal convex combination technique introduced by Malitsky in [25], which, starting from , iterates as
(1.5) |
where determines the convex combination coefficients. In [11], iterative convergence and ergodic convergence rate results are established under the condition . Since , this stepsize condition is less restrictive than that of the PDA method (1.4); see also [12]. Further, Chang et al. incorporated a linesearch strategy into the GRPDA method, in which the next iterate is computed implicitly, since the stepsize parameter is determined by an inequality that involves it; see Step 2 of Algorithm 3.1 in [13].
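To make the convex combination step concrete, the following MATLAB snippet sketches a GRPDA-type iteration in the spirit of (1.5) and [11, 25], applied to a small LASSO-type instance. The exact update order, variable roles and stepsize rule of (1.5) are assumptions of this sketch and not a restatement of the scheme in [11].

```matlab
% Minimal self-contained sketch of a GRPDA-type iteration (cf. (1.5) and [11, 25]),
% applied, for illustration only, to min_x mu*||x||_1 + 0.5*||A*x - b||^2.
% Variable roles, update order and stepsize choices below are assumptions.
rng(0);
A = randn(50, 100); b = randn(50, 1); mu = 0.1;
phi = (1 + sqrt(5)) / 2;                    % golden ratio
tau = 0.9 * sqrt(phi) / norm(A);            % assumed stepsizes with tau*sigma*||A||^2 < phi
sigma = tau;
x = zeros(100, 1); y = zeros(50, 1); z = x;
for k = 1:500
    z = ((phi - 1) * x + z) / phi;                 % convex combination replaces extrapolation
    w = z - tau * (A' * y);
    x = sign(w) .* max(abs(w) - tau * mu, 0);      % prox of tau*mu*||.||_1
    v = y + sigma * (A * x);
    y = (v - sigma * b) / (1 + sigma);             % prox of sigma*g* with g = 0.5*||. - b||^2
end
```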
However, when the proximal operators of and are not easy to compute, primal-dual algorithms may not perform well in terms of computing time and efficiency; see, e.g., the examples in [8, 15] and the numerical experiments in [17]. Moreover, in many practical applications one encounters the case that at least one of the proximal operators does not possess a closed-form solution, so that its evaluation requires an inner iterative algorithm. In this situation, many researchers have been devoted to solving the subproblems approximately instead of finding their accurate solutions; see, for example, [21, 24, 27, 33, 34]. An absolute error criterion was adopted in [14], where the subproblem errors are controlled by a summable sequence of error tolerances. Jiang et al. [22, 23] studied two inexact primal-dual algorithms with absolute and relative error criteria, respectively, where for the inexact primal-dual method with a relative error criterion only a convergence rate was established. In [29], Rasch and Chambolle proposed inexact first-order primal-dual methods by applying the concept of inexact proxima, where all the controlled errors are required to be summable. Further, Fang et al. [16] proposed an inexact primal-dual method with a correction step by introducing the notions of inexact extended proximal operators with matrix norm, and established an ergodic convergence rate. However, accelerated versions of the method in [16], under the assumption that or is strongly convex, were not considered.
In this paper, we are concerned with the following saddle point problem:
(1.6) |
Recall that is called a saddle point of (1.6) if it satisfies the inequalities
(1.7) |
Motivated by the research works [13, 16], in this paper, we propose an inexact golden ratio primal-dual algorithm with linesearch step for solving problem (1.6) by applying the type-2 approximation of the extended proximal point introduced in [16]. The main contributions of this paper are summarized as follows.
• For the case where the proximal operators of and are not easy to compute, we propose the inexact IP-GRPDAL method with extended proximal terms containing a symmetric positive definite matrix. Both subproblems in our method can be solved approximately under the type-2 approximation criteria. Global convergence and an ergodic convergence rate are established under the condition , where denotes the iteration counter. Furthermore, we establish convergence rates for the case where the error tolerances and decrease like for some .
• Our method updates the dual variable by a linesearch step, which allows adaptive and potentially much larger stepsizes and effectively reduces the computational effort per iteration. In addition, in contrast to [13], the next iterate is computed explicitly; see (3.3) in Algorithm 1.
• We propose accelerated versions of the IP-GRPDAL method, which were not provided in [16]. When one of the underlying functions is strongly convex, accelerated convergence rates are established by adaptively choosing some algorithmic parameters; for example, is replaced by ; see (3.32) of Algorithm 2. In addition, linear convergence rates are established when both and are strongly convex.
• We perform numerical experiments on sparse recovery and image deblurring problems, demonstrating that our method outperforms several existing methods [9, 13, 16, 26].
The rest of this paper is organized as follows. In Section 2, we introduce the concepts of inexact extended proximal terms and present some auxiliary material. In Section 3, the main algorithm and its accelerated versions are presented. At the same time, we also prove the convergence of our algorithms and analyze their convergence rates. Numerical experiment results are reported in Section 4. Some conclusions are presented in Section 5.
2 Preliminaries
For given and , we define and for any and . Let be a generic saddle point. When there is no confusion, we will omit the subscript in and ,
(2.1) |
By the subgradient inequality, it is clear that and . Note that the functions and are convex in and , respectively. The primal-dual gap is defined as for . It is easy to verify that
(2.2) |
The system (2.1) can be reformulated as
Suppose that is a convex function in , and that is a symmetric positive definite matrix. For any and given , denote
(2.3) |
and define the proximal operator of as
(2.4) |
where and denotes the inverse of . The first-order optimality condition for the proximum gives the following characterizations of the proximal operator:
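As a concrete illustration of the extended proximal operator (2.4), the following MATLAB snippet evaluates it in a case where a closed form is available; the particular choice of the function and of a diagonal weight matrix is an assumption made only for illustration.

```matlab
% Hedged illustration of the extended proximal operator (2.4) in a case with a
% closed form: for g(x) = mu*||x||_1 and a diagonal weight matrix M = diag(d),
% the minimizer of g(x) + 0.5*||x - z||_M^2 is componentwise soft-thresholding
% with thresholds mu./d. The specific g and M here are illustrative assumptions.
d  = [1; 2; 4];                 % diagonal of the symmetric positive definite matrix M
z  = [0.3; -1.5; 0.05];
mu = 0.2;
x  = sign(z) .* max(abs(z) - mu ./ d, 0);   % closed-form extended prox for this g and M
% The result can be checked against the defining objective of (2.3)-(2.4):
obj = @(v) mu * norm(v, 1) + 0.5 * (v - z)' * diag(d) * (v - z);
```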
Below, we recall definitions of three different types of inexact extended proximal operators with matrix norm, which can be found in [16].
Definition 2.1
Let . is said to be a type-0 approximation of the extended proximal point with precision if
(2.5) |
Definition 2.2
Let . is said to be a type-1 approximation of the extended proximal point with precision if
(2.6) |
where
Definition 2.3
Let . is said to be a type-2 approximation of the extended proximal point with precision if
(2.7) |
where
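In practice, when the extended proximal point admits no closed form, it is computed by an inner iterative solver that is stopped once a prescribed accuracy is reached. The following MATLAB sketch illustrates such an inner proximal-gradient loop; its stopping test (a small fixed-point residual) is a practical surrogate and not the type-2 criterion (2.7) itself, and the choices of the function, the matrix and the tolerance are assumptions.

```matlab
% Hedged sketch of an inner proximal-gradient loop for approximating the extended
% proximal point when no closed form exists. The stopping test is a surrogate for
% an inexactness criterion, NOT the type-2 criterion (2.7); g, M and all parameter
% values below are assumptions for illustration.
rng(2);
n = 50;  R = randn(n);  M = R' * R + eye(n);   % a non-diagonal symmetric positive definite M
z = randn(n, 1);  mu = 0.1;  eps_k = 1e-8;     % prox center, weight of g = mu*||.||_1, tolerance
t = 1 / norm(M);                               % stepsize from the Lipschitz constant of the smooth part
x = z;
for it = 1:1000
    grad  = M * (x - z);                        % gradient of 0.5*||x - z||_M^2
    w     = x - t * grad;
    x_new = sign(w) .* max(abs(w) - t * mu, 0); % prox of t*mu*||.||_1
    if norm(x_new - x) <= eps_k
        x = x_new;  break;                      % surrogate inexactness test satisfied
    end
    x = x_new;
end
```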
We give two simple but useful lemmas in the following.
Lemma 2.4
For any and a symmetric positive definite matrix , we have the identity
(2.8) |
For , there holds
(2.9) |
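For context, identities of this kind are typically the M-weighted three-point identity and the convex combination identity; the precise statements of (2.8) and (2.9) are assumed here to be of the following standard form:

```latex
2\langle M(a-b),\, a-c\rangle = \|a-b\|_M^2 + \|a-c\|_M^2 - \|b-c\|_M^2,
\qquad
\|\lambda a + (1-\lambda)b\|_M^2 = \lambda\|a\|_M^2 + (1-\lambda)\|b\|_M^2 - \lambda(1-\lambda)\|a-b\|_M^2,
\quad \lambda \in [0,1].
```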
Lemma 2.5
[31] Assuming that and are the minimum and maximum eigenvalues of the symmetric positive definite matrix , respectively, we have
(2.10) |
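For reference, the bound in (2.10) is presumably the standard norm equivalence induced by a symmetric positive definite matrix M:

```latex
\lambda_{\min}(M)\,\|x\|^2 \;\le\; \|x\|_M^2 = \langle Mx,\, x\rangle \;\le\; \lambda_{\max}(M)\,\|x\|^2
\qquad \text{for all } x.
```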
3 Algorithm and convergence properties
In this section, we propose an inexact GRPDA algorithm and then show the convergence of the proposed method. If is further assumed to be strongly convex, we can modify our method to accelerate the convergence rate. Moreover, if both and are strongly convex, a linear convergence rate can be achieved.
3.1 Convex case
(3.1) |
(3.2) |
(3.3) |
(3.4) |
First, we summarize several useful lemmas which will be used in the sequel.
Lemma 3.1
(i) The linesearch in Algorithm 1 always terminates.
(ii) There exists such that for all .
(iii) For any integer , we have for some constant , where and is the cardinality of the set , which implies with .
Proof. (i) In each iteration of the linesearch, is multiplied by the factor . Since (3.4) is fulfilled whenever , where , the inner loop cannot run indefinitely.
(ii) We argue by induction and assume that . Our goal is to show that implies . Suppose that for some . If , then . If , then does not satisfy (3.4). Thus, , and hence .
(iii) The proof follows a similar approach to that of Lemma 3.1 (iii) in [13] and is thus omitted.
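The following MATLAB sketch illustrates the structure of the backtracking loop analyzed above: in each trial only the dual variable is recomputed, and the trial stepsize is shrunk by a fixed factor until an acceptance test holds. Since (3.4) is not reproduced here, a Malitsky-Pock style test is used only as a stand-in, and the operator, the prox of the conjugate function and all parameter values are assumptions.

```matlab
% Schematic, self-contained sketch of the backtracking step on the dual variable in
% Algorithm 1. The acceptance test below is a stand-in for the elided inequality (3.4);
% all names and parameter values are assumptions for illustration.
rng(1);
A = randn(30, 60);  xbar = randn(60, 1);  y = zeros(30, 1);
tau = 1;  sigma = 1;  rho = 0.7;  delta = 0.99;
proxfconj = @(v, s) v ./ max(1, abs(v));        % e.g. projection onto the unit box (f = ||.||_1)
while true
    y_new = proxfconj(y + sigma * (A * xbar), sigma);   % only the dual variable is recomputed
    if sqrt(tau * sigma) * norm(A' * (y_new - y)) <= delta * norm(y_new - y)
        break;                                   % trial stepsize accepted
    end
    sigma = rho * sigma;                         % otherwise shrink sigma and try again
end
```

The loop must terminate, since the test is satisfied as soon as the trial stepsize falls below a fixed threshold depending on the operator norm, which mirrors the argument in part (i) above.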
Lemma 3.2
Suppose that are the minimum eigenvalues of and , respectively. Let and be the sequence generated by Algorithm 1. Then, for any , there holds
(3.5) |
Proof. By Definition 2.3, the optimality condition of (3.2) yields
In view of the definition of -subdifferential, we have
(3.6) |
Setting and in (3.6), respectively, we obtain
(3.7) |
(3.8) |
In view of (3.7), we have
(3.9) |
Multiplying (3.8) by and using the fact , we obtain
(3.10) |
Similarly, for , from (3.3) we obtain
(3.11) |
Direct calculations show that a summation of (3.9), (3.10) and (3.11) gives
Applying the definitions of and in (2.1) to the above inequality, we have
which, by the definition of , implies (3.5) immediately.
Lemma 3.3
Let be the sequence generated by Algorithm 1. For , it holds that
(3.12) |
Proof. First, it is easy to verify from and that
(3.13) |
Since and are the minimum eigenvalues of and , respectively, it follows from (2.10) that
Using Cauchy-Schwarz inequality, from (3.4) we obtain
(3.14) |
Applying (2.8) and Cauchy-Schwarz inequality, from (3.5) we have
(3.15) |
In view of (3.1), we get and . Thus, from (2.9) we deduce
(3.16) |
Combining (3.14) and (3.16) with (3.15), we obtain
(3.17) |
In the following, we summarize the convergence result for Algorithm 1.
Theorem 3.4
Proof. Since and , (3.12) yields
(3.18) |
By summing over , we obtain
(3.19) |
Since the sequences and are summable, and is bounded, is bounded. From this we deduce that is bounded. Thus, and are bounded sequences. Hence, from (3.1) we know that is also bounded.
Summing up (3.12) from to , we get
Since , . Letting in the above inequality and applying the equivalence of and , where denotes the symmetric positive definite matrix, we get .
Similarly, we can deduce that . Thus, has at least one limit point and hence there exists a subsequence such that as . Similar to (3.7) and (3.11), for any , there hold
(3.20) |
and
(3.21) |
In view of Lemma 3.1 (ii), we have . Letting in (3.20) and (3.21), respectively, and taking into account that and as , we obtain
(3.22) |
This shows that is a saddle point of (1.6).
We now establish the convergence rates of Algorithm 1.
Theorem 3.5
Proof. Since , it follows from (3.12) that
By taking summation over , we obtain
(3.25) |
Since and are convex,
where . Combining (3.23) with (3.25), we obtain
(3.26) |
Lemma 3.6
( [29]) For , let . Then
Similar to Corollary 3.4 in [29], we have the following theorem.
Theorem 3.7
If and , , we have
(3.27) |
Proof. If , then the sequences and are summable. Since the sequence is bounded, from (3.24) we have
If , then and . In view of the assumption on , for some , we have
Thus,
Hence, from the boundedness of we know that . Similarly, we can also obtain the same result. Therefore, we get
If , then , and from Lemma 3.6 we obtain and . Thus, we have
3.2 Partially strongly convex case
Now, we consider the acceleration of Algorithm 1 assuming additionally that is -strongly convex, i.e., it holds for some that
(3.28) |
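For reference, the inequality (3.28) is assumed to be the standard definition of strong convexity with modulus γ of a function h, namely

```latex
h(y) \;\ge\; h(x) + \langle p,\, y - x\rangle + \frac{\gamma}{2}\|y - x\|^2
\qquad \text{for all } x, y \text{ and all } p \in \partial h(x).
```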
When is strongly convex, the corresponding results can be obtained similarly and are thus omitted. The accelerated version of Algorithm 1 is summarized in Algorithm 2.
(3.29) |
(3.30) |
(3.31) |
(3.32) |
(3.33) |
(3.34) |
Similar to Lemma 3.1, we have the following results.
Lemma 3.8
(i) The linesearch in Algorithm 2 always terminates.
(ii) There exists a constant such that .
Proof. (i) This conclusion follows from Lemma 3.1 (i) by replacing with and setting .
(ii) The proof is similar to Lemma 4.1 (ii) in [13] and is thus omitted.
Next we will establish the convergence rate of Algorithm 2.
Theorem 3.9
Proof. Since is strongly convex, it follows from (3.30) and Definition 2.3 that
(3.37) |
Since is the maximum eigenvalue of , from (2.10) we get . Similar to (3.7)-(3.10), we obtain
(3.38) |
(3.39) |
From (3.33), for we have
(3.40) |
Then, by adding (3.38), (3.39) and (3.40) and using similar arguments as in Lemma 3.2, we deduce
(3.41) |
Using (2.8) and Cauchy-Schwarz inequality, from (3.41) we get
(3.42) |
Combining (3.16) with (3.42), we have
(3.43) |
Thus, it follows from (2.10), (3.34) and Cauchy-Schwarz inequality that
(3.44) |
Since and , we have . Therefore, substituting (3.44) into (3.43), we obtain
(3.45) |
Note that . Thus, it follows from and that
(3.46) |
Since ,
(3.47) |
Substituting (3.47) into (3.45) and multiplying the resulting inequality by , we have
(3.48) |
Define , and hence (3.48) yields
By summing over in the above inequality, we obtain
(3.49) |
In view of the convexity of , from (3.49), we have
(3.50) |
According to the definition of and (3.49), we get
Thus,
(3.51) |
From Lemma 3.8(ii), we know that there exists such that for all , and hence (3.51) implies with
Since and ,
(3.52) |
Since and , we deduce that for some constant . This completes the proof.
3.3 Completely strongly convex case
We further assume that is -strongly convex and is -strongly convex. In this setting, linear convergence can be achieved by properly selecting the parameters , and ; the resulting scheme is given in Algorithm 3.
(3.53) |
(3.54) |
Now, we summarize the linear convergence rate of Algorithm 3 in the following theorem.
Theorem 3.10
Proof. Similar to the proof of (3.41) in Theorem 3.9, one can deduce that
(3.56) |
Applying the same arguments as in (3.42)-(3.45), we have
(3.57) |
Since , from (3.47) we get
Thus, the inequality (3.57) implies that
(3.58) |
Multiplying (3.58) by and summing the resulting inequality from to , we get
(3.59) |
Since and are convex, from (3.55) we deduce
Therefore, from (3.59) we have
(3.60) |
Note that , with , and
Thus,
Since and ,
This completes the proof.
4 Numerical experiments
In this section, we apply the proposed Algorithm 1 to solve sparse recovery and image deblurring problems. We compare the IP-GRPDAL method with Algorithm 1 in [9] (denoted by PDA), Algorithm 1 in [16] (denoted by IPDA), the PDAL method in [26], and the GRPDAL method in [13]. All codes were written in MATLAB R2015a on a PC (with an Intel i5-5200U CPU). For simplicity, we set , , and in Algorithm 1.
4.1 regularized analysis sparse recovery problem
We study the following problem:
(4.1) |
where , , , and is a regularization parameter. Let and . Then we can rewrite (4.1) as
(4.2) |
We use the proximal gradient method to solve the subproblems in these algorithms, and the parameters are set as follows:
(i) ;
(ii) is a random vector, where random coordinates are drawn from the uniform distribution in and the rest are zeros;
(iii) The observed value is generated by , and the regularization parameter is set to ;
(iv) We set for the PDA method. For the PDAL, GRPDAL and IP-GRPDAL methods, we set and . At the same time, we set and for the IP-GRPDAL method to be the same as those for the IPDA method. The initial points are and .
In this experiment, we terminate all the algorithms when , where an approximation of the optimal value is obtained by running the algorithms for 5000 iterations.
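For concreteness, the following MATLAB sketch shows one way to generate a synthetic instance consistent with the description in (i)-(iii) above; the noise model, the regularization rule and the concrete problem form are assumptions of this sketch, as are all names below.

```matlab
% Hedged sketch of generating a synthetic instance for the experiment in (i)-(iii).
% The noise level, the rule for mu and the problem form min_x 0.5*||A*x - b||^2 + mu*||x||_1
% are assumptions made only for illustration.
rng(3);
m = 100;  n = 100;  s = 10;                 % one of the size triples reported in Table 1
A = randn(m, n);
x_true = zeros(n, 1);
idx = randperm(n, s);
x_true(idx) = 2 * rand(s, 1) - 1;           % s random coordinates uniform in [-1, 1], rest zero
b  = A * x_true + 0.01 * randn(m, 1);       % observations with small Gaussian noise (assumed)
mu = 0.1 * norm(A' * b, inf);               % regularization parameter (assumed rule)
% Soft-thresholding prox used for the l1 term inside all compared algorithms:
soft = @(w, t) sign(w) .* max(abs(w) - t, 0);
```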
In order to investigate the stability and efficiency of our method, we test three groups of problems with different pairs and run the tests 10 times for each group. The average numerical performance, including the CPU time (Time, in seconds), the number of iterations (Iter) and, for the PDAL, GRPDAL and IP-GRPDAL methods, the number of extra linesearch trial steps (LS), is reported in Table 1.
|  |  |  | PDA Time | PDA Iter | IPDA Time | IPDA Iter | PDAL Time | PDAL Iter | PDAL LS |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 100 | 10 | 62.23 | 58842 | 13.68 | 51229 | 29.27 | 28316 | 27918 |
| 500 | 800 | 50 | 138.89 |  | 60.64 | 92863 | 79.56 | 51500 | 50276 |
| 1000 | 2000 | 100 | 427.67 |  | 391.08 | 184676 | 206.37 | 108251 | 106938 |

|  |  |  | GRPDAL Time | GRPDAL Iter | GRPDAL LS | IP-GRPDAL Time | IP-GRPDAL Iter | IP-GRPDAL LS |
|---|---|---|---|---|---|---|---|---|
| 100 | 100 | 10 | 18.75 | 19390 | 5976 | 9.49 | 18292 | 4647 |
| 500 | 800 | 50 | 58.23 | 42407 | 13980 | 53.01 | 38594 | 12260 |
| 1000 | 2000 | 100 | 175.49 | 72619 |  | 113.30 | 58627 |  |
Figure 1: Function value residuals versus CPU time for the three problem sizes.
From the results presented in Table 1, we observe that our IP-GRPDAL method requires less CPU time and fewer iterations than the other methods. In Figure 1, the ordinate denotes the function value residual while the abscissa denotes the CPU time; it can be seen that the IP-GRPDAL method is much faster than the other methods.
4.2 TV- image deblurring problem
In this subsection, we study the numerical solution of the TV- model (4.1) in [16] for image deblurring
(4.3) |
where is a given (noisy) image, is a known linear (blurring) operator, denotes the gradient operator and is a regularization parameter. We introduce the variables which satisfy . Then, (4.3) can be written as
Further, the above formula can be rewritten as
where , , and . We adopt the following inequality as the stopping criterion of the inner loop:
where and is defined as (4.11) in [16].
In the following, we report the numerical results. In this test, an average blur with hsize = 9 was applied to the original image cameraman.png (256 × 256) via fspecial('average', 9), and salt-and-pepper noise was added (see Figure 2). At the same time, we adopt the following stopping rule:
where is a solution of the TV- model (4.3).
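The following MATLAB sketch illustrates the degradation model and the discrete gradient operator typically used for this experiment; the noise density, boundary handling and the built-in test image used here are assumptions, while the blur follows the fspecial('average', 9) setting above.

```matlab
% Hedged sketch of the degradation model and a forward-difference gradient for the
% TV-L1 experiment. The noise density and boundary handling are assumptions; the
% blur matches the fspecial('average', 9) setting described in the text.
u = im2double(imread('cameraman.tif'));       % built-in test image (the paper uses a 256x256 PNG)
H = fspecial('average', 9);                   % 9 x 9 average blur, as in the experiment
f = imfilter(u, H, 'symmetric');              % blurred image
f = imnoise(f, 'salt & pepper', 0.05);        % salt-and-pepper noise (density assumed)
% Forward-difference gradient components, as typically used to discretize TV:
Dx = @(v) [diff(v, 1, 2), zeros(size(v, 1), 1)];   % horizontal differences
Dy = @(v) [diff(v, 1, 1); zeros(1, size(v, 2))];   % vertical differences
```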
Figure 2: The original cameraman image and its blurred observation corrupted by salt-and-pepper noise.
We fix the number of iterations as and the penalty coefficient . When the five algorithms are implemented, their respective parameters are chosen as follows:
PDA: ;
IPDA: ;
PDAL: ;
GRPDAL: ;
IP-GRPDAL: .
Figure 3: Restored images obtained by the PDA, IPDA, PDAL, GRPDAL and IP-GRPDAL methods.
The restored images produced by the above algorithms are displayed in Figure 3. Our IP-GRPDAL method achieves better restoration quality than the PDA, PDAL, IPDA and GRPDAL methods. In our experiments, we find that if we increase the number of iterations to or more, all algorithms restore the image with almost the same quality, but our algorithm needs fewer iterations than the other ones.
5 Conclusions
In this paper, we propose an inexact golden ratio primal-dual algorithm with linesearch for the saddle point problem by applying the inexact extended proximal terms with matrix norm introduced in [16]. Under the assumption that , we show the convergence of Algorithm 1, provided that both controlled error sequences and are summable. The convergence rate in the ergodic sense is also established in the general convex case. When one of the component functions is strongly convex, an accelerated version of Algorithm 1 is proposed, which achieves an ergodic convergence rate. Furthermore, linear convergence results are established when both component functions are strongly convex. We also apply our method to solve sparse recovery and TV- image deblurring problems and verify its efficiency numerically. It remains an interesting open problem to establish a nonergodic convergence rate for the IP-GRPDAL method with linesearch.
References
- 4. Arrow, K.J., Hurwicz, L., Uzawa, H.: Studies in linear and non-linear programming. Stanford Mathematical Studies in the Social Sciences, II. Stanford University Press, Stanford, Calif. With contributions by Chenery, H.B., Johnson, S.M., Karlin, S., Marschak, T., Solow, R.M (1958)
- 5. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trend in Machine Learning 3(1), 1-122(2011)
- 6. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences 2(1), 183-202(2009)
- 7. Condat, L.: A primal dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications 158(2), 460-479(2013)
- 8. Cai, J. F., Candès, E. J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 20(4), 1956-1982(2010)
- 9. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40(1), 120-145(2011)
- 10. Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numerica 25(1), 161-319(2016)
- 11. Chang, X., Yang, J.: A golden ratio primal-dual algorithm for structured convex optimization. Journal of Scientific Computing 87(1), 1-26(2021)
- 12. Chang, X., Yang, J.: GRPDA revisited: relaxed condition and connection to Chambolle-Pock’s primal-dual algorithm. Journal of Scientific Computing 93(3), 1-18(2022)
- 13. Chang, X., Yang, J., Zhang, H.: Golden ratio primal-dual algorithm with linesearch. SIAM Journal on Optimization 32(3), 1584-1613(2022)
- 14. Eckstein, J., Bertsekas, D. P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 55(1), 293-318(1992)
- 15. Ehrhardt, M. J., Betcke, M. M.: Multicontrast MRI reconstruction with structure-guided total variation. SIAM Journal on Imaging Sciences 9(3), 1084-1106(2016)
- 16. Fang, C., Hu, L., Chen, S.: An inexact primal-dual method with correction step for a saddle point problem in image deblurring. Journal of Global Optimization 87(2), 965-988(2023)
- 17. Han, D., He, H., Yang, H., Yuan, X.: A customized Douglas-Rachford splitting algorithm for separable convex minimization with linear constraints. Numerische Mathematik 127(1), 167-200(2014)
- 18. He, H., Desai, J., Wang, K.: A primal-dual prediction-correction algorithm for saddle point optimization. Journal of Global Optimization 66(1), 573-583(2016)
- 19. He, B., Tao, M., Yuan, X.: Alternating direction method with Gaussian back substitution for separable convex programming. SIAM Journal on Optimization 22(2), 313-340(2012)
- 20. He, B., Yuan, X.: Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM Journal on Imaging Sciences 5(1), 119-149(2012)
- 21. Hien, L. T. K., Zhao, R., Haskell, W. B.: An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems. Journal of Optimization Theory and Applications 200(1), 34-67(2024)
- 22. Jiang, F., Cai, X., Wu, Z., Han, D.: Approximate first-order primal-dual algorithms for saddle point problems. Mathematics of Computation 90(329), 1227-1262(2021)
- 23. Jiang, F., Wu, Z., Cai, X., Zhang, H.: A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems. Numerical Algorithms 1(1), 1-28(2021)
- 24. Liu, Y., Xu, Y., Yin, W.: Acceleration of primal-dual methods by preconditioning and simple subproblem procedures. Journal of Scientific Computing 86(2), 21-46(2021)
- 25. Malitsky, Y.: Golden ratio algorithms for variational inequalities. Mathematical Programming 184(1), 383-410(2020)
- 26. Malitsky, Y., Pock, T.: A first-order primal-dual algorithm with linesearch. SIAM Journal on Optimization 28(1), 411-432(2018)
- 27. Ng, M. K., Wang, F., Yuan, X.: Inexact alternating direction methods for image recovery. SIAM Journal on Scientific Computing 33(4), 1643-1668(2011)
- 28. Pock, T., Cremers, D., Bischof, H., Chambolle, A.: An algorithm for minimizing the Mumford-Shah functional. In: IEEE International Conference on Computer Vision, IEEE, 1133-1140(2009)
- 29. Rasch, J., Chambolle, A.: Inexact first-order primal-dual algorithms. Computational Optimization and Applications 76(2), 381-430(2020)
- 30. Rudin, L. I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1-4), 259-268(1992)
- 31. Shores, T. S.: Applied Linear Algebra and Matrix Analysis. Springer, New York (2007)
- 32. Wang, K., He, H.: A double extrapolation primal-dual algorithm for saddle point problems. Journal of Scientific Computing 85(2), 1-30(2020)
- 33. Wu, Z., Song, Y., Jiang, F.: Inexact generalized ADMM with relative error criteria for linearly constrained convex optimization problems. Optimization Letters 18(2), 447-470(2024)
- 34. Xie, J.: On inexact ADMMs with relative error criteria. Computational Optimization and Applications 71(3), 743-765(2018)
- 35. Zhu, M., Chan, T.: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. Ucla Cam Report 34(2), 8-34(2008)