
Simultaneous Shape and Mesh Quality Optimization using Pre-Shape Calculus

Daniel Luft Trier University, Department of Mathematics, 54286 Trier, Germany ([email protected])    Volker Schulz Trier University, Department of Mathematics, 54286 Trier, Germany ([email protected])
Abstract

Computational meshes arising from shape optimization routines commonly suffer from a decrease in mesh quality or even destruction of the mesh. In this work, we provide an approach to regularize general shape optimization problems to increase both shape and volume mesh quality. For this, we employ pre-shape calculus as established in [12]. Existence of regularized solutions is guaranteed. Further, consistency of modified pre-shape gradient systems is established. We present pre-shape gradient system modifications which permit simultaneous shape optimization with mesh quality improvement. Optimal shapes of the original problem are left invariant under regularization. The computational burden of our approach is limited, since no additional solution of possibly larger (non-)linear systems is necessary to obtain regularized shape gradients. We implement and compare pre-shape gradient regularization approaches for a hard-to-solve 2D problem. As our approach does not depend on the choice of metrics representing shape gradients, we employ and compare several different metrics.

Key words: Shape Optimization, Mesh Quality, Mesh Deformation Method, Shape Calculus

AMS subject classifications: 49Q10, 65M50, 90C48, 49J27

1 Introduction

Solutions of PDE constrained optimization problems, in particular problems where the desired control variable is a geometric shape, are only meaningful if the state variables of the constraint can be calculated with sufficient reliability. A key component of reliable solutions is the quality of the computational mesh. It is well-known that poor element quality affects the stability, convergence, and accuracy of finite element and other solvers, for instance through poorly conditioned stiffness matrices (cf. [19]).

There are a multitude of strategies for increasing mesh quality, particularly in shape optimization. The authors of [5] enhance mesh morphing routines for shape optimization by trying to correct for the inexactness of Hadamard’s theorem due to discretization of the problem. Correcting degenerate steps requires a restriction of deformation directions based on normal fields of shapes, which leads to a linear system enlarged by additional constraints. In [9], an approach to shape optimization using the method of mappings to guarantee non-degenerate deformations of meshes is presented. For this, the shape optimization problem is regarded as an optimization in function spaces. A penalty term for the Jacobian determinants of the deformations is added, which leads to a non-smooth optimality system. Deformations computed by solving this system have less tendency to degenerate volumes of cells. These techniques are related to the techniques of our work, however they do not include a mechanism to capture a target mesh volume distribution as is presented here in subsequent sections. In [14] metrics for representing shape gradients are modified by adding a non-linear advection term. For this, the shape derivative is represented on the shape seen as a boundary mesh, which is then used as a Neumann condition to assemble a system incorporating the non-linear advection term to represent a shape gradient in volume formulation. This formulation advects nodes of the volume mesh in order to mitigate element degeneration, but requires solution of a non-linear system to construct the shape gradient. In [16], the author applies mesh smoothing inspired by centroidal Voronoi reparameterization to construct a tangential field that corrects for degenerate surface mesh cells. For correcting the volume mesh cell degeneration, a shape gradient representation featuring a non-linear advection term is used. This is motivated by the eikonal equation, while orthogonality of the correction to shape gradients is ensured by a Neumann condition. In order to mitigate roughness of gradients and resulting degeneration of meshes, the authors of [18] construct shape gradient representations by use of Steklov-Poincaré metrics. As an example they propose the linear elasticity metric, giving a more regular shape gradient representation by solution of a linear system using volume formulations of shape derivatives. In [10], the authors construct a Riemannian metric for the manifold of planar triangular meshes with given fixed connectivity, which makes the space geodesically complete. They propose a mesh morphing routine by geodesic flows, using the Hamiltonian formulation of the geodesic equation and solving by a symplectic numerical integrator. Numerical experiments in [10] suggest that cell aspect ratios are bounded away from zero and thus avoid mesh degradation during deformations.

Several of the aforementioned approaches require modification of the metrics acting as left hand sides to represent shape gradients. Often, either non-linear terms are added or systems are enlarged, which increases the computational burden significantly. Further, the mesh quality resulting from these regularized deformations is arbitrary or of rather implicit nature, since no explicit criterion for mesh quality is enforced.

In this work, we want to approach these two aspects differently. By using pre-shape calculus techniques introduced in [12], we propose regularization techniques for shape gradients with two goals in mind. First, the required computational burden should be limited. We achieve this by modifying the right hand sides of shape gradient systems, instead of altering the metric. This ensures that shape gradient systems neither increase in size nor become non-linear. Secondly, our regularization explicitly targets mesh qualities defined by the user. To do so, we enforce surface and volume cell distribution via use of pre-shape derivatives of pre-shape parameterization tracking problems. Non-uniform node distributions according to targets can be beneficial, especially in the context of PDE constrained optimization.

In [6], local sensitivities for minimizing the approximation error of linear elliptic second order PDEs are derived and used to refine computational meshes. Also, [2] studies various monitor functions (targets) for mesh deformation methods in 2D using elliptic and eigenvalue methods, e.g. to ensure certain coordinate lines of the mesh. Amongst other examples, this shows possible demand for targeted non-uniform adaptation of computational meshes. Non-uniform cell distributions are possible with our approach. Our pre-shape calculus techniques enable efficient routines for solving shape optimization problems which simultaneously optimize the quality of the surface mesh representing the shape and of the volume mesh representing the hold-all domain. This is done in a manner that does not interfere with the original shape optimization problem, leaving optimal and intermediate shapes invariant. A complementary literature review is found in the introduction of [12]. For this work we build on the results of [12], where pre-shape calculus is rigorously established. The results from [12] feature a structure theorem in the style of Hadamard's theorem for shape derivatives, which also shows the relationship of classical shape and pre-shape derivatives. In particular, shape calculus and shape differentiability results carry over to the pre-shape setting. Further, the pre-shape parameterization tracking problem is introduced in [12, Prop. 2, Thrm. 2], where existence of solutions is proved and a closed formula for its pre-shape derivative is derived.

The remainder of this work is structured in two sections. First, in section 2 we establish theoretical results for pre-shape regularization routines for shape and volume mesh quality. In particular, section 2.1 focuses on the regularization of shape mesh quality, while section 2.2 addresses volume mesh quality regularization for the hold-all domain. For both cases, existence results for regularized pre-shape solutions are provided. Also, regularized pre-shape gradient systems are formulated, and consistency of regularized and unregularized gradients is established. In section 3 we demonstrate our techniques on a model problem. Several different (un-)regularized routines are tested. As a standard approach, we recommend the linear elasticity metric as found in [17]. To illustrate the general applicability of our regularization method, we also test the $p$-Laplacian metrics inspired by [13] to represent gradients.

2 Theoretical Aspects of Regularization by Pre-Shape Parameterization Tracking

The main goal of this paper is to introduce a regularization approach to shape optimization which increases mesh quality during optimization at minimal additional computational cost. To achieve this, we proceed in two steps. First, in section 2.1, we analyze the case where the quality of the mesh modeling a shape is optimized. This amounts to increasing the quality of the (hyper-)surface shape mesh embedded in the volume hold-all domain. Then, in section 2.2, we build on the surface mesh case by also demanding an increase of the volume mesh quality of the hold-all domain. Both approaches need to satisfy two properties. On the one hand, they must not interfere with the original shape optimization problem, i.e. they must leave the optimal shape and even intermediate shapes invariant. On the other hand, to be practically feasible, the mesh quality regularization approach should not increase the computational cost. In particular, no additional solution of linear systems should be necessary compared to standard shape gradient descent algorithms. We will achieve both properties for the case of simultaneous shape and volume mesh quality optimization.

If not stated otherwise, we assume $M\subset\mathbb{R}^{n+1}$ to be an $n$-dimensional, orientable, path-connected and compact $C^{\infty}$-manifold without boundary. In light of [12], we will assume $\operatorname{Emb}(M,\mathbb{R}^{n+1})$ to be the space of pre-shapes and $B_{e}^{n}:=\operatorname{Emb}(M,\mathbb{R}^{n+1})/\operatorname{Diff}(M)$ to be the space of shapes.

2.1 Simultaneous Tangential Mesh Quality and Shape Optimization

In this subsection we formulate a regularized shape optimization problem to track a desired shape mesh quality using pre-shape calculus. As mentioned in the introduction, we heavily build on the results of [12].

2.1.1 Preliminaries for Mesh Quality Optimization using Pre-Shapes

Throughout section 2, we take a look at a general prototype shape optimization problem

$\min_{\Gamma\in B_{e}^{n}}\mathcal{J}(\Gamma).$ (1)

Here we only assume that the shape functional $\mathcal{J}:B_{e}^{n}\rightarrow\mathbb{R}$ is first order shape differentiable.

Now we reformulate eq. 1 in the context of pre-shape optimization by use of the canonical projection $\pi:\operatorname{Emb}(M,\mathbb{R}^{n+1})\rightarrow B_{e}^{n}$. The canonical projection $\pi$ maps each pre-shape $\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})$ to an equivalence class $\pi(\varphi)=\Gamma\in B_{e}^{n}$, which consists of all pre-shapes mapping onto the same image shape in $\mathbb{R}^{n+1}$. We remind the reader that we can associate $\pi(\varphi)$ with the set interpretation $\varphi(M)$ of the shape $\Gamma$. So $\pi(\varphi)$ can be interpreted as the set of all parameterizations of a given shape $\Gamma\in B_{e}^{n}$ in $\mathbb{R}^{n+1}$. The pre-shape formulation of eq. 1 takes the form

$\min_{\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})}(\mathcal{J}\circ\pi)(\varphi).$ (2)

It is important to notice that the extended target functional of eq. 2 is pre-shape differentiable in the sense of [12, Definition 3], since we assumed $\mathcal{J}$ to be shape differentiable (cf. [12, Prop. 1]).

Next, we need the pre-shape parameterization tracking functional introduced in [12, Prop. 2]. In the discrete setting, this functional allows us to track given desired sizes of surface and volume mesh cells. For this, let us assume functions $g^{M}:M\rightarrow(0,\infty)$ and $f^{M}_{\varphi}:\varphi(M)\rightarrow(0,\infty)$ to be smooth and to fulfill the normalization condition

$\int_{\varphi(M)}f^{M}_{\varphi}(s)\,\mathrm{d}s=\int_{M}g^{M}(s)\,\mathrm{d}s\quad\forall\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1}).$ (3)

Further, we assume $f^{M}$ to have shape functionality (cf. [12, Definition 2]), i.e.

$f^{M}_{\varphi\circ\rho}=f^{M}_{\varphi}\quad\forall\rho\in\operatorname{Diff}(M).$ (4)

Shape functionality for $f^{M}$ simply means that $f^{M}_{\varphi_{1}}=f^{M}_{\varphi_{2}}$ for all pre-shapes $\varphi_{1},\varphi_{2}\in\pi(\varphi)$ corresponding to the same shape $\Gamma=\pi(\varphi)$.
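In a discrete setting, $g^{M}$ and $f^{M}_{\varphi}$ are typically represented by one value per surface cell, and the normalization condition eq. 3 becomes a matching of weighted sums over the cells of the reference mesh and of the current surface mesh. The following minimal NumPy sketch rescales a prescribed target density so that the discrete analogue of eq. 3 holds; all array names and the concrete numbers are illustrative assumptions and not part of the method in [12].

    import numpy as np

    # cell measures |s_i| of the current surface mesh phi(M) and |m_j| of the reference mesh M
    areas_phi = np.array([0.12, 0.08, 0.10, 0.09])
    areas_ref = np.array([0.10, 0.10, 0.10, 0.10])

    g = np.ones_like(areas_ref)              # initial density g^M, here uniform
    f_raw = np.array([1.0, 2.0, 1.0, 2.0])   # desired (unnormalized) cell density on phi(M)

    # enforce the discrete analogue of eq. 3:  sum_i f_i |s_i| == sum_j g_j |m_j|
    scale = np.dot(g, areas_ref) / np.dot(f_raw, areas_phi)
    f = scale * f_raw

    assert np.isclose(np.dot(f, areas_phi), np.dot(g, areas_ref))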

Finally, the pre-shape parameterization tracking problem takes the form

$\min_{\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})}\frac{1}{2}\int_{\varphi(M)}\Big(g^{M}\circ\varphi^{-1}(s)\cdot\det D^{\tau}\varphi^{-1}(s)-f^{M}_{\varphi}(s)\Big)^{2}\,\mathrm{d}s=:\mathfrak{J}^{\tau}(\varphi),$ (5)

where $D^{\tau}$ is the covariant derivative. For each shape $\Gamma\in B_{e}^{n}$ there exists a pre-shape $\varphi\in\pi(\varphi)=\Gamma$ minimizing eq. 5, which is guaranteed by [12, Prop. 2].
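For intuition in the discrete setting, the factor $g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}$ is the reference density transported to the current surface mesh, which for piecewise constant data reduces to the ratio of reference to current cell measure. A self-contained sketch of the resulting cellwise approximation of eq. 5 is given below; this cellwise interpretation and all data are our own illustrative assumptions, not a statement of the authors' implementation.

    import numpy as np

    areas_ref = np.array([0.10, 0.10, 0.10, 0.10])   # reference cell measures on M
    areas_phi = np.array([0.12, 0.08, 0.10, 0.09])   # current cell measures on phi(M)
    g = np.ones_like(areas_ref)                      # initial density g^M
    f_raw = np.array([0.9, 1.2, 1.0, 1.0])
    f = f_raw * np.dot(g, areas_ref) / np.dot(f_raw, areas_phi)   # enforce eq. 3

    def tracking_functional(areas_phi, areas_ref, g, f):
        # transported density on each current cell: g_j * |m_j| / |s_i|
        transported = g * areas_ref / areas_phi
        # 0.5 * integral over phi(M) of (transported - f)^2
        return 0.5 * np.sum(areas_phi * (transported - f) ** 2)

    # zero exactly when every current cell measure matches g_j*|m_j|/f_i
    print(tracking_functional(areas_phi, areas_ref, g, f))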

As we need the pre-shape derivative of the parameterization tracking functional $\mathfrak{J}^{\tau}$ in order to devise our algorithms, we state it in the style of the pre-shape derivative structure theorem [12, Thrm. 1]

$\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)[V]=\langle g_{\varphi}^{\mathcal{N}},V\rangle+\langle g_{\varphi}^{\mathcal{T}},V\rangle\quad\forall V\in C^{\infty}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}),$ (6)

with normal/shape component

$\langle g_{\varphi}^{\mathcal{N}},V\rangle=-\int_{\varphi(M)}\frac{\dim(M)}{2}\cdot\Big(\big(g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}\big)^{2}-f_{\varphi}^{2}\Big)\cdot\kappa\cdot\langle V,n\rangle+\Big(g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}-f_{\varphi}\Big)\cdot\Big(\frac{\partial f_{\varphi}}{\partial n}\cdot\langle V,n\rangle+\mathcal{D}(f_{\varphi})[V]\Big)\,\mathrm{d}s$ (7)

and tangential/pre-shape component

$\langle g_{\varphi}^{\mathcal{T}},V\rangle=-\int_{\varphi(M)}\frac{1}{2}\cdot\Big(\big(g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}\big)^{2}-f_{\varphi}^{2}\Big)\cdot\operatorname{div}_{\Gamma}\big(V-\langle V,n\rangle\cdot n\big)+\Big(g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}-f_{\varphi}\Big)\cdot\nabla_{\Gamma}f_{\varphi}^{T}V\,\mathrm{d}s.$ (8)

Notice that we signify a duality pairing by $\langle\cdot,\cdot\rangle$, and with $\mathcal{D}(f_{\varphi})[V]$ the classical shape derivative of $f_{\varphi}$ in direction $V$. For the derivation of the pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\tau}$ by use of pre-shape calculus, we refer the reader to [12, Thrm. 2, Cor. 2].

2.1.2 Regularization of Shape Optimization Problems by Shape Parameterization Tracking

All necessary ingredients are now available to formulate a regularized version of a shape optimization problem. For $\alpha^{\tau}>0$, we add the parameterization tracking functional in the style of a regularizing term to the pre-shape reformulation eq. 2

$\min_{\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})}(\mathcal{J}\circ\pi)(\varphi)+\alpha^{\tau}\cdot\mathfrak{J}^{\tau}(\varphi).$ (9)

We already found that the pre-shape extension eq. 2 of our initial shape optimization problem is pre-shape differentiable. Hence eq. 9 is pre-shape differentiable as well. This is foundational, since a pre-shape gradient system for the regularized problem eq. 9 can always be formulated.

The pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)[V]$ of the parameterization tracking problem (cf. eq. 6) is well-defined for weakly differentiable directions $V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1})$. By assuming the same for the shape derivative $\mathcal{D}\mathcal{J}$ of the original problem eq. 1, we can create a pre-shape gradient system for eq. 9 using a weak formulation with $H^{1}$-functions. Given a symmetric, positive-definite bilinear form $a(\cdot,\cdot)$ (e.g. [17], [13]), such a system takes the form

$a(U^{\mathcal{J}+\mathfrak{J}^{\tau}},V)=\mathcal{D}\mathcal{J}(\Gamma)[V]+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{N}},V\rangle+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).$ (10)

With $\Gamma=\pi(\varphi)$, the right hand side of eq. 10 is indeed the full pre-shape gradient of eq. 9. This stems from the fact that the pre-shape extension $\mathcal{J}\circ\pi$ has shape functionality by construction, which makes the pre-shape derivative $\mathfrak{D}(\mathcal{J}\circ\pi)$ equal to the shape derivative $\mathcal{D}\mathcal{J}$ by application of [12, Thrm. 1 (iii)].

The full pre-shape gradient system eq. 10 already achieves simultaneous solution of the shape optimization problem and regularization of shape mesh quality. Namely, it is not required to additionally solve linear systems to create a mesh quality regularized descent direction $U^{\mathcal{J}+\mathfrak{J}^{\tau}}$, since the original shape gradient system of problem eq. 1 is modified by adding the two force terms $g^{\mathcal{N}}$ and $g^{\mathcal{T}}$. A calculation of a descent direction $U^{\mathcal{J}+\mathfrak{J}^{\tau}}=U^{\mathcal{J}}+U^{\mathfrak{J}^{\tau}}$ by combining the shape gradient $U^{\mathcal{J}}$ and the pre-shape gradient $U^{\mathfrak{J}^{\tau}}$ solving the decoupled systems

$a(U^{\mathcal{J}},V)=\mathcal{D}\mathcal{J}(\Gamma)[V]\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1})$ (11)

and

$a(U^{\mathfrak{J}^{\tau}},V)=\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{N}},V\rangle+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1})$ (12)

is in fact equivalent to solution of eq. 10.
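This equivalence is a direct consequence of the linearity of the gradient systems in their right hand sides. A small numerical illustration, where a generic symmetric positive-definite matrix stands in for the discretized bilinear form $a(\cdot,\cdot)$ and all vectors are synthetic data, not actual assembled shape derivatives:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)        # discretized a(.,.), symmetric positive definite

    dJ = rng.standard_normal(n)        # stand-in for the shape derivative D J(Gamma)[.]
    gN = rng.standard_normal(n)        # stand-in for the normal force term <g^N, .>
    gT = rng.standard_normal(n)        # stand-in for the tangential force term <g^T, .>
    alpha = 0.1

    U_J    = np.linalg.solve(A, dJ)                        # system (11)
    U_Jtau = np.linalg.solve(A, alpha * (gN + gT))         # system (12)
    U_full = np.linalg.solve(A, dJ + alpha * (gN + gT))    # system (10)

    assert np.allclose(U_full, U_J + U_Jtau)               # (10) = (11) + (12)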

However, by using eq. 10 for gradient based optimization, the first demanded property of leaving the optimal or intermediate shapes invariant is not satisfied in general. This issue comes from the involvement of the shape component $g^{\mathcal{N}}_{\varphi}$ (cf. eq. 7) of the pre-shape derivative of the parameterization tracking functional $\mathfrak{J}^{\tau}$. It acts solely in normal directions, therefore altering shapes by interfering with the shape derivative $\mathcal{D}\mathcal{J}$ of the original problem.

For this reason, we deviate from the full gradient system eq. 10, which is consistent with the regularized problem eq. 9, and use the modified system

$a(\tilde{U},V)=\mathcal{D}\mathcal{J}(\Gamma)[V]+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).$ (13)

We project the pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\tau}$ onto its tangential part, which is realized by simply removing the shape component $g^{\mathcal{N}}$ from the right hand side of the gradient system. By this we still have numerical feasibility, while also achieving invariance of optimal and intermediate shapes. Of course, invariance is only true up to discretization errors. From a traditional shape optimization perspective, this stems from considering eq. 13 as a shape gradient system with an additional force term acting on directions $V$ in the kernel of the classical shape derivative $\mathcal{D}\mathcal{J}$. In particular, by Hadamard's theorem or the structure theorem for pre-shape derivatives [12, Thrm. 1], directions tangential to shapes $\Gamma$ are in the kernel of $\mathcal{D}\mathcal{J}$. Using the pre-shape setting, we interpret these directions as vector fields corresponding to the fiber components of $\operatorname{Emb}(M,\mathbb{R}^{n+1})$.

To sum up and justify the use of the pre-shape regularized problem eq. 9 and its modified gradient system eq. 13, we provide existence of solutions and consistency of the modified gradients with the regularized problem.

Theorem 1 (Shape Regularized Problems).

Assume $M\subset\mathbb{R}^{n+1}$ to be an $n$-dimensional, orientable, path-connected and compact $C^{\infty}$-manifold without boundary. Let the shape optimization problem eq. 1 be shape differentiable and have a minimizer $\Gamma\in B_{e}^{n}$. For shape parameterization tracking, let us assume functions $g^{M}:M\rightarrow(0,\infty)$ and $f^{M}_{\varphi}:\varphi(M)\rightarrow(0,\infty)$ to be smooth, to fulfill the normalization condition eq. 3, and $f^{M}$ to have shape functionality eq. 4.

Then there exists a $\varphi\in\pi(\varphi)=\Gamma\subset\operatorname{Emb}(M,\mathbb{R}^{n+1})$ minimizing the shape regularized problem eq. 9.

The modified pre-shape gradient system eq. 13 is consistent with the full pre-shape gradient system eq. 10 and the shape gradient system of the original problem eq. 11, in the sense that

$\tilde{U}=0\iff U^{\mathcal{J}+\mathfrak{J}^{\tau}}=0\text{ and }U^{\mathcal{J}}=0.$ (14)

In particular, if $U^{\mathcal{J}+\mathfrak{J}^{\tau}}=0$ is satisfied, the necessary first order conditions for eq. 9 and eq. 1 are satisfied as well.

Proof.

For the existence of solutions to the regularized problem eq. 9, let us assume there exists a minimizer $\Gamma\in B_{e}^{n}$ of the original problem eq. 1. By construction of the shape space $B_{e}^{n}$ via the equivalence relation, there exists a $\tilde{\varphi}\in\operatorname{Emb}(M,\mathbb{R}^{n+1})$ such that $\Gamma=\pi(\tilde{\varphi})$. So the set of pre-shapes $\pi(\tilde{\varphi})$ acts as a set of candidates for solutions to eq. 9. Since we required shape functionality of $f^{M}$ and the normalization condition eq. 3, the existence and characterization theorem for solutions to eq. 5 (cf. [12, Prop. 2]) gives existence of a global minimizer of $\mathfrak{J}^{\tau}$ in every fiber of $\operatorname{Emb}(M,\mathbb{R}^{n+1})$. In particular, we can find such a $\varphi$ in $\pi(\tilde{\varphi})$. From the last assertion of [12, Prop. 2] we also get that $\mathfrak{J}^{\tau}(\varphi)=0$. Since $\mathfrak{J}^{\tau}\geq 0$ due to the quadratic nature of the parameterization tracking functional, we conclude that $\varphi$ is a solution to the regularized problem eq. 9.

Next, we prove the non-trivial direction '$\implies$' for consistency of gradient systems in the sense of eq. 14. Let us assume we have a pre-shape gradient $\tilde{U}=0$ stemming from the modified system eq. 13. This immediately gives us

$\mathcal{D}\mathcal{J}(\Gamma)[V]+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle=0\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).$ (15)

However, due to the structure theorem for pre-shape derivatives [12, Thrm. 1], we know that the supports of the shape derivative $\mathcal{D}\mathcal{J}(\Gamma)$ and of the pure pre-shape component $g_{\varphi}^{\mathcal{T}}$ are orthogonal. Hence we have

$\mathcal{D}\mathcal{J}(\Gamma)[V]=0\text{ and }\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle=0\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).$ (16)

This is the first order condition for eq. 1, also giving us $U^{\mathcal{J}}=0$.

It remains to show that the first order condition for the regularized problem eq. 9 is satisfied and that the complete gradient $U^{\mathcal{J}+\mathfrak{J}^{\tau}}=0$. Essentially, this is a special case of [12, Prop. 3], which characterizes global solutions of eq. 5 by fiber stationarity. From eq. 16 we see that $\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})$ is a fiber stationary point, since the full pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)$ vanishes for all directions $V$ tangential to $\varphi(M)$. Hence [12, Prop. 3] states that $\varphi$ is already a global minimizer of $\mathfrak{J}^{\tau}$ and satisfies the corresponding necessary first order condition. Even more, [12, Prop. 3] gives vanishing of $g_{\varphi}^{\mathcal{N}}$. Therefore the right hand side of the full gradient system eq. 10 is zero, giving us a vanishing full gradient $U^{\mathcal{J}+\mathfrak{J}^{\tau}}$.

The implication '$\Longleftarrow$' can be seen by the fiber stationarity characterization as well. Since vanishing of the shape gradient $U^{\mathcal{J}}$ gives $\mathcal{D}\mathcal{J}(\Gamma)=0$, the full pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)$ must be zero if the full pre-shape gradient $U^{\mathcal{J}+\mathfrak{J}^{\tau}}=0$. Hence the fiber stationarity characterization [12, Prop. 3] tells us that both $g_{\varphi}^{\mathcal{T}}$ and $g_{\varphi}^{\mathcal{N}}$ vanish for $\varphi$, which proves the other direction of eq. 14. ∎

With theorem 1 we can rest assured that optimization algorithms using the tangentially regularized gradient $\tilde{U}$ from eq. 13 leave stationary points of the original problem invariant. Also, vanishing of the modified gradient $\tilde{U}$ indicates that we have a stationary shape $\Gamma=\pi(\varphi)$ with a parameterization $\varphi$ having the desired cell volume allocation $f_{\varphi}$. Of course, this is only true up to discretization error. We also see that the modified gradient system eq. 13 captures the same information as the shape gradient system eq. 11 and the full pre-shape gradient system eq. 10 combined. This might seem counterintuitive at first glance, especially since the necessary information is contained in one instead of two gradient systems. However, by application of pre-shape calculus to derive the fiber stationarity characterization of solutions [12, Prop. 3], we recognize this circumstance as a consequence of the special structure of the regularizing functional $\mathfrak{J}^{\tau}$. The fact that pre-shape spaces are fiber spaces with locally orthogonal tangential bundles for parameterizations and shapes (cf. [12, Section 2]) is a fundamental prerequisite for this. We will discuss numerical results comparing optimization using standard shape gradients eq. 11 and gradients regularized by modified shape parameterization tracking eq. 13 in section 3.
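In algorithmic form, a descent method based on the modified system eq. 13 differs from a standard shape gradient method only in the assembled right hand side; no additional or larger systems are solved. The following schematic Python sketch illustrates this loop; the assembly, solver and mesh-update routines are placeholder callables of our own choosing, not part of any specific library or of the authors' implementation.

    import numpy as np

    def regularized_descent(mesh, assemble_dJ, assemble_gT, assemble_a, solve, step,
                            alpha_tau, max_iter=100, tol=1e-8):
        """Gradient descent with the tangentially regularized system (13).

        assemble_dJ(mesh) -> vector of the shape derivative D J(Gamma)[.]
        assemble_gT(mesh) -> tangential force term <g^T_phi, .> from (8)
        assemble_a(mesh)  -> matrix of the bilinear form a(.,.), e.g. linear elasticity
        solve(A, b)       -> linear solver
        step(mesh, U)     -> moves the mesh nodes along the vector field U
        """
        for _ in range(max_iter):
            A = assemble_a(mesh)
            rhs = assemble_dJ(mesh) + alpha_tau * assemble_gT(mesh)  # (13): g^N is omitted
            U = solve(A, rhs)
            if np.linalg.norm(U) < tol:   # tilde U = 0 indicates joint stationarity (Theorem 1)
                break
            mesh = step(mesh, -U)         # descend along the negative gradient
        return mesh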

Remark 1 (Applicability of Shape Regularization for General Shape Optimization Problems).

It is important to notice that we did not need a specific structure of the original shape optimization problem eq. 1. The only assumptions made are shape differentiability and existence of an optimal shape for $\mathcal{J}$. This means the tangential mesh regularization via $\mathfrak{J}^{\tau}$ (cf. eq. 9) is applicable to almost every meaningful shape optimization problem. In particular, PDE-constrained shape optimization problems can be regularized by this approach, which we will discuss in section 3. Also, the modified gradient system structure eq. 13 stays the same for different shape optimization targets $\mathcal{J}$. It is solely the shape derivative $\mathcal{D}\mathcal{J}$ on the right hand side which changes depending on the shape problem target. Finally, existence of solutions and consistency of stationarity and first order conditions are guaranteed by theorem 1.

Remark 2 (Equivalent Bi-Level Formulation of the Regularized Problem).

It is also possible to formulate the shape mesh regularization as a nonlinear bilevel problem

$\min_{\varphi\in\operatorname{Emb}(M,\mathbb{R}^{n+1})}\;\mathfrak{J}^{\tau}(\varphi)\quad\text{ s.t. }\quad\pi(\varphi)=\underset{\Gamma\in B_{e}^{n}}{\operatorname{arg\,min}}\;\mathcal{J}(\Gamma).$ (17)

The upper level pre-shape optimization problem depends on the lower level shape optimization problem only through the latter restricting the set of feasible points $\varphi$. So intuitively, solving eq. 17 amounts to solving the lower level problem for a shape $\Gamma$, and then looking for an optimal parameterization $\varphi\in\Gamma\in B_{e}^{n}$ in the fiber corresponding to the optimal shape. If a solution to the lower level shape optimization problem exists, a solution $\varphi$ to the upper level problem exists as well (cf. [12, Prop. 2]). The lower level problem enforces that the optimal shape $\Gamma$ coincides with the fiber $\pi(\varphi)$.

The bilevel formulation motivates the modified gradient system eq. 13 in a consistent manner. For this, we can take the perspective of nonlinear bilevel programming as in [15]. In finite dimensions, the authors of [15] propose a way to calculate a descent direction by solving a bilevel optimization problem derived from the original problem. We remind the reader that we formulated our systems as gradient systems and not as systems for descent directions, hence a change of sign compared to the systems for descent directions in [15] has to occur. Also notice that the additional constraint $\pi(\varphi)=\Gamma$ for the feasible set of solutions has to be added to formulate our bilevel problem in the exact style of [15]. We can proceed with a symbolic calculation following [15, Ch. 2], using the relations [12, Thrm. 1 (ii), (iii)] for pre-shape derivatives and the fact that $\mathfrak{J}^{\tau}$ does not explicitly depend on $\Gamma$ of the subproblem and $\mathcal{J}$ does not explicitly depend on $\varphi$ of the upper level problem. This yields a bilevel problem for the gradient $U$ of eq. 17

$\max_{U\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}),\,\|U\|\leq 1}\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)[U]\quad\text{ s.t. }\quad U^{\mathcal{N}}=\underset{W\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}),\,\|W\|\leq 1}{\operatorname{arg\,max}}\,\mathcal{D}\mathcal{J}(\Gamma)[W],$ (18)

where $U^{\mathcal{N}}$ is the component of $U$ normal to $\Gamma$. In [15, Ch. 3] a descent method is employed for eq. 18 by alternating computation of $W_{k}$ and $U_{k}$.

For our situation, a gradient $W_{k}$ for the lower level problem of eq. 18 corresponds to $U^{\mathcal{J}}$ solving the shape gradient system eq. 11. With this, the lower level constraint fixes the normal component of $U$ to be the shape gradient of the original problem eq. 1. By decomposing $U=U^{\mathcal{T}}+U^{\mathcal{N}}$ into tangential and normal directions, we see that the fixed normal component makes $\mathfrak{D}\mathfrak{J}^{\tau}(\varphi)[U^{\mathcal{N}}]=0$ a constant which is not relevant for the upper level problem. This lets us rewrite the system as

$\min_{U^{\mathcal{T}}\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}),\,\|U^{\mathcal{T}}\|\leq 1}\langle g_{\varphi}^{\mathcal{T}},U^{\mathcal{T}}\rangle\quad\text{ s.t. }\quad U^{\mathcal{N}}=U^{\mathcal{J}},\quad U=U^{\mathcal{T}}+U^{\mathcal{N}}.$ (19)

We clearly see that the minimization of the tangential term $\langle g^{\mathcal{T}}_{\varphi},U^{\mathcal{T}}\rangle$ in eq. 19 does not depend on the additional constraints in eq. 19. Hence it can be decoupled, so by considering $\alpha^{\tau}>0$ this leads to a gradient system

$a(U^{\mathcal{T}},V)=\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\quad\forall V\in H^{1}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).$ (20)

With the same orthogonality arguments as made for eq. 11 and eq. 12, a separate computation of $U^{\mathcal{J}}$ and $U^{\mathcal{T}}$ as in the general case in [15] is not necessary in our situation. The gradient $U=U^{\mathcal{T}}+U^{\mathcal{N}}$ for the bilevel problem eq. 17 can be calculated using a single system, which coincides exactly with the modified gradient $\tilde{U}$ from system eq. 13. With theorem 1, this means that using the modified pre-shape gradient $\tilde{U}$ as a descent direction in fact solves the bi-level problem eq. 17, the regularized problem eq. 9 and the original shape problem eq. 1 at the same time.

2.2 Simultaneous Volume Mesh Quality and Shape Optimization

In this section we introduce the necessary machinery for a regularization strategy for volume meshes representing the hold-all domain $\mathbb{D}$. We will also incorporate our previous results to simultaneously optimize for shape mesh and volume mesh quality. As before, we show a way to calculate modified pre-shape gradients without the need to solve additional (non-)linear systems compared to standard shape gradient calculations.

2.2.1 Suitable Pre-Shape Spaces for Hold-All Mesh Quality Optimization

Using pre-shape techniques to increase volume mesh quality is a different task than regularizing the shape mesh as we have seen in section 2.1. Using the pre-shape space $\operatorname{Emb}(M,\mathbb{R}^{n+1})$ for this endeavor does not make sense, since this space contains all shapes in $\mathbb{R}^{n+1}$ combined with their parameterizations. For $\operatorname{Emb}(M,\mathbb{R}^{n+1})$, parameterizations are described via the $n$-dimensional modeling or parameter manifold $M$. We now need a second, different parameter space modeling the hold-all domain.

For this, let $\mathbb{D}\subset\mathbb{R}^{n+1}$ be a compact, orientable, path-connected, $(n+1)$-dimensional $C^{\infty}$-manifold. This hold-all domain will replace the unbounded hold-all domain $\mathbb{R}^{n+1}$ of our previous discussion. In most numerical cases, the hold-all domain has a non-empty boundary $\partial\mathbb{D}$. Hence we also permit non-empty $\partial\mathbb{D}$, but demand $C^{\infty}$-regularity of the boundary. We remind the reader that $\mathbb{D}$ is a compact manifold with boundary, so we have to distinguish it from its interior, which we denote by $\operatorname{int}(\mathbb{D})$.

Remark 3 (Hölder Regularity Setting).

The setting of $C^{\infty}$-smoothness is taken because it is necessary to have a meaningful definition of the shape space $B_{e}^{n}$. However, the results concerning existence of optimal parameterizations $\varphi$ for eq. 9, its pre-shape derivative formula eq. 6, the regularization strategies featuring the modified gradient system eq. 13 and its consistency theorem 1 all remain valid in the Hölder-regularity case $C^{k,\alpha}$ for $k\geq 0$ and $1>\alpha>0$ (cf. [12, Remark 3]). However, if $\mathbb{D}$ has $C^{k,\alpha}$-regularity and non-empty boundary, it is necessary to additionally assume $C^{k+3,\alpha}$-regularity of $\partial\mathbb{D}$. If this is given, all previous and following results remain valid.

A first choice for a pre-shape space for the hold-all domain would be $\operatorname{Emb}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{D})$, which is the set of all $C^{\infty}$-embeddings leaving $\partial\mathbb{D}$ pointwise invariant, i.e.

$\operatorname{Emb}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{D}):=\{\varphi\in\operatorname{Emb}(\mathbb{D},\mathbb{D}):\varphi(p)=p\quad\forall p\in\partial\mathbb{D}\}.$ (21)

Leaving the outer boundary $\partial\mathbb{D}$ invariant can be beneficial from a practical standpoint, since it might be part of an underlying problem formulation, such as boundary conditions of a PDE.

The pre-shape space eq. 21 is in fact the diffeomorphism group $\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ leaving the boundary pointwise invariant. This is very important to notice, because it means $\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ is a pre-shape space consisting of exactly one fiber. The shape of the hold-all domain $\mathbb{D}$ is fixed, and as a consequence its tangent bundle $T\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ consists of vector fields with vanishing normal components on $\partial\mathbb{D}$ (cf. [20, Thrm. 3.18]), i.e.

$T_{\Phi}\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})\cong C^{\infty}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{R}^{n+1}):=\{V\in C^{\infty}(\mathbb{D},\mathbb{R}^{n+1}):\operatorname{Tr}_{|\partial\mathbb{D}}(V)=0\}\quad\forall\Phi\in\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D}).$ (22)

Here, $\operatorname{Tr}_{|\partial\mathbb{D}}(V)$ is the trace of $V$ on $\partial\mathbb{D}$. Even further, the structure theorem for pre-shape derivatives [12, Thrm. 1] tells us that a pre-shape differentiable functional $\mathfrak{J}^{\mathbb{D}}:\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})\rightarrow\mathbb{R}$ always has a trivial shape component $g^{\mathcal{N}}=0$ of $\mathfrak{D}\mathfrak{J}^{\mathbb{D}}$. In particular, ordinary shape calculus is not applicable for functionals of this type.

Our task is to design mesh regularization strategies which are numerically feasible and do not interfere with the original shape optimization problem. For a given shape $\varphi(M)\subset\mathbb{D}$, the latter is not guaranteed when the pre-shape space $\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ is used to model parameterizations of the hold-all domain. A diffeomorphism $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ could change a given intermediate or optimal shape $\varphi(M)$, i.e. $\Phi\big(\varphi(M)\big)\neq\varphi(M)$ in general. Therefore we have to enforce invariance of shapes by further restricting the pre-shape space for $\mathbb{D}$. For this, we use the concept of diffeomorphism groups leaving submanifolds invariant (cf. [20, Ch. 3.6]). With our previous notation eq. 21, the final pre-shape space is $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$, for a given fixed shape $\varphi(M)\subset\mathbb{D}$. This space is a subgroup of $\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ and $\operatorname{Diff}(\mathbb{D})$, with a tangential bundle consisting of vector fields vanishing both on $\partial\mathbb{D}$ and $\varphi(M)$ (cf. [20, Thrm. 3.18]).

2.2.2 Volume Parameterization Tracking Problem with Invariant Shapes

Now we have a pre-shape space suitable to model reparameterizations of the hold-all domain $\mathbb{D}$ which leave a given shape $\varphi(M)$ invariant. Next, we formulate an analogue of the parameterization tracking problem eq. 5, which tracks the volume parameterization of the hold-all domain. Let us fix a $\varphi\in\operatorname{Emb}(M,\mathbb{D})$ and the corresponding shape $\varphi(M)\subset\mathbb{D}$. The volume parameterization tracking problem is then

$\min_{\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})}\frac{1}{2}\int_{\mathbb{D}}\Big(g^{\mathbb{D}}\circ\Phi^{-1}(s)\cdot\det D\Phi^{-1}(s)-f^{\mathbb{D}}_{\varphi(M)}(s)\Big)^{2}\,\mathrm{d}x=:\mathfrak{J}^{\mathbb{D}}(\Phi).$ (23)

Notice the similarities and differences to the shape parameterization tracking problem eq. 5. Both pre-shape functionals track a parameterization dictated by the target $f$, and both feature a function $g$ describing the initial parameterization. The volume tracking functional $\mathfrak{J}^{\mathbb{D}}$ differs from $\mathfrak{J}^{\tau}$ by featuring a volume integral instead of a surface one. Also, instead of the covariant derivative $D^{\tau}$, the Jacobian determinant in $\mathfrak{J}^{\mathbb{D}}$ features the regular Jacobian matrix of $\Phi^{-1}$. The two most important differences concern their sets of feasible solutions and their targets $f$. The target $f^{\mathbb{D}}_{\varphi(M)}$ does not depend on the pre-shape $\Phi$ for the hold-all domain, but instead depends on the shape $\varphi(M)$, which is left invariant. Having $f^{\mathbb{D}}$ depend on the shape of $\mathbb{D}$ does not make sense, since $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ consists only of one fiber, i.e. the shape of $\mathbb{D}$ remains invariant. Hence there is a dependence of both the feasible set of pre-shapes $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ and the target $f^{\mathbb{D}}_{\varphi(M)}$ on the shape $\varphi(M)$, because we desire $\varphi(M)$ to stay unaltered. For this reason, the volume parameterization tracking problem eq. 23 differs from eq. 5 to such an extent that [12, Prop. 2] does not cover existence of solutions $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ for eq. 23. This makes it necessary to formulate a result guaranteeing existence of solutions for eq. 23 under appropriate conditions.

Theorem 2 (Existence of Solutions for the Volume Parameterization Tracking Problem).

Assume $\mathbb{D}\subset\mathbb{R}^{n+1}$ to be a compact, orientable, path-connected, $(n+1)$-dimensional $C^{\infty}$-manifold, possibly with boundary. Let $M$ be an $n$-dimensional, orientable, path-connected and compact $C^{\infty}$-submanifold of $\mathbb{D}$ without boundary. Fix a $\varphi\in\operatorname{Emb}(M,\mathbb{D})$ generating a shape $\varphi(M)\subset\mathbb{D}$. Denote by $\mathbb{D}_{\varphi}^{\text{in}}$ and $\mathbb{D}_{\varphi}^{\text{out}}$ the disjoint inner and outer parts of $\mathbb{D}$ partitioned by $\varphi(M)$.

Let $g^{\mathbb{D}}:\mathbb{D}\rightarrow(0,\infty)$ be a $C^{\infty}$-function and $f^{\mathbb{D}}_{\varphi(M)}:\mathbb{D}\rightarrow(0,\infty)$ be $C^{\infty}$-regular on $\mathbb{D}\setminus\varphi(M)$. Further assume the normalization conditions

$\int_{\mathbb{D}_{\varphi}^{\text{in}}}f_{\varphi(M)}^{\mathbb{D}}(x)\,\mathrm{d}x=\int_{\mathbb{D}_{\varphi}^{\text{in}}}g^{\mathbb{D}}\,\mathrm{d}x\quad\text{ and }\quad\int_{\mathbb{D}_{\varphi}^{\text{out}}}f_{\varphi(M)}^{\mathbb{D}}(x)\,\mathrm{d}x=\int_{\mathbb{D}_{\varphi}^{\text{out}}}g^{\mathbb{D}}\,\mathrm{d}x.$ (24)

Then there exists a global $C^{\infty}$-solution to problem eq. 23, i.e. a diffeomorphism $\tilde{\Phi}\in\operatorname{Diff}(\mathbb{D})$ satisfying

$(g^{\mathbb{D}}\circ\tilde{\Phi}^{-1})\cdot\det D\tilde{\Phi}^{-1}=f_{\varphi(M)}^{\mathbb{D}}\;\text{ on }\mathbb{D}\setminus\varphi(M)\quad\text{ and }\quad\tilde{\Phi}(p)=p\quad\forall p\in\partial\mathbb{D}\cup\varphi(M).$ (25)
Proof.

Let the assumptions on $\mathbb{D}\subset\mathbb{R}^{n+1}$ and $M\subset\mathbb{D}$ hold. We fix a $\varphi\in\operatorname{Emb}(M,\mathbb{D})$, and see that $\varphi(M)\subset\mathbb{D}$ is an $n$-dimensional, orientable, path-connected and compact $C^{\infty}$-submanifold of $\mathbb{D}$ as well. With $\varphi(M)$ being a connected and compact hypersurface of $\mathbb{D}$, the celebrated Jordan-Brouwer separation theorem (cf. [8, p. 89]) guarantees existence of the open and disjoint inner part $\mathbb{D}_{\varphi}^{\text{in}}$ and outer part $\mathbb{D}_{\varphi}^{\text{out}}$ of $\mathbb{D}$ separated by $\varphi(M)$. Next, let $g^{\mathbb{D}}$ and $f_{\varphi(M)}^{\mathbb{D}}$ be as described, in particular satisfying the normalization conditions eq. 24. With existence of a separated inner and outer part, we can decouple the volume tracking problem eq. 23 into two independent subproblems

$\min_{\Phi_{\text{in}}\in\operatorname{Diff}_{\partial\mathbb{D}^{\text{in}}}(\overline{\mathbb{D}^{\text{in}}})}\mathfrak{J}^{\mathbb{D}^{\text{in}}}(\Phi_{\text{in}})\quad\text{ and }\quad\min_{\Phi_{\text{out}}\in\operatorname{Diff}_{\partial\mathbb{D}^{\text{out}}}(\overline{\mathbb{D}^{\text{out}}})}\mathfrak{J}^{\mathbb{D}^{\text{out}}}(\Phi_{\text{out}}).$ (26)

Both problems do not feature invariant submanifolds in the interior anymore, since $\partial\mathbb{D}^{\text{in}}_{\varphi}=\varphi(M)$ and $\partial\mathbb{D}^{\text{out}}_{\varphi}=\varphi(M)\cup\partial\mathbb{D}$. Thus interior and exterior are both $C^{\infty}$-manifolds with $C^{\infty}$-boundaries. With this, and the two required normalization conditions eq. 24, we are in the position to apply the existence theorem [12, Prop. 2] to both independent subproblems eq. 26. This gives us two $C^{\infty}$-diffeomorphisms $\tilde{\Phi}_{\text{in}}\in\operatorname{Diff}_{\partial\mathbb{D}^{\text{in}}_{\varphi}}(\overline{\mathbb{D}^{\text{in}}_{\varphi}})$ and $\tilde{\Phi}_{\text{out}}\in\operatorname{Diff}_{\partial\mathbb{D}^{\text{out}}_{\varphi}}(\overline{\mathbb{D}^{\text{out}}_{\varphi}})$ globally solving eq. 26, which in particular satisfy

$(g^{\mathbb{D}}\circ\tilde{\Phi}_{\text{in}}^{-1})\cdot\det D\tilde{\Phi}_{\text{in}}^{-1}=f_{\varphi(M)}^{\mathbb{D}}\;\text{ on }\;\mathbb{D}^{\text{in}}_{\varphi}\quad\text{ and }\quad(g^{\mathbb{D}}\circ\tilde{\Phi}_{\text{out}}^{-1})\cdot\det D\tilde{\Phi}_{\text{out}}^{-1}=f_{\varphi(M)}^{\mathbb{D}}\;\text{ on }\;\mathbb{D}^{\text{out}}_{\varphi}.$ (27)

We define a global solution candidate for eq. 23 by gluing, i.e. setting $\tilde{\Phi}:=\tilde{\Phi}_{\text{in}}$ on $\overline{\mathbb{D}^{\text{in}}_{\varphi}}$ and $\tilde{\Phi}:=\tilde{\Phi}_{\text{out}}$ on $\overline{\mathbb{D}^{\text{out}}_{\varphi}}$, which is well-defined since both coincide with the identity on $\varphi(M)$. It is clear that $\tilde{\Phi}$ is a bijection. Also, $\tilde{\Phi}$ is the identity on $\partial\mathbb{D}\cup\varphi(M)$, which is the second property of eq. 25. We know that $\tilde{\Phi}_{\text{in}}$ is $C^{\infty}$ on $\overline{\mathbb{D}^{\text{in}}_{\varphi}}$ and $\tilde{\Phi}_{\text{out}}$ is $C^{\infty}$ on $\overline{\mathbb{D}^{\text{out}}_{\varphi}}$. With this, and $\tilde{\Phi}_{\text{in}}=\tilde{\Phi}_{\text{out}}=\operatorname{id}$ on $\varphi(M)$, we get that $\tilde{\Phi}$ is $C^{\infty}$ on the entire hold-all domain $\mathbb{D}$. In combination, this means $\tilde{\Phi}\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$. Using eq. 27, we also get the first assertion of eq. 25. Finally, we can use eq. 25 to see that $\mathfrak{J}^{\mathbb{D}}(\tilde{\Phi})=0$, since $\varphi(M)$ is a set of measure zero. Due to the quadratic nature of eq. 23 we have $\mathfrak{J}^{\mathbb{D}}\geq 0$, which tells us that $\tilde{\Phi}$ is a global solution to the volume parameterization tracking problem. Since we did not use any special property of $\varphi$, the argumentation holds for all $\varphi\in\operatorname{Emb}(M,\mathbb{D})$ and concludes the proof. ∎

Remark 4 (Necessity of two Normalization Conditions for $g^{\mathbb{D}}$ and $f_{\varphi(M)}^{\mathbb{D}}$).

For guaranteed existence of optimal solutions $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\Gamma}(\mathbb{D})$ to the volume parameterization tracking problem eq. 23, it is necessary to assume the two separate normalization conditions eq. 24. An invariant shape $\varphi(M)\subset\mathbb{D}$ acts as a boundary, partitioning the hold-all domain into an inner and an outer part. As we require solutions $\Phi$ to leave $\varphi(M)$ pointwise invariant, the diffeomorphism $\Phi$ is not allowed to transport volume from outside to inside and vice versa. Hence, in general, a single normalization condition of type eq. 3 on the entire hold-all domain $\mathbb{D}$ is not sufficient for existence of solutions. A direct application of Dacorogna and Moser's theorem [3] to eq. 23 yields a $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}}(\mathbb{D})$ possibly transporting volume across $\varphi(M)$, which we have to prevent if $\varphi(M)$ is to be left invariant. As the total inner and outer volumes change with varying $\varphi(M)$, we have to require the normalization conditions eq. 24 for each $\varphi(M)$ separately. For this reason our target $f^{\mathbb{D}}_{\varphi(M)}$ has to depend on $\varphi(M)$, even though the shape of $\mathbb{D}$ stays the same. This has crucial implications on the design of targets $f^{\mathbb{D}}$ in numerical applications, which we will discuss in section 3.1.2.
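On the discrete level, Remark 4 translates into a simple rule for constructing the target cell volumes: whatever density is prescribed must be rescaled separately on the cells inside and outside the current shape, so that each part keeps its total volume as required by eq. 24. A minimal NumPy sketch of this rescaling; the cell data and the inside/outside labeling are illustrative assumptions.

    import numpy as np

    vols = np.array([0.20, 0.30, 0.10, 0.25, 0.15])        # current cell volumes of the hold-all mesh
    inside = np.array([True, True, False, False, False])    # cells located inside phi(M)

    g = np.ones_like(vols)                      # initial density g^D, here uniform
    f_raw = np.array([2.0, 1.0, 1.0, 3.0, 1.0]) # desired (unnormalized) target density

    f = f_raw.copy()
    for part in (inside, ~inside):              # enforce eq. 24 on inner and outer part separately
        f[part] *= np.dot(g[part], vols[part]) / np.dot(f_raw[part], vols[part])

    assert np.isclose(np.dot(f[inside], vols[inside]), np.dot(g[inside], vols[inside]))
    assert np.isclose(np.dot(f[~inside], vols[~inside]), np.dot(g[~inside], vols[~inside]))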

Remark 5 (Generality of Invariant Shapes).

In the existence result theorem 2, we have required the invariant shape to be of the form $\varphi(M)$ for some $\varphi\in\operatorname{Emb}(M,\mathbb{D})$. This is solely due to the context of optimization problem eq. 1 using the shape space $B_{e}^{n}$. It is absolutely possible to use any compact and connected hypersurface $\Gamma\subset\mathbb{D}$ as an invariant shape for eq. 23. Existence of a $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\Gamma}(\mathbb{D})$ globally solving the volume parameterization tracking problem is still guaranteed for general $\Gamma$.

As we want to regularize gradient descent methods for shape optimization, we need to explicitly specify the pre-shape derivative of the volume parameterization tracking problem eq. 23. However, it is also of theoretical interest, since it serves as an example of a problem with a pre-shape derivative having vanishing shape component. This means that neither the formulation of eq. 23 in the context of classical shape optimization, nor the derivation of a derivative using classical shape calculus is possible. Since the form of $\mathfrak{J}^{\mathbb{D}}$ is similar to $\mathfrak{J}^{\tau}$, we can mimic the steps from [12, Thrm. 2], which we do not restate for brevity. Then, for given $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$, the pre-shape derivative of $\mathfrak{J}^{\mathbb{D}}$ in decomposed form (cf. [12, Thrm. 1]) is given by

$\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V]=\langle g_{\Phi}^{\mathcal{N}},V\rangle+\langle g_{\Phi}^{\mathcal{T}},V\rangle\quad\forall V\in C_{\partial\mathbb{D}\cup\varphi(M)}^{\infty}(\mathbb{D},\mathbb{R}^{n+1}),$ (28)

with normal/shape component

$\langle g_{\Phi}^{\mathcal{N}},V\rangle=0$ (29)

and tangential/pre-shape component

$\langle g_{\Phi}^{\mathcal{T}},V\rangle=-\int_{\mathbb{D}}\frac{1}{2}\cdot\Big(\big(g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\big)^{2}-\big(f^{\mathbb{D}}_{\varphi(M)}\big)^{2}\Big)\cdot\operatorname{div}(V)+\Big(g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}-f^{\mathbb{D}}_{\varphi(M)}\Big)\cdot\mathfrak{D}_{m}\big(f^{\mathbb{D}}_{\varphi(M)}\big)[V]\,\mathrm{d}x.$ (30)

Here, $n$ is the outer unit normal vector field of the invariant submanifold $\varphi(M)$. As previously mentioned, we signify a duality pairing by $\langle\cdot,\cdot\rangle$. It is also important to see that the restriction of $\operatorname{Diff}(\mathbb{D})$ to $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ does not influence the form of the pre-shape derivative $\mathfrak{D}\mathfrak{J}^{\mathbb{D}}$. Instead, it influences the space of admissible directions $V$ by restricting $C^{\infty}(\mathbb{D},\mathbb{R}^{n+1})$ to $C^{\infty}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D},\mathbb{R}^{n+1})$. This stems from the relationship of tangential bundles of diffeomorphism groups with invariant submanifolds eq. 22. There is also another subtle difference between $\mathfrak{D}\mathfrak{J}^{\mathbb{D}}$ and $\mathfrak{D}\mathfrak{J}^{\tau}$. Namely, the featured pre-shape material derivative $\mathfrak{D}_{m}\big(f^{\mathbb{D}}_{\varphi(M)}\big)$ depends on the invariant shape $\varphi(M)\subset\mathbb{D}$ instead of the volume pre-shape $\Phi$.

It is straightforward to formulate a pre-shape gradient system for $\mathfrak{D}\mathfrak{J}^{\mathbb{D}}$ in the style of section 2.1 using Sobolev functions. For a symmetric, positive-definite bilinear form $a(\cdot,\cdot)$ it takes the form

$a(U^{\mathfrak{J}^{\mathbb{D}}},V)=\alpha^{\mathbb{D}}\cdot\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V]\quad\forall V\in H^{1}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D},\mathbb{R}^{n+1}).$ (31)

Notice that the space of test functions enforces vanishing of $U^{\mathfrak{J}^{\mathbb{D}}}$ on $\varphi(M)$. With a pre-shape gradient $U^{\mathfrak{J}^{\mathbb{D}}}$, it is possible to optimize for the volume parameterization tracking problem eq. 23 without altering the shape $\varphi(M)$.
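On the discrete level, the restricted test space in eq. 31 corresponds to homogeneous Dirichlet conditions for all degrees of freedom located on $\partial\mathbb{D}$ and on the nodes discretizing $\varphi(M)$. A minimal sketch of one standard way to impose this on an assembled linear system; the dense matrices and index sets are synthetic stand-ins for an actual finite element assembly, not the authors' implementation.

    import numpy as np

    def solve_with_fixed_dofs(A, b, fixed):
        """Solve A u = b with u = 0 on the index set `fixed` (constrained rows/columns eliminated)."""
        free = np.setdiff1d(np.arange(len(b)), fixed)
        u = np.zeros_like(b)
        u[free] = np.linalg.solve(A[np.ix_(free, free)], b[free])
        return u

    # synthetic stand-ins: A ~ a(.,.), b ~ alpha_D * D J^D(Phi)[.], fixed ~ dofs on dD and phi(M)
    rng = np.random.default_rng(1)
    n = 30
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)
    b = rng.standard_normal(n)
    fixed = np.array([0, 1, 2, 15, 16])

    U = solve_with_fixed_dofs(A, b, fixed)
    assert np.allclose(U[fixed], 0.0)   # the gradient vanishes on dD and phi(M)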

2.2.3 Regularization of Shape Optimization Problems by Volume Parameterization Tracking

At this point, we have provided a suitable space $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ for pre-shapes representing the parameterization of the hold-all domain $\mathbb{D}$, which leave a given shape $\varphi(M)\subset\mathbb{D}$ invariant. Also, we are able to guarantee existence of global minimizers for the volume version of the parameterization tracking problem eq. 23 for all shapes $\varphi(M)$. In this subsection we are going to incorporate the volume mesh quality regularization simultaneously with shape optimization and shape mesh quality regularization.

To formulate a regularized version of the original shape optimization problem eq. 1, we have to keep the different types of pre-shapes involved in mind. These pre-shapes $\varphi\in\operatorname{Emb}(M,\mathbb{D})$ and $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ correspond to completely different shapes $\varphi(M)$ and $\mathbb{D}$. This is also illustrated by looking at the pre-shapes as maps

$\varphi:M\rightarrow\mathbb{D}\quad\text{and}\quad\Phi:\mathbb{D}\rightarrow\mathbb{D}.$ (32)

For this reason, we cannot simply proceed by adding $\mathfrak{J}^{\mathbb{D}}$ in the style of a regularizer to increase volume mesh quality. From the viewpoint of problem formulation, this signifies a main difference between the application of shape mesh quality regularization via $\mathfrak{J}^{\tau}$ and volume mesh quality regularization via $\mathfrak{J}^{\mathbb{D}}$. To avoid this issue, we formulate the volume and shape mesh regularized shape optimization problem using a bi-level approach. We have already seen in remark 2 that simultaneous shape parameterization tracking and shape optimization can be put into the bi-level framework. In fact, this was seen to be equivalent to the added regularizer approach eq. 9 with regard to gradient systems.

Let us consider weights $\alpha^{\mathbb{D}},\alpha^{\tau}>0$. Then we formulate the simultaneous volume and shape mesh regularization of shape optimization problem eq. 1 as

$\min_{\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})}\;\alpha^{\mathbb{D}}\cdot\mathfrak{J}^{\mathbb{D}}(\Phi)\quad\text{ s.t. }\quad\varphi=\underset{\varphi\in\operatorname{Emb}(M,\mathbb{D})}{\operatorname{arg\,min}}\;\big(\mathcal{J}\circ\pi\big)(\varphi)+\alpha^{\tau}\cdot\mathfrak{J}^{\tau}(\varphi).$ (33)

Of course, the bi-level problem eq. 33 can be formulated with $\alpha^{\mathbb{D}}=1$ without loss of generality. To stay coherent with section 2.1 regarding pre-shape gradient systems, which feature weighted force terms, we prefer to formulate eq. 33 with a factor $\alpha^{\mathbb{D}}>0$. Notice that the regularization of shape optimization problem eq. 1 needs to use its pre-shape extension eq. 2. This is necessary in order to rigorously apply the pre-shape regularization strategies. In contrast to eq. 9 and eq. 17, the simultaneous volume and shape mesh quality regularized problem eq. 33 is not minimizing for one pre-shape, but for two different pre-shapes $\varphi\in\operatorname{Emb}(M,\mathbb{D})$ and $\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$. The lower level problem solves for a pre-shape corresponding to the actual parameterized shape solving the shape mesh regularized optimization problem eq. 9. On the other hand, the upper level problem looks for a pre-shape corresponding to the parameterization of the hold-all domain $\mathbb{D}$ with specified volume mesh quality. The set of feasible solutions $\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})$ of the upper level problem depends on the lower level problem, because the optimal shape of the latter is demanded to stay invariant.

Remark 6 (Volume Mesh Quality Regularization).

It is of course possible to regularize a shape optimization problem eq. 1 for volume mesh quality only, neglecting shape mesh parameterization tracking. In this scenario, the regularized problem takes the bi-level formulation

minΦDiff𝔻Γ(𝔻)α𝔻𝔍𝔻(Φ) s.t. Γ=argminΓBen𝒥(Γ).\displaystyle\begin{split}\underset{\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\Gamma}(\mathbb{D})}{\min}&\;\alpha^{\mathbb{D}}\cdot\mathfrak{J}^{\mathbb{D}}(\Phi)\\ \text{ s.t. }\Gamma=&\;\underset{\Gamma\in B_{e}^{n}}{\operatorname{arg}\min}\;\mathcal{J}(\Gamma).\end{split} (34)

Here, it is not necessary to use the pre-shape extension eq. 2 of the original shape optimization problem.

In the remainder of this section we propose a regularized gradient system for simultaneous volume- and shape mesh quality optimization during shape optimization, and prove a corresponding existence and consistency result for the fully regularized problem. As done in section 2.1, we change the space of directions from CC^{\infty} to H1H^{1}, since it is more suitable for numerical applications. We remind the reader that the pre-shape derivative 𝔇𝔍𝔻(Φ)[V]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V] is defined only for directions VHMφ(M)1(𝔻,n+1)V\in H^{1}_{\partial M\cup\varphi(M)}(\mathbb{D},{\mathbb{R}}^{n+1}) which vanish on Mφ(M)\partial M\cup\varphi(M). This is inevitable, since a criterion for successful application of volume mesh regularization in shape optimization routines is that optimal and intermediate shapes φ(M)\varphi(M) are left invariant. If 𝔇𝔍𝔻(Φ)[V]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V] were used for regularizing the gradient in the style of an added source term (cf. eq. 13) for general directions VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}), it would most certainly alter shapes and interfere with shape optimization. Also, it is not possible to put Dirichlet conditions on φ(M)\varphi(M), or to use a restricted space of test functions as in eq. 31. Doing so would prohibit shape optimization itself. Hence we have to modify 𝔇𝔍𝔻\mathfrak{D}\mathfrak{J}^{\mathbb{D}} such that general directions VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}) are applicable as test functions, while the shape at hand is preserved. To resolve this problem, we introduce a projection

PrHMφ(M)1:H1HMφ(M)1,\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}:H^{1}\rightarrow H^{1}_{\partial M\cup\varphi(M)}, (35)

which is demanded to be the identity on HMφ(M)1H^{1}_{\partial M\cup\varphi(M)}. We leave the operator projecting a given direction VH1V\in H^{1} onto HMφ(M)1H^{1}_{\partial M\cup\varphi(M)} general. Suitable options include the projection via solution of a least squares problem

PrHMφ(M)1(V):=argminWHMφ(M)112WVH12.\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V):=\underset{W\in H^{1}_{\partial M\cup\varphi(M)}}{\operatorname{arg}\min}\frac{1}{2}\|W-V\|^{2}_{H^{1}}. (36)

In practice, it is feasible to construct a projection eq. 35 by using a finite element representation of VV and setting the coefficients of basis functions on the discretization of φ(M)\varphi(M) to zero. With this projection operator, we can extend the volume tracking pre-shape derivative 𝔇𝔍𝔻\mathfrak{D}\mathfrak{J}^{\mathbb{D}} from HMφ(M)1H^{1}_{\partial M\cup\varphi(M)} to the full space H1(𝔻,n+1)H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}) by using

𝔇𝔍𝔻(Φ)[PrHMφ(M)1()]:H1(𝔻,n+1),V𝔇𝔍𝔻(Φ)[PrHMφ(M)1(V)],\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(\cdot)]:H^{1}(\mathbb{D},{\mathbb{R}}^{n+1})\rightarrow{\mathbb{R}},\;V\rightarrow\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V)], (37)

with 𝔇𝔍𝔻\mathfrak{D}\mathfrak{J}^{\mathbb{D}} as in eq. 28.
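As an illustration, such a projection could be realized in FEniCS by zeroing the finite element coefficients of a direction on the marked shape facets and on the outer boundary. The following is only a minimal sketch; the facet marker shape_markers and the tags SHAPE_ID, OUTER_ID are assumptions of this sketch, not objects defined in this paper.

from dolfin import *

# Sketch (assumptions: 2D model problem, Vh a vector P1 space, shape_markers a
# facet MeshFunction marking phi(M) with SHAPE_ID and the outer boundary with
# OUTER_ID): discrete realization of the projection eq. 35 by zeroing all dofs
# located on the discretized shape and on the outer boundary.
def project_to_invariant_space(v_fun, shape_markers, SHAPE_ID, OUTER_ID):
    Vh = v_fun.function_space()
    zero = Constant((0.0, 0.0))
    bcs = [DirichletBC(Vh, zero, shape_markers, SHAPE_ID),
           DirichletBC(Vh, zero, shape_markers, OUTER_ID)]
    w = v_fun.copy(deepcopy=True)
    for bc in bcs:
        bc.apply(w.vector())  # set the marked coefficients to zero
    return w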

Now we can formulate the fully regularized pre-shape gradient system for simultaneous volume mesh quality, shape mesh quality and shape optimization. We motivate the combined gradient system with the same formal calculations as in the bi-level formulation for shape quality regularization in remark 2, which are inspired by [15]. Given a symmetric, positive-definite bilinear form a(.,.)a(.,.), the gradient system takes the form

a(U,V)=𝒟𝒥(Γ)[V]+ατgφ𝒯,V+α𝔻𝔇𝔍𝔻(Φ)[PrHMφ(M)1(V)]VH1(𝔻,n+1),a(U,V)=\mathcal{D}\mathcal{J}(\Gamma)[V]\;+\;\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\;+\;\alpha^{\mathbb{D}}\cdot\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V)]\quad\forall V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}), (38)

where gφ𝒯g_{\varphi}^{\mathcal{T}} is the tangential component of shape regularization eq. 8 and 𝒟𝒥(Γ)\mathcal{D}\mathcal{J}(\Gamma) is the shape derivative of the original shape objective with Γ=π(φ)\Gamma=\pi(\varphi). Notice that the fully regularized pre-shape gradient system eq. 38 looks similar to the shape gradient system eq. 11 of the original problem, differing only by two added force terms on the right hand side. These force terms can be thought of as regularizing terms to the original shape gradient. In practice, this means simultaneous volume and shape mesh quality improvement for shape optimization amounts to adding two terms on the right hand side of the gradient system. Hence they can also be viewed as a (pre-)shape gradient regularization by added force terms.
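In a FEniCS implementation, assembling and solving eq. 38 could, for instance, look as in the following sketch. The UFL forms dJ, g_tau_form and DJ_D_form, the bilinear form a_form and the boundary conditions bcs are assumed to be provided by the user, e.g. as in section 3.1.3; the sketch only illustrates that the regularizations enter as additional right hand side terms of a single linear solve.

from dolfin import *

# Sketch (assumptions: Vh is the vector P1 space of directions, a_form a chosen
# symmetric positive definite bilinear UFL form in (trial, test), and dJ,
# g_tau_form, DJ_D_form linear UFL forms in the test function): the fully
# regularized pre-shape gradient of eq. 38 is obtained from one linear solve,
# with the mesh quality regularizations added as force terms.
def regularized_pre_shape_gradient(Vh, a_form, dJ, g_tau_form, DJ_D_form,
                                   alpha_tau, alpha_D, bcs):
    rhs = dJ + alpha_tau * g_tau_form + alpha_D * DJ_D_form
    U = Function(Vh)
    solve(a_form == rhs, U, bcs)
    return U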

Theorem 3 (Volume and Shape Regularized Problems).

Let shape optimization problem eq. 1 be shape differentiable and have a minimizer ΓBen\Gamma\in B_{e}^{n}. For shape and volume parameterization tracking, let the assumptions of both theorem 1 and theorem 2 be true.

Then there exists a φEmb(M,𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}) with π(φ)=Γ\pi(\varphi)=\Gamma and a ΦDiff𝔻φ(M)(𝔻)\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}) minimizing the volume and shape regularized bi-level problem eq. 33.

The fully regularized pre-shape gradient UU from system eq. 38 is consistent with the modified shape regularized gradient U~\tilde{U} from system eq. 13 and volume tracking pre-shape gradient U𝔍𝔻U^{\mathfrak{J}^{\mathbb{D}}} from system eq. 10, in the sense that

U=0U~=0 and U𝔍𝔻=0.U=0\iff\tilde{U}=0\text{ and }U^{\mathfrak{J}^{\mathbb{D}}}=0. (39)

In particular, if U=0U=0 is satisfied, the necessary first order conditions for volume tracking eq. 23, shape tracking eq. 9 and the original problem eq. 1 are all satisfied simultaneously.

Proof.

For the proof, we need to guarantee existence of solutions to eq. 33, consistency of gradients eq. 39 and conclude the last assertion concerning necessary first order conditions.

First, let the assumptions of theorem 3 be given. This includes all assumptions made on MM, 𝔻\mathbb{D} and the functions gM,g𝔻,fφ,fφ(M)𝔻g^{M},g^{\mathbb{D}},f_{\varphi},f^{\mathbb{D}}_{\varphi(M)} summarized in theorem 1 and theorem 2. Fix a solution ΓBen\Gamma\in B_{e}^{n} of the original problem eq. 1. The theorem for shape regularized problems, theorem 1, guarantees existence of a solution φEmb(M,𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}) with π(φ)=Γ\pi(\varphi)=\Gamma to the lower level problem of eq. 33, which coincides with the shape regularized problem eq. 9. Let us select such a solution φEmb(M,𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}). This fixes the set of solution candidates Diff𝔻φ(M)(𝔻)\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}). The existence theorem for volume tracking with invariant shapes, theorem 2, gives existence of a pre-shape ΦDiff𝔻φ(M)(𝔻)\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}) solving the upper level problem of eq. 33 while leaving φ(M)\varphi(M) invariant. This proves existence of solutions to the volume and shape regularized bi-level problem eq. 33.

For consistency of pre-shape gradients eq. 39, we first prove ’\Rightarrow’ by assuming U=0U=0. The right hand side of the volume and shape regularized gradient system eq. 38 consists of three added functionals 𝒟𝒥(Γ)\mathcal{D}\mathcal{J}(\Gamma), gφ𝒯g_{\varphi}^{\mathcal{T}} and 𝔇𝔍𝔻(Φ)[PrHMφ(M)1()]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(\cdot)]. Due to U=0U=0, the full right hand side of eq. 38 vanishes, in particular for all directions VHMφ(M)1(𝔻,n+1)H1(𝔻,n+1)V\in H^{1}_{\partial M\cup\varphi(M)}(\mathbb{D},{\mathbb{R}}^{n+1})\subset H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}). The functionals 𝒟𝒥(Γ)\mathcal{D}\mathcal{J}(\Gamma) and gφ𝒯g_{\varphi}^{\mathcal{T}} are only supported on directions VV not vanishing on φ(M)\varphi(M), due to the structure theorem for pre-shape derivatives [12, Thrm. 1] and their underlying pre-shape space being Emb(M,𝔻)\operatorname{Emb}(M,\mathbb{D}). This implies vanishing of 𝔇𝔍𝔻(Φ)[PrHMφ(M)1()]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(\cdot)] for all VHMφ(M)1(𝔻,n+1)V\in H^{1}_{\partial M\cup\varphi(M)}(\mathbb{D},{\mathbb{R}}^{n+1}), which in turn gives a vanishing right hand side of eq. 31 and hence U𝔍𝔻=0U^{\mathfrak{J}^{\mathbb{D}}}=0. This also means that, considered individually, 𝔇𝔍𝔻(Φ)[PrHMφ(M)1()]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(\cdot)] vanishes for all directions VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}). Thus the remaining part 𝒟𝒥(Γ)+ατgφ𝒯,\mathcal{D}\mathcal{J}(\Gamma)+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},\cdot\rangle vanishes for all VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}) as well, which immediately gives us U~=0\tilde{U}=0 by eq. 13.

For ’\Leftarrow’, let us assume U~=U𝔍𝔻=0\tilde{U}=U^{\mathfrak{J}^{\mathbb{D}}}=0. Since U~\tilde{U} vanishes, we see from eq. 13 that 𝒟𝒥(Γ)+ατgφ𝒯,\mathcal{D}\mathcal{J}(\Gamma)+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},\cdot\rangle has to vanish for all VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}) as well. Since 𝔇𝔍𝔻\mathfrak{D}\mathfrak{J}^{\mathbb{D}} vanishes for all VHMφ(M)1V\in H^{1}_{\partial M\cup\varphi(M)}, considering the projection operator gives that 𝔇𝔍𝔻(Φ)[PrHMφ(M)1()]\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(\cdot)] vanishes for all VH1(𝔻,n+1)V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}). Together, this means the complete right hand side of eq. 38 vanishes, which gives us U=0U=0 and establishes consistency eq. 39.

The last assertion concerning necessary optimality conditions for volume tracking eq. 23, shape tracking eq. 9 and the original problem eq. 1 is a consequence of consistency eq. 39. If U=0U=0, we immediately get U𝔍𝔻=0U^{\mathfrak{J}^{\mathbb{D}}}=0, which implies the necessary first order condition for volume tracking via eq. 31. Consistency eq. 39 and U=0U=0 also give U~=0\tilde{U}=0. The last part of the main theorem for shape regularized problems, theorem 1, then tells us that the necessary first order conditions for shape tracking eq. 9 and the original problem eq. 1 are satisfied as well. ∎

The theorem for volume and shape regularized problems, theorem 3, is of great importance, since it guarantees the existence of solutions to the regularized problem eq. 33 for a given shape optimization problem. It also tells us that the shape Γ\Gamma solving the original problem eq. 1 remains unchanged by the volume and shape regularization. This is due to two invariance properties. For one, it stems from the guaranteed existence of a minimizing pre-shape φ\varphi in the fiber over the optimal shape Γ\Gamma. Secondly, the optimal pre-shape Φ\Phi representing the parameterization of the hold-all domain 𝔻\mathbb{D} comes from Diff𝔻φ(M)(𝔻)\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}), which means it leaves the optimal shape φ(M)\varphi(M) pointwise invariant. Furthermore, theorem 3 justifies the use of the pre-shape gradient system eq. 38 modified by force terms for volume and shape regularization. This makes volume and shape regularization strategies practically and straightforwardly applicable to shape optimization problems.

Remark 7 (Numerical Feasibility).

Our second criterion for a good regularization strategy also holds. Calculation of a regularized gradient via eq. 38 is numerically feasible, since it does not require additional solves of (non-)linear systems compared to the standard shape gradient system eq. 11. In fact, the volume and shape regularized pre-shape gradient is a combination of three gradients

U=U𝒥+U𝒯+U𝔍𝔻,U=U^{\mathcal{J}}+U^{\mathcal{T}}+U^{\mathfrak{J}^{\mathbb{D}}}, (40)

coming from the original problem eq. 11, modified shape tracking eq. 20 and volume tracking eq. 31. Instead of solving three systems separately, our approach permits a combined solution of only one linear system of exactly the same size as the gradient system eq. 11 of the original problem. Together with invariance of the optimal shape, both criteria for a satisfactory mesh regularization technique are achieved.

Remark 8 (Applicability of Volume and Shape Regularization for General Shape Optimization Problems).

We want to remind the reader that there is no need for the original shape optimization problem eq. 1 to have a specific structure. It solely needs to be shape differentiable and to have a solution in order to successfully apply volume and shape regularization. The exact same assertions made in remark 1 for shape regularization are also true for simultaneous volume and shape regularization.

Remark 9 (Volume Regularization without Shape Regularization).

As mentioned in remark 6, it is of course possible to apply volume mesh regularization without shape regularization. A result similar to theorem 3 can be formulated for the volume regularized problem eq. 34 by following analogous arguments. In particular, the volume regularized pre-shape gradient system takes the form

a(U𝒥+𝔍𝔻,V)=𝒟𝒥(Γ)[V]+α𝔻𝔇𝔍𝔻(Φ)[PrHMφ(M)1(V)]VH1(𝔻,n+1).a(U^{\mathcal{J}+\mathfrak{J}^{\mathbb{D}}},V)=\mathcal{D}\mathcal{J}(\Gamma)[V]\;+\;\alpha^{\mathbb{D}}\cdot\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V)]\quad\forall V\in H^{1}(\mathbb{D},{\mathbb{R}}^{n+1}). (41)

Also, the corresponding consistency of gradients is given by

U𝒥+𝔍𝔻=0U𝒥=0 and U𝔍𝔻=0,U^{\mathcal{J}+\mathfrak{J}^{\mathbb{D}}}=0\iff U^{\mathcal{J}}=0\text{ and }U^{\mathfrak{J}^{\mathbb{D}}}=0, (42)

for the pre-shape gradient U𝒥U^{\mathcal{J}} of the original problem eq. 11 and the gradient U𝔍𝔻U^{\mathfrak{J}^{\mathbb{D}}} of the volume tracking problem eq. 31. The necessary first order conditions for eq. 1 and eq. 23 are satisfied if U𝒥+𝔍𝔻=0U^{\mathcal{J}+\mathfrak{J}^{\mathbb{D}}}=0.

3 Implementation of Methods

The theoretical results on shape and volume regularization for shape optimization problems given in section 2.1 and section 2.2 were stated in an abstract setting, where the objects involved remained general. In this section, we give an example which displays the regularization approaches for mesh quality in practice. The abstract systems and functionals are stated explicitly, so that the user can apply the regularizations by referencing the exemplary problem as a guideline. In section 3.1 we elaborate on the process of regularizing a model problem. We also propose an additional modification for simultaneous shape and volume regularization, which allows for movement of the boundary of the hold-all domain 𝔻\partial\mathbb{D} to increase mesh quality. Thereafter, we present numerical results in section 3.2, comparing several (un-)regularized optimization approaches. To be more specific, we test two bilinear forms and four regularizations of gradients for a standard gradient descent algorithm with a backtracking line search. The two bilinear forms are given by the linear elasticity as found in [17] and the pp-Laplacian inspired by [13] and studies found in [4]. The different gradients tested are the unregularized, the shape regularized, the shape and volume regularized, and the shape and volume regularized one with varying outer boundary.

3.1 Model Problem and Application of Pre-Shape Mesh Quality Regularization

3.1.1 Model Problem Formulation and Regularization

In this section we formulate a model problem to test our pre-shape regularization strategies. For this, we choose a tracking type shape optimization problem in two dimensions, constrained by a Poisson equation with varying source term. To highlight the difference of shape and pre-shape calculus techniques, we formulate and test the model problem in two ways. First, we use the classical shape space framework. The second reformulation uses the pre-shape setting, where pre-shape parameterization tracking regularizers can be added.

To start, we set the model manifold for shapes and the hold-all domain to

M=S(0.5,0.5)0.35 and 𝔻=[0,1]×[0,2.35].M=S^{0.35}_{(0.5,0.5)}\text{ and }\mathbb{D}=[0,1]\times[0,2.35]. (43)

The model manifold M𝔻M\subset\mathbb{D} is a sphere with radius 0.350.35 centered at (0.5,0.5)(0.5,0.5), consisting of 6363 surface nodes and edges. It is embedded in the hold-all domain 𝔻\mathbb{D}, which is given by a rectangle [0,1]×[0,2.35][0,1]\times[0,2.35] with non-trivial boundary 𝔻\partial\mathbb{D}. The hold-all domain 𝔻\mathbb{D} consists of 14021402 nodes and has 741741 volume cells. They are illustrated in fig. 1. This problem is hard because its solution requires a large deformation in a single local region of the initial shape. Since the mesh is locally refined near the shape, nearby cells are especially prone to degeneration by large deformations.

Notice that the manifold MM acts as the initial shape from which the optimization routines start. This approach is always applicable, i.e. the manifold MM for the pre-shape space Emb(M,𝔻)\operatorname{Emb}(M,\mathbb{D}) can always be chosen as the initial shape. With this, the corresponding starting pre-shape is the identity idM\operatorname{id}_{M} of MM.

For the shape optimization problem, we employ a piecewise constant source term that varies depending on the shape

rφ(M)(x)={r1 for x𝔻out¯r2 for x𝔻in.\displaystyle r_{\varphi(M)}(x)=\begin{cases}r_{1}\in{\mathbb{R}}\;&\text{ for }x\in\overline{\mathbb{D}^{\text{out}}}\\ r_{2}\in{\mathbb{R}}\;&\text{ for }x\in\mathbb{D}^{\text{in}}.\end{cases} (44)

A perimeter regularization with parameter ν>0\nu>0 is added as well. Combining this, the optimization problem takes the form

minΓBen12𝔻|yy¯|2dx+νΓ1dss.t. Δy=rΓin int(𝔻)y=0 on 𝔻.\displaystyle\begin{split}\underset{\Gamma\in B_{e}^{n}}{\min}\;\frac{1}{2}\int_{\mathbb{D}}|y&-\bar{y}|^{2}\;\mathrm{d}x+\nu\int_{\Gamma}1\;\mathrm{d}s\\ \text{s.t. }-\Delta y&=r_{\Gamma}\quad\text{in }\,\operatorname{int}(\mathbb{D})\\ y&=0\quad\;\text{ on }\partial\mathbb{D}.\end{split} (45)

To calculate the target y¯H1(𝔻)\bar{y}\in H^{1}(\mathbb{D}) of the shape problem, we use the source term eq. 44 and solve the Poisson problem for the target shape pictured in fig. 1. Problem eq. 45 is formulated using the classical shape space approach, since the control variable Γ\Gamma stems from the shape space BenB_{e}^{n}; it represents eq. 1 from the theoretical section 2.

Next, we reformulate eq. 45 using pre-shapes, while we also add the regularizing term 𝔍τ\mathfrak{J}^{\tau} for shape mesh quality with parameter ατ>0\alpha^{\tau}>0

minφEmb(M,𝔻)12𝔻|yy¯|2dx+νφ(M)1ds+ατ2φ(M)(gMφ1(s)detDτφ1(s)fφ(s))2dss.t. Δy=rφ(M)in 𝔻y=0 on 𝔻.\displaystyle\begin{split}\underset{\varphi\in\operatorname{Emb}(M,\mathbb{D})}{\min}\;\frac{1}{2}\int_{\mathbb{D}}|y&-\bar{y}|^{2}\;\mathrm{d}x+\nu\int_{\varphi(M)}1\;\mathrm{d}s+\frac{\alpha^{\tau}}{2}\int_{\varphi(M)}\Big{(}g^{M}\circ\varphi^{-1}(s)\cdot\det D^{\tau}\varphi^{-1}(s)-f_{\varphi}(s)\Big{)}^{2}\;\mathrm{d}s\\ \text{s.t. }-\Delta y&=r_{\varphi(M)}\quad\text{in }\,\mathbb{D}\\ y&=0\qquad\;\ \text{ on }\partial\mathbb{D}.\end{split} (46)

We remind the reader that the regularizer can only be added in the pre-shape context, since it is not shape differentiable.

Technically, the combined volume and shape mesh quality regularized problem is given by formulating a bi-level problem with the volume regularizing objective 𝔍𝔻\mathfrak{J}^{\mathbb{D}} as the upper level problem and eq. 46 as the lower level problem, i.e.

minΦDiff𝔻φ(M)(𝔻)α𝔻2𝔻(g𝔻Φ1(x)detDΦ1(x)fφ(M)𝔻(x))2dxs.t. φ=argminφEmb(M,𝔻)12𝔻|yy¯|2dx+νφ(M)1ds+ατ2φ(M)(gMφ1(s)detDτφ1(s)fφ(s))2dss.t. Δy=rφ(M)in 𝔻y=0 on 𝔻.\displaystyle\begin{split}\underset{\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D})}{\min}\;\frac{\alpha^{\mathbb{D}}}{2}&\int_{\mathbb{D}}\Big{(}g^{\mathbb{D}}\circ\Phi^{-1}(x)\cdot\det D\Phi^{-1}(x)-f^{\mathbb{D}}_{\varphi(M)}(x)\Big{)}^{2}\;\mathrm{d}x\\ \text{s.t. }\varphi=\underset{\varphi\in\operatorname{Emb}(M,\mathbb{D})}{\operatorname{arg}\min}&\;\frac{1}{2}\int_{\mathbb{D}}|y-\bar{y}|^{2}\;\mathrm{d}x+\nu\int_{\varphi(M)}1\;\mathrm{d}s\\ &+\frac{\alpha^{\tau}}{2}\int_{\varphi(M)}\Big{(}g^{M}\circ\varphi^{-1}(s)\cdot\det D^{\tau}\varphi^{-1}(s)-f_{\varphi}(s)\Big{)}^{2}\;\mathrm{d}s\\ \text{s.t. }-\Delta y&=r_{\varphi(M)}\quad\text{in }\,\mathbb{D}\\ y&=0\qquad\;\ \text{ on }\partial\mathbb{D}.\end{split} (47)

We remind the reader that, despite its intimidating form, bi-level problem eq. 47 has guaranteed existence of solutions by theorem 3. The same is true for the shape regularized problem eq. 46 by theorem 1.

3.1.2 Constructing Initial and Target Node Densities ff and gg

To explicitly construct the regularizing terms, we need initial node densities gMH1(M,(0,))g^{M}\in H^{1}(M,(0,\infty)) of MM and g𝔻H1(𝔻,(0,))g^{\mathbb{D}}\in H^{1}(\mathbb{D},(0,\infty)) of 𝔻\mathbb{D}. Also, we need to specify target node densities fφf_{\varphi} and fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)}, which describe the cell volume structure of optimal meshes representing φ(M)\varphi(M) and 𝔻\mathbb{D}.

The approach used in this work is to represent the initial point distributions gMg^{M} and g𝔻g^{\mathbb{D}} by using a continuous Galerkin Ansatz with linear elements similar to [12, Ch. 3]. Degrees of freedom are situated at the mesh vertices and set to the average of inverses of surrounding cell volumes, i.e.

g(p)=1|𝒞|C𝒞1vol(C).g(p)=\frac{1}{|\mathcal{C}|}\cdot\sum_{C\in\mathcal{C}}\frac{1}{\operatorname{vol}(C)}. (48)

In the shape case g=gMg=g^{M}, a vertex pp is part of the initial discretized shape MM and 𝒞\mathcal{C} is the set of its neighboring cells CC in MM. For 11-dimensional MM, the cells CC correspond to edges; for 22-dimensional MM, to faces. In the volume mesh case g=g𝔻g=g^{\mathbb{D}}, pp is a vertex of the initial discretized hold-all domain 𝔻\mathbb{D} and 𝒞\mathcal{C} is the set of its neighboring volume cells CC in 𝔻\mathbb{D}.
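For the volume density, this vertex-wise averaging could be realized in FEniCS as in the following sketch; for the shape density the same routine can be applied to a mesh of the discretized shape, e.g. a BoundaryMesh or an extracted curve mesh. The helper name and the serial setting are assumptions of this sketch.

from dolfin import *
import numpy as np

# Sketch (assumptions: legacy FEniCS/DOLFIN, serial run): estimate the initial
# node density on a P1 space by averaging the inverse volumes of the cells
# neighboring each vertex, cf. eq. 48.
def estimate_node_density(mesh):
    tdim = mesh.topology().dim()
    mesh.init(0, tdim)                          # build vertex-to-cell connectivity
    vals = np.zeros(mesh.num_vertices())
    for v in vertices(mesh):
        inv_vols = [1.0 / Cell(mesh, int(c)).volume() for c in v.entities(tdim)]
        vals[v.index()] = sum(inv_vols) / len(inv_vols)
    G = FunctionSpace(mesh, "CG", 1)
    g = Function(G)
    d2v = dof_to_vertex_map(G)                  # reorder vertex values into dof order
    g.vector().set_local(vals[d2v])
    return g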

Next, we specify a way to construct target parameterizations fφf_{\varphi} and fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)}, together with their pre-shape material derivatives. We define a target for shape parameterization tracking fφf_{\varphi} by a global target field q:𝔻(0,)q:\mathbb{D}\rightarrow(0,\infty). In order to satisfy the normalization condition eq. 3, which is necessary for existence of solutions and for stable algorithms, a normalization factor is included. This gives

fφ=MgMdsφ(M)q|φ(M)dsq|φ(M).f_{\varphi}=\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q_{|\varphi(M)}\;\mathrm{d}s}\cdot q_{|\varphi(M)}. (49)

With this construction, the targeted parameterization of φ(M)\varphi(M) depends on its location and shape in 𝔻\mathbb{D}, as q:𝔻(0,)q:\mathbb{D}\rightarrow(0,\infty) is allowed to vary on the whole domain.
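In a FEniCS implementation, the normalization factor of eq. 49 could be assembled as in the following sketch; the interior-facet measure dS_shape over the discretized shape and the precomputed constant mass_gM are assumptions of this sketch.

from dolfin import *

# Sketch (assumptions: q is a continuous UFL expression on the hold-all domain,
# mass_gM is the integral of g^M over the initial shape M, computed once, and
# dS_shape is an interior-facet measure restricted to the facets of phi(M)):
# normalized shape target f_phi of eq. 49; avg() is used since the integrand
# is single-valued on the shape facets.
def shape_target(q, mass_gM, dS_shape):
    mass_q = assemble(avg(q) * dS_shape)   # integral of q over phi(M)
    return (mass_gM / mass_q) * q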

The corresponding material derivative is derived in [12] and has the closed form

𝔇m(fφ)[V]=MgMds(φ(M)qds)2qφ(M)qnV,nds+MgMdsφ(M)qdsqTV.\mathfrak{D}_{m}(f_{\varphi})[V]=-\frac{\int_{M}g^{M}\;\mathrm{d}s}{\big{(}\int_{\varphi(M)}q\;\mathrm{d}s\big{)}^{2}}\cdot q\cdot\int_{\varphi(M)}\frac{\partial q}{\partial n}\cdot\langle V,n\rangle\;\mathrm{d}s+\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot\nabla q^{T}V. (50)

Notice that eq. 50 includes both normal and tangential components. However, only its tangential component is needed if regularized gradient systems eq. 13 and eq. 38 are applied. We will write down explicit right hand sides to gradient systems for our exemplary problem in section 3.1.3.

For a volume target fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)}, we have to satisfy the different normalization condition eq. 24 to guarantee existence of solutions. We propose to use a field q𝔻:𝔻(0,)q^{\mathbb{D}}:\mathbb{D}\rightarrow(0,\infty) defined on the hold-all domain. Then a corresponding target can be defined as

fφ(M)𝔻={𝔻φing𝔻dx𝔻φinq𝔻dxq𝔻 for x𝔻φin𝔻φoutg𝔻dx𝔻φoutq𝔻dxq𝔻 for x𝔻φout¯.f^{\mathbb{D}}_{\varphi(M)}=\begin{cases}\frac{\int_{\mathbb{D}^{\text{in}}_{\varphi}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}_{\varphi}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\quad\;\,\text{ for }x\in\mathbb{D}^{\text{in}}_{\varphi}\\ \frac{\int_{\mathbb{D}^{\text{out}}_{\varphi}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}_{\varphi}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\quad\text{ for }x\in\overline{\mathbb{D}^{\text{out}}_{\varphi}}.\end{cases} (51)

This is different from the construction of the targets fφf_{\varphi} for embedded shapes, since the function fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)} only changes in order to guarantee the normalization condition eq. 24. It cannot vary due to a change of the shape of 𝔻\mathbb{D}, which remains fixed. This is in contrast to the situation for φ(M)𝔻\varphi(M)\subset\mathbb{D}, which can change its position in 𝔻\mathbb{D}. Also notice that fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)} as defined in eq. 51 can be discontinuous across the shape φ(M)\varphi(M). However, in the existence and consistency results theorem 2 and theorem 3 we have not demanded continuity or smoothness of fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)} on the entire domain 𝔻\mathbb{D}. Smoothness of fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)} is only demanded on the interior 𝔻φin\mathbb{D}^{\text{in}}_{\varphi} and exterior 𝔻φout\mathbb{D}^{\text{out}}_{\varphi} parts partitioned by φ(M)\varphi(M).
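The piecewise normalization of eq. 51 could be assembled in FEniCS as in the following sketch; the cell measures dx_in and dx_out over the interior and exterior parts of the hold-all domain are assumptions of this sketch.

from dolfin import *

# Sketch (assumptions: qD, gD are UFL expressions on the hold-all domain,
# dx_in and dx_out are volume measures over the parts of D inside and outside
# of phi(M), e.g. dx(1) and dx(2) for a cell marker): piecewise normalized
# volume target f^D of eq. 51, enforcing the normalization condition eq. 24
# on both parts separately.
def volume_target(qD, gD, dx_in, dx_out):
    c_in = assemble(gD * dx_in) / assemble(qD * dx_in)
    c_out = assemble(gD * dx_out) / assemble(qD * dx_out)
    return c_in * qD, c_out * qD   # to be used on D_in and D_out, respectively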

Now we derive the pre-shape material derivative 𝔇m(fφ(M)𝔻)[V]\mathfrak{D}_{m}(f^{\mathbb{D}}_{\varphi(M)})[V] for directions in H𝔻1H^{1}_{\partial\mathbb{D}}. These directions are not forced to vanish on the shape φ(M)\varphi(M), which is needed to assemble combined gradient systems with VV acting as test functions. This poses a difficulty in the derivation, since the partitioning depends on the pre-shape φEmb(M,𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}), but not on ΦDiff𝔻φ(M)(𝔻)\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}). Let us fix a φEmb(M,𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}) and compute on the outer domain 𝔻φout\mathbb{D}^{\text{out}}_{\varphi} with the pre-shape calculus rules from [12]. In this computation we write 𝔻out\mathbb{D}^{\text{out}} instead of 𝔻φout\mathbb{D}^{\text{out}}_{\varphi} for readability.

𝔇m(fφ(M)𝔻)[V]|𝔻out=𝔇m(𝔻outg𝔻dx𝔻outq𝔻dxq𝔻)[V]=q𝔻𝔻outq𝔻dx𝔻out(𝔇(g𝔻)[V]+(g𝔻)TV+div(V)g𝔻)dx𝔻outg𝔻dx(𝔻outq𝔻dx)2q𝔻𝔻out(𝔇(q𝔻)[V]+(q𝔻)TV+div(V)q𝔻)dx+𝔻outg𝔻dx𝔻outq𝔻dx(𝔇(q𝔻)[V]+(q𝔻)TV)=q𝔻𝔻outq𝔻dx𝔻outdiv(g𝔻V)dx𝔻outg𝔻dx(𝔻outq𝔻dx)2q𝔻𝔻outdiv(q𝔻V)dx+𝔻outg𝔻dx𝔻outq𝔻dx(q𝔻)TV=q𝔻𝔻outq𝔻dx𝔻φ(M)(g𝔻𝔻outg𝔻dx𝔻outq𝔻dxq𝔻)V,n𝔻outds+𝔻outg𝔻dx𝔻outq𝔻dx(q𝔻)TV=q𝔻𝔻outq𝔻dxφ(M)(g𝔻𝔻outg𝔻dx𝔻outq𝔻dxq𝔻)V,nφ(M)ds+𝔻outg𝔻dx𝔻outq𝔻dx(q𝔻)TV.\displaystyle\begin{split}\mathfrak{D}_{m}(f^{\mathbb{D}}_{\varphi(M)})[V]_{|\mathbb{D}^{\text{out}}}=&\;\mathfrak{D}_{m}\Bigg{(}\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Bigg{)}[V]\\ =&\;\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\mathbb{D}^{\text{out}}}\big{(}\mathfrak{D}(g^{\mathbb{D}})[V]+\nabla(g^{\mathbb{D}})^{T}V+\operatorname{div}(V)\cdot g^{\mathbb{D}}\big{)}\;\mathrm{d}x\\ &-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\big{(}\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x\big{)}^{2}}\cdot q^{\mathbb{D}}\cdot\int_{\mathbb{D}^{\text{out}}}\big{(}\mathfrak{D}(q^{\mathbb{D}})[V]+\nabla(q^{\mathbb{D}})^{T}V+\operatorname{div}(V)\cdot q^{\mathbb{D}}\big{)}\;\mathrm{d}x\\ &+\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\Big{(}\mathfrak{D}(q^{\mathbb{D}})[V]+\nabla(q^{\mathbb{D}})^{T}V\Big{)}\\ =&\;\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\mathbb{D}^{\text{out}}}\operatorname{div}\big{(}g^{\mathbb{D}}\cdot V\big{)}\;\mathrm{d}x\\ &-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\big{(}\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x\big{)}^{2}}\cdot q^{\mathbb{D}}\cdot\int_{\mathbb{D}^{\text{out}}}\operatorname{div}\big{(}q^{\mathbb{D}}\cdot V\big{)}\;\mathrm{d}x\\ &+\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}V\\ =&\;\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\partial\mathbb{D}\cup\varphi(M)}\Big{(}g^{\mathbb{D}}-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}\cdot\big{\langle}V,n_{\mathbb{D}^{\text{out}}}\big{\rangle}\;\mathrm{d}s\\ &+\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}V\\ =&\;-\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\varphi(M)}\Big{(}g^{\mathbb{D}}-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}\cdot\big{\langle}V,n_{\varphi(M)}\big{\rangle}\;\mathrm{d}s\\ &+\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}V.\end{split} (52)

Here, n𝔻outn_{\mathbb{D}^{\text{out}}} is the outer unit normal vector field on 𝔻out=𝔻φ(M)\partial\mathbb{D}^{\text{out}}=\partial\mathbb{D}\cup\varphi(M), and nφ(M)n_{\varphi(M)} is the outer unit normal vector field on φ(M)\varphi(M). In particular, we used that g𝔻g^{\mathbb{D}} and q𝔻q^{\mathbb{D}} depend neither on Φ\Phi nor on φ\varphi, so their pre-shape derivatives vanish. Also, we have applied Gauss’ theorem and used V𝔻=0V_{\partial\mathbb{D}}=0. Notice the change of sign for the first summand of the last equality, due to n𝔻out=nφ(M)n_{\mathbb{D}^{\text{out}}}=-n_{\varphi(M)} on φ(M)\varphi(M). An analogous computation on the interior 𝔻in\mathbb{D}^{\text{in}} with boundary 𝔻in=φ(M)\partial\mathbb{D}^{\text{in}}=\varphi(M) gives us the pre-shape material derivative

𝔇m(fφ(M)𝔻)[V]={q𝔻𝔻inq𝔻dxφ(M)(g𝔻𝔻ing𝔻dx𝔻inq𝔻dxq𝔻)V,nφ(M)ds+𝔻ing𝔻dx𝔻inq𝔻dx(q𝔻)TV for x𝔻inq𝔻𝔻outq𝔻dxφ(M)(g𝔻𝔻outg𝔻dx𝔻outq𝔻dxq𝔻)V,nφ(M)ds+𝔻outg𝔻dx𝔻outq𝔻dx(q𝔻)TV for x𝔻out¯.\mathfrak{D}_{m}(f^{\mathbb{D}}_{\varphi(M)})[V]=\begin{cases}\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\varphi(M)}\Big{(}g^{\mathbb{D}}-\frac{\int_{\mathbb{D}^{\text{in}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}\cdot\big{\langle}V,n_{\varphi(M)}\big{\rangle}\;\mathrm{d}s\\ \qquad\qquad\qquad\qquad\qquad\qquad+\;\frac{\int_{\mathbb{D}^{\text{in}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}V\qquad\;\;\,\text{ for }x\in\mathbb{D}^{\text{in}}\\ -\frac{q^{\mathbb{D}}}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\int_{\varphi(M)}\Big{(}g^{\mathbb{D}}-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}\cdot\big{\langle}V,n_{\varphi(M)}\big{\rangle}\;\mathrm{d}s\\ \qquad\qquad\qquad\qquad\qquad\qquad+\;\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}V\qquad\;\,\text{ for }x\in\overline{\mathbb{D}^{\text{out}}}.\end{cases} (53)

This pre-shape material derivative is interesting from a theoretical perspective, since it is an example of a derivative depending on the shape of a submanifold φ(M)𝔻\varphi(M)\subset\mathbb{D}, while the pre-shape at hand parameterizes 𝔻\mathbb{D}, which is of different dimension. Also, we see that the sign of the boundary integral on φ(M)\varphi(M) depends on whether the inside or the outside of 𝔻\mathbb{D} is regarded. This nicely reflects that changing φ(M)\varphi(M) adds volume on one side and takes it away from the other. We remind the reader that the normal directions nφ(M)n_{\varphi(M)} are not normal directions corresponding to the shape of 𝔻\mathbb{D}. They rather lie in the interior of 𝔻\mathbb{D}, and hence are part of the fiber or tangential component of Diff(𝔻)=Emb(𝔻,𝔻)\operatorname{Diff}(\mathbb{D})=\operatorname{Emb}(\mathbb{D},\mathbb{D}).

3.1.3 Pre-Shape Gradient Systems

To compute pre-shape gradients UU we need suitable bilinear forms a(.,.)a(.,.). The systems for our gradients are always of the form

a(U,V)=RHS(φ,Φ)[V]VH𝔻1(𝔻,n+1)U=BCon 𝔻.\displaystyle\begin{split}a(U,V)&=\operatorname{RHS}(\varphi,\Phi)[V]\quad\forall V\in H^{1}_{\partial\mathbb{D}}(\mathbb{D},{\mathbb{R}}^{n+1})\\ U&=\operatorname{BC}\qquad\quad\text{on }\partial\mathbb{D}.\end{split} (54)

In our numerical implementations, we test two bilinear forms and four different right hand sides. We abbreviate the right hand sides by RHS(φ,Φ)[V]\operatorname{RHS}(\varphi,\Phi)[V] depending on pre-shapes φEmb(M,𝔻),ΦDiff𝔻φ(M)(𝔻)\varphi\in\operatorname{Emb}(M,\mathbb{D}),\Phi\in\operatorname{Diff}_{\partial\mathbb{D}\cup\varphi(M)}(\mathbb{D}) and test functions VV, and boundary conditions by BC\operatorname{BC}. First, we consider the weak formulation of the linear elasticity equation with zero first Lamé parameter as found in [17]

𝔻μϵ(U):ϵ(V)dx=RHS(φ,Φ)[V]VH01(𝔻,n+1)ϵ(U)=12(UT+U)ϵ(V)=12(VT+V)U=0 on 𝔻.\displaystyle\begin{split}\int_{\mathbb{D}}\mu\cdot\epsilon(U):\epsilon(V)\;\mathrm{d}x&=\operatorname{RHS}(\varphi,\Phi)[V]\qquad\forall V\in H^{1}_{0}(\mathbb{D},\mathbb{R}^{n+1})\\ \epsilon(U)&=\frac{1}{2}(\nabla U^{T}+\nabla U)\\ \epsilon(V)&=\frac{1}{2}(\nabla V^{T}+\nabla V)\\ U&=0\qquad\text{ on }\partial\mathbb{D}.\end{split} (55)
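In UFL, this bilinear form could be written as in the following sketch; the function space and the weighting μ\mu are assumed inputs.

from dolfin import *

# Sketch (assumptions: Vh is a vector P1 space on the hold-all mesh, mu the
# weighting from eq. 57): linear elasticity bilinear form of eq. 55 with zero
# first Lame parameter.
def elasticity_form(Vh, mu):
    U = TrialFunction(Vh)
    V = TestFunction(Vh)
    eps = lambda w: sym(grad(w))     # symmetrized gradient
    return inner(mu * eps(U), eps(V)) * dx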

As the second bilinear form, we consider the weak formulation of the vector valued pp-Laplacian equation. Since systems stemming from the pp-Laplacian can become indefinite, we employ a standard regularization by adding a parameter ε>0\varepsilon>0. To make a comparison with the linear elasticity eq. 55 viable, we use a local weighting μ:𝔻(0,)\mu:\mathbb{D}\rightarrow(0,\infty) in the bilinear form, which then is

𝔻μ(ε2+U:U)p21U:Vdx\displaystyle\int_{\mathbb{D}}\mu\cdot\Big{(}\varepsilon^{2}+\nabla U:\nabla U\Big{)}^{\frac{p}{2}-1}\cdot\nabla U:\nabla V\;\mathrm{d}x =RHS(φ,Φ)[V]VH01(𝔻,n+1).\displaystyle=\operatorname{RHS}(\varphi,\Phi)[V]\qquad\forall V\in H^{1}_{0}(\mathbb{D},\mathbb{R}^{n+1}). (56)

We choose the local weighting μ\mu as the solution of the Poisson problem

Δμ=0in 𝔻μ=μmaxon φ(M)μ=μminon 𝔻\begin{split}-\Delta\mu&=0\qquad\;\;\;\text{in }\mathbb{D}\\ \mu&=\mu_{\text{max}}\quad\,\text{on }\varphi(M)\\ \mu&=\mu_{\text{min}}\quad\;\text{on }\partial\mathbb{D}\end{split} (57)

for μmax,μmin>0\mu_{\text{max}},\mu_{\text{min}}>0. In the context of linear elasticity eq. 55 it can be interpreted as the so-called second Lamé parameter.
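A possible FEniCS realization of the weighting μ\mu from eq. 57 and of the regularized pp-Laplacian form eq. 56 built with it is sketched below; the facet marker and its tags are assumptions of this sketch, and the default parameter values follow the choices reported in section 3.2.

from dolfin import *

# Sketch (assumptions: boundary_markers is a facet MeshFunction marking phi(M)
# with SHAPE_ID and the outer boundary with OUTER_ID): local weighting mu from
# the Poisson problem eq. 57 and the regularized p-Laplacian residual of eq. 56.
def local_weighting(mesh, boundary_markers, SHAPE_ID, OUTER_ID,
                    mu_max=1.0, mu_min=0.05):
    S = FunctionSpace(mesh, "CG", 1)
    mu, v = TrialFunction(S), TestFunction(S)
    bcs = [DirichletBC(S, Constant(mu_max), boundary_markers, SHAPE_ID),
           DirichletBC(S, Constant(mu_min), boundary_markers, OUTER_ID)]
    mu_h = Function(S)
    solve(inner(grad(mu), grad(v)) * dx == Constant(0.0) * v * dx, mu_h, bcs)
    return mu_h

def p_laplacian_residual(U, V, mu, p=6, eps_reg=8.0):
    # nonlinear residual in the unknown U, tested against V, cf. eq. 56
    return mu * (eps_reg**2 + inner(grad(U), grad(U)))**(p / 2 - 1) \
              * inner(grad(U), grad(V)) * dx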

Remark 10 (Sufficiency of Linear Elements for Pre-Shape Regularization).

In order to apply the pre-shape regularization approaches presented in this work, it is completely sufficient to use continuous linear elements to represent the involved functions. As we can see in the pre-shape derivative formulas eq. 8 and eq. 30, the highest order of featured derivatives is one. This is important for application in practice, since existing shape gradient systems do not require higher order elements for volume and shape mesh quality tracking. In particular, all of the following systems are built using continuous first order elements in FEniCS.

Next, we need the shape derivative of the PDE constrained tracking type shape optimization objective 𝒥\mathcal{J}. It can be derived by a Lagrangian approach using standard shape or pre-shape calculus rules, giving

𝒟𝒥(π(φ))[V]=𝔻(yy¯)y¯TVyT(VT+V)p+div(V)(12(yy¯)2+yTprφ(M)p)dx.\displaystyle\begin{split}\mathcal{D}\mathcal{J}(\pi(\varphi))[V]=&\int_{\mathbb{D}}-(y-\bar{y})\nabla\bar{y}^{T}V-\nabla y^{T}(\nabla V^{T}+\nabla V)\nabla p+\operatorname{div}(V)\Big{(}\frac{1}{2}(y-\bar{y})^{2}+\nabla y^{T}\nabla p-r_{\varphi(M)}p\Big{)}\;\mathrm{d}x.\end{split} (58)

Here, pp is the adjoint solving the adjoint system

Δp=(yy¯)in 𝔻p=0 on 𝔻.\displaystyle\begin{split}-\Delta p&=-(y-\bar{y})\quad\text{in }\,\mathbb{D}\\ p&=0\qquad\qquad\text{ on }\partial\mathbb{D}.\end{split} (59)
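For completeness, the state and adjoint Poisson solves entering the shape derivative could look as follows in FEniCS; the scalar space W, the source expression and the target state are assumed inputs of this sketch.

from dolfin import *

# Sketch (assumptions: W is a scalar P1 space on the hold-all mesh, r_expr the
# piecewise constant source of eq. 44, ybar the precomputed target state): the
# state equation of eq. 45 and the adjoint equation eq. 59, both Poisson
# problems with homogeneous Dirichlet data.
def solve_state_and_adjoint(W, r_expr, ybar):
    u, w = TrialFunction(W), TestFunction(W)
    bc = DirichletBC(W, Constant(0.0), "on_boundary")
    y = Function(W)
    solve(inner(grad(u), grad(w)) * dx == r_expr * w * dx, y, bc)
    p = Function(W)
    solve(inner(grad(u), grad(w)) * dx == -(y - ybar) * w * dx, p, bc)
    return y, p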

It is straightforward to derive the shape derivative of the perimeter regularization 𝒥Perim\mathcal{J}^{\text{Perim}}, which takes the form

𝒟𝒥Perim(π(φ))[V]=φ(M)divΓ(V)ds,\displaystyle\mathcal{D}\mathcal{J}^{\text{Perim}}(\pi(\varphi))[V]=\int_{\varphi(M)}\operatorname{div}_{\Gamma}(V)\;\mathrm{d}s, (60)

where divΓ(V)\operatorname{div}_{\Gamma}(V) is the tangential divergence of VV on φ(M)\varphi(M).

In the following we give four right hand sides representing various (un-)regularized approaches to calculate pre-shape gradients. They correspond to the unregularized shape gradient, the shape parameterization tracking regularized pre-shape gradient, the volume and parameterization tracking regularized pre-shape gradient, and the volume and parameterization tracking regularized pre-shape gradient with free tangential outer boundary.

For the unregularized shape gradient, the right hand side of the gradient system eq. 54 takes the standard form

RHS(φ,Φ)[V]=𝒟𝒥(π(φ))[V]+ν𝒟𝒥Perim(π(φ))[V]=𝔻(yy¯)y¯TVyT(VT+V)p+div(V)(12(yy¯)2+yTprφ(M)p)dx+νφ(M)divΓ(V)dsVH𝔻1(𝔻,n+1).\displaystyle\begin{split}\operatorname{RHS}(\varphi,\Phi)[V]=\;&\mathcal{D}\mathcal{J}(\pi(\varphi))[V]+\nu\cdot\mathcal{D}\mathcal{J}^{\text{Perim}}(\pi(\varphi))[V]\\ =\;&\int_{\mathbb{D}}-(y-\bar{y})\nabla\bar{y}^{T}V-\nabla y^{T}(\nabla V^{T}+\nabla V)\nabla p+\operatorname{div}(V)\Big{(}\frac{1}{2}(y-\bar{y})^{2}+\nabla y^{T}\nabla p-r_{\varphi(M)}p\Big{)}\;\mathrm{d}x\\ &+\nu\cdot\int_{\varphi(M)}\operatorname{div}_{\Gamma}(V)\;\mathrm{d}s\qquad\forall V\in H^{1}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{R}^{n+1}).\end{split} (61)

In this case, the respective boundary condition for the gradient system is simply a Dirichlet zero condition BC=0\operatorname{BC}=0.
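As a UFL sketch, the unregularized right hand side eq. 61 could be assembled as follows; the facet normal, the interior-facet measure over the shape and all coefficient functions are assumed inputs, and the tangential divergence is expressed through the normal field.

from dolfin import *

# Sketch (assumptions: V is the test function of the gradient system, y and p
# the state and adjoint solutions, ybar the target state, r_expr the source
# term, nu the perimeter weight, n = FacetNormal(mesh), dS_shape an
# interior-facet measure over phi(M)): unregularized right hand side eq. 61,
# i.e. the volume form of the shape derivative eq. 58 plus the perimeter
# term eq. 60.
def unregularized_rhs(V, y, p, ybar, r_expr, nu, n, dS_shape):
    div_Gamma = lambda w: div(w) - inner(dot(grad(w), n), n)   # tangential divergence
    dJ = (-(y - ybar) * dot(grad(ybar), V)
          - dot(grad(y), dot(grad(V) + grad(V).T, grad(p)))
          + div(V) * (0.5 * (y - ybar)**2 + dot(grad(y), grad(p)) - r_expr * p)) * dx
    dJ_perim = nu * avg(div_Gamma(V)) * dS_shape   # avg(): single-valued on phi(M)
    return dJ + dJ_perim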

Next, we give the right hand side for the shape parameterization regularized pre-shape gradient. For shape parameterization tracking, we employ a target fφf_{\varphi} given by a globally defined function q:𝔻(0,)q:\mathbb{D}\rightarrow(0,\infty) (cf. eq. 49), which in combination yields

RHS(φ,Φ)[V]=𝒟𝒥(π(φ))[V]+ν𝒟𝒥Perim(π(φ))[V]+ατgφ𝒯,V=𝔻(yy¯)y¯TVyT(VT+V)p+div(V)(12(yy¯)2+yTprφ(M)p)dx+νφ(M)divΓ(V)dsφ(M)12((gMφ1detDτφ1)2(MgMdsφ(M)qdsq)2)divΓ(VV,nn)+(gMφ1detDτφ1MgMdsφ(M)qdsq)MgMdsφ(M)qdsΓqTVdsVH𝔻1(𝔻,n+1).\displaystyle\begin{split}\operatorname{RHS}(\varphi,\Phi)[V]=\;&\mathcal{D}\mathcal{J}(\pi(\varphi))[V]+\nu\cdot\mathcal{D}\mathcal{J}^{\text{Perim}}(\pi(\varphi))[V]+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle\\ =\;&\int_{\mathbb{D}}-(y-\bar{y})\nabla\bar{y}^{T}V-\nabla y^{T}(\nabla V^{T}+\nabla V)\nabla p+\operatorname{div}(V)\Big{(}\frac{1}{2}(y-\bar{y})^{2}+\nabla y^{T}\nabla p-r_{\varphi(M)}p\Big{)}\;\mathrm{d}x\\ &+\nu\cdot\int_{\varphi(M)}\operatorname{div}_{\Gamma}(V)\;\mathrm{d}s\\ &-\int_{\varphi(M)}\frac{1}{2}\cdot\Bigg{(}\big{(}g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}\big{)}^{2}-\Big{(}\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot q\Big{)}^{2}\Bigg{)}\cdot\operatorname{div}_{\Gamma}(V-\langle V,n\rangle\cdot n)\\ &\qquad\qquad\quad+\Big{(}g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}-\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot q\Big{)}\cdot\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot\nabla_{\Gamma}q^{T}V\;\mathrm{d}s\qquad\forall V\in H^{1}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{R}^{n+1}).\end{split} (62)

The boundary condition is Dirichlet zero BC=0\operatorname{BC}=0. In order to assemble the shape regularization gφ𝒯,V\langle g_{\varphi}^{\mathcal{T}},V\rangle, it is necessary to compute the tangential Jacobian detDτφ1\operatorname{det}D^{\tau}\varphi^{-1}. In applications, this means storing the vertex coordinates of the initial shape is necessary. Then φ1\varphi^{-1} can be calculated simply as the difference between the current shape node coordinates and the initial ones. Hence there is no need to invert matrices to calculate Dτφ1D^{\tau}\varphi^{-1}. We give a strong reminder that DτD^{\tau} is the covariant derivative, and must not be confused with the tangential derivative (cf. [12, Ex. 2]). In the case of an nn-dimensional manifold φ(M)\varphi(M), the covariant derivative is an n×nn\times n-matrix, whereas the tangential derivative is (n+1)×(n+1)(n+1)\times(n+1). The use of covariant derivatives requires calculating local orthonormal frames, which can be done by standard Gram-Schmidt algorithms. Knowing this, the computation of the Jacobian determinants is inexpensive, since the matrices arising in applications are of size smaller than 3×33\times 3.
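A plain NumPy sketch of these two ingredients, the Gram-Schmidt frames and the covariant Jacobian determinant expressed in those frames, is given below; the data layout (frame vectors as rows, one ambient Jacobian per vertex obtained from coordinate differences) is an assumption of this sketch.

import numpy as np

# Sketch (assumed data layout): build a local orthonormal tangential frame at a
# shape vertex by Gram-Schmidt from spanning edge vectors, and evaluate the
# covariant Jacobian determinant det D^tau phi^{-1} by expressing the ambient
# Jacobian of phi^{-1} in the initial and current frames. Frame matrices carry
# the tangent vectors as rows; for a curve in 2D they are of size 1 x 2.
def gram_schmidt(spanning_vectors):
    basis = []
    for v in spanning_vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

def covariant_jacobian_det(J_phi_inv, tau_init, tau_curr):
    # n x n matrix of the covariant derivative expressed in the chosen frames
    D_tau = tau_init @ J_phi_inv @ tau_curr.T
    return np.linalg.det(D_tau)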

Building on eq. 62, we can construct the right hand side for the shape and volume parameterization regularized pre-shape gradient. For this, we use a volume tracking target fφ(M)𝔻f^{\mathbb{D}}_{\varphi(M)} defined by a field q𝔻:𝔻(0,)q^{\mathbb{D}}:\mathbb{D}\rightarrow(0,\infty) (eq. 51). This finally gives

RHS(φ,Φ)[V]=𝒟𝒥(π(φ))[V]+ν𝒟𝒥Perim(π(φ))[V]+ατgφ𝒯,V+α𝔻𝔇𝔍𝔻(Φ)[V]=𝔻(yy¯)y¯TVyT(VT+V)p+div(V)(12(yy¯)2+yTprφ(M)p)dx+νφ(M)divΓ(V)dsφ(M)12((gMφ1detDτφ1)2(MgMdsφ(M)qdsq)2)divΓ(VV,nn)+(gMφ1detDτφ1MgMdsφ(M)qdsq)MgMdsφ(M)qdsΓqTVds𝔻out12((g𝔻Φ1detDΦ1)2(𝔻outg𝔻Φ1detDΦ1dx𝔻outq𝔻dxq𝔻)2)div(PrHMφ(M)1(V))+(g𝔻Φ1detDΦ1𝔻outg𝔻Φ1detDΦ1dx𝔻outq𝔻dxq𝔻)𝔻outg𝔻Φ1detDΦ1dx𝔻outq𝔻dx(q𝔻)TPrHMφ(M)1(V)dx𝔻in12((g𝔻Φ1detDΦ1)2(𝔻ing𝔻Φ1detDΦ1dx𝔻inq𝔻dxq𝔻)2)div(PrHMφ(M)1(V))+(g𝔻Φ1detDΦ1𝔻ing𝔻Φ1detDΦ1dx𝔻inq𝔻dxq𝔻)𝔻ing𝔻Φ1detDΦ1dx𝔻inq𝔻dx(q𝔻)TPrHMφ(M)1(V)dxVH𝔻1(𝔻,n+1).\displaystyle\begin{split}&\qquad\qquad\qquad\operatorname{RHS}(\varphi,\Phi)[V]=\;\mathcal{D}\mathcal{J}(\pi(\varphi))[V]+\nu\cdot\mathcal{D}\mathcal{J}^{\text{Perim}}(\pi(\varphi))[V]+\alpha^{\tau}\cdot\langle g_{\varphi}^{\mathcal{T}},V\rangle+\alpha^{\mathbb{D}}\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V]\\ =\;&\int_{\mathbb{D}}-(y-\bar{y})\nabla\bar{y}^{T}V-\nabla y^{T}(\nabla V^{T}+\nabla V)\nabla p+\operatorname{div}(V)\Big{(}\frac{1}{2}(y-\bar{y})^{2}+\nabla y^{T}\nabla p-r_{\varphi(M)}p\Big{)}\;\mathrm{d}x\\ &+\nu\cdot\int_{\varphi(M)}\operatorname{div}_{\Gamma}(V)\;\mathrm{d}s\\ &-\int_{\varphi(M)}\frac{1}{2}\cdot\Bigg{(}\big{(}g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}\big{)}^{2}-\Big{(}\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot q\Big{)}^{2}\Bigg{)}\cdot\operatorname{div}_{\Gamma}(V-\langle V,n\rangle\cdot n)\\ &\qquad\qquad\quad+\Big{(}g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}-\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot q\Big{)}\cdot\frac{\int_{M}g^{M}\;\mathrm{d}s}{\int_{\varphi(M)}q\;\mathrm{d}s}\cdot\nabla_{\Gamma}q^{T}V\;\mathrm{d}s\\ &-\int_{\mathbb{D}^{\text{out}}}\frac{1}{2}\cdot\Bigg{(}\big{(}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\big{)}^{2}-\Big{(}\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}^{2}\Bigg{)}\cdot\operatorname{div}(\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V))\\ &\qquad\qquad\quad+\Bigg{(}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}-\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Bigg{)}\cdot\frac{\int_{\mathbb{D}^{\text{out}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{out}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V)\;\mathrm{d}x\\ &-\int_{\mathbb{D}^{\text{in}}}\frac{1}{2}\cdot\Bigg{(}\big{(}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\big{)}^{2}-\Big{(}\frac{\int_{\mathbb{D}^{\text{in}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Big{)}^{2}\Bigg{)}\cdot\operatorname{div}(\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V))\\ &\qquad\qquad\quad+\Bigg{(}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}-\frac{\int_{\mathbb{D}^{\text{in}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot q^{\mathbb{D}}\Bigg{)}\cdot\frac{\int_{\mathbb{D}^{\text{in}}}g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}\;\mathrm{d}x}{\int_{\mathbb{D}^{\text{in}}}q^{\mathbb{D}}\;\mathrm{d}x}\cdot\nabla(q^{\mathbb{D}})^{T}\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V)\;\mathrm{d}x\\ &\hskip 
\qquad\forall V\in H^{1}_{\partial\mathbb{D}}(\mathbb{D},\mathbb{R}^{n+1}).\end{split} (63)

The last two integrals correspond to the regularizer for volume parameterization tracking. As in the previous cases, the corresponding Dirichlet condition is given by BC=0\operatorname{BC}=0. All previous remarks on assembling the right hand side remain valid. Additionally, it is necessary to store the coordinates of the entire initial hold-all domain. With these, the volume pre-shape Φ1\Phi^{-1} can be calculated as the difference of initial to current coordinates of the volume mesh. For volume regularization, calculation of the Jacobian determinants detDΦ1\operatorname{det}D\Phi^{-1} does not require local orthonormal frames via Gram-Schmidt algorithms, as no covariant derivatives are used. It is very important to use a correct normalization for q𝔻q^{\mathbb{D}} to ensure existence of solutions. This is necessary, since in practical applications an optimization step leads to a change of the underlying shape, and thus of the inner and outer components of 𝔻\mathbb{D}. Hence it is not enough to simply estimate g𝔻g^{\mathbb{D}} once in the beginning. Either g𝔻g^{\mathbb{D}} needs to be estimated by eq. 48 in every iteration in which the shape φ(M)\varphi(M) changes, or g𝔻g^{\mathbb{D}} is replaced by g𝔻Φ1detDΦ1g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1}, which is motivated by the transformation rule. We have decided for the latter, as can be seen in the last two terms of eq. 63. This also needs to be taken into account when calculating 𝔍𝔻\mathfrak{J}^{\mathbb{D}}, e.g. for the line search. As explained in section 2.2.3, it is necessary to use PrHMφ(M)1(V)\operatorname{Pr}_{H^{1}_{\partial M\cup\varphi(M)}}(V) as directions for the volume regularization if shapes are enforced to stay invariant. The projection can be realized by setting the degrees of freedom of the finite element representation of VV to zero on the shape φ(M)\varphi(M). This leads to vanishing of the first term of 𝔇(fφ(M)𝔻)[V]\mathfrak{D}(f_{\varphi(M)}^{\mathbb{D}})[V] (cf. eq. 53), which therefore does not occur in eq. 63.
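The coordinate-difference representation of Φ1\Phi^{-1} and the resulting Jacobian determinant could be set up in FEniCS as in the following sketch; the stored initial coordinate array is an assumed input.

from dolfin import *
import numpy as np

# Sketch (assumptions: Vh is a vector P1 space on the current mesh, X0 are the
# stored initial vertex coordinates as an (n_vertices, dim) array): represent
# the displacement of Phi^{-1} as the difference of initial and current
# coordinates and build det D Phi^{-1} as a UFL expression; the transformed
# density g^D o Phi^{-1} * det D Phi^{-1} of eq. 63 can then be assembled
# directly from it.
def inverse_volume_map(Vh, X0):
    mesh = Vh.mesh()
    v2d = vertex_to_dof_map(Vh)           # dofs per vertex, components contiguous
    disp = Function(Vh)                   # displacement w with Phi^{-1}(x) = x + w(x)
    vals = np.zeros(Vh.dim())
    vals[v2d] = (X0 - mesh.coordinates()).flatten()
    disp.vector().set_local(vals)
    detDPhiInv = det(Identity(mesh.geometry().dim()) + grad(disp))
    return disp, detDPhiInv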

Lastly, the right hand side for a volume and parameterization tracking regularized pre-shape gradient with free tangential outer boundary is given by eq. 63 as well. However, instead of employing Dirichlet zero boundary conditions, we permit the boundary 𝔻\partial\mathbb{D} to move tangentially. For this, we set

BCΦ=α𝔻UL2 on 𝔻,\operatorname{BC}_{\Phi}=\alpha^{\partial\mathbb{D}}\cdot U_{L^{2}}\quad\text{ on }\partial\mathbb{D}, (64)

for a scaling factor α𝔻>0\alpha^{\partial\mathbb{D}}>0. Here, UL2U_{L^{2}} is the L2L^{2}-representation of the tangential component of 𝔇𝔍𝔻(Φ)\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi), i.e.

𝔻UL2,Vds=𝔇𝔍𝔻(Φ)[VV,n𝔻n𝔻]VL2(𝔻,n+1).\int_{\partial\mathbb{D}}\langle U_{L^{2}},V\rangle\;\mathrm{d}s=\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi)[V-\langle V,n_{\partial\mathbb{D}}\rangle\cdot n_{\partial\mathbb{D}}]\qquad\forall V\in L^{2}(\partial\mathbb{D},{\mathbb{R}}^{n+1}). (65)

Notice that in practice this does not require the solution of a PDE on 𝔻\partial\mathbb{D}, since the tangential values of 𝔇𝔍𝔻(Φ)\mathfrak{D}\mathfrak{J}^{\mathbb{D}}(\Phi) can be extracted directly from its finite element representation. We remind the reader that this is more of a heuristic approach, which we will refine in future work.

3.2 Numerical Results and Comparison of Algorithms

In this subsection we explore computational results of employing unregularized and various pre-shape regularized gradient descents for shape optimization problem eq. 45. We propose an algorithm 1, which is a modified gradient descent method with a backtracking line search featuring regularized gradients. We present 7 implementations of pre-shape gradient descent methods. The first 4 feature the linear elasticity metric eq. 55 with unregularized, shape regularized, volume and shape regularized, and volume and shape regularized free tangential outer boundary right hand sides. The other 3 feature the regularized pp-Laplacian metric eq. 56 with unregularized, shape regularized, volume and shape regularized right hand sides. For the pp-Laplacian metric we dismiss the free tangential outer boundary regularization, since solving it requires a modified Newton’s method and slightly complicates our approach. Both the linear elasticity metric eq. 55 and the regularized pp-Laplacian metric eq. 56 involve a local weighting function μ\mu stemming from eq. 57 inspired by [17]. The two approaches for these metrics without any type of pre-shape regularization are denoted as their ’Vanilla’ versions. For implementations we use the open-source finite-element software FEniCS (cf. [11, 1]). Construction of meshes is done via the free meshing software Gmsh (cf. [7]). We perform our calculations using a single Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz featuring 6 GB of RAM.

Algorithm 1 is essentially a steepest descent with a backtracking line search. The regularization procedures for shape and volume mesh quality take place by modifying the right hand sides as described in section 3.1.3. However, we want to point out some important differences of algorithm 1 compared to a standard gradient descent for shape optimization. First, notice that the initial mesh coordinates are stored in order to calculate φk1\varphi_{k}^{-1} and Φk1\Phi_{k}^{-1}. This corresponds to setting the initial pre-shapes Φ0=id𝔻0\Phi_{0}=\operatorname{id}_{\mathbb{D}_{0}} and φ0=idM\varphi_{0}=\operatorname{id}_{M}. Since the current mesh coordinates are necessarily stored in a standard gradient descent, φk1\varphi_{k}^{-1} and Φk1\Phi_{k}^{-1} are calculated as mesh coordinate differences. Calculating these inverse embeddings amounts to a matrix difference operation, and therefore is of negligible computational burden. Estimating the initial vertex distributions gMg^{M} and g𝔻g^{\mathbb{D}} needs to be done only once at the beginning of our routine. Hence it does not contribute to computational cost in a significant way. If shape regularization takes part in the gradient system, it is necessary to compute and store local tangential orthonormal frames of the initial shape τ0\tau_{0}. Together with the calculation of local tangential orthonormal frames τk\tau_{k} for the current shape φk(M)\varphi_{k}(M), these are used to assemble the covariant Jacobian determinant for the regularized right-hand side of the gradient systems. Since this needs to be done for each new iterate φk\varphi_{k}, it indeed increases computational cost. If required, this can be mitigated by parallel computing, since tangential orthonormal bases can be calculated simultaneously for all points pφk(M)p\in\varphi_{k}(M). Another difference to standard steepest descent methods concerns the condition of convergence in line 66 of algorithm 1. It features two conditions, namely sufficient decrease in either the absolute or relative norm of the pre-shape gradient, and sufficient decrease of relative values of the original shape objective 𝒥\mathcal{J}. We use this approach since several objective functionals participate simultaneously in the formation of the pre-shape gradients UkU_{k}. If shape or volume regularization takes place, it influences the size of the gradients depending on the mesh quality. In order to compare different (un-)regularized gradient systems, we use this criterion to guarantee the same decrease of the original problem's objective for all strategies. For the same reason, the line search checks for a sufficient decrease of the combined objective functionals matching the gradient regularizations. In some sense, this is a weighted descent for multicriteria optimization, where the objectives are 𝒥\mathcal{J} and the regularizations 𝔍τ\mathfrak{J}^{\tau} and 𝔍𝔻\mathfrak{J}^{\mathbb{D}}.

Furthermore, we mention the difference between our two tested metrics a(.,.)a(.,.) acting as left hand sides. The linear elasticity metric eq. 55 leads to a linear system, which is solvable by standard techniques such as the CG-method. It is reported in [13] that the pp-Laplacian metric has particular advantages in resolving sharp edges or kinks of optimal shapes. Illustration of this is not the goal of this paper. However, the pp-Laplacian system eq. 56 is increasingly non-linear for larger p2p\geq 2. This significantly increases the computational cost and the burden of implementation, since Newton's method requires multiple linear system solves. Also, the systems are possibly indefinite if the regularization parameter ε>0\varepsilon>0 is too small. If it is chosen too large, we pay for positive definiteness by over-regularizing the gradient systems. In order to achieve convergence of Newton's method for the pp-Laplacian, we use gradients from the previous shape optimization step as an initial guess.

Remark 11 (Integrating Shape and Volume Regularization in Existing Solvers).

Implementing shape and volume regularization with the pre-shape approach does not require much overhead if an existing solver for the shape optimization problem of concern is available. It solely requires access to the gradient systems eq. 54 and to the mesh morphing used to update meshes and shapes. With this, adding regularization terms in the style of eq. 62 or eq. 63 to existing right hand sides is all that needs to be done. This does not affect the user's choice of preferred metrics a(.,.)a(.,.) to represent gradients. We highlight this by implementing and comparing our regularizations for the linear elasticity and the non-linear pp-Laplacian metrics. From this perspective, algorithm 1 is only an in-depth explanation of how the right-hand side modifications of gradient systems are assembled.

1Set starting domain 𝔻0\mathbb{D}_{0} and shape φ0(M)=π(φ0)\varphi_{0}(M)=\pi(\varphi_{0}) and save the corresponding vertex coordinates for future computations
2 Choose pre-shape regularizations by setting ατ,α𝔻0\alpha^{\tau},\alpha^{\mathbb{D}}\geq 0
3 Set shape and volume targets q,q𝔻:𝔻(0,)q,q^{\mathbb{D}}:\mathbb{D}\rightarrow(0,\infty)
4 Estimate initial infinitesimal point distributions gMg^{M} for φ0(M)\varphi_{0}(M) and g𝔻g^{\mathbb{D}} for 𝔻0\mathbb{D}_{0} according to eq. 48
5 Calculate local orthonormal tangential bases τ0(p)\tau_{0}(p) for each vertex pφ0(M)p\in\varphi_{0}(M) using Gram-Schmidt orthonormalization, and save them for future iterations
6 While (Uk>εabs\Big{(}\|U_{k}\|>\varepsilon_{\text{abs}} and UkU0>εrel)\frac{\|U_{k}\|}{\|U_{0}\|}>\varepsilon_{\text{rel}}\Big{)} or 𝒥(π(φk))𝒥(π(φ0))>εrel𝒥\frac{\mathcal{J}(\pi(\varphi_{k}))}{\mathcal{J}(\pi(\varphi_{0}))}>\varepsilon_{\text{rel}}^{\mathcal{J}} do:
7 Assemble right-hand-side of pre-shape gradient system eq. 54:
8 solve for state solution yky_{k} via eq. 45
9 solve for adjoint solution pkp_{k} via eq. 59
10 Calculate local orthonormal tangential bases τφk\tau^{\varphi_{k}} for each vertex of φk(M)\varphi_{k}(M) with same orientation as τ0\tau_{0} using Gram-Schmidt orthonormalization
11 if ατ=0,α𝔻=0\;\;\;\alpha^{\tau}=0,\alpha^{\mathbb{D}}=0: Assemble RHS(φk,Φk)\operatorname{RHS}(\varphi_{k},\Phi_{k}) according to eq. 61
12 elif ατ0,α𝔻=0\alpha^{\tau}\neq 0,\alpha^{\mathbb{D}}=0: Assemble RHS(φk,Φk)\operatorname{RHS}(\varphi_{k},\Phi_{k}) according to eq. 62
13 elif ατ0,α𝔻0\alpha^{\tau}\neq 0,\alpha^{\mathbb{D}}\neq 0: Assemble RHS(φk,Φk)\operatorname{RHS}(\varphi_{k},\Phi_{k}) according to eq. 63
14 Solve for pre-shape gradient UkU_{k}:
15 Calculate local weighting parameters μ\mu by solving eq. 57
16 if linear elasticity:
17 Assemble left-hand-side a(.,.)a(.,.) by eq. 55 and solve by preconditioned CG-method
18 elif pp-Laplacian:
19 Use preconditioned Newton’s method to solve eq. 54 with left-hand-side a(.,.)a(.,.) by eq. 56
20 Perform a linesearch to get a sufficient descent direction U~k\tilde{U}_{k}:
21 U~k1UkUk\tilde{U}_{k}\leftarrow\frac{1}{\|U_{k}\|}\cdot U_{k}
22 while 𝒥(π(φk+U~kφk))+ατ𝔍τ(φk+U~kφk)+α𝔻𝔍𝔻(Φk+U~kΦk)\mathcal{J}(\pi(\varphi_{k}+\tilde{U}_{k}\circ\varphi_{k}))+\alpha^{\tau}\cdot\mathfrak{J}^{\tau}(\varphi_{k}+\tilde{U}_{k}\circ\varphi_{k})+\alpha^{\mathbb{D}}\cdot\mathfrak{J}^{\mathbb{D}}(\Phi_{k}+\tilde{U}_{k}\circ\Phi_{k})
23 𝒥(π(φk))+ατ𝔍τ(φk)+ατ𝔍𝔻(Φk)\qquad\qquad\geq\mathcal{J}(\pi(\varphi_{k}))+\alpha^{\tau}\cdot\mathfrak{J}^{\tau}(\varphi_{k})+\alpha^{\tau}\cdot\mathfrak{J}^{\mathbb{D}}(\Phi_{k}) do:
24 U~k0.5U~k\tilde{U}_{k}\leftarrow 0.5\cdot\tilde{U}_{k}
25 Perform updates:
26 φk+1φk+U~kφk\varphi_{k+1}\leftarrow\varphi_{k}+\tilde{U}_{k}\circ\varphi_{k}
27 Φk+1Φk+U~kΦk\Phi_{k+1}\leftarrow\Phi_{k}+\tilde{U}_{k}\circ\Phi_{k}
Algorithm 1 Simultaneous Shape and Volume Regularized Shape Optimization
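The line search of steps 20-24 can be written as a plain backtracking loop. The following sketch assumes a hypothetical callable merit_trial that evaluates \mathcal{J}+\alpha^{\tau}\mathfrak{J}^{\tau}+\alpha^{\mathbb{D}}\mathfrak{J}^{\mathbb{D}} after the trial update, a precomputed value merit_old at the current iterate, and a gradient object supporting scalar multiplication (e.g., its coefficient vector).

```python
def backtracking_line_search(U_k, merit_old, merit_trial, norm, max_halvings=30):
    """Normalized backtracking line search of algorithm 1, steps 20-24."""
    U_tilde = (1.0 / norm(U_k)) * U_k          # step 21: normalize the gradient
    for _ in range(max_halvings):
        if merit_trial(U_tilde) < merit_old:   # steps 22-23: sufficient decrease
            return U_tilde
        U_tilde = 0.5 * U_tilde                # step 24: halve the step
    raise RuntimeError("line search did not find a descent step")
```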

For a meaningful comparison of the 7 mentioned approaches, we use the same parameters for the problem throughout. The parameters for the source term r_{\varphi(M)} of the PDE constraint in eq. 59 are chosen as r_{1}=-1000 and r_{2}=1000. The scaling factor for perimeter regularization is \nu=0.00001. The parameters for calculating local weightings \mu via eq. 57 are \mu_{\text{max}}=1 and \mu_{\text{min}}=0.05 for all approaches. The stopping criteria remain the same for all routines tested. Specifically, the tolerance for the relative decrease of gradient norms is \varepsilon_{\text{rel}}=0.001, for the absolute gradient norm \varepsilon_{\text{abs}}=0.00001, and for the relative decrease of the main objective \varepsilon_{\text{rel}}^{\mathcal{J}}=0.0005. If shape regularization is employed, it is weighted with \alpha^{\tau}=1000 and uses a constant target q\equiv 1, which targets a uniform distribution of surface cell volumes on the shapes. For volume regularization, the weighting is \alpha^{\mathbb{D}}=100 with a constant q^{\mathbb{D}}\equiv 1, targeting uniform cell volumes in the hold-all domain. If we permit free tangential movement of the outer boundary via eq. 64, we choose a weighting parameter \alpha^{\partial\mathbb{D}}=250. In case of the p-Laplacian metric, we choose the exponent p=6. Its regularization parameter is chosen as \varepsilon=8 for the unregularized and the shape and volume regularized routines. For shape regularization without volume regularization, we had to increase the regularization to \varepsilon=9.5, since lower values of \varepsilon at some point resulted in indefinite systems during descent with p-Laplacian gradients.
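For reference, the parameter choices listed above can be collected in a single configuration; the names are illustrative and not tied to any library.

```python
params = {
    "r1": -1000.0, "r2": 1000.0,    # source term values of the PDE constraint
    "nu": 1e-5,                     # perimeter regularization factor
    "mu_max": 1.0, "mu_min": 0.05,  # bounds for local weightings mu, eq. 57
    "eps_rel": 1e-3,                # relative gradient norm tolerance
    "eps_abs": 1e-5,                # absolute gradient norm tolerance
    "eps_rel_J": 5e-4,              # relative main objective decrease tolerance
    "alpha_tau": 1000.0,            # shape regularization weight (target q = 1)
    "alpha_D": 100.0,               # volume regularization weight (target q_D = 1)
    "alpha_dD": 250.0,              # weight for free tangential outer boundary
    "p": 6,                         # p-Laplacian exponent
    "eps_pLap": 8.0,                # p-Laplacian regularization (9.5 for the
                                    # shape-only regularized p-Laplacian routine)
}
```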

We compare relative values of \mathcal{J}, \mathfrak{J}^{\tau} and \mathfrak{J}^{\mathbb{D}}, which are illustrated in fig. 2. Here, \mathfrak{J}^{\tau} is interpretable as the deviation of the shape mesh from a surface mesh with equidistant edges. Similarly, \mathfrak{J}^{\mathbb{D}} can be understood as the deviation of the volume mesh from a volume mesh with uniform cell volumes. Since a change of mesh coordinates leads to different solution qualities for the PDE constraint of eq. 45, the regularizations do affect the original objective \mathcal{J}. Hence, we also measure the distance between the shapes \varphi_{k}(M) and the target shape by a Hadamard-like distance function

\operatorname{dist}(\varphi_{k}(M),N) := \int_{\varphi_{k}(M)} \min_{p\in N} \|s-p\| \;\mathrm{d}s.   (66)

This gives us a geometric measure of convergence for our algorithms, complementing the values of the objective functionals.
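On the discrete level, eq. 66 can be approximated by a vertex-weighted quadrature on the shape polyline; the following numpy sketch uses hypothetical arrays shape_pts (ordered vertices of \varphi_{k}(M), assumed to form a closed polyline) and target_pts (a sampling of the target shape N).

```python
import numpy as np

def mesh_distance(shape_pts, target_pts):
    """Discrete approximation of the mesh distance eq. 66 in 2D."""
    # nearest-neighbor distance of each shape vertex to the sampled target shape
    diffs = shape_pts[:, None, :] - target_pts[None, :, :]
    nearest = np.min(np.linalg.norm(diffs, axis=-1), axis=1)
    # vertex weights: half the length of the two adjacent edges (closed polyline)
    edges = np.linalg.norm(np.roll(shape_pts, -1, axis=0) - shape_pts, axis=1)
    weights = 0.5 * (edges + np.roll(edges, 1))
    return float(np.sum(weights * nearest))
```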

Figure 1: Initial mesh \mathbb{D}=[0,1]\times[0,2.35] with embedded initial shape M=S^{0.35}_{(0.5,0.5)}. The bottle-like target shape is included as well.
                    LE Vanilla   LE Tang   LE VolTang   LE VolTang Free   p-L Vanilla   p-L Tang   p-L VolTang
total time               49.0s    345.4s       199.9s            320.7s        135.4s     316.5s        325.8s
avg. time per step        1.2s      2.5s         2.6s              2.8s          2.4s       2.0s          4.6s
number of steps             41       137           77               114            55        155            70
Table 1: Total times, averaged times per step and number of steps for all 7 methods (for the uniform stopping criteria see the text above).

From fig. 2 (a) and (b), we see that all 7 methods converge. They all minimize the original shape objective \mathcal{J} and the geometric mesh distance to the target shape (cf. fig. 1). That the mesh distance is minimized for all methods confirms that the optimal shape of the original problem eq. 45 is left invariant by our pre-shape regularizations. Also, convergence to the optimal shape is not affected by the choice of regularization and metric a(.,.). In fig. 2 (a) we also see that, given a fixed metric a(.,.), the values of the original target \mathcal{J} for the regularized routines vary only slightly from the unregularized one. This means that intermediate shapes \varphi_{k}(M) are left nearly invariant by all regularization approaches as well. We observe that the p-Laplacian metric eq. 56 gives a pre-shape gradient with slightly slower convergence compared to the linear elasticity metric eq. 55, both for the unregularized and for all regularized variants. However, one should keep in mind that the shapes considered here have rather smooth boundaries. Notice that all gradients are normalized in the line search of algorithm 1, which permits this comparison.

In table 1 we present the times for all optimization runs. The fastest method in both time and step count was the unregularized approach with the linear elasticity metric. The regularized approaches all need more steps for convergence, since the convergence condition requires sufficient reduction of the gradient norms. As the shape and volume tracking objectives \mathfrak{J}^{\tau} and \mathfrak{J}^{\mathbb{D}} participate in this condition, the optimization routine continues to optimize for mesh quality despite a sufficient reduction of the original target \mathcal{J}. This can be verified in fig. 2. Notice that additional volume regularization did not considerably increase the average computational time per step for the linear elasticity approach. The times for approaches featuring shape regularization can be improved by computing the tangential orthonormal bases in parallel. We relied on a rather inefficient but convenient calculation of these by solving several projection problems using FEniCS.

From table 1, we see that the unregularized p-Laplacian approach for p=6 needs more steps to converge compared to a linear elasticity gradient. The average time per step is higher too, since Newton's method needs to be applied to solve the non-linear gradient system. This approach needs careful selection of the regularization parameter \varepsilon>0 for eq. 56, since the mesh quality degrades quickly for our problem. This makes calculation of gradients by Newton's method difficult, since conditioning of the systems and indefiniteness become an issue at some point of the shape optimization routine. If the shape regularized p-Laplacian gradient is compared to the unregularized p-Laplacian gradient, the computational times were slightly faster on average. We attribute this to faster convergence of Newton's method, since we needed to employ a higher regularization parameter \varepsilon for the shape regularized routine. This also explains the longer computational times for the volume regularized p-Laplacian, since the same regularization parameter \varepsilon as in the unregularized approach is permissible, but more Newton iterations are necessary. Since shape regularization takes place simultaneously with volume regularization in that routine, the lower permissible regularization \varepsilon indeed shows that volume regularization improves the conditioning of the gradient systems.

(a) relative shape objective value \mathcal{J}(\pi(\varphi_{i})) / \mathcal{J}(\pi(\varphi_{0}))
(b) log-mesh distance of \varphi_{i}(M) and target shape
(c) relative shape tracking value \mathfrak{J}^{\tau}(\varphi_{i}) / \mathfrak{J}^{\tau}(\varphi_{0})
(d) relative volume tracking value \mathfrak{J}^{\mathbb{D}}(\Phi_{i}) / \mathfrak{J}^{\mathbb{D}}(\Phi_{0})
Figure 2: Relative values of the three objective functionals \mathcal{J}, \mathfrak{J}^{\tau} and \mathfrak{J}^{\mathbb{D}}, and logged mesh distance to the target shape, for gradient descents using 7 different (un-)regularized pre-shape gradients and metrics.

To analyze the quality of the shape mesh for all routines, we provide the relative value of the shape parameterization tracking target \mathfrak{J}^{\tau} in fig. 2 (c), as well as g^{M}\circ\varphi^{-1}_{k}\cdot\det D^{\tau}\varphi_{k}^{-1} for the final shapes in fig. 4 and fig. 7. The relative value of \mathfrak{J}^{\tau} in fig. 2 (c) measures the deviation of the current shape mesh \varphi_{k}(M) from a uniform surface mesh, so larger values indicate more non-uniform shape meshes. The colors in fig. 4 and fig. 7 highlight the variation of node densities on the shape meshes, where a uniform color indicates approximately equidistant surface nodes. As the starting mesh seen in fig. 1 is constructed via Gmsh, it features an approximately uniform surface mesh. However, for both the unregularized linear elasticity and p-Laplacian approaches, we see in fig. 2 (c) that \mathfrak{J}^{\tau} increases during optimization. This means that the surface mesh quality deteriorates if no regularization takes place. For the final shapes, this is visualized in fig. 4 (a) and fig. 7 (a), where we clearly see an expansion of cell volumes at the targeted bump at the top. All other routines involve a shape quality regularization by \mathfrak{J}^{\tau}. In fig. 2 (c) it is visible that also for these routines the deviation \mathfrak{J}^{\tau} from uniform surface meshes increases initially. Once the surface mesh quality becomes sufficiently bad, the shape parameterization tracking takes effect and corrects the quality until approximate uniformity is achieved. We can clearly see this by the convergence of \mathfrak{J}^{\tau} for all shape regularized methods in fig. 2 (c). Also, we see an approximately uniform color of g^{M}\circ\varphi^{-1}_{k}\cdot\det D^{\tau}\varphi_{k}^{-1} for the final shapes in fig. 4 and fig. 7, which indicates a nearly equidistant surface mesh. As a caveat, we see in fig. 3 (b) and fig. 6 (b) that shape regularization without volume regularization decreases the quality of the surrounding volume mesh. This happens because surface vertices are transported from areas with low cell volume at the bottom to areas with high cell volume at the top. If no volume regularization takes place, the node coordinates of the hold-all domain are not corrected for this change. Nevertheless, if a remeshing strategy is employed for shape optimization including shape regularization, the improved surface mesh quality leads to a superior remeshed domain. Such routines are an interesting subject for future work.
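As a rough discrete analogue of the visualized node density, one can compute per-vertex densities of a closed 2D polyline shape mesh and their deviation from uniformity. This numpy sketch with hypothetical names is only meant to illustrate the interpretation of the colors in fig. 4 and fig. 7; it is not the pre-shape functional \mathfrak{J}^{\tau} itself.

```python
import numpy as np

def surface_node_density(shape_pts):
    """Reciprocal local edge-length share per vertex of a closed polyline."""
    edges = np.linalg.norm(np.roll(shape_pts, -1, axis=0) - shape_pts, axis=1)
    local_share = 0.5 * (edges + np.roll(edges, 1))   # length assigned to a vertex
    return 1.0 / local_share

def surface_uniformity_deviation(shape_pts):
    """Root-mean-square relative deviation from an equidistant surface mesh."""
    dens = surface_node_density(shape_pts)
    return float(np.linalg.norm(dens / dens.mean() - 1.0) / np.sqrt(dens.size))
```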

(a) Linear elasticity without regularization
(b) Linear elasticity with tangential parameterization tracking
(c) Linear elasticity with tangential and volume parameterization tracking
(d) Linear elasticity with tangential and free outer boundary volume parameterization tracking
Figure 3: Meshes of final steps of the respective algorithms. Color depicts the value of g^{\mathbb{D}}\circ\varphi^{-1}\cdot\det D\varphi^{-1}, which is interpretable as the density of allocated volume mesh vertices. A more constant value corresponds to better volume mesh quality.
(a) Linear elasticity without regularization
(b) Linear elasticity with tangential parameterization tracking
(c) Linear elasticity with tangential and volume parameterization tracking
(d) Linear elasticity with tangential and free outer boundary volume parameterization tracking
Figure 4: Shape meshes of final steps of the respective algorithms. Color depicts the value of g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}, which is interpretable as the density of allocated shape mesh vertices. A more constant value corresponds to better shape mesh quality.

In fig. 2 (d), the relative values of the volume parameterization tracking functional \mathfrak{J}^{\mathbb{D}} are depicted for each routine and step. We interpret these values as a measure of non-uniformity of the volume mesh of \mathbb{D}. The local density of volume vertices g^{\mathbb{D}}\circ\Phi^{-1}\cdot\det D\Phi^{-1} visualizes this and is depicted in fig. 3 and fig. 6. For pictures zoomed at the upper tip of the final shapes, we refer the reader to fig. 5 and fig. 8. From fig. 2 (d) we see that, both for the linear elasticity and the p-Laplacian metric, non-volume regularized approaches have significantly higher values of \mathfrak{J}^{\mathbb{D}}. The values even increase for the p-Laplacian metric, while there is a slight decrease for the linear elasticity metric. Notice that the initial mesh is locally refined near the shape M, which naturally increases the initial value of \mathfrak{J}^{\mathbb{D}} for a uniform target. As already discussed, we see in fig. 2 (d) that shape regularized approaches reduce the quality of the volume mesh even further compared to the unregularized approaches. The decrease of mesh quality is especially visible in the zoomed pictures fig. 5 and fig. 8 (a) and (b). We see that for these approaches, volume cells near the shape are compressed to such an extent that their volumes nearly vanish. Also, the cell volume distribution for unregularized and shape regularized approaches varies dramatically, which can be seen in fig. 3 and fig. 6 (a) and (b). If volume regularization by \mathfrak{J}^{\mathbb{D}} is applied, we see in fig. 2 (d) that convergence of \mathfrak{J}^{\mathbb{D}} takes place independently of the metric a(.,.) being used. This is apparent when looking at the volume mesh quality in fig. 3 (c) and (d) and fig. 6 (c). Further notice that in fig. 5 (c) and (d) and fig. 8 (c) severe compression of cells neighboring the top of the final shapes is avoided. Volume cells inside the neck of the final shapes are still more or less compressed for all approaches. The interior cell volume cannot be transported through the shape, since the shape itself must stay invariant. Since the mesh topology is not changed during the optimization routine, there is also limited possibility to redistribute the cell volumes inside the shape. This situation could be remedied by cell fusion, edge swapping or remeshing strategies, which is beyond the scope of this article. Finally, we want to highlight the difference between volume regularizations with and without free tangential outer boundary \partial\mathbb{D}. If fig. 3 (c) and (d) are compared, we see that the nodes on the outer boundary \partial\mathbb{D} changed position for routine (d). Indeed, the cell volume distribution is more uniform for the free outer boundary routine (d), which is visualized by less variation of color. This leads to an even further increase of volume mesh quality, which can also be pinpointed in fig. 2 (d).
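An analogous discrete check on the volume mesh computes the triangle areas and their relative spread; again, this is an illustrative numpy sketch with hypothetical names and not the functional \mathfrak{J}^{\mathbb{D}}.

```python
import numpy as np

def cell_volumes(vertices, cells):
    """Triangle areas of a 2D mesh; vertices: (n_v, 2), cells: (n_c, 3) indices."""
    a, b, c = vertices[cells[:, 0]], vertices[cells[:, 1]], vertices[cells[:, 2]]
    ab, ac = b - a, c - a
    return 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

def volume_uniformity_deviation(vertices, cells):
    """Relative spread of cell volumes; zero corresponds to uniform cells."""
    vols = cell_volumes(vertices, cells)
    return float(vols.std() / vols.mean())
```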

(a) Linear elasticity without regularization
(b) Linear elasticity with tangential parameterization tracking
(c) Linear elasticity with tangential and volume parameterization tracking
(d) Linear elasticity with tangential and free outer boundary volume parameterization tracking
Figure 5: Meshes of final steps of the respective algorithms, zoomed at the upper tip of the final shapes. Color depicts the value of g^{\mathbb{D}}\circ\varphi^{-1}\cdot\det D\varphi^{-1}, which is interpretable as the density of allocated volume mesh vertices. A more constant value corresponds to better volume mesh quality.
(a) p-Laplacian without regularization
(b) p-Laplacian with tangential parameterization tracking
(c) p-Laplacian with tangential and volume parameterization tracking
Figure 6: Meshes of final steps of the respective algorithms. Color depicts the value of g^{\mathbb{D}}\circ\varphi^{-1}\cdot\det D\varphi^{-1}, which is interpretable as the density of allocated volume mesh vertices. A more constant value corresponds to better volume mesh quality.
(a) p-Laplacian without regularization
(b) p-Laplacian with tangential parameterization tracking
(c) p-Laplacian with tangential and volume parameterization tracking
Figure 7: Shape meshes of final steps of the respective algorithms. Color depicts the value of g^{M}\circ\varphi^{-1}\cdot\det D^{\tau}\varphi^{-1}, which is interpretable as the density of allocated shape mesh vertices. A more constant value corresponds to better shape mesh quality.
(a) p-Laplacian without regularization
(b) p-Laplacian with tangential parameterization tracking
(c) p-Laplacian with tangential and volume parameterization tracking
Figure 8: Meshes of final steps of the respective algorithms, zoomed at the upper tip of the final shapes. Color depicts the value of g^{\mathbb{D}}\circ\varphi^{-1}\cdot\det D\varphi^{-1}, which is interpretable as the density of allocated volume mesh vertices. A more constant value corresponds to better volume mesh quality.

4 Conclusion and Outlook

In this work, we have provided several approaches to regularize general shape optimization problems in order to increase both shape and volume mesh quality using pre-shape calculus. Existence of regularized solutions and consistency of the modified pre-shape gradient systems are guaranteed by several results for simultaneous shape and volume tracking. With the presented gradient system modifications, our goal of leaving optimal shapes of the original problem invariant is achieved. The computational burden is limited, since no additional solution of linear systems for regularized pre-shape gradients is necessary. We also successfully implemented and compared our pre-shape gradient regularization approaches for the linear elasticity and the non-linear p-Laplacian metrics.

There are several possibilities to further develop pre-shape regularization approaches. For one, non-constant targets f can be designed to adapt mesh refinement non-uniformly. In particular, mesh quality targets that increase the solution quality of PDE constraints can be envisioned. Also, we did not touch the topic of pre-shape Hessians, which could be used to further increase the effectiveness of regularization approaches. Furthermore, a combination with discrete techniques, such as remeshing and edge swapping, is possible as well.

Acknowledgements

The authors would like to thank Michael Hinze (Koblenz University) and Martin Siebenborn (Universität Hamburg) for a helpful and interesting discussion on the p-Laplacian metric. This work has been supported by the BMBF (Bundesministerium für Bildung und Forschung) within the collaborative project GIVEN (FKZ: 05M18UTA). Further, the authors acknowledge the support of the DFG research training group 2126 on algorithmic optimization.

References

  • [1] M.S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M.E. Rognes, and G.N. Wells. The FEniCS project version 1.5. Archive of Numerical Software, 3(100), 2015.
  • [2] W. Cao, W. Huang, and R.D. Russell. A Study of Monitor Functions for Two-Dimensional Adaptive Mesh Generation. SIAM Journal on Scientific Computing, 20(6):1978–1994, 1999.
  • [3] B. Dacorogna and J. Moser. On a Partial Differential Equation Involving the Jacobian Determinant. In Annales de l’Institut Henri Poincare (C) Non Linear Analysis, volume 7, pages 1–26. Elsevier, 1990.
  • [4] K. Deckelnick, P.J. Herbert, and M. Hinze. A Novel W^{1,\infty} Approach to Shape Optimisation with Lipschitz Domains. arXiv preprint arXiv:2103.13857, 2021.
  • [5] T. Etling, R. Herzog, E. Loayza, and G. Wachsmuth. First and second order shape optimization based on restricted mesh deformations. arXiv preprint arXiv:1810.10313, 2018.
  • [6] J. Friederich, G. Leugering, and P. Steinmann. Adaptive Finite Elements based on Sensitivities for Topological Mesh Changes. Control and Cybernetics, 43, 2014.
  • [7] C. Geuzaine and J.-F. Remacle. Gmsh: A Three-Dimensional Finite Element Mesh Generator with Built-In Pre-and Post-Processing Facilities. In Proceedings of the Second Workshop on Grid Generation for Numerical Computations, Tetrahedron II, 2007.
  • [8] V. Guillemin and A. Pollack. Differential Topology, volume 370. American Mathematical Soc., 2010.
  • [9] J. Haubner, M. Siebenborn, and M. Ulbrich. A Continuous Perspective on Modeling of Shape Optimal Design Problems. arXiv preprint arXiv:2004.06942, 2020.
  • [10] R. Herzog and E. Loayza-Romero. A Manifold of Planar Triangular Meshes with Complete Riemannian Metric. arXiv preprint arXiv:2012.05624, 2020.
  • [11] A. Logg, K.-A. Mardal, G.N. Wells, et al. Automated Solution of Differential Equations by the Finite Element Method. Springer, 2012.
  • [12] D. Luft and V. Schulz. Pre-Shape Calculus: Foundations and Application to Mesh Quality Optimization. arXiv preprint arXiv:2012.09124, 2020.
  • [13] P.M. Müller, N. Kühl, M. Siebenborn, K. Deckelnick, M. Hinze, and T. Rung. A Novel p-Harmonic Descent Approach applied to Fluid Dynamic Shape Optimization. arXiv preprint, 2021.
  • [14] S. Onyshkevych and M. Siebenborn. Mesh Quality Preserving Shape Optimization using Nonlinear Extension Operators. arXiv preprint arXiv:2006.04420, 2020.
  • [15] G. Savard and J. Gauvin. The Steepest Descent Direction for the Nonlinear Bilevel Programming Problem. Operations Research Letters, 15(5):265–272, 1994.
  • [16] S. Schmidt. A Two Stage CVT/Eikonal Convection Mesh Deformation Approach for Large Nodal Deformations. arXiv preprint arXiv:1411.7663, 2014.
  • [17] V. H. Schulz and M. Siebenborn. Computational Comparison of Surface Metrics for PDE Constrained Shape Optimization. Computational Methods in Applied Mathematics, 16(3):485–496, 2016.
  • [18] V.H. Schulz, M. Siebenborn, and K. Welker. Efficient PDE Constrained Shape Optimization based on Steklov-Poincaré Type Metrics. SIAM Journal on Optimization, 26(4):2800–2819, 2016.
  • [19] J.R. Shewchuk. What is a Good Linear Element? Interpolation, Conditioning, Anisotropy, and Quality Measures. Univ. of California, Berkeley, CA, 2002.
  • [20] N.K. Smolentsev. Diffeomorphism groups of compact manifolds. Journal of Mathematical Sciences, 146(6):6213–6312, 2007.